Use SSH to Start a Background Process on a Remote Server, and Exit the Session

Getting ssh to execute a command in the background on the target machine

I had this problem in a program I wrote a year ago -- turns out the answer is rather complicated. You'll need to use nohup as well as output redirection, as explained in the Wikipedia article on nohup, copied here for your convenience.

Nohupping backgrounded jobs is for example useful when logged in via SSH, since backgrounded jobs can cause the shell to hang on logout due to a race condition [2]. This problem can also be overcome by redirecting all three I/O streams:

nohup myprogram > foo.out 2> foo.err < /dev/null &

Running ssh script with background process

ssh mike@127.0.0.1 /home/mike/test.sh

When you run ssh in this fashion, the remote ssh server will create a set of pipes (or socketpairs) which become the standard input, output, and error for the process which you requested it to run, in this case the script process. The ssh server doesn't end the session based on when the script process exits. Instead, it ends the session when it reads an end-of-file indication on the script process's standard output and standard error.

In your case, the script process creates a child process which inherits the script's standard input, output, and error. A pipe (or socketpair) only returns EOF when all possible writers have exited or closed their end of the pipe. As long as the child process is running and has a copy of the standard output/error file descriptors, the ssh server won't read an EOF indication on those descriptors and it won't close the session.
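
You can watch this happen with a child process that outlives the script; a minimal sketch, using sleep as a stand-in for the long-running child:

ssh mike@127.0.0.1 'sleep 30 & echo "script finished"'

The "script finished" line prints immediately, but the session does not close for roughly 30 seconds, because the backgrounded sleep still holds copies of the session's standard output and error.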

You can get around this by redirecting standard output and standard error in the command that you pass to the remote server:

ssh mike@127.0.0.1 '/home/mike/test.sh > /dev/null 2>&1'
(note the quotes are important)

This avoids passing the standard output and standard error created by the ssh server to the script process or the subprocesses that it creates.
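
The quotes matter because they decide which shell parses the redirections; a quick comparison using the same script:

# Unquoted: the LOCAL shell redirects the ssh client's own output;
# the remote script still inherits the session's pipes and can hang it.
ssh mike@127.0.0.1 /home/mike/test.sh > /dev/null 2>&1

# Quoted: the REMOTE shell applies the redirections, detaching the script.
ssh mike@127.0.0.1 '/home/mike/test.sh > /dev/null 2>&1'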

Alternately, you could add a redirection to the script:

#!/bin/bash

(
    # Detach the subshell's output from the ssh session's pipes
    exec > /dev/null 2>&1
    sleep 2
    echo "done"
) &

This causes the script's child process to close its copies of the original standard output and standard error.
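
The same effect can be achieved without exec by redirecting the compound command as a whole; a sketch of an equivalent form:

#!/bin/bash

# Equivalent: redirect the background job's streams in one place
# instead of calling exec inside the subshell.
{ sleep 2; echo "done"; } > /dev/null 2>&1 &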

How to run a command in background using ssh and detach the session

There are situations where you want to start a script on a remote machine/server (one that will terminate on its own) and disconnect from the server right away.

e.g., a script running on a box which, when executed:

  1. takes a model and copies it to a remote server
  2. creates a script for running a simulation with the model and pushes it to the server
  3. starts the script on the server and disconnects
  4. the script thus started runs the simulation on the server and, once completed (it may take days), copies the results back to the client

I would use the following command:

ssh remoteserver 'nohup /path/to/script </dev/null >nohup.out 2>&1 &'

@CKeven, you may put all those commands in one script, push it to the remote server, and initiate it as follows:

echo '#!/bin/bash  
rm -rf statuslist
mkdir statuslist
chmod u+x ~/monitor/concat.sh
chmod u+x ~/monitor/script.sh
nohup ./monitor/concat.sh &
' > script.sh

chmod u+x script.sh

rsync -azvp script.sh remotehost:/tmp

ssh remotehost '/tmp/script.sh </dev/null >nohup.out 2>&1 &'

Hope this works ;-)
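
One way to confirm the job survived the disconnect is to look for it on the next login; a quick check for the concat.sh started above (assuming a pgrep that supports -a for printing the full command line):

ssh remotehost 'pgrep -af concat.sh'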

Edit:
You can also use
ssh user@host 'screen -S SessionName -d -m "/path/to/executable"'

This creates a detached screen session and runs the target command within it.
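
The screen session keeps running after you disconnect; to check on it or reattach later:

ssh user@host 'screen -ls'                   # list sessions without attaching
ssh -t user@host 'screen -r SessionName'     # -t allocates a tty so screen can reattach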

Start background process with ssh, run experiments script, then stop it

Track your PIDs and wait for them individually.

This also lets you track failures, as shown below:

ssh "user@${server}" runserver & main_pid=$!
for t in 0 1 2 3 4 5 6 7 8 9; do
ssh "user@${client1}" "runclient --threads=${t}" & client1_pid=$!
ssh "user@${client2}" "runclient --threads=${t}" & client2_pid=$!
wait "$client1_pid" || echo "ERROR: $client1 exit status $? when run with $t threads"
wait "$client2_pid" || echo "ERROR: $client2 exit status $? when run with $t threads"
done
kill "$main_pid"

python subprocess run a remote process in background and immediately close the connection

After playing about a bit, I found that nohup doesn't seem to properly disconnect the child process from the parent ssh session (as it should). This means you have to close stdout manually or point it at a file, e.g.

Using bash:

ssh user@host "nohup PATH/XProgram >&- &"

Shell agnostic (as far as I know):

ssh user@host "nohup PATH/XProgram >/dev/null 2>&1 &"

In python:

from shlex import split
from subprocess import Popen

p = Popen(split('ssh user@host "nohup PATH/XProgram >&- &"'))
p.communicate() # returns (None, None)

Start a background process on remote machine using python

Log in with ssh and start any job process with job &. Log in with ssh in a different window and run ps to check for your job: you should see it running. Now log out of your first ssh session and check again for your job process. You will notice that it is now gone. This happens because jobs are attached to a terminal by default and are sent a SIGHUP when the terminal is closed.

Now repeat the process, starting the job with nohup job & (or backgrounding it and then running disown). Both prevent the SIGHUP from killing the job process.
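
For example, to verify the surviving case from a second terminal (a sketch, using sleep as the job and assuming a pgrep that supports -a):

# terminal 1: start a job that should survive logout, then log out
ssh user@host
nohup sleep 1000 &
exit

# terminal 2: the job is still running after the first session closed
ssh user@host 'pgrep -af "sleep 1000"'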

To fix your code you can use either of the following:

import fabric.api

def run_burnP6_bg():
    # nohup keeps burnP6 running after fabric's ssh session closes
    fabric.api.run("nohup burnP6 &")

fabric.api.execute(run_burnP6_bg, hosts=[remote_machine])

or with subprocess

import subprocess

cmd = 'ssh -f xyz@{} '.format(ip_addr) + "'nohup burnP6 &'"
subprocess.call(cmd, shell=True)

These should prevent your job from dying when the ssh session ends. (The -f flag additionally asks ssh to go into the background just before command execution, so the call returns immediately.)


