Run a Persistent Process via SSH

Run a persistent process via ssh

As an alternative to nohup, you could run your remote application inside a terminal multiplexer, such as GNU screen or tmux.

Using these tools makes it easy to reconnect to a session from another host, which means you can kick off a long build or download before you leave work and check on its status when you get home. For instance, I find this particularly useful when doing development work on servers that are very remote (in a different country) with unreliable connectivity between me and them: if the connection drops, I can simply reconnect and carry on without losing any state.
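For example, a minimal sketch with tmux (user@remotehost, the session name build, and longbuild.sh are placeholders; tmux is assumed to be installed on the remote host):

ssh -t user@remotehost 'tmux new-session -A -s build'   # create or attach to a named session
# inside the session, start the long-running job, e.g. ./longbuild.sh,
# then detach with Ctrl-b d and disconnect from ssh
ssh -t user@remotehost 'tmux attach -t build'           # later, from any machine, reattach and check on it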

Getting ssh to execute a command in the background on the target machine

I had this problem in a program I wrote a year ago -- turns out the answer is rather complicated. You'll need to use nohup as well as output redirection, as explained in the Wikipedia article on nohup, copied here for your convenience.

Nohupping backgrounded jobs is for example useful when logged in via SSH, since backgrounded jobs can cause the shell to hang on logout due to a race condition [2]. This problem can also be overcome by redirecting all three I/O streams:

nohup myprogram > foo.out 2> foo.err < /dev/null &
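Combined with ssh, that gives something like the following (myprogram and the log file names are placeholders). Because all three streams are redirected and the job is backgrounded, ssh returns immediately while myprogram keeps running on the remote host:

ssh user@remotehost 'nohup myprogram > foo.out 2> foo.err < /dev/null &'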

Running ssh script with background process


ssh mike@127.0.0.1 /home/mike/test.sh

When you run ssh in this fashion, the remote ssh server will create a set of pipes (or socketpairs) which become the standard input, output, and error for the process which you requested it to run, in this case the script process. The ssh server doesn't end the session based on when the script process exits. Instead, it ends the session when it reads an end-of-file indication on the script process's standard output and standard error.

In your case, the script process creates a child process which inherits the script's standard input, output, and error. A pipe (or socketpair) only returns EOF when all possible writers have exited or closed their end of the pipe. As long as the child process is running and has a copy of the standard output/error file descriptors, the ssh server won't read an EOF indication on those descriptors and it won't close the session.
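As an illustration, a hypothetical test.sh along these lines reproduces the behaviour:

#!/bin/bash
# the backgrounded subshell inherits this script's stdout/stderr
(
    sleep 2
    echo "done"
) &

The script itself exits immediately, but ssh mike@127.0.0.1 /home/mike/test.sh does not return until the subshell finishes, because the child still holds the inherited output pipe open.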

You can get around this by redirecting standard input and standard output in the command that you pass to the remote server:

ssh mike@127.0.0.1 '/home/mike/test.sh > /dev/null 2>&1'
(note the quotes are important)

This avoids passing the standard output and standard error created by the ssh server to the script process or the subprocesses that it creates.

Alternately, you could add a redirection to the script:

#!/bin/bash

(
    # detach this subshell from the inherited stdout/stderr
    exec > /dev/null 2>&1
    sleep 2
    echo "done"
) &

This causes the script's child process to close its copies of the original standard output and standard error.

Start background process with ssh, run experiments script, then stop it

Track your PIDs and wait for them individually.

This also lets you track failures, as shown below:

ssh "user@${server}" runserver & main_pid=$!
for t in 0 1 2 3 4 5 6 7 8 9; do
ssh "user@${client1}" "runclient --threads=${t}" & client1_pid=$!
ssh "user@${client2}" "runclient --threads=${t}" & client2_pid=$!
wait "$client1_pid" || echo "ERROR: $client1 exit status $? when run with $t threads"
wait "$client2_pid" || echo "ERROR: $client2 exit status $? when run with $t threads"
done
kill "$main_pid"

How to start process via SSH, so it keeps running?

You should be able to use:

sudo nohup python ./webCheck &

sudo nohup python ./apiCheck &

I don't think your monitor.sh will need it, since it should take a relatively short time to start the other two. However, I'm not positive whether the two checks would become children of monitor.sh, which may end up being an issue.
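If you do want to keep everything inside monitor.sh, a minimal sketch could look like the following (the webCheck/apiCheck paths and log file names are assumptions; drop sudo if monitor.sh already runs with sufficient privileges):

#!/bin/bash
# hypothetical monitor.sh: start both checks fully detached from this shell
nohup python ./webCheck > webCheck.log 2>&1 < /dev/null &
nohup python ./apiCheck > apiCheck.log 2>&1 < /dev/null &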

SSH into a box, immediately background the process, then continue as the original user in a bash script


My goal is to run systemctl --user commands as root via script. If you're familiar with the systemctl --user domain, there is no way to manage systemctl --user units without the user being logged in via traditional methods (ssh, direct terminal, or GUI). I cannot "su - user1" as root either. So I want to force an ssh session as root to the vdns11 user via runuser commands. Once the user is authenticated and shows up via who, I can run systemctl --user commands. How can I keep the ssh session active in my code?

With this additional info, the question essentially boils down to 'How can I start and background an interactive ssh session?'.

You could use script for that. It can be used to trick applications into thinking they are being run interactively:

echo "[+] Starting SSH session in background"
runuser -l user1 -c "script -c 'ssh localhost'" &>/dev/null &
pid=$!
...
echo "[+] Killing active SSH session"
kill ${pid}
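Once the backgrounded session is established, the rest follows the question's own premise (a sketch; the timing and the grep pattern are assumptions):

sleep 2                                             # give the session a moment to come up
who | grep user1                                    # the pseudo-login should now be visible
runuser -l user1 -c 'systemctl --user list-units'   # user-level units can now be managed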

Original answer before OP provided additional details (for future reference):


Let's dissect what is going on here.

I assume you start your script as root:

echo "[+] Becoming user1"
runuser -l user1 -c 'ssh -q localhost 2>/dev/null' &

So root runs runuser -l user1 -c '...', which itself runs ssh -q localhost 2>/dev/null as user1. All this takes place in the background due to &.

ssh will print Pseudo-terminal will not be allocated because stdin is not a terminal. (hidden due to 2>/dev/null) and immediately exit. That's why you don't see anything when running who or when running ps.

Your echo says [+] Becoming user1, which is quite different from what's happening.

sleep 1

The script sleeps for a second. Nothing wrong with that.

echo "[+] Running systemctl --user commands as root."
#runuser -l user 1 -c 'systemctl --user list-units'
# ^ typo!
runuser -l user1 -c 'systemctl --user list-units'

Ignoring the typo, root again runs runuser, which itself runs systemctl --user list-units as user1 this time.

Your echo says [+] Running systemctl --user commands as root., but actually you are running systemctl --user list-units as user1 as explained above.

echo "[+] Killing active ssh sessions."
kill $(ps aux | grep ssh | grep "^user1.*" | grep localhost | awk '{print$2}') 2>/dev/null

This would kill the ssh process that had been started at the beginning of the script, but it already exited, so this does nothing. As a side note, this could be accomplished a lot more easily:

echo "[+] Becoming user1"
runuser -l user1 -c 'ssh -q localhost 2>/dev/null' &
pid=$!
...
echo "[+] Killing active ssh sessions."
kill $(pgrep -P $pid)

So this should give you a better understanding of what the script actually does, but between the goals you described and the conflicting echoes within the script it's really hard to figure out where this is supposed to be going.

How to run a command in background using ssh and detach the session

There are some situations when you want to execute/start some scripts on a remote machine/server (which will terminate automatically) and disconnect from the server.

e.g. a script running on a local box which, when executed:

  1. takes a model and copies it to a remote server
  2. creates a script for running a simulation with the model and pushes it to the server
  3. starts the script on the server and disconnects
  4. the script thus started runs the simulation on the server and, once it completes (which will take days), copies the results back to the client

I would use the following command:

ssh remoteserver 'nohup /path/to/script </dev/null >nohup.out 2>&1 &'

@CKeven, you may put all those commands in one script, push it to the remote server, and initiate it as follows:

echo '#!/bin/bash  
rm -rf statuslist
mkdir statuslist
chmod u+x ~/monitor/concat.sh
chmod u+x ~/monitor/script.sh
nohup ./monitor/concat.sh &
' > script.sh

chmod u+x script.sh

rsync -azvp script.sh remotehost:/tmp

ssh remotehost '/tmp/script.sh </dev/null >nohup.out 2>&1 &'

Hope this works ;-)

Edit:
You can also use
ssh user@host 'screen -S SessionName -d -m "/path/to/executable"'

This creates a detached screen session and runs the target command within it.
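To check on it later, you can reattach to that session (screen needs a real terminal, hence the -t):

ssh -t user@host 'screen -r SessionName'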


