Running Command With Paramiko Exec_Command Causes Process to Sleep Before Finishing

run command with paramiko for a certain amount of time

If the remote host is running Unix, you can pass a shell script to do this as mycommand:

stdin, stdout, stderr = client.exec_command(
    """
    # simple command that prints nonstop output; & runs it in the background
    python -c '
import sys
import time

while True:
    print(time.time())
    sys.stdout.flush()
    time.sleep(0.1)
' &
    KILLPID=$!   # save the PID of the background process
    sleep 2      # or however many seconds you would like it to run
    kill $KILLPID
    """)

When run, this prints the current time at 100ms intervals for 2 seconds:

1388989588.39
1388989588.49
1388989588.59
1388989588.69
1388989588.79
1388989588.89
1388989588.99
1388989589.1
1388989589.2
1388989589.3
1388989589.4
1388989589.5
1388989589.6
1388989589.71
1388989589.81
1388989589.91
1388989590.01
1388989590.11
1388989590.21
1388989590.32

and then gracefully stops.

Wait until task is completed on Remote Machine through Python

This is indeed a duplicate of paramiko SSH exec_command(shell script) returns before completion, but the answer there is not terribly detailed. So...

As you noticed, exec_command is a non-blocking call. So you have to wait for completion of the remote command by using either:

  • Channel.exit_status_ready if you want a non-blocking check of command completion (i.e. polling)
  • Channel.recv_exit_status if you want to block until the command completes (it also returns the exit status; an exit status of 0 means normal completion).
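For the non-blocking variant, a minimal polling helper might look like this. This is a sketch; `wait_for_exit` and its stub-friendly signature are my own naming, not part of paramiko, and `channel` is assumed to behave like a `paramiko.Channel`:

```python
import time

def wait_for_exit(channel, poll_interval=0.5):
    """Poll until the remote command completes, then return its exit status.

    `channel` is expected to provide exit_status_ready() and
    recv_exit_status(), like paramiko.Channel does.
    """
    while not channel.exit_status_ready():
        # do useful work here instead of just sleeping, if you like
        time.sleep(poll_interval)
    return channel.recv_exit_status()

# With paramiko this would be used roughly as:
#   stdin, stdout, stderr = client.exec_command(command)
#   status = wait_for_exit(stdout.channel)
```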

In your particular case, you need the latter:

stdin, stdout, stderr = client.exec_command(filedelete)  # non-blocking call
exit_status = stdout.channel.recv_exit_status()          # blocking call
if exit_status == 0:
    print("File Deleted")
else:
    print("Error", exit_status)
client.close()

paramiko: automatically terminate remotely started processes

Closing the SSH connection will not, by itself, kill the command running on the remote host.

The easiest solution is:

ssh.exec_command('python /home/me/loop.py', get_pty=True)
# ... do something ...
ssh.close()

Then, when the SSH connection is closed, the pty (on the remote host) will also be closed, and the kernel (on the remote host) will send the SIGHUP signal to the remote command. By default SIGHUP terminates the process, so the remote command will be killed.


According to the APUE book:

SIGHUP is sent to the controlling process (session leader) associated
with a controlling terminal if a disconnect is detected by the terminal
interface.

Paramiko exec command failure based on time

Note: exec_command(command) is non-blocking.

I usually try to read the output from the buffer (which consumes some time before returning), or I use a time.sleep, as you have done in this case.

If you use stdout.read() or stdout.readlines() (and you should), it forces your script to wait for the output in the stdout buffer, and in turn to wait for exec_command to finish.
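A minimal sketch of this pattern, wrapped in a helper for clarity. The name `run_and_wait` is my own; `client` is assumed to be a connected `paramiko.SSHClient`:

```python
def run_and_wait(client, command):
    """Run `command` over SSH and block until it finishes by draining stdout.

    exec_command() returns immediately; read() blocks until the remote
    command exits and its output channel reaches EOF.
    """
    stdin, stdout, stderr = client.exec_command(command)  # returns immediately
    output = stdout.read()   # blocks until the remote command has finished
    errors = stderr.read()
    return output.decode(), errors.decode()
```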

After executing a command by Python Paramiko how could I save result?

Imagine that stdout is an ordinary file. What do you expect to get if you call file.read() a second time? Nothing (an empty string), unless the file has changed in the meantime.

To save the string:

output = stdout.read()

You might find Fabric simpler to use (it uses paramiko to execute commands under the hood).

Get output from a Paramiko SSH exec_command continuously

As specified in the read([size]) documentation, if you don't specify a size, it reads until EOF, that makes the script wait until the command ends before returning from read() and printing any output.

Check these answers: How to loop until EOF in Python? and How to do a "While not EOF" for examples on how to exhaust the File-like object.
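The exhaust-until-EOF idea can be sketched as a small generator. This is an illustration, not paramiko API; `stdout` is assumed to be the file-like object returned by exec_command, whose readline() returns an empty string (or empty bytes) only at EOF, i.e. once the remote command has finished:

```python
def stream_lines(stdout):
    """Yield output line by line instead of one big read() at EOF."""
    while True:
        line = stdout.readline()
        if not line:        # '' or b'' signals EOF: the command is done
            break
        yield line

# Typical use with paramiko (sketch):
#   stdin, stdout, stderr = client.exec_command(command)
#   for line in stream_lines(stdout):
#       print(line, end='')
```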


