Python C Program Subprocess Hangs at "For Line in Iter"

Python C program subprocess hangs at for line in iter

It is a block buffering issue.

What follows is a version of my answer to the Python: read streaming input from subprocess.communicate() question, extended for your case.

Fix the stdout buffering in the C program directly

stdio-based programs are, as a rule, line buffered when they run interactively in a terminal and block buffered when their stdout is redirected to a pipe. In the latter case, you won't see new lines until the buffer overflows or is flushed.
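
For illustration, here is a stand-in Python child script (a hypothetical example, not the asker's C program) that shows the same effect: run it in a terminal and a line appears every second; pipe it into cat and nothing appears until it exits, unless the flush is uncommented:

#!/usr/bin/env python
import sys
import time

for i in range(3):
    print("line %d" % i)    # sits in a block buffer while stdout is a pipe
    # sys.stdout.flush()    # uncommenting this makes each line appear immediately
    time.sleep(1)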

To avoid calling fflush() after each printf() call, you could force line-buffered output by calling, at the very beginning of the C program:

setvbuf(stdout, (char *) NULL, _IOLBF, 0); /* make line buffered stdout */

With line buffering, the buffer is flushed as soon as a newline is printed.

Or fix it without modifying the C program's source

There is the stdbuf utility that allows you to change the buffering type without modifying the source code, e.g.:

from subprocess import Popen, PIPE

process = Popen(["stdbuf", "-oL", "./main"], stdout=PIPE, bufsize=1)
for line in iter(process.stdout.readline, b''):
    print line,
process.communicate()  # close process' stream, wait for it to exit

There are also other utilities available, see Turn off buffering in pipe.
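
For example, a similar sketch using unbuffer from the expect package instead of stdbuf (an assumption here: unbuffer is installed and on PATH):

from subprocess import Popen, PIPE

process = Popen(["unbuffer", "./main"], stdout=PIPE, bufsize=1)
for line in iter(process.stdout.readline, b''):
    print line,
process.stdout.close()
process.wait()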

Or use pseudo-TTY

To trick the subprocess into thinking that it is running interactively, you could use the pexpect module or its analogs; for code examples that use the pexpect and pty modules, see Python subprocess readlines() hangs. Here's a variation on the pty example provided there (it should work on Linux):

#!/usr/bin/env python
import os
import pty
import sys
from select import select
from subprocess import Popen, STDOUT

master_fd, slave_fd = pty.openpty() # provide tty to enable line buffering
process = Popen("./main", stdin=slave_fd, stdout=slave_fd, stderr=STDOUT,
bufsize=0, close_fds=True)
timeout = .1 # ugly but otherwise `select` blocks on process' exit
# code is similar to _copy() from pty.py
with os.fdopen(master_fd, 'r+b', 0) as master:
input_fds = [master, sys.stdin]
while True:
fds = select(input_fds, [], [], timeout)[0]
if master in fds: # subprocess' output is ready
data = os.read(master_fd, 512) # <-- doesn't block, may return less
if not data: # EOF
input_fds.remove(master)
else:
os.write(sys.stdout.fileno(), data) # copy to our stdout
if sys.stdin in fds: # got user input
data = os.read(sys.stdin.fileno(), 512)
if not data:
input_fds.remove(sys.stdin)
else:
master.write(data) # copy it to subprocess' stdin
if not fds: # timeout in select()
if process.poll() is not None: # subprocess ended
# and no output is buffered <-- timeout + dead subprocess
assert not select([master], [], [], 0)[0] # race is possible
os.close(slave_fd) # subproces don't need it anymore
break
rc = process.wait()
print("subprocess exited with status %d" % rc)

Or use pty via pexpect

pexpect wraps pty handling into a higher-level interface:

#!/usr/bin/env python
import pexpect

child = pexpect.spawn("/.main")
for line in child:
print line,
child.close()

Q: Why not just use a pipe (popen())? explains why a pseudo-TTY is useful.

Python subprocess readlines() hangs

I assume you use pty for the reasons outlined in Q: Why not just use a pipe (popen())? (all other answers so far ignore your "NOTE: I don't want to print out everything at once").

pty is Linux-only, as said in the docs:

Because pseudo-terminal handling is highly platform dependent, there
is code to do it only for Linux. (The Linux code is supposed to work
on other platforms, but hasn’t been tested yet.)

It is unclear how well it works on other OSes.

You could try pexpect:

import sys
import pexpect

pexpect.run("ruby ruby_sleep.rb", logfile=sys.stdout)

Or use stdbuf to enable line-buffering in non-interactive mode:

from subprocess import Popen, PIPE, STDOUT

proc = Popen(['stdbuf', '-oL', 'ruby', 'ruby_sleep.rb'],
             bufsize=1, stdout=PIPE, stderr=STDOUT, close_fds=True)
for line in iter(proc.stdout.readline, b''):
    print line,
proc.stdout.close()
proc.wait()

Or using pty from stdlib based on @Antti Haapala's answer:

#!/usr/bin/env python
import errno
import os
import pty
from subprocess import Popen, STDOUT

master_fd, slave_fd = pty.openpty()  # provide tty to enable
                                     # line-buffering on ruby's side
proc = Popen(['ruby', 'ruby_sleep.rb'],
             stdin=slave_fd, stdout=slave_fd, stderr=STDOUT, close_fds=True)
os.close(slave_fd)
try:
    while 1:
        try:
            data = os.read(master_fd, 512)
        except OSError as e:
            if e.errno != errno.EIO:
                raise
            break  # EIO means EOF on some systems
        else:
            if not data:  # EOF
                break
            print('got ' + repr(data))
finally:
    os.close(master_fd)
    if proc.poll() is None:
        proc.kill()
    proc.wait()
print("This is reached!")

All three code examples print 'hello' immediately (as soon as the first EOL is seen).


I leave the old, more complicated code example here because it may be referenced and discussed in other posts on SO:

Or using pty based on @Antti Haapala's answer:

import os
import pty
import select
from subprocess import Popen, STDOUT

master_fd, slave_fd = pty.openpty()  # provide tty to enable
                                     # line-buffering on ruby's side
proc = Popen(['ruby', 'ruby_sleep.rb'],
             stdout=slave_fd, stderr=STDOUT, close_fds=True)
timeout = .04  # seconds
while 1:
    ready, _, _ = select.select([master_fd], [], [], timeout)
    if ready:
        data = os.read(master_fd, 512)
        if not data:
            break
        print("got " + repr(data))
    elif proc.poll() is not None:  # select timeout
        assert not select.select([master_fd], [], [], 0)[0]  # detect race condition
        break  # proc exited
os.close(slave_fd)  # can't do it sooner: it leads to errno.EIO error
os.close(master_fd)
proc.wait()

print("This is reached!")

python subprocess blocked by while loop

p.communicate() will run until the C program finishes execution. Since you haven't passed in a 3, your program has not finished its execution. Maybe you're looking for something like (edited per Sebastian's comment):

import subprocess

p = subprocess.Popen('./Cprogram', stdin=subprocess.PIPE, stdout=subprocess.PIPE)
p.stdin.write('1\n')
p.stdin.write('3\n')  # the '3' makes the C program exit, so communicate() can return
out, err = p.communicate()
print(out)

Killing subprocess after first line

What am I doing wrong?

Your code is OK (if you want to kill the subprocess after the 'Authentication Required' line regardless of its position) as long as the child process flushes its stdout buffer in time. See Python: read streaming input from subprocess.communicate()

The observed behavior indicates that either the child uses a block-buffering mode and therefore your parent script sees the 'Authentication Required' line too late, or that killing the shell with process.kill() doesn't kill its descendants (the processes created by the command).

To work around it:

  • See whether you could pass a command-line argument such as --line-buffered (accepted by grep), to force a line-buffered mode
  • Or see whether the stdbuf, unbuffer, script utilities work in your case (see the script sketch after this list)
  • Or provide a pseudo-tty to hoodwink the process into thinking that it runs in a terminal directly; that may also force the line-buffered mode.
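
Here's the promised sketch of the script-utility approach; the -qfc flags below come from util-linux's script (an assumption: they may differ or be missing on BSD/macOS), and the command placeholder stands for the command string from the question:

from subprocess import Popen, PIPE, STDOUT

command = 'some-long-running-command'  # placeholder for the actual command
process = Popen(['script', '-qfc', command, '/dev/null'],
                stdout=PIPE, stderr=STDOUT, bufsize=1, universal_newlines=True)
for line in process.stdout:
    print(line, end='')
process.wait()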

See code examples in:

  • Python subprocess readlines() hangs
  • Python C program subprocess hangs at "for line in iter"
  • Last unbuffered line can't be read

And - not always I want to kill program after first line. Only if first line is 'Authentication required'

Assuming the block-buffering issue is fixed, to kill the child process if the first line contains Authentication Required:

import shlex
from subprocess import Popen, PIPE

with Popen(shlex.split(command),
           stdout=PIPE, bufsize=1, universal_newlines=True) as process:
    first_line = next(process.stdout)
    if 'Authentication Required' in first_line:
        process.kill()
    else:  # whatever
        print(first_line, end='')
        for line in process.stdout:
            print(line, end='')

If shell=True is necessary in your case then see How to terminate a python subprocess launched with shell=True.
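
For completeness, a minimal sketch of that shell=True case on POSIX: start the shell in its own process group (preexec_fn=os.setsid) so that killing the group also kills the command's descendants, not just the shell itself:

import os
import signal
from subprocess import Popen, PIPE

command = 'some-long-running-command'  # placeholder for the actual command
with Popen(command, shell=True, stdout=PIPE, bufsize=1,
           universal_newlines=True, preexec_fn=os.setsid) as process:
    first_line = next(process.stdout)
    if 'Authentication Required' in first_line:
        os.killpg(os.getpgid(process.pid), signal.SIGTERM)  # kill the shell and its children
    else:
        print(first_line, end='')
        for line in process.stdout:
            print(line, end='')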

subprocess stdin PIPE does not return until program terminates

The problem was that the subprocess being called was not flushing after writing to stdout. Thanks to J.F. and tdelaney for pointing me in the right direction. I have raised this with the developer here: http://www.imagemagick.org/discourse-server/viewtopic.php?f=2&t=26276&p=115545#p115545

There doesn't appear to be a work-around for this on Windows other than altering the subprocess's source. Perhaps redirecting the output of the subprocess to a NamedTemporaryFile might work, but I have not tested it, and I think the file would be locked on Windows so that only one of the parent and the child could open it at once. Not insurmountable, but annoying. There might also be a way to exec the application through the unixutils port of stdbuf or something similar, as J.F. suggested here: Python C program subprocess hangs at "for line in iter"
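
For what it's worth, an untested sketch of that NamedTemporaryFile idea (the Windows file-locking caveat above applies; some_command is a placeholder):

import time
import tempfile
from subprocess import Popen

with tempfile.NamedTemporaryFile(delete=False) as out:
    proc = Popen(["some_command"], stdout=out)

with open(out.name) as f:
    while True:
        line = f.readline()
        if line:
            print(line, end='')
        elif proc.poll() is not None:  # the child has exited
            print(f.read(), end='')    # drain whatever is left in the file
            break
        else:
            time.sleep(.1)             # wait for the child to produce more output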

If you have access to the source code of the subprocess you're calling, you can always recompile it with buffering disabled. It's simple to disable buffering on stdout in C:

setbuf(stdout, NULL);

or set per-line buffering instead of block buffering:

setvbuf(stdout, (char *) NULL, _IOLBF, 0);

See also: Python C program subprocess hangs at "for line in iter"

Hope this helps someone else down the road.

Process in python - fetch stdout of non-terminating process

Using subprocess.Popen():

>>> import subprocess
>>> p = subprocess.Popen(['/your/cpp/program'], stdin=subprocess.PIPE, stdout=subprocess.PIPE)
>>> p.stdin.write('1\n')
>>> p.stdout.readline()
'1\n'
>>> p.stdin.write('10\n')
>>> p.stdout.readline()
'10\n'
>>> p.stdin.write('0\n')
>>> p.stdout.readline()
''
>>> p.wait()
0
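
On Python 3 the same session needs bytes and an explicit flush, because there the parent's side of the pipe is buffered by default as well; a sketch:

import subprocess

p = subprocess.Popen(['/your/cpp/program'],
                     stdin=subprocess.PIPE, stdout=subprocess.PIPE)
p.stdin.write(b'1\n')
p.stdin.flush()              # without it the child may never see the input
print(p.stdout.readline())   # b'1\n'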

