How to Pipe Input to Python Line by Line from Linux Program

How do I pipe input to Python line by line from a Linux program?

Instead of using command line arguments I suggest reading from standard input (stdin). Python has a simple idiom for iterating over the lines arriving on stdin:

import sys

for line in sys.stdin:
    sys.stdout.write(line)

My usage example (with the above code saved as iterate-stdin.py):

$ echo -e "first line\nsecond line" | python iterate-stdin.py 
first line
second line

With your example:

$ echo "days go by and still" | python iterate-stdin.py
days go by and still

Pipe output from shell command to a python script

You need to read from stdin to retrieve the data in the python script e.g.

#!/usr/bin/env python

import sys

def hello(variable):
    print(variable)

data = sys.stdin.read()
hello(data)

If all you want to do here is grab some data from a mysql database and then manipulate it with Python I would skip piping it into the script and just use the Python MySql module to do the SQL query.
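
For example, a minimal sketch using the third-party PyMySQL package (just one of several MySQL client libraries for Python); the connection details and the users table below are made-up placeholders:

#!/usr/bin/env python

import pymysql  # third-party package: pip install pymysql

# Placeholder credentials, database and table name; replace with your own.
connection = pymysql.connect(host='localhost', user='me',
                             password='secret', database='mydb')
try:
    with connection.cursor() as cursor:
        cursor.execute("SELECT id, name FROM users")
        for row in cursor.fetchall():
            print(row)
finally:
    connection.close()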

Pipe input to script and later get input from user

There isn't a general solution to this problem. The best resource seems to be this mailing list thread.

Basically, piping into a program connects the program's stdin to that pipe, rather than to the terminal.

The mailing list thread has a couple of relatively simple solutions for *nix:

Open /dev/tty to replace sys.stdin:

import sys

sys.stdin = open('/dev/tty')
a = raw_input('Prompt: ')

Redirect stdin to another file handle when you run your script, and read from that:

import os, sys

sys.stdin = os.fdopen(3)
a = raw_input('Prompt: ')
$ (echo -n test | ./x.py) 3<&0

as well as the suggestion to use curses. Note that the mailing list thread is ancient so you may need to modify the solution you pick.
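
For example, putting the /dev/tty idea together in Python 3 (where raw_input has become input), a minimal sketch that first drains the piped data and then prompts the user; the filename pipe_then_prompt.py is just a placeholder:

import sys

# Read everything that was piped in before stdin is reattached.
piped_data = sys.stdin.read()

# Reattach stdin to the controlling terminal so input() talks to the user.
sys.stdin = open('/dev/tty')

answer = input('Prompt: ')

print('Piped data was:', piped_data.rstrip('\n'))
print('You typed:', answer)

Example run:

$ echo "days go by and still" | python3 pipe_then_prompt.py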

piping string input into python

import sys

for line in sys.stdin:
    sys.stdout.write(line)

Input:

echo "Hello StackOverflow" | python3 hello.py

Output:

Hello StackOverflow

How do I pipe output from one python script as input to another python script?

If I understand your issue correctly, your two scripts each write out a prompt for input. For instance, they could both be something like this:

in_string = input("Enter something")
print(some_function(in_string))

Where some_function is a function that has different output depending on the input string (which may be different in each script).

The issue is that the "Enter something" prompt doesn't get displayed to the user correctly when the output of one script is being piped to another script. That's because the prompt is written to standard output, so the first script's prompt is piped to the second script, while the second script's prompt is displayed. That's misleading, since it's the first script that will (directly) receive input from the user. The prompt text may also mess up the data being passed between the two scripts.

There's no perfect solution to this issue. One partial solution is to write the prompt to standard error, rather than standard output. This would let you see both prompts (though you'd only actually be able to respond to one of them). I don't think you can directly do that with input, but print can write to other file streams if you want: print("prompt", file=sys.stderr)
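
A small sketch of that idea: write the prompt to standard error yourself and read the reply from stdin, so the prompt never contaminates the piped output (the upper() call below just stands in for some_function):

import sys

def prompt(text):
    # Write the prompt to stderr so it stays visible on the terminal
    # but never ends up in the piped stdout stream.
    print(text, end='', file=sys.stderr, flush=True)
    return sys.stdin.readline().rstrip('\n')

in_string = prompt("Enter something: ")
print(in_string.upper())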

Another partial solution is to check whether your input and output streams are connected to a terminal and skip printing the prompts if either one is not a "tty". In Python, you can do sys.stdin.isatty(). Many command line programs have a different "interactive mode" if they're connected directly to the user, rather than to a pipe or a file.
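
For example, a sketch that only prompts when both streams are attached to a terminal (again with upper() standing in for some_function):

import sys

# Only prompt when a human is actually on both ends of the pipe.
interactive = sys.stdin.isatty() and sys.stdout.isatty()

if interactive:
    in_string = input("Enter something: ")
else:
    in_string = sys.stdin.readline().rstrip('\n')

print(in_string.upper())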

If piping the output around is a main feature of your program, you may not want to use prompts at all! Many standard Unix command-line programs (like cat and grep) don't have any interactive behavior. They require the user to pass command line arguments or set environment variables to control how they run. That lets them work as expected even when their standard input and output are connected to pipes or files rather than to a terminal.
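
If you go that route, the standard library's argparse module is the usual way to accept the value as a command-line argument instead of prompting for it; the --text option name below is just an illustration:

import argparse

parser = argparse.ArgumentParser()
parser.add_argument("--text", required=True,
                    help="value that would otherwise be asked for interactively")
args = parser.parse_args()

print(args.text.upper())   # stand-in for some_function(args.text)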

How do I pipe output of one python script to another python script

This is due to the internal read-ahead buffering in Python 2 file objects (the for line in sys.stdin loop).

So, if we fetch the input line by line with readline() instead:

import sys
import time
import datetime

while True:
    line = sys.stdin.readline()
    if not line:
        break
    sys.stdout.write(datetime.datetime.now().strftime("%H:%M:%S") + '\t')
    sys.stdout.write(line)
    sys.stdout.flush()
    time.sleep(1)

The code will work as expected:

$ cat data.txt | python receiver.py |  python receiver.py
09:43:46 09:43:46 Line-A
09:43:47 09:43:47 Line-B
09:43:48 09:43:48 Line-C
09:43:49 09:43:49 Line-D

Documentation (from the description of Python 2's -u command-line option):

... Note that there is internal buffering in file.readlines() and File
Objects (for line in sys.stdin) which is not influenced by this
option. To work around this, you will want to use file.readline()
inside a while 1: loop.

NOTE: This read-ahead behaviour of file objects was removed in Python 3, so iterating over sys.stdin now yields lines as they arrive.
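
A minimal sketch of the same timestamping receiver using that simpler loop, assuming Python 3:

# python3
import sys
import datetime

for line in sys.stdin:      # in Python 3 this yields each line as it arrives
    timestamp = datetime.datetime.now().strftime("%H:%M:%S")
    sys.stdout.write(timestamp + '\t' + line)
    sys.stdout.flush()      # stdout is still block-buffered when piped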

pipe input to and read output from another executable

The problem is that read() tries to read the entire stream, which means it waits until the subprocess terminates. You need some other way to know when a complete response has arrived. Here are some ways to do it:

  1. Read one character at a time until the return character (end-of-line) is encountered.
  2. The sub-application can send fixed-length outputs, so you can pass that length to the read method.
  3. The sub-application can announce how many characters it will print.

You also need a condition to tell the subprocess to end, for example when it receives a special string.

Another problem can come from buffering: data may not be transmitted immediately after a write operation. In this case, you can use flush() to guarantee delivery.

I know your code above is in python3, but to avoid the problems of unicode conversions, the following programs are in python2. You should have no problems converting them to python3.

Program client.py

# python2
import sys

do_run = True
while do_run:
    i = ''
    line = ''
    while i != '\n':            # read one char at a time until RETURN
        i = sys.stdin.read(1)
        line += i
    if line.startswith("END"):
        do_run = False
    else:
        sys.stdout.write("printing : " + line)  # RET already in line
        sys.stdout.flush()

Program main.py

from subprocess import Popen, PIPE

proc = Popen(["python2","client.py"], stdout=PIPE, stdin=PIPE, stderr=PIPE )

for text in ('A', 'B', 'C', 'D', 'E'):
    print text
    proc.stdin.write(text + "\n")
    proc.stdin.flush()
    i = ''
    result_list = []
    while i != '\n':
        i = proc.stdout.read(1)
        result_list.append(i)
    print("result " + "".join(result_list))

proc.stdin.write("END\n")

I ran these programs on a Raspberry Pi (Raspbian) and they worked. However, if I commented out the lines with flush(), the programs hung.

These programs use the first option (read one character at a time), which is probably the slowest. You can improve speed by using the other two, at the expense of more complicated code.
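
As an illustration of option 2 (fixed-length messages), here is a sketch of framing helpers that both programs could share; the 32-byte record size is a made-up value, and the code is python2 like the programs above:

# python2
RECORD_SIZE = 32   # made-up fixed record length agreed on by both sides

def send_record(stream, text):
    # Pad (or truncate) the message to exactly RECORD_SIZE bytes, then flush.
    stream.write(text.ljust(RECORD_SIZE)[:RECORD_SIZE])
    stream.flush()

def recv_record(stream):
    # Blocking read of exactly RECORD_SIZE bytes; strip the padding afterwards.
    return stream.read(RECORD_SIZE).rstrip()

client.py would then call recv_record(sys.stdin) and send_record(sys.stdout, ...), and main.py would call send_record(proc.stdin, ...) and recv_record(proc.stdout). Because every message has the same known length, neither side has to scan for a newline one character at a time.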
