Python: Get the Print Output in an Exec Statement


Since Python 3.4 there is a solution in the stdlib: https://docs.python.org/3/library/contextlib.html#contextlib.redirect_stdout

from io import StringIO
from contextlib import redirect_stdout

f = StringIO()
with redirect_stdout(f):
    help(pow)
s = f.getvalue()
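
The same pattern captures the output of an exec call directly; a minimal sketch:

from io import StringIO
from contextlib import redirect_stdout

f = StringIO()
with redirect_stdout(f):
    exec("for i in range(3): print(i)")
s = f.getvalue()  # '0\n1\n2\n'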

In older versions you can write a context manager to handle replacing stdout:

import sys
from io import StringIO
import contextlib

@contextlib.contextmanager
def stdoutIO(stdout=None):
    old = sys.stdout
    if stdout is None:
        stdout = StringIO()
    sys.stdout = stdout
    try:
        yield stdout
    finally:
        # Restore the real stdout even if the executed code raises.
        sys.stdout = old

code = """
i = [0, 1, 2]
for j in i:
    print(j)
"""
with stdoutIO() as s:
    exec(code)

print("out:", s.getvalue())

How to get execution of python print statements as a string?

If you really need a print statement in the list of strings (as opposed to a print-like function with a different name, as suggested in another answer), you can reassign the name print to your own function, after carefully, carefully, carefully saving the old print function so you can carefully, carefully, carefully restore the name print to its proper definition. Like this:

>>> g_list = ["print('Wow!')\n", "print('Great!')\n", "print('Epic!')\n"]
>>> old_print = print
>>> def print(s):  # redefining the built-in print function! terrible idea
...     global catstr
...     catstr += s
...
>>> catstr = ""
>>> for s in g_list: exec(s)
...
>>> catstr
'Wow!Great!Epic!'
>>> print = old_print  # Don't forget this step!

This is completely immoral, and I did not advise you to do it. I only said you can do it.

To stress the inadvisability of this plan: exec should be used rarely; it is dangerous; reassigning the names of built-in functions to different functions should be done very rarely; it is dangerous. Doing both in the same code could really ruin your day, especially after a maintenance programmer edits your code "just a little," without realizing the potential impact.
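
A less hair-raising variant is a sketch like the following, which injects a print replacement into the namespace passed to exec, so the builtin name is never rebound:

captured = []
ns = {'print': lambda *args: captured.append(' '.join(map(str, args)))}
for s in ["print('Wow!')", "print('Great!')", "print('Epic!')"]:
    exec(s, ns)  # name lookup finds ns['print'] before the builtin
''.join(captured)  # 'Wow!Great!Epic!'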

How do I set the output of exec to a variable in Python?

Don't. eval and exec are very dangerous with untrusted input, and there is no reasonable way to make them safe:

Using eval or exec on untrusted input means you can get queries like this:

eval("__import__('os').system('clear')", {})

Use ast.literal_eval to safely evaluate a string to a Python object.

>>> import ast
>>> string = '{}'
>>> b = ast.literal_eval(string)
>>> b
{}
>>> type(b)
<class 'dict'>

If you can trust the input, use eval.

>>> string = '{}'
>>> b = eval(string)
>>> b
{}
>>> type(b)
<class 'dict'>

The Dangers of Eval and Exec

To explain why eval is dangerous, and why ast.literal_eval is safe, you have to understand how eval works: it merely takes a string and then interprets the input as Python code, allowing essentially anything to occur.

There are many ways to try to make eval and exec safe, but all of them can be circumvented. You can prevent nearly everything, including imports and access to builtins, and someone can still find a way around it.
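
One well-known demonstration: even with the builtins stripped out of the globals, evaluated code can still walk the object hierarchy and reach every loaded class:

# Returns the list of all classes the interpreter has loaded, despite
# the empty __builtins__ -- a starting point for reaching os and friends.
eval("().__class__.__bases__[0].__subclasses__()", {'__builtins__': {}})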

ast.literal_eval gets around this by only allowing valid Python datatypes to be evaluated, such as dict, list, tuple, None, int, string, and float. This prevents the malicious execution of any code, but the application is much more limited. However, this extra security is well worth the loss of functionality for code coming from unknown sources.

A good example of how ast.literal_eval keeps you safe is the following snippet:

>>> import ast
>>> eval('__import__("os")')
<module 'os' from '/usr/lib/python3.4/os.py'>

>>> ast.literal_eval('__import__("os")')
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/lib/python3.4/ast.py", line 84, in literal_eval
    return _convert(node_or_string)
  File "/usr/lib/python3.4/ast.py", line 83, in _convert
    raise ValueError('malformed node or string: ' + repr(node))
ValueError: malformed node or string: <_ast.Call object at 0x7f68e03db2e8>

eval allows direct code execution: in this case it imports the os module, giving access to system calls that could wipe your drive, while ast.literal_eval raises an error because a function call is not a recognized data type.

Running a shell command and capturing the output

In all officially maintained versions of Python, the simplest approach is to use the subprocess.check_output function:

>>> import subprocess
>>> subprocess.check_output(['ls', '-l'])
b'total 0\n-rw-r--r-- 1 memyself staff 0 Mar 14 11:04 files\n'

check_output runs a single program that takes only arguments as input.[1] It returns the result exactly as printed to stdout. If you need to write input to stdin, skip ahead to the run or Popen sections. If you want to execute complex shell commands, see the note on shell=True at the end of this answer.

The check_output function works in all officially maintained versions of Python. But for more recent versions, a more flexible approach is available.

Modern versions of Python (3.5 or higher): run

If you're using Python 3.5+, and do not need backwards compatibility, the new run function is recommended by the official documentation for most tasks. It provides a very general, high-level API for the subprocess module. To capture the output of a program, pass the subprocess.PIPE flag to the stdout keyword argument. Then access the stdout attribute of the returned CompletedProcess object:

>>> import subprocess
>>> result = subprocess.run(['ls', '-l'], stdout=subprocess.PIPE)
>>> result.stdout
b'total 0\n-rw-r--r-- 1 memyself staff 0 Mar 14 11:04 files\n'

The return value is a bytes object, so if you want a proper string, you'll need to decode it. Assuming the called process returns a UTF-8-encoded string:

>>> result.stdout.decode('utf-8')
'total 0\n-rw-r--r-- 1 memyself staff 0 Mar 14 11:04 files\n'

This can all be compressed to a one-liner if desired:

>>> subprocess.run(['ls', '-l'], stdout=subprocess.PIPE).stdout.decode('utf-8')
'total 0\n-rw-r--r-- 1 memyself staff 0 Mar 14 11:04 files\n'

If you want to pass input to the process's stdin, you can pass a bytes object to the input keyword argument:

>>> cmd = ['awk', 'length($0) > 5']
>>> ip = 'foo\nfoofoo\n'.encode('utf-8')
>>> result = subprocess.run(cmd, stdout=subprocess.PIPE, input=ip)
>>> result.stdout.decode('utf-8')
'foofoo\n'

You can capture errors by passing stderr=subprocess.PIPE (capture to result.stderr) or stderr=subprocess.STDOUT (capture to result.stdout along with regular output). If you want run to raise an exception when the process returns a nonzero exit code, you can pass check=True. (Or you can check the returncode attribute of result above.) When security is not a concern, you can also run more complex shell commands by passing shell=True as described at the end of this answer.
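
For example, a small sketch of the error-handling options (the failing command here is only an illustration):

import subprocess

try:
    result = subprocess.run(['ls', 'nonexistent'],
                            stdout=subprocess.PIPE,
                            stderr=subprocess.PIPE,
                            check=True)
except subprocess.CalledProcessError as e:
    # check=True raises on a nonzero exit code; the captured streams
    # are available on the exception object.
    print('exit code:', e.returncode)
    print('stderr:', e.stderr.decode('utf-8'))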

Later versions of Python streamline the above further. In Python 3.7+, the above one-liner can be spelled like this:

>>> subprocess.run(['ls', '-l'], capture_output=True, text=True).stdout
'total 0\n-rw-r--r-- 1 memyself staff 0 Mar 14 11:04 files\n'

Using run this way adds just a bit of complexity, compared to the old way of doing things. But now you can do almost anything you need to do with the run function alone.

Older versions of Python (2.7-3.4): more about check_output

If you are using an older version of Python, or need modest backwards compatibility, you can use the check_output function as briefly described above. It has been available since Python 2.7.

subprocess.check_output(*popenargs, **kwargs)  

It takes the same arguments as Popen (see below), and returns a string containing the program's output. The beginning of this answer has a more detailed usage example. In Python 3.5+, check_output is equivalent to executing run with check=True and stdout=PIPE, and returning just the stdout attribute.
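
A sketch of that equivalence:

import subprocess

out1 = subprocess.check_output(['ls', '-l'])
out2 = subprocess.run(['ls', '-l'], check=True,
                      stdout=subprocess.PIPE).stdout
# Both produce the same stdout bytes (assuming the directory
# doesn't change between the two calls).
assert out1 == out2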

You can pass stderr=subprocess.STDOUT to ensure that error messages are included in the returned output. When security is not a concern, you can also run more complex shell commands by passing shell=True as described at the end of this answer.
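
For instance (a sketch; the sh invocation is only there to produce output on both streams):

import subprocess

combined = subprocess.check_output(
    ['sh', '-c', 'echo out; echo err >&2'],
    stderr=subprocess.STDOUT,
)
# combined == b'out\nerr\n' -- stderr folded into the return value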

If you need to pipe from stderr or pass input to the process, check_output won't be up to the task. See the Popen examples below in that case.

Complex applications and legacy versions of Python (2.6 and below): Popen

If you need deep backwards compatibility, or if you need more sophisticated functionality than check_output or run provide, you'll have to work directly with Popen objects, which encapsulate the low-level API for subprocesses.

The Popen constructor accepts either a single command without arguments, or a list containing a command as its first item, followed by any number of arguments, each as a separate item in the list. shlex.split can help parse strings into appropriately formatted lists. Popen objects also accept a host of different arguments for process IO management and low-level configuration.
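
For example:

>>> import shlex
>>> shlex.split("grep -r 'hello world' /tmp")
['grep', '-r', 'hello world', '/tmp']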

To send input and capture output, communicate is almost always the preferred method. As in:

output = subprocess.Popen(["mycmd", "myarg"],
                          stdout=subprocess.PIPE).communicate()[0]

Or

>>> import subprocess
>>> p = subprocess.Popen(['ls', '-a'], stdout=subprocess.PIPE,
...                      stderr=subprocess.PIPE)
>>> out, err = p.communicate()
>>> print out
.
..
foo

If you set stdin=PIPE, communicate also allows you to pass data to the process via stdin:

>>> cmd = ['awk', 'length($0) > 5']
>>> p = subprocess.Popen(cmd, stdout=subprocess.PIPE,
...                      stderr=subprocess.PIPE,
...                      stdin=subprocess.PIPE)
>>> out, err = p.communicate('foo\nfoofoo\n')
>>> print out
foofoo

Note Aaron Hall's answer, which indicates that on some systems, you may need to set stdout, stderr, and stdin all to PIPE (or DEVNULL) to get communicate to work at all.
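
A sketch of that fully wired-up form:

import subprocess

p = subprocess.Popen(['grep', 'f'],
                     stdin=subprocess.PIPE,
                     stdout=subprocess.PIPE,
                     stderr=subprocess.PIPE)
out, err = p.communicate(b'one\nfour\nfive\n')
# out == b'four\nfive\n' (Python 3 expects bytes here)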

In some rare cases, you may need complex, real-time output capturing. Vartec's answer suggests a way forward, but methods other than communicate are prone to deadlocks if not used carefully.

As with all the above functions, when security is not a concern, you can run more complex shell commands by passing shell=True.

Notes

1. Running shell commands: the shell=True argument

Normally, each call to run, check_output, or the Popen constructor executes a single program. That means no fancy bash-style pipes. If you want to run complex shell commands, you can pass shell=True, which all three functions support. For example:

>>> subprocess.check_output('cat books/* | wc', shell=True, text=True)
' 1299377 17005208 101299376\n'

However, doing this raises security concerns. If you're doing anything more than light scripting, you might be better off calling each process separately, and passing the output from each as an input to the next, via

run(cmd, [stdout=etc...], input=other_output)

Or

Popen(cmd, [stdout=etc...]).communicate(other_output)

The temptation to directly connect pipes is strong; resist it. Otherwise, you'll likely see deadlocks or have to resort to hacky workarounds.
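
For instance, a no-shell sketch of the cat books/* | wc example above, expanding the glob in Python rather than in a shell:

import glob
import subprocess

files = glob.glob('books/*')
text = subprocess.run(['cat'] + files, stdout=subprocess.PIPE).stdout
counts = subprocess.run(['wc'], input=text,
                        stdout=subprocess.PIPE).stdout.decode('utf-8')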

Send `exec()` output to another stream without redirecting stdout

If you're dead set on having a single process, then depending on how willing you are to dive into obscure C-level features of the CPython implementation, you might try looking into subinterpreters. Those are, as far as I know, the highest level of isolation CPython provides in a single process, and they allow things like separate sys.stdout objects for separate subinterpreters.

How to return value from exec in function?

exec() doesn't just evaluate expressions; it executes code. The assignment happens inside exec(), so you have to read the result back out of the namespace exec() used. Note that in Python 3, exec() cannot rebind the locals of an enclosing function, so pass it an explicit namespace dict:

def test(w, sli):
    # exec() can't create function locals in Python 3, so capture
    # the assignment in an explicit namespace dict instead.
    ns = {}
    exec('s = "{}"{}'.format(w, sli), ns)
    return ns['s']
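
For example (a hypothetical call):

>>> test('hello', '[1:3]')
'el'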

If you just want to evaluate an expression, use eval(), and save a reference to the returned value:

def test(w, sli):
    s = "'{0}'{1}".format(w, sli)
    s = eval(s)
    return s

However, I would recommend avoiding exec() and eval() in any real code whenever possible. If you use them, make sure you have a very good reason to do so.


