Disable Assertions in Python

How do I disable assertions in Python?

There are multiple approaches that affect a whole process, the environment, or a single point in code. I demonstrate each below.

For the whole process

Using the -O flag (capital O) disables all assert statements in a process.

For example:

$ python -Oc "assert False"

$ python -c "assert False"
Traceback (most recent call last):
  File "<string>", line 1, in <module>
AssertionError

Note that by disable I mean the assert statement is not compiled at all, so even the expression that follows it is never evaluated:

$ python -Oc "assert 1/0"

$ python -c "assert 1/0"
Traceback (most recent call last):
  File "<string>", line 1, in <module>
ZeroDivisionError: integer division or modulo by zero

For the environment

You can use an environment variable to set this flag as well.

This will affect every process that uses or inherits the environment.

E.g., in Windows, setting and then clearing the environment variable:

C:\>python -c "assert False"
Traceback (most recent call last):
  File "<string>", line 1, in <module>
AssertionError

C:\>SET PYTHONOPTIMIZE=TRUE

C:\>python -c "assert False"

C:\>SET PYTHONOPTIMIZE=

C:\>python -c "assert False"
Traceback (most recent call last):
  File "<string>", line 1, in <module>
AssertionError

The same applies on Unix; use export PYTHONOPTIMIZE=TRUE to set the variable and unset PYTHONOPTIMIZE to clear it.
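If you prefer to see the effect from Python itself, here is a minimal sketch (the child command and message are illustrative only) that launches a child interpreter with PYTHONOPTIMIZE set; the child inherits the variable, so its assert is compiled away:

import os
import subprocess
import sys

# the child process inherits PYTHONOPTIMIZE, so its assert is compiled away
env = dict(os.environ, PYTHONOPTIMIZE="TRUE")
subprocess.run(
    [sys.executable, "-c", "assert False; print('assert was skipped')"],
    env=env,
    check=True,  # would raise CalledProcessError if the assert fired
)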

Single point in code

You continue your question:

if an assertion fails, I don't want it to throw an AssertionError, but to keep going.

You can either ensure control flow does not reach the assertion, for example:

if False:
    assert False, "we know this fails, but we don't get here"

or if you want the assert expression to be exercised then you can catch the assertion error:

try:
    assert False, "this code runs, fails, and the exception is caught"
except AssertionError as e:
    print(repr(e))

which prints:

AssertionError('this code runs, fails, and the exception is caught')

and you'll keep going from the point you handled the AssertionError.

References

From the assert documentation:

An assert statement like this:

assert expression #, optional_message

Is equivalent to

if __debug__:
    if not expression: raise AssertionError #(optional_message)

And,

the built-in variable __debug__ is True under normal circumstances, False when optimization is requested (command line option -O).

and further

Assignments to __debug__ are illegal. The value for the built-in variable is determined when the interpreter starts.
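A quick way to confirm which mode an interpreter is running in (a small sketch; sys.flags.optimize simply reflects how many -O flags were given):

import sys

print(__debug__)           # True normally, False under -O or PYTHONOPTIMIZE
print(sys.flags.optimize)  # 0 normally, 1 for -O, 2 for -OO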

From the usage docs:

-O

Turn on basic optimizations. This changes the filename extension for compiled (bytecode) files from .pyc to .pyo. See also PYTHONOPTIMIZE.

and

PYTHONOPTIMIZE

If this is set to a non-empty string it is equivalent
to specifying the -O option. If set to an integer, it is equivalent to
specifying -O multiple times.

Enabling/disabling assert statements in python using code

I think there is no way to achieve this from code in Python. The -O flag sets the built-in variable __debug__ to False, but Python does not allow changing it at run-time.

One possible solution would be to encapsulate your assertions in if-statements, using a global variable to control whether assert statements get executed or not, but I doubt this is the answer you're looking for.
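Still, for completeness, a minimal sketch of that workaround (the MY_DEBUG flag and check_input() names are purely illustrative, nothing built into Python):

MY_DEBUG = True  # flip to False at run-time to skip these checks

def check_input(items):
    if MY_DEBUG:
        assert isinstance(items, list), "expected a list"
        assert items, "expected at least one item"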

For more information about the topic, you might want to look at this answer to a related question.

Disabling python's assert() without the -O flag

The docs say,

The value for the built-in variable [__debug__] is determined when the
interpreter starts.

So, if you can not control how the python interpreter is started, then it looks like you can not disable assert.

Here then are some other options:

  1. The safest way is to manually remove all the assert statements.
  2. If all your assert statements occur on lines by themselves, then
    perhaps you could remove them with

    sed -i 's/assert /pass #assert /g' script.py

    Note that this will mangle your code if other code comes after the assert. For example, the sed command above would comment-out the return in a line like this:

    assert x; return True

    which would change the logic of your program.

    If you have code like this, it would probably be best to manually remove the asserts.

  3. There might be a way to remove them programmatically by parsing your
    script with the tokenize module (or the ast module; see the sketch just
    after this list), but writing such a program may take more time than it
    would take to manually remove the asserts, especially if this is a
    one-time job.

  4. If the other piece of software accepts .pyc files, then there is a
    dirty trick which seems to work on my machine, though note a Python
    core developer warns against this (See Éric Araujo's comment on 2011-09-17). Suppose your script is called script.py.

    • Make a temporary script called, say, temp.py:

      import script
    • Run python -O temp.py. This creates script.pyo.
    • Move script.py and script.pyc (if it exists) out of your PYTHONPATH
      or whatever directory the other software is reading to find your
      script.
    • Rename script.pyo --> script.pyc.

    Now when the other software tries to import your script, it will
    only find the pyc file, which has the asserts removed.

    For example, if script.py looks like this:

    assert False
    print('Got here')

    then running python temp.py will now print Got here instead of raising an AssertionError.
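Returning to option 3 above: the ast module arguably makes this easier than tokenize. Here is a rough, untested sketch (it replaces each assert with pass so enclosing blocks never end up empty; ast.unparse needs Python 3.9+):

import ast

class AssertRemover(ast.NodeTransformer):
    def visit_Assert(self, node):
        # replace the assert with a no-op so enclosing blocks stay valid
        return ast.copy_location(ast.Pass(), node)

with open("script.py") as f:
    tree = ast.parse(f.read())

tree = ast.fix_missing_locations(AssertRemover().visit(tree))
print(ast.unparse(tree))  # the same module, with every assert stripped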

What is the use of assert in Python?

The assert statement exists in almost every programming language. It has two main uses:

  1. It helps detect problems early in your program, where the cause is clear, rather than later when some other operation fails. A type error in Python, for example, can go through several layers of code before actually raising an Exception if not caught early on.

  2. It works as documentation for other developers reading the code, who see the assert and can confidently say that its condition holds from now on.
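As a hedged illustration of the first point, consider a hypothetical helper that checks its input up front; the assert fails right where the bad value enters, with a clear message, instead of producing a confusing error later:

def average(values):
    # fail fast with a clear message, instead of a puzzling
    # ZeroDivisionError from the division below
    assert values, "average() needs at least one value"
    return sum(values) / len(values)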

When you do...

assert condition

... you're telling the program to test that condition, and immediately trigger an error if the condition is false.

In Python, it's roughly equivalent to this:

if not condition:
    raise AssertionError()

Try it in the Python shell:

>>> assert True # nothing happens
>>> assert False
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
AssertionError

Assertions can include an optional message, and you can disable them when running the interpreter.

To print a message if the assertion fails:

assert False, "Oh no! This assertion failed!"

Do not use parentheses to call assert like a function; it is a statement. If you do assert(condition, message) you'll be asserting a (condition, message) tuple as the condition, and since a non-empty tuple is always truthy, the assertion never fails.
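For instance (a small sketch; recent CPython versions also emit a SyntaxWarning for the first form):

assert (1 == 2, "never shown")     # a non-empty tuple is truthy, so this never fails
assert 1 == 2, "shown on failure"  # correct form: raises AssertionError with the message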

As for disabling them, when running python in optimized mode, where __debug__ is False, assert statements will be ignored. Just pass the -O flag:

python -O script.py

See here for the relevant documentation.

Is it possible to change PyTest's assert statement behaviour in Python

You are using pytest, which gives you ample options to interact with failing tests. It gives you command-line options and several hooks to make this possible. I'll explain how to use each and where you can make customisations to fit your specific debugging needs.

I'll also go into more exotic options that would allow you to skip specific assertions entirely, if you really feel you must.

Handle exceptions, not assert

Note that a failing test doesn't normally stop pytest; it only exits early if you explicitly tell it to stop after a certain number of failures. Also, tests fail because an exception is raised; assert raises AssertionError but that's not the only exception that'll cause a test to fail! You want to control how exceptions are handled, not alter assert.

However, a failing assert will end the individual test. That's because once an exception is raised outside of a try...except block, Python unwinds the current function frame, and there is no going back on that.

I don't think that that's what you want, judging by your description of your _assertCustom() attempts to re-run the assertion, but I'll discuss your options further down nonetheless.

Post-mortem debugging in pytest with pdb

For the various options to handle failures in a debugger, I'll start with the --pdb command-line switch, which opens the standard debugging prompt when a test fails (output elided for brevity):

$ mkdir demo
$ touch demo/__init__.py
$ cat << EOF > demo/test_foo.py
> def test_ham():
>     assert 42 == 17
> def test_spam():
>     int("Vikings")
> EOF
$ pytest demo/test_foo.py --pdb
[ ... ]
test_foo.py:2: AssertionError
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> entering PDB >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> /.../demo/test_foo.py(2)test_ham()
-> assert 42 == 17
(Pdb) q
Exit: Quitting debugger
[ ... ]

With this switch, when a test fails pytest starts a post-mortem debugging session. This is essentially what you wanted: stop the code at the point of a failed test and open the debugger to take a look at the state of your test. You can interact with the local variables of the test, the globals, and the locals and globals of every frame in the stack.

Here pytest gives you full control over whether or not to exit after this point: if you use the q quit command, pytest exits the run too; using c for continue returns control to pytest and the next test is executed.

Using an alternative debugger

You are not bound to the pdb debugger for this; you can set a different debugger with the --pdbcls switch. Any pdb.Pdb() compatible implementation would work, including the IPython debugger implementation, or most other Python debuggers (the pudb debugger requires that the -s switch be used, or a special plugin). The switch takes a module and class, e.g. to use pudb you could use:

$ pytest -s --pdb --pdbcls=pudb.debugger:Debugger

You could use this feature to write your own wrapper class around Pdb that simply returns immediately if the specific failure is not something you are interested in. pytest uses Pdb() exactly like pdb.post_mortem() does:

p = Pdb()
p.reset()
p.interaction(None, t)

Here, t is a traceback object. When p.interaction(None, t) returns, pytest continues with the next test, unless p.quitting is set to True (at which point pytest then exits).

Here is an example implementation that prints out that we are declining to debug and returns immediately, unless the test raised ValueError, saved as demo/custom_pdb.py:

import pdb, sys

class CustomPdb(pdb.Pdb):
    def interaction(self, frame, traceback):
        if sys.last_type is not None and not issubclass(sys.last_type, ValueError):
            print("Sorry, not interested in this failure")
            return
        return super().interaction(frame, traceback)

When I use this with the above demo, this is output (again, elided for brevity):

$ pytest test_foo.py -s --pdb --pdbcls=demo.custom_pdb:CustomPdb
[ ... ]
def test_ham():
> assert 42 == 17
E assert 42 == 17

test_foo.py:2: AssertionError
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> entering PDB >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
Sorry, not interested in this failure
F
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> traceback >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>

def test_spam():
> int("Vikings")
E ValueError: invalid literal for int() with base 10: 'Vikings'

test_foo.py:4: ValueError
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> entering PDB >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> /.../test_foo.py(4)test_spam()
-> int("Vikings")
(Pdb)

The above introspects sys.last_type to determine if the failure is 'interesting'.

However, I can't really recommend this option unless you want to write your own debugger using tkInter or something similar. Note that that is a big undertaking.

Filtering failures; pick and choose when to open the debugger

The next level up is the pytest debugging and interaction hooks; these are hook points for behaviour customisations, replacing or enhancing how pytest normally handles an exception or enters the debugger via pdb.set_trace() or breakpoint() (Python 3.7 or newer).

The internal implementation of this hook is responsible for printing the >>> entering PDB >>> banner above as well, so using this hook to prevent the debugger from running means you won't see this output at all. You can have your own hook delegate to the original hook when a test failure is 'interesting', and so filter test failures independently of the debugger you are using! You can access the internal implementation by name; the internal hook plugin for this is named pdbinvoke. To prevent it from running you need to unregister it, but save a reference so you can call it directly when needed.

Here is a sample implementation of such a hook; you can put this in any of the locations plugins are loaded from; I put it in demo/conftest.py:

import pytest

@pytest.hookimpl(trylast=True)
def pytest_configure(config):
    # unregister returns the unregistered plugin
    pdbinvoke = config.pluginmanager.unregister(name="pdbinvoke")
    if pdbinvoke is None:
        # no --pdb switch used, no debugging requested
        return
    # get the terminalreporter too, to write to the console
    tr = config.pluginmanager.getplugin("terminalreporter")
    # create our own plugin
    plugin = ExceptionFilter(pdbinvoke, tr)

    # register our plugin, pytest will then start calling our plugin hooks
    config.pluginmanager.register(plugin, "exception_filter")

class ExceptionFilter:
    def __init__(self, pdbinvoke, terminalreporter):
        # provide the same functionality as pdbinvoke
        self.pytest_internalerror = pdbinvoke.pytest_internalerror
        self.orig_exception_interact = pdbinvoke.pytest_exception_interact
        self.tr = terminalreporter

    def pytest_exception_interact(self, node, call, report):
        if not call.excinfo.errisinstance(ValueError):
            self.tr.write_line("Sorry, not interested!")
            return
        return self.orig_exception_interact(node, call, report)

The above plugin uses the internal TerminalReporter plugin to write out lines to the terminal; this makes the output cleaner when using the default compact test status format, and lets you write things to the terminal even with output capturing enabled.

The example registers the plugin object with the pytest_exception_interact hook via another hook, pytest_configure(), making sure it runs late enough (using @pytest.hookimpl(trylast=True)) to be able to unregister the internal pdbinvoke plugin. When the hook is called, the example tests against the call.excinfo object; you can also check the node or the report.

With the above sample code in place in demo/conftest.py, the test_ham test failure is ignored; only the test_spam test failure, which raises ValueError, results in the debug prompt opening:

$ pytest demo/test_foo.py --pdb
[ ... ]
demo/test_foo.py F
Sorry, not interested!

demo/test_foo.py F
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> traceback >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>

def test_spam():
> int("Vikings")
E ValueError: invalid literal for int() with base 10: 'Vikings'

demo/test_foo.py:4: ValueError
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> entering PDB >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> /.../demo/test_foo.py(4)test_spam()
-> int("Vikings")
(Pdb)

To re-iterate, the above approach has the added advantage that you can combine this with any debugger that works with pytest, including pudb, or the IPython debugger:

$ pytest demo/test_foo.py --pdb --pdbcls=IPython.core.debugger:Pdb
[ ... ]
demo/test_foo.py F
Sorry, not interested!

demo/test_foo.py F
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> traceback >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>

def test_spam():
> int("Vikings")
E ValueError: invalid literal for int() with base 10: 'Vikings'

demo/test_foo.py:4: ValueError
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> entering PDB >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> /.../demo/test_foo.py(4)test_spam()
1 def test_ham():
2 assert 42 == 17
3 def test_spam():
----> 4 int("Vikings")

ipdb>

It also has much more context about what test was being run (via the node argument) and direct access to the exception raised (via the call.excinfo ExceptionInfo instance).

Note that specific pytest debugger plugins (such as pytest-pudb or pytest-pycharm) register their own pytest_exception_interact hooks. A more complete implementation would have to loop over all plugins in the plugin manager to override arbitrary plugins, automatically, using config.pluginmanager.list_name_plugin and hasattr() to test each plugin; a rough sketch follows below.
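For illustration, a rough conftest.py sketch of that loop (a simplification: it only forwards each wrapped plugin's pytest_exception_interact hook, whereas a complete version would forward the plugin's other hooks too, as ExceptionFilter does above):

import pytest

class FilteredInteract:
    def __init__(self, original_hook):
        self.original_hook = original_hook

    def pytest_exception_interact(self, node, call, report):
        if not call.excinfo.errisinstance(ValueError):
            return  # not interesting; don't open this plugin's debugger
        return self.original_hook(node, call, report)

@pytest.hookimpl(trylast=True)
def pytest_configure(config):
    # snapshot the registry first, as it is modified inside the loop
    for name, plugin in list(config.pluginmanager.list_name_plugin()):
        if not hasattr(plugin, "pytest_exception_interact"):
            continue
        config.pluginmanager.unregister(plugin, name=name)
        config.pluginmanager.register(
            FilteredInteract(plugin.pytest_exception_interact), f"filtered_{name}")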

Making failures go away altogether

While this gives you full control over failed-test debugging, it still leaves the test reported as failed even if you opted not to open the debugger for a given test. If you want to make failures go away altogether, you can make use of a different hook: pytest_runtest_call().

When pytest runs tests, it runs the test via the above hook, which is expected to return None or raise an exception. From this a report is created, optionally a log entry is made, and if the test failed, the aforementioned pytest_exception_interact() hook is called. So all you need to do is change the result that this hook produces; instead of an exception it should just not return anything at all.

The best way to do that is to use a hook wrapper. Hook wrappers don't have to do the actual work, but instead are given a chance to alter what happens to the result of a hook. All you have to do is add the line:

outcome = yield

in your hook wrapper implementation and you get access to the hook result, including the test exception via outcome.excinfo. This attribute is set to a tuple of (type, instance, traceback) if an exception was raised in the test. Alternatively, you could call outcome.get_result() and use standard try...except handling.

So how do you make a failed test pass? You have 3 basic options:

  • You could mark the test as an expected failure, by calling pytest.xfail() in the wrapper.
  • You could mark the item as skipped, which pretends that the test was never run in the first place, by calling pytest.skip().
  • You could remove the exception, by using the outcome.force_result() method; set the result to an empty list here (meaning: the registered hook produced nothing but None), and the exception is cleared entirely.

What you use is up to you. Do make sure to check the result for skipped and expected-failure tests first as you don't need to handle those cases as if the test failed. You can access the special exceptions these options raise via pytest.skip.Exception and pytest.xfail.Exception.

Here's an example implementation which marks failed tests that don't raise ValueError, as skipped:

import pytest

@pytest.hookimpl(hookwrapper=True)
def pytest_runtest_call(item):
    outcome = yield
    try:
        outcome.get_result()
    except (pytest.xfail.Exception, pytest.skip.Exception, pytest.exit.Exception):
        raise  # already xfailed, skipped or explicit exit
    except ValueError:
        raise  # not ignoring
    except (pytest.fail.Exception, Exception):
        # turn everything else into a skip
        pytest.skip("[NOTRUN] ignoring everything but ValueError")

When put in conftest.py the output becomes:

$ pytest -r a demo/test_foo.py
============================= test session starts =============================
platform darwin -- Python 3.8.0, pytest-3.10.0, py-1.7.0, pluggy-0.8.0
rootdir: ..., inifile:
collected 2 items

demo/test_foo.py sF [100%]

=================================== FAILURES ===================================
__________________________________ test_spam ___________________________________

def test_spam():
> int("Vikings")
E ValueError: invalid literal for int() with base 10: 'Vikings'

demo/test_foo.py:4: ValueError
=========================== short test summary info ============================
FAIL demo/test_foo.py::test_spam
SKIP [1] .../demo/conftest.py:12: [NOTRUN] ignoring everything but ValueError
===================== 1 failed, 1 skipped in 0.07 seconds ======================

I used the -r a flag to make it clearer that test_ham was skipped now.

If you replace the pytest.skip() call with pytest.xfail("[XFAIL] ignoring everything but ValueError"), the test is marked as an expected failure:

[ ... ]
XFAIL demo/test_foo.py::test_ham
reason: [XFAIL] ignoring everything but ValueError
[ ... ]

and using outcome.force_result([]) marks it as passed:

$ pytest -v demo/test_foo.py  # verbose to see individual PASSED entries
[ ... ]
demo/test_foo.py::test_ham PASSED [ 50%]
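For completeness, a minimal sketch of that third variant (same assumption as before: anything other than ValueError is considered ignorable, and deliberate skip/xfail/exit outcomes are left alone):

import pytest

@pytest.hookimpl(hookwrapper=True)
def pytest_runtest_call(item):
    outcome = yield
    if outcome.excinfo is None:
        return  # the test passed; nothing to alter
    if issubclass(outcome.excinfo[0], (ValueError, pytest.skip.Exception,
                                       pytest.xfail.Exception, pytest.exit.Exception)):
        return  # interesting failure or deliberate outcome; leave it be
    # an empty result list means the hook produced nothing but None,
    # so the exception is cleared and the test counts as passed
    outcome.force_result([])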

It's up to you which one you feel fits your use case best. For skip() and xfail() I mimicked the standard message format (prefixed with [NOTRUN] or [XFAIL]) but you are free to use any other message format you want.

In all three cases pytest will not open the debugger for tests whose outcome you altered using this method.

Altering individual assert statements

If you want to alter assert tests within a test, then you are setting yourself up for a whole lot more work. Yes, this is technically possible, but only by rewriting the very code that Python is going to execute at compile time.

When you use pytest, this is actually already being done. Pytest rewrites assert statements to give you more context when your asserts fail; see this blog post for a good overview of exactly what is being done, as well as the _pytest/assertion/rewrite.py source code. Note that that module is over 1k lines long, and requires that you understand how Python's abstract syntax trees work. If you do, you could monkeypatch that module to add your own modifications there, including surrounding the assert with a try...except AssertionError: handler.

However, you can't just disable or ignore asserts selectively, because subsequent statements could easily depend on state (specific object arrangements, variables set, etc.) that a skipped assert was meant to guard against. If an assert tests that foo is not None and a later assert relies on foo.bar existing, you will simply run into an AttributeError there instead. Do stick to re-raising the exception if you need to go this route.

I'm not going to go into further detail on rewriting asserts here, as I don't think this is worth pursuing, not given the amount of work involved, and with post-mortem debugging giving you access to the state of the test at the point of assertion failure anyway.

Note that if you do want to do this, you don't need to use eval() (which wouldn't work anyway; assert is a statement, so you'd need to use exec() instead), nor would you have to run the assertion twice (which can lead to issues if the expression used in the assertion altered state). You would instead embed the ast.Assert node inside an ast.Try node, and attach an except handler that uses an empty ast.Raise node to re-raise the exception that was caught.
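To make that concrete, here is a rough standalone sketch of just that transformation (ast.unparse needs Python 3.9+; a real integration would have to happen inside pytest's own rewriter, which this does not attempt):

import ast

class WrapAsserts(ast.NodeTransformer):
    def visit_Assert(self, node):
        handler = ast.ExceptHandler(
            type=ast.Name(id="AssertionError", ctx=ast.Load()),
            name=None,
            body=[ast.Raise(exc=None, cause=None)],  # a bare raise: re-raise as-is
        )
        wrapped = ast.Try(body=[node], handlers=[handler], orelse=[], finalbody=[])
        return ast.copy_location(wrapped, node)

source = "assert 1 + 1 == 3, 'math is broken'"
tree = ast.fix_missing_locations(WrapAsserts().visit(ast.parse(source)))
print(ast.unparse(tree))  # the assert, now wrapped in try/except AssertionError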

Using the debugger to skip assertion statements

The Python debugger actually lets you skip statements, using the j / jump command. If you know up front that a specific assertion will fail, you can use this to bypass it. You could run your tests with --trace, which opens the debugger at the start of every test, then issue a j <line after assert> to skip it when the debugger is paused just before the assert.

You can even automate this. Using the above techniques you can build a custom debugger plugin that

  • uses the pytest_runtest_call() hook to catch the AssertionError exception
  • extracts the 'offending' line number from the traceback, and perhaps with
    some source code analysis determines the line numbers before and after the
    assertion required to execute a successful jump
  • runs the test again, but this time using a Pdb subclass that sets a
    breakpoint on the line before the assert, and automatically executes a jump
    to the line after it when the breakpoint is hit, followed by a c continue.

Or, instead of waiting for an assertion to fail, you could automate setting breakpoints for each assert found in a test (again using source code analysis, you can trivially extract line numbers for ast.Assert nodes in an AST of the test), execute the test under the debugger using scripted commands, and use the jump command to skip the assertion itself. You'd have to make a tradeoff: run all tests under a debugger (which is slow, as the interpreter has to call a trace function for every statement) or only apply this to failing tests and pay the price of re-running those tests from scratch.

Such a plugin would be a lot of work to create, I'm not going to write an example here, partly because it wouldn't fit in an answer anyway, and partly because I don't think it is worth the time. I'd just open up the debugger and make the jump manually. A failing assert indicates a bug in either the test itself or the code-under-test, so you may as well just focus on debugging the problem.

shield (invalidate) assert in python

Use command line option -O. As described in the docs:

In the current implementation, the built-in variable __debug__ is True
under normal circumstances, False when optimization is requested
(command line option -O). The current code generator emits no code for
an assert statement when optimization is requested at compile time.


