Good or Bad Practice in Python: Import in the Middle of a File

PEP 8 authoritatively states:

Imports are always put at the top of the file, just after any module comments and docstrings, and before module globals and constants.
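
Laid out as a minimal sketch (the names are only illustrative), that ordering looks like this:

"""Module docstring comes first."""

# Imports come next: standard library, then third-party, then local modules.
import os
import sys

# Module globals and constants follow the imports.
DEFAULT_TIMEOUT = 30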

PEP 8 should be the basis of any "in-house" style guide, since it summarizes what the core Python team has found to be the most effective style overall (with individual dissent, of course, as in any language community, but the consensus and the BDFL agree on PEP 8).

Should import statements always be at the top of a module?

Module importing is quite fast, but not instant. This means that:

  • Putting the imports at the top of the module is fine, because it's a trivial cost that's only paid once.
  • Putting the imports within a function will cause calls to that function to take longer.

So if you care about efficiency, put the imports at the top. Only move them into a function if profiling shows that it would actually help (you did profile to find the best place to improve performance, right?).
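
As a rough sketch of what "moving an import into a function" means (statistics stands in here for a genuinely expensive module), the cost is deferred to the first call; later calls just hit the sys.modules cache:

def summarize(values):
    # Imported only when the function is first called; later calls find the
    # module already cached in sys.modules, so the statement is just a lookup.
    import statistics  # stand-in for a genuinely expensive module
    return statistics.mean(values)

print(summarize([1, 2, 3]))  # 2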


The best reasons I've seen to perform lazy imports are:

  • Optional library support. If your code has multiple paths that use different libraries, it shouldn't break just because an optional library is not installed (a sketch follows this list).
  • In the __init__.py of a plugin, which might be imported but not actually used. Examples are Bazaar plugins, which use bzrlib's lazy-loading framework.
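
For the optional-library case, a common shape is to try the preferred import once at module level and fall back if it is missing; ujson here is just one example of an optional accelerator:

# Prefer the optional third-party library if it happens to be installed,
# but keep working with the standard library otherwise.
try:
    import ujson as json
except ImportError:
    import json

def dump(obj):
    return json.dumps(obj)

print(dump({'ok': True}))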

Is it good practice to use `import __main__`?

I think there are two main (ha ha) reasons one might prescribe an avoidance of this pattern.

  • It obfuscates the origin of the variables you're importing.
  • It breaks (or at least it's tough to maintain) if your program has multiple entry points. Imagine if someone, very possibly you, wanted to extract some subset of your functionality into a standalone library--they'd have to delete or redefine every one of those orphaned references to make the thing usable outside of your application.

If you have total control over the application and there will never be another entry point or another use for your features, and you're sure you don't mind the ambiguity, I don't think there's any objective reason why the from __main__ import foo pattern is bad. I don't like it personally, but again, it's basically for the two reasons above.
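
For concreteness, the pattern being discussed looks roughly like this (DEBUG and the file names are illustrative); helpers.py only works when main.py is the entry point:

# main.py (the single entry point)
import sys
DEBUG = '--debug' in sys.argv

import helpers
helpers.do_work()

# helpers.py
from __main__ import DEBUG  # breaks under any other entry point (tests, REPL, ...)

def do_work():
    if DEBUG:
        print('debug is on')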


I think a more robust and developer-friendly solution is to create a dedicated module for these "super-global" variables. You can then import that module and refer to module.VAR whenever you need a setting; in effect, it is a module namespace reserved for runtime configuration.

# conf.py (for example)
# This module holds all the "super-global" stuff.

def init(args):
    global DEBUG
    DEBUG = '--debug' in args
    # set up other global vars here.

You would then use it more like this:

# main.py
import conf
import app

if __name__ == '__main__':
    import sys
    conf.init(sys.argv[1:])
    app.run()

# app.py
import conf

def run():
    if conf.DEBUG:
        print('debug is on')

Note the use of conf.DEBUG rather than from conf import DEBUG. This construction means that you can alter the variable during the life of the program, and have that change reflected elsewhere (assuming a single thread/process, obviously).
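
To make the difference concrete (reusing the conf module above): a from-import copies the binding at import time, so later changes to conf.DEBUG are invisible through the copied name.

import conf
conf.init(['--debug'])        # DEBUG is now True

from conf import DEBUG        # snapshots the current value
conf.DEBUG = False            # changed later, e.g. by another part of the program

print(DEBUG)                  # True  -- the stale snapshot
print(conf.DEBUG)             # False -- the live module attribute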


Another upside is that this is a fairly common pattern, so other developers will readily recognize it. It's comparable to the settings.py file used by various popular frameworks (e.g. Django), though I avoided that particular name because settings.py is conventionally a bunch of static objects, not a namespace for runtime parameters. Other good names for the configuration namespace module described above might be runtime or params.

Import statement inside class/function definition - is it a good idea?

It's the most common style to put every import at the top of the file. PEP 8 recommends it, which is a good reason to do it to start with. But that's not a whim; it has real advantages (although none critical enough to make everything else a crime). It lets you find all imports at a glance instead of searching the whole file, and it ensures everything is imported before any other code (which may depend on those imports) is executed. NameErrors are usually easy to resolve, but they're annoying.

There's no (significant) namespace pollution to be avoided by keeping the module in a smaller scope, since all you add is the module name itself (no, import * doesn't count, and it probably shouldn't be used anyway). Inside a function, the import statement would execute again on every call; that's not really harmful, since the module itself is only loaded once, but it's unnecessary.
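
A quick way to convince yourself of both points (the function names here are arbitrary): the function-level import statement runs on every call, but since the module is already cached it only costs a sys.modules lookup and a local name binding.

import math
import timeit

def with_local_import():
    import math  # executed on every call, but math is already in sys.modules
    return math.sqrt(2)

def with_module_level_import():
    return math.sqrt(2)  # uses the module imported once at the top

print(timeit.timeit(with_local_import, number=100_000))
print(timeit.timeit(with_module_level_import, number=100_000))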

Is it bad practice in Python to define a function in the middle of operational code?

All code in a Python script or module (which is essentially the same thing; the difference is how it is used) is "operational code" one way or another. The clean way to structure a script is to put everything in functions, with what you call "operational code" living in a main() function, and to call that main() only when the module is used as a script, i.e.:

# mymoduleorscript.py
import something
from somewhere import somethingelse

def somefunc(arg1, arg2):
    ...

def otherfunc(arga, argb):
    ...

def main(*args):
    # "operational" code here
    status_code = 0  # 0 if ok, anything else if error
    return status_code

# only call main if used as a script
if __name__ == "__main__":
    import sys
    sys.exit(main(*sys.argv[1:]))

Note that this is not an "in my opinion" answer, but the officially blessed OneTrueWay of doing things.

Why is import * bad?

  • Because it puts a lot of stuff into your namespace (it might shadow some other object from a previous import and you won't know about it; a minimal illustration follows this list).

  • Because you don't know exactly what is imported and can't easily find from which module a certain thing was imported (readability).

  • Because you can't use cool tools like pyflakes to statically detect errors in your code.
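
A minimal illustration of the shadowing hazard, using only the standard library: os exports its own open(), so a wildcard import silently replaces the builtin.

from os import *  # quietly brings in os.open, which shadows the builtin open

try:
    open('notes.txt', 'w')  # now calls os.open, which expects integer flags
except TypeError as exc:
    print('builtin open was shadowed:', exc)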

Local import statements in Python

The other answers evince a mild confusion as to how import really works.

This statement:

import foo

is roughly equivalent to this statement:

foo = __import__('foo', globals(), locals(), [], 0)

That is, it creates a variable in the current scope with the same name as the requested module, and assigns it the result of calling __import__() with that module name and a boatload of default arguments.

The __import__() function conceptually converts a string ('foo') into a module object. Modules are cached in sys.modules, and that's the first place __import__() looks: if sys.modules has an entry for 'foo', that's what __import__('foo') will return, whatever it is. It really doesn't care about the type. You can see this in action yourself; try running the following code:

import sys
sys.modules['boop'] = (1, 2, 3)
import boop
print(boop)  # (1, 2, 3)

Leaving aside stylistic concerns for the moment, having an import statement inside a function works how you'd want. If the module has never been imported before, it gets imported and cached in sys.modules. The statement then binds the module to a local variable with that name. It does not modify any module-level state in the importing module; the only global state it may touch is sys.modules itself (by adding a new entry).
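
A small demonstration of that behaviour: the name bound by a function-level import is local to the function, while the module object itself lands in the process-wide sys.modules cache.

import sys

def load_it():
    import json                # binds the name 'json' locally, in this function only
    return json.dumps([1, 2])

print(load_it())               # [1, 2]
print('json' in sys.modules)   # True: the module object is cached globally
print('json' in globals())     # False: no module-level name was created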

That said, I almost never use import inside a function. If importing the module creates a noticeable slowdown in your program (say it performs a long computation in its static initialization, or it's simply a massive module) and your program rarely actually needs the module for anything, it's perfectly fine to have the import only inside the functions in which it's used. (If this were distasteful, Guido would jump in his time machine and change Python to prevent us from doing it.) But as a rule, I and the general Python community put all our import statements at the top of the module in module scope.
