Why Do Python Classes Inherit Object

Why do Python classes inherit object?

Is there any reason for a class declaration to inherit from object?

In Python 3, apart from compatibility between Python 2 and 3, no reason. In Python 2, many reasons.


Python 2.x story:

In Python 2.x (from 2.2 onwards) there are two styles of classes, depending on the presence or absence of object as a base class:

  1. "classic" style classes: they don't have object as a base class:

    >>> class ClassicSpam:      # no base class
    ...     pass
    >>> ClassicSpam.__bases__
    ()
  2. "new" style classes: they have, directly or indirectly (e.g. by inheriting from a built-in type), object as a base class:

    >>> class NewSpam(object):           # directly inherit from object
    ...     pass
    >>> NewSpam.__bases__
    (<type 'object'>,)
    >>> class IntSpam(int): # indirectly inherit from object...
    ...     pass
    >>> IntSpam.__bases__
    (<type 'int'>,)
    >>> IntSpam.__bases__[0].__bases__ # ... because int inherits from object
    (<type 'object'>,)

Without a doubt, when writing a class you'll always want to go for new-style classes. The perks of doing so are numerous; to list some of them:

  • Support for descriptors. Specifically, the following constructs are made possible with descriptors:

    1. classmethod: A method that receives the class as an implicit argument instead of the instance.
    2. staticmethod: A method that does not receive the implicit argument self as a first argument.
    3. properties with property: Manage the getting, setting and deleting of an attribute through functions.
    4. __slots__: Reduces the memory consumption of instances and also results in faster attribute access. Of course, it does impose limitations.
  • The __new__ static method: lets you customize how new class instances are created.

  • Method resolution order (MRO): in what order the base classes of a class will be searched when trying to resolve which method to call.

  • Related to MRO, super() calls. Also see super() considered super.

If you don't inherit from object, forget these. A more exhaustive description of the previous bullet points along with other perks of "new" style classes can be found here.
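To make that concrete, here is a minimal Python 2 sketch (the class names are made up for illustration) showing two of those perks quietly failing for a classic class:

    class ClassicPoint:                 # classic: no object base
        __slots__ = ('x', 'y')          # silently ignored on classic classes

    class NewPoint(object):             # new-style
        __slots__ = ('x', 'y')

    p = ClassicPoint()
    p.z = 3                             # works: the instance still carries a __dict__

    q = NewPoint()
    # q.z = 3 would raise AttributeError: __slots__ really restricts new-style instances

    # super() likewise refuses classic classes:
    # super(ClassicPoint, p) raises TypeError, since ClassicPoint is not a type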

One of the downsides of new-style classes is that the class itself is more memory demanding. Unless you're creating many class objects, though, I doubt this would be an issue and it's a negative sinking in a sea of positives.


Python 3.x story:

In Python 3, things are simplified. Only new-style classes exist (referred to plainly as classes), so the only difference made by adding object explicitly is having to type 8 more characters. This:

class ClassicSpam:
    pass

is completely equivalent (apart from their name :-) to this:

class NewSpam(object):
    pass

and to this:

class Spam():
    pass

All have object in their __bases__.

>>> [object in cls.__bases__ for cls in {Spam, NewSpam, ClassicSpam}]
[True, True, True]

So, what should you do?

In Python 2: always inherit from object explicitly. Get the perks.

In Python 3: inherit from object if you are writing code that tries to be Python agnostic, that is, it needs to work both in Python 2 and in Python 3. Otherwise don't; it really makes no difference, since Python inserts it for you behind the scenes.

Do I need to inherit object in my Python classes?

This only matters if you are using Python 2: there, class Foo() will create an old-style class, so I suggest you always use class Foo(object): to create a new-style class.

But if you are using Python 3, class Foo: is the same as class Foo(): and class Foo(object):, so you can use any of those because all of them will create a new-style class. I personally use the first one.

Is it necessary or useful to inherit from Python's object in Python 3.x?

You don't need to inherit from object to get new-style classes in Python 3; all classes are new-style.
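A quick way to convince yourself of that (a tiny sketch; the class name is arbitrary):

>>> class Foo: pass          # Python 3, no explicit base
...
>>> Foo.__bases__
(<class 'object'>,)
>>> isinstance(Foo, type)    # every class is itself an instance of type
True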

Should all Python classes extend object?

In Python 2, not inheriting from object will create an old-style class, which, amongst other effects, causes type to give different results:

>>> class Foo: pass
...
>>> type(Foo())
<type 'instance'>

vs.

>>> class Bar(object): pass
...
>>> type(Bar())
<class '__main__.Bar'>

Also the rules for multiple inheritance are different in ways that I won't even try to summarize here. All good documentation that I've seen about MI describes new-style classes.

Finally, old-style classes have disappeared in Python 3, and inheritance from object has become implicit. So, always prefer new-style classes unless you need backward compatibility with old software.

Why does inheriting from object make a difference in Python?

In Python 3, a class declared with or without object as an explicit base is exactly the same. In Python 2, however:

class A: pass  # old-style class

class B(object): pass # new-style class

From New-style and classic classes in the documentation:

Up to Python 2.1, old-style classes were the only flavour available to the user. The concept of (old-style) class is unrelated to the concept of type: if x is an instance of an old-style class, then x.__class__ designates the class of x, but type(x) is always <type 'instance'>. This reflects the fact that all old-style instances, independently of their class, are implemented with a single built-in type, called instance.

New-style classes were introduced in Python 2.2 to unify classes and types. A new-style class is neither more nor less than a user-defined type. If x is an instance of a new-style class, then type(x) is the same as x.__class__.

The major motivation for introducing new-style classes is to provide a unified object model with a full meta-model. It also has a number of immediate benefits, like the ability to subclass most built-in types, or the introduction of "descriptors", which enable computed properties.

For these reasons, it's a good idea to use new-style classes whenever you can. The only reason old-style classes even exist in Python 2.2+ is for backwards compatibility; in Python 3, old-style classes were removed.
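As a small illustration of that unification (a sketch; the class name is arbitrary), a new-style class can subclass a built-in type, and type(x) agrees with x.__class__:

>>> class Celsius(float):     # subclassing a built-in type
...     pass
...
>>> t = Celsius(36.6)
>>> type(t) is t.__class__    # classes and types are unified
True
>>> isinstance(t, float)
True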

Why would a python class that inherits 'object' call 'super' in its __init__ method?

As mentioned in Corley Brigman's comment, it's unnecessary but harmless.

For some background, the BaseResponse class was added during Kenneth's sprint on Requests 1.0. The 1.0 code change introduced transport adapters, which make it possible to define specific behaviour for some HTTP endpoints (or indeed non-HTTP endpoints). An important part of the Transport Adapter interface is the HTTPAdapter.build_response() method, which takes the returned raw response from HTTPAdapter.send() and builds a Requests Response object from it.

It is clear that Kenneth saw potential utility in having some form of abstract base class for Responses, which would allow transport adapters to return Responses with very different behaviours to the standard HTTP Response object. For this reason, the refactor into an ABC with the bulk of the logic in the subclass seemed to make sense.

Later in the refactor this got yanked out again as unnecessary complexity. The reality is that people wanting to define specialised Response objects can simply subclass Response, rather than having an ABC that does nothing much. This makes the mainline use case (vanilla Requests) much cleaner in the code, and takes away almost no utility.

When the BaseResponse class got pulled out, this line was overlooked, but since it causes no problems there has never been a need to remove it.

See all inherited classes in python

Just combining the answers from above:

  1. my_class.__subclasses__() will return the classes that subclass my_class.

  2. C.__mro__ shows the inheritance hierarchy, in your case:
    (<class '__main__.C'>, <class '__main__.A'>, <class '__main__.B'>, <type 'object'>)

      object
      /    \
     A      B
      \    /
        C

In short, __subclasses__() goes down the class hierarchy and __mro__ goes up. Good luck :)
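For completeness, a self-contained sketch of that diamond (assuming the same A, B, C layout as above):

class A(object): pass
class B(object): pass
class C(A, B): pass

print(C.__mro__)           # (C, A, B, object): __mro__ walks *up* the hierarchy
print(A.__subclasses__())  # [C]: __subclasses__() walks *down*, to direct subclasses only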

Why inheritance in python require the parent class to inherit object explicitly?

super() only works for new-style classes (any class that inherits from object), so you can't pass an old-style class to super().

There are two typical use cases for super. In a class hierarchy with single inheritance, super can be used to refer to parent classes without naming them explicitly, thus making the code more maintainable. This use closely parallels the use of super in other programming languages.

The second use case is to support cooperative multiple inheritance in a dynamic execution environment. This use case is unique to Python and is not found in statically compiled languages or languages that only support single inheritance. This makes it possible to implement “diamond diagrams” where multiple base classes implement the same method. Good design dictates that this method have the same calling signature in every case (because the order of calls is determined at runtime, because that order adapts to changes in the class hierarchy, and because that order can include sibling classes that are unknown prior to runtime).

a typical superclass call looks like this:

class C(B):
    def method(self, arg):
        super(C, self).method(arg)
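To see the cooperative multiple-inheritance behaviour the quote describes, here is a minimal diamond sketch (class and method names are made up):

class Base(object):
    def greet(self):
        print("Base")

class Left(Base):
    def greet(self):
        print("Left")
        super(Left, self).greet()

class Right(Base):
    def greet(self):
        print("Right")
        super(Right, self).greet()

class Child(Left, Right):
    def greet(self):
        print("Child")
        super(Child, self).greet()

Child().greet()   # Child, Left, Right, Base: each class runs once, in MRO order

Each super() call forwards to the next class in Child's MRO rather than to a hard-coded parent, which is why the diamond works without Base.greet() being called twice.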

