What Is the Purpose of Subclassing the Class "Object" in Python

What is the purpose of subclassing the class object in Python?

Note: new-style classes are the default in Python 3. Subclassing object is unnecessary there. Read below for more information on usage with Python 2.

In short, it sets free magical ponies.

In long, before Python 2.2 Python had only "old-style classes". They were a particular implementation of classes, and they had a few limitations (for example, you couldn't subclass built-in types). The fix for this was to create a new style of class. But doing so involved some backwards-incompatible changes. So, to make sure that code written for old-style classes would still work, the object class was created to act as the superclass for all new-style classes.
So, in Python 2.x, class Foo: pass will create an old-style class and class Foo(object): pass will create a new-style class.

In longer, see Guido's Unifying types and classes in Python 2.2.

And, in general, it's a good idea to get into the habit of making all your classes new-style, because some things (the @property decorator is one that comes to mind) won't work with old-style classes.

Why would it be necessary to subclass from object in Python?

These are old-style and new-style classes in Python 2.x. The second form is the up-to-date version, available since Python 2.2. For new code you should use only new-style classes.

In Python 3.x you can use either form interchangeably, as the new style is the only one left and both forms are truly equivalent. However, I believe you should continue to use the MyClass(object) form even for 3.x code, at least until Python 3.x is widely adopted, to avoid any misunderstanding for readers of your code who are used to 2.x.
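In Python 3 the two spellings are indeed equivalent; a quick check (a minimal sketch with throwaway class names):

```python
# Python 3: with or without an explicit (object), the result is the same
class Foo: pass
class Bar(object): pass

print(Foo.__bases__)  # (<class 'object'>,)
print(Bar.__bases__)  # (<class 'object'>,)
print(type(Foo) is type(Bar) is type)  # True
```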

The behavior of old-style and new-style classes differs significantly with regard to certain features, such as the use of super().

See here: New Style Classes.

You can also see here on SO.

Subclassing type vs object in Python3

There is no cross-inheritance between object and type. In fact, cross-inheritance is impossible.

# A type is an object
isinstance(int, object) # True

# But an object is not necessarily a type
isinstance(object(), type) # False

What is true in Python is that...

Everything is an object

Absolutely everything; object is the only base type.

isinstance(1, object) # True
isinstance('Hello World', object) # True
isinstance(int, object) # True
isinstance(object, object) # True
isinstance(type, object) # True

Everything has a type

Everything has a type, either built-in or user-defined, and this type can be obtained with type.

type(1) # int
type('Hello World') # str
type(object) # type

Not everything is a type

That one is fairly obvious

isinstance(1, type) # False
isinstance(isinstance, type) # False
isinstance(int, type) # True

type is its own type

This is the behaviour which is specific to type and that is not reproducible for any other class.

type(type) # type

In other words, type is the only object X in Python such that type(X) is X.

type(type) is type # True

# While...
type(object) is object # False

This is because type is the only built-in metaclass. A metaclass is simply a class, but its instances are also classes themselves. So in your example...

# This defines a class
class Foo(object):
    pass

# Its instances are not types
isinstance(Foo(), type) # False

# While this defines a metaclass
class Bar(type):
    pass

# Its instances are types
MyClass = Bar('MyClass', (), {})

isinstance(MyClass, type) # True

# And it is a class
x = MyClass()

isinstance(x, MyClass) # True

Should all Python classes extend object?

In Python 2, not inheriting from object will create an old-style class, which, amongst other effects, causes type to give different results:

>>> class Foo: pass
...
>>> type(Foo())
<type 'instance'>

vs.

>>> class Bar(object): pass
...
>>> type(Bar())
<class '__main__.Bar'>

Also the rules for multiple inheritance are different in ways that I won't even try to summarize here. All good documentation that I've seen about MI describes new-style classes.
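For instance, new-style classes resolve the classic diamond with the C3 linearization; a minimal Python 3 sketch (class names are invented):

```python
class A: pass
class B(A): pass
class C(A): pass
class D(B, C): pass

# C3 linearization: each class appears exactly once,
# always before any of its own base classes
print([cls.__name__ for cls in D.__mro__])
# ['D', 'B', 'C', 'A', 'object']
```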

Finally, old-style classes have disappeared in Python 3, and inheritance from object has become implicit. So, always prefer new style classes unless you need backward compat with old software.

Possible for a class to look down at subclass constructor?

You can assign any attribute to a class, including a method. This is called monkey patching.

# save the old init function
A.__oldinit__ = A.__init__

# create a new function that calls the old one
def custom_init(self):
    self.__oldinit__()
    self.timeout = 10

# overwrite the old function
# the actual old function will still exist because
# it's referenced as A.__oldinit__ as well
A.__init__ = custom_init

# optional cleanup
del custom_init
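Assuming a minimal class A for illustration, the patch behaves like this:

```python
class A:
    def __init__(self):
        self.name = "example"

# save the old init, then install a wrapper that calls it
A.__oldinit__ = A.__init__

def custom_init(self):
    self.__oldinit__()
    self.timeout = 10

A.__init__ = custom_init
del custom_init

a = A()
print(a.name)     # example -- the original __init__ still ran
print(a.timeout)  # 10 -- added by the patched __init__
```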

What are the differences between type() and isinstance()?

To summarize the contents of other (already good!) answers, isinstance caters for inheritance (an instance of a derived class is an instance of a base class, too), while checking for equality of type does not (it demands identity of types and rejects instances of subtypes, AKA subclasses).
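A quick illustration of that difference (class names are invented):

```python
class Animal: pass
class Dog(Animal): pass

d = Dog()

print(isinstance(d, Animal))  # True  -- honours inheritance
print(type(d) == Animal)      # False -- demands the exact type
print(type(d) is Dog)         # True
```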

Normally, in Python, you want your code to support inheritance, of course (since inheritance is so handy, it would be bad to stop code using yours from using it!), so isinstance is less bad than checking identity of types because it seamlessly supports inheritance.

It's not that isinstance is good, mind you—it's just less bad than checking equality of types. The normal, Pythonic, preferred solution is almost invariably "duck typing": try using the argument as if it was of a certain desired type, do it in a try/except statement catching all exceptions that could arise if the argument was not in fact of that type (or any other type nicely duck-mimicking it;-), and in the except clause, try something else (using the argument "as if" it was of some other type).

basestring is, however, quite a special case—a builtin type that exists only to let you use isinstance (both str and unicode subclass basestring). Strings are sequences (you can loop over them, index them, slice them, ...), but you generally want to treat them as "scalar" types—it's somewhat inconvenient (but a reasonably frequent use case) to treat all kinds of strings (and maybe other scalar types, i.e., ones you can't loop on) one way, and all containers (lists, sets, dicts, ...) another way, and basestring plus isinstance helps you do that—the overall structure of this idiom is something like:

if isinstance(x, basestring):
    return treatasscalar(x)
try:
    return treatasiter(iter(x))
except TypeError:
    return treatasscalar(x)

You could say that basestring is an Abstract Base Class ("ABC")—it offers no concrete functionality to subclasses, but rather exists as a "marker", mainly for use with isinstance. The concept is obviously a growing one in Python, since PEP 3119, which introduces a generalization of it, was accepted and has been implemented starting with Python 2.6 and 3.0.

The PEP makes it clear that, while ABCs can often substitute for duck typing, there is generally no big pressure to do that (see here). ABCs as implemented in recent Python versions do however offer extra goodies: isinstance (and issubclass) can now mean more than just "[an instance of] a derived class" (in particular, any class can be "registered" with an ABC so that it will show as a subclass, and its instances as instances of the ABC); and ABCs can also offer extra convenience to actual subclasses in a very natural way via Template Method design pattern applications (see here and here [[part II]] for more on the TM DP, in general and specifically in Python, independent of ABCs).
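For example, registration looks like this (using the Python 3 abc.ABC spelling; class names are invented):

```python
import abc

class Marker(abc.ABC):   # an ABC with no concrete behaviour
    pass

class Plain:             # completely unrelated by inheritance
    pass

Marker.register(Plain)   # register Plain as a "virtual" subclass

print(issubclass(Plain, Marker))    # True
print(isinstance(Plain(), Marker))  # True
print(Marker in Plain.__mro__)      # False -- not real inheritance
```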

For the underlying mechanics of ABC support as offered in Python 2.6, see here; for their 3.1 version, very similar, see here. In both versions, standard library module collections (that's the 3.1 version—for the very similar 2.6 version, see here) offers several useful ABCs.

For the purpose of this answer, the key thing to retain about ABCs (beyond an arguably more natural placement for TM DP functionality, compared to the classic Python alternative of mixin classes such as UserDict.DictMixin) is that they make isinstance (and issubclass) much more attractive and pervasive (in Python 2.6 and going forward) than they used to be (in 2.5 and before), and therefore, by contrast, make checking type equality an even worse practice in recent Python versions than it already used to be.

Why does super closure not use the new base class given by a metaclass?

The problem here is that your dynamic class does not inherit from Class at all - the implicit __class__ variable inside your __init__ points to the "hardcoded" Class version, but the self received when __init__ is called is an instance of the dynamic class, which does not have Class as a superclass. Thus the arguments implicitly filled in for super(), namely __class__ and self, mismatch (self is not an instance of the class stored in __class__, nor of a subclass of it).

The reliable way to fix this is to allow proper inheritance, and forget copying the class __dict__ attributes around: let the inheritance machinery take care of calling the methods in the appropriate order.

By simply making the dynamic class inherit from both your static class and the dynamic base, all methods are in place. The baked-in __class__ inside __init__ still points to the static Class, but the conditions for calling super() are now fulfilled, and super() does the right thing: it looks for the supermethods starting from self's class - the newly created dynamic subclass that inherits from both your static Class and the new Base - and calls the methods on Base, which now sits in the correct place in the new class's __mro__.

Ok - sounds complicated - but with some print statements added we can see the thing working:


class Base:
    def __init__(self):
        print("at base __init__")

class Meta(type):
    def __call__(cls, obj, *args, **kwargs):
        dynamic_ancestor = type(obj)
        bases = (cls, dynamic_ancestor,)

        new_cls = type(f"{cls.__name__}__{dynamic_ancestor.__name__}", bases, {})
        instance = new_cls.__new__(new_cls, *args, **kwargs)
        instance.__init__(*args, **kwargs)
        return instance

class Class(metaclass=Meta):
    def __init__(self, *args, **kwargs):
        print(__class__.__bases__)
        print(self.__class__.__bases__)
        print(isinstance(self, Class))
        print(isinstance(Class, Meta))
        print(isinstance(self, __class__))
        print(isinstance(self, self.__class__))
        print(self.__class__.__mro__, __class__.__mro__)
        super().__init__(*args, **kwargs)

Class(Base())

Outputs:

at base __init__
(<class 'object'>,)
(<class '__main__.Class'>, <class '__main__.Base'>)
True
True
True
True
(<class '__main__.Class__Base'>, <class '__main__.Class'>, <class '__main__.Base'>, <class 'object'>) (<class '__main__.Class'>, <class 'object'>)
at base __init__

Enforcing Class Variables in a Subclass

Abstract Base Classes allow you to declare a property abstract, which forces all implementing classes to provide it. I am only providing this example for completeness; many Pythonistas consider your proposed solution more Pythonic.

import abc

class Base(object):
    __metaclass__ = abc.ABCMeta

    @abc.abstractproperty
    def value(self):
        return 'Should never get here'

class Implementation1(Base):

    @property
    def value(self):
        return 'concrete property'

class Implementation2(Base):
    pass  # doesn't have the required property

Trying to instantiate the first implementing class:

Implementation1()
Out[6]: <__main__.Implementation1 at 0x105c41d90>

Trying to instantiate the second implementing class:

Implementation2()
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-4-bbaeae6b17a6> in <module>()
----> 1 Implementation2()

TypeError: Can't instantiate abstract class Implementation2 with abstract methods value
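For reference, here is a rough Python 3 equivalent of the same example (abc.abstractproperty is deprecated there; stacking @property over @abc.abstractmethod replaces it):

```python
import abc

class Base(abc.ABC):
    @property
    @abc.abstractmethod
    def value(self):
        raise NotImplementedError

class Implementation1(Base):
    @property
    def value(self):
        return 'concrete property'

class Implementation2(Base):
    pass  # still missing the required property

print(Implementation1().value)  # concrete property

try:
    Implementation2()
except TypeError as e:
    print(e)  # Can't instantiate abstract class Implementation2 ...
```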

