Calling Destructor Explicitly

Is calling destructor manually always a sign of bad design?

Calling the destructor manually is required if the object was constructed with a placement form of operator new(), except for the "std::nothrow" overloads, where plain delete still works:

T* t0 = new(std::nothrow) T();
delete t0; // OK: std::nothrow overload

void* buffer = malloc(sizeof(T));
T* t1 = new(buffer) T();
t1->~T(); // required: delete t1 would be wrong
free(buffer);

Outside of low-level memory management like the above, however, calling destructors explicitly is a sign of bad design. It is probably not just bad design but outright wrong (yes, using an explicit destructor call followed by a copy constructor call in the assignment operator is bad design and likely to be wrong).
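
For reference, the anti-pattern referred to above looks roughly like this sketch (placeholder class and member; do not use it):

#include <new>
#include <string>

struct T {
    std::string name;

    // Anti-pattern: implementing assignment via explicit destruction followed
    // by placement-new copy construction. Do not do this.
    T& operator=(const T& other) {
        if (this != &other) {
            this->~T();            // explicitly destroy *this ...
            new (this) T(other);   // ... then copy-construct into the same storage
        }
        return *this;              // breaks for derived classes, const/reference
                                   // members, and exception safety
    }
};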

With C++11 there is another reason to use explicit destructor calls: when using generalized unions, it is necessary to explicitly destroy the current object and to create a new object using placement new when changing the type of the represented object. Also, when the union is destroyed, it is necessary to explicitly call the destructor of the current object if it requires destruction.
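
A minimal sketch of that union case, using placeholder types (the member with a non-trivial destructor is activated with placement new and must be destroyed explicitly):

#include <new>
#include <string>

struct S {
    std::string text;         // gives S a non-trivial destructor
};

union Value {
    int n;
    S s;
    Value() : n(0) {}         // the int member is active initially
    ~Value() {}               // the union cannot know which member to destroy
};

int main() {
    Value v;

    new (&v.s) S{"hello"};    // switch the active member: placement new creates s
    v.s.~S();                 // ... and it must be destroyed explicitly before v
                              // goes away or before switching back to n
    v.n = 42;                 // the int member is active again
}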

Calling destructor explicitly


It seems to me that we can call the destructor explicitly in this case; could you explain to me why?

Do you mean why can we? Because the language allows explicit destructor calls on any object. As you say, it usually gives undefined behaviour since most objects will be destroyed in some other way, and it's undefined behaviour to destroy anything twice (or more generally to access it after destruction). But that just means that you mustn't do it, not that the language will prevent you from doing it.

Or do you mean why would we want to? Because that's how you destroy an object created by placement new.

What do those destructor calls mean in this example?

They both mean the same thing, and are equivalent to p->~A(); they call the object's destructor. The example is demonstrating that you can provide template arguments here if you want to. I'm not sure why you'd want to.
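
The original example is not reproduced here, but the kind of code being discussed is presumably along these lines (a sketch):

template <class T>
struct A {
    ~A() {}
};

// Both calls below are just explicit destructor calls, equivalent to
// p->~A() and q->~A(); the template arguments are permitted but add nothing.
void f(A<int>* p, A<int>* q) {
    p->A<int>::~A();        // qualified destructor call
    q->A<int>::~A<int>();   // qualified destructor call with template arguments
}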

What are the cases where we can call destructors explicitly, besides placement delete?

I think that you're allowed to call a trivial destructor (one that doesn't do anything) whenever you like; but there's no point. I think destroying something created with placement new is the only legitimate reason to do it.

Automatic destruction of object even after calling destructor explicitly

You've introduced undefined behavior.

Per the standard:

§ 12.4 Destructors

(11) A destructor is invoked implicitly

(11.3) — for a constructed object with automatic storage duration (3.7.3) when the block in which an object is created exits (6.7),

and

(15) Once a destructor is invoked for an object, the object no longer exists; the behavior is undefined if the destructor is invoked for an object whose lifetime has ended (3.8). [ Example: if the destructor for an automatic object is explicitly invoked, and the block is subsequently left in a manner that would ordinarily invoke implicit destruction of the object, the behavior is undefined. —end example ]

You explicitly call the destructor by calling t.~Test(); it is then implicitly invoked again when the object leaves scope. This is undefined behavior.
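
The situation being described is roughly this (a sketch; the question's original code is not shown here):

struct Test {
    ~Test() {}
};

int main() {
    Test t;
    t.~Test();   // explicit call: t's lifetime ends here
}                // implicit call at the end of the block: undefined behavior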

The standard provides this note as well:

(14) [ Note: explicit calls of destructors are rarely needed. One use of such calls is for objects placed at specific addresses using a placement new-expression. Such use of explicit placement and destruction of objects can be necessary to cope with dedicated hardware resources and for writing memory management facilities.

Is it allowed to call the destructor explicitly, followed by placement new, on a variable with fixed lifetime?

First, [basic.life]/8 clearly states that, in your case, any pointers or references to the original foo will refer to the new object you construct at foo, provided the conditions quoted below are met. In addition, the name foo will refer to the new object constructed there (also [basic.life]/8).

Second, you must ensure that an object of the original type occupies the storage used for foo before exiting its scope; so if anything throws, you must catch it and terminate your program ([basic.life]/9).

Overall, this is often tempting, but almost always a horrible idea.

  • (8) If, after the lifetime of an object has ended and before the storage which the object occupied is reused or released, a new object is created at the storage location which the original object occupied, a pointer that pointed to the original object, a reference that referred to the original object, or the name of the original object will automatically refer to the new object and, once the lifetime of the new object has started, can be used to manipulate the new object, if:

    • (8.1) the storage for the new object exactly overlays the storage location which the original object occupied, and
    • (8.2) the new object is of the same type as the original object (ignoring the top-level cv-qualifiers), and
    • (8.3) the type of the original object is not const-qualified, and, if a class type, does not contain any non-static data member whose type is const-qualified or a reference type, and
    • (8.4) the original object was a most derived object (1.8) of type T and the new object is a most derived object of type T (that is, they are not base class subobjects).

  • (9) If a program ends the lifetime of an object of type T with static (3.7.1), thread (3.7.2), or automatic (3.7.3) storage duration and if T has a non-trivial destructor, the program must ensure that an object of the original type occupies that same storage location when the implicit destructor call takes place; otherwise the behavior of the program is undefined. This is true even if the block is exited with an exception.

There are reasons to manually run destructors and do placement new. Something as simple as operator= is not one of them, unless you are writing your own variant/any/vector or similar type.

If you really, really want to reassign an object, find a std::optional implementation, and create/destroy objects using that; it is careful, and you almost certainly won't be careful enough.
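
For example, with std::optional (C++17, or any equivalent optional implementation) the destroy-and-reconstruct dance is handled for you; Widget here is a placeholder type:

#include <optional>
#include <string>
#include <utility>

struct Widget {
    std::string name;
    explicit Widget(std::string name) : name(std::move(name)) {}
};

int main() {
    std::optional<Widget> w;
    w.emplace("first");    // constructs a Widget in place
    w.emplace("second");   // destroys the old Widget, then constructs a new one
    w.reset();             // destroys the current Widget, if any
}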

Manual call of destructor


Why is the destructor called twice?

The first call is from the line i.~Test();.

The second call is the automatic call to the destructor when the variable i gets out of scope (before returning from main).

In the first call of the destructor the member value is changed from 6 to 7, yet in the second call it still comes out as 6.

That's caused by undefined behavior. When an object's destructor gets called twice, you should expect undefined behavior. Don't try to make logical sense when a program enters undefined behavior territory.

Can we stop the second call of the destructor? (I want to keep only the manual call of the destructor.)

You can't disable the call to the destructor of an automatic variable when the variable goes out of scope.

If you want to control when the destructor is called, create an object using dynamic memory (by calling new Test) and destroy the object by calling delete.

GB::Test* t = new GB::Test(); // Calls the constructor
t->i = 6;
delete t; // Calls the destructor

Even in this case, calling the destructor explicitly is almost always wrong.

t->~Test();  // Almost always wrong. Don't do it.

Please note that if you want to create objects using dynamic memory, it is better to use smart pointers. E.g.

auto t = std::make_unique<GB::Test>();  // Calls the constructor
t->i = 6;
t.reset(); // Calls the destructor

If t.reset(); is left out, the dynamically allocated object's destructor will be called and the memory will be deallocated when t gets out of scope. t.reset(); allows you to control when the underlying object gets deleted.

Does calling a destructor explicitly destroy an object completely?

The answer is... nearly always.

If your object has a non-virtual destructor, and is then subclassed to add members that need freeing... then calling the destructor through the base class will not free those members. This is why you should always declare destructors virtual.
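
A sketch of the problem with placeholder types; with a non-virtual destructor in the base class, the derived part is never destroyed when deleting through a base pointer:

#include <cstdio>

struct Base {
    ~Base() { std::puts("~Base"); }            // non-virtual destructor
    // virtual ~Base() { std::puts("~Base"); } // the fix: declare it virtual
};

struct Derived : Base {
    ~Derived() { std::puts("~Derived"); }      // never runs via the Base* below
};

int main() {
    Base* p = new Derived();
    delete p;   // undefined behavior with a non-virtual destructor; in practice
                // only ~Base runs and whatever Derived owns is leaked
}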

We had an interesting case where two shared libraries referenced an object. We changed the definition to add child objects which needed freeing. We recompiled the first shared library which contained the object definition.

HOWEVER, the second shared library was not recompiled. This means that it did not know of the newly added virtual object definition. Deletes invoked from the second shared library simply called free, and did not invoke the virtual destructor chain. The result was a nasty memory leak.


