Why Do We Even Need the delete[] Operator?

Why do we even need the delete[] operator?

It's so that the destructors of the individual elements will be called. Yes, for arrays of PODs, there isn't much of a difference, but in C++, you can have arrays of objects with non-trivial destructors.
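A minimal illustration (the class name is mine, not from the answer): a type with a non-trivial destructor, where delete[] runs the destructor for every element of the array.

#include <iostream>

struct Logger {
    ~Logger() { std::cout << "destructor called\n"; }   // non-trivial destructor
};

int main() {
    Logger* arr = new Logger[3];
    delete[] arr;    // runs ~Logger() three times, then releases the block
    // delete arr;   // wrong form here: undefined behavior
}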

Now, your question is: why not make new and delete behave like new[] and delete[] and get rid of new[] and delete[]? I would go back to Stroustrup's "Design and Evolution" book, where he said that if you don't use C++ features, you shouldn't have to pay for them (at run time, at least). The way it stands now, a new or delete will behave as efficiently as malloc and free. If delete had the delete[] meaning, there would be some extra overhead at run time (as James Curran pointed out).

delete vs delete[] operators in C++

The delete operator deallocates memory and calls the destructor for a single object created with new.

The delete [] operator deallocates memory and calls destructors for an array of objects created with new [].

Using delete on a pointer returned by new [] or delete [] on a pointer returned by new results in undefined behavior.

Why do we even need the delete operator? (Can't we just use delete[]?)

A single object is not an array of size 1. An array of size 1, created with new char[1], needs to record the number of objects that were allocated, so that delete[] knows how many objects to destroy.

Arguably, new could be implemented internally as new[1] (and delete implemented as delete[]) - that would be a correct implementation. However, having separate new and new[1] allows the optimization of not storing the object count for single objects (a very common case).

The C++ principle of not paying for what you don't use comes into play here. If we create an array of objects, we pay the memory and processing price of storing the count, but if we create a single object, we don't incur the overhead.
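A hedged sketch of what that overhead can look like in practice: overloading the class's operator new[] lets you observe the size actually requested, which on many implementations is a few bytes larger than 4 * sizeof(Widget) because the element count (the "cookie") is stored alongside the array. The class and numbers are illustrative, and the exact layout is implementation-specific.

#include <cstdio>
#include <cstdlib>
#include <new>

struct Widget {
    char data[16];
    ~Widget() {}   // non-trivial destructor: the implementation must remember the count
    static void* operator new[](std::size_t bytes) {
        std::printf("operator new[] asked for %zu bytes\n", bytes);  // often > 4 * 16
        return std::malloc(bytes);   // error handling omitted in this sketch
    }
    static void operator delete[](void* p) { std::free(p); }
};

int main() {
    Widget* w = new Widget[4];   // may request extra bytes for the count
    delete[] w;
}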

Why does the delete[] syntax exist in C++?

Objects in C++ often have destructors that need to run at the end of their lifetime. delete[] makes sure the destructor of each element of the array is called. But doing this has some runtime overhead, while delete does not. This is why there are two forms of delete expression: one for arrays, which pays the overhead, and one for single objects, which does not.

In order to only have one version, an implementation would need a mechanism for tracking extra information about every pointer. But one of the founding principles of C++ is that the user shouldn't be forced to pay a cost that they don't absolutely have to.

Always delete what you new and always delete[] what you new[]. But in modern C++, new and new[] are generally not used anymore. Use std::make_unique, std::make_shared, std::vector or other more expressive and safer alternatives.
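A minimal sketch of those alternatives (C++14 or later for std::make_unique):

#include <memory>
#include <vector>

int main() {
    auto single = std::make_unique<int>(42);    // instead of new int / delete
    auto array  = std::make_unique<int[]>(10);  // instead of new int[10] / delete[]
    std::vector<int> vec(10);                   // usually the best choice for arrays
    // no explicit delete or delete[] anywhere; cleanup happens automatically
}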

The difference between delete and delete[] in C++

You use delete[] when you new'ed an array type, and delete when you didn't. Examples:

typedef int int_array[10];   // int_array is "array of 10 int"

int* a = new int;            // single object
int* b = new int[10];        // array
int* c = new int_array;      // also an array, despite the missing []

delete a;                    // single-object form
delete[] b;                  // array form
delete[] c; // this is a must! even though the new-expression didn't spell out [].

Why is [] used in delete (delete[]) to free a dynamically allocated array?

Scott Meyers says in his Effective C++ book, Item 5: "Use the same form in corresponding uses of new and delete."

The big question for delete is this: how many objects reside in the memory being deleted? The answer to that determines how many destructors must be called.

Does the pointer being deleted point to a single object or to an array of objects? The only way for delete to know is for you to tell it. If you don't use brackets in your use of delete, delete assumes a single object is pointed to.

Also, the memory allocator might allocate more space than required to store your objects, and in that case dividing the size of the memory block returned by the size of each object won't work.

Depending on the platform, the _msize (Windows), malloc_usable_size (Linux) or malloc_size (OS X) functions will tell you the real length of the block that was allocated. This information can be exploited when designing growing containers.
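For example, on Linux/glibc (a hedged sketch; malloc_usable_size is non-portable and the value it reports is allocator-dependent):

#include <cstdio>
#include <cstdlib>
#include <malloc.h>   // glibc header declaring malloc_usable_size

int main() {
    void* p = std::malloc(100);
    if (p) {
        // Often prints a value >= 100: the allocator may round the block up.
        std::printf("usable size: %zu\n", malloc_usable_size(p));
        std::free(p);
    }
}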

Another reason why it won't work is that Foo* foo = new Foo[10] calls operator new[] to allocate the memory. Then delete[] foo; calls operator delete[] to deallocate it. As those operators can be overloaded, you have to adhere to the convention; otherwise delete foo; calls operator delete, which may have an implementation incompatible with operator delete[]. It's a matter of semantics, not just of keeping track of the number of allocated objects in order to later issue the right number of destructor calls.
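A hedged illustration of that point (the "pools" are hypothetical): once a class overloads the allocation functions, the single-object and array forms can be routed to entirely different allocators, so the matching deallocation function must be used.

#include <cstdio>
#include <cstdlib>
#include <new>

struct Foo {
    static void* operator new(std::size_t n)   { std::puts("pool A alloc"); return std::malloc(n); }
    static void  operator delete(void* p)      { std::puts("pool A free");  std::free(p); }
    static void* operator new[](std::size_t n) { std::puts("pool B alloc"); return std::malloc(n); }
    static void  operator delete[](void* p)    { std::puts("pool B free");  std::free(p); }
};

int main() {
    Foo* many = new Foo[10];   // allocated through operator new[] ("pool B")
    delete[] many;             // must be released through operator delete[] ("pool B")
    // delete many;            // would hand pool-B memory to operator delete: undefined behavior
}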

See also:

[16.14] After p = new Fred[n], how does the compiler know there are n objects to be destructed during delete[] p?

Short answer: Magic.

Long answer: The run-time system stores the number of objects, n, somewhere where it can be retrieved if you only know the pointer, p. There are two popular techniques that do this. Both these techniques are in use by commercial-grade compilers, both have tradeoffs, and neither is perfect. These techniques are:

  • Over-allocate the array and put n just to the left of the first Fred object (a rough sketch of this technique follows the list).
  • Use an associative array with p as the key and n as the value.
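Here is a purely illustrative sketch of the first technique (the over-allocation "cookie"); real implementations differ in alignment, layout, and error handling, and the helper names are my own:

#include <cstdlib>
#include <new>

// Illustrative only: reserve room for a std::size_t "cookie" in front of the
// elements, store n there, and read it back at destruction time.
template <typename T>
T* sketch_new_array(std::size_t n) {
    void* raw = std::malloc(sizeof(std::size_t) + n * sizeof(T));
    *static_cast<std::size_t*>(raw) = n;                         // store the count
    T* first = reinterpret_cast<T*>(static_cast<std::size_t*>(raw) + 1);
    for (std::size_t i = 0; i < n; ++i) new (first + i) T();     // construct each element
    return first;
}

template <typename T>
void sketch_delete_array(T* first) {
    std::size_t* cookie = reinterpret_cast<std::size_t*>(first) - 1;
    for (std::size_t i = *cookie; i > 0; --i) first[i - 1].~T(); // destroy in reverse order
    std::free(cookie);                                           // free the whole block
}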

EDIT: after having read @AndreyT's comments, I dug into my copy of Stroustrup's "The Design and Evolution of C++" and excerpted the following:

How do we ensure that an array is correctly deleted? In particular, how do we ensure that the destructor is called for all elements of an array?

...

Plain delete isn't required to handle both individual objects and arrays. This avoids complicating the common case of allocating and deallocating individual objects. It also avoids encumbering individual objects with information necessary for array deallocation.

An intermediate version of delete[] required the programmer to specify the number of elements of the array.

...

That proved too error prone, so the burden of keeping track of the number of elements was placed on the implementation instead.

As @Marcus mentioned, the rationale may have been "you don't pay for what you don't use".


EDIT2:

In "The C++ Programming Language, 3rd edition", §10.4.7, Bjarne Stroustrup writes:

Exactly how arrays and individual objects are allocated is implementation-dependent. Therefore, different implementations will react differently to incorrect uses of the delete and delete[] operators. In simple and uninteresting cases like the previous one, a compiler can detect the problem, but generally something nasty will happen at run time.

The special destruction operator for arrays, delete[], isn’t logically necessary. However, suppose the implementation of the free store had been required to hold sufficient information for every object to tell if it was an individual or an array. The user could have been relieved of a burden, but that obligation would have imposed significant time and space overheads on some C++ implementations.

Time complexity of delete[] operator

::operator delete[] is documented on cplusplus.com (which is sometimes frowned upon) as:

operator delete[] can be called explicitly as a regular function, but in C++, delete[] is an operator with a very specific behavior: An expression with the delete[] operator, first calls the appropriate destructors for each element in the array (if these are of a class type), and then calls function operator delete[] (i.e., this function) to release the storage.

so the destructor is called n times (once for each element), and then the memory freeing "function" is called once.

Notice that each destruction might take a different time (or even have a different complexity) than the others. Generally, most destructions are quick and have the same complexity, but that won't be the case if each destroyed element owns, say, a complex tree, node, or graph.

For primitive types like int, the fictitious destructor is a no-op, and the compiler will likely optimize the destruction loop away entirely (at least when optimization is enabled).

You should check the real C++11 standard, or at least its late n3337 working draft, which says (thanks to Matteo Italia for pointing that out in a comment), in §5.3.5/6, page 110 of n3337:

If the value of the operand of the delete-expression is not a null pointer value, the delete-expression will invoke the destructor (if any) for the object or the elements of the array being deleted. In the case of an array, the elements will be destroyed in order of decreasing address (that is, in reverse order of the completion of their constructor; see 12.6.2).
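A small demonstration of that clause (the class is mine, not from the standard): the elements are destroyed in reverse order of construction.

#include <iostream>

struct Tracer {
    int id = 0;
    ~Tracer() { std::cout << "destroying element " << id << '\n'; }
};

int main() {
    Tracer* t = new Tracer[3];
    for (int i = 0; i < 3; ++i) t[i].id = i;
    delete[] t;   // prints 2, 1, 0: destruction runs from the last element back to the first
}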

If you use (and trust enough) GCC 4.8 or better, you could use the g++ compiler with the -fdump-tree-phiopt or -fdump-tree-all options (beware, they dump a lot of files!), or the MELT plugin, to inspect the intermediate GIMPLE representation of some example. Or use -S -fverbose-asm to get the assembler code. You'll also want to add optimization flags like -O1 or -O2.

NB: IMHO, cppreference.com is also an interesting site about C++; see its page about delete (as commented by Cubbi).

Can I use `operator delete[]` for a single element array allocation?

You must use delete[] and not delete. The compiler is not allowed to change new int[1] to new int.

(As int is a POD type, it's quite possible that new int and new int[1] do exactly the same thing under the covers, but if this is the case then delete[] on an int* and delete on an int* will also do exactly the same thing.)

ISO/IEC 14882:2011 5.3.5 [expr.delete] / 2:

In the first alternative (delete object), the value of the operand of delete may be a null pointer value, a pointer to a non-array object created by a previous new-expression, or a pointer to a subobject (1.8) representing a base class of such an object (Clause 10). If not, the behavior is undefined.

As int[1] is an array object, if you try and delete it with delete and not delete[], the behavior is undefined.

Are 'new' and 'delete' getting deprecated in C++?

Well, for starters, new/delete are not getting deprecated.

In your specific case, they're not the only solution, though. What you pick depends on what got hidden under your "do something with array" comment.

Your second example uses a non-standard VLA extension which tries to fit the array on the stack. This has certain limitations, namely limited size and the inability to use the memory after the array goes out of scope. You can't move it out; it will "disappear" after the stack unwinds.

So if your only goal is to do a local computation and then throw the data away, it might actually work fine. However, a more robust approach is to allocate the memory dynamically, preferably with std::vector. That way you get the ability to create space for exactly as many elements as you need, based on a runtime value (which is what we're going for all along), but it will also clean itself up nicely, and you can move it out of this scope if you want to keep the memory in use for later.

Circling back to the beginning, vector will probably use new a few layers deeper, but you shouldn't be concerned with that, as the interface it presents is much superior. In that sense, using new and delete can be considered discouraged.
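For completeness, a minimal sketch of the std::vector approach recommended above (the runtime size n is an assumption standing in for whatever value your program computes):

#include <cstddef>
#include <iostream>
#include <numeric>
#include <vector>

int main() {
    std::size_t n = 42;              // runtime value, e.g. read from input
    std::vector<int> data(n, 0);     // sized at run time, storage on the heap
    std::iota(data.begin(), data.end(), 1);
    std::cout << std::accumulate(data.begin(), data.end(), 0) << '\n';
    // no delete[] needed: the vector releases its storage when it goes out of scope,
    // and it can be returned or moved out if the data is needed later
}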


