Are Memory Leaks Ever OK?

Are memory leaks ever ok?

No.

As professionals, the question we should be asking ourselves is not, "Is it ever OK to do this?" but rather, "Is there ever a good reason to do this?" And "hunting down that memory leak is a pain" isn't a good reason.

I like to keep things simple. And the simple rule is that my program should have no memory leaks.

That makes my life simple, too. If I detect a memory leak, I eliminate it, rather than run through some elaborate decision tree structure to determine whether it's an "acceptable" memory leak.

It's similar to compiler warnings – will the warning be fatal to my particular application? Maybe not.

But it's ultimately a matter of professional discipline. Tolerating compiler warnings and tolerating memory leaks are bad habits that will eventually bite me in the rear.

To take things to an extreme, would it ever be acceptable for a surgeon to leave some piece of operating equipment inside a patient?

Although it is possible that a circumstance could arise where the cost/risk of removing that piece of equipment exceeds the cost/risk of leaving it in, and there could be circumstances where it was harmless, if I saw this question posted on SurgeonOverflow.com and saw any answer other than "no," it would seriously undermine my confidence in the medical profession.

If a third-party library forced this situation on me, it would lead me to seriously suspect the overall quality of the library in question. It would be as if I test-drove a car and found a couple of loose washers and nuts in one of the cupholders – it may not be a big deal in itself, but it betrays a lack of commitment to quality, so I would consider alternatives.

Do Small Memory Leaks Matter Anymore?

This is completely a personal decision.

However, suppose the question really being asked is: if I know I have a program that leaks, say, 40 bytes every time it is run, does that matter?

In this case, I'd say no. The memory will be reclaimed when the program terminates, so if it's only leaking 40 bytes one time during the operation of an executable, that's practically meaningless.

If, however, it's leaking 40 bytes repeatedly, each time you do some operation, that might be more meaningful. The longer the application runs, the more significant that becomes.

I would say, though, that fixing memory leaks is often worthwhile, even if the leak is a "meaningless" one. Memory leaks are typically indicators of some underlying problem, so understanding and correcting the leak will often make your program more reliable over time.
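
For a concrete picture, here's a minimal C++ sketch of the two cases; the functions initBanner and handleRequest are made up for illustration. The first allocation is lost once and reclaimed at exit; the second is lost again on every call.

#include <cstring>

// One-time leak: 40 bytes allocated once at startup and never freed.
// The OS reclaims it at exit, so it is practically harmless.
char* g_banner = nullptr;

void initBanner()
{
    g_banner = new char[40];          // never deleted
    std::strcpy(g_banner, "started");
}

// Per-operation leak: 40 bytes lost on every call. In a long-running
// server this grows without bound and eventually matters.
void handleRequest()
{
    char* scratch = new char[40];     // leaked on each call
    std::strcpy(scratch, "request");
    // ... do work ...
    // missing: delete[] scratch;
}

int main()
{
    initBanner();
    for (int i = 0; i < 1000; ++i)
        handleRequest();              // leaks 40,000 bytes over 1,000 calls
}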

What are the long-term consequences of memory leaks?

A memory leak can diminish the performance of the computer by reducing the amount of available memory. Eventually, in the worst case, too much of the available memory may become allocated and all or part of the system or device stops working correctly, the application fails, or the system slows down unacceptably due to thrashing.

Memory leaks may not be serious or even detectable by normal means. In modern operating systems, normal memory used by an application is released when the application terminates. This means that a memory leak in a program that only runs for a short time may not be noticed and is rarely serious.

Much more serious leaks include those:

  • where the program runs for an extended time and consumes additional memory over time, such as background tasks on servers, but especially in embedded devices which may be left running for many years
  • where new memory is allocated frequently for one-time tasks, such as when rendering the frames of a computer game or animated video
  • where the program can request memory — such as shared memory — that is not released, even when the program terminates (see the sketch after this list)
  • where memory is very limited, such as in an embedded system or portable device
  • where the leak occurs within the operating system or memory manager
  • where a system device driver causes the leak
  • where the program runs on an operating system that does not automatically release memory on program termination; on such systems, lost memory can often be reclaimed only by a reboot (AmigaOS is one example)
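
To illustrate the shared-memory case flagged in the list above, here's a minimal POSIX sketch; the object name "/example_shm" is made up for illustration. The named object outlives the process unless shm_unlink is called.

#include <fcntl.h>      // O_CREAT, O_RDWR
#include <sys/mman.h>   // shm_open, shm_unlink, mmap
#include <unistd.h>     // ftruncate, close
#include <cstdio>       // perror

int main()
{
    // Create a named shared-memory object.
    int fd = shm_open("/example_shm", O_CREAT | O_RDWR, 0600);
    if (fd == -1) { perror("shm_open"); return 1; }

    // Give it a size and map it into this process's address space.
    if (ftruncate(fd, 4096) == -1) { perror("ftruncate"); return 1; }
    void* p = mmap(nullptr, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (p == MAP_FAILED) { perror("mmap"); return 1; }

    // The mapping and descriptor go away when the process exits, but the
    // named object itself persists in the kernel until it is unlinked
    // (or the machine reboots).
    munmap(p, 4096);
    close(fd);
    // shm_unlink("/example_shm");  // forgetting this "leaks" past exit
    return 0;
}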

Do memory leaks have any effect after the program ends?

Any modern OS will reclaim all the memory allocated by any process after it terminates.

Each process has its own virtual address space in all common operating systems nowadays, so it's easy for the OS to reclaim all the memory.

Needless to say, it's bad practice to rely on the OS for that.

It essentially means such code can't be used in a program that runs for a long while.
Also, in real-world applications, destructors may do far more than just deallocate memory.

A network client may send a termination message, a database-related object may commit transactions, and a file-wrapping object may write some closure data to its file.
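
As a minimal sketch of that last point (the LogFile class and the file name are made up for illustration), here's a destructor whose closure work is silently skipped if the object is leaked:

#include <fstream>
#include <string>

// A file-wrapping class whose destructor does real work beyond freeing
// memory: it writes closing data and flushes the stream.
class LogFile
{
    std::ofstream out_;
public:
    explicit LogFile(const std::string& path) : out_(path)
    {
        out_ << "=== log opened ===\n";
    }
    ~LogFile()
    {
        // This closure work never happens if the object is leaked.
        out_ << "=== log closed cleanly ===\n";
        out_.flush();
    }
    void write(const std::string& line) { out_ << line << '\n'; }
};

int main()
{
    LogFile log("app.log");   // file name made up for illustration
    log.write("doing work");
} // ~LogFile runs here; a leaked LogFile would skip the footer and flush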

In other words: don't let your memory leak.

Is there an acceptable limit for memory leaks?

Be careful that Valgrind isn't picking up false positives in its measurements.

Many naive implementations of memory analyzers flag lost memory as a leak when it isn't really one.
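
A common case is memory that is deliberately kept until exit: Valgrind reports it as "still reachable" rather than "definitely lost", but a naive analyzer may count it as a leak anyway. A minimal sketch (the cache function is made up for illustration):

#include <string>

// A global cache that is intentionally never freed. At program exit the
// pointer is still live, so the block is "still reachable", not lost.
static std::string* g_cache = nullptr;

std::string& cache()
{
    if (!g_cache)
        g_cache = new std::string("warm");  // reclaimed by the OS at exit
    return *g_cache;
}

int main()
{
    return cache().empty() ? 1 : 0;
}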

Maybe have a read of some of the papers in the external links section of the Wikipedia article on Purify. I know that the documentation that comes with Purify describes several scenarios where you get false positives when trying to detect memory leaks and then goes on to describe the techniques Purify uses to get around the issues.

BTW I'm not affiliated with IBM in any way. I've just used Purify extensively and will vouch for its effectiveness.

HTH.

cheers,

Rob

Is the following code prone to memory leaks?

As @Mahmoud Al-Qudsi mentioned, anything you new must also be deleted; otherwise it will be leaked.

In most situations you do not want to use delete directly; rather, you want to use a smart pointer to auto-delete the object. This is because in situations with exceptions you could again leak memory, while a smart pointer (via RAII) will guarantee that the object is deleted and thus the destructor is called.

It is important that the destructor is called (especially in this case). If you do not call the destructor, there is a potential that not everything in the stream will be flushed to the underlying file.

#include <fstream>
#include <iostream>
#include <memory>

void doStuff()
{
    bool bDump;

    std::cout << "Dump to file? (1 = yes, 0 = no): ";
    std::cin >> bDump;

    // Smart pointer to own the dynamically allocated stream, if any.
    std::unique_ptr<std::ofstream> osPtr;

    if (bDump)
    {
        // Create the file stream only when needed.
        osPtr = std::make_unique<std::ofstream>("dump.txt");
    }

    // A reference to whichever stream we should log to.
    std::ostream& log = bDump ? *osPtr : std::cout;

    log << "hello";
} // The smart pointer deletes the ofstream here (if it exists), so the
  // destructor runs and any buffered output is flushed to the file.
  // This is guaranteed even if an exception propagates out of the function.
