Is It Abusive to Use IDisposable and "Using" as a Means for Getting "Scoped Behavior" for Exception Safety

Is it abusive to use IDisposable and using as a means for getting scoped behavior for exception safety?

I don't think so, necessarily. IDisposable is technically meant for things that hold unmanaged resources, but the using statement is just a neat way of implementing a common pattern: try .. finally { Dispose() }.

A purist would argue 'yes - it's abusive', and in the purist sense it is; but most of us do not code from a purist perspective, but from a semi-artistic one. Using the 'using' construct in this way is quite artistic indeed, in my opinion.

You should probably stick another interface on top of IDisposable to push it a bit further away, explaining to other developers why that interface implies IDisposable.
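For example, a minimal sketch of that idea (the interface and class names here are hypothetical, not an established API):

```csharp
using System;

// Hypothetical marker interface: extending IDisposable documents that
// Dispose ends a scope rather than releasing an unmanaged resource.
public interface IScopeGuard : IDisposable
{
}

// Example implementation: runs an arbitrary action when the scope exits.
public sealed class ScopeGuard : IScopeGuard
{
    private readonly Action _onExit;

    public ScopeGuard(Action onExit) => _onExit = onExit;

    public void Dispose() => _onExit();
}
```

With this in place, `using (new ScopeGuard(() => RestoreState())) { ... }` reads as "scoped behavior" at the call site rather than "resource cleanup".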

There are lots of other alternatives to doing this but, ultimately, I can't think of any that will be as neat as this, so go for it!

Take 2: Is it abusive to use IDisposable and “using” as a means for getting “scoped behavior”?

The real meaning of IDisposable is that an object knows of something, somewhere which has been put into a state that should be cleaned up, and it has the information and impetus necessary to perform such cleanup. Although the most common "states" associated with IDisposable are things like files being open, unmanaged graphic objects being allocated, etc. those are only examples of uses, and not a definition of "proper" use.

The biggest issue to consider when using IDisposable and using for scoped behavior is that there is no way for the Dispose method to distinguish scenarios where an exception is thrown from a using block from those where it exits normally. This is unfortunate, since there are many situations where it would be useful to have scoped behavior which was guaranteed to have one of two exit paths depending upon whether the exit was normal or abnormal.

Consider, for example, a reader-writer lock object with a method that returns an IDisposable "token" when the lock is acquired. It would be nice to say:

using (var writeToken = myLock.AcquireForWrite())
{
    // ... code to execute while holding write lock
}

If one were to manually code the acquisition and release of the lock without a try/catch or try/finally block, an exception thrown while the lock was held would cause any code that was waiting on the lock to wait forever. That is a bad thing. Employing a using block as shown above will cause the lock to be released when the block exits, whether normally or via exception. Unfortunately, that may also be a bad thing.

If an unexpected exception is thrown while a write-lock is held, the safest course of action would be to invalidate the lock so that any present or future attempt to acquire it will throw an immediate exception. If the program cannot usefully proceed without the locked resource being usable, such behavior would cause it to shut down quickly. If it can proceed, e.g. by switching to some alternate resource, invalidating the resource will allow it to get on with that much more effectively than would leaving the lock uselessly acquired. Unfortunately, I don't know of any nice pattern to accomplish that. One could do something like:

using (var writeToken = myLock.AcquireForWrite())
{
    // ... code to execute while holding write lock
    writeToken.SignalSuccess();
}

and have the Dispose method invalidate the token if it's called before success has been signaled, but an accidental failure to signal success could cause the resource to become invalid without offering any indication of where or why that happened. Having the Dispose method throw an exception if code exits the using block normally without calling SignalSuccess might be good, except that throwing an exception when the block exits because of some other exception would destroy all information about that other exception, and there's no way Dispose can tell which case applies.
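A rough sketch of that SignalSuccess idea (all names here are illustrative, with a ReaderWriterLockSlim standing in for the underlying lock):

```csharp
using System;
using System.Threading;

// Hypothetical write token: Dispose marks the token as invalidated unless
// success was signaled before the using block exited. Note that Dispose
// cannot tell *why* the block exited; it can only observe whether
// SignalSuccess was reached.
public sealed class WriteToken : IDisposable
{
    private readonly ReaderWriterLockSlim _lock;
    private bool _succeeded;

    public bool Invalidated { get; private set; }

    public WriteToken(ReaderWriterLockSlim rwLock)
    {
        _lock = rwLock;
        _lock.EnterWriteLock();
    }

    public void SignalSuccess() => _succeeded = true;

    public void Dispose()
    {
        if (!_succeeded)
            Invalidated = true; // treat the guarded state as suspect
        _lock.ExitWriteLock();
    }
}
```

The weakness described above is visible in the code: forgetting the SignalSuccess call is indistinguishable from an exception unwinding the block.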

Given those considerations, I think the best bet is probably to use something like:

using (var lockToken = myLock.CreateToken())
{
    lockToken.AcquireWrite("Describe how object may be invalid if this code fails");
    // ... code to execute while holding write lock
    lockToken.ReleaseWrite();
}

If code exits without calling ReleaseWrite, other threads that try to acquire the lock will receive exceptions that include the indicated message. Failure to properly manually pair the AcquireWrite and ReleaseWrite will leave the locked object unusable, but not leave other code waiting for it to become usable. Note that an unbalanced AcquireRead would not have to invalidate the lock object, since code inside the read would never put the object into an invalid state.
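A sketch of how that CreateToken/AcquireWrite/ReleaseWrite shape might look, using a plain monitor as the underlying lock (every name here is illustrative, not a real API):

```csharp
using System;
using System.Threading;

// A lock that becomes permanently faulted if a writer exits its using
// block without calling ReleaseWrite; later acquirers get an exception
// carrying the message supplied by the failed writer.
public sealed class InvalidatableLock
{
    private readonly object _gate = new object();
    private string _faultMessage;

    public LockToken CreateToken() => new LockToken(this);

    public sealed class LockToken : IDisposable
    {
        private readonly InvalidatableLock _owner;
        private string _pendingFault;

        internal LockToken(InvalidatableLock owner) => _owner = owner;

        public void AcquireWrite(string faultMessage)
        {
            Monitor.Enter(_owner._gate);
            if (_owner._faultMessage != null)
            {
                Monitor.Exit(_owner._gate);
                throw new InvalidOperationException(_owner._faultMessage);
            }
            _pendingFault = faultMessage;
        }

        public void ReleaseWrite()
        {
            _pendingFault = null;
            Monitor.Exit(_owner._gate);
        }

        public void Dispose()
        {
            if (_pendingFault != null)
            {
                // Unbalanced exit: fault the lock, then let waiters proceed
                // (into the exception) rather than block forever.
                _owner._faultMessage = _pendingFault;
                Monitor.Exit(_owner._gate);
            }
        }
    }
}
```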

Is abusing IDisposable to benefit from using statements considered harmful?

So in asp.net MVC views, we see the following construct:

using (Html.BeginForm())
{
    // some form elements
}

An abuse? Microsoft says no (indirectly).

If you have a construct that requires something to happen once you're done with it, IDisposable can often work out quite nicely. I've done this more than once.

Abuse using and Dispose() for scope handling of not to be released objects?

This is not an abuse at all - that is a common scope-handling idiom of C#. For example, ADO.NET objects (connections, statements, query results) are commonly enclosed in using blocks, even though some of these objects get released back to their pools inside their Dispose methods:

using (var conn = new SqlConnection(dbConnectionString)) {
    // conn is visible inside this scope
    ...
} // conn gets released back to its connection pool

Using 'Using' for things other than resource disposal

There are certainly precedents for (ab)using the using statement in this way, for example FormExtensions.BeginForm in the ASP.NET MVC framework. This renders a closing </form> tag when it's disposed, and its main purpose is to enable a more terse syntax in MVC views. The Dispose method attempts to render the closing tag even if an exception is thrown, which is slightly odd: if an exception is thrown while rendering a form, you probably don't want to attempt to render the end tag.

Another example is the (now deprecated) NDC.Push method in the log4net framework, which returns an IDisposable whose purpose is to pop the context.

Some purists would say it's an abuse, I suggest you form your own judgement on a case-by-case basis.
Personally I don't see anything wrong with your example for rendering an hourglass cursor.

The discussion linked in a comment by @I4V has some interesting opinions - including arguments against this type of "abuse" from the ubiquitous Jon Skeet.

Is there a design pattern for section of code defined by a Disposable object?

What you describe is a common scoped-behavior pattern. It has one significant weakness, however, which is that there's no way for the code in the Dispose method to know whether it is being called because an exception occurred within the using block, or whether it's being called as a result of a non-exception exit. The using pattern is not usable in cases where code needs to take different action based upon whether an exception occurred. Worse, the relative difference in convenience between employing the using pattern and one which can distinguish between the "exception" and "non-exception" cases means that a lot of code which should distinguish those cases, doesn't.

Among other things, if the semantics of a transaction scope require that any transaction which is begun must be either committed or rolled back before code leaves the scope, it would be helpful if an attempt to leave the scope without performing a commit or rollback could trigger an exception. Unfortunately, if the scoping block can't determine whether the scope is being left because of an exception, it must generally select among three ugly alternatives:

  • Silently perform a rollback, thus providing no indication of improper usage in the event that code exits non-exceptionally without having committed or rolled back a transaction.

  • Throw an exception, thus overwriting any exception that may have been pending.

  • Include code to record improper usage, thus introducing a dependency upon whatever method of logging is used.

Having scope departure throw an exception if there isn't one pending, or throw an exception that wraps any pending exception within it, would be cleaner than any of the above alternatives, but isn't possible except with a rather clunky try/finally pattern.
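That clunky try/finally pattern can be sketched with a hypothetical stand-in transaction type (nothing here is a real framework API):

```csharp
using System;

// A minimal stand-in for a transaction object; the name and members are
// illustrative only.
public sealed class DemoTransaction
{
    public string State { get; private set; } = "open";

    public void Commit() => State = "committed";
    public void Rollback() => State = "rolled back";

    // The try/finally alternative: a flag records whether the body
    // completed, so the exit path can distinguish normal completion
    // from an exception without any help from Dispose.
    public static DemoTransaction RunScoped(Action<DemoTransaction> body)
    {
        var tx = new DemoTransaction();
        bool completed = false;
        try
        {
            body(tx);
            tx.Commit();
            completed = true;
        }
        finally
        {
            if (!completed)
                tx.Rollback(); // reached only on exception or early exit
        }
        return tx;
    }
}
```

It works, but every call site (or a wrapper like RunScoped) must carry the flag and the try/finally, which is exactly the boilerplate the using pattern was supposed to eliminate.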

What are the rules for when dispose() is required?


I understand the principle of dispose(), which is to release memory reserved for variables and/or resources.

You do not understand the purpose of dispose. It is not for releasing memory associated with variables.

What I don't understand is just exactly what requires a release, when to employ dispose().

Dispose anything that implements IDisposable when you are sure that you are done with it.

For example, we don't dispose variables like string, integer or booleans. But somewhere we cross 'a line' and the variables and/or resources we use need to be disposed. I don't understand where the line is.

The line is demarcated for you. When an object implements IDisposable, it should be disposed.

I note that variables are not things that are disposed at all. Objects are disposed. Objects are not variables and variables are not objects. Variables are storage locations for values.

Is there a single principle or a few broad principles to apply when knowing when to use dispose()?

A single principle: dispose when the object is disposable.

I don't feel like I understand the basics of knowing when to use dispose().

Dispose all objects that are disposable.

One comment I saw asked whether memory is released when a variable goes out of scope, and that got my attention: until I saw the response (no, it doesn't get released just because it goes out of scope), I would have thought that it does get released when it goes out of scope.

Be careful in your use of language. You are confusing scope and lifetime, and you are confusing variables with the contents of the variables.

First: the scope of a variable is the region of program text in which that variable may be referred to by name. The lifetime of a variable is the period of time during the execution of the program in which the variable is considered to be a root of the garbage collector. Scope is purely a compile-time concept, lifetime is purely a run-time concept.

The connection between scope and lifetime is that a local variable's lifetime often starts when control enters the scope of the variable and ends when it leaves. However, various things can change the lifetime of a local: being closed over, being in an iterator block, or being in an async method, for example. The jitter optimizer may also shorten or extend the life of a local.

Remember also that a variable is storage, and that it may refer to storage. When the lifetime of a local ends, the storage associated with the local might be reclaimed. But there is no guarantee whatsoever that the storage associated with the thing the local refers to will be reclaimed, at that time or ever.
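A small illustration of that distinction (the method name is made up for this example):

```csharp
using System;

static class LifetimeDemo
{
    // 'local' is a storage location; the byte[] it refers to is an object.
    // When Leak returns, the local's lifetime ends, but the array itself
    // survives, because the caller now holds a reference to it.
    public static byte[] Leak()
    {
        var local = new byte[1024];
        return local;
    }
}
```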

So that's why my question is "What determines when a dispose() is really necessary?"

Dispose is necessary when the object implements IDisposable. (There are a small number of objects that are disposable that need not be disposed. Tasks, for instance. But as a general rule, if it is disposable, dispose it.)

My question is less one of how and more one of when.

Only dispose a thing when you are done with it. Not before, and not later.

When should I dispose my objects in .NET?

Dispose objects when they implement IDisposable, and you are done using them.

How do I know if something falls into the category of what I must dispose?

When it is implementing IDisposable.

I just don't know what objects or things I create require that I am responsible for disposing.

The disposable ones.

Most experienced and professional developers know when something they've created needs to be disposed of. I don't understand how to know that.

They check to see if the object is disposable. If it is, they dispose it.

"The point of Dispose is to free unmanaged resources." OK, but my question is how do I know by looking at something that it is an unmanaged resource?

It implements IDisposable.

I don't understand how to use that type of explanation to categorize what I need to dispose() and what I don't. There's all kinds of stuff in the .net framework; how do I separate out things that require I dispose() of them? What do I look at to tell me I'm responsible for it?

Check to see if it is IDisposable.

After that, the answer goes on to speak at great length about how to dispose(), but I'm still stuck back at what needs to be disposed.

Anything that implements IDisposable needs to be disposed.

my question is about when, meaning what needs to be disposed(), as opposed to when I'm done with it. I know when I'm done with things, I just don't know which things I'm responsible for when I'm done with them.

The things that implement IDisposable.

I was hoping there is a general principle for what must be disposed rather than a long list of specific items which would not be particularly useful for people like me who are looking for simple guidelines.

The simple guideline is that you should dispose disposable things.

Again, I get it that memory release is important, and also that a lot of experience and expertise goes into learning why and how, but I'm still left struggling to understand what needs to be disposed. Once I understand what I have to dispose(), then I can begin the struggle to learn how to do it.

Dispose things that implement IDisposable by calling Dispose().

So is this still a bad question?

It is a very repetitive question.

Your patience is a kindness.

Thanks for taking this somewhat silly answer in the humour in which it was intended!

WRT scope != lifetime & variables != objects, very helpful.

These are very commonly confused, and most of the time, it makes little difference. But I find that often people who are struggling to understand a concept are not at all well served by vagueness and imprecision.

In VS is it so simple as looking in Object Browser / Intellisense to see if the object includes Dispose()?

The vast majority of the time, yes.

There are some obscure corner cases. As I already mentioned, the received wisdom from the TPL team is that disposing Task objects is not only unnecessary, but can be counterproductive.

There are also some types that implement IDisposable, but use the "explicit interface implementation" trick to make the "Dispose" method only accessible by casting to IDisposable. In the majority of these cases there is a synonym for Dispose on the object itself, typically called "Close" or some such thing. I don't much like this pattern, but some people use it.

For those objects, the using block will still work. If for some reason you want to explicitly dispose such objects without using using then either (1) call the "Close" method, or whatever it is called, or (2) cast to IDisposable and dispose it.
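A minimal sketch of that explicit-implementation pattern (the class name is invented for illustration):

```csharp
using System;

// Dispose is hidden behind a cast to IDisposable; a public Close method
// is the synonym callers are expected to use.
public sealed class Channel : IDisposable
{
    public bool IsClosed { get; private set; }

    // The public synonym.
    public void Close() => IsClosed = true;

    // Only reachable by casting to IDisposable; the using statement
    // still finds it.
    void IDisposable.Dispose() => Close();
}
```

Both `using (var ch = new Channel()) { ... }` and `((IDisposable)ch).Dispose()` end up in Close; calling `ch.Dispose()` directly does not compile.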

The general wisdom is: if the object is disposable then it does not hurt to dispose it, and it is a good practice to do so when you're done with it.

The reason being: disposable objects often represent a scarce shared resource. A file, for example, might be opened in a mode that denies the rights of other processes to access that file while you have it opened. It's the polite thing to do to ensure that the file is closed as soon as you're done with it. If one process wanted to use a file, odds are pretty good another one will soon.

Or a disposable might represent something like a graphics object. The operating system will stop giving out new graphics objects if there are more than ten thousand active in a process, so you've got to let them go when you're done with them.

WRT implementing IDisposable @Brian's comment suggests in "normal" coding I likely don't need to. So would I only do that if my class pulled in something unmanaged?

Good question. There are two scenarios in which you should implement IDisposable.

(1) The common scenario: you are writing an object that holds onto another IDisposable object for a long time, and the lifetime of the "inner" object is the same as the lifetime of the "outer" object.

For example: you are implementing a logger class that opens a log file and keeps it open until the log is closed. Now you have a class that is holding onto a disposable, and so it itself should also be disposable.

I note that there is no need in this case for the "outer" object to be finalizable. Just disposable. If for some reason the dispose is never called on the outer object, the finalizer of the inner object will take care of finalization.
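A sketch of that common scenario, assuming a hypothetical logger that owns a StreamWriter for its whole lifetime:

```csharp
using System;
using System.IO;

// Scenario (1): an outer object holds an inner disposable for as long as
// it lives, so the outer object should itself be disposable. No finalizer
// is needed here; if Dispose is never called, the StreamWriter's own
// cleanup machinery is the backstop.
public sealed class FileLogger : IDisposable
{
    private readonly StreamWriter _writer;

    public FileLogger(string path) => _writer = new StreamWriter(path);

    public void Log(string message) => _writer.WriteLine(message);

    // Disposing the outer object disposes the inner one.
    public void Dispose() => _writer.Dispose();
}
```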

(2) the rare scenario: you are implementing a new class which asks the operating system or other external entity for a resource that must be aggressively cleaned up, and the lifetime of that resource is the same as the lifetime of the object holding onto it.

In this exceedingly rare case you should first, ask yourself if there is any way to avoid it. This is a bad situation to be in for a beginner-to-intermediate programmer. You really need to understand how the CLR interacts with unmanaged code in order to get this stuff solid.

If you cannot avoid it, you should prefer not to attempt to implement the disposal and finalization logic yourself, particularly if the unmanaged object is represented by a Windows handle. There should already be wrappers around most of the OS services represented by handles, but if there is not one, what you want to do is carefully study the relationship between IntPtr, SafeHandle and HandleRef (see "IntPtr, SafeHandle and HandleRef - Explained").

If you really truly do need to write the disposal logic for an unmanaged, non-handle-based resource, and the resource requires backstopping the disposal with finalization, then you have a significant engineering challenge.

The standard dispose pattern code may look simple but there are real subtleties to writing correct finalization logic that is robust in the face of error conditions. Remember, a finalizer runs on a different thread and can run on that thread concurrent with the constructor in thread abort scenarios. Writing threadsafe logic which cleans up an object while it is still being constructed on another thread can be extraordinarily difficult and I recommend against trying.

For more on the challenges of writing finalizers, see my series of articles on the subject: http://ericlippert.com/2015/05/18/when-everything-you-know-is-wrong-part-one/

A question you did not ask but I will answer anyways:

Are there scenarios in which I should not be implementing IDisposable?

Yes. Many people implement IDisposable any time they wish to have a coding pattern that has the semantics of:

  • Make a change in the world
  • Do stuff in the new world
  • Revert the change

So, for example, "impersonate an administrator, do some admin tasks, revert to normal user". Or "start handling an event, do stuff when the event happens, stop handling the event". Or "create an in-memory error tracker, do some stuff that might make errors, stop tracking errors". And so on. You get the general pattern.

This is a poor fit for the disposable pattern, but that doesn't stop people from writing up classes that represent no unmanaged resource whatsoever, but still implement IDisposable as though they did.

This opinion puts me in a minority; lots of people have no problem whatsoever with this abuse of the mechanism. But when I see a disposable I think "the author of this class wishes me to be polite and clean up after myself when I am good and ready." But the actual contract of the class is often "you must dispose this at a particular point in the program and if you do not then the rest of the program logic will be wrong until you do". That is not the contract I expect to have to implement when I see a disposable. I expect that I have to make a good-faith effort to clean up a resource at my convenience.

Are IDisposable objects meant to be created and disposed just once?

@Jon Skeet has answered the question really well, but let me chip in with a comment that I feel should be an answer by its own.

It is quite common to use a using block to temporarily acquire some resource, or to enter some scoped code you want a clean exit from. I do that all the time, particularly in my business logic controllers, where I have a system that postpones change-events until a block of code has executed, so that side-effects don't fire multiple times, or before I'm ready for them.

To make the code more obvious to the programmer who uses it, consider returning a temporary value, rather than the resource-holding object itself, from a method whose name tells the programmer what it is doing, i.e. acquiring some resource temporarily.

Let me show an example.

Instead of this:

using (node) { ... }

you do this:

using (node.ResourceScope()) { ... }

Thus, you're not actually disposing of anything more than once, since ResourceScope will return a new value you dispose of, and the underlying node will be left as is.

Here's an example implementation (unverified, typing from memory):

public class Node
{
    private Resource _Resource;

    public void AcquireResource()
    {
        if (_Resource == null)
            _Resource = InternalAcquireResource();
    }

    public void ReleaseResource()
    {
        if (_Resource != null)
        {
            InternalReleaseResource();
            _Resource = null;
        }
    }

    public ResourceScopeValue ResourceScope()
    {
        if (_Resource == null)
            return new ResourceScopeValue(this);
        else
            return new ResourceScopeValue(null);
    }

    public struct ResourceScopeValue : IDisposable
    {
        private Node _Node;

        internal ResourceScopeValue(Node node)
        {
            _Node = node;
            if (node != null)
                node.AcquireResource();
        }

        public void Dispose()
        {
            Node node = _Node;
            _Node = null;
            if (node != null)
                node.ReleaseResource();
        }
    }
}

This allows you to do this:

Node node = ...
using (node.ResourceScope()) // first call, acquires the resource
{
    CallSomeMethod(node);
} // and releases it here

...

private void CallSomeMethod(Node node)
{
    using (node.ResourceScope()) // resource already held, so not acquired twice
    {
    } // nor released here
}

The fact that I return a struct, and not IDisposable, means that you won't get boxing overhead; instead, the public Dispose method will just be called on exit from the using block.

Should I use IDisposable for purely managed resources?

This was discussed before. Your case goes further, though: you are also implementing a finalizer. That's fundamentally wrong; you are hiding a bug in the client code. Beware that finalizers run on a separate thread. Debugging a consistent deadlock is much easier than dealing with locks that disappear randomly and asynchronously.

Recommendation: follow the .NET Framework's lead and don't help too much. Microsoft abandoned the Synchronized method for the same reason.


