Is the Use of Dynamic Considered a Bad Practice

The short answer is YES, it is a bad practice to use dynamic.

Why?

The dynamic keyword relies on late binding, which means the type is checked only at execution time instead of at compile time. The user, instead of the programmer, is then left to discover the potential error. The error could be
a MissingMethodException, but it could also be an unintended call to an existing method with bad behavior.
Imagine a call to a method that ends up computing a wrong price or a wrong oxygen level.

Generally speaking, type checking helps achieve deterministic computation, so you should use it whenever you can. Here's a related question on the shortcomings of dynamic.

However, dynamic can be useful...

  • Interop with COM, as with Office
  • Interop with languages where dynamic types are part of the language (IronPython, IronRuby), as dynamic was introduced to help port them to .NET.
  • Replacing complex reflection code with low-ceremony, elegant code (though depending on the case, you should still profile both approaches to check which one is more appropriate in terms of performance and compile-time checking).

A code base evolves throughout the application life cycle, and even if dynamic seems OK now,
it sets a precedent which can imply an increase of dynamic keyword usage by your team. That can lead to increased
maintenance costs (if the above-stated signature evolves, you may notice it too late). Of course, you could
rely on unit tests, non-regression human tests and so on. But when you have to choose between quality enforced by
human discipline and quality checked automatically by the computer, choose the latter: it's less error-prone.

In your case...

In your case, it seems you can use the common inheritance scheme (the first one below, and the one you mention in your question),
as dynamic won't give you any additional benefit (it will just cost you more processing power and expose you to
the risk of future bugs).

It depends on whether you can change code of MyClass hierarchy and/or Caller.InvokeMethod.

Let's enumerate
the different possible alternatives to dynamic...

  • Compile-time-checked alternatives to a dynamic keyword method call:

The most common is an interface virtual call, instance.InvokeMethod(), with inheritance calling the right implementation:

public interface IInvoker { void InvokeMethod(); }
public abstract class MyBaseClass : IInvoker { public abstract void InvokeMethod(); }
public class MyAnotherClass : MyBaseClass { public override void InvokeMethod() { /* Do something */ } }
public class MyClass : MyBaseClass { public override void InvokeMethod() { /* Do something */ } }

Another, slightly less performant, option is using extension methods:

public static class InvokerEx
{
    public static void Invoke(this MyAnotherClass c) { /* Do something */ }
    public static void Invoke(this MyClass c) { /* Do something */ }
}

If there are several "visitors" of MyBaseClass hierarchy, you can use the Visitor pattern:

public interface IVisitor
{
    void Visit(MyAnotherClass c);
    void Visit(MyClass c);
}

public abstract class MyBaseClass { public abstract void Accept(IVisitor visitor); }
public class MyAnotherClass : MyBaseClass { public override void Accept(IVisitor visitor) { visitor.Visit(this); } }
public class MyClass : MyBaseClass { public override void Accept(IVisitor visitor) { visitor.Visit(this); } }

Other variants, though not very useful here (a generic method), are interesting for the performance comparison:

public void InvokeMethod<T>(T instance) where T : IInvoker { instance.InvokeMethod(); }

  • Dynamic alternatives to the dynamic keyword method call:

If you need to call a method not known at compile time, I've added below the different techniques you could use and updated the performance results:

MethodInfo.CreateDelegate

_method = typeof (T).GetMethod("InvokeMethod");
_func = (Func<T, int>)_method.CreateDelegate(typeof(Func<T, int>));

Note: the cast to Func<T, int> is needed to avoid calling DynamicInvoke (as it is generally slower).

DynamicMethod and ILGenerator.Emit

It actually builds the full call from scratch; it's the most flexible, but you must have some assembler background to fully appreciate it.

_dynamicMethod = new DynamicMethod("InvokeMethod", typeof (int), new []{typeof(T)}, GetType().Module);
ILGenerator il = _dynamicMethod.GetILGenerator();
il.Emit(OpCodes.Ldarg_0);
il.Emit(OpCodes.Call, _method);
il.Emit(OpCodes.Ret);
_func = (Func<T, int>) _dynamicMethod.CreateDelegate(typeof (Func<T, int>));

Linq Expression

It's similar to DynamicMethod; however, you don't control the generated IL. It's really much more readable, though.

_method = typeof (T).GetMethod("InvokeMethod");
var instanceParameter = Expression.Parameter(typeof (T), "instance");
var call = Expression.Call(instanceParameter, _method);
_func = Expression.Lambda<Func<T, int>>(call, instanceParameter).Compile();

MethodInfo.Invoke

Last but not least, the standard, well-known reflection call.
However, even though it's easy to use, don't use it, as it's a really bad performer (look at the benchmark results); prefer CreateDelegate, which is much faster.

_method = typeof (T).GetMethod("InvokeMethod");
return (int)_method.Invoke(instance, _emptyParameters);

Code of the benchmark test can be found on GitHub.

Benchmark of the different methods, to give an order of magnitude, for 10 million calls (.NET Framework 4.5):

For Class standard call:
Elapsed: 00:00:00.0532945
Call/ms: 188679
For MethodInfo.CreateDelegate call:
Elapsed: 00:00:00.1131495
Call/ms: 88495
For Keyword dynamic call:
Elapsed: 00:00:00.3805229
Call/ms: 26315
For DynamicMethod.Emit call:
Elapsed: 00:00:00.1152792
Call/ms: 86956
For Linq Expression call:
Elapsed: 00:00:00.3158967
Call/ms: 31746
For Extension Method call:
Elapsed: 00:00:00.0637817
Call/ms: 158730
For Generic Method call:
Elapsed: 00:00:00.0772658
Call/ms: 129870
For Interface virtual call:
Elapsed: 00:00:00.0778103
Call/ms: 129870
For MethodInfo Invoke call:
Elapsed: 00:00:05.3104416
Call/ms: 1883
For Visitor Accept/Visit call:
Elapsed: 00:00:00.1384779
Call/ms: 72463
== SUMMARY ==
Class standard call: 1
Extension Method call : 1,19
Generic Method call : 1,45
Interface virtual call : 1,45
MethodInfo.CreateDelegate call : 2,13
DynamicMethod.Emit call : 2,17
Visitor Accept/Visit call : 2,60
Linq Expression call : 5,94
Keyword dynamic call : 7,17
MethodInfo Invoke call : 100,19

EDIT:

So, compared to the Visitor pattern, dynamic dispatch is only about 3 times slower. That can be acceptable for some applications, as it can remove cumbersome code. The choice is always yours;
just keep in mind all the drawbacks.


EDIT: (as an answer to multiple dispatch benefit)

Using a trendy pattern name like 'multiple dispatch' and just stating that the code is cleaner because it uses less code doesn't make it an added benefit, IMHO.
If you want to write trendy code or don't care about type safety and production stability, there are already a lot of languages out there offering fully dynamic typing. I see the dynamic keyword's introduction in C# as a way to close the gap between the strongly typed language family and the not-so-strongly-typed other languages. It doesn't mean you should change the way you develop and throw type checking in the trash.

UPDATE: 2016/11/08 (.NET Framework 4.6.1)

Orders of magnitude remain the same (even if some of them have improved a bit):

Class standard call: 1
Extension Method call : 1,19
Interface virtual call : 1,46
Generic Method call : 1,54
DynamicMethod.Emit call : 2,07
MethodInfo.CreateDelegate call : 2,13
Visitor Accept/Visit call : 2,64
Linq Expression call : 5,55
Keyword dynamic call : 6,70
MethodInfo Invoke call : 102,96

Is this good or bad practice with dynamic memory allocation?

No, this is not clever at all. It takes a function that could be simpler and more general and reduces its capabilities for no reason, while at the same time creating an entry point into your program for difficult-to-debug bugs.

It's not clear to me exactly what Foo::new_num is meant to do (right now it doesn't compile), so I won't address your example directly, but consider the following two code samples:

void bad_function(int i, F * f)
{
f->doSomething(i);
delete f;
}

// ...

bad_function(0, new F(1, 2, 3));

versus

void good_function(int i, F & f)
{
f.doSomething(i);
}

// ...

good_function(0, F(1, 2, 3));

In both cases you allocate a new F object as part of the method call, and it's destroyed once you're done using it, so you get no advantage from using bad_function instead of good_function. However, there's a bunch of stuff you can do with good_function that's not so easy to do with bad_function, e.g.

void multi_function(const std::vector<int> & v, F & f)
{
for(int i : v) { good_function(i, f); }
}

Using the good_function version means you're also prevented by the language itself from doing various things you don't want to do, e.g.

F * f;  // never initialized
bad_function(0, f); // undefined behavior, resulting in a segfault if you're lucky

It's also just better software engineering, because it makes it a lot easier for people to guess what your function does from its signature. If I call a function whose purpose involves reading a number from the console and doing arithmetic, I absolutely do not expect it to delete the arguments I pass in, and after I've spent half an hour figuring out what's causing some obscure crash in an unrelated part of the code, I'm going to be furious with whoever wrote that function.


By the way, assuming that F::doSomething doesn't alter the value of the current instance of F in any way, it should be declared const:

class F
{
public:
    void doSomething(int i) const;
    // ...
};

and good_function should also take a const argument:

void good_function(int i, const F & f);

This lets anyone looking at the signature confidently deduce that the function won't do anything stupid like mess up the value of f that's passed into the function, because the compiler will prevent it. And that in turn lets them write code more quickly, because it means there's one less thing to worry about.

In fact, if I see a function with a signature like bad_function's and there's no obvious reason for it, I'd immediately worry that it's going to do something I don't want, and I'd probably read the function before using it.

Why is dynamic_cast considered bad practice in C++?

In this specific example, I would have added a virtual method (public, because protected members of Projectile can't be accessed through a Projectile& from within Bullet):

public:
    virtual int getKickassNess() = 0;

// Bullet:
int getKickassNess() override { return 10; }
// Rocket:
int getKickassNess() override { return 9001; } // over 9000!

void Bullet::OnCollision(Projectile& other)
{
    if (other.getKickassNess() > this->getKickassNess())
    {
        // wimp out and die
    }
}

IMO, Bullet and Rocket should not know that each other exist. Putting in these kinds of knows-about relationships often makes things difficult, and in this specific case I can imagine it causing messy cyclic include issues.

Is it good practice to cast objects to dynamic so the correct overloaded method is called?

So long as you don't abuse it, duck typing like this is part of the reason dynamic was added to the language.

Why is creating STL containers dynamically considered bad practice?

As a rule of thumb, you don't do this because the less you allocate on the heap, the less you risk leaking memory. :)

std::vector is useful also because it automatically manages the vector's memory in RAII fashion; by allocating it on the heap you now require an explicit deallocation (delete result) to avoid leaking its memory. Exceptions complicate matters further, since they can alter your return path and skip any delete you put on the way. (In C# you don't have such problems, because unreachable memory is simply reclaimed periodically by the garbage collector.)

If you want to return an STL container you have several choices:

  • just return it by value; in theory you incur a copy penalty because of the temporaries created in the process of returning result, but newer compilers should be able to elide the copy using NRVO1. There may also be std::vector implementations that perform a copy-on-write optimization, like many std::string implementations do, but I've never heard of one.

    On C++0x compilers, instead, move semantics should kick in, avoiding any copy.

  • Store the pointer to result in an ownership-transferring smart pointer like std::auto_ptr (or std::unique_ptr in C++0x), and change the return type of your function to std::auto_ptr<std::vector<Point> >; that way, your pointer is always encapsulated in a stack object that is automatically destroyed when the function exits (in any way) and destroys the vector if it's still owned by it. It's also completely clear who owns the returned object.

  • Make the result vector a parameter passed by reference by the caller, and fill that one instead of returning a new vector.

  • Hardcore STL option: provide your data as iterators instead; the client code would then use std::copy + std::back_inserter, or whatever, to store the data in whichever container it wants. Not seen much (it can be tricky to code right), but it's worth mentioning.



  1. As @Steve Jessop pointed out in the comments, NRVO works completely only if the return value is used directly to initialize a variable in the calling method; otherwise, the compiler can still elide the construction of the temporary return value, but the assignment operator for the variable to which the return value is assigned will still be called (see @Steve Jessop's comments for details).

Is dynamic_casting through inheritance hierarchy bad practice?

There are a lot of questions about dynamic_cast here on SO. I have read only a few, and I don't use this method often in my own code, so my answer reflects my opinion on the subject rather than my experience. Watch out.

(1.) Why is casting up/down considered bad design, besides the fact that it looks horrible?

(3.) Are there any rules of thumb for how I can avoid a design like the one I explained above?

When reading the Stroustrup C++ FAQ, IMO there is one central message: don't trust people who say never to use a certain tool. Rather, use the right tool for the task at hand.

Sometimes, however, two different tools can have a very similar purpose, and so it is here. You can basically recode any dynamic_cast functionality using virtual functions.

So when is dynamic_cast the right tool? (see also What is the proper use case for dynamic_cast?)

  • One possible situation is when you have a base class which you can't extend, but nevertheless need to write overload-like code for. With dynamic_cast you can do that non-invasively.

  • Another is when you want to keep an interface, i.e. a pure virtual base class, and don't want to implement the corresponding virtual function in every derived class.

Often, however, you'd rather rely on virtual functions -- if only for the reduced ugliness. Furthermore, it's safer: a dynamic_cast can fail and terminate your program, while a virtual function call (usually) won't.

Moreover, when it's implemented in terms of pure virtual functions, you cannot forget to update it in all the required places when you add a new derived class. A dynamic_cast, on the other hand, can easily be forgotten somewhere in the code.

Virtual function version of your example

Here is the example again:

Element* currentElement;

void addToCurrentElement(const C *c) {
    if (A *a = dynamic_cast<A*>(currentElement)) {
        // doSomething; if not, check if currentElement is actually a B
    }
}

To rewrite it, add to your base class the (possibly pure) virtual functions add(A*), add(B*) and add(C*), which you override in the derived classes:

struct A : public Element
{
    virtual void add(A* c) { /* do something for A */ }
    virtual void add(B* c) { /* do something for B */ }
    virtual void add(C* c) { /* do something for C */ }
};
// same for B, C, ...

and then call it in your function, or possibly write a more concise function template:

template<typename T>
void addToCurrentElement(T const* t)
{
currentElement->add(t);
}

I'd say this is the standard approach. As mentioned, the drawback could be that pure virtual functions require N*N overloads where N might suffice (say, if only A::add requires special treatment).

Other alternatives might use RTTI, the CRTP pattern, type erasure, and possibly more.

(2.) Is a dynamic_cast slow?

Going by what the majority of answers throughout the net state, then yes, a dynamic_cast seems to be slow; see here for example.
Yet, I don't have practical experience to support or refute this statement.


