Why Implicit Conversion Is Harmful in C++


Use explicit when you would prefer a compile error to a silent conversion.

explicit matters only when the constructor can be called with a single argument: either it has exactly one parameter, or it has several parameters where all except the first have default values.

You would want to use the explicit keyword whenever the programmer might construct an object by mistake, thinking it does something it actually does not.

Here's an example:

class MyString
{
public:
    MyString(int size)
    : size(size)
    {
    }

    //... other stuff

    int size;
};

With this class in place, the following code compiles:

int x = 3;
//...
//Lots of code
//...
//Pretend at this point the programmer forgot the type of x and thought it was a string
MyString s = x;

But the caller probably meant to store "3" inside the MyString variable, not the number 3. It is better to get a compile error so the user can call std::to_string (or some other conversion function) on x first.

Here is the new code, which will produce a compile error for the snippet above:

class MyString
{
public:
    explicit MyString(int size)
    : size(size)
    {
    }

    //... other stuff

    int size;
};

Compile errors are always better than bugs because they are immediately visible for you to correct.
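
As a quick illustration, here is a minimal sketch (using the explicit MyString from the snippet above) of what still compiles and what no longer does:

int main()
{
    MyString a(3);            // OK: direct initialization still works
    MyString b = MyString(3); // OK: the conversion is written out explicitly
    //MyString c = 3;         // error: copy-initialization from int no longer compiles
}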

Will implicit conversions lose information?

The first article talks about promotions, which are a specific kind of implicit conversion. There are other implicit conversions that aren't promotions. A promotion can't lose information, because you are always going to a wider type, i.e. a type in which every value representable by the type being promoted is also representable (short -> int, for example).

Other implicit conversions include signed-to-unsigned conversions, narrowing conversions, and floating-point-to-integer conversions. Unlike promotions, these conversions may lose information.
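
A minimal sketch of each of these, assuming a typical platform with 32-bit int (the result of the narrowing case is implementation-defined, so the comments describe typical behaviour only):

#include <iostream>

int main()
{
    int negative = -1;
    unsigned int u = negative;      // signed to unsigned: wraps around to a very large value
    double pi = 3.14159;
    int truncated = pi;             // floating point to integer: the fractional part is lost
    long long big = 10000000000LL;
    int narrowed = big;             // narrowing: the value does not fit in a 32-bit int
    std::cout << u << ' ' << truncated << ' ' << narrowed << '\n';
}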

Is implicit casting considered to be a bad concept?

First: the large number of implicit conversions in C++ is due to
historical reasons, and nothing else. I don't think anyone considers
all of them a good idea. On the other hand, there are many different
types of implicit conversions, and some of them are almost essential to
the language: you wouldn't like it if you needed an explicit conversion
to pass a MyType x; to a function taking a MyType const&; I'm pretty
sure that there is a consensus that const conversions adding const, like
this one, should be implicit.

With regards to conversions where there isn't a consensus:

  • Almost no one seems to have a problem with non-lossy conversions;
    things like int to long, or float to double. Most people also
    seem to accept conversions from integral types to floating point (e.g.
    int to double), although these can lose precision in some cases
    (int i = 123456789; float f = i;, for example; see the sketch after
    this list).

  • There was a proposal during the standardization of C++98 to deprecate
    narrowing conversions, like float to int. (The author of the
    proposal was Stroustrup; if you don't like such conversions, you're in
    good company.) It didn't pass; I don't know why exactly, but I suspect
    that it was a question of breaking too much from the traditions of C.
    In C++11, such conversions are forbidden in some newer constructs,
    like the new initialization sequences. So it sounds to me like there is
    a consensus that these implicit conversions aren't really a good idea,
    but that they can't be removed for fear of breaking code or maybe just
    breaking with the tradition in C. (I know that more than a few people
    don't like the fact that someString += 3.14159; is a legal statement,
    adding an ETX character to the end of the string.)

  • The original proposal for bool proposed deprecating all of the
    conversions of numeric and pointer types to bool. This was removed;
    it soon became apparent that the proposal wouldn't pass if it made
    things like if ( somePointer ) (as opposed to
    if ( somePointer != NULL )) illegal. There is still a large body of
    people (myself included) who consider such conversions "bad", and avoid
    them.
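
Two of the cases from the list above, sketched out; the rounded value in the first comment assumes IEEE-754 single-precision float:

#include <iostream>
#include <string>

int main()
{
    int i = 123456789;
    float f = i;                    // int to float: typically rounds to 123456792
    std::cout << static_cast<int>(f) << '\n';

    std::string someString = "pi = ";
    someString += 3.14159;          // legal: 3.14159 becomes char(3), an ETX control character
    std::cout << someString.size() << '\n';   // one (invisible) character longer than "pi = "
}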

Finally: a compiler is free to issue a warning for anything it feels
like. If the market insisted on warnings for such conversions, compilers
would implement them (probably as an option). I suspect that the
reason they don't is that the warnings have a bad reputation, due to
the initial implementations generating too many warnings. Integral
promotion leads to a number of narrowing conversions that no one wants
to eliminate:

char ch = '0' + v % 10;

for example, involves an int to char conversion (which is
narrowing); in C++11:

char ch{ '0' + v % 10 };

is illegal (but both VC++ and g++ accept it, g++ with a warning). I
suspect that to be usable, banning narrowing conversions would at least
have to make exceptions for cases where the wider type is itself due to
integral promotion, mixed type arithmetic and cases where the source
expression is a compile time constant which "fits" in the target type.
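
A short sketch of the C++11 list-initialization rules being described, assuming 32-bit int and 8-bit char; note that, as mentioned above, some compilers only warn rather than reject:

int main()
{
    char ok{ 100 };      // fine: a constant expression whose value fits in char
    //char bad{ 300 };   // ill-formed: a constant expression whose value does not fit
    int n = 100;
    //char alsoBad{ n }; // ill-formed: n is not a constant expression, so this counts as narrowing
    (void)ok;
    (void)n;
}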

How do you avoid implicit conversion from short to int during addition?

How do you avoid implicit conversion from short to int during addition?

You don't.

C (and likewise C++) has no arithmetic operations on integer types narrower than int and unsigned int; there is no + operator for type short.

Whenever an expression of type short is used as the operand of an arithmetic operator, it is implicitly converted to int.

For example:

short s = 1;
s = s + s;

In s + s, s is promoted from short to int and the addition is done in type int. The assignment then implicitly converts the result of the addition from int to short.

Some compilers might have an option to enable a warning for the narrowing conversion from int to short, but there's no way to avoid it.
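
If you want to check the promotion rather than take it on faith, here is a small sketch using decltype (C++11):

#include <type_traits>

int main()
{
    short s = 1;
    // The type of s + s is int, not short: both operands are promoted before the addition.
    static_assert(std::is_same<decltype(s + s), int>::value,
                  "short operands are promoted to int");
    s = static_cast<short>(s + s);   // writing the cast makes the narrowing conversion explicit
    (void)s;
}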

Why is it bad to provide an implicit operator String(char[])?

Implicit conversions are a compiler feature. There's nothing in the CLI spec that permits them; all conversions in IL are explicit, even the simple ones like int to long and float to double. So it is up to the C# team, in your case, to make that syntax work.

The way the C# team thinks about that is well published. Every possible feature starts at -100 points and needs some serious motivation to get to +100 to justify the work involved with designing the feature, implementing it, documenting it and maintaining it. I can't speak for them, but I seriously doubt this one makes it past 0. The alternative is obvious and simple so it just isn't worth it.

No implicit conversion warnings when passing integer literals?

The compiler does not warn you because it knows at compile time that this conversion is safe, i.e., the original value and the converted value are identical. When you do:

vec[1.0F]

From the compiler's point of view, there is no change of value (no loss of precision) between 1.0F and 1, so the compiler does not warn you. If you try:

vec[1.2F]

...the compiler will warn you because, even though 1.2F will be converted to 1, there is a loss of precision.

If you use a value that is not known at compile time, e.g.:

float get_float();

vec[get_float()];

You will get a warning as expected because the compiler does not know the value of get_float() beforehand and thus cannot be sure that the conversion is safe.
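
Putting the three cases together; whether these warnings are enabled by default varies, so assume a conversion-warning flag such as -Wconversion on g++ or clang:

#include <vector>

float get_float();   // defined elsewhere; its value is not known at compile time

int demo(const std::vector<int>& vec)
{
    int a = vec[1.0F];          // no warning: 1.0F converts to index 1 exactly
    int b = vec[1.2F];          // warning: 1.2F is truncated to 1, so the value changes
    int c = vec[get_float()];   // warning: the value cannot be checked at compile time
    return a + b + c;
}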

Note that you will never get such a warning where a constant expression is expected (e.g., the 10 in std::array<int, 10>), because, by definition, constant expressions are known at compile time, so the compiler can tell whether the given value and the converted value differ.

Why does C++ forbid implicit conversion from void*?

The reason you cannot implicitly convert from void * is that doing so is type-unsafe and potentially dangerous. C++ tries a little harder than C in this respect to protect you, hence the difference in behavior between the two languages.

Consider the following example:

short s = 10;        // typically occupies 2 bytes in memory
void *p = &s;
long *l = p;         // a long typically occupies 8 bytes, so *l reads past the end of s
printf("%ld\n", *l);

A C compiler accepts the above code (and prints garbage) while a C++ compiler will reject it.

By casting "through" void *, we lose the type information of the original data, allowing us to treat what in reality is a short as a long.
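
For comparison, a minimal sketch of what C++ requires you to write instead; the cast still compiles, but the dangerous step is now visible in the source:

short s = 10;
void *p = &s;
//long *l = p;                      // error in C++: no implicit conversion from void *
long *l = static_cast<long *>(p);   // compiles, but dereferencing l is still undefined behaviour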


