C++ Floating Point to Integer Type Conversions

C convert floating point to int

my_int = (int)my_float;

As simple as that. In fact, if the destination variable is an int, the explicit cast isn't strictly needed: assigning a floating point value to it converts implicitly.
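
A minimal, self-contained sketch of both forms (the variable names are illustrative, not from the original question):

#include <stdio.h>

int main(void) {
    float my_float = 7.9f;

    int explicit_cast = (int)my_float;  /* explicit cast: truncates to 7 */
    int implicit = my_float;            /* assignment to an int converts implicitly, also 7 */

    printf("%d %d\n", explicit_cast, implicit);
    return 0;
}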

Type-Casting in C from Float to Int results in wildly different number, why?

A 4-byte integer can only hold values in the range -2,147,483,648 to 2,147,483,647. The result of 10^10 is 10,000,000,000, which exceeds that limit, so the conversion overflows and gives an unexpected result. That's all.
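
As an illustration, here is a small sketch (not from the original question) that tests the range before converting, since the out-of-range cast itself has no defined result:

#include <limits.h>
#include <stdio.h>

int main(void) {
    double value = 1e10;   /* 10,000,000,000: well beyond INT_MAX */

    /* Converting an out-of-range floating point value to int is not
       defined, so check the range first. */
    if (value >= INT_MIN && value <= INT_MAX)
        printf("fits: %d\n", (int)value);
    else
        printf("out of int range, not converting\n");   /* taken here */
    return 0;
}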

Implicit type conversion in C

No, in the case of the equality operator, the "usual arithmetic conversions" occur, which start off:

  • First, if the corresponding real type of either operand is long double, the other operand is converted, without change of type
    domain, to a type whose corresponding real type is long double.
  • Otherwise, if the corresponding real type of either operand is double, the other operand is converted, without change of type
    domain, to a type whose corresponding real type is double.
  • Otherwise, if the corresponding real type of either operand is float, the other operand is converted, without change of type
    domain, to a type whose corresponding real type is float.

This last case applies here: i_value is converted to float.

The reason that you can see an odd result from the comparison, despite this, is because of this caveat to the usual arithmetic conversions:

The values of floating operands and of the results of floating
expressions may be represented in greater precision and range than
that required by the type; the types are not changed thereby.

This is what is happening: the type of the converted i_value is still float, but in this expression your compiler is taking advantage of this latitude and representing it in greater precision than float. This is typical compiler behaviour when compiling for 387-compatible floating point, because the compiler leaves temporary values on the floating point stack, which stores floating point numbers in an 80-bit extended-precision format.

If your compiler is gcc, you can disable this additional precision by giving the -ffloat-store command-line option.
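
For illustration, a small sketch of the kind of comparison involved (the names i_value and f_value follow the question; whether the two branches differ depends on the compiler, target and flags such as -ffloat-store):

#include <stdio.h>

int main(void) {
    int i_value = 16777217;        /* 2^24 + 1: not exactly representable as float */
    float f_value = 16777217.0f;   /* the literal rounds to 16777216.0f */

    /* i_value is converted to float for the comparison. If the compiler
       keeps the converted value in an 80-bit x87 register, it may still
       hold 16777217 exactly and compare unequal to f_value. */
    if (i_value == f_value)
        printf("equal\n");
    else
        printf("not equal\n");
    return 0;
}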

Is conversion from float to int consistent across all platforms and processor architectures?

The short answer is "no".

Floating point representation is implementation defined, and the kinds of concern you mention between floating point types also arise when converting floating point values to other types.

A number of properties of int - including its size, the range of values it can represent, and its representation (e.g. the organisation of bits) - are also implementation defined.

The net effect is that some conversions from float to int will work reliably between implementations, and some won't. Values with a fractional part are truncated toward zero when converting float to int. A float can also represent a larger range of values than an int, and converting an out-of-range value gives undefined behaviour.

Rather than trying to use floating point literals to initialise your variables, consider using string literals (wrapping the values in double quotes). The tradeoff is the overhead of parsing the string at run time.
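
A rough sketch of that approach, using strtod to parse the text at run time (the value here is made up):

#include <stdio.h>
#include <stdlib.h>

int main(void) {
    const char *text = "123.75";   /* value kept as a string literal */
    char *end = NULL;

    double parsed = strtod(text, &end);  /* parse the text at run time */
    int whole = (int)parsed;             /* small value: safe to truncate */

    printf("parsed=%f whole=%d\n", parsed, whole);
    return 0;
}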

Proper way to fix int overflow in C, when we cast int to float to int

I know that a float to int conversion is safe, but an int to float conversion is not.

Each conversion has issues.

(Assume 32-bit int and 32-bit float for discussion.)

Large int to float risks loss of precision, as float does not exactly encode every int. With OP's int num = 2147483647;, (float)num converts 2147483647 to one of the two nearby float values. With the round-to-nearest rounding mode, the result was certainly 2147483648.0f.

float to int truncates any fraction. Conversion from infinity and Not-a-Number poses additional concerns.

float to int risks undefined behavior when the floating point value is not inside the range -2,147,483,648.9999... to 2,147,483,647.9999.... This is the case with OP's int num = 2147483647;: (int)(float)num attempts to convert an out-of-range 2147483648.0 to int. In OP's case, the value was apparently wrapped around (2^32 subtracted) to end up as -2147483648.



Which way is best to handle it? Also, what other exceptions should I consider?

With conversion int to float, expect rounding for large int.

With conversion float to int, expect truncation and perhaps test if value is in range.

With 2's complement integer encoding, a test to prevent an out-of-range conversion:

#include <limits.h>   /* INT_MAX, INT_MIN */
#include <stdbool.h>  /* bool */

/* INT_MAX + 1 as a float, computed without int overflow */
#define FLT_INT_MAX_PLUS1 ((INT_MAX/2 + 1)*2.0f)

// return true when OK to convert
bool float_to_int_test(float x) {
  return (x < FLT_INT_MAX_PLUS1) && (x - INT_MIN > -1.0f);
}

Other tests could be had to determine rounding or truncation.
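
A possible usage sketch, assuming it is compiled in the same file as the test above (the sample values are made up):

#include <stdio.h>

int main(void) {
    float samples[] = { 3.9f, 2147483648.0f, -3000000000.0f };

    for (unsigned i = 0; i < sizeof samples / sizeof samples[0]; i++) {
        if (float_to_int_test(samples[i]))
            printf("%.1f -> %d\n", samples[i], (int)samples[i]);
        else
            printf("%.1f is out of int range\n", samples[i]);
    }
    return 0;
}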

What happens in C++ when an integer type is cast to a floating point type or vice-versa?

Do the underlying bits just get "reinterpreted" as a floating point value?

No, the value is converted according to the rules in the standard.

is there a run-time conversion to produce the nearest floating point value?

Yes there's a run-time conversion.

For floating point -> integer, the value is truncated toward zero, provided that the truncated value is in range of the integer type. If it is not, behaviour is undefined. Strictly, it is the truncated value (the integral part), not the source value, that must be representable. The boundary case if the target type is char, say, is CHAR_MAX + 0.5: its integral part is CHAR_MAX, so casting it to char is well defined and yields CHAR_MAX, whereas casting CHAR_MAX + 1.0 would be undefined.
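
A small sketch of that boundary case (the comments restate the reading of the standard above):

#include <limits.h>
#include <stdio.h>

int main(void) {
    double d = CHAR_MAX + 0.5;   /* integral part is CHAR_MAX, so it is representable */
    char c = (char)d;            /* well defined: truncates to CHAR_MAX */

    printf("%d\n", (int)c);

    /* (char)(CHAR_MAX + 1.0) would truncate to CHAR_MAX + 1, which char
       cannot represent, so that conversion would be undefined. */
    return 0;
}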

For integer -> floating point, the result is the exact same value if possible, or else is one of the two floating point values either side of the integer value. Not necessarily the nearer of the two.

Is endianness a factor on any platforms (i.e., endianness of floats differs from ints)?

No, never. The conversions are defined in terms of values, not storage representations.

How do different width types behave (e.g., int to float vs. int to double)?

All that matters is the ranges and precisions of the types. Assuming 32 bit ints and IEEE 32 bit floats, it's possible for an int->float conversion to be imprecise. Assuming also 64 bit IEEE doubles, it is not possible for an int->double conversion to be imprecise, because all int values can be exactly represented as a double.
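
A small sketch, assuming a 32-bit int and IEEE 754 float/double as above:

#include <limits.h>
#include <stdio.h>

int main(void) {
    int n = INT_MAX;          /* 2147483647 */

    float  f = (float)n;      /* float has 24 bits of significand: may round */
    double d = (double)n;     /* double has 53 bits: every 32-bit int is exact */

    printf("int    : %d\n", n);
    printf("float  : %.1f\n", f);   /* typically 2147483648.0 */
    printf("double : %.1f\n", d);   /* exactly 2147483647.0 */
    return 0;
}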

What does the language standard guarantee about the safety of such casts/conversions? By cast, I mean a static_cast or C-style cast.

As indicated above, it's safe except in the case where a floating point value is converted to an integer type, and the value is outside the range of the destination type.

If a float holds a small magnitude value (e.g., 2), does the bit pattern have the same meaning when interpreted as an int?

No, it does not. The IEEE 754 32-bit representation of 2.0 is 0x40000000, which reinterpreted as an int is 1,073,741,824, not 2.
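
A short sketch showing the difference between the value conversion and the stored bits (assuming float is 32 bits wide):

#include <stdint.h>
#include <stdio.h>
#include <string.h>

int main(void) {
    float f = 2.0f;
    uint32_t bits;

    /* Copy out the stored bit pattern (memcpy avoids aliasing problems). */
    memcpy(&bits, &f, sizeof bits);
    printf("bit pattern of 2.0f: 0x%08X\n", (unsigned)bits);   /* 0x40000000 */

    /* A cast converts the value, not the bits. */
    printf("(int)f = %d\n", (int)f);                           /* 2 */
    return 0;
}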


