How to Cast a UInt64 to an Int64

How do you cast a UInt64 to an Int64?

Casting a UInt64 to an Int64 is not safe, since a UInt64 can hold a value greater than Int64.max, which would result in an overflow.

Here's a snippet for converting a UInt64 to an Int64 and vice versa:

// Extensions for 64-bit integer signed <-> unsigned conversion

extension Int64 {
    /// Reinterprets the bit pattern of this Int64 as a UInt64.
    var unsigned: UInt64 {
        let valuePointer = UnsafeMutablePointer<Int64>.allocate(capacity: 1)
        defer {
            valuePointer.deallocate() // deallocate(capacity:) is deprecated in current Swift
        }

        valuePointer.pointee = self

        return valuePointer.withMemoryRebound(to: UInt64.self, capacity: 1) { $0.pointee }
    }
}

extension UInt64 {
    /// Reinterprets the bit pattern of this UInt64 as an Int64.
    var signed: Int64 {
        let valuePointer = UnsafeMutablePointer<UInt64>.allocate(capacity: 1)
        defer {
            valuePointer.deallocate() // deallocate(capacity:) is deprecated in current Swift
        }

        valuePointer.pointee = self

        return valuePointer.withMemoryRebound(to: Int64.self, capacity: 1) { $0.pointee }
    }
}

This simply reinterprets the binary representation of the UInt64 as an Int64, i.e. numbers greater than Int64.max will come out negative because the most significant bit of the 64-bit integer is treated as the sign bit.

If you only want positive integers, you can take the absolute value.

EDIT: Depending on the behavior you want, you can either take the absolute value, or:

if currentValue < 0 {
    return Int64.max + currentValue + 1
} else {
    return currentValue
}

The latter option is similar to stripping the sign bit. For example:

// Using an 8-bit integer for simplicity

// currentValue
0b1111_1111 // If this is interpreted as Int8, this is -1.

// Strip sign bit
0b0111_1111 // As Int8, this is 127. To get this we can add Int8.max + 1:

// Int8.max + currentValue + 1
127 + (-1) + 1 = 127

Golang: how can I convert uint64 to int64?

What you are asking (to store 18,446,744,073,709,551,615 as an int64 value) is impossible.

A uint64 stores non-negative integers and has all 64 bits available to hold information. It can therefore store any integer between 0 and 18,446,744,073,709,551,615 (2^64 - 1).

An int64 uses one bit to hold the sign, leaving 63 bits to hold information about the number. It can store any value between -9,223,372,036,854,775,808 and +9,223,372,036,854,775,807 (-2^63 and 2^63-1).

Both types can hold 18,446,744,073,709,551,616 distinct integers; it is just that the uint64 range starts at zero, whereas the int64 range straddles zero.

To hold 18,446,744,073,709,551,615 as a signed integer would require 65 bits.

In your conversion, no information from the underlying bytes is lost. The difference in the integer values returned is due to how the two types interpret and display the values.

uint64 will display a raw integer value, whereas int64 will use two's complement.

package main

import "fmt"

func main() {
    var x uint64 = 18446744073709551615
    var y int64 = int64(x)

    fmt.Printf("uint64: %v = %#[1]x, int64: %v = %#x\n", x, y, uint64(y))
    // uint64: 18446744073709551615 = 0xffffffffffffffff, int64: -1 = 0xffffffffffffffff

    x -= 100
    y -= 100
    fmt.Printf("uint64: %v = %#[1]x, int64: %v = %#x\n", x, y, uint64(y))
    // uint64: 18446744073709551515 = 0xffffffffffffff9b, int64: -101 = 0xffffffffffffff9b
}

https://play.golang.com/p/hlWqhnC9Dh

Convert uint64_t to int64_t generically for all data widths


However, I suspect that doesn't do what I want.

Indeed, your suspicion is valid: the expression static_cast<signed>(x) is exactly equivalent to static_cast<signed int>(x), so the cast will always be to an object of the 'default' size of an int on the given platform (and, similarly, unsigned is equivalent to unsigned int).
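To see this concretely, here is a minimal sketch (standard C++ only) that checks the deduced type of such a cast at compile time:

#include <cstdint>
#include <type_traits>

// static_cast<signed>(...) deduces to plain 'int', regardless of the width
// of the value being cast:
static_assert(std::is_same<decltype(static_cast<signed>(uint64_t{})), int>::value,
              "'signed' on its own means 'signed int'");

int main() { return 0; }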


However, with a little bit of work, you can create 'generic' cast template functions that use the std::make_signed and std::make_unsigned type-trait structures.

Here's a possible implementation of such functions, with a brief test program that shows some deduced result types for different input type widths:

#include <type_traits> // For std::make_signed and std::make_unsigned

template<typename T>
static inline auto unsigned_cast(T s)
{
    // 'typename' is required because make_unsigned<T>::type is a dependent name
    return static_cast<typename std::make_unsigned<T>::type>(s);
}

template<typename T>
static inline auto signed_cast(T s)
{
    return static_cast<typename std::make_signed<T>::type>(s);
}


#include <cstdint>  // For int16_t and uint64_t
#include <typeinfo> // For "typeid"
#include <iostream>

int main()
{
    int16_t s16 = 42;
    auto u16 = unsigned_cast(s16);
    std::cout << "Type of unsigned cast is: " << typeid(u16).name() << "\n";

    uint64_t u64 = 39uLL;
    auto s64 = signed_cast(u64);
    std::cout << "Type of signed cast is: " << typeid(s64).name() << "\n";
    return 0;
}

Convert uint64 to int64 without loss of information

Seeing -1 would be consistent with a process running as 32-bit.

See for instance the Go 1.1 release notes (which made int and uint 64 bits wide on 64-bit platforms):

x := ^uint32(0) // x is 0xffffffff
i := int(x) // i is -1 on 32-bit systems, 0xffffffff on 64-bit
fmt.Println(i)

Using fmt.Printf("%b\n", y) can help to see what is going on (see ANisus' answer).

As it turned out, the OP wheaties confirmed (in the comments) that the code was initially run as a 32-bit process (hence this answer), but then realized that 18446744073709551615 is 0xffffffffffffffff (-1) anyway: see ANisus' answer.

uint64_t to int

Usually, an exact-width integer like uint64_t is used for a good reason.
If you cast it to an int, which may not be 64 bits wide, you may run into serious problems...
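To make the risk concrete, here is a small sketch along those lines (it assumes a platform where int is 32 bits, which is common but not guaranteed):

#include <cstdint>
#include <iostream>

int main()
{
    uint64_t big = 0x100000001uLL;        // 4,294,967,297 does not fit in a 32-bit int
    int narrowed = static_cast<int>(big); // implementation-defined before C++20;
                                          // on a typical 32-bit-int platform this becomes 1

    std::cout << big << " -> " << narrowed << "\n";
    return 0;
}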

What is the recommended way to convert from long long int to uint64_t?

This is an implicit conversion (an integral conversion); no cast is required:

If the destination type is unsigned, the resulting value is the smallest unsigned value equal to the source value modulo 2^n, where n is the number of bits used to represent the destination type.
That is, depending on whether the destination type is wider or narrower, signed integers are sign-extended or truncated and unsigned integers are zero-extended or truncated respectively.

static_cast adds no value.
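For instance, a minimal sketch of the modulo-2^n rule quoted above, using long long and uint64_t:

#include <cstdint>
#include <iostream>

int main()
{
    long long a = 42;
    long long b = -1;

    uint64_t ua = a; // implicit integral conversion, no cast needed; stays 42
    uint64_t ub = b; // -1 is taken modulo 2^64: 18446744073709551615 (i.e. UINT64_MAX)

    std::cout << ua << " " << ub << "\n";
    return 0;
}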

A static_assert can be used to prevent truncation, e.g.:

static_assert(sizeof(uint64_t) >= sizeof(long long), "Truncation detected.");

There is also boost::numeric_cast:

The fact that the behavior for overflow is undefined for all conversions (except the aforementioned unsigned to unsigned) makes any code that may produce positive or negative overflows exposed to portability issues.

numeric_cast returns the result of converting a value of type Source to a value of type Target. If out-of-range is detected, an overflow policy is executed whose default behavior is to throw an exception (see bad_numeric_cast, negative_overflow and positive_overflow).
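A rough sketch of how that might look (it assumes Boost is available; boost::numeric_cast comes from <boost/numeric/conversion/cast.hpp>):

#include <cstdint>
#include <iostream>
#include <boost/numeric/conversion/cast.hpp>

int main()
{
    uint64_t small = 42;
    uint64_t big   = UINT64_MAX;

    // Fits into int64_t, so this succeeds.
    int64_t a = boost::numeric_cast<int64_t>(small);
    std::cout << a << "\n";

    try {
        // UINT64_MAX is out of range for int64_t, so the default policy throws.
        int64_t b = boost::numeric_cast<int64_t>(big);
        std::cout << b << "\n";
    } catch (const boost::numeric::positive_overflow& e) {
        std::cout << "overflow: " << e.what() << "\n";
    }
    return 0;
}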


