Strange "Unsigned Long Long Int" Behaviour

strange behavior of unsigned long long int in loop

When the variable is unsigned, i >= 0 is always true, so your loop never ends. When i reaches 0, the next -- wraps i around to 0xFFFFFFFFFFFFFFFF (decimal 18446744073709551615).

Strange unsigned long long int behaviour

The problem is that MinGW relies on the msvcrt.dll runtime. Even though the GCC compiler supports C99-isms like long long the runtime which is processing the format string doesn't understand the "%llu" format specifier.

You'll need to use Microsoft's format specifier for 64-bit ints. I think that "%I64u" will do the trick.

If you #include <inttypes.h> you can use the macros it provides to be a little more portable:

#include <stdio.h>
#include <inttypes.h>

int main(void)
{
    unsigned long long int n;
    scanf("%" SCNu64, &n);
    printf("n: %" PRIu64 "\n", n);
    n /= 3;
    printf("n/3: %" PRIu64 "\n", n);
    return 0;
}

Note that MSVC doesn't have inttypes.h until VS 2010, so if you want to be portable to those compilers you'll need to dig up your own copy of inttypes.h or take the one from VS 2010 (which I think will work with earlier MSVC compilers, but I'm not entirely sure). Then again, to be portable to those compilers, you'd need to do something similar for the unsigned long long declaration, which would entail digging up stdint.h and using uint64_t instead of unsigned long long.

unsigned long long int strange behaviour

For the latter one use:

printf("\n x: %llu \n y: %f \n",x,y);

Use the u conversion for unsigned integrals (your output only looked correct because you used small values). Use the ll length modifier for long long; otherwise printf assumes the wrong size when decoding the second parameter (x) and then fetches y from the wrong address.

Strange unsigned long long int behavior

The reason you get this awkward result is that the least significant bits are lost in the double type: a double's mantissa can only hold about 15-16 decimal digits, i.e. exactly 52 explicit binary digits (wiki).

900000000000001i64 + 4*pow(10, 16) is computed as a double, trimming the lower bits. In your case, 3 of them are lost.

Example:

#include <iostream>
#include <iomanip>
#include <cmath>

// Note: i64 is the MSVC 64-bit literal suffix; with GCC/Clang use LL.
std::cout << std::setprecision(30);
std::cout << 900000000000001i64 + 4 * pow(10, 16) << std::endl;
std::cout << 900000000000002i64 + 4 * pow(10, 16) << std::endl;
std::cout << 900000000000003i64 + 4 * pow(10, 16) << std::endl;
std::cout << 900000000000004i64 + 4 * pow(10, 16) << std::endl;
std::cout << 900000000000005i64 + 4 * pow(10, 16) << std::endl;
std::cout << 900000000000006i64 + 4 * pow(10, 16) << std::endl;
std::cout << 900000000000007i64 + 4 * pow(10, 16) << std::endl;
std::cout << 900000000000008i64 + 4 * pow(10, 16) << std::endl;
std::cout << 900000000000009i64 + 4 * pow(10, 16) << std::endl;

will produce result:

40900000000000000
40900000000000000
40900000000000000
40900000000000000
40900000000000008
40900000000000008
40900000000000008
40900000000000008
40900000000000008

Notice how the values are rounded to the nearest multiple of 8 = 2^3, because three low-order bits are lost.

C++ strange jump in unsigned long long int values

What is happening exactly?

The problem is here:

unsigned long long low = 1;
// Side note: This is simply (2ULL << 62) - 1
unsigned long long high = (unsigned long long)(pow(2, 63)) - 1;
unsigned long long n;
while (/* irrelevant */) {
    n = (low + high) / 2;
    // Some stuff that does not modify n...
    f(n, a, b, c) // <-- Here!
}

In the first iteration, you have low = 1 and high = 2^63 - 1, which means that n = 2^63 / 2 = 2^62. Now, let's look at f:

#define f(n, a, b, c) (/* I do not care about this... */ + c*n*n*n)

You have n^3 in f, so for n = 2^62, n^3 = 2^186, which is far too large for your unsigned long long (which is almost certainly 64 bits wide).

How should I solve it?

The main issue here is overflow during the binary search, so you should simply handle the overflowing case separately.

Preamble: I am using ull_t because I am lazy, and you should avoid macros in C++; prefer a function and let the compiler inline it. Also, I prefer a loop over using the log function to compute the log2 of an unsigned long long (see the bottom of this answer for the implementations of log2 and is_overflow).

using ull_t = unsigned long long;

constexpr auto f (ull_t n, ull_t a, ull_t b, ull_t c) {
    if (n == 0ULL) { // Avoid log2(0)
        return 0ULL;
    }
    if (is_overflow(n, a, b, c)) {
        return 0ULL;
    }
    return a * n + b * n * log2(n) + c * n * n * n;
}

Here is slightly modified binary search version:

constexpr auto find_n (ull_t k, ull_t a, ull_t b, ull_t c) {
    constexpr ull_t max = std::numeric_limits<ull_t>::max();
    auto lb = 1ULL, ub = (1ULL << 63) - 1;
    while (lb <= ub) {
        if (ub > max - lb) {
            // This should never happen since ub < 2^63 and lb <= ub, so lb + ub < 2^64
            return 0ULL;
        }
        // Compute the middle point (no overflow, guaranteed by the test above).
        auto tn = (lb + ub) / 2;
        // If there is an overflow, then lower the upper bound.
        if (is_overflow(tn, a, b, c)) {
            ub = tn - 1;
        }
        // Otherwise, do a standard binary search...
        else {
            auto val = f(tn, a, b, c);
            if (val < k) {
                lb = tn + 1;
            }
            else if (val > k) {
                ub = tn - 1;
            }
            else {
                return tn;
            }
        }
    }
    return 0ULL;
}

As you can see, there is only one test that is really relevant here, namely is_overflow(tn, a, b, c) (the first test regarding lb + ub can never fire, since ub < 2^63 and lb <= ub < 2^63, so lb + ub < 2^64, which fits in unsigned long long in our case).

Complete implementation:

#include <limits>
#include <type_traits>

using ull_t = unsigned long long;

template <typename T,
          typename = std::enable_if_t<std::is_integral<T>::value>>
constexpr auto log2 (T n) {
    T log = 0;
    while (n >>= 1) ++log;
    return log;
}

constexpr bool is_overflow (ull_t n, ull_t a, ull_t b, ull_t c) {
    ull_t max = std::numeric_limits<ull_t>::max();
    if (a != 0 && n > max / a) { // Guard a == 0 to avoid dividing by zero
        return true;
    }
    if (b != 0) {
        if (n > max / b) {
            return true;
        }
        // log2(n) == 0 when n == 1; skip the test to avoid dividing by zero
        if (log2(n) != 0 && b * n > max / log2(n)) {
            return true;
        }
    }
    if (c != 0) {
        if (n > max / c) return true;
        if (c * n > max / n) return true;
        if (c * n * n > max / n) return true;
    }
    if (a * n > max - c * n * n * n) {
        return true;
    }
    if (a * n + c * n * n * n > max - b * n * log2(n)) {
        return true;
    }
    return false;
}

constexpr auto f (ull_t n, ull_t a, ull_t b, ull_t c) {
    if (n == 0ULL) {
        return 0ULL;
    }
    if (is_overflow(n, a, b, c)) {
        return 0ULL;
    }
    return a * n + b * n * log2(n) + c * n * n * n;
}

constexpr auto find_n (ull_t k, ull_t a, ull_t b, ull_t c) {
    constexpr ull_t max = std::numeric_limits<ull_t>::max();
    auto lb = 1ULL, ub = (1ULL << 63) - 1;
    while (lb <= ub) {
        if (ub > max - lb) {
            return 0ULL; // Cannot happen: ub < 2^63 and lb <= ub, so lb + ub < 2^64
        }
        auto tn = (lb + ub) / 2;
        if (is_overflow(tn, a, b, c)) {
            ub = tn - 1;
        }
        else {
            auto val = f(tn, a, b, c);
            if (val < k) {
                lb = tn + 1;
            }
            else if (val > k) {
                ub = tn - 1;
            }
            else {
                return tn;
            }
        }
    }
    return 0ULL;
}

Compile time check:

Below is a little piece of code that you can use to check the above code at compile time (since everything is constexpr):

template <unsigned long long n, unsigned long long a,
          unsigned long long b, unsigned long long c>
struct check: public std::true_type {
    enum {
        k = f(n, a, b, c)
    };
    static_assert(k != 0, "Value out of bound for (n, a, b, c).");
    static_assert(n == find_n(k, a, b, c), "");
};

template <unsigned long long a,
          unsigned long long b,
          unsigned long long c>
struct check<0, a, b, c>: public std::true_type {
    static_assert(a != a, "Ambiguous values for n when k = 0.");
};

template <unsigned long long n>
struct check<n, 0, 0, 0>: public std::true_type {
    static_assert(n != n, "Ambiguous values for n when a = b = c = 0.");
};

#define test(n, a, b, c) static_assert(check<n, a, b, c>::value, "");

test(1000, 99, 99, 0);
test(1000, 99, 99, 99);
test(453333, 99, 99, 99);
test(495862, 99, 99, 9);
test(10000000, 1, 1, 0);

Note: The maximum value of k is about 2^63, so for a given triplet (a, b, c), the maximum value of n is the one such that f(n, a, b, c) < 2^63 and f(n + 1, a, b, c) >= 2^63. For a = b = c = 99, this maximum value is n = 453333 (found empirically), which is why I tested it above.

Strange behavior when attempting to obtain number of digits of unsigned long long´s maximum

IIUC, the OQ wants the buffer size needed to print each type's maximum value in decimal. The code below works for type widths from 8 up to 64 bits, at least when CHAR_BIT == 8:

#include <stdio.h>
#include <limits.h>

#define ASCII_SIZE(s) ((3 + (s) * CHAR_BIT) * 4 / 13)

int main(void)
{
    printf("unsigned char      : %zu\n", ASCII_SIZE(sizeof(unsigned char)) );
    printf("unsigned short int : %zu\n", ASCII_SIZE(sizeof(unsigned short int)) );
    printf("unsigned int       : %zu\n", ASCII_SIZE(sizeof(unsigned int)) );
    printf("unsigned long      : %zu\n", ASCII_SIZE(sizeof(unsigned long)) );
    printf("unsigned long long : %zu\n", ASCII_SIZE(sizeof(unsigned long long)) );

    printf("          Ding : %u\n", UINT_MAX );
    printf("     Lang Ding : %lu\n", ULONG_MAX );
    printf("Heel lang Ding : %llu\n", ULLONG_MAX );

    return 0;
}

Output:

unsigned char : 3
unsigned short int : 5
unsigned int : 10
unsigned long : 20
unsigned long long : 20
Ding : 4294967295
Lang Ding : 18446744073709551615
Heel lang Ding : 18446744073709551615

c bitfields strange behaviour with long int in struct

index is of type int, which is probably 32 bits on your system. Shifting a value by an amount greater than or equal to the number of bits in its type has undefined behavior.

Change index to unsigned long (bit-shifting signed types is ill-advised). Or you can change 1<<index to 1L << index, or even 1LL << index.

As others have pointed out, test is uninitialized. You can initialize it to all zeros like this:

 struct foo test = { 0 };

The correct printf format for size_t is %zu, not %ld.

And it wouldn't be a bad idea to modify your code so it doesn't depend on the non-portable assumption that long is 64 bits; it can be as narrow as 32 bits. Consider using the uintN_t types defined in <stdint.h>.

I should also mention that bit fields of types other than int, unsigned int, signed int, and _Bool (or bool) are implementation-defined.

How do you format an unsigned long long int using printf?

Use the ll (el-el) long-long modifier with the u (unsigned) conversion, and make sure the argument actually has type unsigned long long (works on Windows and with GNU libc):

printf("%llu\n", 285212672ULL);

unsigned long long behaving odd in C

microtime = time_camera.tv_sec * 1000000 + time_camera.tv_usec;

tv_sec is a smaller integer type (time_t, probably int or long), so

time_camera.tv_sec * 1000000

overflows. Use a suffix to give the constant the appropriate type:

time_camera.tv_sec * 1000000ULL

In

microtime = time_camera.tv_sec;
microtime = microtime * 1000000;

the multiplication is performed in unsigned long long, since one operand (microtime) already has that type, so the other operand is converted to it.


