Why Can't I Do foreach (var item in DataTable.Rows)

Is an int a 64-bit integer in 64-bit C#?

No. The C# specification rigidly defines that int is an alias for System.Int32 with exactly 32 bits. Changing this would be a major breaking change.

int behavior at 32/64 bit process?

No. In a 64-bit process, a C# int is still 4 bytes.

In C#, int is always merely an alias for global::System.Int32.

What will change is the reference size and pointer size, but that is all abstracted by the IL anyway; nothing needs to change. Note, though, that any given process is only ever going to be 32-bit or 64-bit, never both, so you might need to compile as "Any CPU" so the same assembly can run as either.
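
A quick way to convince yourself is the minimal sketch below (a small console program, compiled once as x86 and once as x64): sizeof(int) always prints 4, and only the pointer size reported by IntPtr.Size changes with the process bitness.

using System;

class BitnessCheck
{
    static void Main()
    {
        Console.WriteLine(sizeof(int));                // always 4, regardless of bitness
        Console.WriteLine(IntPtr.Size);                // 4 in a 32-bit process, 8 in a 64-bit process
        Console.WriteLine(Environment.Is64BitProcess); // tells you which kind of process you got
    }
}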

sizeof(int) on x64?

There are various 64-bit data models; Microsoft uses LP64 for .NET: both longs and pointers are 64 bits (although C-style pointers can only be used in C# in unsafe contexts or as an IntPtr value, which cannot be used for pointer arithmetic). Contrast this with ILP64, where ints are also 64 bits.

Thus, on all platforms, int is 32 bits and long is 64 bits; you can see this in the names of the underlying types System.Int32 and System.Int64.
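
To see those names directly, a tiny illustrative snippet (top-level statements):

using System;

Console.WriteLine(typeof(int));   // System.Int32
Console.WriteLine(typeof(long));  // System.Int64
Console.WriteLine(sizeof(int));   // 4 bytes on every platform
Console.WriteLine(sizeof(long));  // 8 bytes on every platform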

How to use Int64 in C#

The 64-bit integer type in C# is long, which is an alias for System.Int64.
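
For instance, a trivial sketch:

using System;

long big = 9_000_000_000;          // too large for int, fine for long
Int64 same = big;                  // Int64 and long are one and the same type
Console.WriteLine(same);           // 9000000000
Console.WriteLine(long.MaxValue);  // 9223372036854775807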

Is an int in C# 32 bits?

In C#, the int type is always an Int32, and is always 32 bits.

Some other languages have different rules, and int can be machine dependent. In certain languages, int is defined as having a minimum size, with the actual size being machine/implementation specific but at least that large. For example, in C++, the int datatype is not necessarily 32 bits. From Fundamental data types:

the general specification is that int has the natural size suggested by the system architecture (one "word") and the four integer types char, short, int and long must each one be at least as large as the one preceding it, with char being always one byte in size.

However, .NET standardized this, so the types are actually specified as Int32, Int64, etc. In C#, int is an alias for System.Int32, and will always be 32 bits. This is guaranteed by section 4.1.5 Integral Types within the C# Language Specification, which states:

The int type represents signed 32-bit integers with values between –2147483648 and 2147483647.
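
Those bounds are exposed directly on the type, so a quick check is simply:

using System;

Console.WriteLine(int.MinValue);  // -2147483648
Console.WriteLine(int.MaxValue);  //  2147483647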

How to convert integer to unsigned 64-bit integer

In short, Time.now.to_i is enough.

Ruby internally stores Times in seconds since 1970-01-01T00:00:00.000+0000:

Time.now.to_f  #=> 1439806700.8638804
Time.now.to_i #=> 1439806700

And you don't have to convert the value to something like ulong as you would in C#, because Ruby automatically coerces the integer type so that it doesn't fight against your common sense.

A slightly more verbose explanation: Ruby stores integers as instances of Fixnum if the number fits within 63 bits (not 64 — weird, huh? The missing bit is used internally as a tag). If the number exceeds that size, Ruby automatically converts it to a Bignum, which has arbitrary size.
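
If what you actually need is the C# side of that conversion, a rough equivalent sketch (using the standard DateTimeOffset API; the cast to ulong is only needed if an unsigned value is genuinely required) would be:

using System;

class UnixSeconds
{
    static void Main()
    {
        long seconds = DateTimeOffset.UtcNow.ToUnixTimeSeconds(); // seconds since 1970-01-01T00:00:00Z
        ulong unsignedSeconds = (ulong)seconds;                   // current timestamps fit easily in a ulong
        Console.WriteLine(unsignedSeconds);
    }
}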

How big can a 64 bit unsigned integer be?

Whether a 64-bit unsigned value has wrapped around is hard or impossible to detect by looking at the value itself.

The problem is that the maximum value plus even only 1 is still/again a valid value, namely 0.

This is why most programmers avoid wraparound as much as possible whenever it would actually be a wrong value. For some applications, wrapping around is part of the logic and perfectly fine.

If you calculate, e.g., c = a + b; (a, b, c being 64-bit unsigned ints, with a and b worryingly close to the maximum, or possibly so) and want to find out whether the result is affected, then check whether ((max - b) < a), with max being the appropriate compiler-provided symbol.

Do not calculate the maximum value yourself as 2^64 - 1; it will be implementation specific and platform specific. On top of that, the expression itself contains the wraparound twice (2^64 is already beyond the maximum, probably yielding 0, and subtracting 1 then wraps back through 0 again...). And that applies even if ^ is understood to be an appropriate version of "to the power of" rather than XOR.
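
Rendered in C#, where ulong.MaxValue plays the role of that compiler-provided symbol, the check might look like this sketch (the values of a and b are made up for illustration):

using System;

ulong a = ulong.MaxValue - 5;   // hypothetical value close to the maximum
ulong b = 10;

if (ulong.MaxValue - b < a)
{
    Console.WriteLine("a + b would wrap around");
}
else
{
    Console.WriteLine(a + b);   // safe: no wraparound
}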

Result of adding an int32 to a 64-bit native int?

The signedness does not matter when adding two values of the same bit size. For example, adding 32-bit -10 (0xfffffff6) to 32-bit 10 (0x0000000a) will correctly yield 0. Because of that, there is only one add instruction in CIL (Common Intermediate Language).

However, when adding two values of differing bitsizes, then the signedness does matter. For example, adding 32-bit -10 to 64-bit 10 can result in 4294967296 (0x100000000) when done unsigned, and 0 when signed.
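
The same effect is visible from C# whenever a 32-bit value is widened to 64 bits before the addition; a small sketch:

using System;

int signedMinusTen = -10;
uint samePattern = unchecked((uint)-10);  // same bit pattern 0xFFFFFFF6, value 4294967286
long ten = 10;

Console.WriteLine(ten + signedMinusTen);  // 0           (the int is sign-extended)
Console.WriteLine(ten + samePattern);     // 4294967296  (the uint is zero-extended)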

The CIL add instruction allows adding a native integer and a 32-bit integer. The native integer may be 64-bit (on a 64-bit system). Testing reveals that add treats the 32-bit integer as a signed integer, and sign-extends it. This is not always correct and may be considered a bug. Microsoft is currently not going to fix it.

Because overflow checking depends on whether the operands are treated as being unsigned or signed, there are two variants of add.ovf: add.ovf (signed) and add.ovf.un (unsigned). However, these variants also correctly sign-extend or zero-extend the smaller operand when adding a 32-bit integer to a native integer.

So adding a native integer and an unsigned 32-bit integer may yield different results depending on C#'s overflow-checking setting. Apparently the reason I could not pin this down cleanly is a bug or oversight in the CIL language design.

Should I use 'long' instead of 'int' on 64-bits in langs with fixed type size (like Java, C#)

If you're on a 64-bit processor, and you've compiled your code for 64-bit, then at least some of the time, long is likely to be more efficient because it matches the register size. But whether that will really impact your program much is debatable. Also, if you're using long all over the place, you're generally going to use more memory - both on the stack and on the heap - which could negatively impact performance. There are too many variables to know for sure how well your program will perform using long by default instead of int. There are reasons why it could be faster and reasons why it could be slower. It could be a total wash.

The typical thing to do is to just use int if you don't care about the size of the integer. If you need a 64-bit integer, then you use long. If you're trying to use less memory and int is far more than you need, then you use byte or short.

x86_64 CPUs are designed to be efficient at processing 32-bit programs as well, so it's not like using int is going to seriously degrade performance. Some things will be faster due to better alignment when you use 64-bit integers on a 64-bit CPU, but other things will be slower due to the increased memory requirements. And there are probably a variety of other factors involved which could affect performance in either direction.

If you really want to know which is going to do better for your particular application in your particular environment, you're going to need to profile it. This is not a case where there is a clear advantage of one over the other.

Personally, I would advise that you follow the typical route of using int when you don't care about the size of the integer and to use the other types when you do.
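
If you do decide to measure it, a minimal micro-benchmark sketch along these lines (assuming the BenchmarkDotNet NuGet package; the numbers you get will be specific to your CPU and runtime) is a reasonable starting point:

using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;

public class IntVsLongSum
{
    private const int N = 1_000_000;

    [Benchmark]
    public int SumInt()
    {
        int total = 0;
        for (int i = 0; i < N; i++) total += i;   // wraps silently in the default unchecked context; only the timing matters here
        return total;
    }

    [Benchmark]
    public long SumLong()
    {
        long total = 0;
        for (long i = 0; i < N; i++) total += i;
        return total;
    }
}

public class Program
{
    public static void Main() => BenchmarkRunner.Run<IntVsLongSum>();
}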

Creating a 64-bit Integer Based Off of Four 16-bit Integers

You need the same/corresponding indexes and shift amounts that you used to create nums from num:

nums[0] = num >>  0;   // ---- ---- ---- xxxx
nums[1] = num >> 16;   // ---- ---- xxxx ----
nums[2] = num >> 32;   // ---- xxxx ---- ----
nums[3] = num >> 48;   // xxxx ---- ---- ----

And, each term needs a (uint64_t) cast to force promotion to 64 bits. Otherwise, the shift will exceed the size used for intermediate terms on the right side of the assignment (e.g. they'll be done with int and/or unsigned int).

mainnum |= (uint64_t) nums[0] << 0;
mainnum |= (uint64_t) nums[1] << 16;
mainnum |= (uint64_t) nums[2] << 32;
mainnum |= (uint64_t) nums[3] << 48;

I prefer the above because it's cleaner/clearer and [when optimized] will produce the same code as doing it with a single statement:

mainnum =
    (uint64_t) nums[0] <<  0 |
    (uint64_t) nums[1] << 16 |
    (uint64_t) nums[2] << 32 |
    (uint64_t) nums[3] << 48;
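
Since most of this page is about C#, here is the same round trip sketched in C# (ushort standing in for the 16-bit pieces; the value of num is arbitrary):

using System;

ulong num = 0x0123_4567_89AB_CDEF;

// Split into four 16-bit pieces, least significant first; the cast truncates to 16 bits.
ushort[] nums =
{
    (ushort)(num >>  0),
    (ushort)(num >> 16),
    (ushort)(num >> 32),
    (ushort)(num >> 48),
};

// Reassemble; each piece is widened back to 64 bits before shifting.
ulong mainnum = (ulong)nums[0] <<  0
              | (ulong)nums[1] << 16
              | (ulong)nums[2] << 32
              | (ulong)nums[3] << 48;

Console.WriteLine(mainnum == num);  // True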

