3-Byte Int and 5-Byte Long

3-byte int and 5-byte long?

I think 3.9.1/2 (C++98) sums this up nicely (immediately followed by analogous information for the unsigned types):

There are four signed integer types: “signed char”, “short int”,
“int”, and “long int.” In this list, each type provides at least as
much storage as those preceding it in the list. Plain ints have the
natural size suggested by the architecture of the execution
environment; the other signed integer types are provided to meet
special needs.

Basically all we know is that sizeof(char) == 1 and that each "larger" type is at least as large as the one before it in the list, with int being a "natural" size for the architecture (where, as far as I can tell, "natural" is up to the compiler writer). We don't know anything like CHAR_BIT * sizeof(int) <= 32, etc. Also keep in mind that CHAR_BIT doesn't have to be 8 either.

It seems fairly safe to say that a three-byte int and a five-byte long would be allowed on hardware where those sizes are natively used. I am, however, not aware of any such hardware/architectures.

EDIT: As pointed out in @Nigel Harper's comment, we do know that int has to be at least 16 bits and long at least 32 bits to satisfy the range requirements. Beyond that, there are no specific size restrictions other than the ordering seen above.

How do you read in a 3 byte size value as an integer in c++?

Read each byte and then put them together into your int:

int id3 = byte0 + (byte1 << 8) + (byte2 << 16);  // byte0 is the least significant byte here

Make sure to take endianness into account.

convert 3 bytes to int in java

You have to be careful with the sign-extension here - unfortunately, bytes are signed in Java (as far as I know, this has caused nothing but grief).

So you have to do a bit of masking.

int r = (b3 & 0xFF) | ((b2 & 0xFF) << 8) | ((b1 & 0xFF) << 16);
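
To see why the masks matter: a Java byte holding 0x90 or 0xAB widens to a negative int and smears ones across the upper bits. A minimal sketch of the pitfall (the names b1..b3 mirror the line above, with b1 as the most significant byte; the class wrapper is only there to make it runnable):

public class SignedByteDemo {
    public static void main(String[] args) {
        byte b1 = (byte) 0x12;  // most significant byte
        byte b2 = (byte) 0x90;  // negative as a signed Java byte
        byte b3 = (byte) 0xAB;  // least significant byte
        // Without masks, b2 and b3 sign-extend to 0xFFFFFF90 / 0xFFFFFFAB and corrupt the result.
        int wrong = b3 | (b2 << 8) | (b1 << 16);
        // Masking with 0xFF keeps only the low 8 bits of each byte before shifting.
        int right = (b3 & 0xFF) | ((b2 & 0xFF) << 8) | ((b1 & 0xFF) << 16);
        System.out.printf("wrong = 0x%08X, right = 0x%08X%n", wrong, right);
        // prints: wrong = 0xFFFFFFAB, right = 0x001290AB
    }
}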

Java Byte Operation - Converting 3 Byte To Integer Data

This will give you the value:

(byte3[0] & 0xff) << 16 | (byte3[1] & 0xff) << 8 | (byte3[2] & 0xff)

This assumes the byte array is 3 bytes long. If you also need to convert shorter arrays, you can use a loop.

The conversion in the other direction (int to bytes) can be written with shifts and casts like this:

byte3[0] = (byte)(hexData >> 16);
byte3[1] = (byte)(hexData >> 8);
byte3[2] = (byte)(hexData);
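
Putting the two directions together, a small round-trip sketch; the names byte3 and hexData echo the snippets above, while the helper methods and class wrapper are assumptions for illustration:

public class ThreeByteRoundTrip {
    // Pack the low 24 bits of an int into a big-endian 3-byte array.
    static byte[] toBytes(int hexData) {
        return new byte[] { (byte) (hexData >> 16), (byte) (hexData >> 8), (byte) hexData };
    }
    // Reassemble the int, masking each byte to avoid sign extension.
    static int toInt(byte[] byte3) {
        return (byte3[0] & 0xff) << 16 | (byte3[1] & 0xff) << 8 | (byte3[2] & 0xff);
    }
    public static void main(String[] args) {
        int hexData = 0x123456;
        System.out.printf("0x%06X -> 0x%06X%n", hexData, toInt(toBytes(hexData)));  // 0x123456 -> 0x123456
    }
}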

3 byte signed value to 4byte signed value

You could test the sign bit, and if it's 0, prepend your number with 0x00; otherwise prepend it with 0xff.
Like so (warning: didn't test it):

int yourCustomNumberIsStoredHere = 0xffe8a4; // example 24-bit value (int is 4 bytes, so in a real program it may live in a byte buffer instead)
int result = yourCustomNumberIsStoredHere;
if (yourCustomNumberIsStoredHere & (0x80 << 16)) // test bit 23, the sign bit of the 24-bit value
    result |= 0xff << 24;                        // fill the top byte with ones

A reference on the topic: Wikipedia: Two's complement (how negative numbers are stored in memory).
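
For comparison, in Java the same widening needs no explicit branch, because >> on an int is an arithmetic (sign-preserving) shift: shift the 24-bit sign bit up to bit 31, then shift back down. A minimal sketch (the method name is an assumption):

public class SignExtend24 {
    // Widen a 24-bit two's-complement value held in the low 3 bytes of an int.
    static int signExtend24(int value24) {
        return (value24 << 8) >> 8;
    }
    public static void main(String[] args) {
        System.out.println(signExtend24(0xffe8a4));  // negative 24-bit value: prints -5980
        System.out.println(signExtend24(0x7fffff));  // positive 24-bit value: prints 8388607
    }
}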

Changing endianness on 3 byte integer

If you receive the value in network order (that is, big-endian), you have this situation:

myarray[0] = most significant byte
myarray[1] = middle byte
myarray[2] = least significant byte

So this should work:

/* if myarray holds signed chars, mask each element with 0xFF first to avoid sign extension */
int result = (((int) myarray[0]) << 16) | (((int) myarray[1]) << 8) | ((int) myarray[2]);
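
If you also need to emit the value with the opposite byte order, it is the same assembly run in reverse. A hedged Java sketch (helper names are assumptions; the & 0xFF masks are required because Java bytes are signed, whereas the snippet above assumes unsigned array elements):

public class Swap24 {
    // Interpret a 3-byte big-endian buffer as an int.
    static int fromBigEndian(byte[] a) {
        return ((a[0] & 0xFF) << 16) | ((a[1] & 0xFF) << 8) | (a[2] & 0xFF);
    }
    // Write the same value back out with the bytes reversed (little-endian).
    static byte[] toLittleEndian(int value) {
        return new byte[] { (byte) value, (byte) (value >> 8), (byte) (value >> 16) };
    }
    public static void main(String[] args) {
        byte[] little = toLittleEndian(fromBigEndian(new byte[] { 0x12, 0x34, 0x56 }));
        System.out.printf("%02X %02X %02X%n", little[0], little[1], little[2]);  // prints 56 34 12
    }
}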

C# 3 byte Ints

Can you? Sure. Will it save any space? Maybe, depending on how much work you want to do. You have to understand that the processor is 32-bit, meaning it has 4-byte registers, so that's how it wants to store and access things. To force a 3-byte "int" you'd have to keep it in a byte array and extract it to an aligned location before use. That means that if you try to store it in only 3 bytes, the compiler will either pad it out (and you'll lose any efficiency you think you've gained) or it will be a lot slower to read and write.

If this is a desktop app, how exactly is saving space a primary consideration, especially when we're talking about 1 byte per element? The performance penalty for element access may change your mind about how critical that one byte is.

I'd argue that if that 1 byte truly is important, then maybe, just maybe, you're using the wrong language anyway. The number of bytes you'd save by not installing and using the CLR in the first place is far more than that.

Side note: You'd also do a shift, not a multiplication (though the compiler would likely get there for you).

Why does the compiler not give an error for this addition operation?

The answer is provided by JLS 15.26.2:

For example, the following code is correct:

short x = 3;

x += 4.6;

and results in x having the value 7 because it is equivalent to:

short x = 3;

x = (short)(x + 4.6);

So, as you can see, the latter case actually works because the addition assignment (like any other compound assignment operator) performs an implicit cast to the left-hand type (and in your case a is a byte). By extension, it is equivalent to byte e = (byte)(a + b);, which compiles happily.
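
A quick way to see both rules side by side; the names a, b, and e echo the answer, and the rest of the sketch is just illustration:

public class CompoundAssignCast {
    public static void main(String[] args) {
        byte a = 10;
        byte b = 20;
        // byte c = a + b;        // does not compile: a + b is promoted to int
        byte d = (byte) (a + b);  // an explicit narrowing cast is required
        byte e = a;
        e += b;                   // compiles: equivalent to e = (byte) (e + b)
        short x = 3;
        x += 4.6;                 // compiles: equivalent to x = (short) (x + 4.6)
        System.out.println(d + " " + e + " " + x);  // prints 30 30 7
    }
}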


