Why Do C#'s Binary Operators Always Return Int Regardless of the Format of Their Inputs


Why do C#'s bitwise operators always return int regardless of the format of their inputs?

I disagree with always. This works and the result of a & b is of type long:

long a = 0xffffffffffff;
long b = 0xffffffffffff;
long x = a & b;

The return type is not int if one or both of the arguments are long, ulong or uint.
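
For instance, the same holds with uint operands (a minimal sketch; the variable names are mine and var just shows what the compiler infers):

uint u1 = 0xffffffff;
uint u2 = 0x0000ffff;
var y = u1 & u2;    // the compiler infers uint for y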


Why do C#'s bitwise operators return int if their inputs are bytes?

The result of byte & byte is an int because there is no & operator defined on byte. (Source)

An & operator exists for int, and there is also an implicit cast from byte to int, so when you write byte1 & byte2 it is effectively the same as writing ((int)byte1) & ((int)byte2), and the result of that is an int.
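
A minimal sketch of what that means in practice (the variable names are mine):

byte b1 = 0x0f;
byte b2 = 0xf0;
// byte b3 = b1 & b2;        // does not compile: the result of b1 & b2 is int
int asInt = b1 & b2;         // fine, the operator actually used is int & int
byte b3 = (byte)(b1 & b2);   // fine with an explicit cast back to byte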

C# Bitwise OR needs casting with byte *sometimes*

I suspect that the line:

byte b = BIT_ZERO_SET | BIT_ONE_SET;

is actually processed by the C# compiler into an assignment of a constant value to b, rather than a bitwise operation - it can do this because the right side of the expression is fully defined at compile time.

The line:

//b = b | BIT_TWO_SET;

doesn't compile because the bitwise OR operator promotes its operands and evaluates to an int, not a byte. Since it involves a runtime value (b), it cannot be folded into a constant assignment like the line before, and requires a cast.
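
To make that concrete, here is a sketch along the lines of the question (the BIT_* declarations are my guess at how they were defined):

const byte BIT_ZERO_SET = 0x01;
const byte BIT_ONE_SET  = 0x02;
const byte BIT_TWO_SET  = 0x04;

byte b = BIT_ZERO_SET | BIT_ONE_SET;   // OK: constant expression, folded to 0x03 at compile time
// b = b | BIT_TWO_SET;                // does not compile: b | BIT_TWO_SET is an int
b = (byte)(b | BIT_TWO_SET);           // fine with an explicit cast
b |= BIT_TWO_SET;                      // also fine: compound assignment casts back implicitly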

Why is ushort + ushort equal to int?

The simple and correct answer is "because the C# Language Specification says so".

Clearly you are not happy with that answer and want to know "why does it say so". You are looking for "credible and/or official sources", that's going to be a bit difficult. These design decisions were made a long time ago, 13 years is a lot of dog lives in software engineering. They were made by the "old timers" as Eric Lippert calls them, they've moved on to bigger and better things and don't post answers here to provide an official source.

It can be inferred, however, at the risk of being merely credible. Any managed compiler, like C#'s, has the constraint that it needs to generate code for the .NET virtual machine. The rules for that are carefully (and quite readably) described in the CLI spec. That is the Ecma-335 spec, which you can download for free from the Ecma website.

Turn to Partition III, chapters 3.1 and 3.2. They describe the two IL instructions available to perform an addition, add and add.ovf. Click the link to Table 2, "Binary Numeric Operations"; it describes which operands are permissible for those IL instructions. Note that only a few types are listed there: byte and short, as well as all unsigned types, are missing. Only int, long, IntPtr and floating point (float and double) are allowed, with additional constraints marked by an x; you can't add an int to a long, for example. These constraints are not entirely artificial; they are based on what you can do reasonably efficiently on available hardware.

Any managed compiler has to deal with this in order to generate valid IL. That isn't difficult: simply convert the ushort to a larger value type that is in the table, a conversion that is always valid. The C# compiler picks int, the next larger type that appears in the table. Or, in general, convert both operands to the next largest value type so they have the same type and meet the constraints in the table.

Now there's a new problem, however, one that drives C# programmers pretty nutty. The result of the addition is of the promoted type. In your case that will be int. So adding two ushort values of, say, 0x9000 and 0x9000 has a perfectly valid int result: 0x12000. Problem is: that's a value that doesn't fit back into a ushort. The value overflowed. But it didn't overflow in the IL calculation; it only overflows when the compiler tries to cram it back into a ushort. 0x12000 is truncated to 0x2000. A bewilderingly different value that only makes some sense when you count with 2 or 16 fingers, not with 10.
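
In code, that looks like this (a minimal sketch):

ushort a = 0x9000;
ushort b = 0x9000;
int sum = a + b;                  // 0x12000, the int result of the addition is fine
ushort truncated = (ushort)sum;   // 0x2000, the high bits are silently thrown away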

Notably, the add.ovf instruction doesn't deal with this problem. It is the instruction to use to automatically generate an overflow exception. But it doesn't help here: the actual calculation on the converted ints didn't overflow.

This is where the real design decision comes into play. The old-timers apparently decided that simply truncating the int result to ushort was a bug factory. It certainly is. They decided that you have to acknowledge that you know that the addition can overflow and that it is okay if it happens. They made it your problem, mostly because they didn't know how to make it theirs and still generate efficient code. You have to cast. Yes, that's maddening, I'm sure you didn't want that problem either.

Quite notably, the VB.NET designers took a different solution to the problem. They actually made it their problem and didn't pass the buck. You can add two UShorts and assign the result to a UShort without a cast. The difference is that the VB.NET compiler actually generates extra IL to check for the overflow condition. That's not cheap code; it makes every short addition about three times as slow. But it is part of the reason Microsoft maintains two languages that otherwise have very similar capabilities.

Long story short: you are paying a price because you use a type that's not a very good match with modern CPU architectures. Which in itself is a Really Good Reason to use uint instead of ushort. Getting traction out of ushort is difficult; you'll need a lot of them before the memory savings outweigh the cost of manipulating them. Not just because of the limited CLI spec: an x86 core also takes an extra CPU cycle to load a 16-bit value because of the operand prefix byte in the machine code. Not actually sure if that is still the case today, it used to be back when I still paid attention to counting cycles. A dog year ago.


Do note that you can feel better about these ugly and dangerous casts by letting the C# compiler generate the same checks that the VB.NET compiler generates, so you get an OverflowException when the cast turns out to be unwise. Use Project > Properties > Build tab > Advanced button > tick the "Check for arithmetic overflow/underflow" checkbox, just for the Debug build. Why this checkbox isn't turned on automatically by the project template is another very mystifying question btw, a decision that was made too long ago.
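
That checkbox corresponds to the compiler's overflow-check option; the checked keyword gives you the same behaviour for a single expression or block, which is handy if you only want the guard in a few places:

ushort a = 0x9000;
ushort b = 0x9000;
ushort c = checked((ushort)(a + b));    // throws OverflowException instead of truncating
ushort d = unchecked((ushort)(a + b));  // 0x2000, explicitly opting back in to truncation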

Why is a cast required for byte subtraction in C#?

Because the subtraction coerces the operands up to int. byte is an unsigned type in C#, so subtracting one byte from another can take you outside the range of a byte.
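
A minimal sketch of what goes wrong and how the cast behaves (the values are mine):

byte x = 1;
byte y = 2;
// byte d = x - y;            // does not compile: x - y is an int
int asInt = x - y;            // -1, which no byte can represent
byte wrapped = (byte)(x - y); // 255: the cast just keeps the low 8 bits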

Convert a positive number to negative in C#

How about

myInt = myInt * -1;
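
Unary minus does the same job and reads a little more directly:

myInt = -myInt;

(Both forms share the same edge case: negating int.MinValue overflows, since its positive counterpart doesn't fit in an int.)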

Checking if a bit is set or not

sounds a bit like homework, but:

// Returns true when the bit at position pos (0 = least significant) is set in b.
bool IsBitSet(byte b, int pos)
{
    return (b & (1 << pos)) != 0;
}

pos 0 is the least significant bit, pos 7 is the most significant.
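
A quick usage sketch, assuming the method above is in scope:

byte flags = 0x05;                      // bits 0 and 2 set
Console.WriteLine(IsBitSet(flags, 0));  // True
Console.WriteLine(IsBitSet(flags, 1));  // False
Console.WriteLine(IsBitSet(flags, 2));  // True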


