Why Some Types Do Not Have Literal Modifiers

Why does long int have a literal modifier, but short int does not?

The question is "why does C# not have this feature?" The answer to that question is always the same. Features are unimplemented by default; C# does not have that feature because no one designed, implemented and shipped the feature to customers.

The absence of a feature does not need justification. Rather, all features must be justified by showing that their benefits outweigh their costs. As the person proposing the feature, the onus is on you to describe why you think the feature is valuable; the onus is not on me to explain why it is not.

Probably there is a strong reason to provide literal modifiers for some types, but not for all. What is it?

Now that is a more answerable question. Now the question is "what justifies the literal suffix on long, and why is that not also a justification for a similar literal suffix on short?"

Integers can be used for a variety of purposes. You can use them as arithmetic numbers. You can use them as collections of bit flags. You can use them as indices into arrays. And there are lots of more special-purpose usages. But I think it is fair to say that most of the time, integers are used as arithmetical numbers.

The vast majority of calculations performed on integers by normal programs involve numbers that are far, far smaller than the range of a 32-bit signed integer -- roughly +/- two billion. And lots of modern hardware is extremely efficient when dealing solely with 32-bit integers. It therefore makes sense to make the default representation of numbers a signed 32-bit integer. C# is therefore designed to make calculations involving 32-bit signed integers look perfectly normal: when you say "x = x + 1", that "1" is understood to be a signed 32-bit integer, odds are good that x is too, and so is the result of the sum.
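
A minimal C# sketch of that default (the variable name is just for illustration):

int x = 0;
x = x + 1; // the literal 1 is an int, and the addition is 32-bit signed arithmetic
Console.WriteLine((1).GetType()); // prints System.Int32: an unsuffixed integer literal defaults to int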

What if the calculation is integral but does not fit into the range of a 32-bit integer? "long" 64-bit integers are a sensible next step up; they are also efficient on a lot of hardware, and longs have a range that should satisfy the needs of pretty much anyone who isn't doing heavy-duty combinatorics involving extremely large numbers. It therefore makes sense to have some way to specify clearly and concisely in source code that this literal here is to be treated as a long integer.
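
As a sketch of why the suffix matters (the names here are invented for the example): without it, the right-hand side below is computed entirely in int and overflows before the result is ever widened to long.

int days = 30;
long bad = days * 24 * 60 * 60 * 1000;   // int arithmetic: silently overflows (2,592,000,000 > int.MaxValue)
long good = days * 24L * 60 * 60 * 1000; // the L suffix promotes the whole calculation to long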

Interop scenarios, or scenarios in which integers are used as bit fields, often require the use of unsigned integers. Again, it makes sense to have a way to clearly and concisely specify that this literal is intended to be treated as an unsigned integer.
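
A small sketch of that usage (the constant names are made up for the example):

const uint HighBit = 0x80000000u;   // too large for a signed int; the u suffix makes the unsigned intent explicit
uint flags = HighBit | 0x0000FFFFu; // bit-flag work stays in unsigned arithmetic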

So, summing up: when you see "1", the vast majority of the time the user intends it to be used as a 32-bit signed integer. The next most likely cases are that the user intends it to be a long, a uint or a ulong. Therefore there are concise suffixes for each of those cases.

Thus, the feature is justified.

Why is that not a justification for shorts?

Because first, in every context in which a short is legal, it is already legal to use an integer literal. "short x = 1;" is perfectly legal; the compiler realizes that the integer fits into a short and lets you use it.
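
For instance (the second variable is just there to show the failure case):

short x = 1;      // fine: the constant fits in a short, so the compiler converts it implicitly
short y = 100000; // compile error: the constant 100000 cannot be converted to a short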

Second, arithmetic is never done in shorts in C#. Arithmetic can be done in ints, uints, longs and ulongs, but never in shorts. Shorts promote to int and the arithmetic is done in ints because, as I said before, the vast majority of arithmetic calculations fit into an int. The vast majority do not fit into a short. Short arithmetic is possibly slower on modern hardware, which is optimized for ints, and short arithmetic does not take up any less space; it's going to be done in ints or longs on the chip anyway.
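
You can see the promotion directly in a short sketch (the variable names are illustrative):

short a = 2, b = 3;
int sum = a + b;          // fine: the operands are promoted and the addition is done in int
short c = (short)(a + b); // an explicit cast is required to narrow the int result back to short
// short d = a + b;       // compile error: a + b has type int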

You want a "long" suffix to tell the compiler "this arithmetic needs to be done in longs" but a "short" suffix doesn't tell the compiler "this arithmetic needs to be done in shorts" because that's simply not a feature of the C# language to begin with.

The reasons for providing a long suffix and unsigned suffixes do not apply to shorts. If you think there is a compelling benefit to the feature, state what the benefit is. Without a benefit to justify its costs, the feature will not be implemented in C#.

Why are there type modifiers for constants?

For integer literals, apart from what's in Bathsheba's answer, suffixes are also useful in various cases, such as suppressing warnings:

unsigned int n = somevalue;
...
if (n > 5) dosomething(); // may warn: comparison between signed and unsigned integers

Change it to if (n > 5U) and there'll be no more warnings.

Or when you write something like this:

long long x = 1 << 50; // 1 is an int, so the shift is done in int and overflows (undefined behaviour)

and realize that x is not what you expected, you need to change it to:

long long x = 1LL << 50; // now the shift is done in long long

Another use is with the auto keyword in C++11:

auto a = 1;
auto b = 1U;
auto c = 1L;
auto d = 1UL;

The above will result in different types for the variables: int, unsigned int, long and unsigned long, respectively.

For floating-point literals, using a suffix gives a more accurate result:

long double a = 0.01234567890123456789;  // rounded to double first, then converted to long double
long double b = 0.01234567890123456789L; // rounded directly to long double

Those may hold very different values. That's because a literal without a suffix is a double literal and is correctly rounded to double; when long double has more precision than double, that early rounding loses precision. The same occurs with floats, due to double rounding: the literal is rounded first to double and then to float, instead of being rounded directly to float.

if (0.67 == 0.67f)
    std::cout << "Equal";
else
    std::cout << "Not Equal";

The above will print out "Not Equal", because 0.67f (rounded to float) is a different approximation of the decimal value than 0.67 (rounded to double).

Is the assertion that value literals have no type false?

It is supposedly common knowledge that literal values (and #defines) have no type, yet a type can be specified with the literal. In other words: is the assertion that value literals have no type false?

It is. Literals all have types, as specified in section 2.14 of the C++11 standard. Preprocessor macros are replaced before literals are interpreted.

Can a value literal without a suffix and without decimals (like 100) always be considered to be of type int?

No; a decimal literal has the first of the types int, long int and long long int that can represent its value. Octal and hex literals may also be given an unsigned type if necessary. Before 2011, long long wasn't considered, since it wasn't a standard type.

So 100 will have type int since it is small enough to be representable by int.

What are the type and qualifiers of text literals, taking their prefixes into account?

With no prefix, it's an array of const char, large enough to hold all the characters and the zero terminator. So "Text" has type char const[5].

With prefixes, the character type changes accordingly (wchar_t for L, char16_t for u, char32_t for U); the array size is still large enough for all the characters, including the terminator.

Why are number suffixes necessary?

Inferring the literal's type from context is fine for simple cases like:

float f = 7;

but it's far better to be explicit so that you don't have to worry about statements like:

float f = (double)(int)(1 / 3 + 6e22 / (double)7);

Yes, I know that's a contrived example but we don't know the intent of the coder from just the type on the left, especially if it's not being assigned to a variable at all (such as being passed as an argument to an overloaded function that can take one of int, float, double, decimal et al).
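
As a sketch of that overload scenario (Print and its overloads are hypothetical):

static void Print(int x) => Console.WriteLine("int");
static void Print(float x) => Console.WriteLine("float");
static void Print(double x) => Console.WriteLine("double");

Print(1);   // picks the int overload
Print(1f);  // the f suffix picks the float overload
Print(1.0); // an unsuffixed real literal is a double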

C# compiler number literals

var x = 0;  // x is int (no suffix)
var y = 0f; // y is float (single)
var z = 0d; // z is double
var r = 0m; // r is decimal
var i = 0U; // i is unsigned int (uint)
var j = 0L; // j is long (note capital L for clarity)
var k = 0UL; // k is unsigned long (ulong; note capital L for clarity)

From the C# specification, 2.4.4.2 Integer literals and 2.4.4.3 Real literals. Note that L and UL are preferred over their lowercase variants for clarity, as recommended by Jon Skeet: a lowercase l is easily confused with the digit 1.

With C#, why are 'in' parameters not usable in local functions?

The documentation on local functions states the following:

Variable capture

Note that when a local function captures variables in the enclosing scope, the local function is implemented as a delegate type.

And looking at lambdas:

Capture of outer variables and variable scope in lambda expressions

A lambda expression can't directly capture an in, ref, or out parameter from the enclosing method.

The reason is simple: it's not possible to lift these parameters into a class, due to ref escaping problems, and that is exactly what would be necessary in order to capture them.
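
As a rough sketch of the problem (DisplayClass stands in for the compiler-generated closure type, which has a different real name): capturing a variable means lifting it into a heap-allocated class, and an ordinary class cannot hold a ref field.

// What the compiler would have to generate to capture 'something':
class DisplayClass
{
    public SomeType something;              // a copy can be stored as a field...
    // ref readonly SomeType somethingRef;  // ...but a reference cannot: classes may not contain ref fields
}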

Example

public Func<int> DoSomething(in SomeType something)
{
    int local()
    {
        return something.anInt; // not legal C#: an 'in' parameter cannot be captured; suppose it were allowed
    }
    return local;
}

Suppose this function is called like this:

public Func<int> Mystery()
{
    SomeType ghost = new SomeType();
    return DoSomething(ghost);
}

public void Scary()
{
    var later = Mystery();
    Thread.Sleep(5000);
    later(); // oops
}

The Mystery function creates a ghost and passes it as an in parameter to DoSomething, which means that it is passed as a read-only reference to the ghost variable.

The DoSomething function captures this reference into the local function local, and then returns that function as a Func<int> delegate.

When the Mystery function returns, the ghost variable no longer exists. The Scary function then uses the delegate to call the local function, and local will try to read the anInt property from a nonexistent variable. Oops.

The "You may not capture reference parameters (in, out, ref) in delegates" rule prevents this problem.

You can work around this problem by making a copy of the in parameter and capturing the copy:

public Func<int> DoSomething2(in SomeType something)
{
    var copy = something; // copy the value out of the read-only reference
    int local()
    {
        return copy.anInt; // captures the copy, an ordinary local, which is safe to lift
    }
    return local;
}

Note that the returned delegate operates on the copy, not on the original ghost. This means the delegate will always have a valid copy to get anInt from. However, it also means that any future changes to ghost will have no effect on the copy.

public int Mystery2()
{
    SomeType ghost = new SomeType() { anInt = 42 };
    var later = DoSomething2(ghost);
    ghost = new SomeType() { anInt = -1 };
    return later(); // returns 42, not -1
}

Is it possible to change the modifier of variables and methods in an interface?

Declaring a class to implement an interface is a promise that this class provides these methods, or extensions thereof. Increasing/extending access to the methods is allowed, but decreasing access is not, because then we would break our promise.

This is similar to classes and extensions of them, subclasses. A subclass has to provide all methods of the superclass with at least the visibility they were declared with.

The reason for this is the statically typed nature of Java. If a type could differ from its initial declaration in a way that breaks access to it, the whole static type system would be undermined.

Fields (also called member variables) are different, in that their access cannot be extended. Declaring another member variable of the same name in a subclass is exactly that: another variable. By doing this, you hide the superclass's variable, and you may have to use the keyword super to access it.


