Java - char, int conversions
The first example (which compiles) is special because both operands of the addition are literals.
A few definitions to start with:
Converting an int to char is called a narrowing primitive conversion, because char is a smaller type than int. 'A' + 1 is a constant expression. A constant expression is (basically) an expression whose result is always the same and can be determined at compile time. In particular, 'A' + 1 is a constant expression because both operands of + are literals.
A narrowing conversion is allowed during assignments to byte, short and char, if the right-hand side of the assignment is a constant expression:
In addition, if the expression [on the right-hand side] is a constant expression of type byte, short, char, or int:
- A narrowing primitive conversion may be used if the variable is of type byte, short, or char, and the value of the constant expression is representable in the type of the variable.
c + 1 is not a constant expression, because c is a non-final variable, so a compile-time error occurs for the assignment. From looking at the code, we can determine that the result is always the same, but the compiler isn't allowed to do that in this case.
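A minimal sketch of both sides of the rule (the class name is illustrative):

```java
public class NarrowingDemo {
    public static void main(String[] args) {
        char c = 'A' + 1;       // compiles: 'A' + 1 is a constant expression (66)
        System.out.println(c);  // prints "B"

        int x = 1;
        // char d = 'A' + x;    // would NOT compile: x is not a constant,
        //                      // so 'A' + x has type int
        char d = (char) ('A' + x);  // an explicit cast is required instead
        System.out.println(d);  // prints "B"
    }
}
```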
One interesting thing we can do is this:
final char a = 'a';
char b = a + 1;
In that case a + 1 is a constant expression, because a is a final variable which is initialized with a constant expression.
The caveat "if […] the value […] is representable in the type of the variable" means that the following would not compile:
char c = 'A' + 99999;
The value of 'A' + 99999 (which is 100064, or 0x186E0) is too big to fit into a char, because char is an unsigned 16-bit integer.
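The boundary can be checked directly at run time; a small sketch (class name is illustrative):

```java
public class CharRangeDemo {
    public static void main(String[] args) {
        char max = '\uFFFF';                  // the largest char value
        System.out.println((int) max);        // prints 65535
        System.out.println(max == Character.MAX_VALUE); // prints true

        // char tooBig = 65536;       // would not compile: not representable in char
        // char c = 'A' + 99999;      // same error as described above
    }
}
```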
As for the postfix ++ operator:

The type of the postfix increment expression is the type of the variable.
...
Before the addition, binary numeric promotion* is performed on the value 1 and the value of the variable. If necessary, the sum is narrowed by a narrowing primitive conversion and/or subjected to boxing conversion to the type of the variable before it is stored.

(* Binary numeric promotion takes byte, short and char operands of operators such as + and converts them to int or some other larger type. Java doesn't do arithmetic on integral types smaller than int.)
In other words, the statement c++; is mostly equivalent to:
c = (char)(c + 1);
(The difference is that the result of the expression c++, if we assigned it to something, is the value of c before the increment.)
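To illustrate (the class name is mine):

```java
public class IncrementDemo {
    public static void main(String[] args) {
        char c = 'A';
        c++;                        // allowed: the implicit narrowing is built in
        System.out.println(c);      // prints "B"

        // c = c + 1;               // would NOT compile: c + 1 has type int
        c = (char) (c + 1);         // the explicit equivalent
        System.out.println(c);      // prints "C"

        char before = c++;          // the value of c++ is the pre-increment value
        System.out.println(before); // prints "C"
        System.out.println(c);      // prints "D"
    }
}
```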
The other increments and decrements have very similar specifications.
Compound assignment operators such as += automatically perform narrowing conversion as well, so expressions such as c += 1 (or even c += 3.14) are also allowed.
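For example (a sketch; the class name is illustrative), c += 3.14 expands to c = (char)(c + 3.14):

```java
public class CompoundDemo {
    public static void main(String[] args) {
        char c = 'A';       // 65
        c += 1;             // c = (char)(c + 1); now 66, i.e. 'B'
        c += 3.14;          // c = (char)(c + 3.14); 66 + 3.14 = 69.14, narrowed to 69
        System.out.println(c);  // prints "E"
    }
}
```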
How can I convert a char to int in Java?
The ASCII table is arranged so that the value of the character '9'
is nine greater than the value of '0'
; the value of the character '8'
is eight greater than the value of '0'
; and so on.
So you can get the int value of a decimal digit char by subtracting '0'.
char x = '9';
int y = x - '0'; // gives the int value 9
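For other radixes, or when the input may not be a digit at all, the standard library's Character.digit is a safer alternative:

```java
public class DigitDemo {
    public static void main(String[] args) {
        System.out.println('9' - '0');                 // prints 9
        System.out.println(Character.digit('9', 10));  // prints 9
        System.out.println(Character.digit('f', 16));  // prints 15
        System.out.println(Character.digit('x', 10));  // prints -1 (not a digit)
    }
}
```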
Java char is also an int?
The Java Language Specification states:

When a return statement with an Expression appears in a method declaration, the Expression must be assignable (§5.2) to the declared return type of the method, or a compile-time error occurs.
where the rules governing whether one value is assignable to another are defined as:

Assignment contexts allow the use of one of the following:
- a widening primitive conversion (§5.1.2)

and

19 specific conversions on primitive types are called the widening primitive conversions:
- char to int, long, float, or double
and finally

A widening primitive conversion does not lose information about the overall magnitude of a numeric value in the following cases, where the numeric value is preserved exactly: [...] A widening conversion of a char to an integral type T zero-extends the representation of the char value to fill the wider format.
In short, a char value as the expression of a return statement is assignable to a return type of int through widening primitive conversion.
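In code (the method and class names are illustrative):

```java
public class ReturnDemo {
    static int codePointOf(char c) {
        return c;   // widening primitive conversion char -> int, no cast needed
    }

    public static void main(String[] args) {
        System.out.println(codePointOf('a'));  // prints 97
    }
}
```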
Do chars have intrinsic int values in Java?
a is of type char, and chars can be implicitly converted to int. a is represented by 97, as this is the code point of the Latin small letter a.
System.out.println('a'); // this will print out "a"
// If we cast it explicitly:
System.out.println((int)'a'); // this will print out "97"
// Here the cast is implicit:
System.out.println('a' + 0); // this will print out "97"
The first call is to println(char), and the other calls are to println(int).
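Binary numeric promotion also explains what happens when both operands are chars; a small sketch (class name is mine):

```java
public class OverloadDemo {
    public static void main(String[] args) {
        System.out.println('a');        // println(char):   prints "a"
        System.out.println('a' + 'b');  // println(int):    prints 195 (97 + 98)
        System.out.println("" + 'a');   // println(String): prints "a"
    }
}
```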
Related: In what encoding is a Java char stored?
Displaying Char And Integer Together
It doesn't work because, as far as Java is concerned, a char primitive is just a 16-bit unsigned integer value. To demonstrate:
System.out.println('A' == 65); // prints true
One way to work around this would be to declare an Object array and rely on auto-boxing to have Java automatically convert your primitive values to Integer or Character wrapper objects:
Object[] arr = new Object[2];
arr[0] = '@'; // stores '@' as a Character wrapper object
arr[1] = 1; // stores 1 as an Integer wrapper object
for (Object o : arr) {
System.out.println(o);
}
Unlike primitives, wrapper objects are aware of their type, so this prints out:
@ // by calling Character.toString()
1 // by calling Integer.toString()
Note that you lose some compile-time checking in the process. Your object array will accept not only Character and Integer values but any other value as well.
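If the goal is only to display a char and an int together, formatted output keeps static typing and avoids the untyped Object array (a sketch):

```java
public class DisplayDemo {
    public static void main(String[] args) {
        char c = '@';
        int n = 1;
        System.out.printf("%c %d%n", c, n);  // prints "@ 1"
        System.out.println(c + " " + n);     // string concatenation: prints "@ 1"
    }
}
```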
Why are we allowed to assign char to a int in java?
A char is simply an unsigned 16-bit number, so since its range is a subset of the int range, the compiler can widen it without any ambiguity.
Difference between char and int when declaring character
The difference is the size in bytes of the variable and, from there, the different values the variable can hold. (This answer describes C, where the sizes differ from Java's; in Java a char is always an unsigned 16-bit type.)

In C, a char is required to accept all values between 0 and 127 (inclusive). So in common environments it occupies exactly one byte (8 bits). It is unspecified by the standard whether it is signed (-128 to 127) or unsigned (0 to 255).

An int is required to be at least a 16-bit signed word, and to accept all values between -32767 and 32767. That means that an int can accept all values from a char, whether the latter is signed or unsigned.
If you want to store only characters in a variable, you should declare it as char. Using an int would just waste memory, and could mislead a future reader. One common exception to that rule is when you need to represent a special out-of-band value. For example, the function fgetc from the C standard library is declared as returning int:
int fgetc(FILE *fd);
because the special value EOF (for End Of File) is defined as the int value -1 (all bits set to one in a two's-complement system), which lies outside the range of a char. That way, no char (only 8 bits on a common system) can be equal to the EOF constant. If the function were declared to return a plain char, nothing could distinguish the EOF value from the (valid) character 0xFF.
That's the reason why the following code is bad and should never be used:
char c; // a terrible memory saving...
...
while ((c = fgetc(stdin)) != EOF) { // NEVER WRITE THAT!!!
...
}
Inside the loop, a char would be enough, but for the test not to succeed when reading character 0xFF, the variable needs to be an int.