Why Use Hex

Why use hex?

In both cases you cite, the bit pattern of the number is important, not the actual number.

For example,
In the first case,
j is going to be 1, then 2, 4, 8, 16, 32, 64 and finally 128 as the loop progresses.

In binary, that is,

0000:0001, 0000:0010, 0000:0100, 0000:1000, 0001:0000, 0010:0000, 0100:0000 and 1000:0000.

There's no option for binary constants in C (until C23) or C++ (until C++14), but it's a bit clearer in Hex:
0x01, 0x02, 0x04, 0x08, 0x10, 0x20, 0x40, and 0x80.

In the second example,
the goal was to remove the lower two bytes of the value.
So given a value of 1,234,567,890 we want to end up with 1,234,567,168.

In hex, it's clearer: start with 0x4996:02d2, end with 0x4996:0000.

Why use hexadecimal constants?

It is likely for organizational and visual cleanliness. Base 16 has a much simpler relationship to binary than base 10, because in base 16 each digit corresponds to exactly four bits.

Notice how, in the above, the constants share many digits. Written in decimal, the bits they have in common would be far less obvious; conversely, values that share decimal digits do not necessarily have similar bit patterns.

Also, in many situations it is desired to be able to bitwise-OR constants together to create a combination of flags. If the value of each constant is constrained to only have a subset of the bits non-zero, then this can be done in a way that can be re-separated. Using hex constants makes it clear which bits are non-zero in each value.

There are two other possibilities worth mentioning. Octal (base 8) simply encodes three bits per digit. And then there is binary-coded decimal, in which each digit occupies four bits but digit values above 9 are prohibited; that is a disadvantage, because it cannot represent all of the patterns that four binary bits can.

Why do computers use the hex number system in assembly language?

Well, it doesn't make a difference how you represent them, but humans don't read binary numbers easily. Binary only exists to make the computer's life easier, since it works with just two states, true and false. So, to make binary numbers (instructions) human-readable, the hexadecimal number system was adopted for representing assembly instructions. The convention has its roots in the history of computers.

For example we can represent this binary number

11010101110100110010001100111010 in

hex as 0xd5d3233a
octal as 32564621472
decimal as 3587384122

As you can see, the shorter forms are easier to read and less error-prone for humans, and the hex form maps most directly back to the underlying bits.

Is it more efficient to use hexadecimal instead of decimal?

There is absolutely no performance difference between the format of numeric literals in a source language, because the conversion is done by the compiler. The only reason to switch from one representation to another is readability of your code.

Two common cases for using hexadecimal literals are representing colors and bit masks. Since color representation is often split at byte boundaries, parsing a number 0xFF00FF is much easier than 16711935: hex format tells you that the red and blue components are maxed out, while the green component is zero. Decimal format, on the other hand, requires you to perform the conversion.

Bit masks are similar: when you use hex or octal representation, it is very easy to see what bits are ones and what bits are zero. All you need to learn is a short table of sixteen bit patterns corresponding to hex digits 0 through F. You can immediately tell that 0xFF00 has the upper eight bits set to 1, and the lower eight bits set to 0. Doing the same with 65280 is much harder for most programmers.

What is the use of hexadecimal values in programming?

In many cases (e.g. bit masks) you need to work in binary, but binary is hard to read because of its length. Since hexadecimal values are much easier to translate to and from binary than decimal values, you can look at hex values as a kind of shorthand notation for binary values.

what is the benefit of using Hexadecimal to define a variables in php?

Hex numbers are very easy to convert to binary. Each hexadecimal digit represents four binary digits (bits), and the primary use of hexadecimal notation is as a human-friendly representation of binary-coded values in computing and digital electronics. That is why programmers use them.

The maximum value of 1 byte is 255 in decimal, or 0xFF in hex, while, for example,

the maximum of 3 bytes is 16777215 in decimal, or 0xFFFFFF in hex. You can see that the hex numbers are shorter, because each digit can take six extra values (A-F) on top of 0-9.

When you define constant as hex you can get a lot of advantages. For example:

  1. You can use them as flags for PHP functions.
  2. Numbers in hex are easier to remember, because they are shorter.

Edit

Ad 1. Flags can obviously be used with any other numeric base, but hex makes them easier to understand and more readable.

Why use a hex literal for an int?

In some cases a hex value is rounder and more understandable than its decimal equivalent, like 0x0FFF, 0xA0A0, or 0x10000.

Why use hex values instead of normal base 10 numbers?

Because hex corresponds much more closely to bits than decimal numbers do. Each hex digit corresponds to 4 bits (a nibble). So, once you've learned the bit pattern associated with each hex digit (0-F), you can write something like "I want a mask for the low-order byte":

0xff

or, "I want a mask for the bottom 31 bits":

0x7fffffff

Just for reference:

HEX    BIN
0 -> 0000
1 -> 0001
2 -> 0010
3 -> 0011
4 -> 0100
5 -> 0101
6 -> 0110
7 -> 0111
8 -> 1000
9 -> 1001
A -> 1010
B -> 1011
C -> 1100
D -> 1101
E -> 1110
F -> 1111

