Convert Bytes to Floating Point Numbers?
>>> import struct
>>> struct.pack('f', 3.141592654)
b'\xdb\x0fI@'
>>> struct.unpack('f', b'\xdb\x0fI@')
(3.1415927410125732,)
>>> struct.pack('4f', 1.0, 2.0, 3.0, 4.0)
b'\x00\x00\x80?\x00\x00\x00@\x00\x00@@\x00\x00\x80@'
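Note that 'f' is 4-byte single precision, so the round-trip above loses digits; packing with 'd' (8-byte double precision) preserves a Python float exactly:

```python
import struct

pi = 3.141592654

# 'f' packs to 4 bytes of single precision: the round-trip loses digits
print(struct.unpack('f', struct.pack('f', pi))[0])   # 3.1415927410125732

# 'd' packs to 8 bytes of double precision: the round-trip is exact
print(struct.unpack('d', struct.pack('d', pi))[0])   # 3.141592654
```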
How to convert ten bytes to floating point number
C code to build an array of ten bytes from ten decimal values and then interpret that array as a long double is straightforward, provided your compiler's long double implementation is compatible with this format. The following program consumes ten decimal byte values given on the command line; obviously it would be straightforward to obtain the numbers from somewhere else instead, perhaps from stdin or from a file.
#include <stdlib.h>
#include <stdio.h>
#include <string.h>

void
show_longdouble(const unsigned char array[])
{
    long double ld = 0;
    int ix;

    for (ix = 0; ix < 10; ++ix) {
        printf("%u ", array[ix]);
    }
    /* memcpy avoids the alignment and strict-aliasing problems of
       casting the byte array to a long double pointer */
    memcpy(&ld, array, 10);
    printf("=> %Lf\n", ld);
}

int
main(int argc, char *argv[])
{
    if (11 == argc) {
        unsigned char vals[10];
        int ix;

        for (ix = 0; ix < 10; ++ix) {
            sscanf(argv[ix + 1], "%hhu", &vals[ix]);
        }
        show_longdouble(vals);
    }
    return 0;
}
Running this program gives:
$ ./gnu-long-double 0 0 0 0 0 45 17 188 22 64
0 0 0 0 0 45 17 188 22 64 => 12325165.000000
$ ./gnu-long-double 0 0 0 0 0 248 30 196 20 64
0 0 0 0 0 248 30 196 20 64 => 3213246.000000
If you wanted to do the conversion manually instead of relying on the C library's printf, the format is what Wikipedia calls the "x86 extended precision format" and is described at https://en.wikipedia.org/wiki/Extended_precision#x86_extended_precision_format
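For reference, a minimal Python sketch of that manual decoding (the function name is my own; normal numbers and zero only, Inf/NaN/denormals omitted):

```python
def x87_extended_to_float(b):
    """Decode a 10-byte little-endian x86 extended-precision value.

    Sketch only: handles normal numbers and zero, not Inf/NaN/denormals.
    """
    frac = int.from_bytes(b[:8], 'little')   # 64-bit significand, explicit integer bit
    se = int.from_bytes(b[8:], 'little')     # sign bit + 15-bit biased exponent
    sign = -1.0 if se >> 15 else 1.0
    exp = se & 0x7FFF
    if exp == 0 and frac == 0:
        return sign * 0.0
    # value = significand * 2**(exponent - bias - 63), bias = 16383
    return sign * frac * 2.0 ** (exp - 16383 - 63)

print(x87_extended_to_float(bytes([0, 0, 0, 0, 0, 45, 17, 188, 22, 64])))   # 12325165.0
print(x87_extended_to_float(bytes([0, 0, 0, 0, 0, 248, 30, 196, 20, 64])))  # 3213246.0
```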
Convert Bytes to Floating Point Numbers WITHOUT using STRUCT
Like this:
def int_from_bytes(inbytes):
    res = 0
    shft = 0
    for b in inbytes:        # iterating a bytes object yields ints in Python 3
        res |= b << shft
        shft += 8
    return res

def float_from_bytes(inbytes):
    bits = int_from_bytes(inbytes)
    mantissa = (bits & 8388607) / 8388608.0
    exponent = (bits >> 23) & 255
    sign = 1.0 if bits >> 31 == 0 else -1.0
    if exponent != 0:
        mantissa += 1.0      # normal number: implicit leading 1
    else:
        exponent = 1         # zero or subnormal: E = -126, no implicit 1
    return sign * pow(2.0, exponent - 127) * mantissa

print(float_from_bytes(b'\x9a\xa3\x14\xbe'))
print(float_from_bytes(b'\x00\x00\x00\x40'))
print(float_from_bytes(b'\x00\x00\xC0\xbf'))
output:
-0.14515534043312073
2.0
-1.5
The format is IEEE-754 floating point. Try this out to see what each bit means: https://www.h-schmidt.net/FloatConverter/IEEE754.html
How does Python convert bytes into float?
When passed a bytes object, float() treats the contents of the object as ASCII bytes. That's sufficient here, as the conversion from string to float only accepts ASCII digits and letters, plus . and _ anyway (the only non-ASCII codepoints that would be permitted are whitespace codepoints), and this is analogous to the way int() treats bytes input.
Under the hood, the implementation does this:
- Because the input is not a string, PyNumber_Float() is called on the object (for str objects the code jumps straight to PyFloat_FromString()). PyNumber_Float() checks for a __float__ method, but if that's not available, it calls PyFloat_FromString().
- PyFloat_FromString() accepts not only str objects, but any object implementing the buffer protocol. The String in the name is a Python 2 holdover; the Python 3 str type is called Unicode in the C implementation. bytes objects implement the buffer protocol, and the PyBytes_AS_STRING macro is used to access the internal C buffer holding the bytes.
- A combination of two internal functions named _Py_string_to_number_with_underscores() and float_from_string_inner() is then used to parse the ASCII bytes into a floating point value.
For actual str strings, the CPython implementation converts any non-ASCII string into a sequence of ASCII bytes by looking only at ASCII codepoints in the input value and converting any non-ASCII whitespace character to ASCII 0x20 spaces, to then use the same _Py_string_to_number_with_underscores() / float_from_string_inner() combo.
I see this as a bug in the documentation and have filed an issue with the Python project to have it updated.
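In practice, the behaviour described above looks like this (note that any buffer-protocol object is accepted, not just bytes):

```python
# float() parses ASCII bytes, and indeed any buffer-protocol object
print(float(b'3.14'))             # 3.14
print(float(bytearray(b'-1e3')))  # -1000.0
print(float(memoryview(b'2.5')))  # 2.5
print(int(b'42'))                 # 42, analogous behaviour for int()
```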
How to convert two bytes to floating point number
Just a VB.Net translation of the C code posted by njuffa. The original structure has been substituted with a Byte array and the numeric data types adapted to .Net types. That's all.
Dim data As Byte(,) = New Byte(,) {
    {7, 241}, {254, 255}, {9, 156}, {9, 181}, {9, 206}, {9, 231}, {13, 0}, {137, 12}, {9, 25},
    {137, 37}, {9, 50}, {15, 2}, {9, 75}, {137, 87}, {9, 100}, {137, 112}, {2, 0}, {199, 13},
    {7, 15}, {71, 16}, {135, 17}, {15, 6}, {7, 20}, {71, 21}, {135, 22}, {199, 23}, {4, 0}
}

Dim byte1, byte2 As Byte
Dim word, code As UShort
Dim nValue As Integer
Dim result As Double

For i As Integer = 0 To (data.Length \ 2 - 1)
    byte1 = data(i, 0)
    byte2 = data(i, 1)
    word = (byte2 * 256US) + byte1

    If (word Mod 2) = 1 Then
        code = (word \ 2US) Mod 8US
        nValue = ((word \ 16) Xor 2048) - 2048
        Select Case code
            Case 0 : result = nValue * 5000
            Case 1 : result = nValue * 500
            Case 2 : result = nValue / 20
            Case 3 : result = nValue / 200
            Case 4 : result = nValue / 2000
            Case 5 : result = nValue / 20000
            Case 6 : result = nValue / 16
            Case 7 : result = nValue / 64
        End Select
    Else
        'Unscaled 15-bit integer in h<15:1>. Extract, sign extend to 32 bits
        nValue = ((word \ 2) Xor 16384) - 16384
        result = nValue
    End If

    Console.WriteLine($"[{byte1,3:D}, {byte2,3:D}] number = {nValue:X8} result ={result,12:F8}")
Next
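The same decoding logic can be sketched in Python for comparison (decode_word is a hypothetical helper of my own, mirroring the loop body above: bit 0 selects scaled vs. raw, bits 3..1 hold the scale code, and the signed value sits in the bits above):

```python
def decode_word(byte1, byte2):
    """Decode one two-byte value using the scheme above (sketch)."""
    word = byte2 * 256 + byte1
    if word % 2 == 1:
        # scaled value: 3-bit scale code in bits 3..1, signed 12-bit field above
        code = (word // 2) % 8
        n = ((word // 16) ^ 2048) - 2048   # sign-extend the 12-bit field
        if code == 0:
            return n * 5000
        if code == 1:
            return n * 500
        divisors = {2: 20, 3: 200, 4: 2000, 5: 20000, 6: 16, 7: 64}
        return n / divisors[code]
    # unscaled 15-bit signed integer in bits 15..1
    return ((word // 2) ^ 16384) - 16384

print(decode_word(7, 241))    # -1.2
print(decode_word(254, 255))  # -1
```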
JavaScript - Convert bytes into float in a clean way
Is there a way to convert bytes back into the Float32?
You don't need to convert it; it's already there! You just need to read it from the Float32 view. However, in your example you didn't save a reference to the Float32 view.
Typed arrays work very differently from other numbers in JavaScript. The key is to think about the buffer and the views independently: Float32Array and Uint8Array are merely views into a buffer (a buffer is just a fixed-size contiguous block of memory, which is why typed arrays are so fast).
In your example, when you called new Float32Array you passed it an array with a single number to initialise it, but you didn't pass it a buffer; this causes it to create a buffer for you of the appropriate length (4 bytes). When you called new Uint8Array you passed it a buffer instead; this doesn't merely copy the buffer, it actually uses it directly. The example below is equivalent to yours, but retains all references and makes the above assertions more obvious:
const number = Math.PI
const buffer = new ArrayBuffer(4);
const f32 = new Float32Array(buffer); // [0]
const ui8 = new Uint8Array(buffer); // [0, 0, 0, 0]
f32[0] = number;
f32 // [3.1415927410125732]
ui8 // [219, 15, 73, 64]
ui8[3] = 1;
f32 // [3.6929245196445856e-38]
ui8 // [219, 15, 73, 1]
As you can see there is no need to "convert" above, as both views share the same buffer, any change via one view is instantly available in the other.
This is actually a good way to play with and understand floating point formats. You can also use ui8[i].toString(2) to get the raw binary of each byte, and ui8[i] = parseInt('01010101', 2) to set the raw binary of each byte, where i is 0-3. Note that you cannot set the raw binary through the f32 view, as it will interpret your number numerically and break it into the significand and exponent; however, you may want to try that to see how a numerical value is converted into the float32 format.
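The same shared-buffer idea can be reproduced in Python with memoryview casts over a single bytearray, which may help when experimenting outside the browser (the byte values shown assume a little-endian machine):

```python
import math
import struct

buf = bytearray(4)               # the shared 4-byte buffer
f32 = memoryview(buf).cast('f')  # float32 view of the buffer
ui8 = memoryview(buf)            # byte view of the same buffer

f32[0] = math.pi
print(list(ui8))                 # [219, 15, 73, 64] on a little-endian machine
ui8[3] = 1                       # poke one byte; the float view sees the change
print(f32[0])
print(bytes(buf) == struct.pack('f', f32[0]))  # True: both views share the memory
```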
Converting Bytes to Fixed point
The f designator is for floating point, which is entirely different from fixed point. You just need to convert the value to an integer and divide by 2**24.
>>> x = 0x00d4f9c1
>>> x/(1<<24)
0.8319359421730042
>>>
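Starting from raw bytes rather than a hex literal, the same interpretation (assuming big-endian byte order) is:

```python
raw = bytes([0x00, 0xd4, 0xf9, 0xc1])
x = int.from_bytes(raw, 'big')   # 0x00d4f9c1
print(x / (1 << 24))             # 0.8319359421730042
```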
How can I translate these bytes to float?
The bytes you have are IEEE-754 encodings of the numbers 46870, 46829.55078125, 46870.1015625, 46870, and 46917.20703125.
To decode them, copy the bytes into a float
object in little-endian order, then interpret them as that float
object. Details of how to do this will depend on the programming language used, which the question does not state.
To decode them manually, write out the 32 bits of each four bytes, with the bits of the fourth byte first (in the high-value positions), then the bits of the third byte, then the second, then the first. From those 32 bits, take the first one as a sign bit s. Take the next eight as bits for an exponent code e. Take the last 23 as bits for a significand code f.
Decode the sign bit: Let S = (−1)^s.
Decode the exponent bits: Interpret them as an unsigned eight-bit numeral, e. Then:
- If e is 255 and f is zero, then the number represented is +∞ or −∞ according to whether S is +1 or −1. The decoding is done, stop.
- If e is 255 and f is not zero, the data represents a NaN (Not a Number), and f contains supplementary information. In typical implementations, if the high bit of f is set, the NaN is quiet; otherwise it is signaling. The decoding is done, stop.
- If e is zero, let E = −126 and let F = 0.
- Otherwise, let E = e−127 and let F = 1.
Decode the significand bits: Let F = F + f·2^−23.
The number represented is S · F · 2^E.
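The steps above can be sketched in Python (the function name is my own; little-endian input assumed, and the quiet/signaling NaN distinction is not modelled):

```python
def decode_ieee754_single(b):
    """Manual decode of a 4-byte little-endian IEEE-754 single (sketch)."""
    bits = int.from_bytes(b, 'little')
    s = bits >> 31              # sign bit
    e = (bits >> 23) & 0xFF     # exponent code
    f = bits & 0x7FFFFF         # significand code
    S = -1.0 if s else 1.0
    if e == 255:                # Inf or NaN
        return S * float('inf') if f == 0 else float('nan')
    if e == 0:                  # zero or subnormal
        E, F = -126, 0.0
    else:                       # normal number: implicit leading 1
        E, F = e - 127, 1.0
    F += f * 2.0 ** -23
    return S * F * 2.0 ** E

print(decode_ieee754_single(bytes([0, 0x16, 0x37, 0x47])))  # 46870.0
```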
How to convert byte array to float values in ObjectiveC?
I agree with meaning-matters. It seems your byte order is reversed with respect to how the number should be stored as a floating point number. Since your hex value is just a different representation of the bits that make up the floating point number, you do not have to do anything more than tell the compiler to treat those bytes as a float. This works once you have placed the bytes in the correct order.
fourbytearray[0] = GolfResult[0];
fourbytearray[1] = GolfResult[1];
fourbytearray[2] = GolfResult[2];
fourbytearray[3] = GolfResult[3];
float result;
memcpy(&result, fourbytearray, sizeof result); // safer than *(float *)fourbytearray, which breaks aliasing rules
I tried your value 0x4148e7e1 and got 12.5566111.
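As a quick cross-check (in Python, since the bytes themselves are language-independent), the same big-endian bytes decode to the value quoted above:

```python
import struct

# 0x4148e7e1 interpreted as a big-endian IEEE-754 single
print(struct.unpack('>f', bytes.fromhex('4148e7e1'))[0])  # ≈ 12.5566111
```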
How to convert a byte array to float in Python
Apparently the Scala code that encodes the value uses a different byte order than the Python code that decodes it.
Make sure you use the same byte order (endianness) in both programs.
In Python, you can change the byte order used to decode the value by using >f or <f instead of f. See https://docs.python.org/3/library/struct.html#struct-alignment.
>>> import struct
>>> b = b'\xc2\xdatZ'
>>> struct.unpack('f', b) # native byte order (little-endian on my machine)
(1.7230105268977664e+16,)
>>> struct.unpack('>f', b) # big-endian
(-109.22724914550781,)