How to Convert Bytes to a Float Value in Swift

How to convert bytes to a float value in Swift?

<44fa0000> is the big-endian memory representation of the
binary floating-point number 2000.0. To get the number back from
the data, you have to read it into a UInt32 first, convert from
big-endian to host byte order, and then reinterpret the resulting
bit pattern as a Float.

In Swift 2 that would be

func floatValueFromData(data: NSData) -> Float {
    return unsafeBitCast(UInt32(bigEndian: UnsafePointer(data.bytes).memory), Float.self)
}

Example:

let bytes: [UInt8] =  [0x44, 0xFA, 0x00, 0x00]
let data = NSData(bytes: bytes, length: 4)

print(data) // <44fa0000>
let f = floatValueFromData(data)
print(f) // 2000.0

In Swift 3 you would use Data instead of NSData, and the
unsafeBitCast can be replaced by the Float(bitPattern:)
initializer:

func floatValue(data: Data) -> Float {
    return Float(bitPattern: UInt32(bigEndian: data.withUnsafeBytes { $0.pointee }))
}

In Swift 5 the withUnsafeBytes() method of Data calls the closure with an (untyped) UnsafeRawBufferPointer, and you can load() the value from the raw memory:

func floatValue(data: Data) -> Float {
    return Float(bitPattern: UInt32(bigEndian: data.withUnsafeBytes { $0.load(as: UInt32.self) }))
}
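
For example, with the same bytes as in the Swift 2 example above:

let data = Data([0x44, 0xFA, 0x00, 0x00])
print(floatValue(data: data)) // 2000.0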

How to convert 4 bytes to a Swift float?

Drop the & on &bytes. bytes is an array.

var bytes: Array<UInt8> = [0x9A, 0x99, 0x99, 0x41] // 19.2

var f: Float = 0.0

memccpy(&f, bytes, 4, 4) // as per OP. memcpy(&f, bytes, 4) preferred

println("f=\(f)") // f=19.2000007629395

Update Swift 3

memccpy does not seem to work in Swift 3. As commenters have said, use memcpy:

import Foundation

var bytes: Array<UInt8> = [0x9A, 0x99, 0x99, 0x41] // 19.2

var f: Float = 0.0

/* Not in Swift 3:
memccpy(&f, bytes, 4, 4) // as per OP.

print("f=\(f)") // f=19.2
*/

memcpy(&f, bytes, 4)

print("f=\(f)") // f=19.2

Swift: How to convert Bytes into a float / get a more precise number?

The problem is that you're trying to turn little-endian UInt32 values into Float merely by "reinterpreting" the same bit patterns as a new value (that's what Float(bitPattern:) is for), but that's not at all how Float stores its data. Swift's Float and Double datatypes are implementations of the 32- and 64-bit floating-point types from IEEE 754. There are plenty of online resources that explain it, but the TL;DR is that they store numbers in a way similar to scientific notation: a significand (mantissa) scaled by a power-of-two exponent.
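
To see the difference concretely, compare reinterpreting bits with converting a value:

let raw: UInt32 = 0x3F800000   // IEEE 754 bit pattern of 1.0
print(Float(bitPattern: raw))  // 1.0 (reinterprets the bits)
print(Float(raw))              // ≈ 1.0653532e+09 (converts the numeric value)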

I think part of your difficulty comes from trying to do too much at once. Break it down into small pieces. Write a function that takes your data, and decomposes it into the 3 UInt32 components. Then write a separate function that does whatever transformation you want on those components, such as turning them into floats. Here's a rough example:

import Foundation

func createTestData(x: UInt32, y: UInt32, z: UInt32) -> Data {
    return [x, y, z]
        .map { UInt32(littleEndian: $0) }
        .withUnsafeBufferPointer { Data(buffer: $0) }
}

func decode(data: Data) -> (x: UInt32, y: UInt32, z: UInt32) {
    let values = data.withUnsafeBytes { bufferPointer in
        bufferPointer
            .bindMemory(to: UInt32.self)
            .map { rawBitPattern in
                return UInt32(littleEndian: rawBitPattern)
            }
    }

    assert(values.count == 3)
    return (x: values[0], y: values[1], z: values[2])
}

func transform(ints: (x: UInt32, y: UInt32, z: UInt32))
    -> (x: Float, y: Float, z: Float) {
    let transform: (UInt32) -> Float = { Float($0) / 1000 } // define whatever transformation you need
    return (transform(ints.x), transform(ints.y), transform(ints.z))
}

let testData = createTestData(x: 123, y: 456, z: 789)
print(testData) // => 12 bytes
let decodedVector = decode(data: testData)
print(decodedVector) // => (x: 123, y: 456, z: 789)
let intsToFloats = transform(ints: decodedVector)
print(intsToFloats) // => (x: 0.123, y: 0.456, z: 0.789)

How to convert a pair of bytes into a Float using Swift

The issue is that you are not supposed to initialize your Float with the bitPattern initializer, nor use the UInt32(littleEndian:) initializer here. What you need is to convert those two bytes to an Int16, convert that to Float, and then multiply by the factor 9.81/2048 to get the acceleration.

Expanding on that, you can create a Numeric initializer that takes any object conforming to DataProtocol (e.g. Data or a byte array [UInt8]):

extension Numeric {
    init<D: DataProtocol>(_ data: D) {
        var value: Self = .zero
        let size = withUnsafeMutableBytes(of: &value, { data.copyBytes(to: $0) })
        assert(size == MemoryLayout.size(ofValue: value))
        self = value
    }
}

Then you can initialize your Int16 object with the subdata (two bytes).

let bytes: [UInt8] = [3, 4, 250, 255, 199, 249, 91, 191]
let xData = bytes[2..<4]
let yData = bytes[4..<6]
let zData = bytes[6..<8]

let factor: Float = 9.81/2048

let xAxis = Float(Int16(xData)) * factor
let yAxis = Float(Int16(yData)) * factor
let zAxis = Float(Int16(zData)) * factor

print("x:", xAxis, "y:", yAxis, "z:", zAxis) // x: -0.028740235 y: -7.6305327 z: -79.27036

How to convert a float value to byte array in Swift?

Float to NSData:

var float1 : Float = 40.0
let data = NSData(bytes: &float1, length: sizeofValue(float1))
print(data) // <00002042>

... and back to Float:

var float2 : Float = 0
data.getBytes(&float2, length: sizeofValue(float2))
print(float2) // 40.0

(The same would work for other "simple" types like Double,
Int, ...)
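
For instance, the same round trip with a Double (same Swift 2 syntax as above):

var double1 : Double = 40.0
let ddata = NSData(bytes: &double1, length: sizeofValue(double1))
print(ddata) // <00000000 00004440>

var double2 : Double = 0
ddata.getBytes(&double2, length: sizeofValue(double2))
print(double2) // 40.0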

Update for Swift 3, using the new Data type:

var float1 : Float = 40.0
let data = Data(buffer: UnsafeBufferPointer(start: &float1, count: 1))
print(data as NSData) // <00002042>

let float2 = data.withUnsafeBytes { $0.pointee } as Float
print(float2) // 40.0

(See also round trip Swift number types to/from Data)

Update for Swift 4 and later:

var float1 : Float = 40.0
let data = Data(buffer: UnsafeBufferPointer(start: &float1, count: 1))

let float2 = data.withUnsafeBytes { $0.load(as: Float.self) }
print(float2) // 40.0

Remark: load(as:) requires the data to be properly aligned, for Float that would be on a 4 byte boundary. See e.g. round trip Swift number types to/from Data for other solutions which work for arbitrarily aligned data.
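
Since Swift 5.7 there is also loadUnaligned(as:), which lifts the alignment requirement for trivial types such as Float:

// No alignment requirement, unlike load(as:)
let float3 = data.withUnsafeBytes { $0.loadUnaligned(as: Float.self) }
print(float3) // 40.0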

Swift: extract float from byte data

The floating point types have a static _fromBitPattern method that builds a value from a bit pattern, and <Type>._BitsType is a type alias for the correctly sized unsigned integer:

let data: [Byte] = [0x00, 0x00, 0x00, 0x40, 0x86, 0x66, 0x66, 0x00]
let dataPtr = UnsafePointer<Byte>(data)
let byteOffset = 3
let bits = UnsafePointer<Float._BitsType>(dataPtr + byteOffset)[0].bigEndian
let f = Float._fromBitPattern(bits)

You don't see that method in auto-completion, but it's a part of the FloatingPointType protocol. There's an instance method that will give you back the bits, called ._toBitPattern().
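
Note that these underscored APIs no longer exist in current Swift; the public replacements are the Float(bitPattern:) initializer and the bitPattern property:

let g = Float(bitPattern: 0x40866666) // 4.2
let bits = g.bitPattern               // 0x40866666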

How to convert bytes to half-floats in Swift?

There is no 16-bit floating point type in Swift, but you can convert
the results to 32-bit floating point numbers (Float).
This thread

  • 32-bit to 16-bit Floating Point Conversion

contains a lot of information about the half-precision floating-point format and various conversion methods. The crucial hint, however, is in Ian Ollman's answer:

On OS X / iOS, you can use vImageConvert_PlanarFtoPlanar16F and
vImageConvert_Planar16FtoPlanarF. See Accelerate.framework.

Ian provided no code, however, so here is a possible implementation
in Swift:

func areaHistogram(image: UIImage) {

    let inputImage = CIImage(image: image)

    let totalBytes: Int = bpp * BINS // e.g. 8 * 64 (8 bytes per RGBAh pixel, 64 bins)
    let bitmap = calloc(1, totalBytes)

    let filter = CIFilter(name: "CIAreaHistogram")!
    filter.setValue(inputImage, forKey: kCIInputImageKey)
    filter.setValue(CIVector(x: 0, y: 0, z: image.size.width, w: image.size.height), forKey: kCIInputExtentKey)
    filter.setValue(BINS, forKey: "inputCount")
    filter.setValue(1, forKey: "inputScale")

    let myEAGLContext = EAGLContext(API: .OpenGLES2)
    let options = [kCIContextWorkingColorSpace : kCFNull]
    let context: CIContext = CIContext(EAGLContext: myEAGLContext, options: options)
    context.render(filter.outputImage!, toBitmap: bitmap, rowBytes: totalBytes,
                   bounds: filter.outputImage!.extent, format: kCIFormatRGBAh,
                   colorSpace: CGColorSpaceCreateDeviceRGB())

    // *** CONVERSION FROM 16-BIT TO 32-BIT FLOAT ARRAY STARTS HERE ***

    let comps = 4 // Number of components (RGBA)

    // Array for the RGBA values of the histogram:
    var rgbaFloat = [Float](count: comps * BINS, repeatedValue: 0)

    // Source and destination buffer structures for the vImage conversion function:
    var srcBuffer = vImage_Buffer(data: bitmap, height: 1, width: UInt(comps * BINS), rowBytes: bpp * BINS)
    var dstBuffer = vImage_Buffer(data: &rgbaFloat, height: 1, width: UInt(comps * BINS), rowBytes: comps * sizeof(Float) * BINS)

    // Half-precision float to Float conversion of the entire buffer:
    if vImageConvert_Planar16FtoPlanarF(&srcBuffer, &dstBuffer, 0) == kvImageNoError {
        for bin in 0 ..< BINS {
            let R = rgbaFloat[comps * bin + 0]
            let G = rgbaFloat[comps * bin + 1]
            let B = rgbaFloat[comps * bin + 2]
            print("R/G/B = \(R) \(G) \(B)")
        }
    }

    free(bitmap)
}

Remarks:

  • You need to import Accelerate.
  • Note that your code allocates totalBytes * bpp bytes instead
    of the necessary totalBytes.
  • The kCIFormatRGBAh pixel format is not supported on the Simulator (as of Xcode 7), so you have to test the code on a real device.

Update: Swift 5.3 (Xcode 12) introduces a new Float16 type, available in iOS 14; see SE-0277 Float16 on Swift Evolution.

This simplifies the code because a conversion to Float is no longer necessary. I have also removed the use of OpenGL functions, which are deprecated as of iOS 12:

func areaHistogram(image: UIImage, bins: Int) -> [Float16] {

    let comps = 4 // Number of components (RGBA)

    let inputImage = CIImage(image: image)
    var rgbaFloat = [Float16](repeating: 0, count: comps * bins)
    let totalBytes = MemoryLayout<Float16>.size * comps * bins

    let filter = CIFilter(name: "CIAreaHistogram")!
    filter.setValue(inputImage, forKey: kCIInputImageKey)
    filter.setValue(CIVector(x: 0, y: 0, z: image.size.width, w: image.size.height), forKey: kCIInputExtentKey)
    filter.setValue(bins, forKey: "inputCount")
    filter.setValue(1, forKey: "inputScale")

    let options: [CIContextOption: Any] = [.workingColorSpace: NSNull()]
    let context = CIContext(options: options)

    rgbaFloat.withUnsafeMutableBytes {
        context.render(filter.outputImage!, toBitmap: $0.baseAddress!, rowBytes: totalBytes,
                       bounds: filter.outputImage!.extent, format: CIFormat.RGBAh,
                       colorSpace: CGColorSpaceCreateDeviceRGB())
    }
    return rgbaFloat
}
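
Usage, assuming a hypothetical myImage of type UIImage:

let histogram = areaHistogram(image: myImage, bins: 64)
print(histogram.count) // 256 (64 bins × 4 RGBA components)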

How to convert a ContiguousArray of Floats into a byte array in Swift?

You can use the withUnsafeBytes() method to get a buffer pointer to the underlying bytes of the array's contiguous storage, and directly initialize an [UInt8] array from that buffer pointer. Example:

let floatArray: [Float] = [1.0, 2.0]
// Works also with a ContiguousArray:
// let floatArray: ContiguousArray<Float> = [1.0, 2.0]

let byteArray = floatArray.withUnsafeBytes { Array($0) }
print(byteArray) // [0, 0, 128, 63, 0, 0, 0, 64]

Equivalently (based on Leo's suggestion):

let byteArray = floatArray.withUnsafeBytes(Array.init)

The byte array contains the binary representation of the floating point numbers in host byte order (which is little-endian on all current Apple platforms). A conversion to big-endian is possible, but not without an intermediate copy to an integer array:

let floatArray: ContiguousArray<Float> = [1.0, 2.0]
let intArray = floatArray.map { $0.bitPattern.bigEndian }
let byteArray = intArray.withUnsafeBytes(Array.init)
print(byteArray) // [63, 128, 0, 0, 64, 0, 0, 0]

Reverse conversion: A simple method would be

let floatArray2 = byteArray.withUnsafeBytes { Array($0.bindMemory(to: Float.self)) }
print(floatArray2) // [1.0, 2.0]

However, that requires that the element storage of the byte array is properly aligned for floating point numbers. If that is not guaranteed then you can do

var floatArray2 = [Float](repeating: 0.0, count: byteArray.count / MemoryLayout<Float>.stride)
_ = floatArray2.withUnsafeMutableBytes { byteArray.copyBytes(to: $0) }
print(floatArray2) // [1.0, 2.0]

