How to Use Bit Field with Swift to Store Values with More Than 1 Bit

Swift simply does not support bit fields, so you can only

  • use the next larger integer type instead (in your case Int8) and accept
    that the variables need more memory (see the sketch after this list), or
  • use bit operations to access the different parts of the integer.
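
For the first case, the fields simply become ordinary stored properties of the next larger integer type; a minimal sketch, with hypothetical field names and widths:

// Sketch of the first option: one full UInt8 per field, trading memory
// for straightforward access (the field names are made up for illustration).
struct Flags {
    var mode: UInt8 = 0      // would occupy e.g. 2 bits in a C bit field
    var priority: UInt8 = 0  // would occupy e.g. 3 bits in a C bit field
}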

For the second case you could define custom computed properties to ease
the access. As an example:

extension UInt8 {
    var lowNibble: UInt8 {
        get {
            return self & 0x0F
        }
        set(newValue) {
            self = (self & 0xF0) | (newValue & 0x0F)
        }
    }

    var highNibble: UInt8 {
        get {
            return (self & 0xF0) >> 4
        }
        set(newValue) {
            self = (self & 0x0F) | ((newValue & 0x0F) << 4)
        }
    }
}

var byte: UInt8 = 0
byte.lowNibble = 0x01
byte.highNibble = 0x02
print(byte.lowNibble)   // 1
print(byte.highNibble)  // 2

Bit field larger than 64 shifts in Swift?

So I eventually had to create my own primitive struct, which was a pain in the ass, since the library @appzYourLife provided does not actually satisfy every protocol requirement of UnsignedIntegerType. The following is a type (plus supporting operators and extensions) I wrote that actually allows me to write things like

let a: UInt256 = 30
let b: UInt256 = 1 << 98
print(a + b)

which would output to the console:

0x00000000:00000000:00000000:00000000:00000004:00000000:00000000:0000001E

The implementation is pretty lengthy and does not yet support multiplication, division, or bit-shifting values other than 1. This version also supports encoding and decoding with NSCoder.

//
// UInt256.swift
// NoodleKit
//
// Created by NoodleOfDeath on 7/10/16.
// Copyright © 2016 NoodleOfDeath. All rights reserved.
//

import Foundation

// Bit Shifting only supports lhs = 1

@warn_unused_result
public func << (lhs: UInt256, rhs: UInt256) -> UInt256 {
if lhs > 1 { print("Warning: Only supports binary bitshifts (i.e. 1 << n, where n < 256. Shifting any other numbers than 1 may result in unexpected behavior.") }
if rhs > 255 { fatalError("shift amount is larger than type size in bits") }
let shift = UInt64(rhs.parts[7]) % 32
let offset = Int(rhs.parts[7] / 32)
var parts = [UInt32]()
for i in (0 ..< 8) {
let part: UInt64 = (i + offset < 8 ? UInt64(lhs.parts[i + offset]) : 0)
let sum32 = UInt32(part << shift)
parts.append(sum32)
}
return UInt256(parts)
}

@warn_unused_result
public func >> (lhs: UInt256, rhs: UInt256) -> UInt256 {
if lhs > 1 { print("Warning: Only supports binary bitshifts (i.e. 1 << n, where n < 256. Shifting any other numbers than 1 may result in unexpected behavior.") }
if rhs > 255 { fatalError("shift amount is larger than type size in bits") }
let shift = UInt64(rhs.parts[7]) % 32
let offset = Int(rhs.parts[7] / 32)
var parts = [UInt32]()
for i in (0 ..< 8) {
let part: UInt64 = (i - offset > 0 ? UInt64(lhs.parts[i - offset]) : 0)
let sum32 = UInt32(part >> shift)
parts.append(sum32)
}
return UInt256(parts)
}

@warn_unused_result
public func == (lhs: UInt256, rhs: UInt256) -> Bool {
return lhs.parts == rhs.parts
}

@warn_unused_result
public func < (lhs: UInt256, rhs: UInt256) -> Bool {
// Lexicographic comparison, starting from the most significant part.
for i in 0 ..< 8 {
if lhs.parts[i] != rhs.parts[i] { return lhs.parts[i] < rhs.parts[i] }
}
return false
}

@warn_unused_result
public func > (lhs: UInt256, rhs: UInt256) -> Bool {
// Lexicographic comparison, starting from the most significant part.
for i in 0 ..< 8 {
if lhs.parts[i] != rhs.parts[i] { return lhs.parts[i] > rhs.parts[i] }
}
return false
}

@warn_unused_result
public func <= (lhs: UInt256, rhs: UInt256) -> Bool {
return lhs < rhs || lhs == rhs
}

@warn_unused_result
public func >= (lhs: UInt256, rhs: UInt256) -> Bool {
return lhs > rhs || lhs == rhs
}

/// Adds `lhs` and `rhs`, returning the result and trapping in case of
/// arithmetic overflow (except in -Ounchecked builds).
@warn_unused_result
public func + (lhs: UInt256, rhs: UInt256) -> UInt256 {
var parts = [UInt32]()
var carry = false
for i in (0 ..< 8).reverse() {
let lpart = UInt64(lhs.parts[i])
let rpart = UInt64(rhs.parts[i])
let sum64 = lpart + rpart + (carry ? 1 : 0)
let sum32 = UInt32((sum64 << 32) >> 32)
carry = sum64 > UInt64(UInt32.max)
parts.insert(sum32, atIndex: 0)
}
return UInt256(parts)
}

/// Adds `lhs` and `rhs`, returning the result and trapping in case of
/// arithmetic overflow (except in -Ounchecked builds).
public func += (inout lhs: UInt256, rhs: UInt256) {
lhs = lhs + rhs
}

/// Subtracts `lhs` and `rhs`, returning the result and trapping in case of
/// arithmetic overflow (except in -Ounchecked builds).
@warn_unused_result
public func - (lhs: UInt256, rhs: UInt256) -> UInt256 {
var parts = [UInt32]()
var borrow: UInt64 = 0
for i in (0 ..< 8).reverse() {
let lpart = UInt64(lhs.parts[i])
let rpart = UInt64(rhs.parts[i]) + borrow
if lpart >= rpart {
parts.insert(UInt32(lpart - rpart), atIndex: 0)
borrow = 0
} else {
// Borrow from the next more significant part.
parts.insert(UInt32(lpart + 0x1_0000_0000 - rpart), atIndex: 0)
borrow = 1
}
}
return UInt256(parts)
}

public func -= (inout lhs: UInt256, rhs: UInt256) {
lhs = lhs - rhs
}

/// Multiplies `lhs` and `rhs`, returning the result and trapping in case of
/// arithmetic overflow (except in -Ounchecked builds).
/// - Complexity: O(64)
@warn_unused_result
public func * (lhs: UInt256, rhs: UInt256) -> UInt256 {
// TODO: - Not Implemented
return UInt256()
}

public func *= (inout lhs: UInt256, rhs: UInt256) {
lhs = lhs * rhs
}

/// Divides `lhs` and `rhs`, returning the result and trapping in case of
/// arithmetic overflow (except in -Ounchecked builds).
@warn_unused_result
public func / (lhs: UInt256, rhs: UInt256) -> UInt256 {
// TODO: - Not Implemented
return UInt256()
}

public func /= (inout lhs: UInt256, rhs: UInt256) {
lhs = lhs / rhs
}

/// Divides `lhs` and `rhs`, returning the remainder and trapping in case of
/// arithmetic overflow (except in -Ounchecked builds).
@warn_unused_result
public func % (lhs: UInt256, rhs: UInt256) -> UInt256 {
// TODO: - Not Implemented
return UInt256()
}

public func %= (inout lhs: UInt256, rhs: UInt256) {
lhs = lhs % rhs
}

public extension UInt256 {

@warn_unused_result
public func toIntMax() -> IntMax {
return IntMax(bitPattern: toUIntMax())
}

@warn_unused_result
public func toUIntMax() -> UIntMax {
return (UInt64(parts[6]) << 32) | UInt64(parts[7])
}

/// Adds `lhs` and `rhs`, returning the result and a `Bool` that is
/// `true` iff the operation caused an arithmetic overflow.
public static func addWithOverflow(lhs: UInt256, _ rhs: UInt256) -> (UInt256, overflow: Bool) {
var parts = [UInt32]()
var carry = false
for i in (0 ..< 8).reverse() {
let lpart = UInt64(lhs.parts[i])
let rpart = UInt64(rhs.parts[i])
let sum64 = lpart + rpart + (carry ? 1 : 0)
let sum32 = UInt32((sum64 << 32) >> 32)
carry = sum64 > UInt64(UInt32.max)
parts.insert(sum32, atIndex: 0)
}
return (UInt256(parts), carry)
}

/// Subtracts `lhs` and `rhs`, returning the result and a `Bool` that is
/// `true` iff the operation caused an arithmetic overflow.
public static func subtractWithOverflow(lhs: UInt256, _ rhs: UInt256) -> (UInt256, overflow: Bool) {
var parts = [UInt32]()
var borrow: UInt64 = 0
for i in (0 ..< 8).reverse() {
let lpart = UInt64(lhs.parts[i])
let rpart = UInt64(rhs.parts[i]) + borrow
if lpart >= rpart {
parts.insert(UInt32(lpart - rpart), atIndex: 0)
borrow = 0
} else {
// Borrow from the next more significant part.
parts.insert(UInt32(lpart + 0x1_0000_0000 - rpart), atIndex: 0)
borrow = 1
}
}
return (UInt256(parts), borrow == 1)
}

/// Multiplies `lhs` and `rhs`, returning the result and a `Bool` that is
/// `true` iff the operation caused an arithmetic overflow.
public static func multiplyWithOverflow(lhs: UInt256, _ rhs: UInt256) -> (UInt256, overflow: Bool) {
// TODO: - Not Implemented
return (UInt256(), false)
}

/// Divides `lhs` and `rhs`, returning the result and a `Bool` that is
/// `true` iff the operation caused an arithmetic overflow.
public static func divideWithOverflow(lhs: UInt256, _ rhs: UInt256) -> (UInt256, overflow: Bool) {
// TODO: - Not Implemented
return (UInt256(), false)
}

/// Divides `lhs` and `rhs`, returning the remainder and a `Bool` that is
/// `true` iff the operation caused an arithmetic overflow.
public static func remainderWithOverflow(lhs: UInt256, _ rhs: UInt256) -> (UInt256, overflow: Bool) {
// TODO: - Not Implemented
return (UInt256(), false)
}

}

public struct UInt256 : UnsignedIntegerType, Comparable, Equatable {

public typealias IntegerLiteralType = UInt256
public typealias Distance = Int32
public typealias Stride = Int32

private let parts: [UInt32]

private var part0: UInt32 { return parts[0] }
private var part1: UInt32 { return parts[1] }
private var part2: UInt32 { return parts[2] }
private var part3: UInt32 { return parts[3] }
private var part4: UInt32 { return parts[4] }
private var part5: UInt32 { return parts[5] }
private var part6: UInt32 { return parts[6] }
private var part7: UInt32 { return parts[7] }

public static var max: UInt256 {
return UInt256([.max, .max, .max, .max, .max, .max, .max, .max])
}

public var description: String {
var hex = "0x"
for i in 0 ..< parts.count {
let part = parts[i]
hex += String(format:"%08X", part)
if i + 1 < parts.count {
hex += ":"
}
}
return "\(hex)"
}

public var componentDescription: String {
return "\(parts)"
}

public var hashValue: Int {
return (part0.hashValue + part1.hashValue + part2.hashValue + part3.hashValue + part4.hashValue + part5.hashValue + part6.hashValue + part7.hashValue).hashValue
}

public var data: NSData {
let bytes = [part0, part1, part2, part3, part4, part5, part6, part7]
return NSData(bytes: bytes, length: 32)
}

public init(_builtinIntegerLiteral builtinIntegerLiteral: _MaxBuiltinIntegerType) {
self.init(UInt64(_builtinIntegerLiteral: builtinIntegerLiteral))
}

public init() { parts = [0, 0, 0, 0, 0, 0, 0, 0] }

public init(_ newParts: [UInt32]) {
var zeros = UInt256().parts
zeros.replaceRange((8 - newParts.count ..< 8), with: newParts)
parts = zeros
}

public init(_ v: Int8) {
self.init(UInt64(v))
}

public init(_ v: UInt8) {
self.init(UInt64(v))
}

public init(_ v: Int16) {
self.init(UInt64(v))
}

public init(_ v: UInt16) {
self.init(UInt64(v))
}

public init(_ v: Int32) {
self.init(UInt64(v))
}

public init(_ v: UInt32) {
self.init(UInt64(v))
}

public init(_ v: Int) {
self.init(UInt64(v))
}

public init(_ v: UInt) {
self.init(UInt64(v))
}

public init(_ v: Int64) {
self.init(UInt64(v))
}

public init(_ v: UInt64) {
self.init([UInt32(v >> 32), UInt32((v << 32) >> 32)])
}

public init(integerLiteral value: IntegerLiteralType) {
parts = value.parts
}

public init?(data: NSData) {
var parts = [UInt32]()
let size = sizeof(UInt32)
for i in 0 ..< 8 {
var part = UInt32()
data.getBytes(&part, range: NSMakeRange(i * size, size))
parts.append(part)
}
guard parts.count == 8 else { return nil }
self.init(parts)
}

@warn_unused_result
public func advancedBy(n: Stride) -> UInt256 {
return self + UInt256(n)
}

@warn_unused_result
public func advancedBy(n: Distance, limit: UInt256) -> UInt256 {
return limit - UInt256(n) > self ? self + UInt256(n) : limit
}

@warn_unused_result
public func distanceTo(end: UInt256) -> Distance {
return end - self
}

/// Returns the previous consecutive value in a discrete sequence.
///
/// If `UInt256` has a well-defined successor,
/// `UInt256.successor().predecessor() == UInt256`. If `UInt256` has a
/// well-defined predecessor, `UInt256.predecessor().successor() ==
/// UInt256`.
///
/// - Requires: `UInt256` has a well-defined predecessor.
@warn_unused_result
public func predecessor() -> UInt256 {
return advancedBy(-1)
}

@warn_unused_result
public func successor() -> UInt256 {
return advancedBy(1)
}

}

extension UInt256 : BitwiseOperationsType {}

/// Returns the intersection of bits set in `lhs` and `rhs`.
///
/// - Complexity: O(1).
@warn_unused_result
public func & (lhs: UInt256, rhs: UInt256) -> UInt256 {
var parts = [UInt32]()
for i in 0 ..< 8 {
parts.append(lhs.parts[i] & rhs.parts[i])
}
return UInt256(parts)
}
/// Returns the union of bits set in `lhs` and `rhs`.
///
/// - Complexity: O(1).
@warn_unused_result
public func | (lhs: UInt256, rhs: UInt256) -> UInt256 {
var parts = [UInt32]()
for i in 0 ..< 8 {
parts.append(lhs.parts[i] | rhs.parts[i])
}
return UInt256(parts)
}
/// Returns the bits that are set in exactly one of `lhs` and `rhs`.
///
/// - Complexity: O(1).
@warn_unused_result
public func ^ (lhs: UInt256, rhs: UInt256) -> UInt256 {
var parts = [UInt32]()
for i in 0 ..< 8 {
parts.append(lhs.parts[i] ^ rhs.parts[i])
}
return UInt256(parts)
}
/// Returns `x ^ ~UInt256.allZeros`.
///
/// - Complexity: O(1).
@warn_unused_result
prefix public func ~ (x: UInt256) -> UInt256 {
return x ^ ~UInt256.allZeros
}

public extension UInt256 {

public static var allZeros: UInt256 {
return UInt256()
}

}

public extension NSCoder {

public func encodeUInt256(unsignedInteger: UInt256, forKey key: String) {
encodeObject(unsignedInteger.data, forKey: key)
}

public func decodeUInt256ForKey(key: String) -> UInt256 {
guard let data = decodeObjectForKey(key) as? NSData else { return UInt256() }
return UInt256(data: data) ?? UInt256()
}

}
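
To round the listing off, here is a rough sketch of how those NSCoder helpers might be used from an NSCoding class. The Wallet class is purely hypothetical, and the syntax matches the Swift 2-era API used above:

// Hypothetical class using the NSCoder helpers above (Swift 2-era syntax).
class Wallet: NSObject, NSCoding {

    var balance: UInt256 = 0

    override init() {
        super.init()
    }

    required init?(coder aDecoder: NSCoder) {
        // Falls back to zero if the key is missing (see decodeUInt256ForKey above).
        balance = aDecoder.decodeUInt256ForKey("balance")
        super.init()
    }

    func encodeWithCoder(aCoder: NSCoder) {
        // Stores the 32-byte NSData representation under the given key.
        aCoder.encodeUInt256(balance, forKey: "balance")
    }
}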

Declaring and using a bit field enum in Swift

Updated for Swift 2/3

Since Swift 2, a new solution has been added in the form of raw option sets (OptionSetType, renamed OptionSet in Swift 3; see the documentation), which is essentially the same as my original response, but using structs that allow arbitrary values.

This is the original question rewritten as an OptionSet:

struct MyOptions: OptionSet {
    let rawValue: UInt8

    static let One = MyOptions(rawValue: 0x01)
    static let Two = MyOptions(rawValue: 0x02)
    static let Four = MyOptions(rawValue: 0x04)
    static let Eight = MyOptions(rawValue: 0x08)
}

let m1: MyOptions = .One

let combined: MyOptions = [MyOptions.One, MyOptions.Four]

Combining values can be done exactly like Set operations (hence the OptionSet name), e.g. with .union:

m1.union(.Four).rawValue // Produces 5

This is the same as doing One | Four in the C equivalent. A test like One & Mask != 0 can be expressed as a non-empty intersection:

// Equivalent of A & B != 0
if !m1.intersection(combined).isEmpty {
    // m1 is contained in combined
}
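
Beyond union and intersection, the whole SetAlgebra surface is available. A small sketch continuing the example above (these lines are my own additions; the comments note the C-style equivalents):

var flags = combined             // [.One, .Four], rawValue == 5
flags.contains(.Four)            // true (like flags & Four != 0)
flags.insert(.Two)               // like flags |= Two; rawValue is now 7
flags.remove(.One)               // like flags &= ~One; rawValue is now 6
flags.formUnion([.Eight])        // like flags |= Eight; rawValue is now 14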

Weirdly enough, most of the C-style bitwise enums have been converted to their OptionSet equivalents in Swift 3, but Calendar.Component instead uses a plain Set<Enum>:

let componentKeys: Set<Calendar.Component> = [.day, .month, .year]

The original NSCalendarUnit, by contrast, was a bitwise enum. So both approaches are usable (and thus the original response below remains valid).
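
As an illustration (my own, not part of the original answer), such a set is exactly what Foundation's Calendar API expects:

import Foundation

// Swift 3+: dateComponents(_:from:) takes a Set<Calendar.Component>.
let keys: Set<Calendar.Component> = [.day, .month, .year]
let todayComponents = Calendar.current.dateComponents(keys, from: Date())
print(todayComponents.year ?? 0)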

Original Response

I think the best thing to do, is to simply avoid the bitmask syntax until the Swift devs figure out a better way.

Most of the time, the problem can be solved using an enum and a Set:

enum Options {
    case A, B, C, D
}

var options = Set<Options>(arrayLiteral: .A, .D)

An AND check (options & .A) could be written as:

options.contains(.A)

Or, for multiple "flags", it could be:

options.isSupersetOf(Set<Options>(arrayLiteral: .A, .D))

Adding new flags (options |= .C):

options.insert(.C)

This also allows for using all the new stuff with enums: custom types, pattern matching with switch case, etc.
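
For instance, a small sketch of that pattern matching (my own addition, not from the original answer):

// Because the flags are a plain enum, switching over them is exhaustive.
for option in options {
    switch option {
    case .A: print("A is set")
    case .B: print("B is set")
    case .C: print("C is set")
    case .D: print("D is set")
    }
}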

Of course, it doesn't have the efficiency of bitwise operations, nor would it be compatible with low-level things (like sending Bluetooth commands), but it's useful for UI elements, where the overhead of the UI outweighs the cost of the Set operations.

How to divide UInt8 into 3 bits and 5 bits

You can do something like this:

extension UInt8 {
    var bit3: UInt8 {
        get {
            return (self & 0b1110_0000) >> 5
        }
        set(newValue) {
            self &= 0b0001_1111
            self |= ((newValue << 5) & 0b1110_0000)
        }
    }

    var bit5: UInt8 {
        get {
            return self & 0b0001_1111
        }
        set(newValue) {
            self &= 0b1110_0000
            self |= (newValue & 0b0001_1111)
        }
    }
}

bit3 here is "top" and bit5 is "bottom".

EDIT: I had forgotten to clear the bits before setting them; this is fixed above.
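
A quick usage sketch (my own, not from the original answer), with the values chosen arbitrarily:

var byte: UInt8 = 0
byte.bit3 = 0b101                 // top 3 bits
byte.bit5 = 0b10001               // bottom 5 bits
print(String(byte, radix: 2))     // 10110001
print(byte.bit3)                  // 5
print(byte.bit5)                  // 17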

How to perform an insertion operation on a bit field?

You need to extract the bits below the insertion point and the bits above it, shift the upper bits left by one, and then recombine the parts. Thus:

var x: UInt64 = 0b1111
let index: UInt64 = 1
let lowMask: UInt64 = (1 << index) - 1
let highMask: UInt64 = ~lowMask
x = ((x & highMask) << 1) | (x & lowMask)
print(String(x, radix: 2))
// Output: 11101
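
The same masking idea can be wrapped in a small helper; this is my own sketch, and the function name and signature are invented for illustration:

// Inserts a single bit into `value` at `index`, shifting the higher bits up by one.
func insertingBit(_ bit: UInt64, into value: UInt64, at index: UInt64) -> UInt64 {
    let lowMask: UInt64 = (1 << index) - 1     // bits below the insertion point
    let high = (value & ~lowMask) << 1         // bits at/above the point, moved up
    return high | ((bit & 1) << index) | (value & lowMask)
}

print(String(insertingBit(0, into: 0b1111, at: 1), radix: 2))  // 11101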

How to work with bit operations in Swift?

You don't need to pack all the datasets into the 20-byte array until the very end, so keep them in an Int array of length 14, which is easier to work with. When you need to send the data to the hardware, convert it to a UInt8 array of length 20:

struct DataPacket {
    var dataSets = [Int](count: 14, repeatedValue: 0)

    func toCArray() -> [UInt8] {
        var result = [UInt8](count: 20, repeatedValue: 0)
        var index = 0
        var bitsRemaining = 8
        var offset = 0

        for value in self.dataSets {
            offset = 10

            while offset >= 0 {
                let mask = 1 << offset
                let bit = ((value & mask) >> offset) << (bitsRemaining - 1)
                result[index] |= UInt8(bit)

                offset -= 1
                bitsRemaining -= 1
                if bitsRemaining == 0 {
                    index += 1
                    bitsRemaining = 8
                }
            }
        }

        return result
    }
}

// Usage:
var packet = DataPacket()
packet.dataSets[0] = 0b11111111111
packet.dataSets[1] = 0b00000000011
// etc...

let arr = packet.toCArray()

There are a lot of shift operations going on, so I won't explain them all. The general idea is to pack each of those 11-bit datasets into bytes, spilling over into the next byte as necessary.
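
Tracing the loop by hand for the usage above (my own check, assuming the remaining datasets stay zero), the first packed bytes come out as:

// Continuing from the arr array produced above:
assert(arr[0] == 0b11111111)  // the first 8 bits of dataSets[0]
assert(arr[1] == 0b11100000)  // the last 3 bits of dataSets[0], then the top 5 bits of dataSets[1]
assert(arr[2] == 0b00001100)  // the remaining 6 bits of dataSets[1], then the start of dataSets[2]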

Find most significant bit in Swift

You can use the flsl() function ("find last set bit, long"):

let x = 9
let p = flsl(x)
print(p) // 4

The result is 4 because flsl() and the related functions number the bits starting at 1, the least significant bit.

On Intel platforms you can use the _bit_scan_reverse intrinsic; in my test in a macOS application this translated to a BSR instruction.

import _Builtin_intrinsics.intel

let x: Int32 = 9
let p = _bit_scan_reverse(x)
print(p) // 3
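
If you are on Swift 4 or later, the standard library exposes the same information through FixedWidthInteger, with no C function needed (this is my addition, not part of the original answer):

// Swift 4+: leadingZeroBitCount gives the most significant bit without flsl() or intrinsics.
let value = 9
let msbIndex = value.bitWidth - value.leadingZeroBitCount - 1  // 3, zero-based like _bit_scan_reverse
let flslStyle = value.bitWidth - value.leadingZeroBitCount     // 4, one-based like flsl()
// (For value == 0 these give -1 and 0 respectively.)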

How can I do it better to convert my code

Integer types can be signed (holding negative numbers as well as positive ones) or unsigned (holding only non-negative numbers). There is a lot of documentation on the internet about integer types; you should read it.

Unsigned integers are usually used to store counters, as counters are never negative.

The Swift equivalent of C's unsigned int is UInt.

The correct way is:

struct DelegateFlags {
    var didDoneClicked: UInt
    var didCancelClicked: UInt
}

Note that you should use the var keyword in Swift. Also, if you want to match Swift style guidelines, you should replace the leading underscore with an uppercase first letter.


Note

This struct looks like a C bit field used to store boolean values in a memory-efficient way, by storing each flag in 1 bit instead of 8.

If you want to match the exact C memory pattern, the Swift way is an OptionSet:

struct DelegateFlags: OptionSet {
    var rawValue: UInt

    static let didDoneClicked = DelegateFlags(rawValue: 1 << 0)
    static let didCancelClicked = DelegateFlags(rawValue: 1 << 1)
}
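
A short usage sketch for that OptionSet version (my own, hypothetical):

var flags: DelegateFlags = []
flags.insert(.didDoneClicked)
flags.contains(.didDoneClicked)    // true
flags.contains(.didCancelClicked)  // false
flags.rawValue                     // 1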

However, if you need a simpler struct that is still more memory efficient than the first one I presented, here is an alternative:

struct DelegateFlags {
    var didDoneClicked: Bool
    var didCancelClicked: Bool
}

The first struct is 16 bytes, the second one is 8 bytes (it could be made 1 byte by using UInt8 instead), and the last one is 2 bytes. As the number of cases grows, the second one will be the most memory efficient (8 times less memory than the last one and 64 times less than the first).
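
These sizes can be checked with MemoryLayout (a sketch of my own; rename the structs if you paste all three variants into one file):

// MemoryLayout reports the sizes quoted above.
print(MemoryLayout<DelegateFlags>.size)
// 16 for the two-UInt struct, 8 for the OptionSet struct, 2 for the two-Bool struct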


