How to Express Strings in Swift Using Unicode Hexadecimal Values (Utf-16)

Character

The Swift syntax for forming a hexadecimal code point is

\u{n}

where n is a hexadecimal number up to 8 digits long. The valid range for a Unicode scalar is U+0 to U+D7FF and U+E000 to U+10FFFF inclusive. (The U+D800 to U+DFFF range is for surrogate pairs, which are not scalars themselves, but are used in UTF-16 for encoding the higher value scalars.)
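The same range check surfaces at runtime through UnicodeScalar's failable initializer (a quick sketch; values in the surrogate range or above U+10FFFF produce nil):

```swift
// UnicodeScalar's failable UInt32 initializer enforces the valid scalar range.
let valid = UnicodeScalar(0x43 as UInt32)       // "C" (U+0043)
let surrogate = UnicodeScalar(0xD800 as UInt32) // nil: surrogate range
let tooBig = UnicodeScalar(0x110000 as UInt32)  // nil: beyond U+10FFFF
```

A string literal like "\u{D800}" is rejected at compile time for the same reason.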

Examples:

// The following forms are equivalent. They all produce "C". 
let char1: Character = "\u{43}"
let char2: Character = "\u{0043}"
let char3: Character = "\u{00000043}"

// Higher value Unicode scalars are done similarly
let char4: Character = "\u{203C}" // ‼ (DOUBLE EXCLAMATION MARK character)
let char5: Character = "\u{1F431}" // 🐱 (cat emoji)

// Characters can be made up of multiple scalars
let char7: Character = "\u{65}\u{301}" // é = "e" + accent mark
let char8: Character = "\u{65}\u{301}\u{20DD}" // é⃝ = "e" + accent mark + circle

Notes:

  • Leading zeros can be added or omitted
  • Characters are known as extended grapheme clusters. Even when they are composed of multiple scalars, they are still considered a single character. What is key is that they appear to be a single character (grapheme) to the user.
  • TODO: How to convert surrogate pair to Unicode scalar in Swift
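Regarding the TODO above, one approach is to apply the standard UTF-16 decoding formula by hand (a sketch; the function name is my own, not a built-in API):

```swift
// Combine a UTF-16 surrogate pair into a single Unicode scalar.
// Formula: 0x10000 + (high - 0xD800) * 0x400 + (low - 0xDC00)
func scalarFromSurrogatePair(high: UInt16, low: UInt16) -> UnicodeScalar? {
    guard (0xD800...0xDBFF).contains(high), (0xDC00...0xDFFF).contains(low) else {
        return nil // not a valid surrogate pair
    }
    let value = 0x10000 + ((UInt32(high) - 0xD800) << 10) + (UInt32(low) - 0xDC00)
    return UnicodeScalar(value)
}

let cat = scalarFromSurrogatePair(high: 0xD83D, low: 0xDC31) // U+1F431, the cat emoji
```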

String

Strings are composed of characters. See the following examples for some ways to form them using hexadecimal code points.

Examples:

var string1 = "\u{0043}\u{0061}\u{0074}\u{203C}\u{1F431}" // Cat‼🐱

// pass an array of characters to a String initializer
let catCharacters: [Character] = ["\u{0043}", "\u{0061}", "\u{0074}", "\u{203C}", "\u{1F431}"] // ["C", "a", "t", "‼", "🐱"]
let string2 = String(catCharacters) // Cat‼🐱

Converting Hex Values at Runtime

At runtime you can convert hexadecimal or Int values into a Character or String by first converting them to a UnicodeScalar.

Examples:

// hex values
let value0: UInt8 = 0x43 // 67
let value1: UInt16 = 0x203C // 8252
let value2: UInt32 = 0x1F431 // 128049

// convert hex to UnicodeScalar
let scalar0 = UnicodeScalar(value0)
// make sure that UInt16 and UInt32 form valid Unicode values
guard
    let scalar1 = UnicodeScalar(value1),
    let scalar2 = UnicodeScalar(value2) else {
        return
}

// convert to Character
let character0 = Character(scalar0) // C
let character1 = Character(scalar1) // ‼
let character2 = Character(scalar2) // 🐱

// convert to String
let string0 = String(scalar0) // C
let string1 = String(scalar1) // ‼
let string2 = String(scalar2) // 🐱

// convert hex array to String
let myHexArray = [0x43, 0x61, 0x74, 0x203C, 0x1F431] // an Int array
var myString = ""
for hexValue in myHexArray {
    if let scalar = UnicodeScalar(hexValue) {
        myString.append(Character(scalar))
    }
}
print(myString) // Cat‼🐱

Further reading

  • Strings and Characters docs
  • Glossary of Unicode Terms
  • Strings in Swift
  • Working with Unicode code points in Swift

Working with Unicode code points in Swift

Updated for Swift 3

String and Character

For almost everyone who visits this question, String and Character will be the answer.

Set Unicode values directly in code:

var str: String = "I want to visit 北京, Москва, मुंबई, القاهرة, and 서울시. 😊"
var character: Character = "🍎"

Use hexadecimal to set values

var str: String = "\u{61}\u{5927}\u{1F34E}\u{3C0}" // a大🍎π
var character: Character = "\u{65}\u{301}" // é = "e" + accent mark

Note that the Swift Character can be composed of multiple Unicode code points, but appears to be a single character. This is called an Extended Grapheme Cluster.
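One practical consequence: Swift compares strings and characters using canonical equivalence, so the two-scalar form of é is equal to the precomposed single-scalar form:

```swift
let composed: Character = "\u{E9}"          // é as one scalar (U+00E9)
let decomposed: Character = "\u{65}\u{301}" // "e" + combining accent
// Both are the same grapheme, so they compare equal.
print(composed == decomposed) // true
```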


Convert to Unicode values:

str.utf8
str.utf16
str.unicodeScalars // UTF-32

String(character).utf8
String(character).utf16
String(character).unicodeScalars
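For a concrete idea of what these views contain, here is the cat emoji (U+1F431) in each encoding (a sketch):

```swift
let cat = "\u{1F431}"
print(Array(cat.utf8))                     // [240, 159, 144, 177] — four UTF-8 bytes
print(Array(cat.utf16))                    // [55357, 56369] — a UTF-16 surrogate pair
print(cat.unicodeScalars.map { $0.value }) // [128049] — one scalar, 0x1F431
```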

Convert from Unicode hex values:

let hexValue: UInt32 = 0x1F34E

// convert hex value to UnicodeScalar
guard let scalarValue = UnicodeScalar(hexValue) else {
    // early exit if hex does not form a valid unicode value
    return
}

// convert UnicodeScalar to String
let myString = String(scalarValue) // 🍎

Or alternatively:

let hexValue: UInt32 = 0x1F34E
if let scalarValue = UnicodeScalar(hexValue) {
    let myString = String(scalarValue)
}

A few more examples

let value0: UInt8 = 0x61
let value1: UInt16 = 0x5927
let value2: UInt32 = 0x1F34E

let string0 = String(UnicodeScalar(value0))  // a
let string1 = String(UnicodeScalar(value1)!) // 大
let string2 = String(UnicodeScalar(value2)!) // 🍎

// convert hex array to String
let myHexArray = [0x43, 0x61, 0x74, 0x203C, 0x1F431] // an Int array
var myString = ""
for hexValue in myHexArray {
    if let scalar = UnicodeScalar(hexValue) {
        myString.append(Character(scalar))
    }
}
print(myString) // Cat‼🐱

Note that for UTF-8 and UTF-16 the conversion is not always this easy. (See UTF-8, UTF-16, and UTF-32 questions.)
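That said, if you already have raw UTF-16 code units (surrogate pairs included), Swift 4 and later can decode them directly with String(decoding:as:), which replaces any invalid sequence with U+FFFD rather than failing (a sketch):

```swift
// "Cat" followed by the surrogate pair for U+1F431 (cat emoji)
let units: [UInt16] = [0x0043, 0x0061, 0x0074, 0xD83D, 0xDC31]
let decoded = String(decoding: units, as: UTF16.self)
print(decoded) // Cat🐱
```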

NSString and unichar

It is also possible to work with NSString and unichar in Swift, but you should realize that unless you are familiar with Objective-C and good at converting its syntax to Swift, it will be difficult to find good documentation.

Also, unichar is a typealias for UInt16, and as mentioned above the conversion from UInt16 code units to Unicode scalar values is not always easy (i.e., converting surrogate pairs for things like emoji and other characters in the upper code planes).

Custom string structure

For the reasons mentioned in the question, I ended up not using any of the above methods. Instead I wrote my own string structure, which was basically an array of UInt32 to hold Unicode scalar values.

Again, this is not the solution for most people. First consider using extensions if you only need to extend the functionality of String or Character a little.

But if you really need to work exclusively with Unicode scalar values, you could write a custom struct.

The advantages are:

  • Don't need to constantly switch between Types (String, Character, UnicodeScalar, UInt32, etc.) when doing string manipulation.
  • After Unicode manipulation is finished, the final conversion to String is easy.
  • Easy to add more methods when they are needed
  • Simplifies converting code from Java or other languages

Disadvantages are:

  • makes code less portable and less readable for other Swift developers
  • not as well tested and optimized as the native Swift types
  • it is yet another file that has to be included in a project every time you need it

You can make your own, but here is mine for reference. The hardest part was making it Hashable.

// This struct is an array of UInt32 to hold Unicode scalar values
// Version 3.4.0 (Swift 3 update)

struct ScalarString: Sequence, Hashable, CustomStringConvertible {

    fileprivate var scalarArray: [UInt32] = []

    init() {
        // does anything need to go here?
    }

    init(_ character: UInt32) {
        self.scalarArray.append(character)
    }

    init(_ charArray: [UInt32]) {
        for c in charArray {
            self.scalarArray.append(c)
        }
    }

    init(_ string: String) {
        for s in string.unicodeScalars {
            self.scalarArray.append(s.value)
        }
    }

    // Iterator in order to conform to the Sequence protocol
    // (to allow users to iterate as in `for myScalarValue in myScalarString { ... }`)
    func makeIterator() -> AnyIterator<UInt32> {
        return AnyIterator(scalarArray.makeIterator())
    }

    // append
    mutating func append(_ scalar: UInt32) {
        self.scalarArray.append(scalar)
    }

    mutating func append(_ scalarString: ScalarString) {
        for scalar in scalarString {
            self.scalarArray.append(scalar)
        }
    }

    mutating func append(_ string: String) {
        for s in string.unicodeScalars {
            self.scalarArray.append(s.value)
        }
    }

    // charAt
    func charAt(_ index: Int) -> UInt32 {
        return self.scalarArray[index]
    }

    // clear
    mutating func clear() {
        self.scalarArray.removeAll(keepingCapacity: true)
    }

    // contains
    func contains(_ character: UInt32) -> Bool {
        for scalar in self.scalarArray {
            if scalar == character {
                return true
            }
        }
        return false
    }

    // description (to implement the CustomStringConvertible protocol)
    var description: String {
        return self.toString()
    }

    // endsWith
    func endsWith() -> UInt32? {
        return self.scalarArray.last
    }

    // indexOf
    // returns first index of scalar string match
    func indexOf(_ string: ScalarString) -> Int? {
        if scalarArray.count < string.length {
            return nil
        }

        for i in 0...(scalarArray.count - string.length) {
            for j in 0..<string.length {
                if string.charAt(j) != scalarArray[i + j] {
                    break // substring mismatch
                }
                if j == string.length - 1 {
                    return i
                }
            }
        }

        return nil
    }

    // insert
    mutating func insert(_ scalar: UInt32, atIndex index: Int) {
        self.scalarArray.insert(scalar, at: index)
    }

    mutating func insert(_ string: ScalarString, atIndex index: Int) {
        var newIndex = index
        for scalar in string {
            self.scalarArray.insert(scalar, at: newIndex)
            newIndex += 1
        }
    }

    mutating func insert(_ string: String, atIndex index: Int) {
        var newIndex = index
        for scalar in string.unicodeScalars {
            self.scalarArray.insert(scalar.value, at: newIndex)
            newIndex += 1
        }
    }

    // isEmpty
    var isEmpty: Bool {
        return self.scalarArray.count == 0
    }

    // hashValue (to implement the Hashable protocol)
    var hashValue: Int {
        // DJB hash function
        return self.scalarArray.reduce(5381) {
            ($0 << 5) &+ $0 &+ Int($1)
        }
    }

    // length
    var length: Int {
        return self.scalarArray.count
    }

    // remove character
    mutating func removeCharAt(_ index: Int) {
        self.scalarArray.remove(at: index)
    }

    func removingAllInstancesOfChar(_ character: UInt32) -> ScalarString {
        var returnString = ScalarString()
        for scalar in self.scalarArray {
            if scalar != character {
                returnString.append(scalar)
            }
        }
        return returnString
    }

    func removeRange(_ range: CountableRange<Int>) -> ScalarString? {
        if range.lowerBound < 0 || range.upperBound > scalarArray.count {
            return nil
        }

        var returnString = ScalarString()
        for i in 0..<scalarArray.count {
            if i < range.lowerBound || i >= range.upperBound {
                returnString.append(scalarArray[i])
            }
        }
        return returnString
    }

    // replace
    func replace(_ character: UInt32, withChar replacementChar: UInt32) -> ScalarString {
        var returnString = ScalarString()
        for scalar in self.scalarArray {
            if scalar == character {
                returnString.append(replacementChar)
            } else {
                returnString.append(scalar)
            }
        }
        return returnString
    }

    func replace(_ character: UInt32, withString replacementString: String) -> ScalarString {
        var returnString = ScalarString()
        for scalar in self.scalarArray {
            if scalar == character {
                returnString.append(replacementString)
            } else {
                returnString.append(scalar)
            }
        }
        return returnString
    }

    func replaceRange(_ range: CountableRange<Int>, withString replacementString: ScalarString) -> ScalarString {
        var returnString = ScalarString()
        for i in 0..<scalarArray.count {
            if i < range.lowerBound || i >= range.upperBound {
                returnString.append(scalarArray[i])
            } else if i == range.lowerBound {
                returnString.append(replacementString)
            }
        }
        return returnString
    }

    // set (an alternative to myScalarString = "some string")
    mutating func set(_ string: String) {
        self.scalarArray.removeAll(keepingCapacity: false)
        for s in string.unicodeScalars {
            self.scalarArray.append(s.value)
        }
    }

    // split
    func split(atChar splitChar: UInt32) -> [ScalarString] {
        var partsArray: [ScalarString] = []
        if self.scalarArray.count == 0 {
            return partsArray
        }
        var part = ScalarString()
        for scalar in self.scalarArray {
            if scalar == splitChar {
                partsArray.append(part)
                part = ScalarString()
            } else {
                part.append(scalar)
            }
        }
        partsArray.append(part)
        return partsArray
    }

    // startsWith
    func startsWith() -> UInt32? {
        return self.scalarArray.first
    }

    // substring
    func substring(_ startIndex: Int) -> ScalarString {
        // from startIndex to end of string
        var subArray = ScalarString()
        for i in startIndex..<self.length {
            subArray.append(self.scalarArray[i])
        }
        return subArray
    }

    func substring(_ startIndex: Int, _ endIndex: Int) -> ScalarString {
        // (startIndex is inclusive, endIndex is exclusive)
        var subArray = ScalarString()
        for i in startIndex..<endIndex {
            subArray.append(self.scalarArray[i])
        }
        return subArray
    }

    // toString
    func toString() -> String {
        var string = ""
        for scalar in self.scalarArray {
            if let validScalar = UnicodeScalar(scalar) {
                string.append(Character(validScalar))
            }
        }
        return string
    }

    // trim
    // removes leading and trailing whitespace (space, tab, newline)
    func trim() -> ScalarString {
        let space: UInt32 = 0x00000020
        let tab: UInt32 = 0x00000009
        let newline: UInt32 = 0x0000000A

        var startIndex = self.scalarArray.count
        var endIndex = 0

        // leading whitespace
        for i in 0..<self.scalarArray.count {
            if self.scalarArray[i] != space &&
                self.scalarArray[i] != tab &&
                self.scalarArray[i] != newline {

                startIndex = i
                break
            }
        }

        // trailing whitespace
        for i in stride(from: (self.scalarArray.count - 1), through: 0, by: -1) {
            if self.scalarArray[i] != space &&
                self.scalarArray[i] != tab &&
                self.scalarArray[i] != newline {

                endIndex = i + 1
                break
            }
        }

        if endIndex <= startIndex {
            return ScalarString()
        }

        return self.substring(startIndex, endIndex)
    }

    // values
    func values() -> [UInt32] {
        return self.scalarArray
    }
}

func ==(left: ScalarString, right: ScalarString) -> Bool {
    return left.scalarArray == right.scalarArray
}

func +(left: ScalarString, right: ScalarString) -> ScalarString {
    var returnString = ScalarString()
    for scalar in left.values() {
        returnString.append(scalar)
    }
    for scalar in right.values() {
        returnString.append(scalar)
    }
    return returnString
}

How to decode a UTF16 string into a Unicode character

This is a little bit of a cheat, but UTF-16 happens to be the encoding used by NSString, so you can borrow NSString's methods to achieve it:

import Foundation

extension String {
    func decodeBlock() -> String? {
        var chars = [unichar]()

        for substr in self.components(separatedBy: "\\u") where !substr.isEmpty {
            if let value = UInt16(substr, radix: 16) {
                chars.append(value)
            } else {
                return nil
            }
        }

        return NSString(characters: chars, length: chars.count) as String
    }
}

if let decoded = "\\uD83E\\uDD1B\\uD83C\\uDFFD".decodeBlock() {
    print(decoded) // 🤛🏽
} else {
    print("Cannot decode")
}

Convert unicode symbols \uXXXX in String to Character in Swift

You can use \u{my_unicode}:

print("Ain\u{2019}t this a beautiful day")
// Prints "Ain’t this a beautiful day"

From the Language Guide - Strings and Characters - Unicode:

String literals can include the following special characters:

...

  • An arbitrary Unicode scalar, written as \u{n}, where n is a 1–8 digit
    hexadecimal number with a value equal to a valid Unicode code point

Hex String to Character in PURE Swift

A couple of things about your code:

var charArray = [Character]()
charArray = map(hexArray) { charArray.append(Character($0)) }

You don't need to create an array and then assign the result of the map; you can just assign the result and avoid creating an unnecessary array.

charArray = map(hexArray) { charArray.append(Character($0)) }

Here you can use hexArray.map instead of map(hexArray). Also, when you use a map function, what you are conceptually doing is mapping the elements of the receiver array to a new set of values; the result of the mapping is the new "mapped" array, which means that you don't need to call charArray.append inside the map closure.

Anyway, here is a working example:

import Foundation // for strtoul

let hexArray = ["2F", "24", "40", "2A"]
let charArray = hexArray.map { char -> Character in
    let code = UInt32(strtoul(char, nil, 16))
    return Character(UnicodeScalar(code)!)
}
print(charArray) // -> ["/", "$", "@", "*"]

EDIT: This is another implementation that doesn't need Foundation:

func hexToScalar(char: String) -> UnicodeScalar {
    var total: UInt32 = 0
    for scalar in char.uppercased().unicodeScalars {
        if !(scalar >= "A" && scalar <= "F" || scalar >= "0" && scalar <= "9") {
            assertionFailure("Input is wrong")
        }

        if scalar >= "A" {
            total = 16 * total + 10 + scalar.value - 65 /* 'A' */
        } else {
            total = 16 * total + scalar.value - 48 /* '0' */
        }
    }
    return UnicodeScalar(total)!
}

let hexArray = ["2F", "24", "40", "2A"]
let charArray = hexArray.map { Character(hexToScalar(char: $0)) }
print(charArray)

EDIT2 Yet another option:

func hexToScalar(char: String) -> UnicodeScalar {
    let map: [String: UInt32] = ["0": 0, "1": 1, "2": 2, "3": 3, "4": 4, "5": 5, "6": 6, "7": 7, "8": 8, "9": 9,
                                 "A": 10, "B": 11, "C": 12, "D": 13, "E": 14, "F": 15]

    let total = char.uppercased().unicodeScalars.reduce(0) { $0 * 16 + (map[String($1)] ?? 0xFF) }
    if total > 0xFF {
        assertionFailure("Input char was wrong")
    }
    return UnicodeScalar(total)!
}

Final edit: explanation

Given that the ASCII table has all the digits together (0123456789), we can convert any digit character to its integer value if we know the ASCII value of '0'.

Because:

'0': 48
'1': 49
...
'9': 57

Then if for example you need to convert '9' to 9 you could do

asciiValue('9') - asciiValue('0') => 57 - 48 = 9

And you can do the same from 'A' to 'F':

'A': 65
'B': 66
...
'F': 70

Now we can do the same as before but, for example for 'F' we'd do:

asciiValue('F') - asciiValue('A') => 70 - 65 = 5

Note that we need to add 10 to this number to get its value as a hex digit. Then (going back to the code): if the scalar is between A and F we need to do:

10 + asciiValue(<letter>) - asciiValue('A')

which is the same as: 10 + scalar.value - 65

And if it's between 0-9:

asciiValue(<letter>) - asciiValue('0')

which is the same as: scalar.value - 48

For example: '2F'

'2' is 2 and 'F' is 15 (from the previous example). Since hex is base 16, we need to do:

((16 ^ 1) * 2) + ((16 ^ 0) * 15) = 47
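If you just need the conversion (rather than the explanation), the standard library's radix initializer performs the same base-16 arithmetic:

```swift
// Parse a hex string with Int's radix initializer.
let value = Int("2F", radix: 16) // Optional(47): (16 * 2) + 15
print(value == 47) // true
```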

How is the 🇩🇪 character represented in Swift strings?

let flag = "\u{1f1e9}\u{1f1ea}"

then flag is 🇩🇪.

For more regional indicator symbols, see:

http://en.wikipedia.org/wiki/Regional_Indicator_Symbol
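Since the regional indicator symbols are just the letters A–Z shifted up to U+1F1E6–U+1F1FF, you can build any flag from its two-letter country code. A sketch (the flag(country:) helper is my own name, not a standard API):

```swift
// Map each letter A–Z to its regional indicator symbol (U+1F1E6 + offset).
func flag(country: String) -> String {
    var result = ""
    for scalar in country.uppercased().unicodeScalars {
        if scalar >= "A" && scalar <= "Z",
           let indicator = UnicodeScalar(0x1F1E6 + scalar.value - 65) {
            result.append(Character(indicator))
        }
    }
    return result
}

print(flag(country: "DE")) // 🇩🇪
print(flag(country: "jp")) // 🇯🇵
```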

How to convert Unicode Character to Int in Swift

Hex to Int

If you are starting with \u{0D85} and you know the hex value of the Unicode character, then you might as well write it in the following format because it is an Int already.

let myInt = 0x0D85                          // Int: 3461

String to Int

I assume, though, that you have "\u{0D85}" (in quotes), which makes it a String by default. And since it is a String, you can't be certain that you will only have a single Int value for the general case.

let myString = "\u{0D85}"

for element in myString.unicodeScalars {
let myInt = element.value // UInt32: 3461
}

I could have also used myString.utf16 and let myInt = Int(element), but I find it easier to deal with Unicode scalar values (UTF-32) when there is a possibility of things like emoji. (See this answer for more details.)
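To illustrate the difference (a sketch): an emoji above U+FFFF occupies two UTF-16 code units but only one Unicode scalar, so iterating utf16 would give you two numbers where you likely expect one:

```swift
let emoji = "\u{1F60A}" // 😊
print(Array(emoji.utf16))                    // [55357, 56842] — a surrogate pair
print(emoji.unicodeScalars.map { $0.value }) // [128522] — the single scalar U+1F60A
```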

Character to Int

Swift Character, which is an extended grapheme cluster, does not have a utf16 or unicodeScalars property, so if you are starting with Character then convert it to a String first and then follow the directions in the String to Int section above.

let myChar: Character = "\u{0D85}"
let myString = String(myChar)

