How to Know If an Emoji Is Supported in iOS

Is there a way to know if an Emoji is supported in iOS?

Clarification: an Emoji is merely a character in the Unicode Character space, so the present solution works for all characters, not just Emoji.

Synopsis

To know if a Unicode character (including an Emoji) is available on a given device or OS, run the unicodeAvailable() method below.

It works by comparing a given character image against a known undefined Unicode character U+1FFF.

unicodeAvailable(), a Character extension

extension Character {

    private static let refUnicodeSize: CGFloat = 8
    private static let refUnicodePng =
        Character("\u{1fff}").png(ofSize: Character.refUnicodeSize)

    func unicodeAvailable() -> Bool {
        if let refUnicodePng = Character.refUnicodePng,
           let myPng = self.png(ofSize: Character.refUnicodeSize) {
            return refUnicodePng != myPng
        }
        return false
    }
}

Discussion

  1. All characters will be rendered as a png at the same size (8) as defined once in

    static let refUnicodeSize: CGFloat = 8

  2. The undefined character U+1FFF image is calculated once in

    static let refUnicodePng = Character("\u{1fff}").png(ofSize: Character.refUnicodeSize)

  3. A helper method creates an optional PNG (Data?) from a Character; the full implementation is given below

    func png(ofSize fontSize: CGFloat) -> Data?

1. Example: Test against 3 emoji

let codes: [Character] = ["\u{2764}", "\u{1f600}", "\u{1F544}"] // ❤️, 😀, undefined
for unicode in codes {
    print("\(unicode) : \(unicode.unicodeAvailable())")
}

Sample output: the 3 characters and whether each is available.

2. Example: Test a range of Unicode characters

func unicodeRange(from: Int, to: Int) {
    for unicodeNumeric in from...to {
        if let scalar = UnicodeScalar(unicodeNumeric) {
            let unicode = Character(scalar)
            let avail = unicode.unicodeAvailable()
            let hex = String(format: "0x%x", unicodeNumeric)
            print("\(unicode) \(hex) is \(avail ? "" : "not ")available")
        }
    }
}
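
For example, you could scan the Miscellaneous Symbols and Pictographs block (the range below is an arbitrary illustration, not the one from the original run):

unicodeRange(from: 0x1F300, to: 0x1F3FF)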

Sample output: a few hundred characters and their availability.



Helper function: Character to png

// Also part of the Character extension above; requires UIKit.
func png(ofSize fontSize: CGFloat) -> Data? {
    let attributes = [NSAttributedStringKey.font:
                          UIFont.systemFont(ofSize: fontSize)]
    let charStr = "\(self)" as NSString
    let size = charStr.size(withAttributes: attributes)

    UIGraphicsBeginImageContext(size)
    charStr.draw(at: CGPoint(x: 0, y: 0), withAttributes: attributes)

    var png: Data? = nil
    if let charImage = UIGraphicsGetImageFromCurrentImageContext() {
        png = UIImagePNGRepresentation(charImage)
    }

    UIGraphicsEndImageContext()
    return png
}
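
As a quick illustration of how the check can be used (this snippet and the fallback character are my own sketch, not part of the original solution):

// Fall back to a placeholder when the device cannot render the glyph.
let candidate: Character = "\u{1FAE0}" // MELTING FACE, added in Emoji 14.0
let display = candidate.unicodeAvailable() ? String(candidate) : "□"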

► Find this solution on GitHub and a detailed article on Swift Recipes.

Find out if Character in String is emoji?

What I stumbled upon is the difference between characters, unicode scalars and glyphs.

For example, the glyph 👨‍👩‍👧‍👧 consists of 7 unicode scalars:

  • Four emoji characters: 👨 👩 👧 👧
  • In between each emoji is a special character (the zero-width joiner, U+200D), which works like character glue; see the specs for more info

Another example, the glyph 👌🏿 consists of 2 unicode scalars:

  • The regular emoji: 👌
  • A skin tone modifier: 🏿

Last one, the glyph 1️⃣ contains three unicode characters:

  • The digit one: 1
  • The variation selector, U+FE0F
  • The Combining Enclosing Keycap, U+20E3

So when rendering the characters, the resulting glyphs really matter.
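
A quick playground check makes the difference visible (the family emoji here is just an illustrative example):

"👨‍👩‍👧‍👧".unicodeScalars.count // 7 (four emoji scalars plus three zero-width joiners)
"👨‍👩‍👧‍👧".count // 1 (rendered as a single glyph)
"1️⃣".unicodeScalars.count // 3 (digit, variation selector, combining enclosing keycap)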

Swift 5.0 and above makes this process much easier and gets rid of some of the guesswork we needed to do. Unicode.Scalar's new Properties type helps us determine what we're dealing with.
However, those properties only make sense when checking the other scalars within the glyph. This is why we'll be adding some convenience methods to the Character class to help us out.

For more detail, I wrote an article explaining how this works.

For Swift 5.0, this leaves you with the following result:

extension Character {
    /// A simple emoji is one scalar and presented to the user as an Emoji
    var isSimpleEmoji: Bool {
        guard let firstScalar = unicodeScalars.first else { return false }
        return firstScalar.properties.isEmoji && firstScalar.value > 0x238C
    }

    /// Checks if the scalars will be merged into an emoji
    var isCombinedIntoEmoji: Bool { unicodeScalars.count > 1 && unicodeScalars.first?.properties.isEmoji ?? false }

    var isEmoji: Bool { isSimpleEmoji || isCombinedIntoEmoji }
}

extension String {
    var isSingleEmoji: Bool { count == 1 && containsEmoji }

    var containsEmoji: Bool { contains { $0.isEmoji } }

    var containsOnlyEmoji: Bool { !isEmpty && !contains { !$0.isEmoji } }

    var emojiString: String { emojis.map { String($0) }.reduce("", +) }

    var emojis: [Character] { filter { $0.isEmoji } }

    var emojiScalars: [UnicodeScalar] { filter { $0.isEmoji }.flatMap { $0.unicodeScalars } }
}

Which will give you the following results:

"A̛͚̖".containsEmoji // false
"3".containsEmoji // false
"A̛͚̖▶️".unicodeScalars // [65, 795, 858, 790, 9654, 65039]
"A̛͚̖▶️".emojiScalars // [9654, 65039]
"3️⃣".isSingleEmoji // true
"3️⃣".emojiScalars // [51, 65039, 8419]
"quot;.isSingleEmoji // true
"‍♂️".isSingleEmoji // true
"quot;.isSingleEmoji // true
"⏰".isSingleEmoji // true
"quot;.isSingleEmoji // true
"‍‍‍quot;.isSingleEmoji // true
"quot;.isSingleEmoji // true
"quot;.containsOnlyEmoji // true
"‍‍‍quot;.containsOnlyEmoji // true
"Hello ‍‍‍quot;.containsOnlyEmoji // false
"Hello ‍‍‍quot;.containsEmoji // true
" Héllo ‍‍‍quot;.emojiString // "‍‍‍quot;
"‍‍‍quot;.count // 1

" Héllœ ‍‍‍quot;.emojiScalars // [128107, 128104, 8205, 128105, 8205, 128103, 8205, 128103]
" Héllœ ‍‍‍quot;.emojis // ["quot;, "‍‍‍quot;]
" Héllœ ‍‍‍quot;.emojis.count // 2

"‍‍‍‍‍quot;.isSingleEmoji // false
"‍‍‍‍‍quot;.containsOnlyEmoji // true

For older Swift versions, check out this gist containing my old code.

How can you verify a multiple code point emoji is supported?

According to Emoji 14.0's data files, an emoji is either a basic emoji, a keycap sequence, a flag, a modifier sequence, or a ZWJ sequence. In each of those cases, there will be at least one code point in the sequence for which isEmoji returns true, and the sequence will form a single glyph.

So, you should make a string out of the unicode scalars first:

let scalars = [UnicodeScalar(0x1F415)!, UnicodeScalar(0x200D)!, UnicodeScalar(0x1F9BA)!]
let scalarView = String.UnicodeScalarView(scalars)
let string = String(scalarView)

Then, you can check if it is an emoji like this:

CTLineGetGlyphCount(CTLineCreateWithAttributedString(
    NSAttributedString(string: string)
)) == 1 &&
string.unicodeScalars.contains { $0.properties.isEmoji }

Alternatively, since you just want to check if the emoji can be displayed properly, you can use CTFontGetGlyphsForCharacters to see if Apple Color Emoji supports the characters.

let font = UIFont(name: "AppleColorEmoji", size: 20)! as CTFont
var text = Array(string.utf16)
var glyphs = Array(repeating: 0 as CGGlyph, count: text.count)
let isEmoji = CTFontGetGlyphsForCharacters(font, &text, &glyphs, text.count) &&
    CTLineGetGlyphCount(CTLineCreateWithAttributedString(
        NSAttributedString(string: string)
    )) == 1

Note that both of these methods will return false positives (non-emojis like ASCII letters being reported as emojis), but will not return false negatives.
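
If you need this check in several places, it can be wrapped in a small helper (a sketch of the first approach above; the function name is mine, not an existing API):

import CoreText
import Foundation

/// Returns true when `string` renders as a single glyph and contains at least
/// one scalar that Unicode classifies as emoji (subject to the false positives
/// noted above).
func rendersAsSingleEmoji(_ string: String) -> Bool {
    let line = CTLineCreateWithAttributedString(NSAttributedString(string: string))
    return CTLineGetGlyphCount(line) == 1 &&
        string.unicodeScalars.contains { $0.properties.isEmoji }
}

rendersAsSingleEmoji("🐕‍🦺") // true where the dog + safety vest ZWJ sequence is supported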

How to know if two emojis will be displayed as one emoji?

Update for Swift 4 (Xcode 9)

As of Swift 4, an "Emoji sequence" is treated as a single grapheme
cluster (according to the Unicode 9 standard):

let s = "ab🇩🇪👨‍❤️‍💋‍👨"
print(s.count) // 4

so the other workarounds are not needed anymore.


(Old answer for Swift 3 and earlier:)

A possible option is to enumerate and count the
"composed character sequences" in the string:

let s = "ab🇩🇪👨‍❤️‍💋‍👨"
var count = 0
s.enumerateSubstringsInRange(s.startIndex..<s.endIndex,
                             options: .ByComposedCharacterSequences) {
    (char, _, _, _) in
    if let char = char {
        count += 1
    }
}
print(count) // 4

Another option is to find the range of the composed character
sequence at a given index:

let s = "👨‍❤️‍💋‍👨"
if s.rangeOfComposedCharacterSequenceAtIndex(s.startIndex) == s.characters.indices {
    print("This is a single composed character")
}

As String extension methods:

// Swift 2.2:
extension String {
    var composedCharacterCount: Int {
        var count = 0
        enumerateSubstringsInRange(characters.indices, options: .ByComposedCharacterSequences) {
            (_, _, _, _) in count += 1
        }
        return count
    }

    var isSingleComposedCharacter: Bool {
        return rangeOfComposedCharacterSequenceAtIndex(startIndex) == characters.indices
    }
}

// Swift 3:
extension String {
    var composedCharacterCount: Int {
        var count = 0
        enumerateSubstrings(in: startIndex..<endIndex, options: .byComposedCharacterSequences) {
            (_, _, _, _) in count += 1
        }
        return count
    }

    var isSingleComposedCharacter: Bool {
        return rangeOfComposedCharacterSequence(at: startIndex) == startIndex..<endIndex
    }
}

Examples:

".composedCharacterCount // 1
".characters.count // 2

"‍❤️‍‍.composedCharacterCount // 1
"‍❤️‍‍.characters.count // 4

".composedCharacterCount // 2
".characters.count // 1

As you see, the number of Swift characters (extended grapheme clusters) can be more or less than
the number of composed character sequences.


