Add Skin Tone Modifier to an Emoji Programmatically

Add skin tone modifier to an emoji programmatically

Adding the skin tone modifier only works if the preceding character is a plain emoji character. It turns out, your 👩‍🦰 is actually made up of 3 characters: the plain woman emoji U+1F469 (👩), followed by the zero-width joiner U+200D, and finally the red-hair component U+1F9B0 (🦰).

You can see this with:

print(Array("👩‍🦰".unicodeScalars)) // ["\u{0001F469}", "\u{200D}", "\u{0001F9B0}"]

So trying to add the skin tone modifier after the hair modifier doesn't work. The skin tone modifier needs to be immediately after the base Emoji.
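
For example, a quick sketch (the medium skin tone U+1F3FD is just an illustrative choice):

let tone = "\u{1F3FD}"                      // 🏽 medium skin tone (example)
print("\u{1F469}" + tone)                   // 👩🏽 (tone right after the base scalar, one toned glyph)
print("\u{1F469}\u{200D}\u{1F9B0}" + tone)  // 👩‍🦰🏽 (tone lands after the hair component and is typically drawn as a separate swatch)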

Here's a String extension that will apply a skin tone to characters even if they already have a skin tone or other modifiers.

extension String {
    func applySkinTone(_ tone: String) -> String {
        guard tone.count == 1 && self.count > 0 else { return self }
        let minTone = Unicode.Scalar(0x1F3FB)!
        let maxTone = Unicode.Scalar(0x1F3FF)!
        guard let toneChar = tone.unicodeScalars.first, minTone...maxTone ~= toneChar else { return self }

        var scalars = Array(self.unicodeScalars)
        // Remove any existing tone
        if scalars.count >= 2 && minTone...maxTone ~= scalars[1] {
            scalars.remove(at: 1)
        }
        // Now add the new tone
        scalars.insert(toneChar, at: 1)

        return String(String.UnicodeScalarView(scalars))
    }
}

print(".applySkinTone(")) // br>print("‍.applySkinTone(")) // ‍br>

Note that this code does not do any validation that the original string supports a skin tone.
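
If you do want such a check, Swift 5's Unicode.Scalar.Properties exposes isEmojiModifierBase, which tells you whether the base scalar accepts a skin tone. A minimal sketch (supportsSkinTone is a made-up name for illustration):

extension String {
    /// Whether the first scalar is an emoji modifier base, i.e. it can take a skin tone.
    var supportsSkinTone: Bool {
        guard let first = unicodeScalars.first else { return false }
        return first.properties.isEmojiModifierBase
    }
}

print("👩‍🦰".supportsSkinTone) // true (the base scalar 👩 accepts a tone)
print("🌼".supportsSkinTone)   // false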

How do I apply a skin tone to an emoji in JavaScript?

Only a few emoji actually take skin tones. For example, the facepalming faces (man and woman facepalming):

// Convert a code point above U+FFFF into its UTF-16 surrogate pair
function units(codepoint) {
    const tmp = codepoint - 0x10000
    const padded = tmp.toString(2).padStart(20, '0')
    const unit1 = Number.parseInt(padded.substr(0, 10), 2) + 0xD800;
    const unit2 = Number.parseInt(padded.substr(10), 2) + 0xDC00;
    return [unit1, unit2]
}

const face = units(0x1F926)  // person facepalming
const tone = units(0x1F3FF)  // dark skin tone modifier
const ch = String.fromCharCode(...face, ...tone)
console.log(ch)              // 🤦🏿

Emoji skin-tone detect

A "hack" would be to compare the visual representation of a correct emoji (like ") and a wanna-be emoji (like ").

I've modified your code here and there to make it work:

import UIKit

extension String {
    static let emojiSkinToneModifiers: [String] = ["🏻", "🏼", "🏽", "🏾", "🏿"]

    var emojiVisibleLength: Int {
        var count = 0
        let nsstr = self as NSString
        let range = NSRange(location: 0, length: nsstr.length)
        nsstr.enumerateSubstrings(in: range,
                                  options: .byComposedCharacterSequences)
        { (_, _, _, _) in
            count = count + 1
        }
        return count
    }

    var emojiUnmodified: String {
        if isEmpty {
            return self
        }
        let string = String(self.unicodeScalars.first!)
        return string
    }

    // Reference size of a single emoji glyph with a skin tone applied.
    // (👍🏽 is just an example; any emoji known to support a skin tone will do.)
    private static let emojiReferenceSize: CGSize = {
        let size = CGSize(width : CGFloat.greatestFiniteMagnitude,
                          height: CGFloat.greatestFiniteMagnitude)
        let rect = ("👍🏽" as NSString).boundingRect(with: size,
                                                    options: .usesLineFragmentOrigin,
                                                    context: nil)
        return rect.size
    }()

    var canHaveSkinToneModifier: Bool {
        if isEmpty {
            return false
        }

        let modified = self.emojiUnmodified + String.emojiSkinToneModifiers[0]

        let size = (modified as NSString)
            .boundingRect(with: CGSize(width : CGFloat.greatestFiniteMagnitude,
                                       height: .greatestFiniteMagnitude),
                          options: .usesLineFragmentOrigin,
                          context: nil).size

        return size == String.emojiReferenceSize
    }
}

Let's try it out:

let emojis = ["👍", "🌼", "👶"]
for emoji in emojis {
    if emoji.canHaveSkinToneModifier {
        let unmodified = emoji.emojiUnmodified
        print(unmodified)
        for modifier in String.emojiSkinToneModifiers {
            print(unmodified + modifier)
        }
    } else {
        print(emoji)
    }
    print("\n")
}

And voila!

👍
👍🏻
👍🏼
👍🏽
👍🏾
👍🏿

🌼

👶
👶🏻
👶🏼
👶🏽
👶🏾
👶🏿

Swift 5 – How to use base emoji in case statement that may have skin-tone emoji as input?

One way you can do this is to first check if the string is a single Character, then get the first unicode scalar. That will be the unmodified base emoji.

Alternatively, you can write your own ~= operator so the switch can match the String directly; see the sketch at the end of this answer.

// or if...else if you are doing other stuff after the switch
guard blah.count == 1 else {
    daIcon = self.iconRedQuestion!
    return
}
switch blah.unicodeScalars.first {
case "✅":
    daIcon = self.iconComplete!
case "👍":
    daIcon = self.iconConfirmed!
case "💬":
    daIcon = self.iconMessageLeft!
case "🔍":
    daIcon = self.iconLookingForDocs!
case "☎":
    daIcon = self.iconVoiceMailLeft!
case "❌":
    daIcon = self.iconPostponed!
default:
    daIcon = self.iconRedQuestion!
}

Note that the ☎️ emoji is actually ☎ (U+260E BLACK TELEPHONE) plus a variation selector. You should use only U+260E in the cases.
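
You can verify that in a playground:

let phone: Unicode.Scalar = "☎"            // U+260E
print(Array("☎️".unicodeScalars))           // ["\u{260E}", "\u{FE0F}"]
print("☎️".unicodeScalars.first == phone)   // true (the base scalar still matches)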

You can also use Smile, specifically the unmodify(emoji:) and isSingleEmoji functions, which are implemented similarly to the above.

Note that this approach picks out the first emoji in a ZWJ sequence, which may or may not be expected.
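
If you would rather keep switching over the String itself, here is a minimal sketch of the custom ~= idea mentioned above. BaseEmoji is a hypothetical helper type, not part of the standard library or Smile:

struct BaseEmoji {
    let scalar: Unicode.Scalar
}

// Matches when the string's first (base) scalar equals the pattern,
// so skin tones and other trailing modifiers are ignored.
func ~= (pattern: BaseEmoji, value: String) -> Bool {
    return value.unicodeScalars.first == pattern.scalar
}

let input = "👍🏽"
switch input {
case BaseEmoji(scalar: "👍"): print("confirmed")
case BaseEmoji(scalar: "❌"): print("postponed")
default: print("unknown")
}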

Unicode to Emoji on Android

tl;dr

There is nothing special about emoji. Each emoji is just a text character. Parse your text into a Unicode code point number for that character, and build a String.

new StringBuilder()
    .appendCodePoint(
        Integer.decode(
            "U+1F601".replace( "U+" , "0x" )
        )
    )
    .toString()


Details

Replace the “U+” with “0x” in your input string.

Use Integer.decode to convert that hex text to an integer number.

Use StringBuilder.appendCodePoint to turn that integer code point into a character inside a String object.

        String input = "U+1F601".replace( "U+" , "0x" ) ;
int x = Integer.decode( input ) ;
String y = new StringBuilder().appendCodePoint( x ).toString() ;

System.out.println ( x ) ;
System.out.println ( y ) ;

See this code run live at IdeOne.com.

128513
😁

Notice that we did not use the obsolete char type. That type is now legacy, unable to represent even half of the characters defined in Unicode. Instead, use integer numbers for the code point of each character.

How to determine the display count of a Swift String?

What happens if you print an emoji followed by a Fitzpatrick modifier on a system that doesn't support the modifiers? You get the base emoji followed by whatever the system uses for an unknown character placeholder.

So I think to answer this, you must consult your system's typesetter. For Apple platforms, you can use Core Text to create a CTLine and then count the line's glyph runs. Example:

import Foundation
import CoreText

func test(_ string: String) {
    let richText = NSAttributedString(string: string)
    let line = CTLineCreateWithAttributedString(richText as CFAttributedString)
    let runs = CTLineGetGlyphRuns(line) as! [CTRun]
    print(string, runs.count)
}

test(" + ")
test("A" + ")
test("B\u{0300}\u{0301}\u{0302}" + ")

Output from a macOS playground in Xcode 10.2.1 on macOS 10.14.6 Beta (18G48f):

👩🏽 1
A🏽 2
B̀́̂🏽 2

Find out if Character in String is emoji?

What I stumbled upon is the difference between characters, unicode scalars and glyphs.

For example, the glyph 👨‍👩‍👧‍👧 consists of 7 unicode scalars:

  • Four emoji characters: 👨 👩 👧 👧
  • In between each emoji is a zero-width joiner (U+200D), which works like character glue; see the specs for more info

Another example, the glyph 👌🏿 consists of 2 unicode scalars:

  • The regular emoji: 👌
  • A skin tone modifier: 🏿

Last one, the glyph 1️⃣ contains three unicode characters:

  • The digit one: 1
  • The variation selector (U+FE0F)
  • The Combining Enclosing Keycap (U+20E3)

So when rendering the characters, the resulting glyphs really matter.
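
A quick way to see the difference between the user-perceived character and the underlying scalars:

let family = "👨‍👩‍👧‍👧"
print(family.count)                 // 1 (one Character, rendered as one glyph)
print(family.unicodeScalars.count)  // 7 (four emoji scalars joined by three zero-width joiners)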

Swift 5.0 and above makes this process much easier and gets rid of some of the guesswork we needed to do. Unicode.Scalar's new Properties type helps us determine what we're dealing with.
However, those properties only make sense when checking the other scalars within the glyph. This is why we'll be adding some convenience methods to the Character type to help us out.

For more detail, I wrote an article explaining how this works.

For Swift 5.0, this leaves you with the following result:

extension Character {
    /// A simple emoji is one scalar and presented to the user as an Emoji
    var isSimpleEmoji: Bool {
        guard let firstScalar = unicodeScalars.first else { return false }
        return firstScalar.properties.isEmoji && firstScalar.value > 0x238C
    }

    /// Checks if the scalars will be merged into an emoji
    var isCombinedIntoEmoji: Bool { unicodeScalars.count > 1 && unicodeScalars.first?.properties.isEmoji ?? false }

    var isEmoji: Bool { isSimpleEmoji || isCombinedIntoEmoji }
}

extension String {
    var isSingleEmoji: Bool { count == 1 && containsEmoji }

    var containsEmoji: Bool { contains { $0.isEmoji } }

    var containsOnlyEmoji: Bool { !isEmpty && !contains { !$0.isEmoji } }

    var emojiString: String { emojis.map { String($0) }.reduce("", +) }

    var emojis: [Character] { filter { $0.isEmoji } }

    var emojiScalars: [UnicodeScalar] { filter { $0.isEmoji }.flatMap { $0.unicodeScalars } }
}

Which will give you the following results:

"A̛͚̖".containsEmoji // false
"3".containsEmoji // false
"A̛͚̖▶️".unicodeScalars // [65, 795, 858, 790, 9654, 65039]
"A̛͚̖▶️".emojiScalars // [9654, 65039]
"3️⃣".isSingleEmoji // true
"3️⃣".emojiScalars // [51, 65039, 8419]
"quot;.isSingleEmoji // true
"‍♂️".isSingleEmoji // true
"quot;.isSingleEmoji // true
"⏰".isSingleEmoji // true
"quot;.isSingleEmoji // true
"‍‍‍quot;.isSingleEmoji // true
"quot;.isSingleEmoji // true
"quot;.containsOnlyEmoji // true
"‍‍‍quot;.containsOnlyEmoji // true
"Hello ‍‍‍quot;.containsOnlyEmoji // false
"Hello ‍‍‍quot;.containsEmoji // true
" Héllo ‍‍‍quot;.emojiString // "‍‍‍quot;
"‍‍‍quot;.count // 1

" Héllœ ‍‍‍quot;.emojiScalars // [128107, 128104, 8205, 128105, 8205, 128103, 8205, 128103]
" Héllœ ‍‍‍quot;.emojis // ["quot;, "‍‍‍quot;]
" Héllœ ‍‍‍quot;.emojis.count // 2

"‍‍‍‍‍quot;.isSingleEmoji // false
"‍‍‍‍‍quot;.containsOnlyEmoji // true

For older Swift versions, check out this gist containing my old code.


