Any Good Solutions for C++ String Code Point and Code Unit

Any good solutions for C++ string code point and code unit?

I generally convert the UTF-8 string to a wide UTF-32/UCS-2 string before doing character operations. C++ does give us functions to do that, but they are not very user-friendly, so I have written some nicer conversion functions here:

#include <codecvt>
#include <iostream>
#include <locale>
#include <stdexcept>
#include <string>

// This should convert to whatever the system wide character encoding
// is for the platform (UTF-32/Linux - UCS-2/Windows)
std::string ws_to_utf8(std::wstring const& s)
{
    std::wstring_convert<std::codecvt_utf8<wchar_t>, wchar_t> cnv;
    std::string utf8 = cnv.to_bytes(s);
    if(cnv.converted() < s.size())
        throw std::runtime_error("incomplete conversion");
    return utf8;
}

std::wstring utf8_to_ws(std::string const& utf8)
{
    std::wstring_convert<std::codecvt_utf8<wchar_t>, wchar_t> cnv;
    std::wstring s = cnv.from_bytes(utf8);
    if(cnv.converted() < utf8.size())
        throw std::runtime_error("incomplete conversion");
    return s;
}

int main()
{
    std::string s = u8"很烫烫的一锅汤"; // u8 literal converts to std::string before C++20

    auto w = utf8_to_ws(s); // convert to wide (UTF-32/UCS-2)

    // now we can use code-point indexes on the wide string

    std::cout << s << " is " << w.size() << " characters long" << '\n';
}

Output:

很烫烫的一锅汤 is 7 characters long

If you want to convert to and from UTF-32 regardless of platform then you can use the following (not so well tested) conversion routines:

std::string utf32_to_utf8(std::u32string const& utf32)
{
    std::wstring_convert<std::codecvt_utf8<char32_t>, char32_t> cnv;
    std::string utf8 = cnv.to_bytes(utf32);
    if(cnv.converted() < utf32.size())
        throw std::runtime_error("incomplete conversion");
    return utf8;
}

std::u32string utf8_to_utf32(std::string const& utf8)
{
    std::wstring_convert<std::codecvt_utf8<char32_t>, char32_t> cnv;
    std::u32string utf32 = cnv.from_bytes(utf8);
    if(cnv.converted() < utf8.size())
        throw std::runtime_error("incomplete conversion");
    return utf32;
}

NOTE: As of C++17 std::wstring_convert is deprecated.

However, I still prefer it over a third-party library because it is portable, it avoids external dependencies, it won't be removed until a replacement is provided, and in any case it will be easy to replace the implementations of these functions without having to change all the code that uses them.
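
For example, if std::wstring_convert eventually goes away, utf8_to_utf32 could be reimplemented by hand with the same signature, and no caller would notice. A minimal sketch, assuming well-formed UTF-8 input (it only guards against truncated sequences, not against every malformed byte pattern):

#include <cstddef>
#include <stdexcept>
#include <string>

std::u32string utf8_to_utf32(std::string const& utf8)
{
    std::u32string utf32;
    for (std::size_t i = 0; i < utf8.size(); )
    {
        unsigned char c = static_cast<unsigned char>(utf8[i]);
        char32_t cp;
        std::size_t len;
        if      (c < 0x80) { cp = c;        len = 1; } // ASCII
        else if (c < 0xE0) { cp = c & 0x1F; len = 2; } // 2-byte sequence
        else if (c < 0xF0) { cp = c & 0x0F; len = 3; } // 3-byte sequence
        else               { cp = c & 0x07; len = 4; } // 4-byte sequence
        if (i + len > utf8.size())
            throw std::runtime_error("incomplete conversion");
        for (std::size_t j = 1; j < len; ++j) // fold in the continuation bytes
            cp = (cp << 6) | (static_cast<unsigned char>(utf8[i + j]) & 0x3F);
        utf32.push_back(cp);
        i += len;
    }
    return utf32;
}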

Return code point of characters in C#

Easy, since a char in C# holds a single UTF-16 code unit, which for characters in the BMP is the code point itself:

char x = 'A';
Console.WriteLine("U+{0:x4}", (int)x);

To address the comments: a char in C# is a 16-bit value and holds a single UTF-16 code unit. Code points above the 16-bit range cannot be represented in a single C# char. Characters in C# are not variable width; a string, however, can have two chars following each other, each being a code unit, which together (as a surrogate pair) encode one code point outside the BMP. If you have a string input that may contain characters above the 16-bit range, you can use char.IsSurrogatePair and Char.ConvertToUtf32, as suggested in another answer:

string input = ....
for (int i = 0; i < input.Length; i += Char.IsSurrogatePair(input, i) ? 2 : 1)
{
    int x = Char.ConvertToUtf32(input, i);
    Console.WriteLine("U+{0:X4}", x);
}

How do I properly use std::string on UTF-8 in C++?

Unicode Glossary

Unicode is a vast and complex topic. I do not wish to wade too deep there, however a quick glossary is necessary:

  1. Code Points: Code Points are the basic building blocks of Unicode; a Code Point is just an integer mapped to a meaning. The integer fits into 32 bits (well, 21 bits really, since the last Code Point is U+10FFFF), and the meaning can be a letter, a diacritic, a whitespace, a sign, a smiley, half a flag, ... and it can even be "the next portion reads right to left".
  2. Grapheme Clusters: Grapheme Clusters are groups of semantically related Code Points; for example a flag in Unicode is represented by associating two Code Points: each of those two, in isolation, has no meaning, but associated together in a Grapheme Cluster they represent a flag. Grapheme Clusters are also used to pair a letter with a diacritic in some scripts.

These are the basics of Unicode. The distinction between Code Point and Grapheme Cluster can mostly be glossed over because, for most modern languages, each "character" is mapped to a single Code Point (there are dedicated accented forms for commonly used letter+diacritic combinations). Still, if you venture into smileys, flags, etc., then you may have to pay attention to the distinction.
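
For example, the two Code Points that make up a single flag are plainly visible when the text is stored as UTF-32. A small illustration (the flag chosen here is arbitrary):

#include <iostream>
#include <string>

int main()
{
    // The French flag is one Grapheme Cluster made of two Code Points:
    // U+1F1EB (REGIONAL INDICATOR SYMBOL LETTER F) and U+1F1F7 (... LETTER R).
    std::u32string flag = U"\U0001F1EB\U0001F1F7";
    std::cout << flag.size() << '\n'; // prints 2: two Code Points, one visible "character"
}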



UTF Primer

Then, a series of Unicode Code Points has to be encoded; the common encodings are UTF-8, UTF-16 and UTF-32, the latter two existing in both Little-Endian and Big-Endian forms, for a total of 5 common encodings.

In UTF-X, X is the size in bits of a Code Unit; each Code Point is represented as one or several Code Units, depending on its magnitude (see the short example after this list):

  • UTF-8: 1 to 4 Code Units,
  • UTF-16: 1 or 2 Code Units,
  • UTF-32: 1 Code Unit.
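
For instance, a single Code Point outside the BMP needs a different number of Code Units in each encoding. A small sketch (it assumes a pre-C++20 compiler, where u8"" literals still convert to std::string):

#include <iostream>
#include <string>

int main()
{
    // U+1F600 lies outside the BMP, so each encoding needs a different number of Code Units
    std::string    utf8  = u8"\U0001F600"; // 4 Code Units (bytes)
    std::u16string utf16 = u"\U0001F600";  // 2 Code Units (a surrogate pair)
    std::u32string utf32 = U"\U0001F600";  // 1 Code Unit

    std::cout << utf8.size() << ' ' << utf16.size() << ' ' << utf32.size() << '\n'; // 4 2 1
}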


std::string and std::wstring.

  1. Do not use std::wstring if you care about portability (wchar_t is only 16 bits on Windows); use std::u32string instead (aka std::basic_string<char32_t>).
  2. The in-memory representation (std::string or std::wstring) is independent of the on-disk representation (UTF-8, UTF-16 or UTF-32), so prepare yourself for having to convert at the boundary (reading and writing).
  3. While a 32-bits wchar_t ensures that a Code Unit represents a full Code Point, it still does not represent a complete Grapheme Cluster.

If you are only reading or composing strings, you should have little to no trouble with std::string or std::wstring.

Trouble starts when you begin slicing and dicing; then you have to pay attention to (1) Code Point boundaries (in UTF-8 or UTF-16) and (2) Grapheme Cluster boundaries. The former can be handled easily enough on your own (see the sketch below); the latter requires using a Unicode-aware library.
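
Respecting Code Point boundaries in UTF-8 yourself mostly means recognising continuation bytes (those of the form 10xxxxxx). A minimal sketch, assuming the input is valid UTF-8 (the function name is mine):

#include <cstddef>
#include <string>

// Counts Code Points in a UTF-8 string by skipping continuation bytes.
std::size_t count_code_points(std::string const& utf8)
{
    std::size_t count = 0;
    for (unsigned char c : utf8)
        if ((c & 0xC0) != 0x80) // not a continuation byte => start of a new Code Point
            ++count;
    return count;
}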



Picking std::string or std::u32string?

If performance is a concern, it is likely that std::string will perform better due to its smaller memory footprint; though heavy use of Chinese may change the deal. As always, profile.

If Grapheme Clusters are not a problem, then std::u32string has the advantage of simplifying things: 1 Code Unit -> 1 Code Point means that you cannot accidentally split Code Points, and all the functions of std::basic_string work out of the box.
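
For instance, indexing and slicing a std::u32string by Code Point works directly. A small illustration:

#include <cassert>
#include <string>

int main()
{
    std::u32string s = U"很烫烫的一锅汤";
    assert(s.size() == 7);             // 7 Code Units == 7 Code Points
    std::u32string tail = s.substr(4); // slicing cannot tear a Code Point apart
    assert(tail.size() == 3);          // "一锅汤"
}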

If you interface with software taking std::string or char*/char const*, then stick to std::string to avoid back-and-forth conversions. It'll be a pain otherwise.



UTF-8 in std::string.

UTF-8 actually works quite well in std::string.

Most operations work out of the box because the UTF-8 encoding is self-synchronizing and backward compatible with ASCII.

Due to the way Code Points are encoded, looking for a Code Point cannot accidentally match the middle of another Code Point:

  • str.find('\n') works,
  • str.find("...") works for matching byte by byte1,
  • str.find_first_of("\r\n") works if searching for ASCII characters.
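
A quick demonstration of the searches above on a UTF-8 std::string (assuming a pre-C++20 compiler for the u8"" literals):

#include <iostream>
#include <string>

int main()
{
    std::string s = u8"一锅汤\n很烫";

    // Searching for an ASCII byte or for a whole UTF-8 sequence is safe,
    // because no byte of a multi-byte sequence can be mistaken for either.
    std::cout << s.find('\n') << '\n';   // index of the newline byte
    std::cout << s.find(u8"锅") << '\n'; // index of the first byte of 锅
}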

Similarly, regex should mostly work out of the box. Since a non-ASCII character such as "哈" is, to the regex engine, just a sequence of bytes, basic (literal) search patterns should work out of the box.

Be wary, however, of character classes (such as [:alnum:]), as depending on the regex flavor and implementation they may or may not match Unicode characters.

Similarly, be wary of applying repeaters to non-ASCII "characters": "哈?" may consider only the last byte to be optional; use parentheses to clearly delineate the repeated sequence of bytes in such cases: "(哈)?".
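
A small sketch of the difference (again assuming byte-literal matching as described above, and a pre-C++20 compiler for the u8"" literals):

#include <iostream>
#include <regex>
#include <string>

int main()
{
    std::string single = u8"哈"; // three bytes in UTF-8

    // Without parentheses, '?' applies to the last byte of 哈 only,
    // so this pattern cannot match a single 哈 on its own.
    std::regex naive(u8"哈?哈");

    // With parentheses, '?' applies to the whole three-byte sequence.
    std::regex grouped(u8"(哈)?哈");

    std::cout << std::boolalpha
              << std::regex_match(single, naive)   << '\n'  // false
              << std::regex_match(single, grouped) << '\n'; // true
}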

[1] The key concepts to look up are normalization and collation; this affects all comparison operations. std::string will always compare (and thus sort) byte by byte, without regard for comparison rules specific to a language or a usage. If you need to handle full normalization/collation, you need a complete Unicode library, such as ICU.

How would you get an array of Unicode code points from a .NET String?

This answer is not correct. See @Virtlink's answer for the correct one.

using System.Collections.Generic;
using System.Globalization;

static int[] ExtractScalars(string s)
{
    if (!s.IsNormalized())
    {
        s = s.Normalize();
    }

    List<int> chars = new List<int>((s.Length * 3) / 2);

    var ee = StringInfo.GetTextElementEnumerator(s);

    while (ee.MoveNext())
    {
        string e = ee.GetTextElement();
        chars.Add(char.ConvertToUtf32(e, 0));
    }

    return chars.ToArray();
}

Notes: Normalization is required to deal with composite characters.

Unicode string indexing in C++

Standard C++ is not equipped for proper handling of Unicode, giving you problems like the one you observed.

The problem here is that C++ predates Unicode by a comfortable margin. This means that even that string literal of yours will be interpreted in an implementation-defined manner because those characters are not defined in the Basic Source Character set (which is, basically, the ASCII-7 characters minus @, $, and the backtick).

C++98 does not mention Unicode at all. It mentions wchar_t, and wstring being based on it, specifying wchar_t as being capable of "representing any character in the current locale". But that did more damage than good...

Microsoft defined wchar_t as 16 bit, which was enough for all the Unicode code points at that time. However, since then Unicode has been extended beyond the 16-bit range... and Windows' 16-bit wchar_t is not "wide" anymore, because you need two of them to represent characters beyond the BMP -- and the Microsoft docs are notoriously ambiguous as to whether wchar_t means UTF-16 (multibyte encoding with surrogate pairs) or UCS-2 (wide encoding with no support for characters beyond the BMP).

All the while, a Linux wchar_t is 32 bit, which is wide enough for UTF-32...

C++11 made significant improvements to the subject, adding char16_t and char32_t and their associated string types (std::u16string, std::u32string) to remove the ambiguity, but it is still not fully equipped for Unicode operations.

Just as one example, try to convert e.g. German "Fuß" to uppercase and you will see what I mean. (The single letter 'ß' would need to expand to 'SS', which the standard functions -- handling one character in, one character out at a time -- cannot do.)
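
A quick way to see the limitation (a sketch; it assumes a "de_DE.UTF-8" locale is installed on your system, and the exact result of towupper is implementation-defined):

#include <clocale>
#include <cwchar>
#include <cwctype>

int main()
{
    std::setlocale(LC_ALL, "de_DE.UTF-8"); // locale name is an assumption; adjust for your system
    wchar_t sz = L'\u00DF';                // 'ß'

    // towupper() maps one character to one character, so it can never produce
    // the two-letter "SS"; typically you just get 'ß' back (or, at best, 'ẞ').
    std::wprintf(L"%lc -> %lc\n", static_cast<wint_t>(sz), std::towupper(sz));
}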

However, there is help. The International Components for Unicode (ICU) library is fully equipped to handle Unicode in C++. As for specifying special characters in source code, you will have to use u8"", u"", and U"" to enforce interpretation of the string literal as UTF-8, UTF-16, and UTF-32 respectively, using octal / hexadecimal escapes or relying on your compiler implementation to handle non-ASCII-7 encodings appropriately.

And even then you will get an integer value for std::cout << ramp[5], because for C++, a character is just an integer with semantic meaning. ICU's ustream.h provides operator<< overloads for the icu::UnicodeString class, but ramp[5] is just a 16-bit unsigned integer (1), and people would look askance at you if their unsigned short were suddenly interpreted as a character. You need the C-API u_fputs() / u_printf() / u_fprintf() functions for that.

#include <unicode/unistr.h>
#include <unicode/ustream.h>
#include <unicode/ustdio.h>

#include <iostream>

int main()
{
    // make sure your source file is UTF-8 encoded...
    icu::UnicodeString ramp( icu::UnicodeString::fromUTF8( "ÐðŁłŠšÝýÞþŽž" ) );
    std::cout << ramp << "\n";
    std::cout << ramp[5] << "\n";
    u_printf( "%C\n", ramp[5] );
}

Compiled with g++ -std=c++11 testme.cpp -licuio -licuuc.

ÐðŁłŠšÝýÞþŽž
353
š

(1) ICU uses UTF-16 internally, and UnicodeString::operator[] returns a code unit, not a code point, so you might end up with one half of a surrogate pair. Look up the API docs for the various other ways to index a unicode string.

Is there a C library to convert Unicode code points to UTF-8?

Converting Unicode code points to UTF-8 is so trivial that making the call to a library probably takes more code than just doing it yourself:

/* Encode the code point c as UTF-8 into the buffer pointed to by b
   (at least 4 bytes of room), advancing b past the bytes written. */
if (c<0x80) *b++=c;
else if (c<0x800) *b++=192+c/64, *b++=128+c%64;
else if (c-0xd800u<0x800) goto error; /* surrogates are not valid scalar values */
else if (c<0x10000) *b++=224+c/4096, *b++=128+c/64%64, *b++=128+c%64;
else if (c<0x110000) *b++=240+c/262144, *b++=128+c/4096%64, *b++=128+c/64%64, *b++=128+c%64;
else goto error;

Also, doing it yourself means you can tune the API to the kind of work you need (character at a time? or long strings?). You can drop the error cases if you know your input is a valid Unicode scalar value.
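
For example, one possible character-at-a-time packaging of the snippet above (the helper name and the use of std::string are my own choices, not part of the original answer):

#include <stdexcept>
#include <string>

// Appends the UTF-8 encoding of the code point c to out.
// Rejects surrogates and values above U+10FFFF.
void append_utf8(std::string& out, char32_t c)
{
    if (c < 0x80)                 out += static_cast<char>(c);
    else if (c < 0x800)         { out += static_cast<char>(192 + c/64);
                                  out += static_cast<char>(128 + c%64); }
    else if (c - 0xd800u < 0x800) throw std::invalid_argument("surrogate code point");
    else if (c < 0x10000)       { out += static_cast<char>(224 + c/4096);
                                  out += static_cast<char>(128 + c/64%64);
                                  out += static_cast<char>(128 + c%64); }
    else if (c < 0x110000)      { out += static_cast<char>(240 + c/262144);
                                  out += static_cast<char>(128 + c/4096%64);
                                  out += static_cast<char>(128 + c/64%64);
                                  out += static_cast<char>(128 + c%64); }
    else throw std::invalid_argument("code point out of range");
}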

The other direction is a good bit harder to get correct. I recommend a finite automaton approach rather than the typical bit-arithmetic loops that sometimes decode invalid sequences as aliases for real characters (which is very dangerous and can lead to security problems).

Even if you do end up going with a library, I think you should either try writing it yourself first or at least seriously study the UTF-8 specification before going further. A lot of bad design can come from treating UTF-8 as a black box when the whole point is that it's not a black box but was created to have very powerful properties, and too many programmers new to UTF-8 fail to see this until they've worked with it a lot themselves.

c++ how can i work and validate a utf8 character

First of all, let me tell you that there is a problem with changing the console encoding to UTF-8: not all characters can be rendered. So using SetConsoleOutputCP(CP_UTF8) doesn't ensure that everything will appear on the screen; characters that cannot be rendered will be replaced by some other, odd-looking character.

Instead of SetConsoleOutputCP(CP_UTF8), you would set the program's locale to the system default (e.g. std::setlocale(LC_ALL, "")). This makes the accented vowels appear on the console just like any other printable character.

Then, I would do the conversion using the function you provided above (utf8_to_ws) to transform the UTF-8 string into the platform's wide encoding (UTF-16 on Windows).

Finally, to avoid making your life complicated, you can convert the UTF-8 string to wide (wchar_t) characters and print it in one statement:

std::printf("%ls", utf8_to_ws(utf8).c_str());

This will do the rest for you, allowing you to print the vast majority of characters on the console (all those that are printable for your system).

If you do also use SetConsoleOutputCP(CP_UTF8), you will be able to print a few more characters than with the locale alone, but that call is Windows-specific rather than standard, which would lead you to study alternatives if you work in environments other than Windows.
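
Putting the suggestion together in one place (a sketch; the conversion is the same as utf8_to_ws quoted earlier, inlined here so the example stands alone, and the u8"" literal assumes a pre-C++20 compiler):

#include <clocale>
#include <codecvt>
#include <cstdio>
#include <locale>
#include <string>

int main()
{
    std::setlocale(LC_ALL, "");       // use the system's default locale
    std::string utf8 = u8"á é í ó ú"; // UTF-8 encoded text

    // Same conversion as utf8_to_ws() earlier on this page
    std::wstring_convert<std::codecvt_utf8<wchar_t>, wchar_t> cnv;
    std::wstring wide = cnv.from_bytes(utf8);

    std::printf("%ls\n", wide.c_str()); // %ls expects a wide string
}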

Unicode - generally working with it in C++

  1. Which encoding is most general
    Probably UTF-32, though all three formats can store any character. UTF-32 has the property that every code point is encoded in a single code unit.

  2. Which encoding is supported by wchar_t
    None. That's implementation-defined. On most Windows platforms it's UTF-16, on most Unix platforms it's UTF-32.

  3. Which encoding is supported by the STL
    None really. The STL can store any type of character you want. Just use the std::basic_string<T> template with a character type large enough to hold your code points. Most operations (e.g. std::reverse) do not know about any sort of Unicode encoding, though (see the example at the end of this answer).

  4. Are these encodings all (or not at all) null-terminated?
    No. Null is a legal value in any of those encodings. Technically, NUL is a legal character in plain ASCII too. Null termination is a C thing -- not an encoding thing.

Choosing how to do this has a lot to do with your platform. If you're on Windows, use UTF-16 and wchar_t strings, because that's what the Windows API uses to support unicode. I'm not entirely sure what the best choice is for UNIX platforms but I do know that most of them use UTF-8.
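
As an illustration of point 3, byte-oriented algorithms such as std::reverse happily mangle a multi-byte encoding, while they work as expected on a string of 32-bit code points (a small sketch, assuming a pre-C++20 u8"" literal):

#include <algorithm>
#include <iostream>
#include <string>

int main()
{
    std::string bytes = u8"汤很烫";    // 9 bytes of UTF-8
    std::u32string points = U"汤很烫"; // 3 code points

    std::reverse(bytes.begin(), bytes.end());   // reverses bytes -> invalid UTF-8
    std::reverse(points.begin(), points.end()); // reverses code points -> "烫很汤"

    std::cout << bytes << '\n'; // prints mojibake: the multi-byte sequences were torn apart
}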


