Decode UTF-8 with JavaScript

Decode UTF-8 characters in JavaScript

Since this encoding (quoted-printable) is almost identical to the percent escapes used in URLs, you can simply use:

decodeURIComponent("SK Uni=C4=8Dov vs Prostejov".replace(/=/g, "%"))

JavaScript - Encode/Decode UTF8 to Hex and Hex to UTF8

Your utf8toHex is using encodeURIComponent, and that won't convert everything to hex.

So I've slightly modified your utf8toHex to output proper hex.

Update:
I forgot that toString(16) does not zero-pad the hex, so values below 16 (e.g. line feeds) would emit a single digit and break the round trip. So I've prepended a 0 and sliced to two characters to make sure.

Update 2:
Use TextEncoder; it handles UTF-8 much better than charCodeAt does.

function hexToUtf8(s)
{
  return decodeURIComponent(
    s.replace(/\s+/g, '')             // remove spaces
     .replace(/[0-9a-f]{2}/g, '%$&')  // add '%' before each 2 characters
  );
}

const utf8encoder = new TextEncoder();

function utf8ToHex(s)
{
  const rb = utf8encoder.encode(s);        // UTF-8 bytes of the string
  let r = '';
  for (const b of rb) {
    r += ('0' + b.toString(16)).slice(-2); // zero-pad each byte to two hex digits
  }
  return r;
}

var hex = "d7a452656c6179204f4e214f706572617465642062792030353232";

var utf8 = hexToUtf8(hex);
var hex2 = utf8ToHex(utf8);

console.log("Hex: " + hex);
console.log("UTF8: " + utf8);
console.log("Hex2: " + hex2);
console.log("Is conversion OK: " + (hex == hex2));

Is it safe to decode an arbitrary UTF8-byte-chunk to string?

It depends on what you mean by safe.

You know the size of the original byte string, so you know the maximum size of the decoded string (n bytes of UTF-8 decode to at most n characters). This rules out a lot of modern DoS amplification attacks.

The algorithms are straightforward, but there are a lot of security implications in how the data is used: UTF-8 can hide overlong sequences (the same code point encoded with more bytes than necessary). A good decoder should reject them, though some deliberately accept the overlong encoding of U+0000 (it keeps C strings happy while still letting you represent every Unicode character, including U+0000). You should test this: you do not want a decoded string to contain a 0x00 byte while some functions compute one length and others a different one, which opens the door to buffer overflows.

UCS used a generalization of UTF-8 that can encode more bits (up to 31), at the cost of longer byte sequences. Some UTF-8 decoders allow this, some do not. In general it should be an error, because many string-manipulation functions are not happy with code points above the current Unicode limit.

Normalization has many implications, e.g. removing unnecessary code points: Unicode (and therefore libraries) may have problems with characters composed of too many combining code points (more than 16 or 32; I do not remember exactly the minimum an implementation must support).

Obviously the ordering of code points and composing/decomposing have their own security problems too, but those seem outside the scope of your question, as does the fact that some characters may look like (or be rendered exactly like) others [impersonation].

A good decoder should detect invalid bytes (e.g. 0xC0) in UTF-8, overlong sequences (using more bytes than necessary for a code point), and code points outside Unicode (sequences longer than 4 bytes, as old UCS allowed). But some decoders are much more permissive, so programs should be prepared to handle such input. There are also invalid sequences that simply cannot be decoded; decoders usually do something sensible with them, but some insert an error symbol while others just discard the invalid byte and try to recover.
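You can check a decoder's behaviour from JavaScript itself. A minimal sketch: TextDecoder with { fatal: true } rejects malformed input such as the overlong two-byte encoding of U+0000 (0xC0 0x80), while the default, permissive decoder substitutes U+FFFD replacement characters instead:

const strict = new TextDecoder('utf-8', { fatal: true });
const lenient = new TextDecoder('utf-8'); // default: replace bad bytes with U+FFFD

const overlongNul = new Uint8Array([0xC0, 0x80]); // overlong encoding of "\0"

try {
  strict.decode(overlongNul);
} catch (e) {
  console.log('strict decoder rejected it: ' + e.name); // TypeError
}

console.log(lenient.decode(overlongNul)); // "\uFFFD\uFFFD"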

UTF-8 to readable characters

Try

let decodedString = decodeURIComponent(escape(window.atob(yourString)))

Note that escape is deprecated; the next answer covers modern alternatives in depth.

Using Javascript's atob to decode base64 doesn't properly decode utf-8 strings

The Unicode Problem

Though JavaScript (ECMAScript) has matured, the fragility of Base64, ASCII, and Unicode encoding has caused a lot of headaches (much of which is preserved in this question's history).

Consider the following example:

const ok = "a";
console.log(ok.codePointAt(0).toString(16)); // 61: occupies 1 byte

const notOK = "✓";
console.log(notOK.codePointAt(0).toString(16)); // 2713: occupies > 1 byte

console.log(btoa(ok));    // YQ==
console.log(btoa(notOK)); // error

Why do we encounter this?

Base64, by design, expects binary data as its input. In terms of JavaScript strings, this means strings in which each character occupies only one byte. So if you pass a string into btoa() containing characters that occupy more than one byte, you will get an error, because this is not considered binary data.

Source: MDN (2021)
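To make "binary data" concrete: btoa accepts any string whose characters are all code points ≤ 0xFF. For instance, packing the raw UTF-8 bytes of "✓" (0xE2 0x9C 0x93) into such a string works, and this is exactly the trick the solutions below exploit:

// "✓" is U+2713, which UTF-8 encodes as the bytes E2 9C 93.
// A one-byte-per-character string of those bytes is valid btoa input:
const binary = String.fromCharCode(0xE2, 0x9C, 0x93);
console.log(btoa(binary)); // "4pyT"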

The original MDN article also covered the broken nature of window.btoa and .atob, which have since been mended in modern ECMAScript. The original, now-dead MDN article explained:

The "Unicode Problem"
Since DOMStrings are 16-bit-encoded strings, in most browsers calling window.btoa on a Unicode string will cause a Character Out Of Range exception if a character exceeds the range of an 8-bit byte (0x00–0xFF).



Solution with binary interoperability

(Keep scrolling for the ASCII base64 solution)

Source: MDN (2021)

The solution recommended by MDN is to actually encode to and from a binary string representation:

Encoding UTF8 ⇢ binary

// convert a Unicode string to a string in which
// each 16-bit unit occupies only one byte
function toBinary(string) {
  const codeUnits = new Uint16Array(string.length);
  for (let i = 0; i < codeUnits.length; i++) {
    codeUnits[i] = string.charCodeAt(i);
  }
  return btoa(String.fromCharCode(...new Uint8Array(codeUnits.buffer)));
}

// a string that contains characters occupying > 1 byte
let encoded = toBinary("✓ à la mode"); // "EycgAOAAIABsAGEAIABtAG8AZABlAA=="

Decoding binary ⇢ UTF-8

function fromBinary(encoded) {
  const binary = atob(encoded);
  const bytes = new Uint8Array(binary.length);
  for (let i = 0; i < bytes.length; i++) {
    bytes[i] = binary.charCodeAt(i);
  }
  return String.fromCharCode(...new Uint16Array(bytes.buffer));
}

// our previous Base64-encoded string
let decoded = fromBinary(encoded); // "✓ à la mode"

Where this falls a little short is that you'll notice the encoded string EycgAOAAIABsAGEAIABtAG8AZABlAA== no longer matches the previous solution's string 4pyTIMOgIGxhIG1vZGU=. This is because it is base64 of the raw UTF-16 code units, not of UTF-8 bytes. If this doesn't matter to you (i.e., you aren't exchanging base64 with another system that expects UTF-8), then you're good to go. If, however, you want to preserve the UTF-8 interoperability, you're better off using the solution described below.



Solution with ASCII base64 interoperability

The entire history of this question shows just how many different ways we've had to work around broken encoding systems over the years. Though the original MDN article no longer exists, this solution is still arguably a better one, and does a great job of solving "The Unicode Problem" while maintaining plain text base64 strings that you can decode on, say, base64decode.org.

There are two possible methods to solve this problem:

  • the first one is to percent-escape the whole string as UTF-8 (see encodeURIComponent) and then Base64-encode it;
  • the second one is to convert the UTF-16 DOMString to a UTF-8 array of bytes and then Base64-encode it (a modern sketch of this appears after the decoding examples below).

A note on previous solutions: the MDN article originally suggested using unescape and escape to solve the Character Out Of Range exception problem, but they have since been deprecated. Some other answers here have suggested working around this with decodeURIComponent and encodeURIComponent alone, which has proven to be unreliable and unpredictable. The most recent update to this answer uses modern JavaScript functions to improve speed and modernize the code.

If you're trying to save yourself some time, you could also consider using a library:

  • js-base64 (NPM, great for Node.js)
  • base64-js

Encoding UTF8 ⇢ base64

function b64EncodeUnicode(str) {
  // first we use encodeURIComponent to get percent-encoded UTF-8,
  // then we convert the percent encodings into raw bytes which
  // can be fed into btoa.
  return btoa(encodeURIComponent(str).replace(/%([0-9A-F]{2})/g,
    function toSolidBytes(match, p1) {
      return String.fromCharCode('0x' + p1);
    }));
}

b64EncodeUnicode('✓ à la mode'); // "4pyTIMOgIGxhIG1vZGU="
b64EncodeUnicode('\n'); // "Cg=="

Decoding base64 ⇢ UTF8

function b64DecodeUnicode(str) {
  // Going backwards: from bytestream, to percent-encoding, to original string.
  return decodeURIComponent(atob(str).split('').map(function(c) {
    return '%' + ('00' + c.charCodeAt(0).toString(16)).slice(-2);
  }).join(''));
}

b64DecodeUnicode('4pyTIMOgIGxhIG1vZGU='); // "✓ à la mode"
b64DecodeUnicode('Cg=='); // "\n"

(Why do we need to do this? ('00' + c.charCodeAt(0).toString(16)).slice(-2) zero-pads single-digit hex values; for example, when c == '\n', c.charCodeAt(0).toString(16) returns a, and the padding forces it to be represented as the valid two-digit percent escape 0a.)
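If your environment provides TextEncoder/TextDecoder (all modern browsers and current Node.js), the second method from the list above can also be implemented directly, skipping the percent-encoding detour. A minimal sketch producing the same ASCII-interoperable base64 (the *Modern names are invented for this sketch, not part of the original answer):

// Sketch: UTF-16 string -> UTF-8 bytes -> binary string -> base64, and back.
// Produces the same output as b64EncodeUnicode above.
function b64EncodeUnicodeModern(str) {
  const bytes = new TextEncoder().encode(str); // UTF-8 bytes
  let binary = '';
  for (const b of bytes) {
    binary += String.fromCharCode(b); // one byte per character
  }
  return btoa(binary);
}

function b64DecodeUnicodeModern(b64) {
  const binary = atob(b64);
  const bytes = Uint8Array.from(binary, function(c) { return c.charCodeAt(0); });
  return new TextDecoder().decode(bytes);
}

b64EncodeUnicodeModern('✓ à la mode');           // "4pyTIMOgIGxhIG1vZGU="
b64DecodeUnicodeModern('4pyTIMOgIGxhIG1vZGU='); // "✓ à la mode"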



TypeScript support

Here's the same solution with some additional TypeScript compatibility (via @MA-Maddin):

// Encoding UTF8 ⇢ base64

function b64EncodeUnicode(str: string): string {
  return btoa(encodeURIComponent(str).replace(/%([0-9A-F]{2})/g, function(match, p1) {
    return String.fromCharCode(parseInt(p1, 16));
  }));
}

// Decoding base64 ⇢ UTF8

function b64DecodeUnicode(str: string): string {
  return decodeURIComponent(Array.prototype.map.call(atob(str), function(c) {
    return '%' + ('00' + c.charCodeAt(0).toString(16)).slice(-2);
  }).join(''));
}


The first solution (deprecated)

This used escape and unescape (which are now deprecated, though they still work in all modern browsers):

function utf8_to_b64( str ) {
  return window.btoa(unescape(encodeURIComponent( str )));
}

function b64_to_utf8( str ) {
  return decodeURIComponent(escape(window.atob( str )));
}

// Usage:
utf8_to_b64('✓ à la mode'); // "4pyTIMOgIGxhIG1vZGU="
b64_to_utf8('4pyTIMOgIGxhIG1vZGU='); // "✓ à la mode"

And one last thing: I first encountered this problem when calling the GitHub API. To get this to work on (Mobile) Safari properly, I actually had to strip all whitespace from the base64 source before I could even decode it. Whether or not this is still relevant in 2021, I don't know:

function b64_to_utf8( str ) {
  str = str.replace(/\s/g, '');
  return decodeURIComponent(escape(window.atob( str )));
}

