Unicode Box Drawing Characters Not Printed in Ruby

Unicode characters for line drawing or box drawing

Unicode does include some characters intended for this. Wikipedia has a list:

http://en.wikipedia.org/wiki/Box-drawing_character
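
If you just want to browse them from Ruby, here is a quick sketch (my own illustration, not from the answer) that dumps the whole Box Drawing block, U+2500 through U+257F:

# Print the Box Drawing block, sixteen characters per row.
(0x2500..0x257F).each_slice(16) do |row|
  puts row.map { |cp| cp.chr(Encoding::UTF_8) }.join(' ')
end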

Display Extended-ASCII character in Ruby

I am trying to use the box-drawing characters to create graphics in my console program.

You should use UTF-8 nowadays.

Here's an example using characters from the Box Drawing Unicode block:

puts "╔═══════╗\n║ Hello ║\n╚═══════╝"

Output:

╔═══════╗
║ Hello ║
╚═══════╝

So if, for instance, I wanted to use the Unicode character U+2588 (the full block), how would I print that to the console in Ruby?

You can use:

puts "\u2588"

or:

puts 0x2588.chr('UTF-8')

or simply:

puts '█'
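
Putting the two ideas together, here is a minimal sketch (the box helper is my own illustration, not from the answer) that frames arbitrary text, assuming every character occupies a single terminal column:

# Draw a double-line box around a string of single-width characters.
def box(text)
  bar = '═' * (text.length + 2)
  puts "╔#{bar}╗"
  puts "║ #{text} ║"
  puts "╚#{bar}╝"
end

box('Hello')  # prints the same three-line box as the literal example above

Note the single-width assumption: text.length counts characters, not columns, so wide characters would misalign the border. The East Asian Width answer further down addresses exactly that.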

Prawn with some emojis for ttf-font not rendering text correctly

It turns out Prawn doesn't know how to parse joined emoji (those formed by a set of simple emoji joined by \u200D). prawn/emoji is supposed to handle that, but there is a bug in the regex used to identify emoji that causes joined emoji to be drawn separately.

The index and the image gallery it uses are also a bit outdated.

The solution is to replace @emoji_index.to_regexp in the Drawer class of the prawn/emoji source code with a regex that can recognize joined emoji, and to update the emoji gallery. After that, run the task that rebuilds the index and you are good to go.

The fonts have nothing to do with it.
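
The answer does not show the replacement regex, but as a rough sketch of the idea (the pattern and constant name here are illustrative, not the actual prawn/emoji fix), a regex that keeps ZWJ sequences together could look like this:

# Match a run of emoji glued together by U+200D (ZERO WIDTH JOINER),
# so a joined emoji such as a family is one match, not three or four.
# Requires Ruby 2.5+ for the \p{Emoji_Presentation} property.
JOINED_EMOJI = /\p{Emoji_Presentation}(?:\u200D\p{Emoji_Presentation})*/

'👨‍👩‍👧'.scan(JOINED_EMOJI).length #=> 1

A production pattern would also need to handle variation selectors and skin-tone modifiers, which this sketch ignores.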

Ruby: Checking for East Asian Width (Unicode)

Late to the party, but hopefully still helpful: in Ruby, you can use the unicode-display_width gem to check a string's East Asian width:

require 'unicode/display_width/string_ext'  # adds String#display_width
"⚀".display_width #=> 1
'一'.display_width #=> 2
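
This is exactly what the box-drawing examples above need: padding by display width instead of character count keeps borders aligned when wide characters appear. A small sketch (my own, using the gem's module API):

require 'unicode/display_width'

# Pad to a target number of terminal columns, not characters.
def pad(text, width)
  text + ' ' * (width - Unicode::DisplayWidth.of(text))
end

puts "║ #{pad('一', 4)} ║"   # '一' takes 2 columns, so 2 spaces are added
puts "║ #{pad('abcd', 4)} ║" # already 4 columns, no padding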

Printing a Unicode Symbol in C

Two problems: first, a wchar_t must be printed with the %lc format, not %c. Second, unless you call setlocale the character set is not set up properly, and you probably get ? instead of your star. The following code works, though:

#include <stdio.h>
#include <wchar.h>
#include <locale.h>

int main(void) {
    setlocale(LC_CTYPE, "");  /* pick up the character set from the environment */
    wchar_t star = 0x2605;    /* U+2605 BLACK STAR */
    wprintf(L"%lc\n", star);
    return 0;
}

And for ncurses, just initialize the locale before the call to initscr.

Displaying extended Unicode characters in Python curses

The escape sequence \u interprets the following four characters (in your case 1F33) as a 16-bit hexadecimal value, which is not what you want. Since your code point does not fit in 16 bits, you need the escape sequence \U, which takes a 32-bit hexadecimal value (eight characters long).

In [1]: '\U0001F332'
Out[1]: '🌲'

(I am guessing from your output that you are using Python 3.)

You might also have issues with your terminal encoding and font, but your current code doesn't let you even get to that point.
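
For what it's worth, Ruby draws the same line: \uXXXX takes exactly four hex digits, and code points above U+FFFF need the braced form instead:

puts "\u{1F332}"  # evergreen tree; plain \u1F33 would stop after four digits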

Ruby print text, characters in Hindi, Sanskrit etc languages

In Ruby we have the Array#pack('U*') method, which converts the given array's values (treated as Unicode code points) into the corresponding characters, and "XYZ".unpack('U*'), which returns the code point of each character.
After executing,

[2309].pack('U*')

I got "अ" (U+0905, DEVANAGARI LETTER A).
Let's understand it with a fuller example. Say I want मिरो में आपके लिए एक नया सिलाई आदेश.
Then execute

"मिरो में आपके लिए एक नया सिलाई आदेश".unpack('U*')

This will return the Unicode code point of each character of the above string:

[2350, 2367, 2352, 2379, 32, 2350, 2375, 2306, 32, 2310, 2346, 2325, 2375, 32, 2354, 2367, 2319, 32, 2319, 2325, 32, 2344, 2351, 2366, 32, 2360, 2367, 2354, 2366, 2312, 32, 2310, 2342, 2375, 2358]

Then to regenerate the above Hindi string,

[2350, 2367, 2352, 2379, 32, 2350, 2375, 2306, 32, 2310, 2346, 2325, 2375, 32, 2354, 2367, 2319, 32, 2319, 2325, 32, 2344, 2351, 2366, 32, 2360, 2367, 2354, 2366, 2312, 32, 2310, 2342, 2375, 2358].pack('U*')

Note: .unpack('U*') returns Unicode code points because we asked for them with the 'U*' directive. These pack/unpack methods also work with binary and many other formats.
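
For single characters there is a shortcut that round-trips the same way, matching the 0x2588.chr('UTF-8') example earlier:

'अ'.ord                   #=> 2309
2309.chr(Encoding::UTF_8) #=> "अ"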

Read more about pack/unpack


