Windows Cmd Encoding Change Causes Python Crash

Python (2.7) and reading Unicode argv from the Windows command line

The file name is being received correctly. You can verify this by encoding sys.argv[1] as UTF-8, writing it to a file opened in binary mode, and then opening that file in a text editor that supports UTF-8.

The Windows command prompt cannot display the characters correctly, even after the 'chcp' command switches the code page to UTF-8, because the terminal font does not contain those glyphs. The command prompt cannot substitute characters from other fonts.

Python 2.7 : LookupError: unknown encoding: cp65001

The error means that Python 2.7 has no codec registered under the name cp65001 (the Windows UTF-8 code page), so the Unicode characters your script is trying to print cannot be encoded for the current console.

Also try running set PYTHONIOENCODING=UTF-8, then execute pip --version without reloading the terminal. If everything works, add PYTHONIOENCODING as an environment variable with the value UTF-8. See the "How to set the path and environment variables in Windows" article for how to add a Windows variable.

NOTE: For PowerShell use $env:PYTHONIOENCODING = "UTF-8"
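To confirm the variable is actually being picked up, you can ask a child interpreter what encoding its stdout ended up with (a sketch; the output is lowercased because some Python versions normalize the name):

```python
import os
import subprocess
import sys

# Start a child interpreter with PYTHONIOENCODING set and ask it
# which encoding its stdout was given.
env = dict(os.environ, PYTHONIOENCODING="UTF-8")
out = subprocess.check_output(
    [sys.executable, "-c", "import sys; print(sys.stdout.encoding)"],
    env=env,
)
print(out.decode().strip().lower())  # → utf-8
```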

You can also try installing win-unicode-console with pip:

pip install win-unicode-console

Then reload your terminal and run pip --version again.

You can also follow the suggestions from the "Windows cmd encoding change causes Python crash" answer, since it addresses the same problem.

Will Python print() break CMD in the long term?

CMD discards old output once its scroll buffer fills, so nothing will be broken.

Unicode issue with Python3.7 and Scheduled Tasks

Are you sure the schtasks output is in utf-8?

0x81 is ü in the IBM CP437 and IBM CP850 / IBM CP858 encodings.

The pragmatic way to check is to print the string with repr(), or to decode it with an errors=... handler that keeps unmappable bytes visible (e.g. decode(encoding, errors='backslashreplace')), and then match the result against encoding tables to see which one fits.
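A quick way to run that check for the 0x81 byte in question (a sketch; cp1252 is included as an example of a code page where 0x81 is undefined):

```python
# See what the byte 0x81 decodes to under a few single-byte code pages.
raw = b"\x81"
for codec in ("cp437", "cp850", "cp858", "cp1252"):
    try:
        print(codec, repr(raw.decode(codec)))
    except UnicodeDecodeError:
        print(codec, "-> undecodable")
```

The three OEM code pages all map 0x81 to ü, while cp1252 raises UnicodeDecodeError, which is consistent with the schtasks output being in an OEM encoding rather than UTF-8.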

Python unicode write to file crashes in command line but not in IDE

As Fenikso said, you should encode a string before writing it to a file. The reason that file.write() doesn't do this itself is that you need to specify which encoding (UTF-8, UTF-16, etc.) you want to use. The Python module codecs lets you create stream objects that know which encoding to use and apply it automatically; that's what Fenikso is using in his second example.
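A minimal sketch of that approach (the file name out.txt is arbitrary; codecs.open behaves the same under Python 2 and 3):

```python
import codecs

# codecs.open returns a stream that encodes unicode text for you,
# so no explicit .encode() call is needed at each write.
with codecs.open("out.txt", "w", encoding="utf-8") as f:
    f.write(u"caf\u00e9")

with codecs.open("out.txt", "r", encoding="utf-8") as f:
    print(f.read())  # → café
```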

As to why your code works in the IDE but not the command line, my guess is that your IDE is setting the "default encoding" to some non-default value. Try running this in both the IDE and the command line and see if it differs:

>>> import sys
>>> print sys.getdefaultencoding()

Here's some related information: http://blog.ianbicking.org/illusive-setdefaultencoding.html

os.popen strange codec issue

I don't know of a way to make os.popen use a specific encoding (and I don't think it's possible), so here is a solution using subprocess:

import subprocess

# cp866 is the OEM (DOS) code page for Russian; substitute whatever
# code page your console actually uses.
output = subprocess.run("dir", shell=True, encoding="cp866",
                        stdout=subprocess.PIPE).stdout
print(output)

Edit: dir is a shell builtin, so you need shell=True; for normal commands you can pass a list of arguments instead.
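For example, a plain executable can be invoked with a list argument and no shell=True (a sketch; sys.executable just stands in for any command on your PATH):

```python
import subprocess
import sys

# For a normal executable (not a shell builtin), pass an argument list
# and drop shell=True; encoding= still decodes the output for you.
result = subprocess.run(
    [sys.executable, "--version"],
    encoding="utf-8",
    stdout=subprocess.PIPE,
    stderr=subprocess.STDOUT,
)
print(result.stdout.strip())
```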


