With a Browser, How to Know Which Decimal Separator the Operating System Uses

With a browser, how do I know which decimal separator the operating system uses?

Here is a simple JavaScript function that returns this information. Tested in Firefox, IE6, and IE7. I had to close and restart the browser between each change to the setting under Control Panel / Regional and Language Options / Regional Options / Customize. It picked up not only the comma and the period, but also oddball custom characters, such as the letter "a".

function whatDecimalSeparator() {
    var n = 1.1;
    // toLocaleString() formats the number for the current locale,
    // so the character at index 1 is the decimal separator.
    n = n.toLocaleString().substring(1, 2);
    return n;
}

console.log('You use "' + whatDecimalSeparator() + '" as decimal separator');
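
Note that substring(1, 2) assumes the locale renders 1.1 as exactly three characters. A slightly more defensive variant (my sketch, not part of the original answer) strips the digits instead:

function whatDecimalSeparatorSafe() {
    // Delete every digit; whatever remains is the separator.
    return (1.1).toLocaleString().replace(/\d/g, '');
}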

How to get System Decimal Separator using JavaScript in Google Chrome?

I do not believe this is possible. Browsers format numbers and dates according to the currently selected browser language, so the following code in Chrome, with English as the language, will always return a dot (.):

function whatDecimalSeparator() {
    var n = 1.1;
    n = n.toLocaleString().substring(1, 2);
    return n;
}

Now change the previous code to use the German locale, for example:

function whatDecimalSeparator() {
    var n = 1.1;
    n = n.toLocaleString('de-DE').substring(1, 2);
    return n;
}

and it will always return a comma (,).

So this solution is basically system-independent: it reflects the browser's language setting rather than the operating system's.
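
For completeness: in modern browsers you can also read the separator for a given locale explicitly. Here is a minimal sketch, assuming support for Intl.NumberFormat.prototype.formatToParts (ES2018):

function getDecimalSeparator(locale) {
    // formatToParts() splits the formatted number into typed tokens;
    // the token with type 'decimal' is the separator.
    var parts = Intl.NumberFormat(locale).formatToParts(1.1);
    return parts.find(function (part) { return part.type === 'decimal'; }).value;
}

console.log(getDecimalSeparator('en-US')); // "."
console.log(getDecimalSeparator('de-DE')); // ","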

How to determine the decimal separator in ASP MVC

You can obtain a CultureInfo from the browser's language settings (for example, via the request's Accept-Language header); that gives you a CultureInfo object you can use like this:

var sep = browserCulture.NumberFormat.NumberDecimalSeparator;
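
For instance, if the browser culture resolves to de-DE, sep will be "," rather than ".".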

See CultureInfo and NumberFormatInfo.

What is the decimal separator symbol in JavaScript?

According to the specification, a DecimalLiteral is defined as:

DecimalLiteral ::
    DecimalIntegerLiteral . DecimalDigits_opt ExponentPart_opt
    . DecimalDigits ExponentPart_opt
    DecimalIntegerLiteral ExponentPart_opt

(the _opt suffix marks an optional component)
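
Each production corresponds to a familiar literal form, for example:

console.log(1.5); // DecimalIntegerLiteral . DecimalDigits
console.log(.5);  // . DecimalDigits
console.log(1e3); // DecimalIntegerLiteral ExponentPart -> 1000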

and the specification describes how the argument of parseFloat is processed:

  1. Let inputString be ToString(string).
  2. Let trimmedString be a substring of inputString consisting of the leftmost character that is not a StrWhiteSpaceChar and all characters to the right of that character. (In other words, remove leading white space.)
  3. If neither trimmedString nor any prefix of trimmedString satisfies the syntax of a StrDecimalLiteral (see 9.3.1), return NaN.
  4. Let numberString be the longest prefix of trimmedString, which might be trimmedString itself, that satisfies the syntax of a StrDecimalLiteral.
  5. Return the Number value for the MV of numberString.

So numberString becomes the longest prefix of trimmedString that satisfies the syntax of a StrDecimalLiteral, i.e. the first parseable literal number it finds in the input. Only the . can be used to specify a floating-point number. If you're accepting input from different locales, use a string replace:

function parseLocalNum(num) {
    // Normalize a comma decimal separator to a dot, then coerce with unary plus.
    return +(num.replace(",", "."));
}

The function uses the unary plus operator instead of parseFloat because it seems you want to be strict about the input: parseFloat("1ABC") would return 1, whereas the unary operator in +"1ABC" returns NaN. That makes it much easier to validate the input; parseFloat just guesses that the input is in the correct format.
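
To illustrate the difference (parseStrict is a hypothetical wrapper added here, not part of the original answer):

function parseStrict(input) {
    // Normalize a comma separator, then coerce; reject anything left over.
    var n = +(input.replace(",", "."));
    return Number.isNaN(n) ? null : n;
}

console.log(parseFloat("1ABC"));  // 1    -- parses up to the first invalid character
console.log(+"1ABC");             // NaN  -- the whole string must be numeric
console.log(parseStrict("1,5"));  // 1.5
console.log(parseStrict("1ABC")); // null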

How to determine which character is used as decimal separator (radix point) or thousand separator, under current locale?

You can get the locale's radix character (decimal separator) with:

printf -v ds '%#.1f' 1
ds=${ds//[0-9]}

And the thousands grouping separator, with:

printf -v ts "%'d" 1111
ts=${ts//1}

Some locales (e.g. C) have no thousands separator, in which case $ts is empty. Conversely, if the radix character is not defined by the locale, POSIX (printf(3)) says it should default to ".". The # flag guarantees that it will be printed.
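
Under an en_US.UTF-8 locale, for example, this yields "." for $ds and "," for $ts; under the C locale, $ds is "." and $ts is empty.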


