Unary Plus (+) Against Literal String

Unary + can be applied to values of arithmetic type, unscoped enumeration type, and pointer type because the C++ standard defines it that way, in C++11 §5.3.1/7.

In this case the string literal, which is of type array of char const, decays to pointer to char const.

It's always a good idea to look at the documentation when one wonders about the functionality of something.


“The operand of the unary + operator shall have arithmetic, unscoped enumeration, or pointer type and the
result is the value of the argument. Integral promotion is performed on integral or enumeration operands.
The type of the result is the type of the promoted operand.”

How to convert a number to string via unary operator in Javascript?

There is no "bitwise operator" that returns a string, and the "unary plus operator" is not a bitwise operator in the first place. The closest equivalents would be:

 "" + 12
`${12}`
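
Both forms produce an actual string, which is easy to verify with typeof in any console:

typeof ("" + 12)   // "string" (the number is coerced and concatenated)
typeof (`${12}`)   // "string" (template literal interpolation)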

What's the point of unary plus operator in Ruby?

Perhaps it's just a matter of consistency, both with other programming languages and with unary minus, which it mirrors.

Found support for this in The Ruby Programming Language (co-written by David Flanagan and Yukihiro Matsumoto, who designed Ruby):

The unary plus is allowed, but it has no effect on numeric operands—it simply returns the value of its operand. It is provided for symmetry with unary minus, and can, of course, be redefined.

parseInt vs unary plus, when to use which?

Well, here are a few differences I know of:

  • An empty string "" converts to 0 under the unary +, while parseInt evaluates it to NaN. IMO, a blank string should be NaN.

      +'' === 0;              //true
      isNaN(parseInt('',10)); //true
  • The unary + acts more like parseFloat since it also accepts decimals.

    parseInt on the other hand stops parsing when it sees a non-numerical character, such as the period that is intended to be a decimal point.

      +'2.3' === 2.3;           //true
      parseInt('2.3',10) === 2; //true
  • parseInt and parseFloat parse the string from left to right, building up a number as they go. If they hit an invalid character, they return whatever has been parsed so far as a number, or NaN if nothing was parsed as a number.

    The unary + on the other hand returns NaN if the string as a whole cannot be converted to a number.

      parseInt('2a',10) === 2; //true
      parseFloat('2a') === 2;  //true
      isNaN(+'2a');            //true
  • As seen in the comment of @Alex K., parseInt and parseFloat parse character by character. This means hex and exponent notation will fail, since the x and e are treated as non-numerical characters (at least in base 10).

    The unary + will convert them properly though.

      parseInt('2e3',10) === 2;  //true. This is supposed to be 2000
      +'2e3' === 2000;           //true. This one's correct.

      parseInt("0xf", 10) === 0; //true. This is supposed to be 15
      +'0xf' === 15;             //true. This one's correct.
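
A small sketch pulling these differences together. The helper names below (toNumberStrict and leadingNumber) are hypothetical, not anything standard; they just illustrate when each behavior is the one you want:

// Hypothetical helpers illustrating the trade-off described in the list above.

// Strict: the whole string must be numeric (decimals, exponents and hex are fine),
// otherwise you get NaN.
function toNumberStrict(s) {
  return +s;
}

// Lenient: take the leading numeric part and ignore any trailing characters.
function leadingNumber(s) {
  return parseFloat(s);
}

toNumberStrict('2e3');  // 2000
toNumberStrict('2a');   // NaN
leadingNumber('2a');    // 2
leadingNumber('2.3');   // 2.3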

JavaScript increment unary operator (++) on strings

The problem is that your ++ operator implicitly involves an assignment, and you can't assign a new value to a string constant. Note that

++2;

is also erroneous for the same reason.
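
Assuming the goal was to increment a numeric string, a minimal sketch is to put the value in a variable first, since ++ needs an assignable target (the variable name here is just an example):

var count = "5";           // a numeric string held in a variable, not a literal
count++;                   // coerced to the number 5, then incremented
console.log(count);        // 6
console.log(typeof count); // "number"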

Why is the result of ('b'+'a'+ + 'a' + 'a').toLowerCase() 'banana'?

+'a' resolves to NaN ("Not a Number") because it coerces a string to a number, while the character a cannot be parsed as a number.

document.write(+'a');
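
The middle + + is read as string concatenation followed by a unary plus, so the expression groups as (('b' + 'a') + (+'a')) + 'a'. Each line below is a standalone expression you can paste into a console:

'b' + 'a'               // "ba"      (string concatenation)
+ 'a'                   // NaN       (unary plus on a non-numeric string)
'ba' + NaN              // "baNaN"   (NaN is coerced to the string "NaN")
'baNaN' + 'a'           // "baNaNa"
'baNaNa'.toLowerCase()  // "banana"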

Why does F# have a unary plus operator?

I'll summarize the extended comments. Possible reasons (until a more authoritative answer is given):

  1. Consistency with OCaml, from which F# is derived (if you're doing something wrong/unnecessary it's best to keep doing it so people know what to expect :-))
  2. Overloading (mostly for custom types)
  3. Symmetry with unary negation

How to add two strings as if they were numbers?

I would use the unary plus operator to convert them to numbers first.

+num1 + +num2;
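
For example (num1 and num2 here are hypothetical variables holding numeric strings):

var num1 = "4";
var num2 = "7";

var sum = +num1 + +num2;  // each unary + converts its string to a number first
console.log(sum);         // 11
console.log(num1 + num2); // "47" (plain + would concatenate the strings)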

What is this + before the u8 string literal prefix for?

The + is "explicitly" performing the array-to-pointer implicit conversion, producing a prvalue const char8_t* (or const char* before C++20), instead of an lvalue array.

This is unnecessary, since reinterpret_cast<T> (when T is not a reference) performs this conversion anyway.

(Possibly it was used to prevent confusion with the similar reinterpret_cast<const char* const&>(u8"..."), which reinterprets the bytes of the array itself as a pointer value; that is obviously not what was wanted. But I personally wouldn't worry about this.)


