Which Is Faster: if (bool) or if (int)?

Which is faster: if (bool) or if (int)?

Makes sense to me. Your compiler apparently defines a bool as an 8-bit value, and your system ABI requires it to "promote" small (< 32-bit) integer arguments to 32-bit when pushing them onto the call stack. So to compare a bool, the compiler generates code to isolate the least significant byte of the 32-bit argument that g receives, and compares it with cmpb. In the first example, the int argument uses the full 32 bits that were pushed onto the stack, so it simply compares against the whole thing with cmpl.
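
The question's code isn't reproduced above, but a minimal, hypothetical reconstruction of its shape (only g's name survives in the answer; f and the return values are guesses) looks like this:

// Hypothetical reconstruction, not the question's exact code: the same
// branch written against an int parameter and against a bool parameter.
int f(int i)  { return i ? 99 : -99; }  // compares the full 32 bits: cmpl
int g(bool b) { return b ? 99 : -99; }  // isolates the low byte:     cmpb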

Which value is better to use? Boolean true or Integer 1?

A boolean true is, well, a boolean value. Use it whenever you want to express that a certain binary condition is met.

The integer literal 1 is a number. Use it whenever you are counting something.

Don't use integers for booleans and vice versa. They are different.

Consider a variable int isEnabled. Of course, I can guess that 0 and 1 may be the only intended values for this variable. But language-wise, nothing keeps me from assigning 4247891. Using a boolean, however, restricts the valid values to true and false. This leaves no room for speculation.

(C++ ints and bools are somewhat convertible, but relying on that is generally frowned upon.)
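
To illustrate, here is a short C++ snippet (the variable names are invented for the example):

#include <iostream>

int main() {
    int  isEnabledInt  = 1;     // legal, but so is: isEnabledInt = 4247891;
    bool isEnabledBool = true;  // only true or false are representable

    // The conversions mentioned above do exist, and they invite exactly
    // the ambiguity the answer warns about:
    bool b = 4247891;           // converts to true: any nonzero int is truthy
    std::cout << std::boolalpha << b << '\n';  // prints "true"

    (void)isEnabledInt;
    (void)isEnabledBool;
}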

Boolean vs Int in JavaScript

Disclaimer: I can only speak for Firefox, but I would guess Chrome is similar.

First example (http://jsperf.com/bool-vs-int):

  1. The Not operation
    JägerMonkey (SpiderMonkey's JavaScript method JIT) inlines the check for boolean first and then just xors, which is really fast. (We don't know the type of a/b ahead of time, so we need to check it.)
    The second check is for int, so if a/b were ints this would be a little bit slower.

  2. The Subtract operation.
    We again don't know the type of c/d. And again you are lucky: we assume ints and inline that first. But because number operations in JavaScript are specified to behave like IEEE 754 doubles, we need to check for overflow. So the only difference is a "sub" plus a "conditional jump" on overflow, versus the plain "xor" in case 1. (A rough sketch of this int fast path follows below.)
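
For concreteness, here is a hedged C++ sketch of such an int fast path; subFastPath is an invented name, not SpiderMonkey code, and __builtin_sub_overflow is a GCC/Clang builtin:

#include <cstdint>
#include <optional>

// Illustrative sketch only, not SpiderMonkey source: subtract on the
// assumed-int fast path, and report overflow so the caller can redo the
// operation in doubles (JavaScript numbers are IEEE 754 doubles).
std::optional<std::int32_t> subFastPath(std::int32_t a, std::int32_t b) {
    std::int32_t result;
    if (__builtin_sub_overflow(a, b, &result))  // GCC/Clang builtin
        return std::nullopt;                    // overflow: take the double path
    return result;
}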

Second example:
(I am not 100% sure about these, because I never really looked at this code before)

  1. and 3. The If.
    We inline a check for boolean; all other cases end up calling a function that converts the value to a boolean. (See the sketch after this list.)

  2. The Compare and If.
    This one is a really complex case from the implementation point of view, because it was really important to optimize equality operations. I think I found the right code, and it seems to suggest that we first check for doubles and then for integers.
    And because we know that the result of a compare is always a boolean, we can optimize the if statement.
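
A hedged C++ sketch of the "inline boolean check, otherwise call a converter" pattern from point 1; the Value type and function names here are invented for illustration and are not SpiderMonkey's actual types:

#include <variant>

// Illustrative only: a tagged value that may hold a bool, an int, or a
// double, plus the branch logic described above.
using Value = std::variant<bool, int, double>;

// Out-of-line fallback, roughly following JavaScript's ToBoolean rules.
bool toBooleanSlow(const Value& v) {
    if (auto* i = std::get_if<int>(&v))    return *i != 0;
    if (auto* d = std::get_if<double>(&v)) return *d == *d && *d != 0.0; // NaN and 0 are falsy
    return false;
}

bool branchCondition(const Value& v) {
    if (auto* b = std::get_if<bool>(&v))   // fast inline type check
        return *b;
    return toBooleanSlow(v);               // anything else: call the converter
}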

Followup: I dumped the generated machine code, so if you are still interested, here you go.

Overall this is just a piece of a bigger picture. If we knew what types the variables had and knew that the subtraction wouldn't overflow, then we could make all these cases about equally fast.
These efforts are being made with IonMonkey and V8's Crankshaft. This means you should avoid optimizing based on this information, because:

  1. it's already pretty fast
  2. the engine developers take care of optimizing it for you
  3. it will be even faster in the future.

Performance of bool vs int array

Looking at the assembly output (go run -gcflags '-S' test.go), there are some differences:

Bool:

0x0075 00117 (test.go:11)   MOVBLZX (AX)(BX*1), DI
0x0079 00121 (test.go:11)   TESTB   DIB, DIB

Int:

0x0075 00117 (test.go:28)   MOVQ    (AX)(BX*8), DI
0x0079 00121 (test.go:28)   CMPQ    DI, $1

Byte/uint8:

0x0075 00117 (test.go:28)   MOVBLZX (AX)(BX*1), DI
0x0079 00121 (test.go:28)   CMPB    DIB, $1

The rest of the assembly is near-identical for me on Go 1.8.*.

So: 1) the data types are sized differently, and 2) the comparison operations are different.
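
The answer doesn't show the Go source, but the shape of the difference carries over to other compiled languages. A C++ analogue (the function names are mine) typically produces the same pattern:

#include <cstddef>
#include <cstdint>

// Count set entries in a bool array vs. an int64 array. Compilers typically
// emit a one-byte load + test for the bool (cf. MOVBLZX/TESTB above) and an
// eight-byte load + compare for the int (cf. MOVQ/CMPQ above).
std::size_t countTrue(const bool* a, std::size_t n) {
    std::size_t count = 0;
    for (std::size_t i = 0; i < n; ++i)
        if (a[i]) ++count;        // one-byte load, test against zero
    return count;
}

std::size_t countOnes(const std::int64_t* a, std::size_t n) {
    std::size_t count = 0;
    for (std::size_t i = 0; i < n; ++i)
        if (a[i] == 1) ++count;   // eight-byte load, compare with $1
    return count;
}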

Integer vs Boolean array Swift Performance

TL;DR:

  • Do not attempt to optimize your code in a Debug build. Always run it through the Profiler. Int was faster than Bool in Debug, but the opposite was true when run through the Profiler.
  • Heap allocation is expensive. Use your memory judiciously. (This question discusses the complications in C, but it is also applicable to Swift.)

Long answer

First, let's refactor your code for easier execution:

func useBoolArray(n: Int) {
    var prime = [Bool](repeating: true, count: n + 1)
    var p = 2

    while (p * p) <= n {
        if prime[p] == true {
            var i = p * 2
            while i <= n {
                prime[i] = false
                i = i + p
            }
        }
        p = p + 1
    }
}

func useIntArray(n: Int) {
    var prime = [Int](repeating: 1, count: n + 1)
    var p = 2

    while (p * p) <= n {
        if prime[p] == 1 {
            var i = p * 2
            while i <= n {
                prime[i] = 0
                i = i + p
            }
        }
        p = p + 1
    }
}

Now, run it in the Debug build:

let count = 100_000_000
let start = DispatchTime.now()

useBoolArray(n: count)
let boolStop = DispatchTime.now()

useIntArray(n: count)
let intStop = DispatchTime.now()

print("Bool array:", Double(boolStop.uptimeNanoseconds - start.uptimeNanoseconds) / Double(NSEC_PER_SEC))
print("Int array:", Double(intStop.uptimeNanoseconds - boolStop.uptimeNanoseconds) / Double(NSEC_PER_SEC))

// Bool array: 70.097249517
// Int array: 8.439799614

So Bool is a lot slower than Int, right? Let's run it through the Profiler by pressing Cmd + I and choosing the Time Profiler template. (Somehow the Profiler wasn't able to separate these functions, probably because they were inlined, so I had to run only one function per attempt):

let count = 100_000_000
useBoolArray(n: count)
// useIntArray(n: count)

// Bool: 1.15ms
// Int: 2.36ms

Not only are they orders of magnitude faster than in Debug, but the results are reversed: Bool is now faster than Int! The Profiler doesn't tell us why, so we must go on a witch hunt. Let's check the memory allocation by adding an Allocations instrument:

[Allocations instrument screenshots: Bool array vs Int array]

Ha! Now the differences are laid bare. The Bool array uses only one-eighth as much memory as the Int array. A Swift array shares internals with NSArray, so it's allocated on the heap, and heap allocation is slow.

When you think about it some more: a Bool value only needs 1 bit of information, but Swift represents it with a single byte, while an Int takes 8 bytes (64 bits) on a 64-bit machine, hence the 1:8 memory ratio. In Debug, the runtime must also perform all kinds of checks to ensure that it's actually dealing with a Bool value, which is likely why the Bool array method takes significantly longer there.

Moral of the story: don't try to optimize your code in Debug mode. It can be misleading!

In Java, is boolean or integer arithmetic faster?

It will depend on the underlying architecture. In general, the fastest types will be the ones that correspond to your native word size: 32-bit on a 32-bit machine, or 64-bit on a 64-bit machine.

So int is faster than long on a 32-bit machine; long might be faster than int on a 64-bit machine, or they might be the same. As for boolean, I would imagine it uses the native word size anyway, so it will be pretty much exactly as fast as int or long.

Doubles (using floating point arithmetic) tend to be slower.

As long as you are dealing with primitive types, the difference will be negligible. The really slow types are the class types (like Integer or Boolean) -- avoid those if you want performance.

Why vector<int> is faster than vector<bool> in the following case

Access to single bits is usually slower than to complete addressable units (bytes in the lingo of C++). For example, to write a byte, you just issue a write instruction (mov on x86). To write a bit, you need to load the byte containing it, use bitwise operators to set the right bit within the byte, and then store the resulting byte.
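
A minimal sketch of that read-modify-write sequence (the function names are illustrative, and LSB-first bit order within each byte is assumed):

#include <cstddef>
#include <cstdint>

// Writing a whole byte is a single store; writing a single bit requires
// loading the containing byte, masking, and storing it back. This is
// essentially what vector<bool> has to do on every element write.
void writeByte(std::uint8_t* bytes, std::size_t i, std::uint8_t value) {
    bytes[i] = value;                                // one mov
}

void writeBit(std::uint8_t* bits, std::size_t i, bool value) {
    std::uint8_t byte = bits[i / 8];                 // load the containing byte
    std::uint8_t mask = std::uint8_t(1u << (i % 8)); // select the bit within it
    byte = std::uint8_t(value ? (byte | mask) : (byte & ~mask));
    bits[i / 8] = byte;                              // store the result back
}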

The compact size of a bit vector is nice for storage requirements, but it will result in a slowdown except when your data becomes large enough that caching issues play a role.

If you want to have speed and still be more efficient than 4 bytes per value, try a vector<unsigned char>.

Is accessing a boolean variable faster than accessing int flags?

Optimization should hinge on how often you are actually going to use something; unnecessary optimization is a big killer of programming efficiency. The way to determine IF you need to optimize something is to figure out how often it will actually be used.

So, if you are only going to use something once or twice a second, or when the user manually does something, then the answer is most likely "don't bother". On the other hand, if the optimization is in something that runs continuously, a thousand times a second, then the answer is "maybe". The latter depends on how many clock cycles the operation eats. In the case of flag checking, the answer is most likely no, as the clock ticks for either are negligible. The better optimization might be to ask why you are calling that function or routine so many times to begin with.


