Difference Between JVM's lookupswitch and tableswitch

Difference between JVM's LookupSwitch and TableSwitch?

The difference is that

  • lookupswitch uses a table with keys and labels
  • tableswitch uses a table with labels only.

When performing a tableswitch, the int value on top of the stack is used directly as an index into the table to grab the jump destination and perform the jump immediately. The whole lookup+jump process is an O(1) operation, which means it's blazing fast.

When performing a lookupswitch, the int value on top of the stack is compared against the keys in the table until a match is found, and then the jump destination next to that key is used to perform the jump. Since a lookupswitch table must always be sorted so that keyX < keyY for every X < Y, the whole lookup+jump process is an O(log n) operation: the key is searched using a binary search algorithm, so it's not necessary to compare the int value against all possible keys to find a match or to determine that none of them matches. O(log n) is somewhat slower than O(1), yet still fine, since many well-known algorithms are O(log n) and these are usually considered fast; even O(n) or O(n * log n) is considered pretty good (slow/bad algorithms are O(n^2), O(n^3), or worse).
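The binary search over the sorted keys can be sketched in plain Java (a sketch only; HotSpot's interpreter is written in C++/assembly, and the class and method names here are invented):

```java
// Sketch of how a lookupswitch finds its target: binary search over the
// sorted (key, target) pairs stored in the instruction.
public class LookupSwitchSketch {
    // Returns the index of the matching key, or -1 for the default case.
    static int findTarget(int[] sortedKeys, int value) {
        int lo = 0, hi = sortedKeys.length - 1;
        while (lo <= hi) {
            int mid = (lo + hi) >>> 1;                 // unsigned shift avoids overflow
            if (sortedKeys[mid] == value) return mid;  // jump to table[mid]
            if (sortedKeys[mid] < value) lo = mid + 1;
            else hi = mid - 1;
        }
        return -1;                                     // jump to the default label
    }

    public static void main(String[] args) {
        int[] keys = {1, 10, 100, 1000};
        System.out.println(findTarget(keys, 100));  // 2
        System.out.println(findTarget(keys, 7));    // -1
    }
}
```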

The compiler decides which instruction to use based on how compact the switch statement is, e.g.

switch (inputValue) {
    case 1: // ...
    case 2: // ...
    case 3: // ...
    default: // ...
}

The switch above is perfectly compact; it has no numeric "holes". The compiler will create a tableswitch like this:

 tableswitch 1 3
OneLabel
TwoLabel
ThreeLabel
default: DefaultLabel

The pseudo code from the Jasmin page explains this pretty well:

int val = pop();                // pop an int from the stack
if (val < low || val > high) {  // if it's less than <low> or greater than <high>,
    pc += default;              // branch to default
} else {                        // otherwise
    pc += table[val - low];     // branch to entry in table
}

This code makes clear how such a tableswitch works: val is inputValue, low would be 1 (the lowest case value in the switch), and high would be 3 (the highest case value in the switch).

Even with some holes a switch can be compact, e.g.

switch (inputValue) {
    case 1: // ...
    case 3: // ...
    case 4: // ...
    case 5: // ...
    default: // ...
}

The switch above is "almost compact", it only has a single hole. A compiler could generate the following instruction:

 tableswitch 1 5
OneLabel
FakeTwoLabel
ThreeLabel
FourLabel
FiveLabel
default: DefaultLabel

; <...code left out...>

FakeTwoLabel:
DefaultLabel:
; default code

As you can see, the compiler has to add a fake case for 2, FakeTwoLabel. Since 2 is not a real value of the switch, FakeTwoLabel is in fact a label that redirects code flow exactly to where the default case is located, since a value of 2 should in fact execute the default case.

So a switch doesn't have to be perfectly compact for the compiler to create a tableswitch, yet it should at least be pretty close to compactness. Now consider the following switch:

switch (inputValue) {
    case 1: // ...
    case 10: // ...
    case 100: // ...
    case 1000: // ...
    default: // ...
}

This switch is nowhere near compact: it has more than a hundred times more holes than values. One would call this a sparse switch. The compiler would have to generate almost a thousand fake cases to express this switch as a tableswitch. The result would be a huge table, blowing up the size of the class file dramatically. This is not practical. Instead it will generate a lookupswitch:

lookupswitch
1 : Label1
10 : Label10
100 : Label100
1000 : Label1000
default : DefaultLabel

This table has only 5 entries, instead of over a thousand. The table has 4 real values; O(log 4) is 2 (log here means log to the base of 2, not base 10, since computers operate on binary numbers). That means it takes the VM at most two comparisons to find the label for the inputValue, or to conclude that the value is not in the table and the default branch must be taken. Even if the table had 100 entries, it would take the VM at most 7 comparisons to find the correct label or decide to jump to the default label (and 7 comparisons is a lot less than 100, don't you think?).
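The comparison counts above can be reproduced with a short Java sketch (worstCase is an invented helper; it computes ceil(log2 n), the number of halving steps a binary search needs in the worst case):

```java
// Worst-case number of comparisons a binary search needs over n sorted keys:
// each step halves the remaining range, so the answer is ceil(log2(n)).
public class Comparisons {
    static int worstCase(int n) {
        int steps = 0;
        while (n > 1) {          // halve the search range until one key is left
            n = (n + 1) / 2;
            steps++;
        }
        return steps;
    }

    public static void main(String[] args) {
        System.out.println(worstCase(4));    // 2  (the 4-entry table above)
        System.out.println(worstCase(100));  // 7  (a 100-entry table)
    }
}
```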

So it's not true that these two instructions are interchangeable, or that the existence of two instructions has merely historical reasons. There are two instructions for two different kinds of situations: one for switches with compact values (maximum speed) and one for switches with sparse values (not maximum speed, but still good speed and a very compact table representation regardless of the numeric holes).

Force tableswitch instead of lookupswitch

You should not care about the bytecode, since modern JVMs are smart enough to compile both lookupswitch and tableswitch in a similarly efficient way.

Intuitively tableswitch should be faster, and this is also suggested by
JVM specification:

Thus, a tableswitch instruction is probably more efficient than a lookupswitch where space considerations permit a choice.

However, the spec was written 20+ years ago, when the JVM had no JIT compiler. Is there a performance difference on a modern HotSpot JVM?

The benchmark

package bench;

import org.openjdk.jmh.annotations.*;

@State(Scope.Benchmark)
public class SwitchBench {
    @Param({"1", "2", "3", "4", "5", "6", "7", "8"})
    int n;

    @Benchmark
    public long lookupSwitch() {
        return Switch.lookupSwitch(n);
    }

    @Benchmark
    public long tableSwitch() {
        return Switch.tableSwitch(n);
    }
}

To have precise control over the bytecode, I built the Switch class with Jasmin.

.class public bench/Switch
.super java/lang/Object

.method public static lookupSwitch(I)I
.limit stack 1

iload_0
lookupswitch
1 : One
2 : Two
3 : Three
4 : Four
5 : Five
6 : Six
7 : Seven
default : Other

One:
bipush 11
ireturn
Two:
bipush 22
ireturn
Three:
bipush 33
ireturn
Four:
bipush 44
ireturn
Five:
bipush 55
ireturn
Six:
bipush 66
ireturn
Seven:
bipush 77
ireturn
Other:
bipush -1
ireturn
.end method

.method public static tableSwitch(I)I
.limit stack 1

iload_0
tableswitch 1
One
Two
Three
Four
Five
Six
Seven
default : Other

One:
bipush 11
ireturn
Two:
bipush 22
ireturn
Three:
bipush 33
ireturn
Four:
bipush 44
ireturn
Five:
bipush 55
ireturn
Six:
bipush 66
ireturn
Seven:
bipush 77
ireturn
Other:
bipush -1
ireturn
.end method

The results show no performance difference between lookupswitch/tableswitch benchmarks, but there is a small variation depending on switch argument:

[Chart: lookupswitch vs. tableswitch performance]

Assembly

Let's verify the theory by looking at generated assembly code.

The following JVM option will help: -XX:CompileCommand=print,bench.Switch::*

  # {method} {0x0000000017498a48} 'lookupSwitch' '(I)I' in 'bench/Switch'
# parm0: rdx = int
# [sp+0x20] (sp of caller)
0x000000000329b240: sub $0x18,%rsp
0x000000000329b247: mov %rbp,0x10(%rsp) ;*synchronization entry
; - bench.Switch::lookupSwitch@-1

0x000000000329b24c: cmp $0x4,%edx
0x000000000329b24f: je 0x000000000329b2a5
0x000000000329b251: cmp $0x4,%edx
0x000000000329b254: jg 0x000000000329b281
0x000000000329b256: cmp $0x2,%edx
0x000000000329b259: je 0x000000000329b27a
0x000000000329b25b: cmp $0x2,%edx
0x000000000329b25e: jg 0x000000000329b273
0x000000000329b260: cmp $0x1,%edx
0x000000000329b263: jne 0x000000000329b26c ;*lookupswitch
; - bench.Switch::lookupSwitch@1
...

What we see here is a binary search starting with a mid value 4 (this explains why case 4 has the best performance in the graph above).

But the interesting thing is that tableSwitch is compiled exactly the same way!

  # {method} {0x0000000017528b18} 'tableSwitch' '(I)I' in 'bench/Switch'
# parm0: rdx = int
# [sp+0x20] (sp of caller)
0x000000000332c280: sub $0x18,%rsp
0x000000000332c287: mov %rbp,0x10(%rsp) ;*synchronization entry
; - bench.Switch::tableSwitch@-1

0x000000000332c28c: cmp $0x4,%edx
0x000000000332c28f: je 0x000000000332c2e5
0x000000000332c291: cmp $0x4,%edx
0x000000000332c294: jg 0x000000000332c2c1
0x000000000332c296: cmp $0x2,%edx
0x000000000332c299: je 0x000000000332c2ba
0x000000000332c29b: cmp $0x2,%edx
0x000000000332c29e: jg 0x000000000332c2b3
0x000000000332c2a0: cmp $0x1,%edx
0x000000000332c2a3: jne 0x000000000332c2ac ;*tableswitch
; - bench.Switch::tableSwitch@1
...

Jump table

But wait... Why binary search, not a jump table?

The HotSpot JVM has a heuristic to generate a jump table only for switches with 10+ cases. This can be altered with the -XX:MinJumpTableSize= option.
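You can check the current value of this heuristic on your own JVM (exact output varies by JDK build; the second command reuses the CompileCommand option shown earlier, with the benchmark invocation elided):

```shell
# Print the effective value of the jump-table threshold
java -XX:+PrintFlagsFinal -version | grep MinJumpTableSize

# Lower the threshold so even small switches get a jump table
java -XX:MinJumpTableSize=5 -XX:CompileCommand=print,bench.Switch::* ...
```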

OK, let's extend our test case with some more labels and check the generated code once again.

  # {method} {0x0000000017288a68} 'lookupSwitch' '(I)I' in 'bench/Switch'
# parm0: rdx = int
# [sp+0x20] (sp of caller)
0x000000000307ecc0: sub $0x18,%rsp ; {no_reloc}
0x000000000307ecc7: mov %rbp,0x10(%rsp) ;*synchronization entry
; - bench.Switch::lookupSwitch@-1

0x000000000307eccc: mov %edx,%r10d
0x000000000307eccf: dec %r10d
0x000000000307ecd2: cmp $0xa,%r10d
0x000000000307ecd6: jb 0x000000000307ece9
0x000000000307ecd8: mov $0xffffffff,%eax
0x000000000307ecdd: add $0x10,%rsp
0x000000000307ece1: pop %rbp
0x000000000307ece2: test %eax,-0x1faece8(%rip) # 0x00000000010d0000
; {poll_return}
0x000000000307ece8: retq
0x000000000307ece9: movslq %edx,%r10
0x000000000307ecec: movabs $0x307ec60,%r11 ; {section_word}
0x000000000307ecf6: jmpq *-0x8(%r11,%r10,8) ;*lookupswitch
; - bench.Switch::lookupSwitch@1
^^^^^^^^^^^^^^^^^^^^^^^^^

Yes! Here is our computed jump instruction. Note that this is generated for lookupswitch; there will be exactly the same one for tableswitch.

Amazingly, the HotSpot JVM can generate jump tables even for switches with gaps and outliers. -XX:MaxJumpTableSparseness controls how large the gaps can be. E.g. if there are labels from 1 to 10, then from 13 to 20, and a last label with the value 99, the JIT will generate a guard test for the value 99 and create a table for the remaining labels.

Source code

The HotSpot source code finally confirms that there should be no performance difference between lookupswitch and tableswitch after a method is JIT-compiled with C2. That's basically because the parsing of both instructions ends up calling the same jump_switch_ranges function, which works for an arbitrary set of labels.

Conclusion

As we saw, HotSpot JVM may compile tableswitch using a binary search and lookupswitch using a jump table, or vice versa. This depends on the number and the density of labels, but not on the bytecode itself.

So, answering your original question - you don't need to!

Why does the format of the JVM tableswitch/lookupswitch instructions have between 0 and 3 bytes of padding?

The padding exists for historical reasons. The first Java virtual machines did not have JIT compilation; they interpreted bytecode instructions one by one. To allow interpreters to read 32-bit offsets directly from the bytecode stream, the offsets are 32-bit aligned: their addresses are exact multiples of 4 bytes.

RISC processors (like SPARC, ARMv5 and earlier, etc.) permitted only aligned memory access. E.g. to read a 32-bit value from memory with a single CPU instruction, the address must be 32-bit aligned. If the address is not aligned, getting a 32-bit value requires four 8-bit memory reads, which is, of course, slower.

Nowadays the optimization is not useful anymore.
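The padding rule itself is simple arithmetic: after the opcode byte, 0 to 3 zero bytes are inserted so that the first 32-bit operand starts at a multiple of 4. A sketch in Java (class and method names are invented):

```java
// Number of zero bytes inserted after a tableswitch/lookupswitch opcode
// so that the following 32-bit operands are 4-byte aligned.
public class SwitchPadding {
    // opcodeOffset = bytecode index of the switch opcode within the method
    static int padding(int opcodeOffset) {
        int operandStart = opcodeOffset + 1;  // first byte after the opcode
        return (4 - operandStart % 4) % 4;    // 0..3 bytes of padding
    }

    public static void main(String[] args) {
        for (int off = 0; off < 4; off++) {
            System.out.println(off + " -> " + padding(off));
        }
        // 0 -> 3, 1 -> 2, 2 -> 1, 3 -> 0
    }
}
```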

Bytecode: LOOKUPSWITCH and TABLESWITCH

It was a bug, but it was fixed a while ago.

tree:generic jbevain$ svn log -c 1081190 && svn diff -c 1081190
------------------------------------------------------------------------
r1081190 | dbrosius | 2011-03-13 19:41:20 +0100 (Sun, 13 Mar 2011) | 1 line

Bug 48908 - Select instructions should implement StackConsumer instead of StackProducer
------------------------------------------------------------------------
Index: Select.java
===================================================================
--- Select.java (revision 1081189)
+++ Select.java (revision 1081190)
@@ -33,7 +33,7 @@
* @see InstructionList
*/
public abstract class Select extends BranchInstruction implements VariableLengthInstruction,
- StackProducer {
+ StackConsumer {

private static final long serialVersionUID = 2806771744559217250L;
protected int[] match; // matches, i.e., case 1: ...

Select is the base class for LOOKUPSWITCH and TABLESWITCH.

Is String a numeric type regarding switch and always compiled to lookupswitch?

The specification doesn’t say how to compile switch statements; that’s up to the compiler.

In that regard, the JVMS statement, “Other numeric types must be narrowed to type int for use in a switch” does not say that the Java programming language will do such a conversion nor that String or Enum are numeric types. I.e. long, float and double are numeric types, but there’s no support for using them with switch statements in the Java programming language.

So the language specification says that switch over String is supported; hence, compilers must find a way to compile it to bytecode. Using an invariant property like the hash code is a common solution, but in principle, other properties like the length or an arbitrary character could be used as well.

As discussed in “Why switch on String compiles into two switches” and “Java 7 String switch decompiled: unexpected instruction”, javac currently generates two switch instructions on the bytecode level when compiling switch over String values (ECJ also generates two instructions, but details may differ).
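In source form, the two-switch desugaring is roughly equivalent to the following (a sketch only; javac emits bytecode directly, and the local variable names here are invented):

```java
// Roughly what javac's two-switch desugaring of a switch over String
// looks like when written back as Java source (sketch; names invented).
public class StringSwitchDesugar {
    static char two(String s) {
        String tmp = s;
        int index = -1;
        // First switch: dispatch on the compile-time-constant hash codes,
        // then confirm with equals() because hashes can collide.
        switch (tmp.hashCode()) {
            case 97: // "a".hashCode()
                if (tmp.equals("a")) index = 0;
                break;
            case 98: // "b".hashCode()
                if (tmp.equals("b")) index = 1;
                break;
        }
        // Second switch: dense indices 0..n-1, ideal for tableswitch.
        switch (index) {
            case 0: return 'a';
            case 1: return 'b';
        }
        return 0;
    }

    public static void main(String[] args) {
        System.out.println(two("a"));       // a
        System.out.println((int) two("x")); // 0 (falls through to default)
    }
}
```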

Then, the compiler has to pick either a lookupswitch or a tableswitch instruction. javac uses tableswitch when the numbers are not sparse, but only if the statement has more than two case labels.

So when I compile the following method:

public static char two(String s) {
    switch (s) {
        case "a": return 'a';
        case "b": return 'b';
    }
    return 0;
}

I get

public static char two(java.lang.String);
Code:
0: aload_0
1: astore_1
2: iconst_m1
3: istore_2
4: aload_1
5: invokevirtual #9 // Method java/lang/String.hashCode:()I
8: lookupswitch { // 2
97: 36
98: 50
default: 61
}
36: aload_1
37: ldc #10 // String a
39: invokevirtual #11 // Method java/lang/String.equals:(Ljava/lang/Object;)Z
42: ifeq 61
45: iconst_0
46: istore_2
47: goto 61
50: aload_1
51: ldc #12 // String b
53: invokevirtual #11 // Method java/lang/String.equals:(Ljava/lang/Object;)Z
56: ifeq 61
59: iconst_1
60: istore_2
61: iload_2
62: lookupswitch { // 2
0: 88
1: 91
default: 94
}
88: bipush 97
90: ireturn
91: bipush 98
93: ireturn
94: iconst_0
95: ireturn

but when I compile,

public static char three(String s) {
    switch (s) {
        case "a": return 'a';
        case "b": return 'b';
        case "c": return 'c';
    }
    return 0;
}

I get

public static char three(java.lang.String);
Code:
0: aload_0
1: astore_1
2: iconst_m1
3: istore_2
4: aload_1
5: invokevirtual #9 // Method java/lang/String.hashCode:()I
8: tableswitch { // 97 to 99
97: 36
98: 50
99: 64
default: 75
}
36: aload_1
37: ldc #10 // String a
39: invokevirtual #11 // Method java/lang/String.equals:(Ljava/lang/Object;)Z
42: ifeq 75
45: iconst_0
46: istore_2
47: goto 75
50: aload_1
51: ldc #12 // String b
53: invokevirtual #11 // Method java/lang/String.equals:(Ljava/lang/Object;)Z
56: ifeq 75
59: iconst_1
60: istore_2
61: goto 75
64: aload_1
65: ldc #13 // String c
67: invokevirtual #11 // Method java/lang/String.equals:(Ljava/lang/Object;)Z
70: ifeq 75
73: iconst_2
74: istore_2
75: iload_2
76: tableswitch { // 0 to 2
0: 104
1: 107
2: 110
default: 113
}
104: bipush 97
106: ireturn
107: bipush 98
109: ireturn
110: bipush 99
112: ireturn
113: iconst_0
114: ireturn

It’s not clear why javac makes this choice. While tableswitch has a higher base footprint (one additional 32-bit word) than lookupswitch, it would still be shorter in bytecode, even in the two-case-label scenario.
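The footprint claim can be checked against the operand layout of the two instructions in the JVM specification: ignoring the 0-3 alignment padding bytes, tableswitch has 12 bytes of fixed operands plus 4 bytes per slot in its range, while lookupswitch has 8 fixed bytes plus 8 bytes per match/offset pair. A sketch:

```java
// Byte sizes of the two switch instructions per the JVM spec,
// ignoring the 0-3 alignment padding bytes after the opcode.
public class SwitchSize {
    // tableswitch: opcode + default(4) + low(4) + high(4) + (high-low+1) offsets
    static int tableswitchBytes(int low, int high) {
        return 1 + 12 + (high - low + 1) * 4;
    }

    // lookupswitch: opcode + default(4) + npairs(4) + npairs * (match(4) + offset(4))
    static int lookupswitchBytes(int npairs) {
        return 1 + 8 + npairs * 8;
    }

    public static void main(String[] args) {
        // Two dense keys (hash codes 97..98, as in the two-label example):
        System.out.println(tableswitchBytes(97, 98));  // 21
        System.out.println(lookupswitchBytes(2));      // 25
    }
}
```

So for two dense keys a tableswitch would indeed be four bytes shorter, which makes javac's choice of lookupswitch here a pure label-count heuristic rather than a size optimization.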

But the consistency of the decision can be shown with the second statement, which will always have the same value range, but compiles to lookupswitch or tableswitch depending only on the number of labels. So when using truly sparse values:

public static char three(String s) {
    switch (s) {
        case "a": return 'a';
        case "b": return 'b';
        case "": return 0;
    }
    return 0;
}

it compiles to

public static char three(java.lang.String);
Code:
0: aload_0
1: astore_1
2: iconst_m1
3: istore_2
4: aload_1
5: invokevirtual #9 // Method java/lang/String.hashCode:()I
8: lookupswitch { // 3
0: 72
97: 44
98: 58
default: 83
}
44: aload_1
45: ldc #10 // String a
47: invokevirtual #11 // Method java/lang/String.equals:(Ljava/lang/Object;)Z
50: ifeq 83
53: iconst_0
54: istore_2
55: goto 83
58: aload_1
59: ldc #12 // String b
61: invokevirtual #11 // Method java/lang/String.equals:(Ljava/lang/Object;)Z
64: ifeq 83
67: iconst_1
68: istore_2
69: goto 83
72: aload_1
73: ldc #13 // String
75: invokevirtual #11 // Method java/lang/String.equals:(Ljava/lang/Object;)Z
78: ifeq 83
81: iconst_2
82: istore_2
83: iload_2
84: tableswitch { // 0 to 2
0: 112
1: 115
2: 118
default: 120
}
112: bipush 97
114: ireturn
115: bipush 98
117: ireturn
118: iconst_0
119: ireturn
120: iconst_0
121: ireturn

using lookupswitch for the sparse hash codes, but tableswitch for the second switch.

Why does Java switch on contiguous ints appear to run faster with added cases?

As pointed out by the other answer, because the case values are contiguous (as opposed to sparse), the generated bytecode for your various tests uses a switch table (bytecode instruction tableswitch).

However, once the JIT starts its job and compiles the bytecode into assembly, the tableswitch instruction does not always result in an array of pointers: sometimes the switch table is transformed into what looks like a lookupswitch (similar to an if/else if structure).

Decompiling the assembly generated by the JIT (HotSpot, JDK 1.7) shows that it uses a succession of if/else if when there are 17 cases or fewer, and an array of pointers when there are 18 or more (more efficient).

The reason why this magic number of 18 is used seems to come down to the default value of the MinJumpTableSize JVM flag (around line 352 in the code).

I have raised the issue on the hotspot compiler list and it seems to be a legacy of past testing. Note that this default value has been removed in JDK 8 after more benchmarking was performed.

Finally, when the method becomes too long (> 25 cases in my tests), it is not inlined any longer with the default JVM settings - that is the likeliest cause for the drop in performance at that point.


With 5 cases, the decompiled code looks like this (notice the cmp/je/jg/jmp instructions, the assembly for if/goto):

[Verified Entry Point]
# {method} 'multiplyByPowerOfTen' '(DI)D' in 'javaapplication4/Test1'
# parm0: xmm0:xmm0 = double
# parm1: rdx = int
# [sp+0x20] (sp of caller)
0x00000000024f0160: mov DWORD PTR [rsp-0x6000],eax
; {no_reloc}
0x00000000024f0167: push rbp
0x00000000024f0168: sub rsp,0x10 ;*synchronization entry
; - javaapplication4.Test1::multiplyByPowerOfTen@-1 (line 56)
0x00000000024f016c: cmp edx,0x3
0x00000000024f016f: je 0x00000000024f01c3
0x00000000024f0171: cmp edx,0x3
0x00000000024f0174: jg 0x00000000024f01a5
0x00000000024f0176: cmp edx,0x1
0x00000000024f0179: je 0x00000000024f019b
0x00000000024f017b: cmp edx,0x1
0x00000000024f017e: jg 0x00000000024f0191
0x00000000024f0180: test edx,edx
0x00000000024f0182: je 0x00000000024f01cb
0x00000000024f0184: mov ebp,edx
0x00000000024f0186: mov edx,0x17
0x00000000024f018b: call 0x00000000024c90a0 ; OopMap{off=48}
;*new ; - javaapplication4.Test1::multiplyByPowerOfTen@72 (line 83)
; {runtime_call}
0x00000000024f0190: int3 ;*new ; - javaapplication4.Test1::multiplyByPowerOfTen@72 (line 83)
0x00000000024f0191: mulsd xmm0,QWORD PTR [rip+0xffffffffffffffa7] # 0x00000000024f0140
;*dmul
; - javaapplication4.Test1::multiplyByPowerOfTen@52 (line 62)
; {section_word}
0x00000000024f0199: jmp 0x00000000024f01cb
0x00000000024f019b: mulsd xmm0,QWORD PTR [rip+0xffffffffffffff8d] # 0x00000000024f0130
;*dmul
; - javaapplication4.Test1::multiplyByPowerOfTen@46 (line 60)
; {section_word}
0x00000000024f01a3: jmp 0x00000000024f01cb
0x00000000024f01a5: cmp edx,0x5
0x00000000024f01a8: je 0x00000000024f01b9
0x00000000024f01aa: cmp edx,0x5
0x00000000024f01ad: jg 0x00000000024f0184 ;*tableswitch
; - javaapplication4.Test1::multiplyByPowerOfTen@1 (line 56)
0x00000000024f01af: mulsd xmm0,QWORD PTR [rip+0xffffffffffffff81] # 0x00000000024f0138
;*dmul
; - javaapplication4.Test1::multiplyByPowerOfTen@64 (line 66)
; {section_word}
0x00000000024f01b7: jmp 0x00000000024f01cb
0x00000000024f01b9: mulsd xmm0,QWORD PTR [rip+0xffffffffffffff67] # 0x00000000024f0128
;*dmul
; - javaapplication4.Test1::multiplyByPowerOfTen@70 (line 68)
; {section_word}
0x00000000024f01c1: jmp 0x00000000024f01cb
0x00000000024f01c3: mulsd xmm0,QWORD PTR [rip+0xffffffffffffff55] # 0x00000000024f0120
;*tableswitch
; - javaapplication4.Test1::multiplyByPowerOfTen@1 (line 56)
; {section_word}
0x00000000024f01cb: add rsp,0x10
0x00000000024f01cf: pop rbp
0x00000000024f01d0: test DWORD PTR [rip+0xfffffffffdf3fe2a],eax # 0x0000000000430000
; {poll_return}
0x00000000024f01d6: ret

With 18 cases, the assembly looks like this (notice the array of pointers which is used and suppresses the need for all the comparisons: jmp QWORD PTR [r8+r10*1] jumps directly to the right multiplication) - that is the likely reason for the performance improvement:

[Verified Entry Point]
# {method} 'multiplyByPowerOfTen' '(DI)D' in 'javaapplication4/Test1'
# parm0: xmm0:xmm0 = double
# parm1: rdx = int
# [sp+0x20] (sp of caller)
0x000000000287fe20: mov DWORD PTR [rsp-0x6000],eax
; {no_reloc}
0x000000000287fe27: push rbp
0x000000000287fe28: sub rsp,0x10 ;*synchronization entry
; - javaapplication4.Test1::multiplyByPowerOfTen@-1 (line 56)
0x000000000287fe2c: cmp edx,0x13
0x000000000287fe2f: jae 0x000000000287fe46
0x000000000287fe31: movsxd r10,edx
0x000000000287fe34: shl r10,0x3
0x000000000287fe38: movabs r8,0x287fd70 ; {section_word}
0x000000000287fe42: jmp QWORD PTR [r8+r10*1] ;*tableswitch
; - javaapplication4.Test1::multiplyByPowerOfTen@1 (line 56)
0x000000000287fe46: mov ebp,edx
0x000000000287fe48: mov edx,0x31
0x000000000287fe4d: xchg ax,ax
0x000000000287fe4f: call 0x00000000028590a0 ; OopMap{off=52}
;*new ; - javaapplication4.Test1::multiplyByPowerOfTen@202 (line 96)
; {runtime_call}
0x000000000287fe54: int3 ;*new ; - javaapplication4.Test1::multiplyByPowerOfTen@202 (line 96)
0x000000000287fe55: mulsd xmm0,QWORD PTR [rip+0xfffffffffffffe8b] # 0x000000000287fce8
;*dmul
; - javaapplication4.Test1::multiplyByPowerOfTen@194 (line 92)
; {section_word}
0x000000000287fe5d: jmp 0x000000000287ff16
0x000000000287fe62: mulsd xmm0,QWORD PTR [rip+0xfffffffffffffe86] # 0x000000000287fcf0
;*dmul
; - javaapplication4.Test1::multiplyByPowerOfTen@188 (line 90)
; {section_word}
0x000000000287fe6a: jmp 0x000000000287ff16
0x000000000287fe6f: mulsd xmm0,QWORD PTR [rip+0xfffffffffffffe81] # 0x000000000287fcf8
;*dmul
; - javaapplication4.Test1::multiplyByPowerOfTen@182 (line 88)
; {section_word}
0x000000000287fe77: jmp 0x000000000287ff16
0x000000000287fe7c: mulsd xmm0,QWORD PTR [rip+0xfffffffffffffe7c] # 0x000000000287fd00
;*dmul
; - javaapplication4.Test1::multiplyByPowerOfTen@176 (line 86)
; {section_word}
0x000000000287fe84: jmp 0x000000000287ff16
0x000000000287fe89: mulsd xmm0,QWORD PTR [rip+0xfffffffffffffe77] # 0x000000000287fd08
;*dmul
; - javaapplication4.Test1::multiplyByPowerOfTen@170 (line 84)
; {section_word}
0x000000000287fe91: jmp 0x000000000287ff16
0x000000000287fe96: mulsd xmm0,QWORD PTR [rip+0xfffffffffffffe72] # 0x000000000287fd10
;*dmul
; - javaapplication4.Test1::multiplyByPowerOfTen@164 (line 82)
; {section_word}
0x000000000287fe9e: jmp 0x000000000287ff16
0x000000000287fea0: mulsd xmm0,QWORD PTR [rip+0xfffffffffffffe70] # 0x000000000287fd18
;*dmul
; - javaapplication4.Test1::multiplyByPowerOfTen@158 (line 80)

