Could not load shared library symbols for linux-vdso.so.1. while debugging

Not loading VDSO.so is one of the famous bugs you encounter while using gdb and glibc >2.2.

No, it's not. The problem here is simply a useless warning, which you can safely ignore.

I found a work-around here, but I didn't understand how to apply it.

You didn't find a "workaround". You found a patch to GDB, which disables the warning.

To apply it, use the patch command, and then build your own GDB. But it is much simpler to just ignore the warning in the first place.

GDB strange behavior - Linux

I, too, am using Arch Linux.

gdb -v
GNU gdb (GDB) 7.8

qtcreator -version
Qt Creator 3.2.0 based on Qt 5.3.1

I faced this behavior and solved the problem by downgrading gdb to version 7.7.

And yes, the gdb warning you mention has nothing to do with the issue at hand.

But I did not find out why this happens.

How do I update my libthread_db shared library so as to match libpthread shared library?

Your question is answered exactly by the two answers you've already found: someone installed GLIBC-2.17 onto your system (which likely had GLIBC-2.12 originally), but failed to install a matching libthread_db.so.1.

The GNU/Linux 2.6.18 in the output of file is a red herring and is irrelevant.

The good news is that updating libthread_db is risk-free -- if you screw up, the worst that can happen is that GDB will not work for multi-threaded programs; since that is the current state, you can't make it any worse.

But you should be able to make it better:

  • download GLIBC-2.17 source, configure and build it (but don't make install, or you'll likely brick your system).
  • as part of the build, there should now exist build/nptl_db/libthread_db.so.1
  • test it by pointing GDB to it with set libthread-db-search-path /path/to/build/nptl_db and trying to debug your program again.

If that works, you can overwrite the non-working /lib64/libthread_db.so.1 with the one you just built.

Update:

Is there any command I can use to grab some version information of these shared library files?

For libpthread*.so.0, this is quite easy:

gdb -q /lib/x86_64-linux-gnu/libpthread.so.0
Reading symbols from /lib/x86_64-linux-gnu/libpthread.so.0...

(gdb) x/s nptl_version
0x16037 <nptl_version>: "2.30"

For libthread_db.so.1 it's quite a bit harder: you need to disassemble td_ta_new and look for the memcmp call, which may be inlined:

gdb -q /lib/x86_64-linux-gnu/libthread_db.so.1
Reading symbols from /lib/x86_64-linux-gnu/libthread_db.so.1...

(gdb) disas td_ta_new
Dump of assembler code for function td_ta_new:
0x0000000000002270 <+0>: push %r13
0x0000000000002272 <+2>: push %r12
0x0000000000002274 <+4>: push %rbp
0x0000000000002275 <+5>: mov %rsi,%rbp
...
0x00000000000022d6 <+102>: callq 0x2110 <ps_pdread@plt>
0x00000000000022db <+107>: mov %eax,%r12d
0x00000000000022de <+110>: test %eax,%eax
0x00000000000022e0 <+112>: jne 0x2318 <td_ta_new+168>
0x00000000000022e2 <+114>: cmpl $0x30332e32,0x13(%rsp)
0x00000000000022ea <+122>: je 0x2340 <td_ta_new+208>
0x00000000000022ec <+124>: mov $0x16,%r12d
0x00000000000022f2 <+130>: mov 0x18(%rsp),%rax
0x00000000000022f7 <+135>: xor %fs:0x28,%rax
0x0000000000002300 <+144>: jne 0x2388 <td_ta_new+280>
0x0000000000002306 <+150>: add $0x28,%rsp
0x000000000000230a <+154>: mov %r12d,%eax
0x000000000000230d <+157>: pop %rbx
0x000000000000230e <+158>: pop %rbp
0x000000000000230f <+159>: pop %r12
0x0000000000002311 <+161>: pop %r13
0x0000000000002313 <+163>: retq
...

Here, at address 0x022e2 is an inlined call to memcmp("2.30", <data-read-from-nptl_version>, 5).

How to use debug version of libc


I think that the version of libc with debug symbols is in /usr/lib/debug/lib. I tried setting my LD_LIBRARY_PATH variable to have this at the front of the path but that did not seem to make a difference.

These are not the droids you are looking for.

The libraries in /usr/lib/debug are not real libraries. Rather, they contain only debug info, but neither the .text nor the .data sections of the real libc.so.6. You can read about the separate debuginfo files here.

The files in /usr/lib/debug come from libc6-dbg package, and GDB will load them automatically, so long as they match your installed libc6 version. If your libc6 and libc6-dbg do not match, you should get a warning from GDB.

You can observe the files GDB is attempting to read with set verbose on. Here is what you should see when libc6 and libc6-dbg do match:

(gdb) set verbose on
(gdb) run
thread_db_load_search returning 0
Reading symbols from /lib64/ld-linux-x86-64.so.2...Reading symbols from /usr/lib/debug/lib/ld-2.11.1.so...done.
thread_db_load_search returning 0
done.
thread_db_load_search returning 0
Loaded symbols for /lib64/ld-linux-x86-64.so.2
Reading symbols from system-supplied DSO at 0x7ffff7ffb000...done.
WARNING: no debugging symbols found in system-supplied DSO at 0x7ffff7ffb000.
thread_db_load_search returning 0
Reading in symbols for dl-debug.c...done.
Reading in symbols for rtld.c...done.
Reading symbols from /lib/librt.so.1...Reading symbols from /usr/lib/debug/lib/librt-2.11.1.so...done.
thread_db_load_search returning 0
... etc ...

Update:

For instance I see

Reading symbols from /lib/libc.so.6...(no debugging symbols found)...done

That implies that your GDB is not searching /usr/lib/debug. One way that could happen is if you set debug-file-directory in your .gdbinit incorrectly.

Here is the default setting:

(gdb) show debug-file-directory
The directory where separate debug symbols are searched for is "/usr/lib/debug".

Manually loading libcrypto (dlmopen, dlsym) segfaults; dynamically linked works

Thanks for providing excellent repro instructions.

What am I doing wrong?

You are using dlmopen, which is a minefield.

I suspect you are doing this in order to have several incompatible versions of OpenSSL in a single process. My advice: just don't do it™️.

What's happening... is complicated.

Let's call the first libpthread.so linked into ./a.out P1, and the second copy (which is brought in via dlmopen as a dependency of libcrypto.so) P2. Let's call the dlmopened version of libcrypto C2.

Both P1 and P2 have separate hidden variables __pthread_keys. When C2 calls P2:pthread_key_create, that function looks in P2:__pthread_keys, and discovers that no keys have been used (which is true in P2, but not in P1 -- the loader has already used some keys from P1:__pthread_keys).

So C2 gets an answer from P2:pthread_key_create -- use key==0 (P2 is oblivious to the fact that key==0 has already been used in P1!).

Now C2 calls P2:pthread_getspecific(0), and expects to get a NULL back -- it hasn't called pthread_setspecific(0, ...) yet.

But pthread_getspecific looks in the thread control block, which is unique for the given thread and shared between P1 and P2, and herein lies the disaster: P2 doesn't get a NULL, it gets whatever P1:pthread_setspecific(0, ...) has set previously!

At that point, C2 decides that some other code must have already set up C2's thread-local data appropriately, and proceeds to use that data, with the resulting SIGSEGV.

So who calls P1:pthread_setspecific? It happens here:

Breakpoint 2, __GI___pthread_setspecific (key=0, value=value@entry=0x5555555592a0) at pthread_setspecific.c:33
33 pthread_setspecific.c: No such file or directory.
(gdb) bt
#0 __GI___pthread_setspecific (key=0, value=value@entry=0x5555555592a0) at pthread_setspecific.c:33
#1 0x00007ffff7f9eb3c in _dlerror_run (operate=operate@entry=0x7ffff7f9ee90 <dlmopen_doit>, args=args@entry=0x7fffffffdb60) at dlerror.c:157
#2 0x00007ffff7f9efd9 in __dlmopen (nsid=<optimized out>, file=<optimized out>, mode=<optimized out>) at dlmopen.c:93
#3 0x00005555555551f8 in main ()

And the subsequent call to P2:pthread_getspecific (note the same key==0 being re-used) happens here:

#0  __GI___pthread_getspecific (key=0) at pthread_getspecific.c:30
#1 0x00007ffff77985cd in CRYPTO_THREAD_get_local (key=<optimized out>) at crypto/threads_pthread.c:160
#2 0x00007ffff778a2d2 in get_thread_default_context () at crypto/context.c:166
#3 0x00007ffff778a2ee in get_default_context () at crypto/context.c:171
#4 0x00007ffff778a43b in ossl_lib_ctx_get_concrete (ctx=<optimized out>) at crypto/context.c:278
#5 0x00007ffff778a681 in ossl_lib_ctx_get_data (ctx=<optimized out>, index=index@entry=0, meth=meth@entry=0x7ffff79684e0) at crypto/context.c:356
#6 0x00007ffff776ab6c in get_evp_method_store (libctx=<optimized out>) at crypto/evp/evp_fetch.c:82
#7 0x00007ffff776ab9b in inner_evp_generic_fetch (methdata=methdata@entry=0x7fffffffd9c0, prov=<optimized out>, prov@entry=0x0, operation_id=operation_id@entry=10, name_id=name_id@entry=0, name=0x7ffff78aa0ac "X25519", properties=0x0, new_method=0x7ffff7772e68 <keymgmt_from_algorithm>, up_ref_method=0x7ffff7772d7a <EVP_KEYMGMT_up_ref>,
free_method=0x7ffff7772d88 <EVP_KEYMGMT_free>) at crypto/evp/evp_fetch.c:248
#8 0x00007ffff776b37a in evp_generic_fetch (libctx=<optimized out>, operation_id=operation_id@entry=10, name=<optimized out>, properties=<optimized out>, new_method=new_method@entry=0x7ffff7772e68 <keymgmt_from_algorithm>, up_ref_method=up_ref_method@entry=0x7ffff7772d7a <EVP_KEYMGMT_up_ref>, free_method=0x7ffff7772d88 <EVP_KEYMGMT_free>)
at crypto/evp/evp_fetch.c:372
#9 0x00007ffff77732e8 in EVP_KEYMGMT_fetch (ctx=<optimized out>, algorithm=<optimized out>, properties=<optimized out>) at crypto/evp/keymgmt_meth.c:230
#10 0x00007ffff777d066 in int_ctx_new (libctx=0x0, pkey=pkey@entry=0x0, e=e@entry=0x0, keytype=0x7ffff78aa0ac "X25519", propquery=0x0, id=<optimized out>, id@entry=-1) at crypto/evp/pmeth_lib.c:280
#11 0x00007ffff777d299 in EVP_PKEY_CTX_new_from_name (libctx=<optimized out>, name=<optimized out>, propquery=<optimized out>) at crypto/evp/pmeth_lib.c:368
#12 0x00007ffff7778c70 in new_raw_key_int (libctx=libctx@entry=0x0, strtype=strtype@entry=0x0, propq=propq@entry=0x0, nidtype=1034, e=0x0, key=0x555555556020 <scalar> "\001\002\003\004\005\006\a\b\t\020\021\022\023\024\025\026\027\030\031 !\"#$%&'()012main.c", len=32, key_is_priv=1) at crypto/evp/p_lib.c:406
#13 0x00007ffff7778f3e in EVP_PKEY_new_raw_private_key (type=<optimized out>, e=<optimized out>, priv=<optimized out>, len=<optimized out>) at crypto/evp/p_lib.c:497
#14 0x000055555555529d in main ()

P.S. This only took me 3 hours to debug, and is probably only the first of many problems you are likely to encounter.

P.P.S. Indeed this is only the first problem of many. See this GLIBC bug.

Debug with gdb an application running with different libc (ld-linux.so)


but none worked...

The add-symbol-file ... solution should work. I suspect you are not supplying the correct .text address.

cat /proc/30622/maps | grep "r-xp" | grep "/root/test/test"

This assumes that the very first segment of /root/test/test has RX permissions.

That used to be the case, but no longer is on modern systems (see e.g. this answer).

You didn't provide output from readelf -Wl /root/test/test, but I bet it looks similar to the 4-segment example from the other answer (with the first LOAD segment having read-only permissions).

Generally you need to find the address of the first LOAD segment of the test executable in memory, and add the address of .text to that base address.

Update:

With the newly-supplied output from /proc/$pid/maps and readelf, we can see that my guess was correct: this binary has 4 LOAD segments, and the first one doesn't have r-x permissions.

The calculation then is: $address_of_the_first_PT_LOAD + $address_of_.text. That is (for the 16873 process):

(gdb) add-symbol-file /root/test/test 0x7f5e37862000+0x10c0

objdump -s --section=".text" /root/test/test | grep Contents -A 1 ...

This is unnecessarily convoluted. Much easier way:

readelf -WS /root/test/test | grep '\.text ' | nawk '{print "0x" $4}'

Update:

I need to run gdb in the test machine (which doesn't have nawk and other bells

Part of the difficulty here is that you have a PIE binary (relocated at runtime).

If the problem reproduces for non-PIE binary (built with -fno-pie -no-pie), then the address can be calculated once (e.g. on the development machine) and reused on the test machine. You wouldn't need a script to compute the address every time you run the binary.



If I compile the binary to have the interpreter path hardcoded

Generally that's the best approach. Why not use it?

If you want the binary to run both on the development and the test machine, make copies of /root/test available on both.


