Setting LD_LIBRARY_PATH from inside R

With help from Hans Lub, the way to solve the problem is to use the dyn.load() function, supplying the full path to the library:

dyn.load('path_to_library')

and then loading the package via library() should work.

How to specify (non-R) library path for dynamic library loading in R?

Normally, the iconv method is picked up from glibc, which is linked against during the build of the R packages in question. For whatever reason, however, iconv is getting resolved to libiconv in this case, which is not linked by the R packages during build.

Original Workaround

One can make the linking to libiconv explicit by adding the following line to the haven/src/Makevars source file:

PKG_LIBS=-liconv

which lets you install from source with R CMD INSTALL haven. However, editing packages feels hacky, and this would need to be redone on every upgrade, which sounds like a hassle.

Cleaner Workaround

Another option is to use withr::with_makevars, which allows one to temporarily control Makevars content. With this technique, one can install directly from the repo:

withr::with_makevars(c(PKG_LIBS="-liconv"), install.packages("haven"), assignment="+=")

Credit: @knb suggested that I inspect readxl.so with ldd, which turned out to be super useful because it showed that the shared object wasn't even trying to link to libiconv. Knowing that, I realized I could add the reference manually via the -liconv flag. Thanks @knb!

Additional Info

On the package side of things, relevant details about connecting libraries to R packages can be found in the guide for building libraries. On the system configuration side, the R-admin guide has some useful sections.

Set LD_LIBRARY_PATH before importing in python

UPDATE: see the EDIT below.

I would use:

import os

os.environ['LD_LIBRARY_PATH'] = os.getcwd() # or whatever path you want

This sets the LD_LIBRARY_PATH environment variable for the lifetime of the current process only.
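One thing the in-process assignment does affect is child processes, which inherit the modified environment. A minimal sketch (the /opt/myapp/lib path is purely illustrative):

```python
import os
import subprocess

# A changed environment value is passed down to children, even though
# the current process's dynamic loader only reads LD_LIBRARY_PATH once,
# at its own startup.
env = dict(os.environ, LD_LIBRARY_PATH="/opt/myapp/lib")  # illustrative path
out = subprocess.run(
    ["sh", "-c", "echo $LD_LIBRARY_PATH"],
    env=env, capture_output=True, text=True,
).stdout.strip()
print(out)  # the child sees the new value
```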

EDIT: it looks like this needs to be set before starting Python: Changing LD_LIBRARY_PATH at runtime for ctypes

So I'd suggest going with a wrapper .sh (or .py if you insist) script. Also, as @chepner pointed out, you might want to consider installing your .so files in a standard location (within the virtualenv).
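A minimal sketch of such a .py wrapper, assuming (hypothetically) that the shared objects live in a libs/ directory next to the script: it prepends that directory to LD_LIBRARY_PATH and re-executes the interpreter so the dynamic loader sees the value at process startup.

```python
import os
import sys

def env_with_lib_path(lib_dir, environ):
    """Return a copy of environ with lib_dir prepended to LD_LIBRARY_PATH."""
    env = dict(environ)
    current = env.get("LD_LIBRARY_PATH", "")
    if lib_dir not in current.split(os.pathsep):
        env["LD_LIBRARY_PATH"] = lib_dir + (os.pathsep + current if current else "")
    return env

def bootstrap():
    # Hypothetical layout: the .so files sit in ./libs next to this script.
    lib_dir = os.path.join(os.path.dirname(os.path.abspath(__file__)), "libs")
    if lib_dir not in os.environ.get("LD_LIBRARY_PATH", "").split(os.pathsep):
        # Re-exec so the loader reads the new value at startup; merely
        # assigning to os.environ would not affect the current process.
        os.execve(sys.executable, [sys.executable] + sys.argv,
                  env_with_lib_path(lib_dir, os.environ))
    # Past this point LD_LIBRARY_PATH is in effect and extension imports work.
```

The membership check before os.execve prevents an endless re-exec loop: the second invocation already has lib_dir in its LD_LIBRARY_PATH and falls through.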

See also Setting LD_LIBRARY_PATH from inside Python

Why do I need to set LD_LIBRARY_PATH after installing my binary?

Why do I need to set LD_LIBRARY_PATH after installing my binary?

The simple answer is: it is a failure of the system architects and the toolchain. On Linux those are the folks who maintain Binutils and glibc. Collectively, the maintainers think it is OK to compile and link against one version of a library and then runtime-link against the wrong version of the library, or lose the library altogether. They have determined that this is the number one use case (q.v.). Things don't work out of the box, and you have to do something special to get into a good state.

Consider, I have some Build Scripts that build and install about 70 common Linux utilities and libraries, like cURL, Git, SSH and Wget. The scripts build the program and all of its dependencies. Sometimes I attempt to use them for security testing and evaluation, like Asan and UBsan. The idea is to run the instrumented program and libraries in /usr/local and detect undefined behavior, memory errors, etc. Then, report the error to the project.

Trying to run instrumented builds on a Linux system is impossible. For example, once Bzip, iConv, Unicode, IDN, PCRE, and several other key libraries are built with instrumentation, the system no longer functions. Programs in /usr/bin, like /usr/bin/bash, will load the instrumented libraries in /usr/local/lib. The programs and libraries have so many faults that the programs are constantly killed, and you can no longer run a script because the programs in /usr/bin are using the wrong libraries.

And of course there are problems like yours, where programs and libraries can't find their own libraries. The web is full of the carnage due to the maintainers' decisions.


To fix the issue for the program you are building, add the following to your LDFLAGS:

-Wl,-R,/usr/local/lib -Wl,--enable-new-dtags

-R,/usr/local/lib tells the runtime linker to look in /usr/local/lib for its libraries. --enable-new-dtags tells the linker to emit a RUNPATH as opposed to an RPATH.

RUNPATHs can be overridden with LD_LIBRARY_PATH, while RPATHs cannot. Omitting -Wl,--enable-new-dtags is usually a bad idea. Also see Use RPATH but not RUNPATH?.

You can also use a relative runtime path:

-Wl,-R,'$$ORIGIN/../lib' -Wl,--enable-new-dtags

-R,'$$ORIGIN/../lib' tells the runtime linker to look in ../lib/ for its libraries. Presumably your binary is located in .../bin/, so ../lib/ is up-and-over to the lib/ directory.

ORIGIN-based linker paths are good when you have a filesystem layout like:

My_App
|
+-- bin
| |
| +- my_app.exe
|
+-- lib
| |
| +- my_lib.so
|
+-- share

You can move the My_App folder around the filesystem, and my_app.exe will always be able to find its library my_lib.so. If My_App is the root directory /, then it is the standard Linux filesystem layout used by distros (i.e., --prefix=/). And if My_App is /usr/local, then it is the standard layout used by the user (i.e., --prefix=/usr/local).

If you run configure, make, and make test, then you can use LD_LIBRARY_PATH to set a temporary path to the library for testing (like .../My_App/lib). In fact, a well-written make test should do that for you during testing.


You should not have to add -Wl,-R,/usr/local/lib -Wl,--enable-new-dtags, since the program was compiled and linked against the library in /usr/local/lib (presumably using -L&lt;path&gt; and -l&lt;lib&gt;). The reason you have to is that the maintainers keep making the same mistake over and over again. It is a complete engineering failure.

The Linux architects were warned about these path problems about 25 years ago when they were deciding what to do out of the box. There were two camps: one that wanted things to work out of the box, and one that wanted users to do something special to get things to work. The latter won the debate. They chose the "let's link against the wrong library" and "let's lose the library" patterns. I wish I could find the Usenet posting where it was discussed so you could read it for yourself.

Change current process environment's LD_LIBRARY_PATH

The reason

os.environ["LD_LIBRARY_PATH"] = ...

doesn't work is simple: this environment variable controls behavior of the dynamic loader (ld-linux.so.2 on Linux, ld.so.1 on Solaris), but the loader only looks at LD_LIBRARY_PATH once at process startup. Changing the value of LD_LIBRARY_PATH in the current process after that point has no effect (just as the answer to this question says).

You do have some options:

A. If you know that you are going to need xyz.so from /some/path, and you control the execution of the Python script from the start, then simply set LD_LIBRARY_PATH to your liking (after checking that it is not already set that way) and re-execute yourself. This is what Java does.

B. You can import /some/path/xyz.so via its absolute path before importing x.so. When you then import x.so, the loader will discover that it has already loaded xyz.so, and will use the already loaded module instead of searching for it again.

C. If you build x.so yourself, you can add -Wl,-rpath=/some/path to its link line, and then importing x.so will cause the loader to look for dependent modules in /some/path.
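Option B can be sketched with ctypes; the library path and the module name below are hypothetical placeholders, not real files:

```python
import ctypes

def preload(path):
    """Load a shared library by absolute path, with RTLD_GLOBAL so its
    symbols are visible when resolving extensions imported afterwards."""
    return ctypes.CDLL(path, mode=ctypes.RTLD_GLOBAL)

# Hypothetical usage: preload the dependency, then import the extension
# that needs it; the loader sees xyz.so is already loaded and reuses it
# instead of searching LD_LIBRARY_PATH.
# preload("/some/path/xyz.so")
# import x
```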

What is LD_LIBRARY_PATH and how to use it?

Typically you must set java.library.path on the JVM's command line:

java -Djava.library.path=/path/to/my/dll -cp /my/classpath/goes/here MainClass

Where to set LD_LIBRARY_PATH on Solaris?

Usually I would just have a shell script that starts the application. In the shell script I would set LD_LIBRARY_PATH to whatever I need it to be for that app, then have the script start that app. Doing it that way should cause the path to be set only for that application.


