Difference Between Static and Shared Libraries

Difference between static and shared libraries?

Shared libraries are .so files (.dll on Windows, .dylib on OS X). All the code relating to the library is in this file, and it is referenced by programs using it at run time. A program using a shared library only makes reference to the code that it uses in the shared library.

Static libraries are .a files (.lib on Windows). All the code relating to the library is in this file, and it is directly linked into the program at compile time. A program using a static library takes copies of the code that it uses from the static library and makes it part of the program. [Windows also has .lib files that are used as import libraries to reference .dll files, but those behave like the shared-library mechanism described above.]
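
To make the distinction concrete, here is a minimal sketch of building each kind on Linux (the file names util.c and main.c and the library name libutil are hypothetical):

$ gcc -c util.c                     # compile to util.o
$ ar rcs libutil.a util.o           # static: bundle object files into an archive
$ gcc -o app main.c -L. -lutil      # needed code is copied into app

$ gcc -c -fPIC util.c               # position-independent object file
$ gcc -shared -o libutil.so util.o  # shared: link a .so
$ gcc -o app main.c -L. -lutil      # app now references libutil.so at run time

If both libutil.a and libutil.so are found on the search path, the linker prefers the shared one by default.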

There are advantages and disadvantages in each method:

  • Shared libraries reduce the amount of code that is duplicated in each program that makes use of the library, keeping the binaries small. They also allow you to replace the shared object with one that is functionally equivalent, but may have added performance benefits, without needing to recompile the program that makes use of it. Shared libraries will, however, have a small additional cost for the execution of the functions as well as a run-time loading cost, as all the symbols in the library need to be connected to the things they use. Additionally, shared libraries can be loaded into an application at run time, which is the general mechanism for implementing binary plug-in systems (see the dlopen sketch after this list).

  • Static libraries increase the overall size of the binary, but it means that you don't need to carry along a copy of the library that is being used. As the code is connected at compile time there are not any additional run-time loading costs. The code is simply there.
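
As a minimal sketch of the run-time loading mechanism mentioned above (the library name ./libplugin.so and the function plugin_init are hypothetical), a host program can use the POSIX dlopen API:

#include <dlfcn.h>
#include <stdio.h>

int main(void)
{
    /* Load the plug-in at run time. */
    void *handle = dlopen("./libplugin.so", RTLD_NOW);
    if (!handle) {
        fprintf(stderr, "%s\n", dlerror());
        return 1;
    }

    /* Look up an exported symbol by name and call it. */
    void (*plugin_init)(void) = (void (*)(void))dlsym(handle, "plugin_init");
    if (plugin_init)
        plugin_init();

    dlclose(handle);
    return 0;
}

On Linux this is linked with -ldl, e.g. gcc -o host host.c -ldl.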

Personally, I prefer shared libraries, but use static libraries when needing to ensure that the binary does not have many external dependencies that may be difficult to meet, such as specific versions of the C++ standard library or specific versions of the Boost C++ library.

File format differences between a static library (.a) and a shared library (.so)?

A static library, e.g. libfoo.a, is not an executable of any kind. It is
simply an indexed archive in Unix ar format of other files which happen
to be ELF object files.

A static library is created like any archive:

ar crs libfoo.a objfile0.o objfile1.o ... objfileN.o

outputs the new archive (c) libfoo.a, with those object files inserted (r)
and index added (s).

You'll hear of linking libfoo.a in a program. This doesn't mean that
libfoo.a itself is linked into or with the program. It means that libfoo.a
is passed to the linker as an archive from which it can extract and link into
the program just those object files within the archive that the program needs.
So the format of a static library (ar format) is just an object-file
bundling format for linker input: it could equally well have been some other bundling
format without any effect on the linker's mission, which is to digest a set of
object files and shared libraries and generate a program, or shared library,
from them. ar format was history's choice.

On the other hand a shared library, e.g. libfoo.so, is an ELF file
and not any sort of archive.

Don't be tempted to suspect that a static library is a sort of ELF file by
the fact that all the well-known ELF-parsers - objdump, readelf, nm -
will parse a static library. These tools all know that a static library is
an archive of ELF object files, so they just parse all the object files
in the library as if you had listed them on the commandline.
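
For example, running nm on the hypothetical libfoo.a from above lists the symbols of each member object file in turn, prefixed by the member's name (output is illustrative):

$ nm libfoo.a

objfile0.o:
0000000000000000 T foo

objfile1.o:
0000000000000000 T bar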

The use of the -D option with nm just instructs the tool to select
only the symbols that are in the dynamic symbol table(s), if any,
of the ELF file(s) that it parses - the symbols visible to the runtime linker
- regardless of whether or not they are parsed from within an archive. It's
the same as objdump -T and readelf --dyn-syms. It is not
necessary to use these options to parse the symbols from a shared library. If
you don't do so, then by default you'll just see the full symbol table.
If you run nm -D on a static library you'll be told no symbols, for
each object file in the archive - likewise if you ran nm -D for each of
those object files individually. The reason for that is that an object file
hasn't got a dynamic symbol table: only a shared library or program has one.
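
A quick illustration with the hypothetical names from above (addresses and exact messages vary by toolchain):

$ nm -D libfoo.so        # dynamic symbol table of a shared library
0000000000001119 T foo
$ nm -D libfoo.a         # object files have no dynamic symbol table
nm: objfile0.o: no symbols
nm: objfile1.o: no symbols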

Object file, shared library and program are all variants of the ELF format.
If you're interested in ELF variants, those are the variants of interest.

The ELF format itself is a long and thorny technical read and is required
background for precisely distinguishing the variants. Intro: An ELF file
contains a ELF header structure one of whose fields contains a type-identifier
of the file as an object file, shared library, or program. When the file is a
program or shared library, it also contains a Program header table
structure
whose fields provide the runtime linker/loader with the parameters
it needs to load the file in a process. In terms of ELF structure,
the differences between a program and a shared library are slight: it's
the detailed content that makes the difference to the behaviour that they
elicit from the loader.
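
You can inspect the type-identifier directly with readelf -h; for instance (illustrative output, using the hypothetical file names from above):

$ readelf -h objfile0.o | grep Type
  Type:                              REL (Relocatable file)
$ readelf -h libfoo.so | grep Type
  Type:                              DYN (Shared object file)
$ readelf -h /bin/ls | grep Type
  Type:                              EXEC (Executable file)

(On distributions that build PIE programs by default, /bin/ls will itself report DYN; see the discussion of PIE at the end of this page.)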

For the long and thorny technical read, try Executable and Linkable Format (ELF).

Difference between shared objects (.so), static libraries (.a), and DLLs (.dll)?

I've always thought that DLLs and shared objects are just different terms for the same thing - Windows calls them DLLs, while on UNIX systems they're shared objects, with the general term - dynamically linked library - covering both (even the function to open a .so on UNIX is called dlopen() after 'dynamic library').

They are indeed only linked at application startup; however, your notion of verification against the header file is incorrect. The header file defines the prototypes that are required in order to compile the code which uses the library, but at link time the linker looks inside the library itself to make sure the functions it needs are actually there. The linker has to find the function bodies somewhere at link time or it'll raise an error. It also does that at runtime, because, as you rightly point out, the library itself might have changed since the program was compiled. This is why ABI stability is so important in platform libraries: an ABI change is what breaks existing programs compiled against older versions.
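
That link-time check is what produces the familiar error when a function body can't be found anywhere (illustrative; the function name foo is hypothetical):

$ gcc -o app main.c        # main() calls foo(), but nothing defines it
/usr/bin/ld: ... undefined reference to `foo'
collect2: error: ld returned 1 exit status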

Static libraries are just bundles of object files straight out of the compiler, just like the ones that you are building yourself as part of your project's compilation, so they get pulled in and fed to the linker in exactly the same way, and unused bits are dropped in exactly the same way.

When to use dynamic vs. static libraries

Static libraries increase the size of the code in your binary. They're always loaded and whatever version of the code you compiled with is the version of the code that will run.

Dynamic libraries are stored and versioned separately. It's possible for a version of the dynamic library to be loaded that wasn't the original one that shipped with your code if the update is considered binary compatible with the original version.

Additionally dynamic libraries aren't necessarily loaded -- they're usually loaded when first called -- and can be shared among components that use the same library (multiple data loads, one code load).

Dynamic libraries were considered to be the better approach most of the time, but originally they had a major flaw (google DLL hell), which has all but been eliminated by more recent Windows OSes (Windows XP in particular).

Why are shared and static libraries different things?

So I see lots of answers talking about why you would want to use shared libraries instead of static libraries, but I think your question is why they are even distinct things nowadays, i.e. why isn't it possible to use a shared library as a static library and pull what you need out of it at build time?

Here are some reasons. Some of these are historical - keep in mind that something as fundamental as binary formats changes very slowly in computer systems.

Compiled Differently

Code can be compiled either to be dependent on the address it sits at (position-dependent) or independent (position-independent). This affects things like loads of global constants, function calls, etc. Position-dependent code needs fixups if it isn't loaded at the address it expects, i.e. the loader has to go over the code and actually change offsets.

For executables, this isn't a problem. An executable is the first thing that is loaded into the address space, so it will always be loaded at the same address. You generally don't need any fixups. But a shared library is used by different executables, by different processes. Multiple libraries can conflict: if they expect to be at overlapping address ranges, one will have to budge. When it does, and it is position-dependent, it needs to be fixed by the loader. But now you have process-specific changes in the library code, which means the code can't be shared (at runtime) with other processes anymore. You lose one of the big benefits of shared libraries.

If the shared library uses position-independent code (PIC), it doesn't need fixups. So PIC is good for shared libraries. On the other hand, PIC is slower on some architectures (notably x86, but not x64), so compiling executables as PIC is a waste of resources.

Executables were therefore usually compiled as position-dependent code, while shared libraries were compiled as position-independent code. If you used shared libraries as sources for code directly pulled into executables, you get PIC. If you want PDC, you need a separate code repository, and that's a static library.

Of course, on most modern architectures, PIC isn't less efficient than PDC, and security techniques like address space randomization make it useful to compile executables as PIC too, so this is more of a historical reason than a current one.
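
One place this distinction still bites is when you try to build a shared library from position-dependent objects. On x86-64, for example, the linker typically refuses (illustrative output; foo.c is a hypothetical source file):

$ gcc -c -fno-pic foo.c
$ gcc -shared -o libfoo.so foo.o
/usr/bin/ld: foo.o: relocation R_X86_64_32 against `.rodata' can not be
used when making a shared object; recompile with -fPIC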

Contain Different Things

But there's another, more current reason for separating static and shared libraries, and that's link-time optimization.

Basically, the more information an optimizer has about a program, the better it can reason about it. Classical optimizers worked on a per-module basis: compile a .c file, optimize it, generate object code. The linker took all the object files and merged them together. This meant that the optimizer could only reason about one module at a time. It could not look into functions defined outside the module in order to reason about them, or even simply inline them.

In modern toolchains, however, the compiler often works differently. Instead of compiling and optimizing a module and then producing object code, it takes a module, produces an intermediate form, possibly optimizes it a bit, and then puts the intermediate form into the object file. The linker, instead of just merging object files and resolving references, actually merges the intermediate representation and then invokes the optimizer and code generator on the merged form. With much more information available, the optimizer can do a vastly better job.

This intermediate representation is more detailed and more faithful to the original code than machine code. You want this for your compilation process. You don't want to ship it to the customer, because it is much bigger, and, if you use a closed-source model, because it is much easier to reverse-engineer. Moreover, there's no point in shipping it: the loader doesn't understand it, and you don't want to re-optimize and recompile your program at startup time anyway (JIT languages aside).

Thus, a shared library contains real object code. A static library, on the other hand, is a good container for intermediate code, because it is consumed by the linker. This is a key difference between static and shared libraries.
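
With GCC, for example, this looks roughly like the following (a sketch; the file names are hypothetical). The -flto option stores the compiler's intermediate representation in the object files, and gcc-ar is a wrapper around ar that loads the LTO plugin so such objects can be archived and indexed correctly:

$ gcc -c -flto foo.c bar.c           # object files carry intermediate code
$ gcc-ar rcs libfoo.a foo.o bar.o    # bundle them into a static library
$ gcc -flto -o app main.c -L. -lfoo  # optimize across the whole program at link time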

Linkage Model

Finally, we have another semi-historical reason: linkage.

Linkage defines how a symbol (a variable or function name) is visible outside a code unit. The C language defines two linkages: internal (not visible outside the compilation unit, i.e. static) and external (visible to the whole program, i.e. extern). You generally have a lot of externally visible symbols.

Shared libraries, however, have their symbols resolved at load time, and this should be fast. Fewer symbols means lookup in the symbol table is faster. Of course this was more relevant when computers were slower, but it still can have a noticeable effect. It also affects the size of the libraries.

Therefore, the object file specifications used by the operating systems (ELF for *nix, PE/COFF for Windows) defined separate visibilities for shared libraries. Instead of making everything that's external in C visible, you have the option to specify the visible functions explicitly. (On Windows, only things annotated as __declspec(dllexport) or listed in a .def file are exported from a DLL. On Linux, everything extern is exported by default, but you can use __attribute__((visibility("hidden"))) to suppress that, or you can specify the -fvisibility=hidden command-line switch or the visibility pragma to override the default.)

The end result is that a shared library throws away all symbol information except for the exported functions.

A static library has no need to throw away any symbol information. What's more, you don't want to do that, because carefully specifying which functions are exported and which aren't is some work, and you don't want to have to do that work unless necessary. If you're using static libraries, it isn't necessary.

So a shippable shared library should minimize its exported symbols in order to be fast and small. This makes it less useful as a code repository for static linking, where you may want a greater selection of functions to link in, especially once the interface functions get inlined (see link-time optimization above).
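
On Linux with GCC, for example, controlling the exported set looks like this (a sketch; api.c and the function names are hypothetical):

/* api.c -- build with: gcc -shared -fPIC -fvisibility=hidden -o libapi.so api.c */

__attribute__((visibility("default")))
void api_entry(void)        /* explicitly exported */
{
}

void internal_helper(void)  /* hidden: absent from the dynamic symbol table */
{
}

Running nm -D libapi.so on the result lists api_entry but not internal_helper.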

Difference between static and shared libraries in Android's NDK?

The term shared library is not a perfect fit regarding Android's NDK, because in many cases the .so libraries aren't actually shared between applications. It's better to classify the libraries that the NDK builds as static and dynamic.

Every Android application is a Java application, and the only entry point for NDK code is loading it as a dynamic library and calling it through JNI.

Static libraries are archives of compiled object files. They get bundled into other libraries at build time. Unused portions of code from static libraries are stripped at link time by the NDK's toolchain to reduce the total size.

Dynamic libraries are loaded at runtime from separate files. They can have the static libraries they depend on linked into them, or load further dynamic libraries.

So what you actually need for Android development is at least one shared library, which will be called from Java code, preferably with its dependencies linked in as static libraries.
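
For reference, the native side of that single entry point looks something like this (a minimal sketch; the package, class, and method names are hypothetical):

#include <jni.h>

/* Matched on the Java side by:
 *   static { System.loadLibrary("native-lib"); }
 *   public static native String stringFromJNI();
 * in com.example.app.MainActivity
 */
JNIEXPORT jstring JNICALL
Java_com_example_app_MainActivity_stringFromJNI(JNIEnv *env, jclass clazz)
{
    return (*env)->NewStringUTF(env, "Hello from the NDK");
}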

What is the difference between a static library and a dynamic one

This concept might be a little bit too broad to explain, but I will try to give you a basic idea from which you can study further.

Firstly, you need to know what a library is. Basically, a library is a collection of functions. You may have noticed that we use functions which are not defined in our code, or in that particular file. To have access to them, we include a header file that contains declarations of those functions. After compilation, there is a process called linking that links those function declarations with their definitions, which are in another file. The result of this is the actual executable file.

Now, the linking as I described it is static linking. This means that every executable file contains in it every library (collection of functions) that it needs. This is a waste of space, as there are many programs that may need the same functions. In that case, there would be multiple copies of the same function in memory. Dynamic linking prevents this by linking at run time, not at compile time. This means that all the functions are in a special memory space and every program can access them, without having multiple copies of them. This reduces the amount of memory required.

As I mentioned at the beginning of my answer, this is a very simplified summary to give you a basic understanding. I strongly suggest you study more on this topic.

Differences between static libraries and dynamic libraries ignoring how they are used by the linker/loader

It's a pity the terms static library and dynamic library are
both of the form ADJECTIVE library, because it perpetually leads programmers
to think that they denote variants of essentially the same kind of thing.
This is almost as misleading as the thought that a badminton court and a supreme court
are essentially the same kind of thing. In fact it's far more misleading,
since nobody actually suffers from thinking that a badminton court and a supreme
court are essentially the same kind of thing.

Can someone throw some light on the differences between the contents of static and shared library files?

Let's use examples. To push back against the badminton court / supreme court fog
I'm going to use more accurate technical terms. Instead of static library I'll say ar archive, and instead of dynamic library I'll say
dynamic shared object, or DSO for short.

What an ar archive is

I'll make an ar archive starting with these three files:

foo.c

#include <stdio.h>

void foo(void)
{
    puts("foo");
}

bar.c

#include <stdio.h>

void bar(void)
{
    puts("bar");
}

limerick.txt

There once was a young lady named bright
Whose speed was much faster than light
She set out one day
In a relative way
And returned on the previous night.

I'll compile those two C sources into position-independent object files:

$ gcc -c -Wall -fPIC foo.c
$ gcc -c -Wall -fPIC bar.c

There's no need for object files destined for an ar archive to be compiled with
-fPIC. I just want these ones compiled that way.

Then I'll create an ar archive called libsundry.a containing the object files foo.o and bar.o,
plus limerick.txt:

$ ar rcs libsundry.a foo.o bar.o limerick.txt

An ar archive is created, of course, with ar,
the GNU general-purpose archiver. So it is not created by the linker. No linkage
happens. Here's how ar reports the contents of the archive:

$ ar -t libsundry.a 
foo.o
bar.o
limerick.txt

Here's what the limerick in the archive looks like:

$ rm limerick.txt
$ ar x libsundry.a limerick.txt; cat limerick.txt
There once was a young lady named bright
Whose speed was much faster than light
She set out one day
In a relative way
And returned on the previous night.

Q. What's the point of putting two object files and an ASCII limerick into the same ar archive?

A. To show that I can. To show that an ar archive is just a bag of files.

Let's see what file makes of libsundry.a.

$ file libsundry.a 
libsundry.a: current ar archive

Now I'll write a couple of programs that use libsundry.a in their linkage.

fooprog.c

extern void foo(void);

int main(void)
{
    foo();
    return 0;
}

Compile, link and run that one:

$ gcc -c -Wall fooprog.c
$ gcc -o fooprog fooprog.o -L. -lsundry
$ ./fooprog
foo

That's hunky dory. The linker apparently wasn't bothered by the presence of
an ASCII limerick in libsundry.a.

The reason for that is that the linker didn't even try to link limerick.txt
into the program. Let's do the linkage again, this time with a diagnostic option
that will show us exactly what input files are linked:

$ gcc -o fooprog fooprog.o -L. -lsundry -Wl,-trace
/usr/bin/ld: mode elf_x86_64
/usr/lib/gcc/x86_64-linux-gnu/5/../../../x86_64-linux-gnu/crt1.o
/usr/lib/gcc/x86_64-linux-gnu/5/../../../x86_64-linux-gnu/crti.o
/usr/lib/gcc/x86_64-linux-gnu/5/crtbegin.o
fooprog.o
(./libsundry.a)foo.o
-lgcc_s (/usr/lib/gcc/x86_64-linux-gnu/5/libgcc_s.so)
/lib/x86_64-linux-gnu/libc.so.6
(/usr/lib/x86_64-linux-gnu/libc_nonshared.a)elf-init.oS
/lib/x86_64-linux-gnu/ld-linux-x86-64.so.2
/lib/x86_64-linux-gnu/ld-linux-x86-64.so.2
-lgcc_s (/usr/lib/gcc/x86_64-linux-gnu/5/libgcc_s.so)
/usr/lib/gcc/x86_64-linux-gnu/5/crtend.o
/usr/lib/gcc/x86_64-linux-gnu/5/../../../x86_64-linux-gnu/crtn.o

Lots of default libraries and object files there, but the only object
files we have created that the linker consumed are:

fooprog.o
(./libsundry.a)foo.o

All that the linker did with ./libsundry.a was take foo.o out of
the bag and link it into the program. After linking fooprog.o into the program,
it needed to find a definition for foo.
It looked in the bag. It found the definition in foo.o, so it took foo.o from
the bag and linked it into the program. In linking fooprog,

gcc -o fooprog fooprog.o -L. -lsundry

is exactly the same linkage as:

$ gcc -o fooprog fooprog.o foo.o

What does file say about fooprog?

$ file fooprog
fooprog: ELF 64-bit LSB executable, x86-64, version 1 (SYSV), \
dynamically linked, interpreter /lib64/ld-linux-x86-64.so.2, \
for GNU/Linux 2.6.32, BuildID[sha1]=32525dce7adf18604b2eb5af7065091c9111c16e,
not stripped

Here's my second program:

foobarprog.c

extern void foo(void);
extern void bar(void);

int main(void)
{
    foo();
    bar();
    return 0;
}

Compile, link and run:

$ gcc -c -Wall foobarprog.c
$ gcc -o foobarprog foobarprog.o -L. -lsundry
$ ./foobarprog
foo
bar

And here's the linkage again with -trace:

$ gcc -o foobarprog foobarprog.o -L. -lsundry -Wl,-trace
/usr/bin/ld: mode elf_x86_64
/usr/lib/gcc/x86_64-linux-gnu/5/../../../x86_64-linux-gnu/crt1.o
/usr/lib/gcc/x86_64-linux-gnu/5/../../../x86_64-linux-gnu/crti.o
/usr/lib/gcc/x86_64-linux-gnu/5/crtbegin.o
foobarprog.o
(./libsundry.a)foo.o
(./libsundry.a)bar.o
-lgcc_s (/usr/lib/gcc/x86_64-linux-gnu/5/libgcc_s.so)
/lib/x86_64-linux-gnu/libc.so.6
(/usr/lib/x86_64-linux-gnu/libc_nonshared.a)elf-init.oS
/lib/x86_64-linux-gnu/ld-linux-x86-64.so.2
/lib/x86_64-linux-gnu/ld-linux-x86-64.so.2
-lgcc_s (/usr/lib/gcc/x86_64-linux-gnu/5/libgcc_s.so)
/usr/lib/gcc/x86_64-linux-gnu/5/crtend.o
/usr/lib/gcc/x86_64-linux-gnu/5/../../../x86_64-linux-gnu/crtn.o

So this time, our object files that the linker consumed were:

foobarprog.o
(./libsundry.a)foo.o
(./libsundry.a)bar.o

After linking foobarprog.o into the program, it needed to find definitions for foo and bar.
It looked in the bag. It found definitions respectively in foo.o and bar.o, so it took them from
the bag and linked them into the program. In linking foobarprog,

gcc -o foobarprog foobarprog.o -L. -lsundry

is exactly the same linkage as:

$ gcc -o foobarprog foobarprog.o foo.o bar.o

Summing all that up: an ar archive is just a bag of files. You can use
an ar archive to offer the linker a bunch of object files from which to
pick the ones that it needs to continue the linkage. It will take those object files
out of the bag and link them into the output file. It has absolutely no other
use for the bag. The bag contributes nothing at all to the linkage.

The bag just makes your life a little simpler by sparing you the need to know
exactly what object files you need for a particular linkage. You only need
to know: Well, they're in that bag.

What a DSO is

Let's make one.

foobar.c

extern void foo(void);
extern void bar(void);

void foobar(void)
{
    foo();
    bar();
}

We'll compile this new source file:

$ gcc -c -Wall -fPIC foobar.c

and then make a DSO using foobar.o and re-using libsundry.a:

$ gcc -shared -o libfoobar.so foobar.o -L. -lsundry -Wl,-trace
/usr/bin/ld: mode elf_x86_64
/usr/lib/gcc/x86_64-linux-gnu/5/../../../x86_64-linux-gnu/crti.o
/usr/lib/gcc/x86_64-linux-gnu/5/crtbeginS.o
foobar.o
(./libsundry.a)foo.o
(./libsundry.a)bar.o
-lgcc_s (/usr/lib/gcc/x86_64-linux-gnu/5/libgcc_s.so)
/lib/x86_64-linux-gnu/libc.so.6
/lib/x86_64-linux-gnu/ld-linux-x86-64.so.2
/lib/x86_64-linux-gnu/ld-linux-x86-64.so.2
-lgcc_s (/usr/lib/gcc/x86_64-linux-gnu/5/libgcc_s.so)
/usr/lib/gcc/x86_64-linux-gnu/5/crtendS.o
/usr/lib/gcc/x86_64-linux-gnu/5/../../../x86_64-linux-gnu/crtn.o

That has made the DSO libfoobar.so. Notice: A DSO is made by the linker. It
is linked just like a program is linked. The linkage of libfoobar.so looks very much
like the linkage of foobarprog, but the addition of the option
-shared instructs the linker to produce a DSO rather than a program. Here we see that our object
files consumed by the linkage were:

foobar.o
(./libsundry.a)foo.o
(./libsundry.a)bar.o

ar does not understand a DSO at all:

$ ar -t libfoobar.so 
ar: libfoobar.so: File format not recognised

But file does:

$ file libfoobar.so 
libfoobar.so: ELF 64-bit LSB shared object, x86-64, version 1 (SYSV), \
dynamically linked, BuildID[sha1]=16747713db620e5ef14753334fea52e71fb3c5c8, \
not stripped

Now if we relink foobarprog using libfoobar.so instead of libsundry.a:

$ gcc -o foobarprog foobarprog.o -L. -lfoobar -Wl,-trace,--rpath=$(pwd)
/usr/bin/ld: mode elf_x86_64
/usr/lib/gcc/x86_64-linux-gnu/5/../../../x86_64-linux-gnu/crt1.o
/usr/lib/gcc/x86_64-linux-gnu/5/../../../x86_64-linux-gnu/crti.o
/usr/lib/gcc/x86_64-linux-gnu/5/crtbegin.o
foobarprog.o
-lfoobar (./libfoobar.so)
-lgcc_s (/usr/lib/gcc/x86_64-linux-gnu/5/libgcc_s.so)
/lib/x86_64-linux-gnu/libc.so.6
(/usr/lib/x86_64-linux-gnu/libc_nonshared.a)elf-init.oS
/lib/x86_64-linux-gnu/ld-linux-x86-64.so.2
/lib/x86_64-linux-gnu/ld-linux-x86-64.so.2
-lgcc_s (/usr/lib/gcc/x86_64-linux-gnu/5/libgcc_s.so)
/usr/lib/gcc/x86_64-linux-gnu/5/crtend.o
/usr/lib/gcc/x86_64-linux-gnu/5/../../../x86_64-linux-gnu/crtn.o

we see

foobarprog.o
-lfoobar (./libfoobar.so)

that ./libfoobar.so itself was linked. Not some object files "inside it". There
aren't any object files inside it. And how this has
contributed to the linkage can be seen in the dynamic dependencies of the program:

$ ldd foobarprog
linux-vdso.so.1 => (0x00007ffca47fb000)
libfoobar.so => /home/imk/develop/so/scrap/libfoobar.so (0x00007fb050eeb000)
libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007fb050afd000)
/lib64/ld-linux-x86-64.so.2 (0x000055d8119f0000)

The program has come out with runtime dependency on libfoobar.so. That's what linking a DSO does.
We can see this runtime dependency is satisfied. So the program will run:

$ ./foobarprog
foo
bar

just the same as before.

The fact that a DSO and a program - unlike an ar archive - are both products
of the linker suggests that a DSO and a program are variants of essentially the same kind of thing.
The file outputs suggested that too. A DSO and a program are both ELF binaries
that the OS loader can map into a process address space. Not just a bag of files.
An ar archive is not an ELF binary of any kind.

The difference between a program-type ELF file and non-program-type ELF lies in the different values
that the linker writes into the ELF Header structure and Program Headers
structure of the ELF file format. These differences instruct the OS loader to
initiate a new process when it loads a program-type ELF file, and to augment
the process that it has under construction when it loads a non-program ELF file. Thus
a non-program DSO gets mapped into the process of its parent program. The fact that a program
initiates a new process requires that a program have a single default entry point
to which the OS will pass control: that entry point is the mandatory main function
in a C or C++ program. A non-program DSO, on the other hand, doesn't need a single mandatory entry point. It can be entered through any of the global functions it exports by function calls from the
parent program.
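
One concrete place to see this difference is the PT_INTERP program header, which names the runtime loader and is present in a program but not in a plain DSO (illustrative output, continuing the example above; offsets elided):

$ readelf -l foobarprog | grep -A1 INTERP
  INTERP         ...
      [Requesting program interpreter: /lib64/ld-linux-x86-64.so.2]
$ readelf -l libfoobar.so | grep INTERP
$

No INTERP segment in libfoobar.so: it is loaded into an existing process, not started as one.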

But from the point of view of file structure and content, a DSO and a program
are very similar things. They are files that can be components of a process.
A program must be the initial component. A DSO can be a secondary component.

It is still common for the further distinction to be made: A DSO must consist entirely
of relocatable code (because there's no knowing at linktime where the loader may need to
place it in a process address space), whereas a program consists of absolute code,
always loaded at the same address. But in fact it's quite possible to link a relocatable
program:

$ gcc -pie -o foobarprog foobarprog.o -L. -lfoobar -Wl,--rpath=$(pwd)

That's what -pie (Position Independent Executable) does here. And then:

$ file foobarprog
foobarprog: ELF 64-bit LSB shared object, x86-64, version 1 (SYSV), ....

file will say that foobarprog is a DSO, which it is, although it is
also still a program:

$ ./foobarprog
foo
bar

And PIE executables are catching on. In Debian 9 and derivative distros (Ubuntu 17.04...) the GCC toolchain
builds PIE programs by default.


