Dynamic Loading and Weak Symbol Resolution


Unfortunately, the authoritative documentation is the source code. Most distributions of Linux use glibc or its fork, eglibc. In the source code for both, the file that should document dlopen() reads as follows:

manual/libdl.texi

@c FIXME these are undocumented:
@c dladdr
@c dladdr1
@c dlclose
@c dlerror
@c dlinfo
@c dlmopen
@c dlopen
@c dlsym
@c dlvsym

What technical specification there is can be drawn from the ELF specification and the POSIX standard. The ELF specification is what makes a weak symbol meaningful. POSIX is the actual specification for dlopen() itself.

This is what I find to be the most relevant portion of the ELF specification.

When the link editor searches archive libraries, it extracts archive
members that contain definitions of undefined global symbols. The
member’s definition may be either a global or a weak symbol.

The ELF specification makes no reference to dynamic loading, so the rest of this paragraph is my own interpretation. The reason I find the above relevant is that resolving symbols occurs at a single "when". In the example you give, when program a dynamically loads b.so, the dynamic loader attempts to resolve undefined symbols. It may end up doing so with either global or weak symbols. When the program then dynamically loads c.so, the dynamic loader again attempts to resolve undefined symbols. In the scenario you describe, symbols in b.so were resolved with weak symbols. Once resolved, those symbols are no longer undefined. It doesn't matter whether global or weak symbols were used to define them; they are already no longer undefined by the time c.so is loaded.

The ELF specification gives no precise definition of what a link editor is or when the link editor must combine object files. Presumably it's a non-issue because the document has dynamic-linking in mind.

POSIX describes some of the dlopen() functionality but leaves much up to the implementation, including the substance of your question. POSIX makes no reference to the ELF format or weak symbols in general. For systems implementing dlopen() there need not even be any notion of weak symbols.

http://pubs.opengroup.org/onlinepubs/9699919799/functions/dlopen.html

POSIX compliance is part of another standard, the Linux Standard Base. Linux distributions may or may not choose to follow these standards and may or may not go to the trouble of being certified. For example, I understand that a formal Unix certification by Open Group is quite expensive -- hence the abundance of "Unix-like" systems.

An interesting point about the standards compliance of the dlopen() API is made in the Wikipedia article on dynamic loading. dlsym(), as mandated by POSIX, returns a void*, but C, as mandated by ISO, says that a void* is a pointer to an object, and such a pointer is not necessarily compatible with a function pointer.

The fact remains that any conversion between function and object
pointers has to be regarded as an (inherently non-portable)
implementation extension, and that no "correct" way for a direct
conversion exists, since in this regard the POSIX and ISO standards
contradict each other.
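
For what it's worth, here is a sketch of the two idioms you will see in practice for getting dlsym()'s void* result into a function pointer, using cos from libm purely as an illustration (the file name fnptr.c is mine). Neither is sanctioned by ISO C; both rely on the implementation tolerating the conversion, which POSIX effectively requires.

fnptr.c

/* Build with: gcc fnptr.c -ldl   (-ldl is unneeded on glibc >= 2.34) */
#include <dlfcn.h>
#include <stdio.h>

typedef double (*cosine_fn)(double);

int main(void)
{
    void *handle = dlopen("libm.so.6", RTLD_LAZY);
    cosine_fn cosine;

    if (handle == NULL)
        return 1;

    /* 1. The direct cast that nearly everyone writes. */
    cosine = (cosine_fn)dlsym(handle, "cos");

    /* 2. The workaround shown in older POSIX dlsym() examples: copy the
       object-pointer representation into the function pointer. */
    *(void **)(&cosine) = dlsym(handle, "cos");

    if (cosine == NULL)
        return 1;
    printf("cos(0.0) = %f\n", cosine(0.0));
    return 0;
}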

The standards that do exist contradict one another, and the standards documents that do exist may not be especially meaningful anyway. Here's Ulrich Drepper writing about his disdain for the Open Group and their "specifications".

http://udrepper.livejournal.com/8511.html

Similar sentiment is expressed in the post linked by rodrigo.

The reason I've made this change is not really to be more conformant
(it's nice but no reason since nobody complained about the old
behaviour).

After looking into it, I believe the proper answer to the question as you've asked it is that there is no right or wrong behavior for dlopen() in this regard. Arguably, once a search has resolved a symbol it is no longer undefined and in subsequent searches the dynamic loader will not attempt to resolve the already defined symbol.

Finally, as you state in the comments, what you describe in the original post is not correct. Dynamically loaded shared libraries can be used to resolve undefined symbols in previously dynamically loaded shared libraries. In fact, this isn't limited to undefined symbols in dynamically loaded code. Here is an example in which the executable itself has an undefined symbol that is resolved through dynamic loading.

main.c

#include <dlfcn.h>

void say_hi(void);

int main(void) {
    void* symbols_b = dlopen("./dyload.so", RTLD_NOW | RTLD_GLOBAL);
    /* uh-oh, forgot to define this function */
    /* better remember to define it in dyload.so */
    say_hi();
    return 0;
}

dyload.c

#include <stdio.h>
void say_hi(void) {
    puts("dyload.so: hi");
}

Compile and run.

$ gcc-4.8 main.c -fpic -ldl -Wl,--unresolved-symbols=ignore-all -o main
$ gcc-4.8 dyload.c -shared -fpic -o dyload.so
$ ./main
dyload.so: hi

Note that the main executable itself was compiled as PIC.

Resolving symbols differently in different dynamically loaded objects

Unfortunately there is no way to tweak symbol resolution at the per-library level, so there is no easy way to achieve this.

If foo is actually implemented in the main executable (not just copy-relocated into it), there is nothing you can do, because symbols from the main executable get the highest priority during resolution (unless you are OK with extremely hacky runtime patching of the GOT, which you aren't).

But if

  • foo is implemented in c.so
  • and you are desperate enough

you could do the following:

  • get return address inside interceptor in a.so (use __builtin_return_address)
  • match it against boundaries of b.so (can be obtained from /proc/self/maps)
  • depending on result, either do special processing (if caller is in b.so) or forward call to RTLD_NEXT

This of course has obvious limitations: for example, it won't work if b.so calls a function in yet another d.so which then calls foo. But it may be enough in many cases. And yes, I've seen this approach deployed in practice.
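
Here is a rough sketch of such an interceptor, compiled into a.so with something like gcc -shared -fpic. The signature of foo is made up, and dladdr is used here as a shortcut for matching the return address against the b.so boundaries from /proc/self/maps; otherwise it follows the steps above.

a.c (interceptor sketch)

#define _GNU_SOURCE
#include <dlfcn.h>
#include <string.h>

int foo(int x)                        /* hypothetical signature */
{
    static int (*real_foo)(int);
    Dl_info info;
    void *caller = __builtin_return_address(0);

    if (real_foo == NULL)
        real_foo = (int (*)(int))dlsym(RTLD_NEXT, "foo");

    /* Does the direct caller live in b.so? */
    if (dladdr(caller, &info) && info.dli_fname != NULL
        && strstr(info.dli_fname, "b.so") != NULL)
        return 42;                    /* special processing for calls from b.so */

    return real_foo != NULL ? real_foo(x) : 0;   /* forward to the real foo */
}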

Dynamic Loading: Undefined Symbol In Shared Static Library

@nbt:

Linking the .so against the .a is the obvious and correct thing to
do.

This should not generate symbol conflicts when loading the .so into an executable.

Why the weak symbol defined in the same .a file but different .o file is not used as fall back?


these subtle behaviors

There isn't really anything subtle here.

  1. A weak definition means: use this symbol unless a strong definition of the same symbol is also present, in which case use that one.

    Normally two same-named definitions result in a multiply-defined link error, but when at most one of the definitions is strong, no multiply-defined error is produced.

  2. A weak (unresolved) reference means: don't consider this symbol when deciding whether or not to pull an object which defines it out of an archive library (an object may still be pulled in if it satisfies a different strong undefined symbol).

    Normally, if a symbol is still unresolved after all objects have been selected, the linker reports an unresolved-symbol error. But if the unresolved symbol is weak, the error is suppressed.

That's really all there is to it.
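
A tiny sketch of point 1, with made-up file names: a strong and a weak definition of the same symbol coexist without error, and the strong one is used.

use_answer.c

#include <stdio.h>

int answer(void);

int main(void) { printf("%d\n", answer()); return 0; }

strong.c

int answer(void) { return 1; }

weak.c

int __attribute__((weak)) answer(void) { return 2; }

$ gcc -c use_answer.c strong.c weak.c
$ gcc use_answer.o strong.o weak.o && ./a.out
1

Drop the weak attribute from weak.c and the same link produces the usual multiply-defined error.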

Update:

You are repeating an incorrect understanding in the comments.

What makes me feel subtle is, for a weak reference, the linker doesn't pull an object from an archive library, but still check a standalone object file.

This is entirely consistent with the answer above. When the linker deals with an archive library, it has to make a decision: whether or not to select the contained foo.o into the link. It is that decision that is affected by the type of reference.

When bar.o is given on the link line as a "standalone object file", the linker makes no decisions about it -- bar.o will be selected into the link.

And if that object happens to contain a definition for the weak reference, will the weak reference be also resolved by the way?

Yes.

Even the weak attribute tells the linker not to.

This is the apparent root of misunderstanding: the weak attribute doesn't tell the linker not to resolve the reference; it only tells the linker (pardon repetition) "don't consider this symbol when deciding whether or not to pull an object which defines it out of an archive library".

I think it's all about whether or not an object containing a definition for that weak reference is pulled in for linking.

Correct.

Be it a standalone object or from an archive lib.

Wrong: a standalone object is always selected into the link.
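
A minimal illustration of both halves of that, with made-up file names (behavior as observed with GNU ld): a weak reference does not pull the defining member out of an archive, but a standalone object containing the definition is linked regardless and resolves the reference.

use.c

#include <stdio.h>

void hook(void) __attribute__((weak));   /* weak reference: no definition here */

int main(void) { if (hook) hook(); else puts("no hook"); return 0; }

hook.c

#include <stdio.h>

void hook(void) { puts("hook.c: hook"); }

$ gcc -c use.c hook.c
$ ar rcs libhook.a hook.o
$ gcc use.o libhook.a && ./a.out    # archive member not pulled in for a weak reference
no hook
$ gcc use.o hook.o && ./a.out       # standalone object always selected into the link
hook.c: hook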

__attribute__((weak)) and static libraries

To explain what's going on here, let's talk first about your original source files, with

a.h (1):

void foo() __attribute__((weak));

and:

a.c (1):

#include "a.h"
#include <stdio.h>

void foo() { printf("%s\n", __FILE__); }

The mixture of .c and .cpp files in your sample code is irrelevant to the
issues, and all the code is C, so we'll say that main.cpp is main.c and
do all compiling and linking with gcc:

$ gcc -Wall -c main.c a.c b.c
$ ar rcs a.a a.o
$ ar rcs b.a b.o
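
The other two source files are not listed in the question; judging from the -Waddress warning and the program output further down, they presumably look like this:

main.c

#include "a.h"
#include <stdio.h>

int main() { if (foo) foo(); else printf("no foo\n"); }

b.c

#include <stdio.h>

void foo() { printf("%s\n", __FILE__); }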

First let's review the differences between a weakly declared symbol, like
your:

void foo() __attribute__((weak));

and a strongly declared symbol, like

void foo();

which is the default:

  • When a weak reference to foo (i.e. a reference to weakly declared foo) is linked in a program, the
    linker need not find a definition of foo anywhere in the linkage: it may remain
    undefined. If a strong reference to foo is linked in a program,
    the linker needs to find a definition of foo.

  • A linkage may contain at most one strong definition of foo (i.e. a definition
    of foo that declares it strongly). Otherwise a multiple-definition error results.
    But it may contain multiple weak definitions of foo without error.

  • If a linkage contains one or more weak definitions of foo and also a strong
    definition, then the linker chooses the strong definition and ignores the weak
    ones.

  • If a linkage contains just one weak definition of foo and no strong
    definition, inevitably the linker uses the one weak definition.

  • If a linkage contains multiple weak definitions of foo and no strong
    definition, then the linker chooses one of the weak definitions arbitrarily.

Next, let's review the differences between inputting an object file in a linkage
and inputting a static library.

A static library is merely an ar archive of object files that we may offer to
the linker from which to select the ones it needs to carry on the linkage.

When an object file is input to a linkage, the linker unconditionally links it
into the output file.

When a static library is input to a linkage, the linker examines the archive to
find any object files within it that provide definitions it needs for unresolved symbol references
that have accrued from input files already linked. If it finds any such object files
in the archive, it extracts them and links them into the output file, exactly as
if they were individually named input files and the static library were not mentioned at all.

With these observations in mind, consider the compile-and-link command:

gcc main.c a.o b.o

Behind the scenes gcc breaks it down, as it must, into a compile-step and link
step, just as if you had run:

gcc -c main.c     # compile
gcc main.o a.o b.o # link

All three object files are linked unconditionally into the (default) program ./a.out. a.o contains a
weak definition of foo, as we can see:

$ nm --defined a.o
0000000000000000 W foo

Whereas b.o contains a strong definition:

$ nm --defined b.o
0000000000000000 T foo

The linker will find both definitions and choose the strong one from b.o, as we can
also see:

$ gcc main.o a.o b.o -Wl,-trace-symbol=foo
main.o: reference to foo
a.o: definition of foo
b.o: definition of foo
$ ./a.out
b.c

Reversing the linkage order of a.o and b.o will make no difference: there's
still exactly one strong definition of foo, the one in b.o.

By contrast consider the compile-and-link command:

gcc main.c a.a b.a

which breaks down into:

gcc -c main.c     # compile
gcc main.o a.a b.a # link

Here, only main.o is linked unconditionally. That puts an undefined weak reference
to foo into the linkage:

$ nm --undefined main.o
w foo
U _GLOBAL_OFFSET_TABLE_
U puts

That weak reference to foo does not need a definition. So the linker will
not attempt to find a definition that resolves it in any of the object files in either a.a or b.a and
will leave it undefined in the program, as we can see:

$ gcc main.o a.a b.a -Wl,-trace-symbol=foo
main.o: reference to foo
$ nm --undefined a.out
w __cxa_finalize@@GLIBC_2.2.5
w foo
w __gmon_start__
w _ITM_deregisterTMCloneTable
w _ITM_registerTMCloneTable
U __libc_start_main@@GLIBC_2.2.5
U puts@@GLIBC_2.2.5

Hence:

$ ./a.out
no foo

Again, it doesn't matter if you reverse the linkage order of a.a and b.a,
but this time it is because neither of them contributes anything to the linkage.

Let's turn now to the different behavior you discovered by changing a.h and a.c
to:

a.h (2):

void foo();

a.c (2):

#include "a.h"
#include <stdio.h>

void __attribute__((weak)) foo() { printf("%s\n", __FILE__); }

Once again:

$ gcc -Wall -c main.c a.c b.c
main.c: In function ‘main’:
main.c:4:18: warning: the address of ‘foo’ will always evaluate as ‘true’ [-Waddress]
int main() { if (foo) foo(); else printf("no foo\n"); }

See that warning? main.o now contains a strongly declared reference to foo:

$ nm --undefined main.o
U foo
U _GLOBAL_OFFSET_TABLE_

so the code (when linked) must have a non-null address for foo. Proceeding:

$ ar rcs a.a a.o
$ ar rcs b.a b.o

Then try the linkage:

$ gcc main.o a.o b.o
$ ./a.out
b.c

And with the object files reversed:

$ gcc main.o b.o a.o
$ ./a.out
b.c

As before, the order makes no difference. All the object files are linked. b.o provides
a strong definition of foo, a.o provides a weak one, so b.o wins.

Next try the linkage:

$ gcc main.o a.a b.a
$ ./a.out
a.c

And with the order of the libraries reversed:

$ gcc main.o b.a a.a
$ ./a.out
b.c

That does make a difference. Why? Let's redo the linkages with diagnostics:

$ gcc main.o a.a b.a -Wl,-trace,-trace-symbol=foo
/usr/bin/x86_64-linux-gnu-ld: mode elf_x86_64
/usr/lib/gcc/x86_64-linux-gnu/7/../../../x86_64-linux-gnu/Scrt1.o
/usr/lib/gcc/x86_64-linux-gnu/7/../../../x86_64-linux-gnu/crti.o
/usr/lib/gcc/x86_64-linux-gnu/7/crtbeginS.o
main.o
(a.a)a.o
libgcc_s.so.1 (/usr/lib/gcc/x86_64-linux-gnu/7/libgcc_s.so.1)
/lib/x86_64-linux-gnu/libc.so.6
(/usr/lib/x86_64-linux-gnu/libc_nonshared.a)elf-init.oS
/lib/x86_64-linux-gnu/ld-linux-x86-64.so.2
/lib/x86_64-linux-gnu/ld-linux-x86-64.so.2
libgcc_s.so.1 (/usr/lib/gcc/x86_64-linux-gnu/7/libgcc_s.so.1)
/usr/lib/gcc/x86_64-linux-gnu/7/crtendS.o
/usr/lib/gcc/x86_64-linux-gnu/7/../../../x86_64-linux-gnu/crtn.o
main.o: reference to foo
a.a(a.o): definition of foo

Ignoring the default libraries, the only object files of ours that get
linked were:

main.o
(a.a)a.o

And the definition of foo was taken from the archive member a.o of a.a:

a.a(a.o): definition of foo

Reversing the library order:

$ gcc main.o b.a a.a -Wl,-trace,-trace-symbol=foo
/usr/bin/x86_64-linux-gnu-ld: mode elf_x86_64
/usr/lib/gcc/x86_64-linux-gnu/7/../../../x86_64-linux-gnu/Scrt1.o
/usr/lib/gcc/x86_64-linux-gnu/7/../../../x86_64-linux-gnu/crti.o
/usr/lib/gcc/x86_64-linux-gnu/7/crtbeginS.o
main.o
(b.a)b.o
libgcc_s.so.1 (/usr/lib/gcc/x86_64-linux-gnu/7/libgcc_s.so.1)
/lib/x86_64-linux-gnu/libc.so.6
(/usr/lib/x86_64-linux-gnu/libc_nonshared.a)elf-init.oS
/lib/x86_64-linux-gnu/ld-linux-x86-64.so.2
/lib/x86_64-linux-gnu/ld-linux-x86-64.so.2
libgcc_s.so.1 (/usr/lib/gcc/x86_64-linux-gnu/7/libgcc_s.so.1)
/usr/lib/gcc/x86_64-linux-gnu/7/crtendS.o
/usr/lib/gcc/x86_64-linux-gnu/7/../../../x86_64-linux-gnu/crtn.o
main.o: reference to foo
b.a(b.o): definition of foo

This time the object files linked were:

main.o
(b.a)b.o

And the definition of foo was taken from b.o in b.a:

b.a(b.o): definition of foo

In the first linkage, the linker had an unresolved strong reference to
foo for which it needed a definition when it reached a.a. So it
looked in the archive for an object file that provides a definition,
and found a.o. That definition was a weak one, but that didn't matter. No
strong definition had been seen. a.o was extracted from a.a and linked,
and the reference to foo was thus resolved. Next b.a was reached, where
a strong definition of foo would have been found in b.o, if the linker still needed one
and looked for it. But it didn't need one any more and didn't look. The linkage:

gcc main.o a.a b.a

is exactly the same as:

gcc main.o a.o

And likewise the linkage:

$ gcc main.o b.a a.a

is exactly the same as:

$ gcc main.o b.o

Your real problem...

... emerges in one of your comments to the post:

I want to override [the] original function implementation when linking with a testing framework.

You want to link a program inputting some static library lib1.a
which has some member file1.o that defines a symbol foo, and you want to knock out
that definition of foo and link a different one that is defined in some other object
file file2.o.

__attribute__((weak)) isn't applicable to that problem. The solution is more
elementary. You just make sure to input file2.o to the linkage before you input
lib1.a (and before any other input that provides a definition of foo).
Then the linker will resolve references to foo with the definition provided in file2.o and will not try to find any other
definition when it reaches lib1.a. The linker will not consume lib1.a(file1.o) at all. It might as well not exist.

And what if you have put file2.o in another static library lib2.a? Then inputting
lib2.a before lib1.a will do the job of linking lib2.a(file2.o) before
lib1.a is reached and resolving foo to the definition in file2.o.

Likewise, of course, every definition provided by members of lib2.a will be linked in
preference to a definition of the same symbol provided in lib1.a. If that's not what
you want, then don't link lib2.a: link file2.o itself.
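
As a concrete sketch of the ordering (foo, file1.o, file2.o, lib1.a and lib2.a are the names used above; main.o stands in for the rest of your objects and the output name is arbitrary):

$ gcc -c file2.c                        # file2.o holds your replacement definition of foo
$ gcc main.o file2.o lib1.a -o prog     # file2.o linked first; lib1.a(file1.o) never consumed
$ gcc main.o lib2.a lib1.a -o prog      # same idea, with file2.o archived inside lib2.a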

Finally

Is it possible to use [the] weak attribute with static linking at all?

Certainly. Here is a first-principles use-case:

foo.h (1)

#ifndef FOO_H
#define FOO_H

int __attribute__((weak)) foo(int i)
{
    return i != 0;
}

#endif

aa.c

#include "foo.h"

int a(void)
{
    return foo(0);
}

bb.c

#include "foo.h"

int b(void)
{
    return foo(42);
}

prog.c

#include <stdio.h>

extern int a(void);
extern int b(void);

int main(void)
{
    puts(a() ? "true" : "false");
    puts(b() ? "true" : "false");
    return 0;
}

Compile all the source files, requesting a separate ELF section for each function:

$ gcc -Wall -ffunction-sections -c prog.c aa.c bb.c

Note that the weak definition of foo is compiled via foo.h into both
aa.o and bb.o, as we can see:

$ nm --defined aa.o
0000000000000000 T a
0000000000000000 W foo
$ nm --defined bb.o
0000000000000000 T b
0000000000000000 W foo

Now link a program from all the object files, requesting the linker to
discard unused sections (and give us the map-file, and some diagnostics):

$ gcc prog.o aa.o bb.o -Wl,--gc-sections,-Map=mapfile,-trace,-trace-symbol=foo
/usr/bin/x86_64-linux-gnu-ld: mode elf_x86_64
/usr/lib/gcc/x86_64-linux-gnu/7/../../../x86_64-linux-gnu/Scrt1.o
/usr/lib/gcc/x86_64-linux-gnu/7/../../../x86_64-linux-gnu/crti.o
/usr/lib/gcc/x86_64-linux-gnu/7/crtbeginS.o
prog.o
aa.o
bb.o
libgcc_s.so.1 (/usr/lib/gcc/x86_64-linux-gnu/7/libgcc_s.so.1)
/lib/x86_64-linux-gnu/libc.so.6
(/usr/lib/x86_64-linux-gnu/libc_nonshared.a)elf-init.oS
/lib/x86_64-linux-gnu/ld-linux-x86-64.so.2
/lib/x86_64-linux-gnu/ld-linux-x86-64.so.2
libgcc_s.so.1 (/usr/lib/gcc/x86_64-linux-gnu/7/libgcc_s.so.1)
/usr/lib/gcc/x86_64-linux-gnu/7/crtendS.o
/usr/lib/gcc/x86_64-linux-gnu/7/../../../x86_64-linux-gnu/crtn.o
aa.o: definition of foo

This linkage is no different from:

$ ar rcs libaabb.a aa.o bb.o
$ gcc prog.o libaabb.a

Despite the fact that both aa.o and bb.o were loaded, and each contains
a definition of foo, no multiple-definition error results, because each definition
is weak. aa.o was loaded before bb.o and the definition of foo was linked from aa.o.

So what happened to the definition of foo in bb.o? The mapfile shows us:

mapfile (1)

...
...
Discarded input sections
...
...
.text.foo 0x0000000000000000 0x13 bb.o
...
...

The linker discarded the function section that contained the definition in bb.o.

Let's reverse the linkage order of aa.o and bb.o:

$ gcc prog.o bb.o aa.o -Wl,--gc-sections,-Map=mapfile,-trace,-trace-symbol=foo
...
prog.o
bb.o
aa.o
...
bb.o: definition of foo

And now the opposite thing happens. bb.o is loaded before aa.o. The
definition of foo is linked from bb.o and:

mapfile (2)

...
...
Discarded input sections
...
...
.text.foo 0x0000000000000000 0x13 aa.o
...
...

the definition from aa.o is chucked away.

There you see how the linker arbitrarily chooses one of multiple
weak definitions of a symbol, in the absence of a strong definition. It simply
picks the first one you give it and ignores the rest.

What we've just done here is effectively what the GCC C++ compiler does for us when we
define a global inline function. Rewrite:

foo.h (2)

#ifndef FOO_H
#define FOO_H

inline int foo(int i)
{
    return i != 0;
}

#endif

Rename our source files *.c -> *.cpp; compile and link:

$ g++ -Wall -c prog.cpp aa.cpp bb.cpp

Now there is a weak definition of foo (C++ mangled) in each of aa.o and bb.o:

$ nm --defined aa.o bb.o

aa.o:
0000000000000000 T _Z1av
0000000000000000 W _Z3fooi

bb.o:
0000000000000000 T _Z1bv
0000000000000000 W _Z3fooi

The linkage uses the first definition it finds:

$ g++ prog.o aa.o bb.o -Wl,-Map=mapfile,-trace,-trace-symbol=_Z3fooi
...
prog.o
aa.o
bb.o
...
aa.o: definition of _Z3fooi
bb.o: reference to _Z3fooi

and throws away the other one:

mapfile (3)

...
...
Discarded input sections
...
...
.text._Z3fooi 0x0000000000000000 0x13 bb.o
...
...

And as you may know, every instantiation of a C++ function template at global scope
(or instantiation of a class template member function) is
an inline global function. Rewrite again:

#ifndef FOO_H
#define FOO_H

template<typename T>
T foo(T i)
{
    return i != 0;
}

#endif

Recompile:

$ g++ -Wall -c prog.cpp aa.cpp bb.cpp

Again:

$ nm --defined aa.o bb.o

aa.o:
0000000000000000 T _Z1av
0000000000000000 W _Z3fooIiET_S0_

bb.o:
0000000000000000 T _Z1bv
0000000000000000 W _Z3fooIiET_S0_

each of aa.o and bb.o has a weak definition of:

$ c++filt _Z3fooIiET_S0_
int foo<int>(int)

and the linkage behaviour is now familiar. One way:

$ g++ prog.o aa.o bb.o -Wl,-Map=mapfile,-trace,-trace-symbol=_Z3fooIiET_S0_
...
prog.o
aa.o
bb.o
...
aa.o: definition of _Z3fooIiET_S0_
bb.o: reference to _Z3fooIiET_S0_

and the other way:

$ g++ prog.o bb.o aa.o -Wl,-Map=mapfile,-trace,-trace-symbol=_Z3fooIiET_S0_
...
prog.o
bb.o
aa.o
...
bb.o: definition of _Z3fooIiET_S0_
aa.o: reference to _Z3fooIiET_S0_

Our program's behavior is unchanged by the rewrites:

$ ./a.out
false
true

So the application of the weak attribute to symbols in the linkage of ELF objects -
whether static or dynamic - enables the GCC implementation of C++ templates
for the GNU linker. You could fairly say it enables the GCC implementation of modern C++.

Is dynamic loading strictly compatible with the C++ Standard?

The C++ standard doesn't have any provision for dynamic modules, so a certain amount of interpretation is necessary.

Yes, statically initialized variables in dynamically loaded modules will be initialized after dynamically initialized variables in the main module. You can observe this, and construct programs where it has an effect on the program's behavior. But if you think of the DLL as a separate program, one which shares the main program's memory space but has its own timeline, you can pretty much apply the same rules at the module level and use them to predict behavior application-wide. The compiler doesn't want to surprise you... it just has to sometimes.
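
The timing is easy to observe. Here is a small C sketch (the file names host.c and mod.c are mine) using GCC constructor attributes, which ride on the same ELF init-array machinery that C++ dynamic initialization uses: the host's initializer runs before main, while the module's initializer runs only once dlopen is called.

host.c

#include <dlfcn.h>
#include <stdio.h>

__attribute__((constructor))
static void host_init(void) { puts("host: initializer (before main)"); }

int main(void)
{
    puts("host: in main, calling dlopen");
    void *h = dlopen("./mod.so", RTLD_NOW);
    if (h == NULL) {
        fprintf(stderr, "%s\n", dlerror());
        return 1;
    }
    puts("host: dlopen returned");
    dlclose(h);
    return 0;
}

mod.c

#include <stdio.h>

__attribute__((constructor))
static void mod_init(void) { puts("mod.so: initializer (inside dlopen)"); }

Build with gcc mod.c -shared -fpic -o mod.so and gcc host.c -ldl -o host; the host's message prints before anything in main, and the module's message appears between the "calling dlopen" and "dlopen returned" lines.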

Incidentally, initialization order is really the least of your concerns when it comes to the collision between C++ and DLLs. Dynamic modules break far more rules than that, particularly when it comes to RTTI.

Loading a dynamic library at run-time yields inconsistent and unexpected results, missing symbols and empty PLT entries. Why?


The above Should Just Work™, and indeed it does seem to...

No, it should not, and if it appears to, that's more of an accident. In general, using --unresolved-symbols=... is a really bad idea™, and will almost never do what you want.

The solution is trivial: you just need to look up zip_open and zip_close, like so:

#include <dlfcn.h>
#include <zip.h>     /* for zip_t, ZIP_CREATE, ZIP_TRUNCATE */

int main(void) {
    void *lib;
    zip_t *(*p_open)(const char *, int, int *);
    int (*p_close)(zip_t *);
    int err;
    zip_t *myzip;

    lib = dlopen("libzip.so", RTLD_LAZY | RTLD_GLOBAL);
    if (lib == NULL)
        return 1;

    p_open = (zip_t *(*)(const char *, int, int *))dlsym(lib, "zip_open");
    if (p_open == NULL)
        return 1;
    p_close = (int (*)(zip_t *))dlsym(lib, "zip_close");
    if (p_close == NULL)
        return 1;

    myzip = p_open("myzip.zip", ZIP_CREATE | ZIP_TRUNCATE, &err);
    if (myzip == NULL)
        return 1;

    p_close(myzip);

    return 0;
}

