Does GCC Support Command Files

Does GCC support command files?

Sure!

@file
Read command-line options from file. The options read are inserted
in place of the original @file option. If file does not exist, or
cannot be read, then the option will be treated literally, and not
removed.

Options in file are separated by whitespace. A whitespace
character may be included in an option by surrounding the entire
option in either single or double quotes. Any character (including
a backslash) may be included by prefixing the character to be
included with a backslash. The file may itself contain additional
@file options; any such options will be processed recursively.
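For illustration, suppose you put your options in a file (the name opts.txt here is just an example):

-O2 -Wall -c main.c

Then the invocation

gcc @opts.txt

behaves as if you had typed gcc -O2 -Wall -c main.c on the command line.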

You can also jury-rig this type of thing with xargs, if your platform has it.
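For example, something along these lines (assuming an opts.txt file like the one above and a shell that has xargs):

xargs gcc < opts.txt

xargs reads the whitespace-separated options from the file and passes them to gcc as arguments.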

Do all gcc compilers support the @FILE flag?

OK, I managed to find the answer myself.

@file was introduced in gcc 4.2.1; see the documentation.

So my compiler, which is based on gcc 4.1.2, does not support it.

How does gcc handle local included files?

I then re-added the references to f, and forgot to re-add the #include. Despite this, 'gcc main.c sub.c -O3 -o test3' worked as expected.

For suitably loose definitions of "worked"; I'm going to bet that f() returns an int, and that gcc was defaulting to C89 mode.

Prior to C99, if the compiler encountered a function call before it saw a function definition or declaration, it assumed that the called function returned an int. Thus, as long as f() actually returns an int, your code will compile and run successfully. If f() doesn't return an int the code will still compile, but you will have a runtime problem. All the linker cares about is that the symbol is there; it doesn't care about type mismatches.

C99 did away with implicit int typing, so under a C99 compiler your code would fail to compile if you didn't have a declaration for f() in scope (either by including sub.h or adding the declaration manually).
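A minimal sketch of the situation (the body of f() isn't shown in the question, so this version is an assumption):

/* main.c - note: no #include "sub.h" and no declaration of f */
int main(void)
{
    return f(2);   /* C89: f is implicitly assumed to return int */
}

/* sub.c - hypothetical definition; it happens to return int */
int f(int n)
{
    return n + 1;
}

Compiled with gcc main.c sub.c -O3 -o test3 in C89 mode, this builds and runs because the implicit assumption about f()'s return type happens to be correct. Change f() to return, say, a double and it will still link, but you get exactly the runtime mismatch problem described above.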

The only conclusion I can draw from this is that as far as local files are concerned, if the file is a source code file then include it and don't add it to the command, else add its source to the command and don't bother including it, because apparently it doesn't matter whether you include it or not. I guess?

That is the exact wrong conclusion to draw. You do not want to include a .c file within another .c file as a regular practice, as it can lead to all kinds of mayhem. Everything in main.c is visible to sub.c and vice versa, leading to potential namespace collisions - for example, both files could define a "local" helper function named foo(). Normally such "local" functions aren't visible outside of their own source file, but by including one source file within the other, both versions of foo() are visible and clash with each other.

Another problem is that if a .c file includes another .c file which includes another .c file, etc., you may wind up with a translation unit that's too large for the compiler to handle.

You will also wind up recompiling both files every time you change one or the other, even when it isn't necessary. It's just bad practice all the way around.

The right thing to do is compile main.c and sub.c separately and make sure sub.h is included in both (you want to include sub.h in sub.c to make sure your declarations line up with your definitions; if they don't, the compiler will yak while translating sub.c).
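A sketch of what that might look like (the actual contents of sub.h aren't shown in the question, so the declaration below is illustrative):

/* sub.h */
#ifndef SUB_H
#define SUB_H
int f(int n);    /* one declaration, shared by main.c and sub.c */
#endif

sub.c starts with #include "sub.h" (so the compiler can compare the declaration against the definition), main.c also includes "sub.h", and the separate compile-and-link steps are:

gcc -c main.c
gcc -c sub.c
gcc -o test main.o sub.o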

Edit

Answering the following question in the comments:

When you say to compile main.c and sub.c separately, I'm assuming you mean to make object files out of them each individually and then link them (3 commands total)? Is there any way to do that with a single command?

The command gcc -o test main.c sub.c does the same thing; it just doesn't save the respective object files to disk. You could also create a simple Makefile, like so:

CC=gcc
CFLAGS=-O3 -std=c99 -pedantic -Wall -Werror

SRCS=main.c sub.c
OBJS=$(SRCS:.c=.o)

test: $(OBJS)
	$(CC) -o $@ $(CFLAGS) $(OBJS)

clean:
	rm -rf test $(OBJS)

Then all you need to do is type make test:

[fbgo448@n9dvap997]~/prototypes/simplemake: make test
gcc -O3 -std=c99 -pedantic -Wall -Werror -c -o main.o main.c
gcc -O3 -std=c99 -pedantic -Wall -Werror -c -o sub.o sub.c
gcc -o test -O3 -std=c99 -pedantic -Wall -Werror main.o sub.o

There are implicit rules for building object files from .c files, so you don't need to include those rules in your Makefile. All you need to do is specify targets and prerequisites.
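For what it's worth, GNU make's built-in rule for turning a .c file into a .o file is roughly equivalent to this (simplified; the real rule also pulls in $(CPPFLAGS) and a couple of other variables):

%.o: %.c
	$(CC) $(CFLAGS) -c -o $@ $<

which is why the CFLAGS defined above show up on the main.o and sub.o compile lines in the output.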

You may need to drop the -pedantic flag to use some platform-specific utilities, and you may need to specify a different standard (c89, gnu89, etc.) as well. You will definitely want to keep the -Wall -Werror flags, though: -Wall enables a broad set of warnings and -Werror treats those warnings as errors, forcing you to deal with them.

Do all gcc versions support gcc's @file option?

This was committed to the gcc source tree with the following ChangeLog entry:

2005-11-23  Mark Mitchell  <mark@codesourcery.com>

* doc/invoke.texi: For man pages, include gcc-vers.texi.
List @file in the option summary. Include the libiberty
documentation for @file.
* gcc.c (main): Call expandargv.
* Makefile.in (gcc-vers.texi): Define srcdir.

According to the release history, this should have made it into GCC 4.1.

gcc and g++ command prompt compiling and linking

Right-click My Computer. Go to Properties. Go to the Advanced tab. There is an Environment Variables button below. Find "PATH" in the global (system) environment variables. Add

c:\program files\gcc\bin

after appending a semicolon to the end of the previous entry.
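For example, if the variable currently reads something like this (purely illustrative):

C:\Windows\system32;C:\Windows

it would become:

C:\Windows\system32;C:\Windows;c:\program files\gcc\bin

Open a new command prompt afterwards (windows that were already open keep the old PATH) and run gcc --version to confirm that gcc is found.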

What does gcc's -E option stand for?

Summary

  • This option was introduced in compilers much earlier than GCC, and GCC used the same naming for compatibility.

  • My best guess from the historical evidence is that -E stands for "expand macros".

  • The authors of those earlier compilers couldn't call it -P because there was already a -P option, which also ran only the preprocessor, but wrote the output to .i files instead of to standard output. -p was also taken.

  • Over time, -E became preferred to -P, and when GCC was written, it supported only -E and not -P.


Despite this being supposedly off topic for Stack Overflow, I think a bit of history will help explain how this option got its name.

gcc(1) inherited its basic command line options from much earlier Unix C compilers. Many versions can be found in the Unix Tree archive.

It looks like the first version that supported -E was Research Unix V7, circa 1979: cc(1) source, man page source. There was also a -P option that also just ran the preprocessor, but sent the result to a file foo.i instead of to standard output. V6 had already supported -P but not -E: cc(1) source, man page source.

This at least answers why -E wasn't named -P instead: because -P was already in use. (And -p was also taken; it was used to request profiling.) The only hint I found as to why -E was chosen is that the corresponding flag variable in the source code is named exflag. I would hazard a guess that this stands for "expand", as what -E does is basically to expand macros.

It appears that -P was eventually deprecated in favor of -E. V8 still supported it, but omitted it from the man page. V10 (circa 1989) included two versions of the compiler, cc which compiled traditional C, and lcc for ANSI C. The man page says that cc supports -P with the preprocessor behavior, but for lcc, -P did something else ("write declarations for all defined globals on standard error"). They both supported -E. On other fronts, 32V and BSD, at least initially, supported both, but -P would emit a warning that it was obsolete and -E should be used instead.

It looks like gcc, from its earliest version, only supported -E and not -P. Since then, a -P option has been introduced, but it does something else ("inhibit generation of linemarkers").

Combining C files with gcc through cmd

As far as how to do it, @Barmar has already answered correctly,

gcc -o programname main.c hello.c

will compile both files and link them together.

But I think you are more interested in how it works.

Check out the following two questions: How does the compilation/linking process work? and How does C++ linking work in practice?

I'll try to briefly explain here also...

Essentially, the command gcc is not just a compiler; it is what they call a driver. It will take the files main.c and hello.c through many stages of processing and make an executable out of all that source code.

Based on the fact that gcc was given two .c files as arguments, it will infer that you want to use the code in both main.c and hello.c to make your executable program.

It will first take each of those files individually through the preprocessing, compiling, and assembling phases. Then it will run the linker and feed the resulting object files to it.

Preprocessing handles directives like #include and expands macros defined with #define.

Compiling is where the meat of the operation happens: gcc reads each C file, builds a syntax tree out of it, may try to optimize it depending on the optimization options you passed, and then writes out a text file for each C file. These text files are assembly-language representations of the functions written in C.
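If you want to see these first two stages for yourself, you can tell the driver to stop early (file names follow the main.c/hello.c example):

gcc -E main.c -o main.i

stops after preprocessing and leaves the expanded source in main.i, and

gcc -S main.c

stops after the compilation stage proper and leaves the assembly text in main.s.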

Compiling is also where gcc's compiler proper, cc1, encounters the line of code in main.c where you call hello() and wonders what hello is and where it is defined. The way you tell the compiler that hello is a function that will be defined elsewhere is called a forward declaration. This is how it works:

void hello(void);

int main(void) {
    hello();
    ...

Now when cc1 encounters the call to hello(), it knows to be patient and simply marks this as a reference to the symbol hello, which will eventually have to be resolved.

The best practice for forward declarations is to put them in a header file, usually with the same name as the .c file where the code is.

So you'd make a hello.h (like @Barmar already said) and it will read

void hello(void);

and then in main.c you'd have a #include directive as follows...

#include "hello.h"

int main(void) {
    ...

Now the forward declaration in hello.h will be copied into the C code at preprocessing time.

In the assembler phase, gcc invokes the as assembler utility, which is actually not part of the gcc package but part of the binutils package. This is the utility that takes the assembly-language text and makes binary object files with .o extensions (hello.o and main.o). These files contain machine-level instructions for each of the functions defined in main.c and hello.c.

Finally, the real answer to your question lies in the linker, ld, another utility that's part of the binutils package. The linker first does the simple task of combining all the code from the different .o files into one file, which will eventually become the executable. The linker can take two or more .o files, examine the machine code, and find references to foreign objects: e.g., the code in main.o references a function from the hello.o file. In main.o this is recorded as a reference to a symbol, but now the linker can actually find that symbol, so it connects that reference with the actual code object. This process is called static linking.
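A quick way to watch this happen from the command line:

gcc -c main.c hello.c

produces main.o and hello.o without linking. Running nm main.o (nm is another binutils tool) lists the symbols recorded in the object file, and hello appears there as an undefined symbol (marked U) that the linker still has to resolve. Then

gcc -o programname main.o hello.o

runs the link step and produces the same executable as the single-command version shown at the top of this answer.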

When you finally run the program, the machine code gets loaded into memory. It is essentially a set of function objects, and the references between those function objects are hard-wired at that time, based on the memory addresses at which the functions finally end up.

I have probably cut many corners in my explanation, but hopefully you get the gist of it and can move forward in your quest to understand how things work.

HTH


