Gccgo on Precise

This was recently brought up on the golang-nuts group: compiling with gccgo from packaged binaries.

It's a known issue in Ubuntu (Bug #966570). To work around it, you can link against the static libgcc by specifying -static-libgcc in the gccgoflags, e.g.:

go build -compiler gccgo -gccgoflags '-static-libgcc'
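
With a minimal program (the file name hello.go is just a placeholder here), the full workflow would look like this:

package main

import "fmt"

func main() {
    // Nothing gccgo-specific is needed in the source;
    // the libgcc linkage issue shows up for any program.
    fmt.Println("hello from gccgo")
}

go build -compiler gccgo -gccgoflags '-static-libgcc' hello.go
./hello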

What are the primary differences between 'gc' and 'gccgo'?

You can see more in "Setting up and using gccgo":

gccgo, a compiler for the Go language. The gccgo compiler is a new frontend for GCC.

Note that gccgo is not the gc compiler.

As explained in "Gccgo in GCC 4.7.1" (July 2012):

The Go language has always been defined by a spec, not an implementation. The Go team has written two different compilers that implement that spec: gc and gccgo.

  • Gc is the original compiler, and the go tool uses it by default.
  • Gccgo is a different implementation with a different focus.

Compared to gc, gccgo is slower to compile code but supports more powerful optimizations, so a CPU-bound program built by gccgo will usually run faster.

Also:

  • The gc compiler supports only the most popular processors: x86 (32-bit and 64-bit) and ARM.
  • Gccgo, however, supports all the processors that GCC supports.

    Not all those processors have been thoroughly tested for gccgo, but many have, including x86 (32-bit and 64-bit), SPARC, MIPS, PowerPC and even Alpha.

    Gccgo has also been tested on operating systems that the gc compiler does not support, notably Solaris.

If you install the go command from a standard Go release, it already supports gccgo via the -compiler option: go build -compiler gccgo myprog.


In short: gccgo: more optimization, more processors.


However, as commented by OneOfOne (source), there is often a lag between the Go version supported by gccgo and the latest Go release:

gccgo only supports up to Go 1.2, so if you need anything new in 1.3 / 1.4 (tip), gccgo can't be used.

GCC release 4.9 will contain the Go 1.2 (not 1.3) version of gccgo.

The release schedules for the GCC and Go projects do not coincide, which means that 1.3 will be available in the development branch but that the next GCC release, 4.10, will likely have the Go 1.4 version of gccgo.


twotwotwo mentions in the comments a slide from Brad Fitzpatrick's presentation:

gccgo generates very good code

... but lacks escape analysis: kills performance with many small allocs + garbage

... GC isn't precise. Bad for 32-bit.

twotwotwo adds:

Another slide mentions that non-gccgo ARM code generation is wonky.

Assuming it's an interesting option for your project, probably compare binaries for your use case on your target architecture.
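
One way to do that comparison, sketched here with an assumed stand-in function (replace sum with your real hot path), is a small Go benchmark run once with each compiler:

// bench_test.go
package bench

import "testing"

// sum is a placeholder CPU-bound function.
func sum(n int) int {
    total := 0
    for i := 0; i < n; i++ {
        total += i * i
    }
    return total
}

// sink keeps the result live so the call is not optimized away.
var sink int

func BenchmarkSum(b *testing.B) {
    for i := 0; i < b.N; i++ {
        sink = sum(10000)
    }
}

go test -bench=.                  # gc, the default compiler
go test -bench=. -compiler gccgo  # gccgo, if it is installed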


As peterSO comments, Go 1.5 now (Q3/Q4 2015) means:

The compiler and runtime are now written entirely in Go (with a little assembler).

C is no longer involved in the implementation, and so the C compiler that was once necessary for building the distribution is gone.

The "Go in Go" slide do mention:

C is gone.

Side note: gccgo is still going strong.


Berkant asks in the comments if gccgo is what gc was bootstrapped from.

Jörg W Mittag answers:

No, gccgo appeared after gc.

gc was originally written in C. It is based on Ken Thompson's C compiler from the Plan 9 operating system, the successor to Unix, designed by the same people. gc was iteratively refactored to have more and more of itself written in Go.

gccgo was started by Ian Lance Taylor, a GCC hacker not affiliated with the Go project.

Note that the first fully self-hosted Go compiler was actually a proprietary commercial closed-source implementation for Windows whose name seems to have vanished from my brain the same way it did from the Internet. They claimed to have a self-hosted compiler written in Go, targeting Windows at a time when gccgo did not yet exist and gc was extremely painful to set up on Windows. (You basically had to set up a full Cygwin environment, patch the source code and compile from source.) The company seems to have folded, however, before they ever managed to market the product.

Hector Chu did release a Windows port of Go in Nov. 2009.

And the go-lang.cat-v.org/os-ports page mentions Joe/Joseph Poirier's initial work as well. From that page:

Any chance that someone in the know could request that
one of the guys (Alex Brainman - Hector Chu - Joseph Poirier) involved in
producing the Windows port could make a wiki entry detailing their build
environment?

Add to that (in "Writing Web Apps in Go") 卫光京 (Wei Guangjing).

How to install the current version of Go in Ubuntu Precise

[Updated (the previous answer no longer applies)]

To fetch the latest version:

sudo add-apt-repository ppa:longsleep/golang-backports
sudo apt update
sudo apt install golang-go
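
Afterwards, go version should report the release pulled from the PPA rather than the older distribution package:

go version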

Also see the wiki.

invalid memory address or nil pointer dereference

You are not doing anything wrong. This looks like a bug in the compiler when doing full-static linkage. Try linking with -static-libgo instead, and it should work.
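
Concretely (myprog.go is a placeholder for your own main package), that means building with:

go build -compiler gccgo -gccgoflags '-static-libgo' myprog.go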

This is the backtrace in gdb:

Program received signal SIGSEGV, Segmentation fault.
0x0000000000000000 in ?? ()
(gdb) bt
#0 0x0000000000000000 in ?? ()
#1 0x00000000004adf67 in __wrap_pthread_create ()
#2 0x000000000040657e in runtime_newm ()
#3 0x000000000040665b in matchmg ()
#4 0x0000000000406f15 in syscall.Entersyscall ()
#5 0x0000000000403e5c in runtime_MHeap_Scavenger ()
#6 0x0000000000406e15 in kickoff ()
#7 0x00000000004ba910 in ?? ()
#8 0x0000000000000000 in ?? ()

I'll see if there's a bug filed upstream for this already, or file one otherwise.

(issue filed: http://golang.org/issue/6375)

Go is printing the xgcc version, not the installed Go version

Looks like you have two versions of Go installed: one from the Ubuntu package manager and one you installed from the source tarball.

To confirm, try removing gccgo:

sudo apt-get remove gccgo
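
To see which go binary your shell actually picks up, and what each toolchain reports, something like this helps (assuming both were installed in the usual locations):

which -a go
go version
gccgo --version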

Cannot install go package go.net/html

You most likely have an outdated Go version (see for example this GitHub issue).

Check the output of go version and update if necessary.
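
For what it's worth, the go.net packages have since moved under golang.org/x/net, so with an up-to-date toolchain the html package is fetched and used like this (a minimal sketch, nothing project-specific):

go get golang.org/x/net/html

package main

import (
    "fmt"
    "strings"

    "golang.org/x/net/html"
)

func main() {
    // Parse a tiny document just to confirm the package imports and works.
    doc, err := html.Parse(strings.NewReader("<p>hello</p>"))
    if err != nil {
        panic(err)
    }
    fmt.Println(doc.FirstChild.Data) // prints "html", the root element
}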

Can I compile Go programs on Xeon Phi (Knight's Landing) processors?

You're basing your Knight's Landing worries on this quote about Knight's Corner:

The Knight's Corner processor is based on an x86-64 foundation, yes, but it in fact has its own floating-point instruction set—no x87, no AVX, no SSE, no MMX... Oh, and then you can throw all that away when Knight's Landing (KNL) comes out.

By "throw all that away", they mean all the worries and incompatibilities. KNL is based on Silvermont and is fully x86-64 compatible (including x87, SSE, and SSE2 for both standard ways of doing FP math). It also supports AVX-512F, AVX-512ER, and a few other AVX-512 extensions, along with AVX and AVX2 and SSE up to SSE4.2. A lot like a Skylake-server CPU, except a different set of AVX-512 extensions.

The point of this is exactly to solve the problem you're worried about: any legacy binary can run on KNL. To get good performance out of it, you want the loops that do the heavy lifting to be vectorized with AVX-512, but all the surrounding code and other programs in the rest of the Linux distro (or whatever) can be running ordinary bog-standard code that uses plain x87 and/or SSE.


Knight's Corner (first-gen commercial Xeon Phi) has its own variant / precursor of AVX-512 in a core based on P5-Pentium, and no other FP hardware.

Knight's Landing (second-gen commercial Xeon Phi) is based on Silvermont, with AVX-512, and is the first that can act as a "host" processor (bootable) instead of just a coprocessor.

This "host" mode is another reason for including enough hardware to decode and execute x87 and SSE: if you're running a whole system on KNL, you're much more likely to want to execute some legacy binaries for non-perf-sensitive tasks, not only binaries compiled specifically for it.

Its x87 performance is not great, though: something like one scalar fmul per 2 clocks (https://agner.org/optimize), vs. 2-per-clock SSE mulsd (0.5c reciprocal throughput). The same 0.5c throughput applies to other SSE/AVX math, including AVX-512 vfmadd132ps on ZMM registers, which does 16 single-precision fused multiply-adds in one instruction.

So hopefully Go's compiler doesn't use x87 much. The normal way to do scalar math in 64-bit mode (that C compilers and their math libraries use) is SSE, in XMM registers. x86-64 C compilers only use x87 for types like long double.
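
If you want to see which instructions your Go toolchain actually emits for scalar float math, a quick sketch (file and package names are just placeholders) is to dump the assembly of a trivial function:

// scalar.go
package scalar

// Mul is expected to compile to an SSE2 mulsd on amd64 with the gc compiler,
// not to x87 instructions.
func Mul(a, b float64) float64 {
    return a * b
}

go build -gcflags=-S .    # gc: print the generated assembly while building
gccgo -S -O2 scalar.go    # gccgo: write the assembly to scalar.s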

long double cast to int data type affects double division precision (powerpc, ppc32)

OK, I think I found the bug:

GCC assembly of 'long double' to 'int' casting:

  7c:   39 3f 00 40     addi    r9,r31,64
  80:   c8 09 00 00     lfd     f0,0(r9)
  84:   c8 29 00 08     lfd     f1,8(r9)
  88:   fd 80 04 8e     mffs    f12
  8c:   ff e0 00 4c     mtfsb1  31
  90:   ff c0 00 8c     mtfsb0  30
  94:   fc 00 08 2a     fadd    f0,f0,f1
  98:   fc 02 65 8e     mtfsf   1,f12
  9c:   fc 00 00 1e     fctiwz  f0,f0
  a0:   d8 1f 00 18     stfd    f0,24(r31)
  a4:   81 3f 00 1c     lwz     r9,28(r31)
  a8:   91 3f 00 2c     stw     r9,44(r31)

This is crucial:

mtfsb1  31
mtfsb0  30

These instructions manipulate the FPSCR rounding-mode bits. The bits are not restored after the cast operation. IMO it is a GCC bug, but I cannot find any reference.


