Coredump Is Getting Truncated

What use is a truncated coredump?


Can anything useful be done with such a truncated coredump?

Yes, lots of things.

The truncated core dump will usually contain the stack segments, so the gdb commands "where" and "thread apply all where" will usually work. Often that's all one needs from a core. Commands that examine local variables and globals will likely work as well.

Commands to examine heap-allocated variables may work for some variables, but not necessarily for others. Still, this is much better than nothing.

Core dump file name truncated

The code for this can be found in the kernel source, in exec.c.

The code copies the core name from the pattern up to the first '%' (giving /cores/core.). At the '%' it advances past it and processes the 'e' specifier, which prints the executable name into the core name using snprintf, taking it from the current->comm field.

This is the executable name (excluding the path), TRUNCATED to TASK_COMM_LEN bytes. Since TASK_COMM_LEN is defined as 16 (at least in the kernel I looked at), and one of those bytes is reserved for the terminating null, SampleCrashApplication is truncated to 15 characters, which explains why you get your truncated core dump name.
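The effect of this truncation can be illustrated with a short sketch. TASK_COMM_LEN = 16 matches the kernel's definition; the helper function is mine, written only to mimic the behaviour described above:

```python
TASK_COMM_LEN = 16  # from the kernel's include/linux/sched.h


def comm_name(executable):
    """Mimic how the kernel stores the executable name in current->comm:
    at most TASK_COMM_LEN - 1 characters, leaving one byte for the
    trailing null."""
    return executable[:TASK_COMM_LEN - 1]


print(comm_name("SampleCrashApplication"))  # -> SampleCrashAppl
```

So a core pattern ending in %e produces "core.SampleCrashAppl" rather than the full name.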

As to why this field truncates the name to TASK_COMM_LEN, that's a deeper question, but it's something internal to the kernel and has been discussed elsewhere.

Why does gdb complain that my core files are too small and then fail to produce a meaningful stack trace?


The Core Dump File Format

On a modern Linux system, core dump files are formatted using the ELF object file format, with a specific configuration.
ELF is a structured binary file format, with file offsets used as references between data chunks in the file.

  • Core Dump Files
  • The ELF object file format

For core dump files, the e_type field in the ELF file header will have the value ET_CORE.

Unlike most ELF files, core dump files make all their data available via program headers, and no section headers are present.
You may therefore choose to ignore section headers in calculating the size of the file, if you only need to deal with core files.
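Both of these properties can be checked directly from the file header. Here is a minimal sketch in Python, assuming a little-endian ELF64 file; the function name is mine, not from any library:

```python
import struct

ET_CORE = 4  # value of e_type for core dump files


def looks_like_core(path):
    """Return True if the file has an ELF64 header with e_type == ET_CORE
    and no section headers (e_shnum == 0), as is typical of core dumps."""
    with open(path, "rb") as f:
        ehdr = f.read(64)  # an ELF64 file header is 64 bytes
    if len(ehdr) < 64 or ehdr[:4] != b"\x7fELF":
        return False
    e_type, = struct.unpack_from("<H", ehdr, 16)   # e_type at offset 16
    e_shnum, = struct.unpack_from("<H", ehdr, 60)  # e_shnum at offset 60
    return e_type == ET_CORE and e_shnum == 0
```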

Calculating Core Dump File Size

To calculate the ELF file size:

  1. Consider all the chunks in the file, each described by an (offset + size) pair:

    • the ELF file header (0 + e_ehsize, where e_ehsize is 52 for ELF32 or 64 for ELF64)
    • the program header table (e_phoff + e_phentsize * e_phnum)
    • the program data chunks, aka "segments" (p_offset + p_filesz)
    • the section header table (e_shoff + e_shentsize * e_shnum) - not required for core files
    • the section data chunks (sh_offset + sh_size) - not required for core files
  2. Eliminate any section headers with a sh_type of SHT_NOBITS, as these are merely present to record the position of data that has been stripped and is no longer present in the file (not required for core files).
  3. Eliminate any chunks of size 0, as they contain no addressable bytes and therefore their file offset is irrelevant.
  4. The end of the file will be the end of the last chunk, which is the maximum of the offset + size for all remaining chunks listed above.
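The steps above can be sketched in Python for an ELF64 core file (little-endian byte order assumed; section headers are ignored, as they are absent from core files, and the function is illustrative rather than part of any library):

```python
import struct


def expected_core_size(path):
    """Estimate the expected size of a little-endian ELF64 core file:
    the maximum of (offset + size) over the file header, the program
    header table, and every non-empty segment."""
    with open(path, "rb") as f:
        ehdr = f.read(64)  # the ELF64 file header is 64 bytes
        assert ehdr[:4] == b"\x7fELF", "not an ELF file"
        e_phoff, = struct.unpack_from("<Q", ehdr, 32)          # e_phoff
        e_phentsize, e_phnum = struct.unpack_from("<HH", ehdr, 54)

        end = 64                                               # the file header itself
        end = max(end, e_phoff + e_phentsize * e_phnum)        # program header table

        for i in range(e_phnum):
            f.seek(e_phoff + i * e_phentsize)
            phdr = f.read(e_phentsize)
            # In an ELF64 Phdr: p_offset at byte 8, p_filesz at byte 32
            p_offset, = struct.unpack_from("<Q", phdr, 8)
            p_filesz, = struct.unpack_from("<Q", phdr, 32)
            if p_filesz:                                       # skip zero-sized chunks
                end = max(end, p_offset + p_filesz)
        return end
```

Comparing the returned value against os.path.getsize(path) tells you whether the core file has been truncated.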

If you find the offsets to the program header or section header tables are past the end of the file, then you will not be able to calculate an expected file size, but you will know the file has been truncated.

Although an ELF file could potentially contain unaddressed regions and be longer than the calculated size, in my limited experience the files have been exactly the size calculated by the above method.

Truncated Core Files

gdb likely performs a calculation similar to the above to calculate the expected core file size.

In short, if gdb says your core file is truncated, it is very likely truncated.

One of the most likely causes for truncated core dump files is the system ulimit. This can be set on a system-wide basis in /etc/security/limits.conf, or on a per-user basis using the ulimit shell command [footnote: I don't know anything about systems other than my own].

Try the command "ulimit -c" to check your effective core file size limit:

$ ulimit -c
unlimited

Also, it's worth noting that gdb doesn't actually refuse to operate because of the truncated core file. gdb still attempts to produce a stack backtrace and in your case only fails when it tries to access data on the stack and finds that the specific memory locations addressed are off the end of the truncated core file.

Is 2G the limit size of coredump file on Linux?

@n.m. is correct: the limit comes from systemd-coredump's configuration, not from the kernel itself. To raise it:

(1) Modify /etc/systemd/coredump.conf file:

[Coredump]
ProcessSizeMax=8G
ExternalSizeMax=8G
JournalSizeMax=8G

(2) Reload systemd's configuration:

# systemctl daemon-reload

Note that this only takes effect for newly generated core dump files; existing core dumps are unaffected.


