Saving gmon.out before killing a process
First, I would like to thank @wallyk for the good initial pointers. I solved my issue as follows: apparently, libc's gprof exit handler is called _mcleanup. So I registered a signal handler for SIGUSR1 (unused by the third-party library) and called _mcleanup followed by _exit. Works perfectly! The code looks as follows:
#define _GNU_SOURCE   /* needed for RTLD_DEFAULT on glibc */
#include <dlfcn.h>
#include <signal.h>
#include <stdio.h>
#include <unistd.h>

void sigUsr1Handler(int sig)
{
    fprintf(stderr, "Exiting on SIGUSR1\n");
    /* Look up gprof's exit hook in the running process and call it
       so that gmon.out gets written before we terminate. */
    void (*_mcleanup)(void);
    _mcleanup = (void (*)(void))dlsym(RTLD_DEFAULT, "_mcleanup");
    if (_mcleanup == NULL)
        fprintf(stderr, "Unable to find gprof exit hook\n");
    else
        _mcleanup();
    _exit(0);   /* _exit, not exit: _mcleanup has already run */
}

int main(int argc, char* argv[])
{
    signal(SIGUSR1, sigUsr1Handler);
    neverReturningLibraryFunction();
}
How to use gprof to profile a daemon process without terminating it gracefully?
I need to profile a daemon written in C++, but gprof says it needs the process to terminate before it can produce gmon.out.
That fits the normal practice for debugging daemon processes: provide a switch (e.g. a command-line option) that forces the daemon to run in the foreground.
I'm wondering if anyone has ideas on how to get gmon.out with Ctrl-C?
I'm not aware of such options.
Though in the case of gmon, a call to exit() should suffice: if, for example, you intend to test the processing of, say, 100K messages, you can add a counter to the code that is incremented on every processed message. When the counter exceeds the limit, simply call exit().
You can also try adding a handler for some unused signal (like SIGUSR1 or SIGUSR2) and calling exit() from there. Though I have no personal experience with this and cannot be sure that gmon would work properly in that case.
I want to find out the hot spots in terms of CPU cycles.
My usual practice is to create a test application, using the same source code as the daemon but a different main(), where I simulate the precise scenario I need to debug or test (often with a command-line switch selecting among many scenarios). For this purpose, I normally create a static library containing the whole module - except the file with main() - and link the test application against that static library. (That helps keep the Makefiles tidy.)
I prefer a separate test application to hacks inside the code because, especially for performance testing, I can sometimes bypass or reduce calls to expensive I/O (or DB accesses), which often skew the profiler's sampling and render the output useless.
Finding which functions are called in a multi-process program without modifying source?
Take a look at GCov: http://gcc.gnu.org/onlinedocs/gcc/Gcov.html
Unintelligible names in gprof
I was able to determine that the functions in question were part of the third party library using callgrind.
The raw callgrind.out file contains a listing of the functions in the program, including which functions are called as part of running each function.
Using this, I was able to trace up from the functions in question until I reached a function that is part of the library API.
As for the goal of optimization, the API function that is eating up most of my run time (>75%) cannot be called fewer times, which makes it difficult (although not impossible) to spot other hotspots.