MySQL Code Causes PHP Script to Crash at Popen/Exec

Is the parent process definitely exiting after forking? I had thought pclose would wait for the child to exit before returning.

If it isn't exiting, I'd speculate that because the MySQL connection is never closed, you're eventually hitting its connection limit (or some other limit) as you spawn the tree of child processes.

Edit 1

I've just tried to replicate this. I altered your script to fork every half-second, rather than every minute, and was able to kill it off within about 10 minutes.

It looks like the repeated creation of child processes is generating ever more FDs, until eventually it can't open any more:

$ lsof | grep type=STREAM | wc -l
240
$ lsof | grep type=STREAM | wc -l
242
...
$ lsof | grep type=STREAM | wc -l
425
$ lsof | grep type=STREAM | wc -l
428
...

And that's because the child inherits the parent's FDs (in this case, the one for the MySQL connection) when it forks.

If you close the MySQL connection before calling popen with (in your case):

DatabaseConnector::$db = null;

then the problem should go away.
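As a minimal sketch of that fix, assuming DatabaseConnector::$db holds a PDO handle (the worker command and the DSN/credentials below are placeholders):

// Close the PDO handle so the child does not inherit the MySQL socket FD.
DatabaseConnector::$db = null;

// Spawn the child; it now starts without an open MySQL connection.
$handle = popen('php /path/to/worker.php', 'r');
pclose($handle); // pclose waits for the child to exit

// Reopen the connection in the parent afterwards.
DatabaseConnector::$db = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');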

PHP script stops working suddenly without reason

For such cases I have had success with the nohup command, like this:

nohup php /home/cron.php >/dev/null 2>&1 &

You can then check whether the script is running with:

jobs -l

Note:
When you use the nohup command, the path to the PHP file must be absolute, not relative.
Also, I think it is not very graceful to call one PHP file from another PHP file just to keep the execution from stopping before the work is finished.

External reference:
http://en.wikipedia.org/wiki/Nohup

Also make sure that your script has no memory leaks, which can make it crash after some time with an "out of memory" error.
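One way to check for that is to log memory usage inside the script's main loop and watch whether it grows steadily over time. A minimal sketch, where doWork() is a placeholder for the script's real job:

while (true) {
    doWork(); // placeholder for the actual work
    // memory_get_usage(true) reports the memory actually allocated from the system
    error_log(sprintf('memory: %.1f MB', memory_get_usage(true) / 1048576));
    sleep(60);
}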

How do I keep my mysql connection in the parent process after pcntl_fork?

The only thing you could try is to let each child wait until every other child has finished its job. That way they could share the same database connection (provided there aren't any synchronization issues), but you would end up with a lot of simultaneous processes, which is not great either (in my experience PHP processes have quite a large memory footprint). If multiple processes sharing one database connection is acceptable, you could form "groups" of processes that share a connection: you only have to clean up when a whole group has finished, and you don't need one connection per process.

You should ask yourself whether your worker processes really need a database connection at all. Why not let the parent fetch the data and have the workers write their results to a file?
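A rough sketch of that approach, where fetchJobs() and process() are hypothetical helpers and only the parent ever touches the database:

$jobs = fetchJobs($pdo); // parent does all the database work up front

foreach ($jobs as $i => $job) {
    $pid = pcntl_fork();
    if ($pid === 0) {
        // Child: no database access at all; write the result to a file.
        file_put_contents("/tmp/result_$i.txt", process($job));
        exit(0);
    }
}

// Parent: reap all children, then collect the result files.
while (pcntl_wait($status) > 0) {
}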

If you do need the connection, you should consider using another language for the job. PHP's CLI itself is not a "typical" use case (it was only added in 4.3), and multiprocessing is more of a hack than a supported feature.

Process finished with exit code 139 (interrupted by signal 11: SIGSEGV)

The SIGSEGV signal indicates a "segmentation violation" or a "segfault". More or less, this equates to a read or write of a memory address that's not mapped in the process.

This indicates a bug in your program. In a Python program, this is either a bug in the interpreter or in an extension module being used (and the latter is the most common cause).

To fix the problem, you have several options. One option is to produce a minimal, self-contained, complete example which replicates the problem and then submit it as a bug report to the maintainers of the extension module it uses.

Another option is to try to track down the cause yourself. gdb is a valuable tool in such an endeavor, as is a debug build of Python and all of the extension modules in use.

After you have gdb installed, you can use it to run your Python program:

gdb --args python <more args if you want>

Then use gdb commands to track down the problem. If you type run, your program will run until the point where it would have crashed, and you will have a chance to inspect the state using other gdb commands.
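For example, once the program segfaults under gdb, a C-level backtrace usually points at the faulting extension module (these are standard gdb commands, shown as a sketch of a session):

(gdb) run
...          # the program runs until it receives SIGSEGV
(gdb) bt     # print the C-level backtrace at the crash site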


