Multiprocessing causes Python to crash with the error "may have been in progress in another thread when fork() was called"
This error occurs because macOS High Sierra and later added a fork-safety check to the Objective-C runtime, which aborts a forked process that touches the runtime before exec. I know this answer is a bit late, but I solved the problem using the following method: set an environment variable in .bash_profile (or .zshrc for recent macOS, where zsh is the default shell) to allow multiprocessing applications and scripts under the new rules.
Open a terminal:
$ nano ~/.bash_profile
Add the following line to the end of the file:
export OBJC_DISABLE_INITIALIZE_FORK_SAFETY=YES
Save, exit, close the terminal, and re-open it. Check that the environment variable is now set:
$ env
You will see output similar to:
TERM_PROGRAM=Apple_Terminal
SHELL=/bin/bash
TERM=xterm-256color
TMPDIR=/var/folders/pn/vasdlj3ojO#OOas4dasdffJq/T/
Apple_PubSub_Socket_Render=/private/tmp/com.apple.launchd.E7qLFJDSo/Render
TERM_PROGRAM_VERSION=404
TERM_SESSION_ID=NONE
OBJC_DISABLE_INITIALIZE_FORK_SAFETY=YES
You should now be able to run your Python script with multiprocessing.
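If you would rather not weaken the fork-safety check at all, a commonly suggested alternative (sketched below; the worker function is just an illustration) is to opt into the "spawn" start method, which launches fresh interpreters instead of fork()ing the parent:

```python
import multiprocessing as mp

def square(x):
    return x * x

if __name__ == "__main__":
    # "spawn" starts a fresh Python interpreter per worker instead of
    # fork()ing the parent, so the macOS fork-safety check never fires.
    ctx = mp.get_context("spawn")
    with ctx.Pool(2) as pool:
        print(pool.map(square, [1, 2, 3]))  # prints [1, 4, 9]
```

Note that "spawn" requires the worker function to be importable (hence the `if __name__ == "__main__"` guard).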
Force a program created using `exec` to perform unbuffered I/O
You cannot do what you want in the general case (unbuffering after execve(2) of arbitrary executables). Buffering is done by user-space code (e.g. the libc code behind <stdio.h>), and that code is part of the program being execve-ed.
You might perhaps play LD_PRELOAD tricks to call setvbuf(stdin, NULL, _IONBF, BUFSIZ); after the execve (but before main); this works only with dynamically linked executables. Using the constructor function attribute on an initialization function in your LD_PRELOAD-ed shared object might sometimes do the trick. Or redefine printf, fopen, etc. in that shared object.
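In fact, GNU coreutils ships stdbuf, which implements exactly this trick: it LD_PRELOADs a small shared object whose initializer adjusts the stdio buffering before main runs in the (dynamically linked) child. A sketch of driving it from a parent process; the child command here is just a stand-in:

```python
import subprocess

# stdbuf -o0 disables stdout buffering in the dynamically linked child
# via the LD_PRELOAD + setvbuf technique described above.
# "echo" stands in for the arbitrary program you want unbuffered.
out = subprocess.run(["stdbuf", "-o0", "echo", "unbuffered"],
                     stdout=subprocess.PIPE, check=True).stdout
print(out)  # b'unbuffered\n'
```

Like the hand-rolled LD_PRELOAD approach, stdbuf has no effect on statically linked executables or on programs that call setvbuf themselves.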
Addenda
You commented that you do:
two-way communication with a sub-process using two pipes.
Then your approach is wrong. The parent process should monitor the two pipes with a multiplexing call like poll(2), then (according to the result of that multiplexing) decide whether to read from or write to the child process. In reality you want an event loop: either implement a simple one yourself (e.g. poll called repeatedly inside a loop) or use an existing one (see libevent or libev, or the loop provided by a toolkit like GTK or Qt). You might also multiplex with select(2), but I recommend poll because it scales better (see the C10K problem).
You won't lose your time by reading Advanced Linux Programming.
See also this answer to your next related question.
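The same multiplexing pattern can be sketched in Python, whose select.poll maps one-to-one onto poll(2); here `cat` stands in for the real child process:

```python
import select
import subprocess

# `cat` stands in for the real child: it echoes stdin back on stdout.
child = subprocess.Popen(["cat"], stdin=subprocess.PIPE,
                         stdout=subprocess.PIPE)
stdin_fd, stdout_fd = child.stdin.fileno(), child.stdout.fileno()

to_send = b"hello\n"
received = b""
poller = select.poll()
poller.register(stdin_fd, select.POLLOUT)   # want to write to the child
poller.register(stdout_fd, select.POLLIN)   # want to read from the child

while b"\n" not in received:
    for fd, event in poller.poll():         # block until a pipe is ready
        if fd == stdin_fd:
            n = child.stdin.write(to_send)  # write only when writable
            child.stdin.flush()
            to_send = to_send[n:]
            if not to_send:                 # nothing left: send EOF
                poller.unregister(stdin_fd)
                child.stdin.close()
        elif fd == stdout_fd:
            received += child.stdout.read1(4096)

child.wait()
print(received)  # b'hello\n'
```

Because every read and write happens only when poll reports the pipe ready, neither side can deadlock waiting on a full or empty pipe.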
Why does this program hang on exit? (interaction between signals and sudo)
sudo will suspend itself if the command it's running suspends itself. This allows you, for example, to run sudo -s to start a shell, then type suspend in that shell to get back to your top-level shell. If you have the source code for sudo, you can look at the suspend_parent function to see how this is done.
When sudo (or any process) has been suspended, the only way to resume it is to send it a SIGCONT signal. Sending SIGCONT to the selfstop process won't do that.
$ ps aux | grep [s]elf
root 7619 0.0 0.0 215476 4136 pts/4 T 18:16 0:00 sudo ./selfstop
root 7623 0.0 0.0 0 0 pts/4 Z 18:16 0:00 [selfstop] <defunct>
That indicates that selfstop has exited but hasn't yet been waited for by its parent. It will remain a zombie until sudo is either resumed or killed.
How can you work around this? sudo and selfstop will be in the same process group (unless selfstop does something to change that), so you can resume both processes by sending SIGCONT to sudo's process group: kill -CONT -the-pid-of-sudo (note the minus sign before the pid, which denotes a process group).
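The same idea can be sketched in Python; the sh one-liner below stands in for sudo ./selfstop, and start_new_session puts the child in its own process group so the whole group can be resumed at once:

```python
import os
import signal
import subprocess
import time

# The child stops itself, like selfstop; start_new_session=True gives it
# its own process group (pgid == its pid), mirroring sudo's situation.
child = subprocess.Popen(["sh", "-c", "kill -STOP $$; echo resumed"],
                         stdout=subprocess.PIPE, start_new_session=True)

time.sleep(0.5)                       # let the child stop itself first
os.killpg(child.pid, signal.SIGCONT)  # same effect as: kill -CONT -<pid>
out, _ = child.communicate()
print(out)  # b'resumed\n'
```

os.killpg(pgid, sig) is the library-call form of `kill -SIG -pgid`: the signal goes to every member of the group, so a stopped sudo and its child would both resume.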
Why does the redirect not capture substituted processes' stdout?
The shell that runs head is spawned by the same shell that runs tee, which means tee and head both inherit the same file descriptor for standard output, a descriptor connected to the pipe to cat. That means both tee and head have their output piped to cat, resulting in the behavior you see.
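A minimal way to see this inheritance at work (the child commands below are placeholders, not your actual pipeline): two writers handed the same stdout descriptor both end up feeding the single reader.

```python
import subprocess

# `cat` plays the role of the final reader; both `echo` children are
# given the same write end of its stdin pipe, just as tee and head
# share one stdout descriptor connected to the pipe to cat.
reader = subprocess.Popen(["cat"], stdin=subprocess.PIPE,
                          stdout=subprocess.PIPE)
for msg in ("from tee", "from head"):
    subprocess.run(["echo", msg], stdout=reader.stdin, check=True)
out, _ = reader.communicate()  # closes the write end, then reads to EOF
print(out)  # b'from tee\nfrom head\n'
```

Both lines arrive at the reader because a forked child inherits its parent's open file descriptors; nothing about process substitution rewires them.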
Tie the life of a process to the shell that started it
If your shell isn't a subshell, you can do the following. Put this into a script called "ttywatch":
#!/usr/bin/perl
# Fork via open("-|"): the child execs the given command, while the
# parent polls every 5 s until the controlling tty goes away, then
# sends SIGTERM (15) to the child.
my $p = open(PI, "-|") || exec @ARGV;
sleep 5 while (-t);
kill 15, $p;
Then run your program as:
$ ttywatch commandline... & disown
Disowning the process prevents the shell from complaining about running processes when it exits; when the terminal closes, SIGTERM (15) will be delivered to the subprocess (your app) within 5 seconds.
If the shell is a subshell, you can use a program like ttywrap to at least give it its own tty, and then the above trick will work.