Stopping python using ctrl+c
On Windows, the only sure way is to use Ctrl+Break. It stops every Python script instantly!
(Note that on some keyboards, "Break" is labeled as "Pause".)
Cannot kill Python script with Ctrl-C
Ctrl+C terminates the main thread, but because your threads aren't in daemon mode, they keep running, and that keeps the process alive. We can make them daemons:
f = FirstThread()
f.daemon = True
f.start()
s = SecondThread()
s.daemon = True
s.start()
But then there's another problem - once the main thread has started your threads, there's nothing else for it to do. So it exits, and the threads are destroyed instantly. So let's keep the main thread alive:
import time

while True:
    time.sleep(1)
Now it will keep printing 'first' and 'second' until you hit Ctrl+C.
Edit: as commenters have pointed out, the daemon threads may not get a chance to clean up things like temporary files. If you need that, then catch the KeyboardInterrupt on the main thread and have it coordinate cleanup and shutdown. But in many cases, letting daemon threads die suddenly is probably good enough.
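For the coordinated-cleanup case, a minimal sketch might look like the following. The `stop_event` flag, the thread body, and the `cleaned_up` attribute are my own inventions, not from the original code, and the timer only simulates a Ctrl+C press so the sketch terminates on its own:

```python
import _thread
import threading
import time

stop_event = threading.Event()  # shared shutdown flag (my own addition)

class FirstThread(threading.Thread):
    def __init__(self):
        super().__init__(daemon=True)
        self.cleaned_up = False

    def run(self):
        while not stop_event.is_set():
            stop_event.wait(0.05)  # interruptible sleep instead of time.sleep
        self.cleaned_up = True     # stands in for real cleanup (temp files, etc.)

def main():
    f = FirstThread()
    f.start()
    # Simulate a user pressing Ctrl+C after 0.2 s; in a real program this
    # line is absent and the interrupt comes from the keyboard.
    threading.Timer(0.2, _thread.interrupt_main).start()
    try:
        while True:
            time.sleep(0.1)
    except KeyboardInterrupt:
        stop_event.set()  # ask the worker to stop
        f.join()          # wait for its cleanup to complete
    return f

if __name__ == "__main__":
    worker = main()
    print("cleaned up:", worker.cleaned_up)
```

The key point is that the worker sleeps on the event rather than on `time.sleep`, so it notices the shutdown request promptly and gets to run its cleanup before `join()` returns.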
Python script can't be terminated through Ctrl+C or Ctrl+Break
This is a general problem that can arise when dealing with signal handling. A Python signal is not an exception; it is a wrapper around an operating-system signal. Therefore, signal processing in Python depends on the operating system, the hardware, and many other conditions. However, the way to deal with these problems is similar.
According to this tutorial, I'll quote the following paragraphs from "signal – Receive notification of asynchronous system events":
Signals are an operating system feature that provide a means of
notifying your program of an event, and having it handled
asynchronously. They can be generated by the system itself, or sent
from one process to another. Since signals interrupt the regular flow
of your program, it is possible that some operations (especially I/O)
may produce an error if a signal is received in the middle.
Signals are identified by integers and are defined in the operating
system C headers. Python exposes the signals appropriate for the
platform as symbols in the signal module. For the examples below, I
will use SIGINT and SIGUSR1. Both are typically defined for all Unix
and Unix-like systems.
In my code:
os.system("python myParsePDB.py -i BP1.pdb -c 1 -s %s" % step)
inside the for loop takes a while to execute and spends some of that time on file I/O. If the keyboard interrupt arrives too quickly and is not caught asynchronously after the files are written, the signal may be blocked by the operating system, so execution stays inside the for loop of the try clause. (Errors detected during execution are called exceptions and are not unconditionally fatal: Python Errors and Exceptions.)
Therefore the simplest way to make them asynchronous is to wait:
try:
    for i in range(0, 360, step):
        os.system("python myParsePDB.py -i BP1.pdb -c 1 -s %s" % step)
        time.sleep(0.2)
except KeyboardInterrupt:
    print("Stop me!")
    sys.exit(0)
It might hurt performance, but it guarantees that the signal can be caught after os.system() finishes. You might also want to use other sync/async functions to solve the problem if better performance is required.
For more unix signal reference, please also look at: Linux Signal Manpage
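As a sketch of that "other sync/async functions" idea: replacing os.system with subprocess.run keeps the wait inside Python, so a Ctrl+C raises KeyboardInterrupt directly in the main thread, without the extra time.sleep. The script name and arguments are the ones from the question; the value of `step` is my assumption, since it is defined elsewhere in the original script:

```python
import subprocess
import sys

step = 10  # assumed; defined elsewhere in the original script

try:
    for i in range(0, 360, step):
        # subprocess.run blocks until the child exits; a Ctrl+C arriving in
        # the meantime raises KeyboardInterrupt here, so no extra sleep is
        # needed to give the signal a chance to be delivered.
        subprocess.run(
            [sys.executable, "myParsePDB.py", "-i", "BP1.pdb", "-c", "1", "-s", str(step)],
            check=False,  # the original code ignored os.system's return value too
        )
except KeyboardInterrupt:
    print("Stop me!")
    sys.exit(0)
```

Using a list of arguments (rather than one shell string) also avoids shell-quoting surprises in the file names.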
Ctrl C won't kill looped subprocess in Python
The bare except in your try/except block is too permissive: when Ctrl+C is pressed, the KeyboardInterrupt exception is handled by the same exception handler that prints "Command Failed", and since it is now "handled" there, the flow of the program continues through the for loop. What you should do is:
- Replace except: with except Exception: so that the KeyboardInterrupt exception will not be trapped; then any time Ctrl+C is pressed the program will terminate (including subprocesses that aren't stuck in some non-terminatable state).
- After the print statement, break out of the loop to prevent further execution from happening, if that is the intended behavior that you want this program to have.
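The question's loop isn't quoted here, so this sketch uses a hypothetical command list, but it demonstrates both fixes: except Exception lets KeyboardInterrupt escape (it derives from BaseException, not Exception), and break stops the loop at the first failure:

```python
import subprocess

def run_all(commands):
    """Run each shell command in turn; stop at the first failure.

    A bare `except:` would also swallow KeyboardInterrupt; `except Exception`
    does not, because KeyboardInterrupt derives from BaseException.
    """
    completed = []
    for cmd in commands:
        try:
            subprocess.check_call(cmd, shell=True)
        except Exception:   # was a bare `except:` in the question
            print("Command Failed")
            break           # stop instead of continuing with the next command
        completed.append(cmd)
    return completed
```

On a Unix-like system, run_all(["true", "false", "true"]) runs only the first command, prints "Command Failed" for the second, and never reaches the third.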
Stop Python script with ctrl + c
class DummyThread(threading.Thread):
    def __init__(self):
        threading.Thread.__init__(self)
        self._running = True
        signal.signal(signal.SIGINT, self.stop)
        signal.signal(signal.SIGTERM, self.stop)
The program actually does not work as expected because of those last two lines, and would work without them.
The reason is that, if you press Ctrl-C, the SIGINT signal is handled by the signal handler that is set up by signal.signal, and self.stop is called. So the thread should actually stop.
But in the main thread, the while True loop is still running. Since the signal has already been handled, no KeyboardInterrupt exception is raised by the Python runtime. Therefore you never get to the except part.
if __name__ == "__main__":
    try:
        t = DummyThread()
        t.start()
        while True:  # you are stuck in this loop
            print("Main thread running")
            time.sleep(0.5)
    except KeyboardInterrupt:  # this never happens
        print("This never gets printed")
        t.stop()
Only one signal handler should be set up to call the stop method. So there are two options to solve the problem:
- Handle the signal implicitly by catching the KeyboardInterrupt exception. This is achieved by simply removing the two signal.signal(...) lines.
- Set up an explicit signal handler (as you did by using signal.signal in DummyThread.__init__), but remove the while True: loop from the main thread and do not try to handle KeyboardInterrupt. Instead, just wait for the DummyThread to finish on its own by using its join method:

if __name__ == "__main__":
    t = DummyThread()
    t.start()
    t.join()
    print("Exit")
Can't terminate multiprocessing program with ctrl + c
There are several answers on SO that address your question, but they do not seem to work with the map function, where the main process is blocked waiting for all the submitted tasks to complete. This may not be an ideal solution, but it does work:
- Issue a call to signal.signal(signal.SIGINT, signal.SIG_IGN) in each process in your process pool to ignore the interrupt entirely and leave the handling to the main process.
- Use method Pool.imap (or Pool.imap_unordered) instead of Pool.map, which lazily evaluates your iterable argument for submitting tasks and processing results. In this way it (a) does not block waiting for all the results and (b) you save memory by not having to create an actual list for value_n_list, using a generator expression instead.
- Have the main process issue print statements periodically and frequently, for example reporting on the progress of the submitted tasks being completed. This is required for the keyboard interrupt to be perceived. In the code below a tqdm progress bar is used, but you could simply print a completion count every N task completions, where N is selected so that you do not have to wait too long for the interrupt to take effect after Ctrl-c has been entered:
from multiprocessing import Pool
import signal
import tqdm

def init_pool():
    signal.signal(signal.SIGINT, signal.SIG_IGN)

def process_number(number: int):
    import time
    # processes the number
    time.sleep(.001)

if __name__ == '__main__':
    control = 1
    list_size = 100000
    # No reason to create the pool over and over again:
    with Pool(initializer=init_pool) as p:
        try:
            with tqdm.trange(list_size) as progress_bar:
                while True:
                    #value_n_list = (n for n in range(control, control + list_size))
                    value_n_list = range(control, control + list_size)
                    progress_bar.reset()
                    result = []
                    # The iterable returned by `imap` must be iterated.
                    # If you don't care about the value, don't store it away
                    # and use `imap_unordered` instead:
                    for return_value in p.imap(process_number, value_n_list):
                        progress_bar.update(1)
                        result.append(return_value)
                    control += list_size
        except KeyboardInterrupt:
            print('Ctrl-c entered.')
Update
You did not specify which platform you were running under (you should always tag your question with the platform when you tag a question with multiprocessing), but I assumed it was Windows. If, however, you are running under Linux, the following simpler solution should work:
from multiprocessing import Pool
import signal

def init_pool():
    signal.signal(signal.SIGINT, signal.SIG_IGN)

def process_number(number: int):
    import time
    # processes the number
    time.sleep(.001)

if __name__ == '__main__':
    control = 1
    list_size = 100000
    # No reason to create the pool over and over again:
    with Pool(initializer=init_pool) as p:
        try:
            while True:
                value_n_list = [n for n in range(control, control + list_size)]
                result = p.map(process_number, value_n_list)
                control += list_size
        except KeyboardInterrupt:
            print('Ctrl-c entered.')
See Keyboard Interrupts with python's multiprocessing Pool
Update
If that is all your "worker" function process_number is doing (squaring a number), your performance will suffer from using multiprocessing. The overhead comes from (1) creating and destroying the process pools (and thus the processes) and (2) writing arguments and reading return values from one address space to another (using queues). The following code benchmarks this:
- Function non_multiprocessing performs 10 iterations (rather than an infinite loop, for obvious reasons) of looping 100,000 times calling process_number and saving all the return values in result.
- Function multiprocessing_1 uses multiprocessing to perform the above but only creates the pool once (8 logical cores, 4 physical cores).
- Function multiprocessing_2 re-creates the pool for each of the 10 iterations.
- Function multiprocessing_3 is included as a "sanity check" and is identical to multiprocessing_1 except that it has the Linux Ctrl-c checking code.
The timings of each are printed out.
from multiprocessing import Pool
import time
import signal

def init_pool():
    signal.signal(signal.SIGINT, signal.SIG_IGN)

def process_number(number: int):
    # processes the number
    return number * number

N_TRIALS = 10
list_size = 100_000

def non_multiprocessing():
    t = time.time()
    control = 1
    for _ in range(N_TRIALS):
        result = [process_number(n) for n in range(control, control + list_size)]
        print(control, result[0], result[-1])
        control += list_size
    return time.time() - t

def multiprocessing_1():
    t = time.time()
    # No reason to create the pool over and over again:
    with Pool() as p:
        control = 1
        for _ in range(N_TRIALS):
            value_n_list = [n for n in range(control, control + list_size)]
            result = p.map(process_number, value_n_list)
            print(control, result[0], result[-1])
            control += list_size
    return time.time() - t

def multiprocessing_2():
    t = time.time()
    control = 1
    for _ in range(N_TRIALS):
        # Create the pool over and over again:
        with Pool() as p:
            value_n_list = [n for n in range(control, control + list_size)]
            result = p.map(process_number, value_n_list)
            print(control, result[0], result[-1])
            control += list_size
    return time.time() - t

def multiprocessing_3():
    t = time.time()
    # No reason to create the pool over and over again:
    with Pool(initializer=init_pool) as p:
        try:
            control = 1
            for _ in range(N_TRIALS):
                value_n_list = [n for n in range(control, control + list_size)]
                result = p.map(process_number, value_n_list)
                print(control, result[0], result[-1])
                control += list_size
        except KeyboardInterrupt:
            print('Ctrl-c entered.')
    return time.time() - t

if __name__ == '__main__':
    print('non_multiprocessing:', non_multiprocessing(), end='\n\n')
    print('multiprocessing_1:', multiprocessing_1(), end='\n\n')
    print('multiprocessing_2:', multiprocessing_2(), end='\n\n')
    print('multiprocessing_3:', multiprocessing_3(), end='\n\n')
Prints:
1 1 10000000000
100001 10000200001 40000000000
200001 40000400001 90000000000
300001 90000600001 160000000000
400001 160000800001 250000000000
500001 250001000001 360000000000
600001 360001200001 490000000000
700001 490001400001 640000000000
800001 640001600001 810000000000
900001 810001800001 1000000000000
non_multiprocessing: 0.11899852752685547
1 1 10000000000
100001 10000200001 40000000000
200001 40000400001 90000000000
300001 90000600001 160000000000
400001 160000800001 250000000000
500001 250001000001 360000000000
600001 360001200001 490000000000
700001 490001400001 640000000000
800001 640001600001 810000000000
900001 810001800001 1000000000000
multiprocessing_1: 0.48778581619262695
1 1 10000000000
100001 10000200001 40000000000
200001 40000400001 90000000000
300001 90000600001 160000000000
400001 160000800001 250000000000
500001 250001000001 360000000000
600001 360001200001 490000000000
700001 490001400001 640000000000
800001 640001600001 810000000000
900001 810001800001 1000000000000
multiprocessing_2: 2.4370007514953613
1 1 10000000000
100001 10000200001 40000000000
200001 40000400001 90000000000
300001 90000600001 160000000000
400001 160000800001 250000000000
500001 250001000001 360000000000
600001 360001200001 490000000000
700001 490001400001 640000000000
800001 640001600001 810000000000
900001 810001800001 1000000000000
multiprocessing_3: 0.4850032329559326
Even with creating the pool only once, multiprocessing took approximately 4 times longer than a straight non-multiprocessing implementation. But it runs approximately 5 times faster than the version that re-creates the pool for each of the 10 iterations. As expected, the running time of multiprocessing_3 is essentially identical to the running time of multiprocessing_1, i.e. the Ctrl-c code has no effect on the running behavior.
Conclusions
- The Linux Ctrl-c code should have no significant effect on the running behavior of the program.
- Moving the pool-creation code outside the loop should greatly reduce the running time of the program. As to what effect, however, it should have on CPU-utilization, I cannot venture a guess.
- Your problem is not a suitable candidate as is for multiprocessing.
How to kill a child thread with Ctrl+C?
The problem there is that you are using thread1.join(), which will cause your program to wait until that thread finishes before continuing.
The signals will always be caught by the main process, because it's the one that receives the signals; it's the process that has the threads.
Doing it as you show, you are basically running a 'normal' application, without thread features, as you start one thread and wait until it finishes to continue.
'ctrl + c' doesn't work with this python code on windows
A couple of things are going on here.
From https://docs.python.org/3/library/signal.html#signals-and-threads:
Python signal handlers are always executed in the main Python thread of the main interpreter, even if the signal was received in another thread.
The main thread creates a non-daemon thread that runs test_func() with an infinite loop. The main thread then exits. The non-daemon thread goes on its way, printing a message, sleeping for 3 seconds, and so on forever.
Since the interrupt handler runs on the main thread, the non-daemon thread keeps running, and pressing Ctrl+C has no effect to stop execution.
However, on Windows you can typically press Ctrl+Pause or Ctrl+ScrLk to terminate the Python process when Ctrl+C doesn't work.
If the code ran test_func() with the sleep call on the main thread, then pressing Ctrl+C would raise a KeyboardInterrupt exception.
def main():
    test_func()
    # thread0 = threading.Thread(target=test_func)
    # thread0.daemon = True
    # thread0.start()
Can't quit Python script with Ctrl-C if a thread ran webbrowser.open()
Two immediate caveats with this answer:
- There may be a way of accomplishing what you would like that is much closer to your original design. If you'd prefer not deviating as much from your original idea, perhaps another answerer can provide a better solution.
- This solution has not been tested on Windows and therefore may run up against the same or similar issues with it failing to recognize the Ctrl-C signal. Unfortunately, I do not have a Windows machine with a Python interpreter on hand to try it out first.
With that out of the way:
You may find that you have an easier time by placing the server in a separate thread and then controlling it from the main (non-blocked) thread via some simple signals.
I've created a toy example below to demonstrate what I mean. You may prefer to put the class in a file by itself and then simply import it and instantiate a new instance of the class into your other scripts.
import ctypes
import threading
import webbrowser

import bottle

class SimpleExampleApp():
    def __init__(self):
        self.app = bottle.Bottle()

        # define all of the routes for your app inside the init method
        @self.app.get('/')
        def index():
            return 'It works!'

        @self.app.get('/other_route')
        def alternative():
            return 'This Works Too'

    def run(self):
        self.start_server_thread()
        # depending upon how much configuration you are doing
        # when you start the server you may need to add a brief
        # delay before opening the browser to make sure that it
        # is ready to receive the initial request
        webbrowser.open('http://localhost:8042')

    def start_server_thread(self):
        self.server_thread = threading.Thread(
            target=self.app.run,
            kwargs={'host': 'localhost', 'port': 8042}
        )
        self.server_thread.start()

    def stop_server_thread(self):
        # raise KeyboardInterrupt asynchronously in the server thread
        # (Thread objects expose their thread id via the .ident attribute)
        stop = ctypes.pythonapi.PyThreadState_SetAsyncExc(
            ctypes.c_long(self.server_thread.ident),
            ctypes.py_object(KeyboardInterrupt)
        )
        # adding a print statement for debugging purposes since I
        # do not know how well this will work on Windows platform
        print('Return value of stop was {0}'.format(stop))
        self.server_thread.join()

my_app = SimpleExampleApp()
my_app.run()
# when you're ready to stop the app, simply call
# my_app.stop_server_thread()
You'll likely need to modify this fairly heavily for the purposes of your actual application, but it should hopefully get you started.
Good luck!
What happens when a Python job is killed with Ctrl+C? Can I force an automatic save?
- Ctrl+C sends SIGINT, which can be trapped and handled or ignored.
The behavior of programs that are interrupted from the keyboard or another event is mediated by the operating system, and so is not merely a Python issue but would arise in C++, Java, JavaScript, etc. It can also differ somewhat between Linux/Unix and Windows. The usual default behavior is to stop the running process; if there are complex storage or other procedures underway that should not be interrupted, it is up to the programmer to set appropriate handlers or options with the OS. Even so, there are other mechanisms that can kill the process running the script. For example, kill -9 or SIGKILL cannot be trapped or handled.
- For handling Ctrl+C see, for example, Python: Catch Ctrl-C command. Prompt "really want to quit (y/n)", resume execution if no.
This is not an instant solution to your needs, but could be simply modified, say to print "Do not use Control+C; instead wait for the program to finish and use the exit command".
There is a danger in running "save" functionality in response to an interrupt or signal that the process was already saving data when interrupted. Depending on how and where you save data, an issue of duplicates or corruption could arise.
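A minimal sketch of "save on Ctrl+C" that also addresses the corruption concern, by writing to a temporary file and atomically replacing the old checkpoint. All names here (`state`, `save_state`, the file path) are my own inventions for illustration:

```python
import json
import os
import signal
import sys
import tempfile

state = {"progress": 0}  # hypothetical in-memory state worth saving

def save_state(path="checkpoint.json"):
    # Write to a temp file and atomically replace the target, so an
    # interrupt arriving mid-save cannot leave a half-written checkpoint.
    fd, tmp = tempfile.mkstemp(dir=os.path.dirname(os.path.abspath(path)))
    with os.fdopen(fd, "w") as f:
        json.dump(state, f)
    os.replace(tmp, path)

def handle_sigint(signum, frame):
    save_state()   # note: SIGKILL (kill -9) never reaches this handler
    sys.exit(0)

signal.signal(signal.SIGINT, handle_sigint)
```

The os.replace step is the part that prevents duplicates or corruption: the old checkpoint stays intact until the new one is completely written.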