Keep Persistent Variables in Memory Between Runs of Python Script


You can achieve something like this by using `reload` to re-execute your main script's code (`reload` is a built-in in Python 2; in Python 3 it lives in `importlib`). Write a wrapper script that imports your main script, asks it to compute the expensive value once, and caches a copy of that value in the wrapper module's scope. Then, whenever you want (for example, when you hit ENTER on stdin), the wrapper calls `reload(mainscript)` and re-runs the cheap part with the cached object, so the main script can bypass the expensive computation. Here's a quick example.

wrapper.py

import sys
from importlib import reload  # a built-in in Python 2; in importlib on Python 3

import mainscript

part1Cache = None

if __name__ == "__main__":
    while True:
        if not part1Cache:
            part1Cache = mainscript.part1()
        mainscript.part2(part1Cache)
        print("Press enter to re-run the script, CTRL-C to exit")
        sys.stdin.readline()
        reload(mainscript)

mainscript.py

def part1():
    print("part1 expensive computation running")
    return "This was expensive to compute"

def part2(value):
    print("part2 running with %s" % value)

While wrapper.py is running, you can edit mainscript.py, add new code to the part2 function, and run your new code against the pre-computed part1Cache.

Keeping Python Variables between Script Calls

(I misunderstood the original question, but the first answer I wrote offers a different solution that might be useful to someone in that scenario, so I am keeping it as-is and proposing a second solution.)

For a single machine, OS-provided pipes are the best solution for what you are looking for.

Essentially, you create a forever-running Python process that reads from a pipe, processes the commands entering the pipe, and then prints to stdout.

Reference: http://kblin.blogspot.com/2012/05/playing-with-posix-pipes-in-python.html

From the above-mentioned source:

Workload
In order to simulate my workload, I came up with the following simple script called pipetest.py that takes an output file name and then writes some text into that file.

#!/usr/bin/env python

import sys

def main():
    pipename = sys.argv[1]
    with open(pipename, 'w') as p:
        p.write("Ceci n'est pas une pipe!\n")

if __name__ == "__main__":
    main()

The Code
In my test, this "file" will be a FIFO created by my wrapper code. The implementation of the wrapper code is as follows, I will go over the code in detail further down this post:

#!/usr/bin/env python

import tempfile
import os
from os import path
import shutil
import subprocess

class TemporaryPipe(object):
    def __init__(self, pipename="pipe"):
        self.pipename = pipename
        self.tempdir = None

    def __enter__(self):
        self.tempdir = tempfile.mkdtemp()
        pipe_path = path.join(self.tempdir, self.pipename)
        os.mkfifo(pipe_path)
        return pipe_path

    def __exit__(self, type, value, traceback):
        if self.tempdir is not None:
            shutil.rmtree(self.tempdir)

def call_helper():
    with TemporaryPipe() as p:
        script = "./pipetest.py"
        subprocess.Popen(script + " " + p, shell=True)
        with open(p, 'r') as r:
            text = r.read()
        return text.strip()

def main():
    call_helper()

if __name__ == "__main__":
    main()
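The example above drives a single request through the FIFO. The same idea extends to the forever-running command server described earlier: a process that loops, reading commands from a named pipe and printing results to stdout, while its in-memory state survives between commands. A minimal sketch, assuming a pipe at /tmp/cmdpipe (a hypothetical path) and a trivial "command" that appends to a cached list:

```python
import os

PIPE_PATH = "/tmp/cmdpipe"  # hypothetical location for the command pipe

def handle(cmd, cache):
    """Process one command against the long-lived in-memory state."""
    cache.append(cmd)
    return "ack %d" % len(cache)

def serve():
    if not os.path.exists(PIPE_PATH):
        os.mkfifo(PIPE_PATH)
    cache = []  # expensive state lives here, in the long-running process
    while True:
        # Opening a FIFO for reading blocks until a writer connects.
        with open(PIPE_PATH) as pipe:
            for line in pipe:
                cmd = line.strip()
                if cmd == "quit":
                    return
                print(handle(cmd, cache))
```

Another shell can then send commands with `echo hello > /tmp/cmdpipe`; each write re-opens the pipe on the reader side, but `cache` persists because the server process never exits.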

Keeping the data of a variable between runs of code

Simply pickle the data you want to keep persistent. Since your use case doesn't require very complex data storage, pickling is a very good option. A small example:

import pickle

word_list = ["cat", "hat", "jump", "house", "orange", "brick", "horse", "word"]

# do your thing here, like
word_list.append("monty")

# open a pickle file
filename = 'mypickle.pk'

with open(filename, 'wb') as fi:
    # dump your data into the file
    pickle.dump(word_list, fi)

Later when you need to use it again, just load it up:

# load your data back to memory when you need it
with open(filename, 'rb') as fi:
    word_list = pickle.load(fi)

Ta-da! You now have data persistence. A few important pointers:

  1. Notice the 'b' when I use open() to open the file. Pickle data is stored in a binary format, so you must open the file in binary mode.
  2. I used the with context manager. This ensures that a file is safely closed once all my work with the file is done.
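On the very first run there is no pickle file yet, so a common pattern is to fall back to a default when the file is missing. A small sketch building on the example above (the helper names load_words/save_words are my own):

```python
import os
import pickle

filename = 'mypickle.pk'

def load_words(default=None):
    """Load the pickled list, or return a copy of the default on the first run."""
    if not os.path.exists(filename):
        return list(default or [])
    with open(filename, 'rb') as fi:
        return pickle.load(fi)

def save_words(word_list):
    with open(filename, 'wb') as fi:
        pickle.dump(word_list, fi)

words = load_words(default=["cat", "hat"])
words.append("monty")
save_words(words)
```

This way the same script works both on a fresh machine and on later runs, with no special-casing at the call site.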

is it possible to save a float value in a variable even after exiting the script

import os

# Insert code at beginning of script to load score from backup
dir_path = 'C:/Desktop/'
back_up = os.path.join(dir_path, 'backup.txt')

if os.path.isfile(back_up):
    with open(back_up, 'r') as r:
        score = float(r.read())
else:
    score = 0.0  # no backup yet, start from a default

# ... the script updates score here ...

# Insert code at end of script to save score to backup
with open(back_up, 'w') as w:
    w.write(str(score))

keeping a set content between program runs in python

I'd suggest using pickle, which serializes and unserializes python data types.

Do you anticipate the program exiting unexpectedly? If not, you can load the set at startup and write it back at close.
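That load-at-start, save-at-close pattern can be sketched with pickle plus atexit, which runs a handler on normal interpreter exit (the file name seen.pk is hypothetical):

```python
import atexit
import os
import pickle

STATE_FILE = "seen.pk"  # hypothetical file name

def load_seen():
    """Load the persisted set, or start empty on the first run."""
    if os.path.exists(STATE_FILE):
        with open(STATE_FILE, 'rb') as fi:
            return pickle.load(fi)
    return set()

def save_seen(seen):
    with open(STATE_FILE, 'wb') as fi:
        pickle.dump(seen, fi)

seen = load_seen()
atexit.register(save_seen, seen)  # fires on normal exit, not on a crash or kill

seen.add("example")  # the program runs as usual, mutating the set
```

Note that atexit handlers do not run if the process is killed or crashes hard, which is exactly why the question about unexpected exits matters; for that case, save periodically instead.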

How to cache imported modules once off for multiple runs of a python script?

No, you can't. You would have to use persistent storage, such as shelve, or a database such as SQLite, where you'd store any expensive computation results that should persist between sessions; subsequent runs would then just read those results back from disk.
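shelve gives you a persistent dict-like object, which is a natural fit for caching an expensive result between runs. A minimal sketch (the cache file name and the expensive_result key are my own choices):

```python
import shelve

def expensive_computation():
    # stands in for the slow work done at module level in your code
    return sum(i * i for i in range(1000))

# shelve.open creates the backing file(s) on disk on the first run
with shelve.open("cache") as db:
    if "expensive_result" not in db:
        db["expensive_result"] = expensive_computation()  # slow path, first run only
    result = db["expensive_result"]  # fast path on every later run
```

Every run after the first pays only the cost of opening the shelf and reading one key, not the computation itself.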

Moreover, note that modules are, in fact, cached upon import in order to improve load time. Between runs, however, this cache lives not in memory but on disk, as compiled .pyc files under __pycache__. The import machinery itself is generally insignificant; your imports take that long not because of the import itself, but because of the computations that run at module level inside those modules, so you might want to optimise those instead.
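For completeness: within a single process, imports are also cached in memory, in sys.modules, so a second import of the same module is essentially free; it is only between separate runs that this in-memory cache is lost. A small demonstration:

```python
import sys
import importlib

import json  # first import: the module's code actually executes

# The module object is now cached for this process
assert "json" in sys.modules

json_again = importlib.import_module("json")  # second import: cache hit
assert json_again is sys.modules["json"]  # same object, no re-execution
```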

The reason you can't do what you want is that keeping data in memory requires the process to keep running. Memory belongs to the process running the script, and once that script finishes, the memory is freed.

You can't just run a script and leave its computations sitting in memory for some future run, for two reasons. Firstly, the OS has no way of knowing when that future run would happen (it might be a minute later, it might be a year later). Secondly, if every script could do that, you would quickly run out of memory as scripts from every application across the OS (it's not just your program out there) filled it with the results of their computations.

So you can either run your code in an indefinite loop with sleep (keeping the process alive), or use a crontab and store your previous results somewhere on disk.
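The first option, a long-lived loop that keeps its state in memory between iterations, can be sketched as follows (the setup and work functions are placeholders, as is the sleep interval):

```python
import time

def expensive_setup():
    # placeholder for the slow imports/computations you only want to pay once
    return {"model": "loaded"}

def do_work(state, iteration):
    # placeholder for the cheap per-iteration work
    return "run %d with %s" % (iteration, state["model"])

def main(iterations, interval):
    state = expensive_setup()  # paid once; stays in memory afterwards
    for i in range(iterations):
        print(do_work(state, i))
        time.sleep(interval)  # the process stays alive between "runs"

# In real use you would loop indefinitely with, say, interval=60;
# here is a bounded, fast demo:
main(iterations=2, interval=0)
```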


