How to Prevent Out-of-Memory (OOM) Freezes on Linux

How can I keep executable code in memory, even under memory pressure, in Linux?

To answer the question, here's a simple, preliminary patch that keeps the kernel from evicting Active(file) pages (as reported in /proc/meminfo) whenever their total is below 256 MiB. It seems to work well (no disk thrashing) with linux-stable 5.2.4:

diff --git a/mm/vmscan.c b/mm/vmscan.c
index dbdc46a84f63..7a0b7e32ff45 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -2445,6 +2445,13 @@ static void get_scan_count(struct lruvec *lruvec, struct mem_cgroup *memcg,
 			BUG();
 		}
 
+		if (NR_ACTIVE_FILE == lru) {
+			long long kib_active_file_now=global_node_page_state(NR_ACTIVE_FILE) * MAX_NR_ZONES;
+			if (kib_active_file_now <= 256*1024) {
+				nr[lru] = 0; //don't reclaim any Active(file) (see /proc/meminfo) if they are under 256MiB
+				continue;
+			}
+		}
 		*lru_pages += size;
 		nr[lru] = scan;
 	}
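
To see how close the system is to the 256 MiB threshold the patch checks, you can watch the Active(file) counter that /proc/meminfo reports (in kB) while the system is under memory pressure. A minimal shell sketch, with the one-second interval chosen only for illustration:

# Print the Active(file) counter every second; /proc/meminfo reports it in kB.
# The patch above stops reclaiming these pages once the total is <= 256*1024 kB (256 MiB).
while true; do
    grep '^Active(file):' /proc/meminfo
    sleep 1
done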

Note that an as-yet-unidentified regression in kernel 5.3.0-rc4-gd45331b00ddb causes a system freeze (without disk thrashing; sysrq still works) even without this patch.

(Any new developments related to this should be happening here.)

Is there any way to recover when Linux runs out of memory, other than resetting the machine?

You can help the OOM killer choose its target by adjusting a per-process value:

echo some_value > /proc/<pid>/oom_score_adj

This value is added to the score the OOM killer computes when selecting its next victim. It can be used either to protect a process (with a negative value) or, conversely, to increase the likelihood that the process is targeted by the OOM killer. The accepted range is -1000 to 1000, and -1000 effectively exempts the process from OOM killing altogether.
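
For example, a quick sketch of both directions (the PIDs 1234 and 5678 and the values are only illustrative; making a process less killable typically requires root):

# Make one process much less likely to be chosen (-1000 would exempt it entirely).
echo -500 > /proc/1234/oom_score_adj

# Make another process a preferred victim.
echo 500 > /proc/5678/oom_score_adj

# Inspect the resulting badness score the OOM killer would use.
cat /proc/1234/oom_score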


