VFS: file-max limit 1231582 reached

I hate to leave a question open, so here is a summary for anyone who finds this.

I ended up reposting the question on Server Fault instead (this article).

They weren't able to come up with anything, actually, but I did some more investigation and ultimately found that it's a genuine bug in NFSv4, specifically in the server-side locking code. I had an NFS client which was running a monitoring script every 5 seconds, using rrdtool to log some data to an NFS-mounted file. Every time it ran, it locked the file for writing, and the server allocated (but erroneously never released) an open file descriptor. That script (plus another that ran less frequently) consumed about 900 open files per hour, and at that rate the limit was hit about two months later (900 × 24 × 57 ≈ 1.23 million, which matches the 1,231,582 limit).
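If you want to catch this kind of leak before it hits the ceiling, the kernel reports the current counts in /proc/sys/fs/file-nr (allocated handles, allocated-but-unused handles, and the file-max limit). Here is a minimal C sketch that samples that file periodically; the sampling interval and output format are my own choices, not anything from the original setup. A steadily climbing first number with no matching process activity is the signature of a kernel-side leak like the one above.

/* file_nr_watch.c - sample /proc/sys/fs/file-nr to spot a system-wide
 * file-handle leak before file-max is reached. */
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    unsigned long allocated, unused, max;

    for (;;) {
        FILE *f = fopen("/proc/sys/fs/file-nr", "r");
        if (!f) {
            perror("/proc/sys/fs/file-nr");
            return 1;
        }
        /* Three fields: allocated handles, allocated-but-unused, limit. */
        if (fscanf(f, "%lu %lu %lu", &allocated, &unused, &max) == 3)
            printf("allocated=%lu unused=%lu max=%lu\n",
                   allocated, unused, max);
        fclose(f);
        sleep(60); /* one sample per minute; a steady climb means a leak */
    }
}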

Several solutions are possible:
1) Use NFSv3 instead.
2) Stop running the monitoring script.
3) Store the monitoring results locally instead of on NFS.
4) Wait for the patch to NFSv4 that fixes this (Bruce Fields actually sent me a patch to try, but I haven't had time yet).

I'm sure you can think of other possible solutions.

Thanks for trying.

VFS rename operation explanation

What is the purpose of the new_dentry object?
But why is new_dentry->d_inode empty? Shouldn't it contain the inode of the file I copied?

new_dentry->d_inode will be valid if the destination already contains a file, which will be replaced if the rename succeeds. Something like mv dir1/file1 dir2/file2, where file2 will be replaced with file1. In your case there was no replacement, only a renaming of the file, so new_dentry->d_inode was NULL.
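You can reproduce both cases from user space with a small, self-contained C demonstration; the file names below are arbitrary. When the destination name does not exist, the kernel's rename path sees new_dentry->d_inode == NULL; when it does exist, d_inode points to the inode that is about to be replaced.

/* rename_demo.c - the two cases a filesystem's rename handler sees. */
#include <stdio.h>

int main(void)
{
    FILE *a = fopen("file1", "w");
    FILE *b = fopen("file2", "w");
    if (!a || !b) {
        perror("fopen");
        return 1;
    }
    fputs("one\n", a);
    fclose(a);
    fputs("two\n", b);
    fclose(b);

    /* Case 1: destination name is new, so nothing is replaced
     * (new_dentry->d_inode is NULL in the kernel). */
    if (rename("file1", "file3") != 0)
        perror("rename file1 -> file3");

    /* Case 2: destination already exists; file2's old inode is what
     * new_dentry->d_inode points to, and it gets replaced. */
    if (rename("file3", "file2") != 0)
        perror("rename file3 -> file2");

    return 0;
}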

How to find which process is leaking file handles in Linux?

Probably the root cause is a bug in the NFSv4 implementation: https://stackoverflow.com/a/5205459/280758

They have similar symptoms.
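For the common case where the leak is in a user-space process rather than the kernel, you can rank processes by open-descriptor count by listing each /proc/<pid>/fd directory. A minimal sketch (run it as root so every process's fd directory is readable):

/* fd_count.c - count open descriptors per process via /proc/<pid>/fd. */
#include <ctype.h>
#include <dirent.h>
#include <stdio.h>

static int count_entries(const char *path)
{
    DIR *d = opendir(path);
    struct dirent *e;
    int n = 0;

    if (!d)
        return -1;               /* permission denied or process exited */
    while ((e = readdir(d)) != NULL)
        if (e->d_name[0] != '.') /* skip "." and ".." */
            n++;
    closedir(d);
    return n;
}

int main(void)
{
    DIR *proc = opendir("/proc");
    struct dirent *e;
    char path[64];

    if (!proc) {
        perror("/proc");
        return 1;
    }
    while ((e = readdir(proc)) != NULL) {
        if (!isdigit((unsigned char)e->d_name[0]))
            continue;            /* not a PID directory */
        snprintf(path, sizeof(path), "/proc/%s/fd", e->d_name);
        int n = count_entries(path);
        if (n >= 0)
            printf("%6s %d\n", e->d_name, n);
    }
    closedir(proc);
    return 0;
}

Pipe the output through sort -n -k2 to find the biggest consumers. Note that in the NFSv4 case linked above, no process would show the leak, because the handles were held by the kernel itself.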

Why do file systems limit maximum length of a file name?

A long file name costs much more space and time than you might imagine.

The 255-byte limit on file name length is a long-standing trade-off between human convenience and space/time efficiency, and backward compatibility, of course.

Back in the old days, hard drive capacity was measured in megabytes, or at most a few gigabytes. File names were often stored in fixed-length C structs, and the struct size was usually chosen to fit evenly into 512 bytes, the size of a physical sector, so that a directory entry could be read with a single pass of the disk head.

If a file system allowed a 1 MB limit on file names, it would run out of disk space after only a few hundred files, and memory limits apply as well.
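As a concrete illustration, here is a sketch of a classic on-disk directory entry, loosely modeled on ext2's (field names abbreviated). Because the name length is stored in a single byte, 255 is the hard ceiling regardless of any other policy:

#include <stdint.h>

/* Classic fixed-layout on-disk directory entry, loosely modeled on ext2's. */
struct dir_entry {
    uint32_t inode;     /* inode number of the named file                 */
    uint16_t rec_len;   /* total size of this entry, used to walk the
                         * 512-byte (or larger) directory block           */
    uint8_t  name_len;  /* length of name[]: one byte, so at most 255     */
    uint8_t  file_type; /* regular file, directory, symlink, ...          */
    char     name[255]; /* name bytes; only name_len of them are valid    */
};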

How can locked files be monitored on a WIN 2000 server

Well,

After some research, it seems that the best Sysinternals tool for this purpose is File Monitor. While wrapping the Handle program (as suggested here) could work, File Monitor provides a fully customizable GUI for the job.

File Monitor was replaced by Process Monitor for OS versions later than Windows 2000 SP4, but since I had to monitor an earlier version, File Monitor was definitely the way to go.


