Does Linux guarantee the contents of a file is flushed to disc after close()?
From "man 2 close
":
A successful close does not guarantee that the data has been
successfully saved to disk, as the
kernel defers writes.
The man page says that if you want to be sure that your data are on disk, you have to use fsync() yourself.
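A minimal sketch of that pattern, assuming a hypothetical helper `write_durably()` and an illustrative path: write the data, `fsync()` it, and still check `close()` for deferred errors.

```c
#include <fcntl.h>
#include <unistd.h>

/* Sketch: durably write a buffer to a file.  Returns 0 on success,
 * -1 on failure (a real program would also report errno). */
int write_durably(const char *path, const char *data, size_t len)
{
    int fd = open(path, O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (fd == -1)
        return -1;

    ssize_t n = write(fd, data, len);   /* data now in kernel buffers */
    if (n == -1 || (size_t)n != len) {
        close(fd);
        return -1;
    }
    if (fsync(fd) == -1) {              /* force the data out to the disk */
        close(fd);
        return -1;
    }
    return close(fd);                   /* still check for deferred errors */
}
```

Usage would be e.g. `write_durably("/tmp/demo.txt", "hello\n", 6)`; the path and helper name are only illustrative.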
Will data written via write() be flushed to disk if a process is killed?
Normally, data already handed to the kernel with write() will not be affected by the application exiting or getting killed in any way. Exiting or being killed implicitly closes all file descriptors, so there is no difference: the kernel handles the flushing afterwards. No fdatasync() or similar call is necessary.
There are two exceptions to this:
- if the application uses user-space buffering (not calling the write() system call directly, but instead caching the data in a user-space buffer, e.g. with fwrite()), those buffers might not get flushed unless a proper user-space file close is executed; getting killed by SIGKILL will definitely cause you to lose the contents of those buffers;
- if the kernel dies as well (loss of power, kernel crash, etc.), your data might have missed getting written to the disks from the kernel buffers, and will then be lost.
Does close() call fsync() on Linux?
It does not. Calling close() does NOT guarantee that the contents are on the disk, as the OS may have deferred the writes.
As a side note, always check the return value of close(); it will report any deferred errors up to that point. And if you want to ensure that the contents are on the disk, always call fsync() and check its return value as well.
One thing to keep in mind is what the backing store is. There are devices that may do internal write deferring and content can be lost in some cases (although newer storage media devices typically have super capacitors to prevent this, or ways to disable this feature).
after writing a files to an ext4 volume do I need to do more than flush to guarantee the file is fully written?
flush() ensures that all processes see the file in the same state, but does not guarantee that all bytes have been written to disk. A further call to fsync() or fdatasync() is required.
How to prevent data loss when closing a file descriptor?
Does it mean releasing (freeing) those kernel buffers which contained my data? What will happen to my precious data, contained in those buffers? Will be lost?
No. The kernel buffers will not be freed before the data is written to the underlying file, so there will be no data loss (unless something goes really wrong, such as a power outage). Whether that data is immediately written to the physical file is another question. It may depend on the filesystem (which may be buffering) and/or on any hardware caching.
As far as your user program is concerned, a successful close() call can be considered a successful write to the file.
It may suggest that fsync() does not have to precede close() (or fclose(), which contains a close()), but can, or even must, come after it. So close() cannot be very destructive...
After a call to close(), the state of the file descriptor is left unspecified by POSIX (regardless of whether close() succeeded). So you are not allowed to call fsync(fd); after close().
See: POSIX/UNIX: How to reliably close a file descriptor.
And no, it doesn't suggest close() can be destructive. It suggests that the C library may be doing its own buffering in user space, and that fflush() should be used to flush it to the kernel (and then we are in the same position as described before).
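Putting the pieces together for stdio streams, a minimal sketch (`save_stream` is an illustrative name): flush the C library buffer first, then fsync() the underlying descriptor, and only then fclose(), since fsync() after fclose() is not allowed.

```c
#include <stdio.h>
#include <unistd.h>

/* Sketch of the correct ordering when writing via a FILE * stream. */
int save_stream(FILE *fp)
{
    if (fflush(fp) == EOF)          /* stdio buffer -> kernel buffers */
        return -1;
    if (fsync(fileno(fp)) == -1)    /* kernel buffers -> disk */
        return -1;
    return fclose(fp);              /* the fd is unusable after this */
}
```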
Is a file guaranteed to be openable for reading immediately after ofstream::close() has returned?
There is a potential failure mode that I missed earlier: you don't seem to have a way of recovering when the file cannot be opened by secondprogram. The problem is not that the file might be locked/inconsistent after close() returns, but that another program, completely unrelated to yours, might open the file between close() and system() (say, an AV scanner, someone grepping through the directory containing the file, a backup process). If that happens, secondprogram will fail even though your program behaves correctly.
TL/DR: Even though everything works as expected, you have to account for the case that secondprogram may not be able to open the file!
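One way to account for that case is to retry the open a few times before giving up. The sketch below assumes a hypothetical helper; `retry_fopen`, the attempt count, and the delay are all illustrative, not from the original answer.

```c
#include <stdio.h>
#include <unistd.h>

/* Sketch: tolerate a file that is transiently un-openable because
 * another process briefly holds it.  Returns NULL only after all
 * attempts fail. */
FILE *retry_fopen(const char *path, const char *mode, int attempts)
{
    for (int i = 0; i < attempts; i++) {
        FILE *fp = fopen(path, mode);
        if (fp)
            return fp;
        usleep(100 * 1000);  /* wait 100 ms and try again */
    }
    return NULL;
}
```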
File is not written on disk until program ends
If there's some time between the fputs and fclose, add
fflush(fp);
This flushes the stdio buffer, so the data written so far reaches the file without waiting for the fclose() at program exit.