Quickly Create a Large File on a Linux System

Quickly create an incompressible large file on a Linux system

You can use /dev/urandom or /dev/random to fill the file, for example:


@debian-10:~$ SECONDS=0; dd if=/dev/urandom of=testfile bs=10M count=1000 ;echo $SECONDS
1000+0 records in
1000+0 records out
10485760000 bytes (10 GB, 9,8 GiB) copied, 171,516 s, 61,1 MB/s
171

Using a bigger bs, slightly less time is needed:


@debian-10:~$ SECONDS=0; dd if=/dev/urandom of=testfile bs=30M count=320 ;echo $SECONDS
320+0 records in
320+0 records out
10066329600 bytes (10 GB, 9,4 GiB) copied, 164,498 s, 61,2 MB/s
165

171 seconds vs. 165 seconds

How to quickly create large files in C in Linux?

The fallocate system call on Linux has an option to zero the space.

#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(void)
{
    int fd = open("testfile", O_RDWR | O_TRUNC | O_CREAT, 0644);
    off_t size = 1024 * 1024 * 1024;

    if (fd == -1) {
        perror("open");
        exit(1);
    }

    if (fallocate(fd, FALLOC_FL_ZERO_RANGE, 0, size) == -1) {
        perror("fallocate");
        exit(1);
    }

    close(fd);
    return 0;
}

Note that FALLOC_FL_ZERO_RANGE may not be supported by all filesystems. ext4 supports it.

Otherwise, if you are looking for a more portable solution, you can write the zeros yourself (which is less efficient, of course).

Fast Creation of a Very Large File in Debian Linux

First create the file, then lseek to the desired end and write a dummy byte. This is a very quick way to create an arbitrarily large but sparse file.


If you don't want the file to be sparse, determine the block size of the drive (it can be found using stat on most POSIX platforms). Create a buffer of that size and write it to the file repeatedly until the desired size is reached.

If the stat structure doesn't have the st_blksize member, most filesystems have a block size of 4 or 8 kB. You can probably make this buffer larger, but not too large. Experiment and benchmark!
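For the block-size lookup, a small sketch using stat (the 4096-byte fallback is an assumption based on the common default mentioned above):

```c
#include <stdio.h>
#include <sys/stat.h>

/* Query the filesystem's preferred I/O block size for a path via
 * st_blksize. Falls back to 4096 bytes, a common filesystem default,
 * if stat fails or reports nothing useful. */
long preferred_block_size(const char *path)
{
    struct stat st;
    if (stat(path, &st) == 0 && st.st_blksize > 0)
        return (long)st.st_blksize;
    return 4096;
}
```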

How to create a file with a given size in Linux?

For small files:

dd if=/dev/zero of=upload_test bs=file_size count=1

Where file_size is the size of your test file in bytes.

For big files:

dd if=/dev/zero of=upload_test bs=1M count=size_in_megabytes

Create a large file with a given size with a pattern in Linux

while true ; do printf "DEADBEEF"; done | dd of=/tmp/bigfile bs=blocksize count=size iflag=fullblock

How to create large file (require long compress time) on Linux

There are two things going on here. The first is that tar won't compress anything unless you pass it a z flag along with what you already have to trigger gzip compression:

tar cvzf test.tar.gz test.txt

For a very similar effect, you can invoke gzip directly:

gzip test.txt

The second issue is that with most compression schemes, a gigantic string of zeros, which is likely what you generate, is very easy to compress. You can fix that by supplying random data. On a Unix-like system you can use the pseudo-file /dev/urandom. This answer gives three options in decreasing order of preference, depending on what works:

  1. head that understands suffixes like G for Gibibyte:

    head -c 1G < /dev/urandom > test.txt
  2. head that needs it spelled out:

    head -c 1073741824 < /dev/urandom > test.txt
  3. No head at all, so use dd, where file size is block size (bs) times count (1073741824 = 1024 * 1048576):

    dd bs=1024 count=1048576 < /dev/urandom > test.txt

How to quickly create large files in C?

There is no way to do it instantly if every block of the file must actually be written.

Having each block of the file written to disk takes a significant amount of time, especially for a large file.

Quickly create large file on a Windows system

fsutil file createnew <filename> <length>

where <length> is in bytes.

For example, to create a 1 MiB (1,048,576-byte) file named 'test', this command can be used.

fsutil file createnew test 1048576

fsutil requires administrative privileges though.


