Least Memory Intensive Way to Read a File in PHP

Read and parse contents of very large file

Yes, you can read it line by line:

$handle = @fopen("/tmp/inputfile.txt", "r");
if ($handle) {
    while (($buffer = fgets($handle, 4096)) !== false) {
        echo $buffer;
    }
    fclose($handle);
}

Is there a less memory-intensive or more efficient way of merging MP3s than file_get_contents?

What about this approach?

<?php
$dir = $_POST['dir'];

if ($handle = opendir($dir))
{
    $outFd = fopen(sprintf("mp3/%s.mp3", $dir), 'wb');

    while (false !== ($entry = readdir($handle)))
    {
        if ($entry != "." && $entry != "..")
        {
            $path = urldecode(sprintf("%s/%s", $dir, $entry));

            $inFd = fopen($path, 'rb');
            stream_copy_to_stream($inFd, $outFd);
            fclose($inFd);
        }
    }

    fclose($outFd);

    closedir($handle);
}

Now, this code has some security concerns because it doesn't properly validate its input, and if this is a public tool, you should fix that. But for now I'll just discuss the problem at hand.

Instead of storing the data from all of these files in memory at once, why don't we just open the output file first, then copy each input file to the end of the output file? We can do this with stream_copy_to_stream().

We could also use a smaller chunk size for the operation via stream_set_chunk_size(), but it sounds like, for your application, it would be fine to hold one MP3 in memory temporarily (4–5 MB tops). You could also read and write a chunk at a time manually rather than using stream_copy_to_stream(), as sketched below; the general idea is the same.

This way, at most one file's worth of data (and usually far less) is held in memory at a time.
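For reference, here is a minimal sketch of that manual chunk-at-a-time variant. The directory and file names are placeholders for illustration, not taken from the question:

<?php
// Hedged sketch: append several input files to one output file,
// reading and writing a small fixed-size buffer at a time.
$outFd = fopen('mp3/combined.mp3', 'wb');

foreach (array('mp3/part1.mp3', 'mp3/part2.mp3') as $path) {
    $inFd = fopen($path, 'rb');
    if ($inFd === false) {
        continue; // skip inputs that cannot be opened
    }
    // Copy 8 KB at a time, so only one small buffer sits in memory.
    while (!feof($inFd)) {
        fwrite($outFd, fread($inFd, 8192));
    }
    fclose($inFd);
}

fclose($outFd);

The only difference from stream_copy_to_stream() is that you control the buffer size explicitly; the memory profile is otherwise the same.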

How to save memory when reading a file in Php?

Unless you know the offset of the line, you will need to read every line up to that point. You can just throw away the lines you don't want by looping through the file with something like fgets(). (EDIT: rather than fgets(), I would suggest @Gordon's solution.)
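A minimal sketch of that fgets() skip-and-discard approach; the file name and target line number are assumptions for illustration:

<?php
// Hedged sketch: read and discard lines until we reach the one we want.
$targetLine = 1000; // 1-based line number we are after
$handle = fopen('/tmp/inputfile.txt', 'r');

$wanted = null;
if ($handle) {
    $current = 0;
    while (($line = fgets($handle)) !== false) {
        $current++;
        if ($current === $targetLine) {
            $wanted = $line;
            break; // stop as soon as we have the line we care about
        }
    }
    fclose($handle);
}

Only one line is ever held in memory, but the cost is still linear in how far into the file the target line sits.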

Possibly a better solution would be to use a database: the database engine does the grunt work of storing the strings and lets you (very efficiently) fetch a particular "line" (it wouldn't be a line but a record with a numeric ID; however, it amounts to the same thing) without having to read the records before it.
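For illustration, a minimal sketch of the database idea, assuming SQLite via PDO and a hypothetical lines table (none of these names come from the question):

<?php
// Hedged sketch: import the file once, then fetch any "line" by its record ID.
$db = new PDO('sqlite:/tmp/lines.db');
$db->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

// One-time import: store each line of the file as its own record.
$db->exec('CREATE TABLE IF NOT EXISTS lines (id INTEGER PRIMARY KEY, content TEXT)');
$insert = $db->prepare('INSERT INTO lines (content) VALUES (?)');
$handle = fopen('/tmp/inputfile.txt', 'r');
while (($line = fgets($handle)) !== false) {
    $insert->execute(array(rtrim($line, "\r\n")));
}
fclose($handle);

// Later: fetch "line" 1000 directly, without reading the records before it.
$select = $db->prepare('SELECT content FROM lines WHERE id = ?');
$select->execute(array(1000));
$wanted = $select->fetchColumn();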

PHP processing a large file

Yes, it's not hard, and you seem to have already started. One note: you should not open a file with fopen() before running file_get_contents() on it (unless you explicitly want to lock it).

$search  = array(/* ... */);
$replace = array(/* ... */);

$myfile = fopen("./m.sql", "r");
$output = fopen("./output.sql", "w");

while (($line = fgets($myfile)) !== false) {
    fputs($output, str_replace($search, $replace, $line));
}

fclose($myfile);
fclose($output);

PHP - how to read big remote files efficiently and use buffer in loop

As already suggested in my close votes on your question (hence CW):

You can use SplFileObject which implements Iterator to iterate over a file line by line to save memory. See my answers to

  • Least memory intensive way to read a file in PHP and
  • How to save memory when reading a file in Php?

for examples, or see the short sketch below.
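A minimal SplFileObject sketch (assuming a readable local path, or a URL wrapper if allow_url_fopen permits it):

<?php
// Hedged sketch: SplFileObject implements Iterator, so foreach reads
// one line at a time and only the current line is held in memory.
$file = new SplFileObject('/tmp/inputfile.txt');
$file->setFlags(SplFileObject::DROP_NEW_LINE);

foreach ($file as $lineNumber => $line) {
    // Process one line at a time; $lineNumber is zero-based.
    echo $lineNumber, ': ', $line, PHP_EOL;
}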

Why does readfile() exhaust PHP memory?

Description:
    int readfile ( string $filename [, bool $use_include_path = false [, resource $context ]] )
Reads a file and writes it to the output buffer.

PHP has to read the file, and it writes the contents to the output buffer.
So for a 300 MB file, no matter how you implement it (many small segments or one big chunk), PHP eventually has to push all 300 MB of data through.

If multiple users have to download the file at the same time, there will be a problem.
(On a shared server, hosting providers limit the memory given to each hosting account. With such limited memory, buffering the whole file is not a good idea.)

I think using a direct link to download the file is a much better approach for big files.
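If the download does have to go through PHP, a common workaround (a minimal sketch, assuming the memory exhaustion comes from output buffering accumulating the whole file) is to discard any active output buffers before calling readfile(). The file path below is a placeholder:

<?php
// Hedged sketch: stream a large file to the client without buffering it in memory.
$file = '/path/to/big-file.zip';

header('Content-Type: application/octet-stream');
header('Content-Disposition: attachment; filename="' . basename($file) . '"');
header('Content-Length: ' . filesize($file));

// Drop any active output buffers so readfile() writes straight to the client
// instead of collecting the whole file in memory first.
while (ob_get_level() > 0) {
    ob_end_clean();
}

readfile($file);
exit;

Alternatively, an fread() loop that echoes and flushes a small chunk at a time achieves the same effect.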


