How to Make Ffmpeg Write Its Output to a Named Pipe

You could create a named pipe first and have ffmpeg write to it using the following approach:

ffmpeg output to named pipe:

# mkfifo outpipe

# ffmpeg -i input_file.avi -f avi pipe:1 > outpipe
FFmpeg version 0.6.5, Copyright (c) 2000-2010 the FFmpeg developers
built on Jan 29 2012 17:52:15 with gcc 4.4.5 20110214 (Red Hat 4.4.5-6)
...
[avi @ 0x1959670]non-interleaved AVI
Input #0, avi, from 'input_file.avi':
Duration: 00:00:34.00, start: 0.000000, bitrate: 1433 kb/s
Stream #0.0: Video: cinepak, yuv420p, 320x240, 15 tbr, 15 tbn, 15 tbc
Stream #0.1: Audio: pcm_u8, 22050 Hz, 1 channels, u8, 176 kb/s
Output #0, avi, to 'pipe:1':
Metadata:
ISFT : Lavf52.64.2
Stream #0.0: Video: mpeg4, yuv420p, 320x240, q=2-31, 200 kb/s, 15 tbn, 15 tbc
Stream #0.1: Audio: mp2, 22050 Hz, 1 channels, s16, 64 kb/s
Stream mapping:
Stream #0.0 -> #0.0
Stream #0.1 -> #0.1
Press [q] to stop encoding
frame= 510 fps= 0 q=11.5 Lsize= 1292kB time=33.96 bitrate= 311.7kbits/s
video:1016kB audio:265kB global headers:0kB muxing overhead 0.835379%

reading outpipe named pipe (Python example):

# python3 -c "import os; fifo_read = open('outpipe', 'rb', 0); print(fifo_read.read().splitlines()[0])"
b'RIFFAVI LIST<hdrlavih8j...'
...
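The fifo's blocking hand-off can be sketched without ffmpeg at all. A minimal, self-contained Python sketch (the RIFF bytes below are fabricated stand-ins for a real AVI header):

```python
import os
import tempfile
import threading

# Create a named pipe, like `mkfifo outpipe` in the shell.
pipe_path = os.path.join(tempfile.mkdtemp(), "outpipe")
os.mkfifo(pipe_path)

def writer():
    # Stands in for `ffmpeg ... > outpipe`: open() blocks until a reader appears.
    with open(pipe_path, "wb") as f:
        f.write(b"RIFF....AVI LIST")  # fabricated stand-in for an AVI header

# The writer must run concurrently; opening both ends in one thread would deadlock.
t = threading.Thread(target=writer)
t.start()

# Reader side, like the python one-liner above.
with open(pipe_path, "rb") as f:
    data = f.read()
t.join()

print(data[:4])  # prints b'RIFF'
```

The key point is that `open()` on a fifo blocks until the other end is attached, which is why one side has to live in a thread or a separate process.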

-- ab1

ffmpeg output piping to named Windows pipe

It seems like the problem can be solved by adding the -y option to the ffmpeg command and specifying a buffer size for the pipe.

My ffmpeg command (see aergistal's comment on why I also removed the -pass 1 flag):

-y -f rawvideo -vcodec rawvideo -video_size 656x492 -r 10 -pix_fmt rgb24 -i \\.\pipe\to_ffmpeg -c:v libvpx -f webm \\.\pipe\from_ffmpeg

And defining the named pipe as follows:

p_from_ffmpeg = new NamedPipeServerStream(pipename_from, 
PipeDirection.In,
1,
PipeTransmissionMode.Byte,
System.IO.Pipes.PipeOptions.WriteThrough,
10000, 10000);

Multiple named pipes in ffmpeg

Using named pipes (Linux only):

Named pipes are required when there are two or more input streams that need to be piped from memory buffers.

Using named pipes is not trivial at all...

From FFmpeg's point of view, named pipes are like (non-seekable) input files.

Using named pipes in Python (in Linux):

Assume pipe1 is the name of the "named pipe" (e.g. pipe1 = "audio_pipe1").

  1. Create a "named pipe":

    os.mkfifo(pipe1)
  2. Open the pipe as "write only" file:

    fd_pipe = os.open(pipe_name, os.O_WRONLY)  # fd_pipe is a file descriptor (an integer).
  3. Write the data to the pipe in small chunks.

    According to this post, the default buffer size of a pipe on most Linux systems is 64 KB (65,536 bytes).

    Because the data is larger than 65536 bytes, we need to write the data to the pipe in small chunks.

    I decided to use an arbitrary chunk size of 1024 bytes.

    Writing to a pipe is a "blocking" operation.

    I solved it by using a "writer" thread:

    def writer(data, pipe_name, chunk_size):
        # Open the pipe as a "low level I/O" file (open for writing only).
        fd_pipe = os.open(pipe_name, os.O_WRONLY)  # fd_pipe is a file descriptor (an integer)

        for i in range(0, len(data), chunk_size):
            # Write to the named pipe as writing to a "low level I/O" file (in small chunks).
            os.write(fd_pipe, data[i:chunk_size+i])  # Write chunk_size bytes of data to fd_pipe
  4. Close the pipe:

    os.close(fd_pipe)
  5. Remove (unlink) the named pipe:

    os.unlink(pipe1)
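The chunking arithmetic from step 3 can be checked in isolation. A sketch with a fabricated 70,000-byte payload (just over the 65,536-byte pipe buffer):

```python
data = bytes(70000)          # fabricated payload, larger than the 64 KB pipe buffer
chunk_size = 1024

# Same slicing as in the writer function: data[i:chunk_size + i] for each step.
chunks = [data[i:chunk_size + i] for i in range(0, len(data), chunk_size)]

print(len(chunks))           # 69 chunks: 68 full 1024-byte chunks plus a 368-byte tail
print(len(chunks[-1]))       # 368
```

Reassembling the chunks gives back the original payload, so no bytes are lost at the boundaries.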

Here is the sample from the previous post, using two named pipes:

import subprocess
import os
from threading import Thread


def create_samp():
    # Read audio stream from https://freesound.org/data/previews/186/186942_2594536-hq.mp3
    # Apply adelay audio filter.
    # Encode the audio in mp3 format.
    # FFmpeg output is passed to stdout pipe, and stored in a bytes array.
    sample1 = subprocess.run(["ffmpeg", "-i", "https://freesound.org/data/previews/186/186942_2594536-hq.mp3",
                              "-af", "adelay=15000|15000", "-f", "mp3", "pipe:"], stdout=subprocess.PIPE).stdout

    # Read second audio sample from https://cdns-preview-b.dzcdn.net/stream/c-b0b684fe962f93dc43f1f7ea493683a1-3.mp3
    sample2 = subprocess.run(["ffmpeg", "-i", "https://cdns-preview-b.dzcdn.net/stream/c-b0b684fe962f93dc43f1f7ea493683a1-3.mp3",
                              "-f", "mp3", "pipe:"], stdout=subprocess.PIPE).stdout

    return sample1, sample2


def writer(data, pipe_name, chunk_size):
    # Open the pipe as a file (open for writing only).
    fd_pipe = os.open(pipe_name, os.O_WRONLY)  # fd_pipe is a file descriptor (an integer)

    for i in range(0, len(data), chunk_size):
        # Write to the named pipe as writing to a file (but write the data in small chunks).
        os.write(fd_pipe, data[i:chunk_size+i])  # Write chunk_size bytes of data to fd_pipe

    # Close the pipe as closing a file.
    os.close(fd_pipe)


def record(samp1, samp2):
    # Names of the "named pipes".
    pipe1 = "audio_pipe1"
    pipe2 = "audio_pipe2"

    # Create the "named pipes".
    os.mkfifo(pipe1)
    os.mkfifo(pipe2)

    # Open FFmpeg as a sub-process.
    # Use two audio input streams:
    # 1. Named pipe: "audio_pipe1"
    # 2. Named pipe: "audio_pipe2"
    # Merge the two audio streams using the amix audio filter.
    # Store the result to the output file: output.mp3
    process = subprocess.Popen(["ffmpeg", "-y", "-f", "mp3",
                                "-i", pipe1,
                                "-i", pipe2,
                                "-filter_complex", "amix=inputs=2:duration=longest", "output.mp3"],
                               stdin=subprocess.PIPE)

    # Initialize two "writer" threads (each writer writes data to a named pipe in chunks of 1024 bytes).
    thread1 = Thread(target=writer, args=(samp1, pipe1, 1024))  # thread1 writes samp1 to pipe1
    thread2 = Thread(target=writer, args=(samp2, pipe2, 1024))  # thread2 writes samp2 to pipe2

    # Start the two threads.
    thread1.start()
    thread2.start()

    # Wait for the two writer threads to finish.
    thread1.join()
    thread2.join()

    process.wait()  # Wait for the FFmpeg sub-process to finish

    # Remove the "named pipes".
    os.unlink(pipe1)
    os.unlink(pipe2)


sampl1, sampl2 = create_samp()
record(sampl1, sampl2)


Update:

Same solution using a class:

Implementing the solution using a class ("NamedPipeWriter") is a bit more elegant.

The class inherits from Thread and overrides the run method.

You may create a list of multiple objects and iterate over them in a loop, instead of duplicating the code for each new input stream.

Here is the same solution using a class:

import subprocess
import os
import stat
from threading import Thread


def create_samp():
    # Read audio stream from https://freesound.org/data/previews/186/186942_2594536-hq.mp3
    # Apply adelay audio filter.
    # Encode the audio in mp3 format.
    # FFmpeg output is passed to stdout pipe, and stored in a bytes array.
    sample1 = subprocess.run(["ffmpeg", "-i", "https://freesound.org/data/previews/186/186942_2594536-hq.mp3",
                              "-af", "adelay=15000|15000", "-f", "mp3", "pipe:"], stdout=subprocess.PIPE).stdout

    # Read second audio sample from https://cdns-preview-b.dzcdn.net/stream/c-b0b684fe962f93dc43f1f7ea493683a1-3.mp3
    sample2 = subprocess.run(["ffmpeg", "-i", "https://cdns-preview-b.dzcdn.net/stream/c-b0b684fe962f93dc43f1f7ea493683a1-3.mp3",
                              "-f", "mp3", "pipe:"], stdout=subprocess.PIPE).stdout

    return sample1, sample2


class NamedPipeWriter(Thread):
    """ Write data (in small chunks) to a named pipe using a thread """

    def __init__(self, pipe_name, data):
        """ Initialization - get pipe name and data to be written """
        super().__init__()
        self._pipe_name = pipe_name
        self._chunk_size = 1024
        self._data = data

    def run(self):
        """ Open the pipe, write data in small chunks and close the pipe """
        chunk_size = self._chunk_size
        data = self._data

        # Open the pipe as a file (open for writing only).
        fd_pipe = os.open(self._pipe_name, os.O_WRONLY)  # fd_pipe is a file descriptor (an integer)

        for i in range(0, len(data), chunk_size):
            # Write to the named pipe as writing to a file (but write the data in small chunks).
            os.write(fd_pipe, data[i:chunk_size+i])  # Write chunk_size bytes of data to fd_pipe

        # Close the pipe as closing a file.
        os.close(fd_pipe)


def record(samp1, samp2):
    # Names of the "named pipes".
    pipe1 = "audio_pipe1"
    pipe2 = "audio_pipe2"

    # Create each "named pipe" only if it does not already exist.
    if not (os.path.exists(pipe1) and stat.S_ISFIFO(os.stat(pipe1).st_mode)):
        os.mkfifo(pipe1)

    if not (os.path.exists(pipe2) and stat.S_ISFIFO(os.stat(pipe2).st_mode)):
        os.mkfifo(pipe2)

    # Open FFmpeg as a sub-process.
    # Use two audio input streams:
    # 1. Named pipe: "audio_pipe1"
    # 2. Named pipe: "audio_pipe2"
    # Merge the two audio streams using the amix audio filter.
    # Store the result to the output file: output.mp3
    process = subprocess.Popen(["ffmpeg", "-y", "-f", "mp3",
                                "-i", pipe1,
                                "-i", pipe2,
                                "-filter_complex", "amix=inputs=2:duration=longest", "output.mp3"],
                               stdin=subprocess.PIPE)

    # Initialize two "writer" threads (each writes data to a named pipe in chunks of 1024 bytes).
    named_pipe_writer1 = NamedPipeWriter(pipe1, samp1)
    named_pipe_writer2 = NamedPipeWriter(pipe2, samp2)

    # Start the two threads.
    named_pipe_writer1.start()
    named_pipe_writer2.start()

    # Wait for the two writer threads to finish.
    named_pipe_writer1.join()
    named_pipe_writer2.join()

    process.wait()  # Wait for the FFmpeg sub-process to finish

    # Remove the "named pipes".
    os.unlink(pipe1)
    os.unlink(pipe2)


sampl1, sampl2 = create_samp()
record(sampl1, sampl2)

Notes:

  • The code was tested in Ubuntu 18.04 (in a virtual machine).

Pipe ffmpeg output to named pipe

Let's tackle the line endings first. ffmpeg uses carriage return ('\r') to send the cursor back to the start of the line so it doesn't fill up the terminal with progress messages. With tr, the fix is simple.

ffmpeg -i input.mov output.webm 2>&1 | tr '\r' '\n'

Now you should see each progress line separately. Things get more interesting if we pipe or redirect somewhere else.

ffmpeg -i input.mov output.webm 2>&1 | tr '\r' '\n' | cat

Notice that the output appears as chunks rather than line-by-line. If that's not acceptable for your purposes, you can use stdbuf to disable tr's output buffering.

ffmpeg -i input.mov video.webm 2>&1 | stdbuf -o0 tr '\r' '\n' | cat
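The same carriage-return translation can be done from Python when capturing ffmpeg's output as a subprocess. A sketch using `printf` as a stand-in for ffmpeg (the frame numbers are fabricated):

```python
import subprocess

# `printf` stands in for ffmpeg, which separates progress updates with '\r'.
proc = subprocess.run(
    ["printf", r"frame=1\rframe=2\rframe=3\n"],
    capture_output=True,
)

# Translate '\r' to '\n' so every progress update lands on its own line,
# just like `tr '\r' '\n'` in the shell.
lines = proc.stdout.replace(b"\r", b"\n").decode().splitlines()
print(lines)  # ['frame=1', 'frame=2', 'frame=3']
```

With real ffmpeg you would capture `stderr` instead of `stdout`, since that is where the progress messages go.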

For reading the output from a different shell, a named pipe might work. However, the pipe won't end until ffmpeg finishes, so tail won't print anything until then. You can read a pipe in progress with other tools like cat or grep, but it's probably easier to just use a plain file.

# shell 1
ffmpeg -i input.mov output.webm 2>&1 | stdbuf -o0 tr '\r' '\n' > fflog.txt

# shell 2
tail -f fflog.txt

Give input to ffmpeg using named pipes

From what I know, there aren't any requirements on the format of the video that is put into the named pipe; you can put anything ffmpeg can open. For instance, I once developed a program using the ffmpeg libraries that read an h264 video from a named pipe and retrieved statistics from it; the named pipe was filled by another program. This is a really nice and clean solution for continuous video.

Now, concerning your case, I believe you have a small problem: the named pipe is just one file, and ffmpeg won't be able to know that there are multiple images in the same file! So if you declare the named pipe as input, ffmpeg will believe that you have only one image, which is not good enough...

One solution I can think of is to declare that your named pipe contains a video, so ffmpeg will continuously read from it and store or stream it. Of course, your C program would need to generate and write that video to the named pipe... This isn't as hard as it seems! You could convert your images (you haven't told us their format) to YUV and simply write them one after the other into the named pipe (a YUV video is a headerless series of YUV images; you can also easily convert from BMP to YUV, just check the Wikipedia entry on YUV). Then ffmpeg will think that the named pipe contains a simple YUV file, and you can finally read from it and do whatever you want with it.
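That idea can be sketched in Python. Everything here is hypothetical (the 320x240 geometry, the pipe name, the helper's name); ffmpeg would read the other end with a rawvideo input along the lines of `ffmpeg -f rawvideo -pix_fmt yuv420p -video_size 320x240 -i video_pipe out.mp4`:

```python
import os

WIDTH, HEIGHT = 320, 240                # hypothetical frame geometry
FRAME_SIZE = WIDTH * HEIGHT * 3 // 2    # yuv420p holds 1.5 bytes per pixel

def write_yuv_frames(pipe_name, frames):
    # A raw YUV "video" is headerless: the frames are simply concatenated.
    fd = os.open(pipe_name, os.O_WRONLY)   # blocks until a reader (e.g. ffmpeg) opens the pipe
    for frame in frames:
        assert len(frame) == FRAME_SIZE    # each frame must be exactly one frame's worth of bytes
        os.write(fd, frame)
    os.close(fd)
```

A mid-gray yuv420p frame is simply `bytes([128]) * FRAME_SIZE`, so writing a list of those into the pipe already gives the reader a valid (if dull) raw video stream.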


