How to catch stdout stream in ffmpeg then pipe it to v4l2loopback
In your last command you are piping an MP4 to GStreamer. See the `-f mp4 -` part:
./capture -F -d /dev/video0 -o | ffmpeg -f h264 -i - -vcodec copy -f mp4 - | gst-launch-0.10 -v fdsrc ! v4l2sink device=/dev/video3
What you want to do is pipe the H.264 stream inside the MP4 instead. Try replacing `-f mp4 -` with `-f h264 -`.
In fact you could probably skip entirely the creation of an MP4 and just do:
./capture -F -d /dev/video0 -o | gst-launch-0.10 -v fdsrc ! v4l2sink device=/dev/video3
since the `-F` option forces H.264.
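If you still want ffmpeg in the chain (for example to add filtering later), the substitution described above looks like this — a sketch assuming the same device paths as the original command, and runnable only with that capture hardware attached:

```shell
# Keep the stream as raw H.264 on both sides of ffmpeg instead of muxing to MP4
./capture -F -d /dev/video0 -o \
  | ffmpeg -f h264 -i - -vcodec copy -f h264 - \
  | gst-launch-0.10 -v fdsrc ! v4l2sink device=/dev/video3
```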
How to pipe rawvideo to v4l2loopback using ffmpeg?
I see two issues here:

1. The "Unable to find a suitable output format" error is probably caused by a quoting/escaping issue within the command in your code; running the same command from a command-line interface executes properly.
2. Once you fix the quoting issue you will get another error: "V4L2 output device supports only a single raw video stream". This means you can't use libx264 to output to v4l2. Instead, let ffmpeg automatically choose rawvideo output:
ffmpeg -y -f rawvideo -video_size 1280x720 -pixel_format bgr24 -i - -f v4l2 /dev/video0
If the colors look wrong, either use a different `-pixel_format` value (see `ffmpeg -pix_fmts`), or the player does not support the output pixel format being used by v4l2. In that case add the `-vf format=yuv420p` output option (anywhere after `-i` and before `/dev/video0`).
More info:
- rawvideo demuxer (also `ffmpeg -h demuxer=rawvideo`)
- V4L2 output device
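Since the first issue was shell quoting inside your code, one way to sidestep it entirely is to hand ffmpeg its arguments as a list via subprocess, so no shell parsing happens at all. A minimal sketch (the device path, frame size, and pixel format are assumptions matching the command above):

```python
import subprocess

def ffmpeg_rawvideo_cmd(width, height, pix_fmt="bgr24", device="/dev/video0"):
    """Build the ffmpeg argument list; passing a list avoids shell quoting issues."""
    return [
        "ffmpeg", "-y",
        "-f", "rawvideo",
        "-video_size", f"{width}x{height}",
        "-pixel_format", pix_fmt,
        "-i", "-",            # read raw frames from stdin
        "-f", "v4l2", device,
    ]

if __name__ == "__main__":
    import numpy as np  # only needed here to synthesize a frame
    cmd = ffmpeg_rawvideo_cmd(1280, 720)
    proc = subprocess.Popen(cmd, stdin=subprocess.PIPE)
    frame = np.zeros((720, 1280, 3), dtype=np.uint8)  # one black BGR frame
    for _ in range(100):
        proc.stdin.write(frame.tobytes())
    proc.stdin.close()
    proc.wait()
```

Because `Popen` receives a list, arguments like `1280x720` reach ffmpeg exactly as written, with no quoting layer in between.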
When using ffmpeg as producer I cannot see v4l2loopback device in chrome
The problem is most likely that some software (probably including Chrome) is a bit picky about the supported colour formats.
Your ffmpeg command doesn't specify any colour format, so I guess it will take one that is easiest to convert to from the NDI stream. NDI supports a number of different formats (including rather exotic ones like P216), and it might well be that ffmpeg picks an output format that is not usable by Chrome.
On the other hand, your GStreamer pipeline uses a very specific format (I420).
Try enforcing the same format when using ffmpeg, e.g. using something like `-vf format=pix_fmts=yuv420p`.
See also https://github.com/umlaeute/v4l2loopback/wiki/Colorspace-Issues
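As a concrete sketch of that suggestion — the input here is a placeholder for your actual NDI source, so substitute your own input options:

```shell
# Force the same I420/yuv420p layout the GStreamer pipeline uses
ffmpeg -i <your-NDI-input> -vf format=pix_fmts=yuv420p -f v4l2 /dev/video0
```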
How to capture h264 with ffmpeg
As the error message states, it needs a single raw video stream, not a raw H.264 stream:
if (s1->nb_streams != 1 ||
    s1->streams[0]->codec->codec_type != AVMEDIA_TYPE_VIDEO ||
    s1->streams[0]->codec->codec_id != AV_CODEC_ID_RAWVIDEO) {
    av_log(s1, AV_LOG_ERROR,
           "V4L2 output device supports only a single raw video stream\n");
    return AVERROR(EINVAL);
}
FFmpeg/v4l2enc.c on GitHub
You are using `-input_format h264` with `-vcodec copy`. You must change it to `-vcodec rawvideo`, or omit it entirely, for the v4l2 output format. You might also need to set the correct pixel format.
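Putting that together, a corrected command might look like the following — a sketch only, assuming the camera is /dev/video0 and the loopback device is /dev/video1 (adjust to your setup):

```shell
# Decode the camera's H.264 into raw video before handing it to v4l2
ffmpeg -f v4l2 -input_format h264 -i /dev/video0 \
       -vcodec rawvideo -pix_fmt yuv420p \
       -f v4l2 /dev/video1
```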
Pipe numpy array to virtual video device
As Programmer wrote in his answer, it is possible to create a dummy device with the package v4l2loopback. Publishing images, videos, or the desktop to the dummy device was already easy with ffmpeg, but I wanted to pipe it directly from the Python script, where I capture the images, to the dummy device. I still think it's possible with ffmpeg-python, but I found this great answer from Alp which sheds light on the darkness. The package pyfakewebcam is a perfect solution for the problem.
For the sake of completeness, here is my extended minimal working example:
#!/usr/bin/env python3
import time
import cv2
import numpy as np
import pyfakewebcam
WIDTH = 1440
HEIGHT = 1080
DEVICE = '/dev/video0'
fake_cam = pyfakewebcam.FakeWebcam(DEVICE, WIDTH, HEIGHT)
window_name = 'virtual-camera'
cv2.namedWindow(window_name, cv2.WINDOW_GUI_EXPANDED)
img1 = np.random.uniform(0, 255, (HEIGHT, WIDTH, 3)).astype('uint8')
img2 = np.random.uniform(0, 255, (HEIGHT, WIDTH, 3)).astype('uint8')
for i in range(125):
    time.sleep(0.04)
    if i % 2:
        img = img1
    else:
        img = img2
    fake_cam.schedule_frame(img)
    cv2.imshow(window_name, img)
    cv2.waitKey(1)
cv2.destroyAllWindows()
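One caveat when feeding real camera frames instead of random noise: pyfakewebcam expects frames in RGB channel order, while OpenCV captures in BGR, so the channel axis needs to be reversed first (with noise it makes no visible difference). A minimal sketch of that conversion:

```python
import numpy as np

def bgr_to_rgb(frame):
    # Reverse the last (channel) axis: BGR -> RGB
    return frame[:, :, ::-1]

if __name__ == "__main__":
    bgr = np.zeros((2, 2, 3), dtype=np.uint8)
    bgr[..., 0] = 255  # fully "blue" in BGR order
    rgb = bgr_to_rgb(bgr)
    assert rgb[0, 0, 2] == 255  # blue ends up in the last channel of RGB
```

Call `fake_cam.schedule_frame(bgr_to_rgb(frame))` on frames coming from `cv2.VideoCapture` to get correct colours on the virtual device.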