How to Route Webcam Video to a Virtual Video Device on Linux (via OpenCV)

Unable to route webcam video to virtual video device on Linux (via OpenCV)

After a lot of research, I was finally able to develop a working solution. Several steps need to be performed, which I will discuss in detail below:

General

As described in my question above, the goal is to take the incoming stream of a webcam and forward it to a virtual video device, which in turn can be opened with tools like VLC. This is a first step toward further image manipulation.

1) v4l2loopback

v4l2loopback is a virtual video device (kernel module) for Linux. Sources can be downloaded from https://github.com/umlaeute/v4l2loopback. After downloading, the following steps must be performed in order to run it:

make
sudo make install
sudo depmod -a
sudo modprobe v4l2loopback

If you want to use this video device in Chrome (WebRTC) you need to execute the last line with an additional parameter:

sudo modprobe v4l2loopback exclusive_caps=1

Note that exclusive_caps is an array (one entry per device), so if the above doesn't work, try:

sudo modprobe v4l2loopback exclusive_caps=1,1,1,1,1,1,1,1

Information: It is important to note that the v4l2loopback device must be set to the same resolution as the one you want to use in the sample below. I have set the defines in the sample to Full HD, as you can see. If you want e.g. 800x600, you either need to change the default in the v4l2loopback code before compilation, or change the resolution when inserting the module via the additional command-line parameters max_width and max_height. By default the kernel module operates at a resolution of 640x480. You can get more details and all supported parameters by using:

modinfo v4l2loopback
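
For example, to load the module with a larger maximum resolution, the parameters can be passed on the modprobe command line. This is a sketch of the typical invocation (the module must not already be loaded, hence the -r first):

```shell
sudo modprobe -r v4l2loopback   # unload the module if it is already loaded
sudo modprobe v4l2loopback max_width=1920 max_height=1080 exclusive_caps=1
```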

2) OpenCV

OpenCV is a library which supports capturing and live video manipulation. For building OpenCV, please go to http://docs.opencv.org/3.0-beta/doc/tutorials/introduction/linux_install/linux_install.html which explains all the steps in detail.
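
For reference, a typical out-of-source build follows roughly these steps; the exact cmake options depend on your setup, and the linked tutorial is authoritative:

```shell
git clone https://github.com/opencv/opencv.git
cd opencv
mkdir build && cd build
cmake -D CMAKE_BUILD_TYPE=Release ..
make -j"$(nproc)"
sudo make install
```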

3) Sample code

You can build/run the sample code below in the following way (on newer systems the pkg-config package is named opencv4):

g++ -ggdb `pkg-config --cflags --libs opencv` sample.cpp -o sample
./sample

Here is the code:

#include <stdio.h>
#include <string.h>
#include <fcntl.h>
#include <errno.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>
#include "opencv2/opencv.hpp"

#define VIDEO_OUT "/dev/video0" // v4l2loopback device
#define VIDEO_IN  "/dev/video1" // Webcam

#define WIDTH  1920
#define HEIGHT 1080

int main ( int argc, char **argv ) {
    cv::VideoCapture cap;
    struct v4l2_format vid_format;
    size_t framesize = WIDTH * HEIGHT * 3;
    int fd = 0;

    if ( cap.open ( VIDEO_IN ) ) {
        cap.set ( cv::CAP_PROP_FRAME_WIDTH , WIDTH );
        cap.set ( cv::CAP_PROP_FRAME_HEIGHT, HEIGHT );
    } else {
        printf ( "Unable to open video input!\n" );
        return -1;
    }

    if ( (fd = open ( VIDEO_OUT, O_RDWR )) == -1 ) {
        printf ( "Unable to open video output!\n" );
        return -1;
    }

    memset ( &vid_format, 0, sizeof(vid_format) );
    vid_format.type = V4L2_BUF_TYPE_VIDEO_OUTPUT;

    if ( ioctl ( fd, VIDIOC_G_FMT, &vid_format ) == -1 )
        printf ( "Unable to get video format data. Errno: %d\n", errno );

    vid_format.fmt.pix.width = cap.get ( cv::CAP_PROP_FRAME_WIDTH );
    vid_format.fmt.pix.height = cap.get ( cv::CAP_PROP_FRAME_HEIGHT );
    vid_format.fmt.pix.pixelformat = V4L2_PIX_FMT_RGB24;
    vid_format.fmt.pix.sizeimage = framesize;
    vid_format.fmt.pix.field = V4L2_FIELD_NONE;

    if ( ioctl ( fd, VIDIOC_S_FMT, &vid_format ) == -1 )
        printf ( "Unable to set video format! Errno: %d\n", errno );

    cv::Mat frame ( cap.get ( cv::CAP_PROP_FRAME_HEIGHT ),
                    cap.get ( cv::CAP_PROP_FRAME_WIDTH ), CV_8UC3 );

    printf ( "Please open the virtual video device (/dev/video<x>) e.g. with VLC\n" );

    while (1) {
        cap >> frame;
        // Webcams usually deliver BGR in OpenCV, but we announced RGB24 above
        cv::cvtColor ( frame, frame, cv::COLOR_BGR2RGB );
        write ( fd, frame.data, framesize );
    }
}
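
Note that write() must be handed exactly sizeimage bytes per frame, and that value depends on the pixel format. The arithmetic can be sketched like this (frame_size is a made-up helper for illustration):

```python
def frame_size(width: int, height: int, bytes_per_pixel: int) -> int:
    """Bytes per frame for a packed pixel format."""
    return width * height * bytes_per_pixel

# RGB24 packs 3 bytes per pixel; YUYV (YUV 4:2:2) packs 2.
print(frame_size(1920, 1080, 3))  # RGB24 at Full HD -> 6220800
print(frame_size(1920, 1080, 2))  # YUYV at Full HD  -> 4147200
```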

Can I create a virtual webcam and stream data to it?

You don't need to fool OpenCV into thinking the file is a webcam. You just need to add a delay between each frame. This code will do that:

#include <cstdio>
#include <iostream>
#include "opencv2/core/core.hpp"
#include "opencv2/highgui/highgui.hpp"

using namespace cv;

int main(int argc, const char * argv[]) {

    VideoCapture cap;
    cap.open("/Users/steve/Development/opencv2/opencv_extra/testdata/python/videos/bmp24.avi");
    if (!cap.isOpened()) {
        printf("Unable to open video file\n");
        return -1;
    }
    Mat frame;
    namedWindow("video", 1);
    for (;;) {
        cap >> frame;
        if (!frame.data)
            break;
        imshow("video", frame);
        if (waitKey(30) >= 0) // Show each frame for 30 ms; quit on any keypress
            break;
    }

    return 0;
}

Edit: trying to read from a file being created by ffmpeg:

    for (;;) {
        cap >> frame;
        if (frame.data)
            imshow("video", frame); // Show frame if successfully loaded
        if (waitKey(30) == 27)      // Wait 30 ms; quit if user presses escape
            break;
    }

I'm not sure how it will handle getting a partial frame at the end of the file while ffmpeg is still creating it.

How to write/pipe to a virtual webcam created by V4L2loopback module?

I found an answer on the old v4l2loopback module's Google Code page.

http://code.google.com/p/v4l2loopback/source/browse/test.c

newer link: https://github.com/umlaeute/v4l2loopback/blob/master/examples/test.c

This has so far helped me to write to the device.

How to write an opencv image (IplImage) to a V4L2 loopback device?

It turned out that the above issue was not actually a problem: luvcview and Skype apparently do not fully support v4l2. If I view the loopback device with VLC, it works fine.

How to render images to /dev/video0 using v4l2loopback?

After a lot of hacking around, I've managed to generate a valid YUYV video/image to send to /dev/video0.

First I make a buffer to hold the frame:

// Allocate a buffer for the YUYV frame
std::vector<uint8_t> buffer;
buffer.resize(vid_format.fmt.pix.sizeimage);

Then I write the current canvas to the buffer in YUYV format.

bool skip = true;
cimg_forXY(canvas, cx, cy) {
    size_t row = cy * width * 2;
    uint8_t r, g, b, y;
    r = canvas(cx, cy, 0);
    g = canvas(cx, cy, 1);
    b = canvas(cx, cy, 2);

    y = std::clamp<uint8_t>(r * .299000 + g * .587000 + b * .114000, 0, 255);
    buffer[row + cx * 2] = y;
    if (!skip) {
        uint8_t u, v;
        u = std::clamp<uint8_t>(r * -.168736 + g * -.331264 + b * .500000 + 128, 0, 255);
        v = std::clamp<uint8_t>(r * .500000 + g * -.418688 + b * -.081312 + 128, 0, 255);
        buffer[row + (cx - 1) * 2 + 1] = u;
        buffer[row + (cx - 1) * 2 + 3] = v;
    }
    skip = !skip;
}
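
The same Y0-U-Y1-V packing can be sketched in Python to sanity-check the BT.601 coefficients (the function name is illustrative, not part of any library; rounding here differs slightly from the truncating uint8_t conversion above):

```python
def rgb_to_yuyv_pair(p0, p1):
    """Pack two RGB pixels into 4 YUYV bytes (BT.601, chroma taken from the first pixel)."""
    def clamp(x):
        return max(0, min(255, round(x)))
    r, g, b = p0
    y0 = clamp(r * .299 + g * .587 + b * .114)
    u  = clamp(r * -.168736 + g * -.331264 + b * .5 + 128)
    v  = clamp(r * .5 + g * -.418688 + b * -.081312 + 128)
    r, g, b = p1
    y1 = clamp(r * .299 + g * .587 + b * .114)
    return [y0, u, y1, v]

# A mid-gray pair yields mid luma and neutral chroma:
print(rgb_to_yuyv_pair((128, 128, 128), (128, 128, 128)))  # [128, 128, 128, 128]
```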

Note:
CImg has RGBtoYUV, an in-place RGB-to-YUV conversion, but for some reason calling it on a uint8_t canvas just zeros it.

It also has get_YUVtoRGB, which (allocates and) returns a CImg<float> canvas; I think you multiply each value by 255 to scale it to a byte, but whatever I tried did not give the correct colour. Edit: I likely forgot the +128 bias (though I still prefer not reallocating for each frame).

My full code is here (if anyone wants to do something similar) https://gist.github.com/MacDue/36199c3f3ca04bd9fd40a1bc2067ef72

Create openCV VideoCapture from interface name instead of camera numbers

import os
import re
import subprocess

import cv2

device_re = re.compile(r"Bus\s+(?P<bus>\d+)\s+Device\s+(?P<device>\d+).+ID\s(?P<id>\w+:\w+)\s(?P<tag>.+)$", re.I)
df = subprocess.check_output("lsusb", shell=True).decode("utf-8")
for line in df.split('\n'):
    if line:
        info = device_re.match(line)
        if info:
            dinfo = info.groupdict()
            if "Logitech, Inc. Webcam C270" in dinfo['tag']:
                print("Camera found.")
                bus = dinfo['bus']
                device = dinfo['device']
                break

device_index = None
for file in os.listdir("/sys/class/video4linux"):
    real_file = os.path.realpath("/sys/class/video4linux/" + file)
    print(real_file)
    print("/" + bus[-1] + "-" + device[-1] + "/")
    if "/" + bus[-1] + "-" + device[-1] + "/" in real_file:
        device_index = real_file[-1]
        print("Hurray, device index is " + device_index)

camera = cv2.VideoCapture(int(device_index))

while True:
    (grabbed, frame) = camera.read()  # Grab the current frame
    cv2.imshow("Camera", frame)
    key = cv2.waitKey(1) & 0xFF

First, search for the desired string in the USB device list and note the bus and device numbers.

Then find the symbolic link under the video4linux directory, extract the device index from the real path, and pass it to VideoCapture.
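
The lsusb parsing step can be verified in isolation against a sample output line (the sample line below is made up, but follows the standard lsusb format):

```python
import re

device_re = re.compile(
    r"Bus\s+(?P<bus>\d+)\s+Device\s+(?P<device>\d+).+ID\s(?P<id>\w+:\w+)\s(?P<tag>.+)$",
    re.I)

sample = "Bus 001 Device 004: ID 046d:0825 Logitech, Inc. Webcam C270"
info = device_re.match(sample)
print(info.groupdict())
# {'bus': '001', 'device': '004', 'id': '046d:0825', 'tag': 'Logitech, Inc. Webcam C270'}
```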


