How to open a GStreamer pipeline from OpenCV with VideoWriter

Before using OpenCV's GStreamer API, we need a working pipeline using the GStreamer command-line tool.

Sender: The OP is using JPEG encoding, so this pipeline uses the same encoding.

gst-launch-1.0 -v v4l2src \
! video/x-raw,format=YUY2,width=640,height=480 \
! jpegenc \
! rtpjpegpay \
! udpsink host=127.0.0.1 port=5000

Receiver: The sink caps of rtpjpegdepay need to match the src caps of the rtpjpegpay in the sender pipeline.

gst-launch-1.0 -v udpsrc port=5000 \
! application/x-rtp, media=video, clock-rate=90000, encoding-name=JPEG, payload=26 \
! rtpjpegdepay \
! jpegdec \
! xvimagesink sync=0

Now that we have working pipelines for sender and receiver, we can port them to OpenCV.

Sender:

void sender()
{
    // VideoCapture: get frames with the 'v4l2src' plugin in 'BGR' format, because
    // the VideoWriter class expects a 3-channel image since we are sending colored images.
    // Both 'YUY2' and 'I420' are single-channel images.
    VideoCapture cap("v4l2src ! video/x-raw,format=BGR,width=640,height=480,framerate=30/1 ! appsink", CAP_GSTREAMER);

    // VideoWriter: 'videoconvert' converts the 'BGR' images into 'YUY2' raw frames to be fed to
    // the 'jpegenc' encoder, since 'jpegenc' does not accept 'BGR' images. The 'videoconvert' is not
    // in the original pipeline, because there we read frames in 'YUY2' format from 'v4l2src'.
    VideoWriter out("appsrc ! videoconvert ! video/x-raw,format=YUY2,width=640,height=480,framerate=30/1 ! jpegenc ! rtpjpegpay ! udpsink host=127.0.0.1 port=5000", CAP_GSTREAMER, 0, 30, Size(640, 480), true);

    if (!cap.isOpened() || !out.isOpened())
    {
        cout << "VideoCapture or VideoWriter not opened" << endl;
        exit(-1);
    }

    Mat frame;

    while (true) {
        cap.read(frame);

        if (frame.empty())
            break;

        out.write(frame);

        imshow("Sender", frame);
        if (waitKey(1) == 's')
            break;
    }
    destroyWindow("Sender");
}

Receiver:

void receiver()
{
    // The sink caps for the 'rtpjpegdepay' need to match the src caps of the 'rtpjpegpay' of the sender pipeline.
    // Added 'videoconvert' at the end to convert the images into the proper format for appsink; without
    // 'videoconvert' the receiver will not read the frames, even though 'videoconvert' is not present
    // in the original working pipeline.
    VideoCapture cap("udpsrc port=5000 ! application/x-rtp,media=video,payload=26,clock-rate=90000,encoding-name=JPEG,framerate=30/1 ! rtpjpegdepay ! jpegdec ! videoconvert ! appsink", CAP_GSTREAMER);

    if (!cap.isOpened())
    {
        cout << "VideoCapture not opened" << endl;
        exit(-1);
    }

    Mat frame;

    while (true) {
        cap.read(frame);

        if (frame.empty())
            break;

        imshow("Receiver", frame);
        if (waitKey(1) == 'r')
            break;
    }
    destroyWindow("Receiver");
}
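The two functions above translate directly to Python, and since only the pipeline strings differ between setups, it can help to build them from parameters. A minimal sketch; the helper names (`sender_pipeline`, `receiver_pipeline`) and defaults are assumptions, not part of OpenCV:

```python
def sender_pipeline(host="127.0.0.1", port=5000, width=640, height=480, fps=30):
    # appsrc side for a cv2.VideoWriter; caps mirror the sender pipeline above
    return ("appsrc ! videoconvert "
            f"! video/x-raw,format=YUY2,width={width},height={height},framerate={fps}/1 "
            f"! jpegenc ! rtpjpegpay ! udpsink host={host} port={port}")

def receiver_pipeline(port=5000):
    # udpsrc side for a cv2.VideoCapture; sink caps must match rtpjpegpay's src caps
    return (f"udpsrc port={port} "
            "! application/x-rtp,media=video,payload=26,clock-rate=90000,encoding-name=JPEG "
            "! rtpjpegdepay ! jpegdec ! videoconvert ! appsink")
```

These strings would then be passed as `cv2.VideoWriter(sender_pipeline(), cv2.CAP_GSTREAMER, 0, 30, (640, 480), True)` and `cv2.VideoCapture(receiver_pipeline(), cv2.CAP_GSTREAMER)`.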

Opening a GStreamer pipeline from OpenCV with VideoWriter

I encountered a similar problem before. Since the pipe/file name ends with .mkv, OpenCV interprets it as a video file instead of a pipe.

You can try ending it with a dummy space after the .mkv

video.open("appsrc ! autovideoconvert ! omxh265enc ! matroskamux ! filesink location=test.mkv ", 0, (double)25, cv::Size(1024, 1024), true);

or with a dummy property like

video.open("appsrc ! autovideoconvert ! omxh265enc ! matroskamux ! filesink location=test.mkv sync=false", 0, (double)25, cv::Size(1024, 1024), true);
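A small guard can make the intent explicit. This is a hypothetical helper (`ensure_pipeline`) built on the workaround above, not an OpenCV API:

```python
def ensure_pipeline(desc):
    # If the string ends with something that looks like a file name (here .mkv),
    # OpenCV may treat the whole string as a video file rather than a pipeline.
    # Appending a harmless property to the last element (sync=false on filesink)
    # avoids that, per the workaround above.
    if desc.rstrip().endswith(".mkv"):
        return desc.rstrip() + " sync=false"
    return desc
```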

Running a GStreamer pipeline from the OpenCV VideoWriter API to stream continuous images to hlssink

OpenCV's VideoWriter only supports BGR frames on its GStreamer interface. VideoCapture probably also converts images to BGR.

So you don't need to decode JPEG in your GStreamer pipeline. However, x264enc does not always accept BGR as input, so you should add videoconvert between appsrc and x264enc.

t = cv2.VideoWriter('appsrc ! videoconvert ! x264enc tune=zerolatency ! '
                    'mpegtsmux ! hlssink location=/var/www/segment-%05d.ts '
                    'playlist-location=/var/www/index.m3u8 max-files=20 target-duration=15',
                    cv2.CAP_GSTREAMER, 0, framerate, (640, 480))

GStreamer stream is not working with OpenCV

For using the GStreamer backend, OpenCV's VideoCapture expects a valid pipeline string from your source to appsink (BGR format for color).

Your pipeline strings are not correct, mainly because they start with the command (gst-launch-1.0, playbin) that you would use in a shell for running them.

You may instead try this pipeline for reading an H264-encoded video from RTP/UDP, decoding with the dedicated HW NVDEC, then copying from NVMM memory into system memory while converting into BGRx format, then using CPU-based videoconvert for the BGR format expected by the OpenCV appsink:

    const char* context = "udpsrc port=5000 caps=application/x-rtp,media=video,encoding-name=H264 ! rtph264depay ! h264parse ! nvv4l2decoder ! nvvidconv ! video/x-raw,format=BGRx ! videoconvert ! video/x-raw,format=BGR ! appsink drop=1";

Or, with uridecodebin, the output may be in NVMM memory if an NV decoder has been selected, or in system memory otherwise; so the first nvvidconv instance copies to NVMM memory, then the second nvvidconv converts into BGRx with HW and outputs into system memory:

    const char* local_context = "uridecodebin uri=file:///home/nvidia/repos/APPIDE/vidtest/THERMAL/thermalVideo.avi ! nvvidconv ! video/x-raw(memory:NVMM) ! nvvidconv ! video/x-raw,format=BGRx ! videoconvert ! video/x-raw,format=BGR ! appsink drop=1";

Note for high resolutions that:

  • CPU-based videoconvert may be a bottleneck. Enable all cores and boost the clocks.
  • OpenCV's imshow may not be that fast, depending on your OpenCV build's graphical backend (GTK, QT4, QT5, ...). In that case, a solution is to use an OpenCV VideoWriter with the GStreamer backend to output to a GStreamer video sink.
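For the second point, a display pipeline for such a VideoWriter might look like the sketch below; the sink element is platform-dependent (`xvimagesink`, `glimagesink`, `nv3dsink` on Jetson, ...), and the helper name is an assumption:

```python
def display_pipeline(sink="xvimagesink"):
    # Push BGR frames from OpenCV straight to a GStreamer video sink instead of
    # cv2.imshow; sync=false avoids blocking on the clock, queue decouples appsrc.
    return f"appsrc ! queue ! videoconvert ! {sink} sync=false"
```

It would be opened with `cv2.VideoWriter(display_pipeline(), cv2.CAP_GSTREAMER, 0, fps, (width, height), True)`; each `write()` then renders one frame to the sink.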

Issue with configuration of cv2.VideoWriter and GStreamer

You were very close to the solution. The problem lies in the warning you yourself noticed: warning: Invalid component. The RTP JPEG payloader gets stuck because it does not support the video format it is getting.

However, I initially missed what you wrote and went into full debug mode on the problem.

So let's keep the debugging how-to here for others with similar problems:

1. First debugging step: check with Wireshark whether the receiving machine is getting UDP packets on port 12344. Nope, it does not.

2. Would this work without the OpenCV parts? Let's check by replacing the OpenCV logic with some arbitrary processing, say a rotation of the video, and eliminating appsrc/appsink to simplify.

Then I used this:

GST_DEBUG=3 gst-launch-1.0 udpsrc port=12345 ! application/x-rtp-stream,encoding-name=JPEG ! rtpstreamdepay ! rtpjpegdepay ! jpegdec ! videoconvert ! rotate angle=0.45 ! videoconvert ! jpegenc ! rtpjpegpay ! rtpstreampay ! queue ! udpsink host=[my ip] port=12344

Hm now I get weird warnings like:

0:00:00.174424533 90722 0x55cb38841060 WARN              rtpjpegpay gstrtpjpegpay.c:596:gst_rtp_jpeg_pay_read_sof:<rtpjpegpay0> warning: Invalid component
WARNING: from element /GstPipeline:pipeline0/GstRtpJPEGPay:rtpjpegpay0: Invalid component

3. A quick search yielded a GStreamer forum page describing this warning.

4. When I added video/x-raw,format=I420 after videoconvert, it started working and my second machine started receiving the UDP packets.

5. So the solution to your problem is simply to limit jpegenc to a specific video format that the subsequent RTP payloader can handle:

#!/usr/bin/python3

import signal, cv2
from multiprocessing import Process, Pipe

is_running = True

def signal_handler(sig, frame):
    global is_running
    print("Program was interrupted - terminating ...")
    is_running = False

def produce(pipe):
    global is_running
    video_in = cv2.VideoCapture("udpsrc port=12345 ! application/x-rtp-stream,encoding-name=JPEG ! rtpstreamdepay ! rtpjpegdepay ! jpegdec ! videoconvert ! appsink", cv2.CAP_GSTREAMER)

    while is_running:
        ret, frame = video_in.read()
        if not ret:
            break
        print("Receiving frame ...")

        pipe.send(frame)

    video_in.release()

if __name__ == "__main__":
    consumer_pipe, producer_pipe = Pipe()

    signal.signal(signal.SIGINT, signal_handler)
    producer = Process(target=produce, args=(producer_pipe,))

    # the only edit is here, added video/x-raw capsfilter: <-------
    video_out = cv2.VideoWriter("appsrc ! videoconvert ! video/x-raw,format=I420 ! jpegenc ! rtpjpegpay ! rtpstreampay ! udpsink host=[receiver ip] port=12344", cv2.CAP_GSTREAMER, 0, 24, (800, 600), True)
    producer.start()

    while is_running:
        frame = consumer_pipe.recv()
        rr = video_out.write(frame)
        print("Sending frame ...")
        print(rr)

    video_out.release()
    producer.join()

How to write an OpenCV Mat to a GStreamer pipeline?

After hours of searching and testing, I finally got the answer.
The key is to use only videoconvert after appsrc, no need to set caps. Therefore, a writer pipeline would look like appsrc ! videoconvert ! x264enc ! mpegtsmux ! udpsink host=localhost port=5000.

Following is a sample code that reads images from a GStreamer pipeline, does some OpenCV image processing, and writes it back to the pipeline.

With this method, you can easily add any OpenCV processing to a GStreamer pipeline.

// Compile with: $ g++ opencv_gst.cpp -o opencv_gst `pkg-config --cflags --libs opencv`

#include <stdio.h>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/opencv.hpp>

int main(int argc, char** argv) {

    // Original gstreamer pipeline:
    // == Sender ==
    // gst-launch-1.0 v4l2src
    // ! video/x-raw, framerate=30/1, width=640, height=480, format=RGB
    // ! videoconvert
    // ! x264enc noise-reduction=10000 tune=zerolatency byte-stream=true threads=4
    // ! mpegtsmux
    // ! udpsink host=localhost port=5000
    //
    // == Receiver ==
    // gst-launch-1.0 -ve udpsrc port=5000
    // ! tsparse ! tsdemux
    // ! h264parse ! avdec_h264
    // ! videoconvert
    // ! ximagesink sync=false

    // first part of sender pipeline
    cv::VideoCapture cap("v4l2src ! video/x-raw, framerate=30/1, width=640, height=480, format=RGB ! videoconvert ! appsink", cv::CAP_GSTREAMER);
    if (!cap.isOpened()) {
        printf("=ERR= can't create video capture\n");
        return -1;
    }

    // second part of sender pipeline; the port must match the receiver's udpsrc (5000)
    cv::VideoWriter writer;
    writer.open("appsrc ! videoconvert ! x264enc noise-reduction=10000 tune=zerolatency byte-stream=true threads=4 ! mpegtsmux ! udpsink host=localhost port=5000",
                cv::CAP_GSTREAMER, 0, (double)30, cv::Size(640, 480), true);
    if (!writer.isOpened()) {
        printf("=ERR= can't create video writer\n");
        return -1;
    }

    cv::Mat frame;

    while (true) {

        cap >> frame;
        if (frame.empty())
            break;

        /* Process the frame here */

        writer << frame;
        cv::waitKey(30);
    }
}

Hope this helps. ;)


