Using custom camera in OpenCV (via GStreamer)
It looks like we can open the camera using a proper GStreamer pipeline like the one below:
VideoCapture cap("mfw_v4lsrc ! ffmpegcolorspace ! video/x-raw-rgb ! appsink");
Since the camera output is YUV, we need to convert it to RGB before passing the frames to OpenCV.
The ffmpegcolorspace element is what ensures OpenCV receives an RGB colorspace.
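As an illustrative sketch (not part of the original answer), the pipeline string can be assembled programmatically; note that mfw_v4lsrc and ffmpegcolorspace are GStreamer 0.10-era elements from the Freescale i.MX plugin set, and this assumes an OpenCV build with GStreamer support:

```python
# Hedged sketch: build the YUV-to-RGB capture pipeline string. The element
# names (mfw_v4lsrc, ffmpegcolorspace) are GStreamer 0.10-era / Freescale
# i.MX specific; adjust them for your platform.
def yuv_to_rgb_pipeline(src="mfw_v4lsrc"):
    return f"{src} ! ffmpegcolorspace ! video/x-raw-rgb ! appsink"

# Usage (requires OpenCV built with GStreamer):
# import cv2
# cap = cv2.VideoCapture(yuv_to_rgb_pipeline(), cv2.CAP_GSTREAMER)
# ok, frame = cap.read()  # frames arrive already converted to RGB
```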
Access Camera using OpenCV (via GStreamer)
Here is the answer to my question (with @Alper Kucukkomurler's help).
You can access the MIPI camera through OpenCV (with GStreamer) with
VideoCapture cap("imxv4l2videosrc device=\"/dev/video0\" ! videoconvert ! appsink");
Also, if you want to change the resolution of the input, you can use the imx-capture-mode parameter of the imxv4l2videosrc element.
For example,
imxv4l2videosrc imx-capture-mode=5 ! <other elements>
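Putting those pieces together might look like the following sketch (a hypothetical helper, assuming an i.MX board and OpenCV built with GStreamer; the valid capture-mode numbers are device-specific):

```python
# Hedged sketch: assemble the imxv4l2videosrc pipeline, optionally with the
# imx-capture-mode parameter described above (mode numbers are device-specific;
# check your i.MX capture driver documentation).
def imx_pipeline(device="/dev/video0", capture_mode=None):
    src = f"imxv4l2videosrc device={device}"
    if capture_mode is not None:
        src += f" imx-capture-mode={capture_mode}"
    return f"{src} ! videoconvert ! appsink"

# Usage:
# import cv2
# cap = cv2.VideoCapture(imx_pipeline(capture_mode=5), cv2.CAP_GSTREAMER)
```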
Streaming OpenCV VideoCapture frames using GStreamer in Python for webcam
Not sure this will solve your case, but the following may help:
- There seems to be a typo in the camera capture: enable-max-performance=1 is not valid in video caps. It is actually a plugin property (probably from an encoder). It may also be better to set a framerate, in case your camera driver provides other framerates at this resolution; otherwise you'll face a mismatch with the writer's fps.
camSet='v4l2src device=/dev/video0 ! video/x-raw,width=640,height=360,framerate=52/1 ! nvvidconv flip-method='+str(flip)+' ! video/x-raw(memory:NVMM), format=I420, width=640, height=360 ! nvvidconv ! video/x-raw, format=BGRx ! videoconvert ! video/x-raw, format=BGR ! queue ! appsink drop=1'
- Multicast may hog the Wi-Fi. It is better to stream to your receiver only:
... ! udpsink host=<receiver_IP> port=8000 auto-multicast=0
You would then receive on the receiver host with:
'udpsrc port=8000 auto-multicast=0 ! application/x-rtp,media=video,encoding-name=H264 ! rtpjitterbuffer latency=300 ! rtph264depay ! decodebin ! videoconvert ! video/x-raw,format=BGR ! appsink drop=1'
# Variant for NVIDIA:
'udpsrc port=8000 auto-multicast=0 ! application/x-rtp,media=video,encoding-name=H264 ! rtpjitterbuffer latency=300 ! rtph264depay ! decodebin ! nvvidconv ! video/x-raw,format=BGRx ! videoconvert ! video/x-raw,format=BGR ! appsink drop=1'
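The receiver line can be parameterized in the same spirit (a sketch assuming an H264 RTP stream; the NVIDIA variant swaps in nvvidconv as shown above):

```python
# Hedged sketch: build the unicast receiver pipeline (auto-multicast=0).
# `latency` is the rtpjitterbuffer depth in milliseconds.
def receiver_pipeline(port=8000, latency=300, nvidia=False):
    # On NVIDIA boards, convert on the GPU first, then hand BGRx to videoconvert.
    convert = ("nvvidconv ! video/x-raw,format=BGRx ! videoconvert"
               if nvidia else "videoconvert")
    return (
        f"udpsrc port={port} auto-multicast=0 ! "
        f"application/x-rtp,media=video,encoding-name=H264 ! "
        f"rtpjitterbuffer latency={latency} ! rtph264depay ! decodebin ! "
        f"{convert} ! video/x-raw,format=BGR ! appsink drop=1"
    )

# Usage:
# import cv2
# cap = cv2.VideoCapture(receiver_pipeline(), cv2.CAP_GSTREAMER)
```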
- When using the GStreamer backend, note that the fourcc code is ignored.
You may use a HW-accelerated encoder such as:
gst_str_rtp = "appsrc ! video/x-raw,format=BGR ! queue ! videoconvert ! video/x-raw,format=BGRx ! nvvidconv ! video/x-raw(memory:NVMM),format=NV12,width=640,height=360,framerate=52/1 ! nvv4l2h264enc insert-sps-pps=1 insert-vui=1 idrinterval=30 ! h264parse ! rtph264pay ! udpsink host=<receiver_IP> port=8000 auto-multicast=0"
out = cv2.VideoWriter(gst_str_rtp, cv2.CAP_GSTREAMER, 0, float(52), (frame_width, frame_height), True)
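For completeness, a sketch of the sending loop around that writer (receiver_IP stays a placeholder; the size check is a hypothetical helper, since frames whose size differs from what the writer was opened with will not be written correctly):

```python
# Hypothetical helper: a VideoWriter opened with (width, height) expects
# frames of exactly that size; NumPy frames are (height, width, channels).
def frame_matches_writer(frame_shape, size):
    h, w = frame_shape[:2]
    return (w, h) == size

# Sending-loop sketch (assumes `cap` and `gst_str_rtp` from above, and an
# OpenCV build with GStreamer):
# import cv2
# out = cv2.VideoWriter(gst_str_rtp, cv2.CAP_GSTREAMER, 0, 52.0,
#                       (640, 360), True)
# while True:
#     ok, frame = cap.read()
#     if not ok:
#         break
#     assert frame_matches_writer(frame.shape, (640, 360))
#     out.write(frame)  # frame must be BGR, matching the appsrc caps
```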
Calling Gstreamer inside openCV
Actually, you can't use the GStreamer API through OpenCV. What OpenCV has is a series of wrapper functions (for instance, cvCaptureFromCam) that implement their functionality through external multimedia libraries. Aside from GStreamer, OpenCV may use other libraries such as FFmpeg or V4L; in fact, if you check the complete list of files related to multimedia capture through the different external libraries, you will find:
(in opencv/modules/highgui/src)
cap_cmu.cpp
cap_dc1394.cpp
cap_ffmpeg.cpp
cap_gstreamer.cpp
...
So, if you compile OpenCV with GStreamer support, you will call the same highgui functions (such as cvCaptureFromCam), but at a lower level they will call functions like cvCreateCapture_GStreamer, which implement the calls to the GStreamer API. That does not mean you can call those low-level functions yourself (hence the "was not declared in this scope" error).
Hope that it helps!
EDITED:
Take a look at the cap.cpp file in the OpenCV source, and notice the different options for CvCreateCameraCapture_XXX. It makes me think that you should be able to open your camera without some of the dependencies (by using others instead).
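None of this works unless OpenCV was actually compiled with GStreamer support, so a quick sanity check is to look for the GStreamer line in cv2.getBuildInformation() (the parsing helper below is illustrative, not part of the OpenCV API):

```python
import re

# Illustrative helper: cv2.getBuildInformation() returns a text table that
# contains a line like "GStreamer: YES (1.16.3)" or "GStreamer: NO".
def has_gstreamer(build_info):
    m = re.search(r"GStreamer:\s*(\S+)", build_info)
    return bool(m) and m.group(1).upper() == "YES"

# Usage:
# import cv2
# print(has_gstreamer(cv2.getBuildInformation()))
```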
Python with Gstreamer pipeline
The Ubuntu/Debian packaged version is the old 2.4.x; to get the latest one you need to compile it from source.
Here are two tutorials on how to do that:
- https://opencv-python-tutroals.readthedocs.io/en/latest/py_tutorials/py_setup/py_setup_in_fedora/py_setup_in_fedora.html#installing-opencv-from-source
- http://www.pyimagesearch.com/2015/07/20/install-opencv-3-0-and-python-3-4-on-ubuntu/
The first is for Python 2.7 on Fedora, the second for Python 3.4 on Ubuntu.