Permanent Fix for Opencv Videocapture

This answer is written with Linux and Python in mind, but the general idea can be applied to any OS and language supported by OpenCV.

The VideoCapture class failing to open a video file can have many causes, but the following three cover most cases.

OpenCV FFMPEG support:

By default, OpenCV uses FFMPEG to read video files, but OpenCV may not have been built with FFMPEG support. To find out whether it was, enter the following in a terminal:

python -c "import cv2; print(cv2.getBuildInformation())" | grep -i ffmpeg

The output should be something like:

FFMPEG: YES

If the output is NO, follow an online guide to build OpenCV from source with FFMPEG support.

FFMPEG Codec:

It's possible that FFMPEG does not have a codec for your specific file. We are going to use this video as an example to show how to check whether FFMPEG can decode it.

First, we need to find the encoding format used in the video file. We will be using the mediainfo program. In a terminal, enter:

mediainfo video_file.mp4

In the output, under the Video heading, look for Format. In this case the video encoding used is AVC, which is another name for H.264.


Now we check whether FFMPEG has a codec for decoding AVC-encoded files. In a terminal:

ffmpeg -codecs | grep -i avc

On my machine, the output is:

DEV.LS h264 H.264 / AVC / MPEG-4 AVC / MPEG-4 part 10 (decoders: h264 h264_crystalhd h264_vdpau ) (encoders: libx264 libx264rgb )

We are interested in the DEV flags, which stand for Decoding supported, Encoding supported and Video codec. This means that AVC is a video format and FFMPEG has both decoding and encoding support for it.

File path:

Lastly, check that the file path is correct and that the file actually exists.
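
A quick Python check can rule this out. This is only a minimal sketch; video_file.mp4 is a placeholder path:

import os
import cv2

path = 'video_file.mp4'  # placeholder: use your actual file path

if not os.path.isfile(path):
    raise FileNotFoundError(f'No such file: {path}')

cap = cv2.VideoCapture(path)
if not cap.isOpened():
    # The file exists, so the failure is likely a missing codec or an
    # OpenCV build without FFMPEG support (see the sections above).
    raise RuntimeError(f'OpenCV could not open {path}')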

OpenCV VideoCapture lag due to the capture buffer

OpenCV Solution

According to this source, you can set the buffer size of a cv::VideoCapture object.

cv::VideoCapture cap;
cap.set(CV_CAP_PROP_BUFFERSIZE, 3); // internal buffer will now store only 3 frames

// rest of your code...

There is an important limitation however:

CV_CAP_PROP_BUFFERSIZE Amount of frames stored in internal buffer memory (note: only supported by DC1394 v 2.x backend currently)

Update from the comments: in newer versions of OpenCV (3.4+), the limitation seems to be gone and the code uses scoped enumerations:

cv::VideoCapture cap;
cap.set(cv::CAP_PROP_BUFFERSIZE, 3);
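
For completeness, here is the equivalent call from Python. Treat it as a sketch rather than a guaranteed fix: cap.set() returns a boolean, but even a True return does not guarantee that the backend actually honors the requested buffer size.

import cv2

cap = cv2.VideoCapture(0)  # assumption: default camera at index 0
ok = cap.set(cv2.CAP_PROP_BUFFERSIZE, 3)  # ask for a 3-frame internal buffer
if not ok:
    print('CAP_PROP_BUFFERSIZE not supported by this backend')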

Hackaround 1

If the solution does not work, take a look at this post that explains how to hack around the issue.

In a nutshell: the time needed to query a frame is measured; if it is too low, it means the frame was read from the buffer and can be discarded. Continue querying frames until the time measured exceeds a certain limit. When this happens, the buffer was empty and the returned frame is up to date.

(The answer on the linked post shows: returning a frame from the buffer takes about 1/8th the time of returning an up to date frame. Your mileage may vary, of course!)
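
A rough Python sketch of that idea; the 0.03-second threshold is an assumption and would need to be tuned for your camera:

import time
import cv2

cap = cv2.VideoCapture(0)  # assumption: default camera at index 0

def read_latest_frame(cap, threshold=0.03):
    # Frames that come back faster than the threshold were sitting in the
    # buffer; keep discarding them until a read actually has to wait.
    while True:
        start = time.time()
        ret, frame = cap.read()
        if not ret or time.time() - start > threshold:
            return ret, frame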


Hackaround 2

A different solution, inspired by this post, is to create a third thread that grabs frames continuously at high speed to keep the buffer empty. This thread should use cv::VideoCapture::grab() to avoid the decoding overhead of a full read.

You could use a simple spin-lock to synchronize reading frames between the real worker thread and the third thread.
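
A minimal Python sketch of this approach; the FrameGrabber class is my own naming, and an ordinary lock is used here instead of a spin-lock for brevity:

import threading
import cv2

class FrameGrabber:
    def __init__(self, src=0):
        self.cap = cv2.VideoCapture(src)
        self.lock = threading.Lock()
        threading.Thread(target=self._drain, daemon=True).start()

    def _drain(self):
        # Continuously grab() frames so the internal buffer never fills up.
        while self.cap.isOpened():
            with self.lock:
                self.cap.grab()

    def read(self):
        # Decode only the most recently grabbed frame.
        with self.lock:
            return self.cap.retrieve()

grabber = FrameGrabber(0)   # assumption: default camera at index 0
ret, frame = grabber.read()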

OpenCV real time streaming video capture is slow. How to drop frames or get synced with real time?

My hypothesis is that the jitter is most likely due to network limitations and occurs when a frame packet is dropped. When a frame is dropped, the program keeps displaying the last "good" frame, which makes the display appear to freeze. This is probably a hardware or bandwidth issue, but we can alleviate some of it in software. Here are some possible changes:

1. Set maximum buffer size

We set the cv2.VideoCapture() object to have a limited buffer size with the cv2.CAP_PROP_BUFFERSIZE parameter. The idea is that by limiting the buffer, we will always have the latest frame. This can also help to alleviate the problem of frames randomly jumping ahead.

2. Set frame retrieval delay

Currently, I believe read() is reading too fast even though it is in its own dedicated thread. This may be one reason why all the frames appear to pool up and then suddenly burst in the next frame. For instance, in a one-second interval it may produce 15 new frames, but in the next one-second interval only 3 frames are returned. This may be due to network packet loss, so to ensure a constant frame rate, we simply add a delay in the frame retrieval thread. A delay tuned for roughly ~30 FPS does a good job of "normalizing" the frame rate and smoothing the transition between frames in case there is packet loss.

Note: We should try to match the frame rate of the stream, but I'm not sure what the FPS of the webcam is, so I just guessed 30 FPS. Also, there is usually a "direct" stream link instead of going through an intermediate web server, which can greatly improve performance.


If you try using a saved .mp4 video file, you will notice that there is no jitter. This confirms my suspicion that the problem is most likely due to network latency.

from threading import Thread
import cv2, time

class ThreadedCamera(object):
    def __init__(self, src=0):
        self.capture = cv2.VideoCapture(src)
        self.capture.set(cv2.CAP_PROP_BUFFERSIZE, 2)

        # FPS = 1/X
        # X = desired FPS
        self.FPS = 1/30
        self.FPS_MS = int(self.FPS * 1000)

        # Start frame retrieval thread
        self.thread = Thread(target=self.update, args=())
        self.thread.daemon = True
        self.thread.start()

    def update(self):
        while True:
            if self.capture.isOpened():
                (self.status, self.frame) = self.capture.read()
            time.sleep(self.FPS)

    def show_frame(self):
        cv2.imshow('frame', self.frame)
        cv2.waitKey(self.FPS_MS)

if __name__ == '__main__':
    src = 'https://videos3.earthcam.com/fecnetwork/9974.flv/chunklist_w1421640637.m3u8'
    threaded_camera = ThreadedCamera(src)
    while True:
        try:
            threaded_camera.show_frame()
        except AttributeError:
            pass

Related camera/IP/RTSP/streaming, FPS, video, threading, and multiprocessing posts

  1. Python OpenCV streaming from camera - multithreading, timestamps

  2. Video Streaming from IP Camera in Python Using OpenCV cv2.VideoCapture

  3. How to capture multiple camera streams with OpenCV?

  4. OpenCV real time streaming video capture is slow. How to drop frames or get synced with real time?

  5. Storing RTSP stream as video file with OpenCV VideoWriter

  6. OpenCV video saving

  7. Python OpenCV multiprocessing cv2.VideoCapture mp4

OpenCV VideoCapture() couldn't open in C++ but opened in Python

As I wrote before, I use a Conan script to build OpenCV:

macro(run_conan)
  # Download automatically, you can also just copy the conan.cmake file
  if (NOT EXISTS "${CMAKE_BINARY_DIR}/conan.cmake")
    message(STATUS "Downloading conan.cmake from https://github.com/conan-io/cmake-conan")
    file(DOWNLOAD "https://github.com/conan-io/cmake-conan/raw/v0.15/conan.cmake" "${CMAKE_BINARY_DIR}/conan.cmake")
  endif ()

  include(${CMAKE_BINARY_DIR}/conan.cmake)

  conan_add_remote(
    NAME
    bincrafters
    URL
    https://api.bintray.com/conan/bincrafters/public-conan)

  conan_cmake_run(
    REQUIRES
    ${CONAN_EXTRA_REQUIRES}
    opencv/4.5.2
    fmt/8.0.0
    OPTIONS
    ${CONAN_EXTRA_OPTIONS}
    BASIC_SETUP
    CMAKE_TARGETS # individual targets to link to
    BUILD
    missing)
endmacro()

On Linux (Manjaro in my case), you should install v4l-utils (pacman -S v4l-utils) and add the following option somewhere in CMakeLists.txt: set(CONAN_EXTRA_OPTIONS ${CONAN_EXTRA_OPTIONS} opencv:with_v4l=True).

After that, the Video I/O section of cv::getBuildInformation() shows the following:

Video I/O:
  v4l/v4l2: YES (linux/videodev2.h)

And voila!


