OpenCV VideoCapture lag due to the capture buffer


OpenCV Solution

According to this source, you can set the buffer size of a cv::VideoCapture object.

cv::VideoCapture cap;
cap.set(CV_CAP_PROP_BUFFERSIZE, 3); // internal buffer will now store only 3 frames

// rest of your code...

There is an important limitation however:

CV_CAP_PROP_BUFFERSIZE Amount of frames stored in internal buffer memory (note: only supported by DC1394 v 2.x backend currently)

Update from comments. In newer versions of OpenCV (3.4+), the limitation seems to be gone and the code uses scoped enumerations:

cv::VideoCapture cap;
cap.set(cv::CAP_PROP_BUFFERSIZE, 3);

Hackaround 1

If the solution does not work, take a look at this post that explains how to hack around the issue.

In a nutshell: the time needed to query a frame is measured; if it is too low, it means the frame was read from the buffer and can be discarded. Continue querying frames until the time measured exceeds a certain limit. When this happens, the buffer was empty and the returned frame is up to date.

(The answer on the linked post shows: returning a frame from the buffer takes about 1/8th the time of returning an up to date frame. Your mileage may vary, of course!)
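The timing trick above can be sketched as follows. This is a hedged illustration, not the linked post's exact code: the function name and the half-frame-interval threshold are my own choices and will need tuning per camera.

```python
import time

# Sketch of the timing trick (name and threshold are assumptions, not from
# the linked post): reads served from the driver buffer return much faster
# than a live capture, so keep reading until one takes longer than a
# fraction of the expected frame interval.
def read_latest_frame(cap, fps=30.0):
    threshold = 0.5 / fps  # reads faster than this are assumed buffered
    while True:
        start = time.time()
        ret, frame = cap.read()
        if not ret or time.time() - start > threshold:
            return ret, frame  # slow read (or failure) -> treat as fresh
```

Here `cap` is anything with a `read() -> (ok, frame)` method, such as `cv2.VideoCapture`.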


Hackaround 2

A different solution, inspired by this post, is to create a third thread that grabs frames continuously at high speed to keep the buffer empty. This thread should use cv::VideoCapture::grab() to avoid decoding overhead.

You could use a simple spin-lock to synchronize reading frames between the real worker thread and the third thread.
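The idea can be sketched like this. All names here are my own, not from the post; a plain threading.Lock stands in for the spin-lock, and `cap` can be any object exposing grab()/retrieve()/release(), e.g. a cv2.VideoCapture.

```python
import threading
import time

# Sketch of the grab-thread idea (class and method names are assumptions):
# a background thread keeps the driver buffer empty with grab(), which does
# no decoding, and the consumer decodes only the most recent grabbed frame
# with retrieve().
class FreshestFrame:
    def __init__(self, cap):
        self.cap = cap
        self.lock = threading.Lock()
        self.running = True
        self.t = threading.Thread(target=self._drain, daemon=True)
        self.t.start()

    def _drain(self):
        while self.running:
            with self.lock:
                self.cap.grab()   # cheap: discard stale buffered frames
            time.sleep(0.001)     # yield so the consumer can take the lock

    def read(self):
        with self.lock:
            return self.cap.retrieve()  # decode the last grabbed frame

    def release(self):
        self.running = False
        self.t.join()
        self.cap.release()
```

With a real camera this would be constructed as `FreshestFrame(cv2.VideoCapture(rtsp_url))`.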

Open CV RTSP camera buffer lag

After searching online through multiple resources, the suggestion of using threads to remove frames from the buffer came up a lot. Although it seemed to work for a while, it caused issues with duplicate frames being displayed, for a reason I could not work out.

I then tried to build OpenCV from source with GStreamer support, but even once it was compiled correctly it still didn't seem to interface with GStreamer properly.

Eventually I thought the best bet was to go back to the threading approach, but again I couldn't get it working. So I gave multiprocessing a shot.

I wrote the below class to handle the camera connection:

import cv2
import time
import multiprocessing as mp

class Camera():

    def __init__(self, rtsp_url):
        # load pipe for data transmission to the process
        self.parent_conn, child_conn = mp.Pipe()
        # load process
        self.p = mp.Process(target=self.update, args=(child_conn, rtsp_url))
        # start process
        self.p.daemon = True
        self.p.start()

    def end(self):
        # send closure request to process
        self.parent_conn.send(2)

    def update(self, conn, rtsp_url):
        # load cam into separate process
        print("Cam Loading...")
        cap = cv2.VideoCapture(rtsp_url, cv2.CAP_FFMPEG)
        print("Cam Loaded...")
        run = True

        while run:
            # grab frames from the buffer
            cap.grab()

            # receive input data
            rec_dat = conn.recv()

            if rec_dat == 1:
                # frame requested
                ret, frame = cap.read()
                conn.send(frame)

            elif rec_dat == 2:
                # close requested
                cap.release()
                run = False

        print("Camera Connection Closed")
        conn.close()

    def get_frame(self, resize=None):
        # used to grab frames from the cam connection process
        # [resize] param: scale factor, e.g. 0.65 for a 35% reduction or 1.5 for a 50% increase

        # send request
        self.parent_conn.send(1)
        frame = self.parent_conn.recv()

        # reset request
        self.parent_conn.send(0)

        # resize if needed
        if resize is None:
            return frame
        else:
            return self.rescale_frame(frame, resize)

    def rescale_frame(self, frame, percent=0.65):
        # percent is a scale factor (0.65, not 65)
        return cv2.resize(frame, None, fx=percent, fy=percent)

Displaying the frames can be done as below

cam = Camera("rtsp://admin:[somepassword]@192.168.0.40/h264Preview_01_main")

print(f"Camera is alive?: {cam.p.is_alive()}")

while True:
    frame = cam.get_frame(0.65)

    cv2.imshow("Feed", frame)

    key = cv2.waitKey(1)
    if key == 13:  # 13 is the Enter key
        break

cv2.destroyAllWindows()

cam.end()

This solution has resolved all my issues with buffer lag and repeated frames.

Hopefully it will help anyone else in the same situation.

OpenCV real time streaming video capture is slow. How to drop frames or get synced with real time?


My hypothesis is that the jitter is most likely due to network limitations and occurs when a frame packet is dropped. When a frame is dropped, this causes the program to display the last "good" frame which results in the display freezing. This is probably a hardware or bandwidth issue but we can alleviate some of this with software. Here are some possible changes:

1. Set maximum buffer size

We set the cv2.VideoCapture() object to have a limited buffer size with the cv2.CAP_PROP_BUFFERSIZE parameter. The idea is that by limiting the buffer, we will always have the latest frame. This can also help to alleviate the problem of frames randomly jumping ahead.

2. Set frame retrieval delay

Currently, I believe the read() is reading too fast even though it is in its own dedicated thread. This may be one reason why all the frames appear to pool up and then suddenly burst out. For instance, in one one-second interval it may produce 15 new frames, but in the next one-second interval only 3 frames are returned. This may be due to network packet loss, so to ensure that we obtain a constant frame rate, we simply add a delay in the frame retrieval thread. A delay tuned to roughly ~30 FPS does a good job of "normalizing" the frame rate and smoothing the transition between frames in case there is packet loss.

Note: We should try to match the frame rate of the stream, but I'm not sure what the FPS of the webcam is, so I just guessed 30 FPS. Also, there is usually a "direct" stream link instead of going through an intermediate webserver, which can greatly improve performance.


If you try using a saved .mp4 video file, you will notice that there is no jitter. This confirms my suspicion that the problem is most likely due to network latency.

from threading import Thread
import cv2, time

class ThreadedCamera(object):
    def __init__(self, src=0):
        self.capture = cv2.VideoCapture(src)
        self.capture.set(cv2.CAP_PROP_BUFFERSIZE, 2)

        # FPS = 1/X
        # X = desired FPS
        self.FPS = 1/30
        self.FPS_MS = int(self.FPS * 1000)

        # Start frame retrieval thread
        self.thread = Thread(target=self.update, args=())
        self.thread.daemon = True
        self.thread.start()

    def update(self):
        while True:
            if self.capture.isOpened():
                (self.status, self.frame) = self.capture.read()
                time.sleep(self.FPS)

    def show_frame(self):
        cv2.imshow('frame', self.frame)
        cv2.waitKey(self.FPS_MS)

if __name__ == '__main__':
    src = 'https://videos3.earthcam.com/fecnetwork/9974.flv/chunklist_w1421640637.m3u8'
    threaded_camera = ThreadedCamera(src)
    while True:
        try:
            threaded_camera.show_frame()
        except AttributeError:
            pass

Related camera/IP/RTSP/streaming, FPS, video, threading, and multiprocessing posts

  1. Python OpenCV streaming from camera - multithreading, timestamps

  2. Video Streaming from IP Camera in Python Using OpenCV cv2.VideoCapture

  3. How to capture multiple camera streams with OpenCV?

  4. OpenCV real time streaming video capture is slow. How to drop frames or get synced with real time?

  5. Storing RTSP stream as video file with OpenCV VideoWriter

  6. OpenCV video saving

  7. Python OpenCV multiprocessing cv2.VideoCapture mp4

OpenCV - buffer dimension when reading from webcam

Great question. As far as I know, when you run cap.read(), OpenCV captures a frame from the buffer at that instant and then continues executing the program.

So as your pipeline executes, a frame from the camera is captured only when cap.read() is called.

If you want frames to be processed sequentially and in step with real time, you should try:

  • Run capture and read in a different thread.
  • Append the captured frame to a stack.
  • Use the frames in the stack to perform inference.

VideoCapture.read() returns past image

I agree with Mark Serchell's comment. The approach I use is to set a variable to the current time plus x seconds and check against it. OpenCV has a useful feature for skipping frames: cam.grab(). It reads the frame from the buffer but does nothing with it, so you can avoid "suffering from buffering". Simple code would be:

import cv2
import time

cam = cv2.VideoCapture(url)
ret, frame = cam.read()
timeCheck = time.time()
future = 10*60  # delay in seconds
while ret:
    if time.time() >= timeCheck:
        ret, frame = cam.read()
        # Do your stuff here
        timeCheck = time.time() + future
    else:
        # Read from buffer, but skip it
        ret = cam.grab()  # note that grab() returns only a status code, not the frame

