OpenCV Real Time Streaming Video Capture Is Slow. How to Drop Frames or Get Synced with Real Time


My hypothesis is that the jitter is most likely due to network limitations and occurs when a frame packet is dropped. When a frame is dropped, the program keeps displaying the last "good" frame, which makes the display appear frozen. This is probably a hardware or bandwidth issue, but we can alleviate some of it in software. Here are some possible changes:

1. Set maximum buffer size

We set the cv2.VideoCapture() object to have a limited buffer size with the cv2.CAP_PROP_BUFFERSIZE parameter. The idea is that by limiting the buffer, we will always have the latest frame. This can also help alleviate the problem of frames randomly jumping ahead.

2. Set frame retrieval delay

Currently, I believe read() is being called too fast, even though it runs in its own dedicated thread. This may be one reason why frames appear to pool up and then suddenly burst out. For instance, in one one-second interval the stream may produce 15 new frames, but in the next one-second interval only 3 frames are returned. This may be due to network packet loss, so to maintain a constant frame rate, we simply add a delay in the frame retrieval thread. A delay tuned for roughly ~30 FPS does a good job of "normalizing" the frame rate and smoothing the transition between frames in case there is packet loss.

Note: We should try to match the frame rate of the stream, but I'm not sure what the FPS of the webcam is, so I just guessed 30 FPS. Also, there is usually a "direct" stream link instead of going through an intermediate web server, which can greatly improve performance.


If you try using a saved .mp4 video file, you will notice that there is no jitter. This confirms my suspicion that the problem is most likely due to network latency.

from threading import Thread
import cv2, time

class ThreadedCamera(object):
    def __init__(self, src=0):
        self.capture = cv2.VideoCapture(src)
        self.capture.set(cv2.CAP_PROP_BUFFERSIZE, 2)

        # FPS = 1/X
        # X = desired FPS
        self.FPS = 1/30
        self.FPS_MS = int(self.FPS * 1000)

        # Start frame retrieval thread
        self.thread = Thread(target=self.update, args=())
        self.thread.daemon = True
        self.thread.start()

    def update(self):
        while True:
            if self.capture.isOpened():
                (self.status, self.frame) = self.capture.read()
            time.sleep(self.FPS)

    def show_frame(self):
        cv2.imshow('frame', self.frame)
        cv2.waitKey(self.FPS_MS)

if __name__ == '__main__':
    src = 'https://videos3.earthcam.com/fecnetwork/9974.flv/chunklist_w1421640637.m3u8'
    threaded_camera = ThreadedCamera(src)
    while True:
        try:
            threaded_camera.show_frame()
        except AttributeError:
            pass

Related camera/IP/RTSP/streaming, FPS, video, threading, and multiprocessing posts

  1. Python OpenCV streaming from camera - multithreading, timestamps

  2. Video Streaming from IP Camera in Python Using OpenCV cv2.VideoCapture

  3. How to capture multiple camera streams with OpenCV?

  4. OpenCV real time streaming video capture is slow. How to drop frames or get synced with real time?

  5. Storing RTSP stream as video file with OpenCV VideoWriter

  6. OpenCV video saving

  7. Python OpenCV multiprocessing cv2.VideoCapture mp4


How to simulate a video in real time and be able to extract its frames?

One way to do this might be to start a separate thread that reads the video file at its normal speed and stores each frame in a global variable. The main thread then just grabs that frame whenever it's ready.

So, if we use a video that just displays the time and the frame counter, we can grab a frame here and there as the fake camera updates the frame:

#!/usr/bin/env python3

import cv2
import sys
import time
import logging
import numpy as np
import threading, queue

logging.basicConfig(level=logging.DEBUG, format='%(levelname)s %(message)s')

# This is shared between main and the FakeCamera
currentFrame = None

def FakeCamera(Q, filename):
    """Reads the video file at its natural rate, storing the frame in a global called 'currentFrame'"""
    logging.debug(f'[FakeCamera] Generating video stream from {filename}')

    # Open video
    video = cv2.VideoCapture(filename)
    if not video.isOpened():
        logging.critical(f'[FakeCamera] Unable to open video {filename}')
        Q.put('ERROR')
        return

    # Get height, width and framerate so we know how often to read a frame
    h = int(video.get(cv2.CAP_PROP_FRAME_HEIGHT))
    w = int(video.get(cv2.CAP_PROP_FRAME_WIDTH))
    fps = video.get(cv2.CAP_PROP_FPS)
    logging.debug(f'[FakeCamera] h={h}, w={w}, fps={fps}')

    # Initialise currentFrame
    global currentFrame
    currentFrame = np.zeros((h, w, 3), dtype=np.uint8)

    # Signal main that we are ready
    Q.put('OK')

    while True:
        ret, frame = video.read()
        if not ret:
            break
        # Store video frame where main can access it
        currentFrame[:] = frame[:]
        # Try and read at appropriate rate
        time.sleep(1.0/fps)

    logging.debug('[FakeCamera] Ending')
    Q.put('DONE')

if __name__ == '__main__':

    # Create a queue for synchronising and communicating with our fake camera
    Q = queue.Queue()

    # Create a fake camera thread that reads the video in "real-time"
    fc = threading.Thread(target=FakeCamera, args=(Q, 'video.mov'))
    fc.start()

    # Wait for fake camera to initialise
    logging.debug(f'[main] Waiting for camera to power up and initialise')
    msg = Q.get()
    if msg != 'OK':
        sys.exit()

    # Main processing loop should go here - we'll just grab a couple of frames at different times
    cv2.imshow('Video', currentFrame)
    res = cv2.waitKey(2000)

    cv2.imshow('Video', currentFrame)
    res = cv2.waitKey(5000)

    cv2.imshow('Video', currentFrame)
    res = cv2.waitKey(2000)

    # Wait for buddy to finish
    fc.join()


Sample Output

DEBUG [FakeCamera] Generating video stream from video.mov
DEBUG [main] Waiting for camera to power up and initialise
DEBUG [FakeCamera] h=240, w=320, fps=25.0
DEBUG [FakeCamera] Ending

Keywords: Python, image processing, OpenCV, multi-thread, multi-threaded, multithread, multithreading, video, fake camera.

OpenCV VideoCapture lag due to the capture buffer

OpenCV Solution

According to this source, you can set the buffer size of a cv::VideoCapture object.

cv::VideoCapture cap;
cap.set(CV_CAP_PROP_BUFFERSIZE, 3); // internal buffer will now store only 3 frames

// rest of your code...

There is an important limitation however:

CV_CAP_PROP_BUFFERSIZE Amount of frames stored in internal buffer memory (note: only supported by DC1394 v 2.x backend currently)

Update from comments. In newer versions of OpenCV (3.4+), the limitation seems to be gone and the code uses scoped enumerations:

cv::VideoCapture cap;
cap.set(cv::CAP_PROP_BUFFERSIZE, 3);

Hackaround 1

If the solution does not work, take a look at this post that explains how to hack around the issue.

In a nutshell: the time needed to query a frame is measured; if it is too low, it means the frame was read from the buffer and can be discarded. Continue querying frames until the time measured exceeds a certain limit. When this happens, the buffer was empty and the returned frame is up to date.

(The answer on the linked post shows: returning a frame from the buffer takes about 1/8th the time of returning an up to date frame. Your mileage may vary, of course!)
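A minimal sketch of this timing hack is below. The function, threshold, and max_reads values are my own illustrative choices, not from the linked post; the threshold in particular must be calibrated against your stream:

```python
import time

def read_latest(capture, threshold=0.01, max_reads=10):
    """Discard buffered frames: a read that returns faster than
    `threshold` seconds came from the buffer; a slow read had to wait
    for the device, so that frame is the freshest one available."""
    frame = None
    for _ in range(max_reads):
        start = time.time()
        status, candidate = capture.read()
        if not status:
            break
        frame = candidate
        if time.time() - start > threshold:
            break  # buffer was empty; this frame is up to date
    return frame
```

With a real stream you would call read_latest(capture) once per processing iteration instead of capture.read().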


Hackaround 2

A different solution, inspired by this post, is to create a third thread that grabs frames continuously at high speed to keep the buffer empty. This thread should use cv::VideoCapture::grab() (rather than read()) to avoid the decoding overhead.

You could use a simple spin-lock to synchronize reading frames between the real worker thread and the third thread.

How to increase speed of video playback within Python using openCV

This is the problem...

for x in range(width):
    for y in range(height):
        if canny[y,x] == 255:
            # ... per-pixel work here

numpy.argmax is the solution...

for x in range(width-1):
    # Slice the relevant column from the image
    # The image 'column' is a tall skinny image, only 1px thick
    column = np.array(canny[:,x:x+1])
    # Use numpy to find the first non-zero value
    railPoint = np.argmax(column)

Full Code:

import cv2, numpy as np, time
# Get start time
start = time.time()
# Read in the image
img = cv2.imread('/home/stephen/Desktop/rail.jpg')[40:,10:-10]
# Canny filter
canny = cv2.Canny(img, 85, 255)
# Get height and width
height, width = canny.shape
# Create list to store rail points
railPoints = []
# Iterate through each column in the image
for position in range(width-1):
    # Slice the relevant column from the image
    # The image 'column' is a tall skinny image, only 1px thick
    column = np.array(canny[:,position:position+1])
    # Use numpy to find the first non-zero value
    railPoint = np.argmax(column)
    # Add the railPoint to the list of rail points
    railPoints.append(railPoint)
    # Draw a circle on the image
    cv2.circle(img, (position, railPoint), 1, (123,234,123), 2)
cv2.imshow('img', img)
k = cv2.waitKey(1)
cv2.destroyAllWindows()
print(time.time() - start)

My solution using Numpy took 6ms and your solution took 266ms.

Python OpenCV multiprocessing cv2.VideoCapture mp4

Here's a minimal working example with optional FPS control. If you need to extract frames from your process back to the main program, you can use a multiprocessing.Queue() to transfer frames, since each process has its own independent memory space.

import multiprocessing as mp
import cv2, time

def capture_frames():
    src = 'test.mp4'
    capture = cv2.VideoCapture(src)
    capture.set(cv2.CAP_PROP_BUFFERSIZE, 2)

    # FPS = 1/X, X = desired FPS
    FPS = 1/120
    FPS_MS = int(FPS * 1000)

    while True:
        # Ensure camera is connected
        if capture.isOpened():
            (status, frame) = capture.read()

            # Ensure valid frame
            if status:
                cv2.imshow('frame', frame)
            else:
                break
            if cv2.waitKey(1) & 0xFF == ord('q'):
                break
        time.sleep(FPS)

    capture.release()
    cv2.destroyAllWindows()

if __name__ == '__main__':
    print('Starting video stream')
    capture_process = mp.Process(target=capture_frames, args=())
    capture_process.start()



