Video4Linux Y12 Pixel Format for use with OpenCV issues
The issue was caused by a missing pixel format in OpenCV (see issue #16620) and fixed by #16626.
It was found by comparing the Video4Linux pixel formats with those supported by OpenCV in modules/videoio/src/cap_v4l.cpp.
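To illustrate what "supporting" a pixel format means here, a minimal sketch (function names are mine; the fourcc construction mirrors the V4L2_PIX_FMT_* macros in linux/videodev2.h, and the buffer layout assumes Y12's 12 significant bits per 16-bit little-endian word):

```python
import numpy as np

def v4l2_fourcc(a, b, c, d):
    """Build a V4L2 fourcc code the same way the V4L2_PIX_FMT_* macros do."""
    return ord(a) | (ord(b) << 8) | (ord(c) << 16) | (ord(d) << 24)

# V4L2_PIX_FMT_Y12: 12-bit greyscale, one pixel per 16-bit little-endian word
PIX_FMT_Y12 = v4l2_fourcc('Y', '1', '2', ' ')

def y12_to_8bit(raw, width, height):
    """Convert a raw Y12 buffer to an 8-bit greyscale image by dropping
    the 4 least significant bits (12 -> 8 bits per sample)."""
    frame = np.frombuffer(raw, dtype='<u2').reshape(height, width)
    return (frame >> 4).astype(np.uint8)
```

A backend that doesn't recognize the 'Y12 ' fourcc has no way to know it must apply this kind of unpacking, which is essentially what the OpenCV fix added.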
Does a video have to be encoded before output on screen?
Your understanding is not correct. USB cameras will give image data in whatever format their firmware damn well pleases, in whatever binary stream the developers might have thought of. It's the job of camera drivers to convert that binary data into something that V4L understands, which can be YUV 4:2:2, but not necessarily so.
Now, what "you can show on screen" is the inverse of that: graphics cards take various image formats, and it's up to the driver to define these formats and offer an API to X11 and applications trying to display stuff.
Which formats are directly supported by the display driver depends on the device and the driver. For embedded GPUs, it's sometimes even possible to say "hey, here's an MPEG2 stream, display that in this and that rectangle", and let the silicon do the decoding, rendering and displaying. Usually, however, you'd just get a frame that you want your video displayed in, and write the pixels that should be displayed by software.
Now, usually computer graphics are RGB-based, but many drivers/GPUs support a few YUV formats, too. For historical reasons, there are also numerous different interfaces with which an application might draw onto a buffer -- be it classical X11 buffers, DRI, OpenGL-accelerated access, OpenGL texture access, XVideo, vendor-specific extensions (VDPAU etc.), or, and that's kind of typical for embedded devices, simply a memory range mapping to the display which can be selectively blitted onto screen.
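The per-pixel conversion a driver or shader has to perform when the display pipeline only accepts RGB can be sketched like this (a minimal illustration, assuming full-range BT.601 coefficients; real pipelines may use limited-range or BT.709 instead):

```python
def yuv_to_rgb(y, u, v):
    """Convert one full-range BT.601 YUV sample to an (r, g, b) tuple.
    u and v are centered around 128, y runs from 0 to 255."""
    d, e = u - 128, v - 128
    r = y + 1.402 * e
    g = y - 0.344136 * d - 0.714136 * e
    b = y + 1.772 * d
    clamp = lambda x: max(0, min(255, int(round(x))))
    return clamp(r), clamp(g), clamp(b)
```

For example, a neutral sample (u = v = 128) maps straight to a grey RGB value, which is why greyscale-only formats like Y12 are cheap to display on RGB hardware.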
Where I can find example in using v4l2 codec driver
(adding another answer, since it's completely different :-))
The "codec" API in the V4L2 specs refers to hardware codecs.
Such a codec is a device that has the following features:
- the hardware codec shows up as a /dev/videoX device
- it has a video input, where your userland application can send a video stream to - e.g. containing JPEG-encoded video frames - so it has the V4L2_CAP_VIDEO_OUTPUT capability
- it has a video output, where your userland application can read a video stream from - e.g. containing the uncompressed frames - so it also has the V4L2_CAP_VIDEO_CAPTURE capability
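Checking for that combination of capabilities is a simple bitmask test on the value that VIDIOC_QUERYCAP fills into struct v4l2_capability. A sketch (the function name is mine; the constants come from linux/videodev2.h):

```python
# Capability bits from linux/videodev2.h
V4L2_CAP_VIDEO_CAPTURE = 0x00000001
V4L2_CAP_VIDEO_OUTPUT  = 0x00000002

def looks_like_codec_device(capabilities):
    """Return True if the device advertises both sides of a codec:
    an OUTPUT queue (fed by the application) and a CAPTURE queue
    (read by the application). `capabilities` is the bitmask that
    VIDIOC_QUERYCAP reports in struct v4l2_capability."""
    wanted = V4L2_CAP_VIDEO_CAPTURE | V4L2_CAP_VIDEO_OUTPUT
    return capabilities & wanted == wanted
```

Note that newer kernels also expose a dedicated memory-to-memory capability flag for such devices, so a real application would probably check for that as well.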
There are numerous applications that can write video to a v4l2 OUTPUT device; here are a few that I know of:
- GStreamer's v4l2sink element
- LiVeS
- Gem
- ...
AFAIK, these applications don't have any specific code to deal with "v4l2 codec devices", but they can write to/read from v4l2 devices, which is all you should need.
Also check v4l-utils.git: look in utils/v4l2-ctl/v4l2-ctl-streaming.cpp.