There is an interesting submission of video capture device capabilities for the “Short-Range Intel® RealSense™ Camera F200”. Another blog user mentioned earlier that they have a good stock of the devices and plan to take advantage of the new technology.
It sounds like the new cameras open up new opportunities for user interaction, with the ability to conveniently enhance the user experience with things like gestures.
This is what the camera looks like on the software side:
- Intel(R) RealSense(TM) 3D Camera Virtual Driver
- Intel(R) RealSense(TM) 3D Camera (Front F200) RGB
- Intel(R) RealSense(TM) 3D Camera (Front F200) Depth
Presumably, these are synchronized video and depth sources. The SDK may also offer other presentations of the data (snapshots of combined data, or a combined stream?).
So what is it all about in terms of how it looks to a video capture application and its APIs? The video sensor offers standard video caps: a YUV 4:2:2 video stream at 60 fps at resolutions up to 960×540, and higher resolutions up to 1920×1080 at 30 fps. This exceeds USB 2.0 bandwidth, so this is either a USB 3.0 device or there is hardware compression with internal software decompression. The video device does not offer compressed video feed capabilities.
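The bandwidth claim is easy to verify with back-of-the-envelope arithmetic (a sketch assuming uncompressed YUY2, i.e. 2 bytes per pixel, against the USB 2.0 high-speed signaling rate of 480 Mbps, of which the practical payload is even lower):

```python
def yuy2_bitrate_mbps(width, height, fps, bytes_per_pixel=2):
    """Raw bitrate of an uncompressed YUV 4:2:2 (YUY2) stream in Mbps."""
    return width * height * bytes_per_pixel * fps * 8 / 1_000_000

USB2_SIGNALING_MBPS = 480  # USB 2.0 high-speed rate; usable payload is lower

print(yuy2_bitrate_mbps(960, 540, 60))    # ~497.7 Mbps, already above 480
print(yuy2_bitrate_mbps(1920, 1080, 30))  # ~995.3 Mbps, far beyond USB 2.0
```

Even the 960×540@60 mode alone nominally exceeds the USB 2.0 signaling rate, which is what points at either USB 3.0 or a compressed transport under the hood.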
There is another video source named “Depth”. It offers a YUY2 feed as well as other options with fancy FourCCs (ILER, IRNI, IVNI, IZNI, RVNI, ZVNI?), which presumably deliver depth information at 640×480@60. The respective SDK supposedly has these formats documented.
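For reference, here is how such four-character codes map to the 32-bit values seen in media type structures: the FourCC is packed little-endian into `BITMAPINFOHEADER.biCompression` (DirectShow) and into the `Data1` field of the standard Media Foundation subtype GUID base `{xxxxxxxx-0000-0010-8000-00AA00389B71}`. The payload layout behind these vendor depth FourCCs is not documented outside the SDK; the sketch below only assumes they follow standard FourCC packing:

```python
import struct
import uuid

def fourcc(code: str) -> int:
    """Pack a four-character code into a little-endian DWORD, as used in
    BITMAPINFOHEADER.biCompression."""
    return struct.unpack('<I', code.encode('ascii'))[0]

def mf_subtype(code: str) -> uuid.UUID:
    """Build the Media Foundation subtype GUID for a FourCC: the packed
    code becomes Data1 of the base GUID {...-0000-0010-8000-00AA00389B71}."""
    return uuid.UUID(fields=(fourcc(code), 0x0000, 0x0010,
                             0x80, 0x00, 0x00AA00389B71))

print(hex(fourcc('YUY2')))   # 0x32595559 (MFVideoFormat_YUY2 Data1)
for code in ('ILER', 'IRNI', 'IVNI', 'IZNI', 'RVNI', 'ZVNI'):
    print(code, mf_subtype(code))
```

Running this against the observed depth FourCCs yields the subtype GUIDs an application would see when enumerating media types on the “Depth” source.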
At 60 frames per second and, supposedly, low latency, the data should be a good source of real-time information for tracking gestures and reconstructing the short-range 3D scene in front of the camera.
Original DirectShow and Media Foundation capability files:
- DirectShow – Intel(R) RealSense(TM) 3D Camera (Front F200).md
- Media Foundation – Intel(R) RealSense(TM) 3D Camera (Front F200).md
Additional in-depth information about the technology: