Video compression in AVerMedia Live Gamer Ultra GC553

“The next generation of game capture is here.” The device addresses the need for real-time capture of a video signal: offering a pass-through HDMI connection, the box provides a video capture sink with a USB 3.1 Type-C interface and makes the video signal available to video capture applications via the standard DirectShow and Media Foundation APIs.

I was interested in whether the device implements video compression, H.264 and/or H.265/HEVC, in hardware. The technical specifications include:

• Max Pass-Through Resolutions: 2160p60 HDR / 1440p144 / 1080p240
• Max Record Resolutions: 2160p30 / 1440p60 / 1080p120 / 1080p60 HDR
• Supported Resolutions (Video input): 2160p, 1440p, 1080p, 1080i, 720p, 576p, 480p
• Record Format: MPEG 4 (H.264+AAC) or (H.265+AAC)*

Notes:
*H.265 Compression and HDR are supported by RECentral

So there is a direct mention of video compression, and given the state of the technology and the price of the box it makes sense to have it there. The Logitech C930e camera, for instance, has offered onboard H.264 video compression for years.

So is it there in the Ultra thing? NO, IT IS NOT. Pathetic…

One could guess this, of course, from studying the FAQ section on third-party software configuration: the software is clearly expected to use external compression capabilities. However, popular software is also known to lag behind the latest hardware, so there was still a small chance that a hardware codec was present. I think it would be fair to state right in the technical specifications that the product does not offer any encoding capabilities.

The good thing is that the box offers 10-bit video capture up to 2560×1440@30 – there is not much inexpensive hardware capable of doing that job.

The specification mentions a high-rate 1920×1080@120 mode, but I don’t see it among the effectively advertised capabilities.

Also, the video capture capabilities exposed via the Media Foundation API suggest that it is possible to capture into video memory, bypassing a system memory mapping/copy. Even though this is irrelevant to most applications, some newer ones, including those leveraging the UWP video capture API, could take advantage of it (for example, video capture apps running on low-power devices).
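As a sketch of what taking advantage of this could look like, assuming the source honors the D3D manager hint given to a Source Reader (error handling trimmed):

#include <atlbase.h>
#include <d3d11.h>
#include <mfapi.h>
#include <mfidl.h>
#include <mfreadwrite.h>

// Sketch, not production code: hint the Source Reader to deliver samples backed
// by DXGI textures in video memory instead of system memory copies
HRESULT CreateVideoMemoryReader(IMFMediaSource* pSource, ID3D11Device* pDevice, IMFSourceReader** ppSourceReader)
{
    UINT nResetToken;
    CComPtr<IMFDXGIDeviceManager> pDeviceManager;
    HRESULT nResult = MFCreateDXGIDeviceManager(&nResetToken, &pDeviceManager);
    if(FAILED(nResult))
        return nResult;
    nResult = pDeviceManager->ResetDevice(pDevice, nResetToken);
    if(FAILED(nResult))
        return nResult;
    CComPtr<IMFAttributes> pAttributes;
    MFCreateAttributes(&pAttributes, 1);
    pAttributes->SetUnknown(MF_SOURCE_READER_D3D_MANAGER, pDeviceManager);
    // Samples from such a reader can carry IMFDXGIBuffer backed media buffers
    return MFCreateSourceReaderFromMediaSource(pSource, pAttributes, ppSourceReader);
}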

Media Foundation on Raspberry Pi 3 Model B+

The interesting part about the live WebM Media Foundation media source I mentioned in the previous post is that the whole thing works great on… a Raspberry Pi 3 Model B+ running Windows 10 IoT Core (RaspberryPi 3B+ Technical Preview Build 17661).

Windows 10 IoT has much the same Media Foundation infrastructure as the other Universal Windows Platform environments (Desktop, Xbox, HoloLens), including the core API, primitives, and support in the XAML MediaElement (MediaPlayerElement). There is no DirectX support on Raspberry Pi 3 Model B+, so video delivery fails; however, this is a sort of known/expected problem with the Technical Preview build. Audio playback is okay.

The picture above is taken from a C# UWP application (ARM platform) running a MediaPlayerElement control that takes a live audio signal from the network over a Windows.Networking.Sockets.MessageWebSocket connection.

A custom WebM live media source (the platform does not have a capable primitive out of the box) forwards the signal to the media element for low-latency audio playback. The codec is Opus and, yes, the stock Media Foundation audio decoder MFT decodes the signal just fine.
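As a side note, the decoder’s presence is easy to confirm with an MFTEnumEx query; a sketch, with MFAudioFormat_Opus being the subtype the custom media source reports on its audio stream:

#include <mfapi.h>
#include <mftransform.h>

// Sketch: check that an audio decoder MFT accepting Opus input is registered
bool IsOpusDecoderAvailable()
{
    MFT_REGISTER_TYPE_INFO InputTypeInformation { MFMediaType_Audio, MFAudioFormat_Opus };
    IMFActivate** ppActivates = nullptr;
    UINT32 nActivateCount = 0;
    if(FAILED(MFTEnumEx(MFT_CATEGORY_AUDIO_DECODER, MFT_ENUM_FLAG_ALL, &InputTypeInformation, nullptr, &ppActivates, &nActivateCount)))
        return false;
    for(UINT32 nIndex = 0; nIndex < nActivateCount; nIndex++)
        ppActivates[nIndex]->Release();
    CoTaskMemFree(ppActivates);
    return nActivateCount > 0;
}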

Parsing live WebM stream is not so easy

A magic transition from E_BUFFER_NOT_FULL to E_FILE_FORMAT_INVALID in the depths of libwebm:

Media.dll!mkvparser::Block::Parse(const mkvparser::Cluster * pCluster) Line 7630    C++
Media.dll!mkvparser::BlockGroup::Parse() Line 7579 C++
Media.dll!mkvparser::Cluster::CreateBlockGroup(__int64 start_offset, __int64 size, __int64 discard_padding) Line 7253 C++
Media.dll!mkvparser::Cluster::CreateBlock(__int64 id, __int64 pos, __int64 size, __int64 discard_padding) Line 7154 C++
Media.dll!mkvparser::Cluster::ParseBlockGroup(__int64 payload_size, __int64 & pos, long & len) Line 6724 C++
Media.dll!mkvparser::Cluster::Parse(__int64 & pos, long & len) Line 6381 C++
Media.dll!mkvparser::Cluster::GetNext(const mkvparser::BlockEntry * pCurr, const mkvparser::BlockEntry * & pNext) Line 7369 C++
Media.dll!WebmLiveMediaSource::HandleData(std::vector<…,std::allocator<…> > & AsyncResultVector) Line 757 C++
Media.dll!WebmLiveMediaSource::ReadInvoke(AsyncCallbackT * AsyncCallback, IMFAsyncResult * AsyncResult) Line 1176 C++
Media.dll!AsyncCallbackT<…>::Invoke(IMFAsyncResult * AsyncResult) Line 2344 C++
RTWorkQ.dll!CSerialWorkQueue::QueueItem::ExecuteWorkItem() Unknown
RTWorkQ.dll!CSerialWorkQueue::QueueItem::OnWorkItemAsyncCallback::Invoke() Unknown
RTWorkQ.dll!ThreadPoolWorkCallback() Unknown
ntdll.dll!TppWorkpExecuteCallback() Unknown
ntdll.dll!TppWorkerThread() Unknown
kernel32.dll!BaseThreadInitThunk() Unknown
ntdll.dll!RtlUserThreadStart() Unknown

The problem here is that the stream is live and arrives in increments, in an ultra-low-latency consumption mode. The libwebm code structure does not suggest it is sufficiently robust for such processing (or maybe it is fine and this is just one bug? who can tell), even though it apparently has multiple code paths added for live signals. There is no problem parsing a complete file, of course, and a retry from the same point even succeeds once new data is appended.

It looks like a reasonable workaround here is to check whether we are close to the edge of the stream and temporarily ignore errors like this.
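In code the idea could look like this sketch, where StreamEdgePosition and kEdgeThresholdBytes are hypothetical and stand for however the media source tracks the amount of payload received so far:

#include <windows.h>
#include <mferror.h>
#include "mkvparser/mkvparser.h"

// Sketch of the workaround: near the live edge of the stream, downgrade
// E_FILE_FORMAT_INVALID to "not enough data yet" instead of failing hard
HRESULT TryGetNextBlockEntry(const mkvparser::Cluster* pCluster, const mkvparser::BlockEntry* pCurrentBlockEntry, const mkvparser::BlockEntry*& pNextBlockEntry, long long StreamEdgePosition)
{
    constexpr long long kEdgeThresholdBytes = 64 * 1024; // hypothetical tolerance window
    const long nStatus = pCluster->GetNext(pCurrentBlockEntry, pNextBlockEntry);
    if(nStatus == mkvparser::E_BUFFER_NOT_FULL)
        return S_FALSE; // expected for a live stream, wait for more data
    if(nStatus == mkvparser::E_FILE_FORMAT_INVALID)
    {
        if(StreamEdgePosition - pCluster->GetPosition() < kEdgeThresholdBytes)
            return S_FALSE; // likely just an incomplete tail, retry once data is appended
        return MF_E_INVALID_FILE_FORMAT; // genuinely malformed stream
    }
    return nStatus >= 0 ? S_OK : E_FAIL;
}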

Library parsing performance/efficiency is also a bit questionable. The library is not capable of processing incremental reads via mkvparser::IMkvReader: instead, it keeps stepping back all over the parsing process, so the live signal source has to retain a window of already processed data because it can be requested once again…
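To illustrate, the parser pulls data through mkvparser::IMkvReader and freely re-reads earlier offsets, so a live source ends up with something like the hypothetical WindowReader below (an illustration, not the actual media source code):

#include <algorithm>
#include <deque>
#include "mkvparser/mkvparser.h"

// A reader retaining a trailing window of already delivered bytes, because the
// parser steps back to earlier offsets instead of reading strictly forward
class WindowReader : public mkvparser::IMkvReader
{
public:
    int Read(long long nPosition, long nLength, unsigned char* pnBuffer) override
    {
        if(nPosition < m_nWindowOffset)
            return -1; // already evicted from the window - the step-back problem
        if(nPosition + nLength > m_nWindowOffset + static_cast<long long>(m_Data.size()))
            return -1; // not received yet; the parser is expected to check Length() first
        std::copy_n(m_Data.begin() + static_cast<size_t>(nPosition - m_nWindowOffset), static_cast<size_t>(nLength), pnBuffer);
        return 0;
    }
    int Length(long long* pnTotal, long long* pnAvailable) override
    {
        *pnTotal = -1; // total size is unknown for a live stream
        *pnAvailable = m_nWindowOffset + static_cast<long long>(m_Data.size());
        return 0;
    }
private:
    long long m_nWindowOffset = 0; // absolute stream offset of the first retained byte
    std::deque<unsigned char> m_Data; // trailing window of received payload
};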

Apparently UWP as a platform has code capable of processing this type of data reliably, as MediaElement has built-in support for the format in Media Source Extensions (MSE) mode. However, that implementation is limited to internal consumers.

DirectShow VMR-7 bug in Windows 10

The DirectShow Video Mixing Renderer (VMR-7) filter exhibits a (regression?) bug on Windows 10 systems. When aspect ratio preservation is enabled in VMR_ARMODE_LETTER_BOX mode, which makes overall sense and is quite often the default, the letterboxing does not work as expected.

The problem is easy to reproduce with the well-known DShowPlayerSDK sample application, with an edit enforcing VMR-7 mode. Once video is started, just resize the window, and the parts not covered by video will not be erased as expected.
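The edit amounts to something like this sketch: create the VMR-7 instance explicitly, add it to the graph and enable letterboxed aspect ratio preservation before the rest of the graph is built.

#include <atlbase.h>
#include <dshow.h>

// Sketch: force VMR-7 with letterbox aspect ratio mode, which is enough to hit
// the repaint issue on window resize in Windows 10
HRESULT AddLetterboxedVmr7(IGraphBuilder* pGraphBuilder)
{
    CComPtr<IBaseFilter> pBaseFilter;
    HRESULT nResult = pBaseFilter.CoCreateInstance(CLSID_VideoMixingRenderer); // VMR-7
    if(FAILED(nResult))
        return nResult;
    nResult = pGraphBuilder->AddFilter(pBaseFilter, L"VMR-7");
    if(FAILED(nResult))
        return nResult;
    const CComQIPtr<IVMRAspectRatioControl> pAspectRatioControl = pBaseFilter;
    if(!pAspectRatioControl)
        return E_NOINTERFACE;
    // The renderer is expected to paint the letterbox bars itself in this mode
    return pAspectRatioControl->SetAspectRatioMode(VMR_ARMODE_LETTER_BOX);
}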

Apparently this worked well earlier.

UpdateVersionInfoGit: Multiple references/hashes

Some time ago I shared an application which I have been using to embed a git reference into binary resources, especially as a post-build event in an automated manner: Embedding a Git reference at build time.

This time I needed a small amendment related to the use of a git repository as a sub-module of another repository. To make troubleshooting easier, when a project is built as a part of a bigger build through a sub-module repository reference, git details of both the repository and its parent might be embedded into the resources.

"$(ProjectDir)..\_Bin\Third Party\UpdateVersionInfoGit-$(PlatformName).exe" path "$(ProjectDir).." path "$(ProjectDir)..\.." binary "$(TargetPath)"


The utility accepts multiple path arguments, goes over all of them and concatenates the “git log” output. When multiple paths are given, it is okay for some of them to be invalid or unrelated to git repositories.

Download links

Infrared Camera in Media Foundation

Surface Pro (5th Gen) infrared camera streamed into Chrome browser in H.264 encoding over WebSocket connection

The screenshot above shows the Surface Pro tablet’s infrared camera (known as “Microsoft IR Camera Front” on the device) captured live, encoded and streamed (everything up to this point hosted by a Microsoft Media Foundation Media Session) over the network using WebSockets into Chrome’s HTML5 video tag by means of Media Source Extensions (MSE).

Why? Because why not.

Unfortunately, Microsoft did not publish/document an API to access infrared and depth (time-of-flight) cameras so that traditional desktop applications could use these hardware capabilities. Nevertheless, the functionality is available in the Universal Windows Platform (UWP); see Windows.Media.Capture.Frames and friends.
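For reference, a minimal C++/WinRT sketch of the UWP route, enumerating frame source groups and picking out infrared sources (on the Surface Pro this is where “Microsoft IR Camera Front” shows up):

#include <cstdio>
#include <winrt/Windows.Foundation.h>
#include <winrt/Windows.Foundation.Collections.h>
#include <winrt/Windows.Media.Capture.Frames.h>

using namespace winrt;
using namespace winrt::Windows::Media::Capture::Frames;

// Sketch: list frame source groups that expose an infrared source
Windows::Foundation::IAsyncAction DumpInfraredSourcesAsync()
{
    auto const Groups = co_await MediaFrameSourceGroup::FindAllAsync();
    for(auto const& Group : Groups)
        for(auto const& SourceInfo : Group.SourceInfos())
            if(SourceInfo.SourceKind() == MediaFrameSourceKind::Infrared)
                wprintf(L"%ls\n", Group.DisplayName().c_str());
}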

The UWP implementation is apparently using Media Foundation in its backyard, so the functionality could certainly be published for desktop applications as well. Another interesting thing is that my [undocumented] way of accessing the device seems to bypass the frame server and talk to the device directly, including for video.

It does not look like Microsoft is planning to extend visibility of these new features to the desktop Media Foundation API, since they keep adding new features without exposing them for public use outside UWP. The UWP API itself is eclectic, and I can’t imagine how one could get a good understanding of it without a good grip on the underlying API layers.