Windows video looping with high-resolution MJPEG video

Hey folks,

I’m trying to make a simple video player that loops a 1920x1200 MJPEG-encoded video on Windows. I’ve had the best luck with the WMF block so far (the performance is much smoother than the default QuickTime video block), but I’m noticing that it stutters a little when the movie loops back to the beginning (see the sample videos here; it’s a little easier to see in the one with the frame numbers in it).

The code is pretty similar to the SimplePlaybackApp in the block’s examples folder; the main differences are that it loads the video from a file dialog and enables looping on the player with setLoop(true).

Anyone have any tips or suggestions for how I can improve this? Or should I use another block instead?

Probably relevant: we’re on Windows 7 Embedded because the client finds it more stable in other installations she’s done, but I think we could upgrade to Windows 10 Embedded since we’ll have to eventually. We’re using VS2017 and Cinder 0.9.2dev as of November 2. I’m new to Windows development, so let me know if there’s any other info I can provide; I’m not sure what’s helpful.


Hi Kate,

The WMF block is still the best solution we have for video on Windows, but it’s not perfect. The reason is that hardware-accelerated video is only possible through a DirectX backend, while Cinder (and openFrameworks, for that matter) of course uses OpenGL. The WMF block uses Windows Media Foundation and a DirectX video player to decode the video and audio and synchronise the two. It then shares the decoded video frame, which sits in GPU memory, with OpenGL using the WGL_NV_DX_interop extension.

All this to say that the majority of its functionality is handled by Media Foundation, which is known for having problems looping videos. Yup, it’s infuriating, the state video is in on the Windows platform.

With that being said: in this forum thread they mention a solution using a sequencer, but that would require rewriting parts of the block, which is not a trivial matter. Another suggestion is to call Pause() before restarting the movie. Unfortunately, I believe the block is already doing this, but perhaps you can confirm that?


When I did this a while back, the only good solution I could find was to keep frames as DXT-compressed textures and upload them directly to OpenGL (no CPU decompression step). I started working on my own solution for this but ended up using the HAP codec for playback (this made it easy to compress HAP videos with QuickTime/AE/etc.). I believe I used this block as a starting point for playback (and got it to work on Windows):

I’ll see if I still have the source code lying around. With this solution I was able to get very fast playback with extremely large textures. The caveat was that my videos had no audio. Also, you can still run out of video memory or saturate your bus pretty quickly even with the frames DXT-compressed.

Hope that helps!

Thanks for replying so quickly, @paul.houx! From what I can tell, the block calls Stop() rather than Pause() before restarting the movie, but changing this (and having it seek back to 0) didn’t seem to change anything. The default video player in openFrameworks had its own quirks with MJPEG that I shared here, in case anyone’s curious.

I do think that HAP is a good option for this project @joelpryde—I’d be willing to test out your Windows port if you can find it! From what I’ve read, MJPEG doesn’t seem to hold up well past 2K and I think the client will want to work with higher resolution video in the future.

I’ve also looked through MPC-HC and the ffmpeg player a little bit to see what they use (because both loop very well), but both are way lower level than I can handle. Thanks again for the suggestions and I’ll update again if I figure out a solution.

Hap is a great codec, of course, but if for any reason you have to stick with MJPEG, I’d recommend checking out OpenCV’s VideoCapture class. I was working with it very recently to capture Logitech C922 webcam streams (using MJPG decoding), and the same class handles video file playback as well as capture. One of the cool things about it is that you can force a specific backend API for VideoCapture, as mentioned here.

The Cinder-OpenCV3 block works just fine, but bear in mind that you might need to rebuild OpenCV to use this class (see this issue), especially if you want to enable something like the FFmpeg backend for OpenCV (I don’t think Cinder’s prebuilt version was built with FFmpeg).

Lastly, you can easily convert a cv::Mat to a ci::gl::TextureRef for displaying the frames:
auto texture = gl::Texture::create( fromOcv(mMat) );

Edit: Good resources for building OpenCV on Windows and using the VideoCapture class: (turn off BUILD_SHARED_LIBS to get a static .lib)


Horrible hack warning, but could you double-buffer / ping-pong players? I.e., have two instances of the player with the same video queued; when one finishes, rather than looping it you trigger the second one to play, then seek the first video back to the beginning while the other is playing, and repeat. Not pretty, but then again neither are those unfortunate frame hitches :wink:


Hey Kate! Nice to see you here.

Sorry for coming late to the party. You might also take a look at a lib I’ve been writing to deal with some of the very same issues:

Let me know if you need help integrating it or run into any issues. It’s still a little half-baked compared to libs that are based on bigger frameworks. But I am using it on a few installations right now.

I’ll also mention that one of the biggest bottlenecks once you start going over 2K is GPU memory bandwidth. I’ve been pushing 2 x 2K and 1 x 4K (equivalent to 1.5 x 4K in total) on a GTX 970 at 24 FPS, and I think that about saturates the bus. I think the 1080 Titans or something are the cards with bigger memory buses, and you might look at those if you’re still getting stuttering.

Also, if the stuttering happens on loop, it sounds like a seek-speed issue, which is largely affected by drive speed (and also by encoding, but MJPEG is one of the best for random seeks). Make sure your videos are on a local, high-speed SSD. If your software stack allows it, and your video is small enough, try loading the whole video into RAM.

Most video playback libraries don’t give you control over stuff like that, which (not to oversell myself) is why I like libglvideo. It’s small and hackable.

Lastly, JPEG can be CPU-intensive to decode at higher resolutions or framerates unless your playback lib provides GPU JPEG decoding. You might check whether your CPU is saturated, and upgrade it if it is. This is a big advantage of Hap: it requires very little CPU-side decoding, which also optimizes GPU memory bandwidth usage.

Feel free to hit me up on email/twitter if you want to discuss in more detail.

Thanks all for the additional suggestions! I’m sure the OpenCV backend will come in handy another time. The double buffering is a nice hack suggestion, too, but probably not ideal in this particular scenario (we’ll also be jumping between different videos).

And hi, Ian! Nice to see you here, too. We are using SSDs, but yes, the videos are high enough resolution that they seem to be at the upper limit of what most commodity machines can decode with MJPEG. I think the loop stuttering is inherent to WMF, as Paul suggests; other native players on Windows (like MPC-HC) can loop the video fine, but their code is written at a much lower level than makes sense for this project. A friend had recommended your video library as a possible solution for this project. :slight_smile:

We ended up going with HAP for the video codec and Demolition Media’s plugin for Unity as the player. Unity ended up being a good option because the client is somewhat comfortable/familiar with it. So far it is working great; we’ll see how it ends up as the project develops.
