Using Cinder for sequential image playback and control?

Hello,

I am coordinating the display of sequences of large scientific image sets on a 5760 x 3240 video wall. We could run them through ffmpeg to create videos to play, but I’m also wondering about the feasibility of using Cinder to write each image to a texture and play them sequentially at 30 fps, and what interactivity this could afford. For example, if it is efficient to write the images to textures and play them, could we then provide an interface on a separate device, in, say, JavaScript, that sends commands to the Cinder app to zoom in and out of a chosen point while the image sequence is playing?
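
Something along these lines is what I have in mind (a minimal sketch; the asset path, naming scheme, and frame count are placeholders, and preloading everything is almost certainly too naive for frames this large):

```cpp
#include "cinder/app/App.h"
#include "cinder/app/RendererGl.h"
#include "cinder/gl/gl.h"
#include "cinder/ImageIo.h"

#include <string>
#include <vector>

using namespace ci;
using namespace ci::app;

class SequencePlayerApp : public App {
  public:
    void setup() override
    {
        // Preload every frame into a GPU texture. At 5760 x 3240 this eats
        // VRAM quickly, so a real app would presumably stream frames instead.
        for( size_t i = 0; i < 365; ++i ) {
            auto img = loadImage( loadAsset( "frames/" + std::to_string( i ) + ".png" ) );
            mFrames.push_back( gl::Texture2d::create( img ) );
        }
    }

    void draw() override
    {
        gl::clear();
        // Step through the sequence at 30 fps, independent of the app's refresh rate.
        size_t index = size_t( getElapsedSeconds() * 30.0 ) % mFrames.size();
        gl::draw( mFrames[index], getWindowBounds() );
    }

  private:
    std::vector<gl::Texture2dRef> mFrames;
};

CINDER_APP( SequencePlayerApp, RendererGl )
```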

We’ll be running this on an Intel Core i7-9800X (8-core) at 3.80 GHz with a PNY Quadro RTX 4000.

I would love to hear some advice before going down this path.

A lot of this would be determined by your specific case. How many frames of video would you expect? Do you need to seek around in both directions at varying speeds? Do you need audio? Do you have file size limitations? Do your frames need an alpha channel?

There’s a solution for each of these cases, but they all come with their own pros, cons, and tradeoffs.

The good news is that the external control part is pretty trivial 🙂

There will be about 16 different videos with 365 frames each. We do not need to play both forwards and backwards, and we do not need to vary the speed. No audio. File size is constrained only by the ability of the processor and video cards to process the frames at speed.

An alpha channel would allow for the possibility of layering frames on top of each other to combine the data/images on the fly, but this would be a nice-to-have feature to explore, and not the main goal.

In terms of priorities, this is what we are trying to achieve:

  1. smooth playback, without stuttering or tearing, across a 3 x 3 wall of 1920 x 1080 displays
  2. ability to use an external interface to choose which video is playing
  3. ability to zoom into an area of interest and back out again, as selected via the external interface (see the sketch after this list)
  4. ability to layer images with alpha channels so that a user could view data sets together, as selected via the external interface
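
For item 3, I imagine something like scaling the model matrix about the chosen point before drawing the current frame (a rough sketch; `zoom` and `focus` are hypothetical values that would come from the external interface):

```cpp
#include "cinder/app/App.h"
#include "cinder/gl/gl.h"

using namespace ci;

// Draw the current frame zoomed about 'focus' (in window coordinates).
void drawZoomed( const gl::Texture2dRef &frame, const vec2 &focus, float zoom )
{
    gl::ScopedModelMatrix scopedMatrix;
    gl::translate( focus );    // move the pivot to the origin...
    gl::scale( vec2( zoom ) ); // ...scale about it...
    gl::translate( -focus );   // ...and move it back, so 'focus' stays fixed on screen
    gl::draw( frame, app::getWindowBounds() );
}
```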

To clarify, the system actually has 3 x Quadro RTX 4000 cards, but I believe they exist in a master-slave relationship, with one card doing the processing and the other two synced to pipe the output to the displays.

I don’t think you’d need to do anything particularly special to get that working. The Cinder-WMF block does a good job with hardware-accelerated video playback (I’m assuming you’re on Windows), but I don’t think it supports alpha channels, and the last time I used it, it had some problems playing back multiple videos at once. That may have been addressed in the interim, though.

If it were me, I’d probably use HPV for this, since your videos are quite short. It supports alpha, lots of simultaneous videos, and synchronous seeking, which will be useful if you need to be frame-accurate across multiple videos (i.e. the layering you mentioned), at the cost of somewhat larger files and a bit more disk thrashing. I’d recommend a decent SSD for it.

I’ve used HPV in the past in some decently exotic setups and it’s held up well. Unfortunately I’m not able to release the production code, but I posted an early precursor to it here that should give you something to play with. There’s a ton of other relevant info in that thread as well.

For the control stuff I’d just use OSC or TUIO, but there are hundreds of ways to skin that particular cat, and none of them are particularly taxing, so you should be fine with whichever method you go with.
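
To give a sense of how trivial, here’s a minimal sketch of the receiving side using Cinder’s OSC block (the address /wall/zoom, the port, and the message layout are all made up; your JavaScript UI could reach it through a small WebSocket-to-OSC relay):

```cpp
#include "cinder/app/App.h"
#include "cinder/app/RendererGl.h"
#include "cinder/osc/Osc.h"

using namespace ci;
using namespace ci::app;

class ControlledApp : public App {
  public:
    ControlledApp() : mReceiver( 10001 ) {}

    void setup() override
    {
        // Route zoom commands from the external interface into app state.
        mReceiver.setListener( "/wall/zoom", [this]( const osc::Message &msg ) {
            mFocus = vec2( msg[0].flt(), msg[1].flt() );
            mZoom  = msg[2].flt();
        } );
        mReceiver.bind();
        mReceiver.listen();
    }

  private:
    osc::ReceiverUdp mReceiver;
    vec2             mFocus = vec2( 0 );
    float            mZoom  = 1.0f;
};

CINDER_APP( ControlledApp, RendererGl )
```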

Thank you for these excellent resources. I checked all my data sources and I will need an alpha channel.

Yeah, I think the encoder they provide expects a folder of PNGs or something, doesn’t it? I ended up implementing my own encoder because it was part of a larger pipeline, so I can’t remember exactly.

If you run into trouble at that resolution, you can always encode at half resolution and render at double. Depending on your animation, how you’re drawing things, and your hardware, it can look nearly indistinguishable from full resolution.
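
In Cinder terms, that just means letting the destination rect do the 2x upscale (a sketch; the asset name is a placeholder, and linear filtering is actually the texture default):

```cpp
#include "cinder/app/App.h"
#include "cinder/gl/gl.h"
#include "cinder/ImageIo.h"

using namespace ci;

void drawUpscaled()
{
    // Linear filtering lets the GPU smooth the 2x upscale.
    auto fmt = gl::Texture2d::Format().minFilter( GL_LINEAR ).magFilter( GL_LINEAR );
    static auto halfRes = gl::Texture2d::create(
        loadImage( app::loadAsset( "frame_half.png" ) ), fmt );

    // The destination rect stretches the 2880 x 1620 frame across the full wall.
    gl::draw( halfRes, app::getWindowBounds() );
}
```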

Well, since you’re drawing the final frame that winds up on the screen, you can do whatever you want. This is the misery and the beauty of being a low(ish)-level graphics programmer.

There are so many ways to go about this that I really don’t know where to point you, but is there anything stopping you from just gl::draw-ing a texture on top of your video, or am I underthinking your problem?
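
i.e. something like this (the textures are placeholders for whatever frames you’re compositing; note that straight vs. premultiplied alpha matters here):

```cpp
#include "cinder/app/App.h"
#include "cinder/gl/gl.h"

#include <vector>

using namespace ci;

// 'base' and 'overlays' stand in for the current frame of each data set.
void drawLayered( const gl::Texture2dRef &base,
                  const std::vector<gl::Texture2dRef> &overlays )
{
    gl::draw( base, app::getWindowBounds() );

    // Standard (straight) alpha blending for the overlays; if your frames
    // are premultiplied, use gl::ScopedBlendPremult instead.
    gl::ScopedBlendAlpha blend;
    for( const auto &overlay : overlays )
        gl::draw( overlay, app::getWindowBounds() );
}
```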