Hardware (PC, CPU, GPU) recommendations for 6 x 1080p output?


We are picking Cinder back up for a project. (Really excited about that, as it’s been a while.)
Our goal is to display the graphics app across 6 x 1080p projectors.

I was wondering if anyone here has a recommendation for the computer we need to build. Are there any graphics cards we should or shouldn’t use (e.g. ones Cinder has problems with, or excels with)?
All projectors are edge-blended into each other, so we need to make sure the graphics stay in sync.
The design/motion is 2D-ish, with some creative text elements, displacement maps, etc. Nothing too heavy.

The project is happening rather soon, so we need hardware that’s available and doesn’t have crazy wait times.

Please forgive me if this is the wrong place for hardware questions, but I figured it’s very much Cinder related.


Hey Daniel,

I’m working on a touchwall project now which has 8 x HD displays.
In the past I would use an exotic graphics card with multiple outputs, but now I’m running off a standard GeForce RTX 2060.
I’m using two Datapath Fx4 splitters to split the GeForce’s two 4K outputs across the 8 screens.
I know the people at Bluecadet have been using this type of setup for their touchwalls.



This is interesting - @lab101, do the two 4K outputs render in sync? How do you avoid tearing across screens?

We’ve always used NVIDIA Quadros (two RTX Quadro 5000s are typical for the type of setup @dscheibel describes), but yeah, they take a while to get your hands on, and information on how to build your own PC with them is scarce, though I’ve seen it done. I’ve always wanted to be able to use the gamer line of cards but didn’t know you could overcome the tearing issues.


I didn’t notice any tearing in the app, but it’s not an application with fast motion.
It’s my first setup running Linux, in order to benefit from HW decoding with GStreamer.

I’m using the Xinerama option in my xorg.conf, which merges the two screens into one desktop.
I’m guessing the syncing is handled there as well.
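For reference, a minimal sketch of what that looks like in xorg.conf (the `Xinerama` option in a `ServerFlags` section is documented in xorg.conf(5); whether it lives in `ServerFlags` or `ServerLayout` can depend on the driver, so treat this as a starting point rather than a complete config):

```
Section "ServerFlags"
    # Merge all configured screens into one logical desktop
    Option "Xinerama" "on"
EndSection
```

With the NVIDIA proprietary driver you may also see this set per-layout; check the driver README for the variant that matches your setup.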

It was a bit of a risk for me, running on Linux and a single video card, since I had never tried this combination before, but it’s been running since September without major issues, so that’s nice :slight_smile:



Tearing would come into play if there was more than one GPU involved. Since your setup is single-GPU, the buffer swap happens in sync and is then split, so I believe there’s no need for an exotic setup like the one Rich mentions.

Also, if you are using the built-in GStreamer player without a custom GStreamer pipeline that takes advantage of nvdec (the NVIDIA-specific GStreamer decoder), then the decoding is not happening on the GPU. You still get some HW optimisations, like color space conversion, which can definitely help with performance, especially at larger video resolutions.
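To make that concrete, a pipeline of the kind I mean might look roughly like this for an H.264 file (a sketch, not a drop-in command: the filename is hypothetical, `nvdec` comes from GStreamer’s NVIDIA plugin set, and the exact elements available vary by GStreamer version and driver install):

```
# Demux an MP4, decode H.264 on the GPU with nvdec,
# and keep the frames on the GPU through the GL sink
gst-launch-1.0 filesrc location=clip.mp4 ! qtdemux ! h264parse \
    ! nvdec ! glupload ! glcolorconvert ! glimagesink
```

The point is that `nvdec` replaces a CPU decoder like `avdec_h264`, and the `gl*` elements avoid copying frames back to system memory.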

There is a way to use GstPlayer with a custom pipeline to get zero-copy decoding and texture upload, and the interface is there, but since the player is wrapped around the QuickTime interface it’s not exposed externally by default; you would need to include GstPlayer directly in your app to take advantage of this functionality. It’s something I have used extensively on the NVIDIA Jetson, since it really helps with performance on that device.

Just some info :slight_smile:


Oh, I didn’t know it wasn’t actually decoding fully on the GPU. Performance is still okay with multiple videos.
If I ever have a new large project with multiple videos, I will hire you to help me with the video part :wink: