We did this project a while ago but wanted to post it together with the new website, which took a while to finish.
It’s our second museum touchwall, built in Cinder like almost all of our work.
As in our previous project, we used Node.js & PhantomJS to prerender all layouts (built in HTML) to PNG.
That saved us some render time in Cinder.
So if something changes in the CMS, we pull it through the templates and prepare PNGs for all possible cases.
Aside from the technical stuff, we tried to make the interface as playful as possible.
Visitors can select artworks by dragging on the screens and finding a good match (ratio).
You can read all about that here:
And there’s a video explaining it: https://vimeo.com/268780113
Love the idea for the “screensaver” mode. Curious about the use of Node.js to prerender the content. Great work on the image loading. Nice job!
Our Cinder app just displays images, so there is no live text rendering. We take the text from the CMS and put it in an HTML template; HTML is more flexible when working with images and markup. So we have thousands of images for different use cases (portrait, landscape, different colors, etc.). Disk space is cheap compared to live rendering big, complicated chunks of text on six screens.
Text gets re-rendered when somebody makes a change in the backend.
The exception is when the user creates a box on screen where the width is larger than the height: then the text is placed at the bottom of the image.
In that case we can’t foresee a text block with the correct size, because the user creates the bounding box.
For this we have a PhantomJS server which re-renders the HTML template on the fly.
We don’t have video in this project, but sometimes we have a Node.js app running in the background to convert all the videos to DXT format; again heavy on disk space, but easy on the CPU.
Very cool and clean dynamic layout, it really makes the content pop!
I’m curious how you handle loading all the matching images in real time while people are creating the boxes, and also what kind of touch hardware you used. Is it an IR frame?
Hey, it’s not so difficult: we keep all images in memory as Surfaces.
While dragging, we create a small-size texture.
If the image doesn’t change within x milliseconds, we upload a better version.
The overlay is ShadowSense; easy and fast enough for the drags.
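The two-tier texture idea can be sketched like this, in JavaScript for readability (the real app does this in Cinder/C++). The class name, the `settleMs` threshold, and passing timestamps in explicitly are all illustrative choices to keep the logic deterministic.

```javascript
// Sketch: show a cheap low-res texture immediately, and only upload the
// full-res version once the image under the drag has been stable for settleMs.
class TextureSlot {
  constructor(settleMs) {
    this.settleMs = settleMs; // how long the image must stay unchanged
    this.current = null;      // id of the image currently shown
    this.changedAt = 0;       // timestamp of the last image change
    this.quality = 'none';
  }

  // Called whenever the drag lands on a (possibly new) image.
  show(imageId, now) {
    if (imageId !== this.current) {
      this.current = imageId;
      this.changedAt = now;
      this.quality = 'low'; // small texture, uploaded right away
    }
  }

  // Called every frame; upgrades once the image has settled.
  update(now) {
    if (this.quality === 'low' && now - this.changedAt >= this.settleMs) {
      this.quality = 'high'; // upload the full-size texture from the Surface
    }
  }
}
```

Keeping the decoded Surfaces in RAM means the upgrade is only a GPU upload, never a disk read, which is why rapid dragging stays smooth.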