Live-coding is gaining traction at the intersection of art and technology. It's useful in performance (where I use it) and for quick prototyping (e.g. the interest in the node-based UI thread (1)).
To this end, I've created oschader, a project that controls Cinder at runtime via OSC. It defines Programs that operate on FBOs and take a number of inputs (textures, floats, etc.). The key feature is the ability, much like in a node-based UI, to define a base layer and then apply a chain of effects to it. Here's a video that might help you visualize what's going on.
Here's where the RFC comes in: what would you want from a Cinder block that lets you experiment with shaders at runtime? This will be my first Cinder block, so I'm not sure how to balance what to leave in for completeness against what to leave out for simplicity. A few specific questions spring to mind.
Right now it uses OSC messages as its control mechanism (sent by oscillare (2), a Haskell-based control system). Should the block include the OSC messaging system itself, or leave users to implement their own controller?
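For context, the control surface boils down to address-pattern-to-handler dispatch. Here is a minimal sketch of what the block could expose if it kept the messaging layer; the names are hypothetical, and the actual network transport (UDP, via Cinder's osc block or otherwise) is deliberately omitted so the dispatch model stands alone.

```cpp
#include <functional>
#include <map>
#include <string>
#include <vector>

// Hypothetical sketch of the control layer: OSC-style messages carry an
// address pattern and float arguments; handlers are registered per address.
struct ControlMessage {
    std::string address;     // e.g. "/program/base" or "/uniform/time"
    std::vector<float> args;
};

class ControlRouter {
public:
    using Handler = std::function<void(const std::vector<float> &)>;

    void setListener(const std::string &address, Handler h) {
        handlers_[address] = std::move(h);
    }

    // Returns true if a handler was registered for the message's address.
    bool dispatch(const ControlMessage &msg) {
        auto it = handlers_.find(msg.address);
        if (it == handlers_.end())
            return false;
        it->second(msg.args);
        return true;
    }

private:
    std::map<std::string, Handler> handlers_;
};
```

If the block shipped only this router, any controller (oscillare, MIDI, a GUI) could feed it messages, which is one way to split the difference in the question above.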
Since I've mainly used this to complement live audio performance, it has microphone, camera, and image-file inputs. Which of these should be included, and which left to the user to add? Ideally, whichever way it's done, there would be an input state that the user could extend and modify.
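One possible shape for that extensible input state is sketched below: the block updates a few built-in inputs each frame, and users publish their own named values alongside them. Everything here (class name, key names, float-only values) is an assumption for illustration, not oschader's actual API.

```cpp
#include <map>
#include <string>

// Hypothetical sketch of an extensible input state: built-in inputs such
// as microphone volume live next to user-defined named inputs, and shaders
// would read any of them as uniforms.
class InputState {
public:
    // Built-in input, updated by the block each frame (assumption).
    void setMicVolume(float v) { values_["mic/volume"] = v; }

    // User extension point: publish any named float input.
    void set(const std::string &name, float v) { values_[name] = v; }

    float get(const std::string &name, float fallback = 0.0f) const {
        auto it = values_.find(name);
        return it == values_.end() ? fallback : it->second;
    }

private:
    std::map<std::string, float> values_;
};
```

With something like this, the microphone/camera/image inputs could ship as optional defaults while a user's MIDI controller or sensor data slots in through the same interface.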
Will this be useful as a Cinder block? Should it be two blocks instead? Or is it so specific that it's not really generally useful?
Thanks for taking the time to read this, I look forward to hearing suggestions, criticisms, or anything else.