Hm, I would imagine that wouldn’t work even with the call to ctx->postProcess() in your loop. The problem is that ci::audio::Context is designed for real-time use only, so no Node::process( buffer ) calls happen until the hardware output (the platform-specific OutputDeviceNode) pulls the entire graph.
This is a perfect use case for an offline audio context, which has been a long-planned feature (there’s an open github issue for it here). I actually needed the same thing (a time-mag spectrogram of an audio file), so I decided to explore what it would take to get this done. I managed to hack together an OfflineContext without modifying any of cinder’s source code, and while it works, it’s pushing me more towards wanting to redesign how ci::audio handles some things around Devices. The current design was heavily influenced by Web Audio (e.g. the typical audio::master()->getOutput()), but I think we’re approaching the right time to break away from that and move it in a direction that fits our needs, which are often at a lower level.
So I have some code that does a rough offline render of an audio graph, but I don’t think it’s ready to land in cinder core just yet. I still need to do some tests around buffer sizes, but in the meantime I’m thinking I’ll just upload it as a gist, if that would be helpful.