[Audio] Process audio file all at once

Hello!

I would like to process an entire audio file and store the data.
For now I'd be happy to just call getMagSpectrum() on a MonitorSpectralNode connected to a BufferPlayerNode.

But for some reason I am not able to do something like this:

    vector<vector<float>> sampleFft;
    for( size_t i = 0; i < 1024; i++ )
    {
      mBufferPlayerNode->start(); // probably not necessary to call start() here
      mBufferPlayerNode->seek( i * mBufferPlayerNode->getNumFrames() / 1024 );
      sampleFft.push_back( mMonitorSpectralNode->getMagSpectrum() );
    }

For some reason all the floats in the vectors inside sampleFft are equal to zero…
Am I missing something?
Is there a better way to analyse an entire audio file?

Cheers
L

EDIT:

Looks like if I call ctx->postProcess() inside the loop, just before calling getMagSpectrum(), I get the desired result. Makes sense: the context needs to process the node graph so that every node gets updated.
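
For reference, the working loop looks roughly like this (a sketch, assuming ctx is the master context from ci::audio::master()):

    auto ctx = ci::audio::master();
    vector<vector<float>> sampleFft;
    mBufferPlayerNode->start();
    for( size_t i = 0; i < 1024; i++ )
    {
      mBufferPlayerNode->seek( i * mBufferPlayerNode->getNumFrames() / 1024 );
      ctx->postProcess(); // let the context process the graph so the spectral node is updated
      sampleFft.push_back( mMonitorSpectralNode->getMagSpectrum() );
    }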

I still wonder if there is a better way to achieve this.

Hm, I would imagine that wouldn't work even with the call to ctx->postProcess() in your loop. The problem is that the ci::audio::Context is designed for real-time use only, so basically no Node::process( buffer ) calls happen until the hardware output (the platform-specific OutputDeviceNode) pulls the entire graph.

This is a perfect use case for an offline audio context, which has been a long-planned feature (there's an open github issue for it here). I actually needed the same thing (a time-mag spectrogram of an audio file), so I decided to explore what it would take to get this done. I managed to hack together an OfflineContext without modifying any of cinder's source code, and while it's working, it's pushing me further towards wanting to redesign how ci::audio handles some things around Contexts, OutputNodes, and Devices. The current design was heavily influenced by Web Audio (e.g. the typical audio::master()->getOutput()), but I think we're approaching the right time to break away from that and move it in a direction that fits our needs, which are often at a lower level.
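
To illustrate what "offline" means here (purely a sketch, not the actual OfflineContext API; the names and setup below are assumptions): instead of the audio hardware's I/O callback driving the graph, you pull blocks from the output node yourself in a plain loop until the source is exhausted.

    // Hypothetical offline pull loop; names are illustrative only.
    const size_t framesPerBlock = 512; // stand-in for the hardware block size
    const size_t numChannels = 2;
    ci::audio::Buffer block( framesPerBlock, numChannels );

    for( size_t frame = 0; frame < sourceBuffer->getNumFrames(); frame += framesPerBlock ) {
      outputNode->pullInputs( &block ); // what the device I/O callback would normally trigger
      // consume 'block' here, e.g. record the current magnitude spectrum
    }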

So I have this code that sort of does an offline render of an audio graph, but I don't think it's ready to land in cinder core just yet. I still need to do some tests around buffer sizes, but for the time being I'm thinking of just uploading it as a gist, if that would be helpful.

cheers,
Rich

To update, I recently pushed this ad hoc OfflineContext class to a public personal repo here: https://github.com/richardeakin/mason/tree/master/src/mason/audio. You can see where I use it in the AudioSpectrogramView class to compute an STFT using an audio graph of a player node hooked up to a spectral monitor node. As I mentioned, it is still rough; I'm not sure the computation is exactly the same as a regular, real-time graph's, and there are things I'd do differently before this could make it into core. But I think it is a nice place to start and to think about how we can eventually formalize this functionality. If you end up using it, lemme know what you think!
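
For a rough idea of the shape of this, usage would look something like the sketch below (hypothetical; the class and method names around OfflineContext are guesses, so consult the repo for the real API):

    // Hypothetical usage sketch; check the repo for the actual API.
    auto ctx = std::make_shared<mason::audio::OfflineContext>();
    auto player  = ctx->makeNode( new ci::audio::BufferPlayerNode( sourceBuffer ) );
    auto monitor = ctx->makeNode( new ci::audio::MonitorSpectralNode() );

    player >> monitor >> ctx->getOutput();
    player->start();

    vector<vector<float>> spectrogram; // one magnitude spectrum per block -> STFT
    while( player->getReadPosition() < player->getNumFrames() ) {
      ctx->pullBlock(); // hypothetical: render one block of the graph offline
      spectrogram.push_back( monitor->getMagSpectrum() );
    }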

cheers,
Rich
