Smooth audio function

I need to smooth the audio input for my audio visualiser so that it is not as “reactive” as it currently is. A naive approach would be to extend a Buffer class and have it output a rolling average (perhaps combined with a preceding FilterNode). Is there a better approach to achieve this in cinder::audio?

Thanks.

–8

If you want to “smooth” the samples in a buffer, then a low-pass filter node would have that effect.

However, as you want to visualise the data, I imagine you probably want smoothed RMS values (or other extracted audio features) per buffer / per frame. I have had some pretty good results using a moving average or damping approach (see below).

smoothedValue += (currentValue - smoothedValue) * damping;

Where damping is between 0.0 and 1.0: values near 0.0 give the smoothest (but laggiest) result, while 1.0 applies no smoothing at all and is the noisiest.
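For reference, here is that line wrapped in a tiny helper (the function name and the usage comment are just illustrative, not part of cinder::audio):

// Exponential / “damped” moving average of a per-frame value.
// damping near 0.0 -> very smooth but laggy; 1.0 -> no smoothing at all.
float smooth( float smoothedValue, float currentValue, float damping )
{
    return smoothedValue + ( currentValue - smoothedValue ) * damping;
}

// Typical use once per update(), e.g. with the RMS from a MonitorNode:
//   mSmoothedVolume = smooth( mSmoothedVolume, mMonitor->getVolume(), 0.1f );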

However, these basic averaging / smoothing methods are always a tradeoff between smoothness and perceptible latency (lag) in the visuals. There are other methods for preserving transients etc. in real time, and using derivatives of audio feature changes can also lead to some interesting results (one simple variant is sketched below). I tend to find that different audio features such as RMS, Spectral Centroid, Spectral Slope etc. need different smoothing / analysis methods, or at least individual tweaking towards whatever you are trying to visualise with each one. So you may have to experiment a bit, or think about how you want to show / visualise the different aspects of the sound.
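For example (just a sketch, the coefficients are arbitrary): use a larger damping when the value rises and a smaller one when it falls, i.e. fast attack, slow release.

// Fast-attack / slow-release smoothing: jumps up quickly on transients,
// then decays slowly. Coefficients are arbitrary examples.
float smoothAsymmetric( float smoothedValue, float currentValue,
                        float attack = 0.8f, float release = 0.05f )
{
    float damping = ( currentValue > smoothedValue ) ? attack : release;
    return smoothedValue + ( currentValue - smoothedValue ) * damping;
}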

Also make sure you are using MonitorNode or MonitorSpectralNode to do the audio analysis for visualisation; otherwise you will have to worry about thread safety / locking when accessing your features.
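A rough sketch of wiring that up (node names and the Format values are just examples, so treat it as a starting point rather than a drop-in):

#include "cinder/audio/audio.h"

// Connect the input device to a MonitorSpectralNode; the node copies audio
// on the audio thread, so getVolume() / getMagSpectrum() can be read from
// update()/draw() without extra locking.
ci::audio::MonitorSpectralNodeRef setupMonitor()
{
    auto ctx = ci::audio::Context::master();

    auto input   = ctx->createInputDeviceNode();
    auto monitor = ctx->makeNode( new ci::audio::MonitorSpectralNode(
        ci::audio::MonitorSpectralNode::Format().fftSize( 2048 ).windowSize( 1024 ) ) );

    input >> monitor;   // analysis only, no need to route on to the output

    input->enable();
    ctx->enable();
    return monitor;
}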

Again, I’m not sure if you mean smoothing at the sample level or at the frame level, but I hope this helps.

F

http://felixfaire.com/

@felixfaire Thanks for your reply; I think it gives me a good intro to what I could try.

I am not sure what the difference is between a frame and a sample, as the docs seem to imply these are almost the same for the purposes of this discussion:

In both cases, there is a number of frames (a frame consists of a sample for each channel) and a number of channels that make up the layout of the Buffer

but I guess I am talking about frame-level smoothing (I don’t care about separating the channels at this point).

–8

Ah, sorry for my poor wording: I meant frame as in the whole buffer you would get per “frame” of your drawing. I should have used ‘buffer’ instead of ‘frame’.

The RMS, FFT, Spectral Centroid etc. are all computed on a whole buffer (a block of samples) for every drawn ‘frame’ / update() of your program. These are the values you will probably want to visualise.
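As a quick illustration of one such per-buffer feature, here is a spectral centroid computed from the magnitude spectrum the monitor node exposes (just a sketch; the bin-to-Hz conversion assumes the spectrum has fftSize / 2 bins):

#include <vector>
#include "cinder/audio/audio.h"

// “Brightness”-ish measure: magnitude-weighted mean frequency of the spectrum.
// Call once per update() with your MonitorSpectralNode.
float spectralCentroid( const ci::audio::MonitorSpectralNodeRef &monitor )
{
    const std::vector<float> &mag = monitor->getMagSpectrum();
    const float sampleRate = (float)ci::audio::Context::master()->getSampleRate();
    const float binWidth   = sampleRate / ( 2.0f * mag.size() );   // Hz per bin

    float weightedSum = 0, total = 0;
    for( size_t bin = 0; bin < mag.size(); ++bin ) {
        weightedSum += bin * binWidth * mag[bin];
        total       += mag[bin];
    }
    return total > 0 ? weightedSum / total : 0;
}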

F

Hey there,

MonitorSpectralNode has built-in smoothing via its setSmoothingFactor() property. It basically averages the previous frame with the current one based on some percentage, as Felix explained. For RMS (the volume property on MonitorNode), you can just increase the window size, which will give you a smoother result.
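Roughly something like this (the values are arbitrary examples, just to show the two knobs):

#include "cinder/audio/audio.h"

void setupAnalysis()
{
    auto ctx = ci::audio::Context::master();

    // Spectrum: blend each new magnitude spectrum with the previous one.
    auto spectral = ctx->makeNode( new ci::audio::MonitorSpectralNode() );
    spectral->setSmoothingFactor( 0.8f );   // 0 = no smoothing, towards 1 = smoother

    // RMS: a larger analysis window averages more samples per getVolume() reading.
    auto monitor = ctx->makeNode( new ci::audio::MonitorNode(
        ci::audio::MonitorNode::Format().windowSize( 4096 ) ) );

    // In a real app, store these as members and connect them to your input node
    // so they stay alive (see the graph sketch earlier in the thread).
}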

I am not sure what the difference is between a frame and a sample…

Frames are one slice in time (e.g. 44,100 of them per second at a 44.1 kHz sample rate), while samples are the actual sampled values, one per channel per frame.
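Illustrated on a ci::audio::Buffer (just a sketch; getNumFrames() / getNumChannels() / getChannel() are the usual accessors):

#include "cinder/audio/Buffer.h"

// Each frame holds one sample per channel; averaging the channels of a
// frame gives a single mono value for that point in time.
float averageFrame( const ci::audio::Buffer &buffer, size_t frame )
{
    float sum = 0;
    for( size_t ch = 0; ch < buffer.getNumChannels(); ++ch )
        sum += buffer.getChannel( ch )[frame];   // sample at (channel, frame)
    return sum / buffer.getNumChannels();
}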

@rich.e and @felixfaire: Thanks, guys!

–8