A short introduction as I’m new here: I’ve been experimenting with Cinder for a couple of years and absolutely love this framework. Many thanks to all who contribute to and develop this amazing software tool, and for making it available for use in commercial products with no strings attached…kudos!!
My primary use of Cinder is for an audio analyzer application. At the moment I’m experimenting with both the built-in Cinder audio via ‘MonitorSpectralNode’ and FMOD streams to get the FFT data into my app.
One of the issues I’m running into with FMOD is that it isn’t aware of audio coming from the “line in” input device; it will only work with audio files that are playing through the system. I’ve been looking through the FMOD forums, etc. and have only found some old workarounds which I’m not sure are still valid (but I plan to explore them in the next few days).
Is there anyone here that can provide some direction/help to get FMOD working with a “line-in” audio stream? Thanks in advance…!
I can’t help much with FMOD specific questions (although I know a number of active members are quite familiar with that library), but if you’ve got any questions about things related to ci::audio then I’d be happy to try to help. The InputAnalyzer sample is quite basic, but should be a good starting point for setting up realtime line input -> spectral analysis.
Thanks for the reply. I’ve been using the InputAnalyzer project as a basis for my experiments with Cinder FFT, as well as using the utility code from that sample for drawing the spectrum data.
I also adapted the FMOD portion of this AudioAnalyzer sample project (https://forum.libcinder.org/topic/moving-audio-fft-animation) to retrieve the FFT data and draw it with the same utility/drawing code provided with the InputAnalyzer sample (with simple modifications to accommodate the Channel32 float data type used in the AudioAnalyzer/FMOD sample vs. the vector of floats used in the InputAnalyzer sample).
I’m finding the FFT data returned by ci::audio and by the FMOD methods seems quite different, as the rendered output of each looks visually different, likely due to differing FFT window types, numbers of bins, window sizes, etc.
I’ve been playing (and need to play more) with the ci::audio FFT options, since the FMOD FFT display appears to have better resolution and detail/granularity in the rendered bins (I can attach a screenshot of the output of the FMOD and ci::audio projects, which will show this better than I can explain, if that would help).
I see that there is a cinder::audio::dsp::WindowType enumeration for the different FFT windowing types but am unsure how to use/set them on an object (I’ve experimented with this but cannot figure it out); can you provide some insight on this?
My app is an audio differential analyzer which shows the differences between a source input and a processed output. Ideally I’d like to use ci::audio methods exclusively, but I’m curious why the FMOD FFT and ci::audio displays look as different as they do, and whether there are FFT options you can suggest I look into more deeply.
I’m not exactly sure what you’re seeing with FMOD’s FFT stuff, but in general you’re never visualizing an ‘FFT’ itself; you’re looking at the magnitude spectrum that is produced after the FFT’s complex output is converted from rectangular to polar coordinates.
That said, as you mention, there are a number of variations that could cause the outputs to look different. Windowing, the size of the window, the amount of ‘zero padding’ (extra samples at the end of each analyzed window to create a smoother output), and overlap are all common parameters, and all are configurable in ci::audio::MonitorSpectralNode via the Format that you pass in when you create the node. There is also a smoothing factor that is purely a visual improvement: it controls how much of the last frame of magnitude spectral data is mixed with the current one, so that the samples don’t jump around too much.
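Putting those together, a sketch of how the parameters are typically set (the specific values here are arbitrary, and this assumes the current ci::audio API as used in the InputAnalyzer sample):

```cpp
#include "cinder/audio/Context.h"
#include "cinder/audio/MonitorNode.h"

using namespace ci;

void setupMonitor()
{
    auto ctx = audio::Context::master();

    // Window size, FFT size (>= window size; the extra samples act as
    // zero padding), and window type are all set through the Format.
    auto format = audio::MonitorSpectralNode::Format()
                      .windowSize( 1024 )
                      .fftSize( 2048 )
                      .windowType( audio::dsp::WindowType::HANN );

    auto monitor = ctx->makeNode( new audio::MonitorSpectralNode( format ) );

    // The smoothing factor is set on the node itself, after creation:
    // 0 = no smoothing, values closer to 1 mix in more of the last frame.
    monitor->setSmoothingFactor( 0.5f );
}
```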
I don’t think it is a big problem that FMOD’s FFT visualization looks different; really what you want to do is understand what the data represents. You can do that by running test tones through it and gauging the output frequency and amplitude in your graph to make sure it is correct. For reference, take a look at this section of the cinder audio guide; at the end it explains how to convert a spectral bin to its frequency in hertz.
Thanks very much for the information rich.e, much appreciated.
I’ve been making progress with my analyzer but am having difficulty setting the FFT window type, even after inspecting:
This line compiles fine:
auto monitorFormat = audio::MonitorSpectralNode::Format().fftSize(1024).windowSize(2048);
But this one throws errors:
auto monitorFormat = audio::MonitorSpectralNode::Format().fftSize(1024).windowSize(2048).mWindowType(audio::dsp::WindowType::BLACKMAN);
With the errors:
Error 1 error C2248: ‘cinder::audio::MonitorSpectralNode::Format::mWindowType’ : cannot access protected member declared in class ‘cinder::audio::MonitorSpectralNode::Format’
Error 2 error C2064: term does not evaluate to a function taking 1 arguments
Apologies for what is likely a simple error/misunderstanding on my part, but I’ve been experimenting with changing the FFT window type via audio::MonitorSpectralNode::Format() in various ways and can’t get the syntax right.
Thanks balachandran_c, that was indeed it. I think I had first tried windowType (vs. mWindowType) to set the window type, but must have had another error in the syntax / overlooked something (likely staring me right in the face, ha), since I was getting compile errors. I’m new-ish to C++, having used C for some time, and sometimes have issues with the syntax. At any rate I’ve got it now, thanks again.
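For anyone landing on this thread later, the fix implied above can be sketched as follows: use the public, chainable windowType() setter on the Format rather than the protected mWindowType member (sizes copied from the original post for illustration):

```cpp
// mWindowType is a protected member, which is why the compiler reported
// C2248; the Format exposes a chainable windowType() setter instead:
auto monitorFormat = audio::MonitorSpectralNode::Format()
                         .fftSize( 1024 )
                         .windowSize( 2048 )
                         .windowType( audio::dsp::WindowType::BLACKMAN );
```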