Can WaveTable store arbitrary audio, and change playback speed?

Following up from my previous thread, I noticed the WaveTable class in various samples, so I thought I could use it to load an audio sample and pitch-shift it (time stretching allowed) to replace my Pure Data approach. However, the only code samples I could find in Cinder deal with elementary waveforms, like sine, square, sawtooth, and so on. I was wondering if I could instead load my own wave table (from a wav file, or ogg on Android) and then dynamically change the playback speed (which is exactly what I'm doing in my Pure Data patch).

I saw there's a copyFrom method that takes an array of floats, but I'm not sure it does what I want.
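Roughly, this is what I had in mind (untested sketch; loadSampleIntoTable is just an illustrative helper, and the copyFrom() call is a guess at the signature, so check WaveTable.h):

```cpp
#include "cinder/app/App.h"
#include "cinder/audio/audio.h"

using namespace ci;

// Hypothetical helper: decode an audio file and copy its samples into a WaveTable.
void loadSampleIntoTable( audio::WaveTable *table )
{
	// Decode the sample into a float buffer using ci::audio's file loading.
	audio::SourceFileRef source = audio::load( app::loadAsset( "sample.wav" ) );
	audio::BufferRef buffer = source->loadBuffer();

	// Hand the raw samples of channel 0 to the wavetable. The exact
	// copyFrom() signature is an assumption here; verify against WaveTable.h.
	table->copyFrom( buffer->getChannel( 0 ), buffer->getNumFrames() );
}
```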

Ideas? Pointers to docs? Code samples?

The audio::WaveTable and audio::WaveTable2d classes were designed to support some of the GenNodes in ci::audio, mostly for building bandlimited waveform oscillators. As such, I'm not sure how generic they are, which is partly why they haven't been documented very well yet.

That said, I don't see any reason why GenTableNode couldn't be made to do what (I think) you're asking, but it really depends on what your samples are and what you intend to do with them.

Also keep in mind that many of those Node classes are extremely small in source code (GenTableNode's implementation is all of 20 lines at the moment), so feel free to make your own versions and add features as necessary. I’m sure if things end up being useful we can discuss how to get more features back into the core objects.
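As a rough starting point, a custom node is just a Node subclass that overrides process() (untested sketch; MySamplerNode is only a placeholder name, and for a pure generator you might prefer to derive from InputNode instead of Node):

```cpp
#include "cinder/audio/Node.h"
#include "cinder/audio/Buffer.h"

// Hypothetical custom generator node: fill the output buffer yourself.
class MySamplerNode : public ci::audio::Node {
  public:
	MySamplerNode( const Format &format = Format() )
		: Node( format )
	{}

  protected:
	// Called by the Context on the audio thread; fill `buffer` with samples.
	void process( ci::audio::Buffer *buffer ) override
	{
		float *channel0 = buffer->getChannel( 0 );
		for( size_t i = 0; i < buffer->getNumFrames(); i++ ) {
			channel0[i] = 0; // replace with your own table / sample lookup
		}
		// (Only channel 0 is written here; copy or fill the other channels as needed.)
	}
};
```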

Hm, in retrospect, I'm not after a wavetable synth, strictly speaking, but rather a sampler. However, as I understand it, the current version of BufferPlayerNode is unable to vary its playback speed, which is all I need in practice. I wonder how difficult (or trivial) it might be to modify that node, but my spidey sense suggests it may not be that easy. @rich.e, do you have an update on that issue?

It actually isn't that much work to add varispeed playback support directly. However, I was hoping to share some of that code with other things, like DelayNode, or anything else that could use real-time interpolation. There are also a few edge cases we'll have to work through, like loop points, reverse playback, and marking the end of file. But I think you could start from BufferPlayerNode and jam out the extra functionality you need without too much effort. I'm also starting to feel this is an important enough feature that we should get it into BufferPlayerNode however possible; it can be refactored later so that the interpolation code is shared and more feature-rich.
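To sketch the idea (untested, and not how BufferPlayerNode currently works), the core of varispeed playback is just a fractional read position advanced by a rate each frame, with linear interpolation between neighboring source samples. readVarispeed() below is only an illustrative helper:

```cpp
#include "cinder/audio/Buffer.h"
#include <algorithm>
#include <cstddef>

// Reads `numFrames` samples from channel 0 of `source` into `out`, advancing a
// fractional read position by `rate` each frame (1 = normal speed, 2 = one
// octave up, 0.5 = one octave down). The caller keeps `readPos` across calls.
// Loop points / end-of-file handling are left out for brevity.
void readVarispeed( const ci::audio::Buffer &source, float *out, size_t numFrames,
                    double &readPos, double rate )
{
	const float *in = source.getChannel( 0 );
	const size_t numSourceFrames = source.getNumFrames();

	for( size_t i = 0; i < numFrames; i++ ) {
		size_t index0 = (size_t)readPos;
		size_t index1 = std::min( index0 + 1, numSourceFrames - 1 );
		float frac = (float)( readPos - (double)index0 );

		// Linear interpolation between the two neighboring source samples.
		out[i] = in[index0] + frac * ( in[index1] - in[index0] );

		readPos += rate;
		if( readPos >= (double)numSourceFrames )
			readPos = 0; // naive wrap; real loop-point / EOF logic would go here
	}
}
```

Something like this could live inside a BufferPlayerNode-style process() override, with `rate` exposed as a Param so it can be modulated or ramped on the audio thread.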

For reference, I know of at least one easy-to-follow implementation: Web Audio's AudioBufferSourceNode. You can see the variable-speed playback processing code (driven by the playbackRate param on their AudioNode) here.