Spatial Audio in Cinder?


I would appreciate suggestions as to what library to use for Spatial Audio in Cinder (Android and iOS).



What are you looking for, more specifically? I assume that because you’re targeting mobile devices, you mean spatial in the sense of listening to a 3D position through stereo output (or on most devices, probably mono)? Most of that can be achieved with gain, panning, and perhaps lowpass filters, plus a pitch shift on moving objects to simulate the Doppler effect.
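To make the gain-and-pan idea concrete, here is a minimal sketch of equal-power stereo panning plus distance attenuation. This is an illustrative helper only, not part of Cinder's audio API; the azimuth convention (0 = straight ahead, positive = right, in radians) is an assumption:

```java
// Equal-power stereo panning: map a source azimuth to left/right gains.
// Illustrative only -- not part of Cinder's audio API.
public class Pan {
    // azimuth in radians, clamped to [-PI/2, PI/2] (hard left .. hard right)
    public static double[] equalPowerGains(double azimuth) {
        double a = Math.max(-Math.PI / 2, Math.min(Math.PI / 2, azimuth));
        // Map to a pan angle in [0, PI/2]: 0 = full left, PI/2 = full right
        double p = (a + Math.PI / 2) / 2.0;
        // cos/sin crossfade keeps total power (L^2 + R^2) constant
        return new double[] { Math.cos(p), Math.sin(p) };
    }

    // Simple inverse-distance attenuation, clamped inside 1 m
    public static double distanceGain(double meters) {
        return 1.0 / Math.max(1.0, meters);
    }
}
```

A centered source (azimuth 0) gets equal gains of about 0.707 on each channel, which sounds equally loud as a hard-panned source at full gain.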


I am recording spatial audio with a Zoom H2n microphone, which records 4 channels, together with 360 video. As the video plays on the device, the user can move the device around to gradually discover the whole view sphere. I want the audio to behave similarly, so yes, I think panning and gain would be the right words.

Since for now this should include video, the audio should stay in sync with it, something along the lines of this. If we take video out of the equation, something like this, which creates 8 virtual speakers, might work?



hello @eight_io,

I’ve just made a native Android app linking spatial audio to 360 video. I used the Facebook 360 Spatial Audio SDK for the audio part, which you can find here for free. They also offer a bunch of tools for converting other formats into their own TBE format. With the latest Zoom H2n firmware (v2.0; check their website) you can record in the ACN/SN3D format, which the FB tools can work with.
The SDK works on iOS and Android, as well as in regular C++ apps, and plays back the *.tbe file. You can provide head rotation in the form of a quaternion to rotate the sound field. There is also the possibility to sync to an external clock (audio to video, or video to audio).
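As an aside, feeding head rotation in as a quaternion conceptually means rotating each source direction by the inverse of the head orientation before panning. A minimal quaternion sketch of that step, plain math and independent of the SDK:

```java
// Rotate a 3D vector by the inverse (conjugate) of a unit quaternion.
// This transforms a world-space source direction into listener space,
// which is what a spatializer needs before it pans the source.
public class HeadRotation {
    // q = {w, x, y, z}, assumed unit-length; v = {x, y, z}
    public static double[] rotateByInverse(double[] q, double[] v) {
        // The conjugate of a unit quaternion is its inverse
        double w = q[0], x = -q[1], y = -q[2], z = -q[3];
        // v' = v + w*t + cross(q.xyz, t), where t = 2 * cross(q.xyz, v)
        double tx = 2 * (y * v[2] - z * v[1]);
        double ty = 2 * (z * v[0] - x * v[2]);
        double tz = 2 * (x * v[1] - y * v[0]);
        return new double[] {
            v[0] + w * tx + (y * tz - z * ty),
            v[1] + w * ty + (z * tx - x * tz),
            v[2] + w * tz + (x * ty - y * tx)
        };
    }
}
```

For example, if the listener yaws 90° to the left, a source that was straight ahead ends up to the listener's right, which is exactly what this inverse rotation produces.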


@vjacobs Thanks! Incredibly useful info. I presume you’ve done all of this using Cinder.



hello, @eight_io, I made the prototype in Cinder, but for the final product I used Samsung’s Gear VR Framework, which gave me a bit tighter hardware integration with the Samsung devices I deployed on.


@vjacobs: While using the Facebook SDK, which were you slaving: audio or video? If no interruptions (stop/pause, etc.) were expected, did you need to synchronize the Cinder video to the Facebook SDK audio at all?





I was slaving audio to video. The video had spoken words from close-ups, so I needed tight sync. It was some time ago already, but I think there were sync issues when I wasn’t slaving the two.

FYI, this is the line from my project where the sync happens. The video player (mPlayer) is an instance of the standard Android MediaPlayer class. The audio player (Decoder) is an instance of the TBSpatDecoder class from the Facebook API. I had a chat with the developers of the API, and they told me it’s best practice to sync the audio as often as possible, e.g. every render frame.

Decoder.setExternalClockInMs((float) mPlayer.getCurrentPosition());
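For context, calling that line every render frame continuously resets the audio clock to the video position. If you only wanted to correct when drift exceeds some threshold, a tiny helper could gate the call. This helper is hypothetical (not part of the Facebook SDK or MediaPlayer); only setExternalClockInMs and getCurrentPosition in the line above come from the actual APIs:

```java
// Decide whether audio has drifted far enough from video to warrant a resync.
// Hypothetical helper -- not part of the Facebook 360 SDK or MediaPlayer.
public class SyncGate {
    private final float thresholdMs;

    public SyncGate(float thresholdMs) {
        this.thresholdMs = thresholdMs;
    }

    public boolean needsResync(float audioMs, float videoMs) {
        return Math.abs(audioMs - videoMs) > thresholdMs;
    }
}
```

Per frame you would then check `gate.needsResync(audioMs, (float) mPlayer.getCurrentPosition())` before calling `Decoder.setExternalClockInMs(...)`, though per the developers' advice quoted above, syncing unconditionally every frame is also fine.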