I would like to be able to record a screen-capture video (probably the contents of an FBO each frame) on iOS and save it to the device's storage / camera roll as .mp4, .mov, etc. I would also like to record the sound from the audio graph into the video output file. I understand this could be quite a daunting task, but I'm up to the challenge.
Any pointers from someone who has tried this before, or recommendations on where to start, would be much appreciated. (Does the QuickTime block work on iOS, for example? Which audio/video methods should I look at or stay away from?) Is this even possible?
Do you need to be able to do this programmatically from the device? If not, QuickTime Player supports recording audio/video from an iOS device over USB, and it works pretty well.
Thanks for the reply, unfortunately I was hoping to allow users to create and save videos from within the app.
Having looked into the QuickTimeAVF exporter example on iOS, it seems possible to record video, but I'm not sure how to add the audio output or how to combine the video output with a .wav, etc.
(Also, it seems the video addFrame() method can take an optional duration, which I should be able to use to keep the animation in sync with real time even at variable framerates.)
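Here's roughly the capture loop I have in mind, as a sketch. I'm assuming the AVF-backed qtime::MovieWriter mirrors the desktop API (create() / addFrame() / finish()) and that gl::Fbo::readPixels8u() is available in a recent Cinder; the duration bookkeeping is how I'd keep playback in sync at variable framerates:

```cpp
#include "cinder/app/App.h"
#include "cinder/gl/Fbo.h"
#include "cinder/qtime/AvfWriter.h"

using namespace ci;

class MovieCapture {
  public:
    void start( const fs::path &path, const gl::FboRef &fbo )
    {
        mFbo = fbo;
        mWriter = qtime::MovieWriter::create( path, fbo->getWidth(), fbo->getHeight() );
        mLastFrameTime = app::getElapsedSeconds();
    }

    // call once per frame, after rendering into the FBO
    void captureFrame()
    {
        double now = app::getElapsedSeconds();
        // pass the real elapsed time as this frame's duration so playback
        // stays in sync even when the framerate fluctuates
        float duration = static_cast<float>( now - mLastFrameTime );
        mLastFrameTime = now;
        mWriter->addFrame( mFbo->readPixels8u( mFbo->getBounds() ), duration );
    }

    void stop() { mWriter->finish(); mWriter.reset(); }

  private:
    gl::FboRef            mFbo;
    qtime::MovieWriterRef mWriter;
    double                mLastFrameTime = 0;
};
```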
I also noticed there is no audio recording / saving sample; is this complicated to do in Cinder?
There is audio::BufferRecorderNode; it is fairly basic and will allow you to record the stream into memory first, then save that to a .wav file. The only place you'll see it used in the repo is in the SampleTest. Much room for improvement here.
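A minimal sketch of how you'd use it (the function names and output path here are just placeholders; you'd pass in whichever node in your graph you want to tap):

```cpp
#include "cinder/app/App.h"
#include "cinder/audio/Context.h"
#include "cinder/audio/SampleRecorderNode.h"

using namespace ci;

// begin recording everything that 'source' produces
audio::BufferRecorderNodeRef beginRecording( const audio::NodeRef &source )
{
    auto ctx = audio::Context::master();

    // BufferRecorderNode is auto-pulled, so connecting the source into it
    // is enough; it doesn't also need to be routed to the output
    auto recorder = ctx->makeNode( new audio::BufferRecorderNode() );
    recorder->setNumSeconds( 60 ); // pre-allocate a minute of buffer
    source->connect( recorder );

    recorder->start();
    return recorder;
}

// stop, then save what was captured in memory out to a .wav
void endRecording( const audio::BufferRecorderNodeRef &recorder )
{
    recorder->stop();
    recorder->writeToFile( app::getDocumentsDirectory() / "capture.wav" );
}
```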
Looking forward, we'd definitely like to have the ability to record audio from a ci::audio::Node into a video (as well as read audio from a video into a ci::audio::Node). Currently this would be specific to a particular video implementation; with AVF, for example, we'd first need to find out what the API is for storing audio in the video file, as I don't know it. If anyone wants to look into this, we can work together to come up with a Node implementation.
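To make the discussion concrete, here's the rough shape I'd imagine such a Node taking. This is purely a proposal, nothing like it exists in the repo yet, and a real version would hand samples off through a lock-free ring buffer rather than calling out from the audio thread:

```cpp
#include "cinder/audio/Node.h"

#include <functional>

using namespace ci;

// Proposal: a tap that hands each processed audio buffer to a callback,
// which a video-writer backend (AVAssetWriter on iOS, say) could consume.
class VideoTapNode : public audio::Node {
  public:
    using BufferCallback = std::function<void( const audio::Buffer &, size_t sampleRate )>;

    VideoTapNode( const BufferCallback &callback, const Format &format = Format() )
        : Node( format ), mCallback( callback )
    {}

  protected:
    void process( audio::Buffer *buffer ) override
    {
        // runs on the audio thread; a real implementation should hand the
        // samples off through a ring buffer instead of calling out directly
        if( mCallback )
            mCallback( *buffer, getContext()->getSampleRate() );
    }

  private:
    BufferCallback mCallback;
};
```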