Seaquence for iOS

I’d like to share a project that @quilime and I released this week.

Seaquence is a music app for iOS that enables you to compose sounds with collections of organic creatures, each with their own synth voice. It’s a full C++ rewrite of a Flash website that I co-created with Gabe Dunne and Dan Massey some years ago. More info here.

Obligatory App Store download link (It’s free)

Seaquence is built with Cinder from the ground up. We use ci::app, ci::audio, and ci::gl pretty heavily, along with a couple of blocks that I maintain: Cinder-PureDataNode and Cinder-NanoVG.

Synth / Pure Data

Audio in Seaquence is synthesized in real time using Pure Data (via libpd) and piped into Cinder’s audio graph via a custom Node. I designed the initial sequencer implementation in Pd, but the current synth is mostly @quilime’s work. My knowledge of synthesis has improved a lot since starting this project, but Gabe really took the lead in designing the key features.
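For context on what that bridging Node has to do: Cinder’s audio buffers are non-interleaved (one contiguous block of samples per channel), while libpd’s `libpd_process_float()` works on interleaved frames, so samples get shuffled on the way in and out. A minimal, stdlib-only sketch of the interleave step (the function name and layout assumptions are mine for illustration, not Cinder-PureDataNode’s actual code):

```cpp
#include <cstddef>
#include <vector>

// Hypothetical helper: convert a channel-major (non-interleaved) buffer,
// like ci::audio::Buffer stores, into the interleaved frame layout that
// libpd expects. Purely a sketch of the idea, not the block's real code.
std::vector<float> interleave( const std::vector<std::vector<float>> &channels )
{
    const size_t numChannels = channels.size();
    const size_t numFrames   = channels.empty() ? 0 : channels[0].size();

    std::vector<float> interleaved( numChannels * numFrames );
    for( size_t frame = 0; frame < numFrames; ++frame )
        for( size_t ch = 0; ch < numChannels; ++ch )
            interleaved[frame * numChannels + ch] = channels[ch][frame];
    return interleaved;
}
```

The reverse (de-interleave) runs on libpd’s output before handing samples back to the audio graph.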

Pd always looks pretty messy, but I think our patch is actually fairly clean. You can see here our clock implementation, wavetables, sequencer, tone allocation, and fx chain.
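For readers who haven’t built one: the sequencer portion of a patch like this boils down to a clock that advances a step index, wrapping at the pattern length, and fires a note on each active step. A tiny C++ sketch of that logic (purely illustrative; the real implementation lives in the Pd patch, and these names are made up):

```cpp
#include <cstddef>

// Illustrative sketch of step-sequencer logic: a clock tick advances a
// step index, wrapping at the pattern length, and active steps fire notes.
struct StepSequencer {
    size_t numSteps = 16;
    size_t current  = 0;
    bool   steps[16] = {};   // true = this step triggers a note

    // Advance one step per clock tick; returns whether a note fires.
    bool tick()
    {
        bool fire = steps[current];
        current = ( current + 1 ) % numSteps;
        return fire;
    }
};
```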

Layout / Debug GUI

On a project this complex, compile times get annoying real fast. Fortunately @Simon wrote Cinder-ImGui which makes integrating Dear ImGui into Cinder projects super easy.

Over time our “Debug GUI” grew into a fairly flexible layout editor. The ability to move UI components around, change colors, test different screen sizes, and inspect values while the app is running became integral to our workflow. On future apps I would actually go even deeper into building custom tooling; the time invested comes back in quality and productivity.
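The post doesn’t show the tooling internals, but a common way to structure this kind of debug GUI is a registry of named, live-editable values that an ImGui panel walks every frame. A hypothetical, stdlib-only sketch (in a real app the visitor would call something like `ImGui::DragFloat` per entry; none of these names are Seaquence’s actual code):

```cpp
#include <functional>
#include <map>
#include <string>

// Hypothetical sketch of how a debug GUI can expose live app values:
// components register named float pointers, and a Dear ImGui panel
// iterates the registry each frame to draw a widget per entry.
class TweakRegistry {
  public:
    void add( const std::string &name, float *value ) { mValues[name] = value; }

    // In a real app this visitor would call ImGui::DragFloat( name, value )
    // for each entry; here we just hand each value to a callback.
    void forEach( const std::function<void( const std::string &, float & )> &fn )
    {
        for( auto &kv : mValues )
            fn( kv.first, *kv.second );
    }

  private:
    std::map<std::string, float *> mValues;
};
```

Because the registry holds pointers into live state, edits made in the GUI take effect immediately while the app runs.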

Profiling
If you haven’t used a performance visualizer on your code, I highly suggest doing so. Visualizers let you see a timeline of code execution across all threads of your application. They make it much easier to track down performance problems that aren’t apparent in a traditional profiler, such as lock contention and graphics pipeline stalls.
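The core mechanism behind these timeline visualizers is simple: an RAII marker records enter/exit timestamps for a named scope on each thread, and the collected events are later drawn as bars on a per-thread timeline. A rough illustrative sketch of that mechanism (not Microprofile’s actual API):

```cpp
#include <chrono>
#include <cstdint>
#include <string>
#include <utility>
#include <vector>

// One recorded region: a name plus enter/exit timestamps.
struct ScopeEvent {
    std::string name;
    int64_t     startNs, endNs;
};

std::vector<ScopeEvent> g_events;  // per-thread in a real profiler

// RAII marker: constructing it stamps "enter", destruction stamps "exit".
class ProfileScope {
  public:
    explicit ProfileScope( std::string name )
        : mName( std::move( name ) ), mStart( std::chrono::steady_clock::now() ) {}

    ~ProfileScope()
    {
        using namespace std::chrono;
        auto end = steady_clock::now();
        g_events.push_back( { mName,
            duration_cast<nanoseconds>( mStart.time_since_epoch() ).count(),
            duration_cast<nanoseconds>( end.time_since_epoch() ).count() } );
    }

  private:
    std::string mName;
    std::chrono::steady_clock::time_point mStart;
};
```

Real profilers wrap this in a macro, use lock-free per-thread buffers, and stream the events out for display, but the scoped-marker idea is the same.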

We used Microprofile, an open source profiler. There are also commercial options like Telemetry, but for a small team like ours the license was prohibitively expensive.

Microprofile runs a web server inside your app; a browser connects to it over a WebSocket. The detailed timeline view looks like this:


I looked at many, many different frameworks, game engines, and languages before deciding to start work with C++, Cinder, and Pure Data. I’m really happy that we ended up using this stack. Kudos to the Cinder team for making such a robust and portable framework \(^__^)/

I’m happy to answer any questions about Seaquence, how we used Cinder, our process, iOS dev, or any of the other tools we used.



Hi, looks very cool!

Since you mentioned you ended up really happy with the tech you chose: what were your arguments for using C++ and Cinder instead of “native” iOS technologies like Obj-C/Swift with their libraries? (I’m guessing the audio library was crucial.)

Also—what would you say was the most challenging thing with this project?

Congrats on releasing btw!

The simple answer is that we didn’t want to be tied to any platform. We released first on iOS because that’s the largest market for mobile music apps, but I’d like to bring Seaq to other platforms. Using Swift/Obj-C would have meant that porting to Windows, Android or whatever else would entail a complete rewrite, while C++/Cinder/Pd can run anywhere.

We do actually use some platform code. For example, we use NSURLSession for HTTP requests, as it’s simpler than rolling our own or using a cross-platform library (most are some combination of bloated, incomplete, or awkward).
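One way to keep that platform code contained, sketched hypothetically (the interface and names here are mine, not Seaquence’s actual code), is a small abstract client with per-platform implementations; on iOS/macOS the concrete subclass would wrap NSURLSession in an Objective-C++ (.mm) file:

```cpp
#include <functional>
#include <string>

// Hypothetical cross-platform facade over platform HTTP APIs.
class HttpClient {
  public:
    using Handler = std::function<void( int status, const std::string &body )>;
    virtual ~HttpClient() = default;
    virtual void get( const std::string &url, const Handler &onComplete ) = 0;
};

// On Apple platforms the real subclass would call into NSURLSession;
// other platforms supply their own. For illustration, a stub that
// "answers" every request immediately:
class StubHttpClient : public HttpClient {
  public:
    void get( const std::string &url, const Handler &onComplete ) override
    {
        onComplete( 200, "stub response for " + url );
    }
};
```

The rest of the app only ever sees `HttpClient`, so a port swaps one translation unit rather than touching call sites.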

Honestly, just finishing over the last six months to a year has been the most challenging part for me… This is the first app release for both of us, and we seriously underestimated the amount of work it takes to bring something to market. There were roadblocks, we made mistakes, and sometimes it felt like it might not happen at all, even after most of the code was written.

Very cool, thanks for answering!

I have no doubts that actually finishing something and releasing it to the public is a ton of work. Congrats on actually going the extra mile instead of just ending it as “an experiment”. Good job!

I downloaded it on my 5s and it runs very well, the ui feels very “snappy”, love it.

This is cool!

I have been using NanoVG for a project that I am working on and was running into performance issues on older phones, mostly from the triangulation overhead, but the shaders were also running a bit slow. Was there anything special you did to mitigate that?


Indeed, NanoVG isn’t really optimized for mobile. Because of this most of our graphics are rendered by NVG once and then cached to a proxy texture. There’s an example of how to do render-to-texture with NVG in the Cinder-NanoVG samples.
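The caching pattern described above amounts to a dirty flag: redraw with NanoVG only when the content changes, otherwise reuse the cached proxy texture every frame. An illustrative sketch (in the real app the expensive draw would target something like a `ci::gl::Fbo`; here a callback and counter stand in, and the names are made up):

```cpp
#include <functional>
#include <utility>

// Sketch of render-to-texture caching: the expensive vector draw runs
// only when the graphic is dirty; every other frame just reuses the
// cached proxy texture.
class CachedGraphic {
  public:
    explicit CachedGraphic( std::function<void()> drawWithNvg )
        : mDrawWithNvg( std::move( drawWithNvg ) ) {}

    void invalidate() { mDirty = true; }

    // Called every frame.
    void draw()
    {
        if( mDirty ) {
            mDrawWithNvg();  // expensive: re-render into the proxy texture
            mDirty = false;
        }
        // ...then draw the cached texture (cheap) every frame.
    }

  private:
    std::function<void()> mDrawWithNvg;
    bool                  mDirty = true;
};
```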

There are also quite a few graphics that change fairly frequently, or aren’t on screen for very long. Those don’t get proxied and we just eat the cost of recalculating and uploading the vertices every frame. The lifeforms and ripple background are our most expensive graphics to render, so those have been implemented fully in GL using simple distance functions in UV space, with a couple backing VBOs per lifeform.
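For anyone unfamiliar with the distance-function approach: each fragment evaluates a signed distance in UV space and maps it to coverage with a narrow smooth edge. The real thing would be a GLSL fragment shader; this C++ rendition of the same math is just for illustration and isn’t Seaquence’s shader code:

```cpp
#include <algorithm>
#include <cmath>

// Signed distance from UV point (u, v) to a circle: negative inside,
// positive outside. This is the kind of function a fragment shader
// evaluates per pixel.
float circleSdf( float u, float v, float cx, float cy, float radius )
{
    float dx = u - cx, dy = v - cy;
    return std::sqrt( dx * dx + dy * dy ) - radius;
}

// Map distance to opacity across a narrow soft edge, like GLSL's
// smoothstep, to get antialiased shapes for free.
float coverage( float dist, float edge = 0.01f )
{
    float t = std::clamp( ( edge - dist ) / ( 2.0f * edge ), 0.0f, 1.0f );
    return t * t * ( 3.0f - 2.0f * t );
}
```

Because the shape is evaluated analytically per pixel, there is no triangulation to recompute when it animates; only a quad (or a small VBO) needs uploading.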

Great application! I am just starting out with Cinder-ImGui for an iOS project, but I can’t seem to get touch-screen input working with the GUI in the given examples. Do you have any hints? Many thanks.

Hey, Mirek. We only used Cinder-ImGui for the macOS build of the app. I don’t think it currently has support for touch input. ImGui is definitely meant for use with a single mouse input at the moment, although they have a demo for iOS.

If you really need it to run on iOS you could try disabling multitouch so that Cinder simulates MouseEvents, or you could use a specific touch id to simulate mouse events yourself.
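The second option amounts to latching onto one touch id and forwarding only that finger’s events as mouse input. A hypothetical sketch, with plain structs standing in for Cinder’s TouchEvent/MouseEvent types:

```cpp
#include <cstdint>
#include <optional>

struct Touch { uint32_t id; float x, y; };

// Latch onto the first touch that begins and treat only that finger as
// the "mouse"; other touches are ignored until it ends. Illustrative
// sketch only, not Cinder's API.
class TouchToMouse {
  public:
    // Returns true when the touch should be forwarded as a mouse event.
    bool began( const Touch &t )
    {
        if( mActiveId ) return false;  // already tracking a finger
        mActiveId = t.id;
        return true;                   // forward as mouseDown
    }

    bool moved( const Touch &t ) { return mActiveId && *mActiveId == t.id; }

    bool ended( const Touch &t )
    {
        if( !mActiveId || *mActiveId != t.id ) return false;
        mActiveId.reset();
        return true;                   // forward as mouseUp
    }

  private:
    std::optional<uint32_t> mActiveId;
};
```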


@Mirek It looks great! Thank you very much for the explanation of how you handle the graphics. I am implementing a very similar style of game, but with vg-renderer instead of NanoVG. I did not get the lifeform part though: are you generating the VBOs from NanoVG, or are they handled entirely outside NanoVG using SDFs?