Performance warning: Shader recompiled

I’m working on an application that produces multiple warnings like the one below while it’s running:

cinder::gl::Context::debugMessageCallback[2162] Program/shader state performance warning: [Vertex/Fragment] shader in program [xx] is being recompiled based on GL state

I get the warning for both fragment and vertex shaders, and the program numbers vary. I’m using a scene graph to implement (among other things) a particle system. Each particle has a 3D motion, and each has its own batch (a ci::gl::BatchRef) as a member variable, created with a very simple shader based on the default Cinder shaders.

What causes this sort of warning? The application is also running in a large window (11520x3240), and I wonder if that’s affecting this, as well.

Thanks for any insights,

It occurs to me that it might be more efficient – since all the particles share a shader and geometry (ci::geom::Plane()) – to create one batch for the entire particle system, and then bind a new texture and change the model matrix as I draw each particle.

In addition to the texture, each particle also has a unique set of four color values for its vertices, which changes each frame. I’d therefore need to update those values in the batch for each particle, each frame. Would a single VBO batch still be the more efficient approach?



This can happen if your Vbo data changes between draw calls. OpenGL connects your mesh data to the shader inputs during compilation. The Batch stores these connections for improved efficiency. When either your mesh data or your shader changes, the driver needs to recompile the shader to reconnect things.

To avoid this, only create your Vbo or VboMesh once (one mesh containing all particles), then compile the shader and create your Batch. To move your particles, map the Vbo and write new data to it, then unmap it and render the Batch. Example code can be found here.

The advantage of using mapping is that the structure and memory location of your data does not change, so the OpenGL state does not change and the program does not need to be recompiled.

For even more efficient particles, you could update their positions in parallel on the GPU, using either a transform feedback shader or a compute shader. See the other two particle samples (ParticleSphereGPU and ParticleSphereCS) for more information.


Hi, Paul,

Thanks a million for your really helpful response. That’s an interesting code sample, and not one I’d looked at before (at least not recently). In my case, though, because each particle has its own individual animation, I’m setting the model matrix for each particle before drawing it, so I’m not sure I could draw the whole system in one draw call. I am mapping the color attributes for each particle each frame (similarly to how it’s done in this sample). Is that the sort of data change you’re talking about that triggers recompiling?

I was thinking of an approach more similar to the first example in the Cinder tutorial on batches (here), where one batch is used to draw every box, but the scale, translation, and color are set for each individual box. I’d make similar adjustments for each particle, except it would also entail changing the color attributes of the VboMesh’s vertices by mapping those attributes. My particle system also has far fewer particles than the sample’s 200k – only about 300.

Thanks for your help!