Shared OpenGL context woes

Hey Embers,

I’m working on integrating a new GUI library into a Cinder project. The overwhelming consensus on best practice is to share a single OpenGL context rather than create multiple ones. However, this causes some difficulty, because a lot of third-party software doesn’t match Cinder’s meticulous maintenance of OpenGL state. My understanding is that Cinder pushes and pops most OpenGL state changes, essentially ensuring that anything it sets is also unset.

A related topic points to Cinder’s AntTweakBar integration and its pushGlState() and popGlState() functions. Thanks to these, I’ve resolved a crash I was hitting when trying to render just a solid rectangle after returning from the GUI library’s render call.

I’m still having trouble with some basic texture rendering if I use this external library; see the images below.

The render code:

void CinderProjectApp::DrawScenario3()
{
	// We must rebind the default framebuffer: on exiting Gameface's Update()
	// call, the framebuffer is left bound to one of its back-buffers.
	glBindFramebuffer( GL_FRAMEBUFFER, 0 );

	ci::gl::clear( ci::Color( "pink" ) );
	ci::gl::color( ci::Color::white() );

	ci::gl::draw( mTex1, ci::Rectf( 50, 50, 250, 250 ) );
	ci::gl::draw( mTex2, ci::Rectf( 350, 250, 550, 550 ) );

	ci::gl::color( ci::Color( "Purple" ) );
	ci::gl::drawSolidRect( ci::Rectf( 450, 150, 500, 200 ) );
}

What I know so far:

  • It’s not a texture resource issue: RenderDoc shows me the textures are present during the render.
  • Despite the push/pop of state, the explicit glBindFramebuffer call must happen, as the framebuffer is left bound to one of the GUI library’s buffers once it concludes its offscreen rendering.

My working theory is that it could be:

  • A badly bound VAO?
  • An OpenGL state change affecting how the textures are sampled?
  • An OpenGL state change affecting how the GLSL program executes?
  • Cinder not properly pushing/popping/invalidating some state? From a brief glance at Cinder’s push/pop code, it appears to check whether the value being pushed is already on top of the stack and, if so, perhaps not push it. I’m very uncertain about this.

I’m very open to any suggestions, particularly if there’s a way to make the state push/pop more robust. I’ve also attached the RenderDoc captures if anyone’s interest is piqued.

Thanks in advance,

You can call gl::context()->sanityCheck() and Cinder will report which state changes have been made that differ from its internal idea of the state. It’s not exhaustive, but it should catch some of the major changes. In your DrawScenario3 example, a changed shader looks like the prime suspect to me.

When I’m integrating with a library that mangles my GL state, I often throw a

gl::ScopedVao vao { nullptr };
gl::ScopedGlslProg shader { nullptr };
// ... any other state I expect to change

in there, to trigger Cinder popping the state back to where I expect it to be once the external GL code has finished executing. It’s not an exact science, but it’s gotten me out of trouble in the past. Since you’re already using RenderDoc, you should be able to scrub before and after the offending draw calls and see explicitly which state has changed.

How are you finding Gameface otherwise? Is it cost-prohibitive to license on an evaluation basis, or worth buying without a client gig to charge it against? (NDA allowing, obviously)


Much appreciate the response, @lithium. You’re an endless source of awesomeness.

I was completely unaware of the gl::context()->sanityCheck() method. Having applied it, I can confirm that without the pushGlState() and popGlState() functions the VAO appears to be set incorrectly, so it’s definitely checking some valuable things. Unfortunately, with the state pushed and popped, sanityCheck() finds nothing else.

I applied the two scopes you mentioned, also with no luck, but I’ll try a few more because, as noted above, I suspect the push/pop code may skip superfluous pushes. I’m not sure about that, though.

As for RenderDoc: when you say scrub, does RenderDoc have any functionality that lets me do a diff of some kind? As you can imagine, the commercial library does quite a lot in its update, so I’m trying to find the right needle in a stack of needles. Either that, or my lack of experience is just making me extra blind.

I’ve signed no NDA whatsoever, so what I’m happy to put on the public record is that their ‘low budget’ pricing tier is in the four-digit USD range. A bitter pill to swallow was learning that the prices recently shot up by about 30%.

So from my perspective, I went from cautiously interested to feeling I have to go back to the drawing board once again to find something that works for me.

RenderDoc should let you step to any GL call in the capture and inspect the state. I’m not sure whether it will highlight state changes for you (the much-neglected and long-forgotten macOS OpenGL Profiler did this, and it was awesome), but with a keen eye and some patience you might be able to find it.

Since we’re all programmers, though, I’d suggest a programmatic approach. This dodgy GitHub project shows one possible implementation: log all the state you’re interested in beforehand, log it again after, and literally diff the results.


That kind of full state check looks like exactly the most useful thing.

Even if all it gets me is narrowing down the exact binding differences, that’ll be a big help now and for all future integrations.

Well… I am now officially completely out of ideas…

Even after meticulously setting every single OpenGL state variable to match the verified, working ‘Cinder-only’ scenario, I still get two black quads when introducing Gameface’s off-screen rendering.

Perhaps the dodgy OpenGL state-check project, being six years old, is missing something. I don’t know, and I am going insane.

A few more Hail Marys:

  1. Do you have a debug GL context with a debug message callback / break enabled?
  2. Any GL errors reported if you scatter some calls to CI_CHECK_GL() around?
  1. I’m a bit confused about the CINDER_GL_HAS_DEBUG_OUTPUT define that I believe has to exist to enable the debug callback. I tried adding it as a preprocessor definition, only to have the compiler point out that Platform.h already defines it. Yet for some odd reason Visual Studio shows the relevant code in Context.cpp as grayed out, and won’t stop on a breakpoint in it, which really confuses me.

  2. I keep learning new things, so if nothing else I’m steadily becoming an OpenGL master at resolving absolutely every problem except the one in front of me. :sob:

That’s possibly just Visual Studio being useless. Just creating your app like this should be enough. You’ll want to enable the console window in your prepareSettings function too:

CINDER_APP( YourApp, RendererGl( RendererGl::Options().debug().debugLog().debugBreak() ) )

So many new things I’m learning. Thanks again, @lithium. That seems to have gotten it working. I’ll see if I can suss something out.

I’ve found that the issue relates to texture formatting. Gameface support had provided code that enabled the Gameface-side texture to render properly.

Providing the two Cinder-side loaded textures with these properties:

ci::gl::Texture2d::Format f;
f.setMinFilter( GL_LINEAR );
f.setMagFilter( GL_LINEAR );
f.setMaxMipmapLevel( 0 );
f.setBaseMipmapLevel( 0 );
f.enableMipmapping( true );

makes them render properly.

Specifically, f.enableMipmapping( true ) is required, which leads to the conclusion that something about the OpenGL state that Gameface sets/affects requires mipmapping for rendered textures? I’m unsure. But surely there’s a way to reverse this; I’m looking into it.

This one OpenGL call should disable mipmapping for the texture target:


and yet without mipmapping turned on, I still get black textures. The search continues…

Turns out, the final piece of the puzzle was the ‘other side’ of the texture filtering, e.g.:


Somewhat puzzling to me, as the MIN aspect should be the one applied when the quad being textured is smaller than the texture size, i.e. the texture is being minified, not magnified.

But perhaps OpenGL just doesn’t like either filter being set to a mipmap mode when no mipmaps are present.

Mystery solved. Massive huzzahs. Deep sadness of library cost.
