Multiple windows, multiple shared gl contexts per window

I had one Cinder window and 8 worker threads, each with its own GL context shared with the primary context. Each worker thread was loading textures into its context, and those textures were available to the primary context just fine.
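
For reference, each worker context is set up roughly like this (simplified sketch; the lambda body just stands in for my actual loading code):

#include "cinder/gl/gl.h"
#include "cinder/Thread.h" // ci::ThreadSetup
#include <thread>

using namespace ci;

void startWorker()
{
	// Create a context that shares resources with the current (primary) window's context.
	auto workerCtx = gl::Context::create( gl::context() );

	std::thread worker( [workerCtx]() {
		ThreadSetup threadSetup;  // per-thread Cinder setup
		workerCtx->makeCurrent(); // bind the shared context to this thread

		// ... load files and create gl::Texture2d objects here; because the
		// context is shared, the primary context can use the textures directly.
	} );
	worker.detach(); // in real code, keep the handle and join on shutdown
}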

Now, I want to add a second window with its own worker threads. However, I’m not entirely clear on how this kind of many-to-many resource sharing works in OpenGL, much less in Cinder.

My initial thought was to create the two windows with Cinder, then create the worker contexts for each window’s context. However, what appears to happen in this case is that (according to CodeXL) all the resources end up in the context for the first window. I believe this is because Cinder automatically shares resources between the first window and the second window. Then, when I create the worker contexts for the second window, they behave as though I had shared them with the context for the first window (even though CodeXL correctly detects that they were created from the second window’s context).

The window contexts don’t really need to share resources, but I’m also not sure that it hurts. Is there a reason to try to fight Cinder’s default behavior and make the window contexts independent, or is it ok for all the contexts to share resources in one big mess? If the latter, what context should the workers share with? If the former, how do I keep Cinder from sharing resources between the window contexts?

More generally, is this the right way to go about streaming lots of big, I/O intensive textures to multiple windows, or is this insane? Is this possible?

thanks!

Hi,

I’d recommend using a different loading strategy. Creating 8 worker threads per window is subject to diminishing returns (if that is proper English). You’ll hurt the hard disk’s feelings by asking it to load 16 files simultaneously. The effect is that it actually takes longer to load the content and, in the case of non-SSD drives, you might even damage the drive.

It’s better to have no more than, say, 4 loader threads in total. Instead of creating a GL context for each, consider just loading each image into a Surface, then notifying the main thread to upload it to the GPU using the correct context. Use a Pbo for this, and the upload almost becomes a no-op on the main thread. Here’s some untested code:

// Create a Pbo once and reuse it for every upload.
auto pbo = gl::Pbo::create( GL_PIXEL_UNPACK_BUFFER, MAX_WIDTH * MAX_HEIGHT * 4, nullptr, GL_STATIC_DRAW );

// Load the image into CPU memory as a Surface.
auto surface = Surface( loadImage( loadFile( path ) ) );

// Upload to GPU using the Pbo.
auto fmt = gl::Texture2d::Format().intermediatePbo( pbo );
auto texture = gl::Texture2d::create( surface, fmt );

// Create a fence so we know when the upload has finished.
auto fence = gl::Sync::create();

// Now do some other stuff to pass the time and then check the fence.
auto status = fence->clientWaitSync( GL_SYNC_FLUSH_COMMANDS_BIT, 0L );
switch( status ) {
case GL_CONDITION_SATISFIED:
case GL_ALREADY_SIGNALED:
	// You can use/draw the texture.
	break;
case GL_TIMEOUT_EXPIRED:
	// The upload hasn't finished yet; check the fence again later.
	break;
case GL_WAIT_FAILED:
	// An error occurred; try uploading again.
	break;
}
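
For the hand-off between the loader threads and the main thread, a ci::ConcurrentCircularBuffer works nicely. A rough, untested sketch (the names surfaces, loaderFn and uploadPending are just placeholders, and pbo is the Pbo created above):

#include "cinder/ConcurrentCircularBuffer.h"
#include "cinder/ImageIo.h"
#include "cinder/Surface.h"
#include "cinder/gl/gl.h"

using namespace ci;

ConcurrentCircularBuffer<SurfaceRef> surfaces{ 16 }; // bounded hand-off queue
gl::PboRef pbo;                                      // the Pbo created above

// Loader thread: CPU work only, no GL context required.
void loaderFn( const fs::path &path )
{
	auto surface = Surface::create( loadImage( loadFile( path ) ) );
	surfaces.pushFront( surface ); // blocks while the queue is full
}

// Main thread, e.g. in update(): upload at most one surface per frame.
void uploadPending()
{
	SurfaceRef surface;
	if( surfaces.tryPopBack( &surface ) ) {
		auto fmt = gl::Texture2d::Format().intermediatePbo( pbo );
		auto texture = gl::Texture2d::create( *surface, fmt );
		// ... store the texture and create a gl::Sync fence as shown above.
	}
}

Uploading at most one texture per frame keeps the main thread responsive even when many images arrive at once.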

-Paul

Thanks Paul!

Diminishing returns is totally the right English to describe 16 threads reading off a disk :wink: In practice, I’ll probably lower the thread count, but the threads aren’t just reading off the disk: they’re also transcoding, and they could be reading a couple dozen large (video) files at the same time, so I think there’s a strong argument for threading at that point.

However, doing the actual texture upload on the main thread using PBOs does seem like an interesting approach, and it could greatly simplify the code by removing the need to manage GL contexts. There are a couple of wrinkles, however. First, the textures themselves may be DXT-compressed (Hap). I’ve looked around a bit for examples of using PBOs with DXT and haven’t found anything, although it seems as though it shouldn’t be an issue. Do you happen to know whether that should work, and whether it’s cross-platform?

The other wrinkle is that the actual texture upload happens in a non-Cinder C++ lib (for compatibility with OF and others), but I can probably work out how to write the raw OpenGL calls based on the Cinder implementation you outlined above.
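
My rough understanding of the raw-GL equivalent for the compressed case is something along these lines (untested; dxtData, dataSize, width and height are placeholders, and DXT5 is just one example format):

// Allocate a PBO and copy the compressed frame into it.
GLuint pbo = 0, tex = 0;
glGenBuffers( 1, &pbo );
glBindBuffer( GL_PIXEL_UNPACK_BUFFER, pbo );
glBufferData( GL_PIXEL_UNPACK_BUFFER, dataSize, nullptr, GL_STREAM_DRAW );

void *dst = glMapBufferRange( GL_PIXEL_UNPACK_BUFFER, 0, dataSize,
                              GL_MAP_WRITE_BIT | GL_MAP_INVALIDATE_BUFFER_BIT );
memcpy( dst, dxtData, dataSize ); // or decode the Hap frame directly into the mapping
glUnmapBuffer( GL_PIXEL_UNPACK_BUFFER );

// With the PBO bound, the last argument is an offset into the buffer, not a pointer.
glGenTextures( 1, &tex );
glBindTexture( GL_TEXTURE_2D, tex );
glCompressedTexImage2D( GL_TEXTURE_2D, 0, GL_COMPRESSED_RGBA_S3TC_DXT5_EXT,
                        width, height, 0, dataSize, (const GLvoid *)0 );
glBindBuffer( GL_PIXEL_UNPACK_BUFFER, 0 );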

Thanks for the suggestions. PBOs seem like a much better approach.