Doing the least amount of work

Yep, but if I understand correctly, during normal operation (when the application is actively generating each frame) this would incur the additional cost of a copy from the offscreen buffer to the onscreen buffer. I was wondering what the overhead of that would be? As well as the time cost, there may also be a VRAM cost (an entire screen-sized buffer?).
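
Just to make sure we're talking about the same thing, this is the kind of copy I have in mind (a rough sketch, not my actual code; `offscreenFbo`, `width` and `height` are placeholder names):

```cpp
#include <glad/glad.h>   // or whichever GL loader is in use

// Sketch of the extra per-frame copy: blit the offscreen FBO's colour
// attachment onto the default (onscreen) framebuffer.
void presentOffscreenBuffer(GLuint offscreenFbo, int width, int height)
{
    glBindFramebuffer(GL_READ_FRAMEBUFFER, offscreenFbo);
    glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);           // 0 = default framebuffer
    glBlitFramebuffer(0, 0, width, height,               // source rectangle
                      0, 0, width, height,               // destination rectangle
                      GL_COLOR_BUFFER_BIT, GL_NEAREST);  // colour only, same size, no scaling
}
```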

I’m drawing images that can be 3000x2000, and there can be many of them. The user can zoom out, and I don’t do anything fancy with displaying thumbnails etc., so the card is drawing lots of large textures reduced in size. I assume that is why, even when doing no allocations on the CPU or GPU, with a screen full of textures displayed I get 20-30% GPU usage, which causes heat, then fans, etc. It’s difficult to figure out what’s going on because (lack of experience) I’m not sure if it’s a crappy Intel driver, a crappy bit of my code, or just that I’m expecting too much from the integrated GPU.
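
For reference, the kind of change I've wondered about (but haven't tried) is generating mipmaps at upload time, so the GPU samples a smaller level when the images are drawn zoomed out instead of reading the full 3000x2000 texture every frame. This is just a sketch with made-up names, not code from my app; `pixels` is assumed to be tightly packed RGBA8 data:

```cpp
#include <glad/glad.h>   // or whichever GL loader is in use

// Hypothetical texture creation with a mip chain, so zoomed-out draws
// sample a reduced level rather than the full-resolution image.
GLuint createImageTexture(const unsigned char* pixels, int w, int h)
{
    GLuint tex = 0;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, w, h, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, pixels);
    glGenerateMipmap(GL_TEXTURE_2D);                      // build the mip chain once
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER,
                    GL_LINEAR_MIPMAP_LINEAR);             // use smaller mips when minified
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    return tex;
}
```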

It could be the multi-threaded context usage. I read this post (Multiple windows, multiple shared gl contexts per window) from Paul, which indicates this approach may be a bad way forward, but doesn’t go into much detail about why. I’m not sure of any other approaches, though. I need multiple threads so one big file can’t stop others from loading.
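
To be concrete, the setup I mean is roughly this (a sketch assuming GLFW, with made-up function names, not my exact code): a hidden window whose context shares objects with the main one, made current on a loader thread, plus a fence so the render thread knows when an upload has finished.

```cpp
#include <glad/glad.h>
#include <GLFW/glfw3.h>

// Create a hidden window whose GL context shares objects (textures etc.)
// with the main context, to be made current on a loader thread.
GLFWwindow* createLoaderContext(GLFWwindow* mainWindow)
{
    glfwWindowHint(GLFW_VISIBLE, GLFW_FALSE);             // no visible window needed
    return glfwCreateWindow(1, 1, "loader", nullptr, mainWindow); // share with main
}

// Runs on a worker thread: upload one image, then return a fence the
// render thread can wait on before using the texture.
GLsync uploadOnLoaderThread(GLFWwindow* loaderContext, GLuint tex,
                            const unsigned char* pixels, int w, int h)
{
    glfwMakeContextCurrent(loaderContext);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, w, h, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, pixels);
    GLsync fence = glFenceSync(GL_SYNC_GPU_COMMANDS_COMPLETE, 0);
    glFlush();                                            // ensure the fence is submitted
    return fence;
}
```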

I’m not sure what you mean about invalidating your first point — could you clarify?

Thanks for the info - I’m a bit of a noob still! :slight_smile:

Cheers,
Laythe