Doing the least amount of work

Hi,

Sorry if this is a stupid question, but is it possible for an app to “throttle down” from real-time rendering (e.g. 60 fps, re-rendering every frame) to simply displaying a “bitmap” (e.g. a simple bit blit of the 3D world contents instead of a complete re-computation)? Then, when user input is given (or some application-specific conditions occur), it snaps back into real-time mode.

I know we have the setFrameRate type functions - I have not tested varying it in response to certain conditions yet. Using this, with some sort of supervisor code (e.g. grabbing the contents of the framebuffer, setting the frame rate to 0, then just blitting the previously grabbed framebuffer each frame), I hope to be able to achieve this. Are the setFrameRate functions suitable for this use case, or is there a better way to achieve the same goal in Cinder?

This would be in order to save power and stop the fans blowing on weak graphics cards, instead of continuously rendering when not needed.
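
Something like this is roughly what I mean by the supervisor - just varying the target frame rate off an idle timer (a rough sketch: mLastInputTime, the 2 fps idle rate and the 5 second threshold are made-up placeholders, and I've left the framebuffer-grabbing part out):

void MyApp::update()
{
    // hypothetical idle check: drop the frame rate cap when nothing has happened for a while
    bool idle = ( getElapsedSeconds() - mLastInputTime ) > 5.0;
    setFrameRate( idle ? 2.0f : 60.0f );
}

void MyApp::mouseMove( MouseEvent event )
{
    mLastInputTime = getElapsedSeconds(); // any input snaps us back to real time
}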

Thanks - Laythe

There are myriad issues that can arise from abusing the time step like this, so my suggestion would be to do all your expensive rendering to an offscreen framebuffer on demand, and have your “low-power” mode just draw that FBO’s color attachment to the screen. Two triangles per frame should be fine for even the weakest of GPUs.
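
In Cinder terms that could look something like this - just a sketch, where mFbo is a window-sized gl::Fbo member, and mSceneDirty / drawScene() stand in for however you track changes and do your actual rendering:

void MyApp::draw()
{
    if( mSceneDirty ) {
        // expensive pass: only runs when something actually changed
        gl::ScopedFramebuffer fbScp( mFbo );
        gl::ScopedViewport viewport( ivec2( 0 ), mFbo->getSize() );
        gl::clear();
        drawScene(); // your existing heavy rendering, camera setup and all
        mSceneDirty = false;
    }

    // cheap pass: every frame just draws the FBO's color attachment to the window
    // (depending on your setup the attachment may come out vertically flipped)
    gl::clear();
    gl::setMatricesWindow( getWindowSize() );
    gl::draw( mFbo->getColorTexture(), Rectf( getWindowBounds() ) );
}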

This does whisper “premature optimisation” to me a little bit though, but then again maybe your power requirements are more stringent than those I’ve come across in production. shrugs

To add to @lithium : a laptop’s fans should really only kick in when your CPU and/or GPU are having a hard time keeping up. To use my own laptop as an example: it only becomes loud when CPU load nears 80% or so. The GPU usually doesn’t contribute to fan noise in my case. Maybe if I ran a AAA game title, but not when using Cinder.

What I mean to say: optimizing your rendering code, while never a bad idea, is probably not the first thing to resort to if you want to keep the noise down.

@lithium That sounds like a saner way forward, I think. But would the offscreen buffer incur an additional copy from offscreen buffer to screen buffer during normal operation? I’m also curious - what kind of issues does messing with the FPS target in Cinder cause? Conceptually at least it makes sense to me.

@paul.houx I think I’m abusing my MacBook’s integrated GPU a little, driving a 4K screen at 60 fps with the display fully populated with high-res textures. Do you have a dedicated GPU? I see that even with very little CPU usage and no main memory/GPU allocations going on, my GPU usage is 20-30% just drawing a static screen. I assume it’s all the texture data, but it nevertheless causes the temperature to rise and the fans to blow. I’m trying to find ways to over-optimize it, mainly because this is my only machine and the noise is irritating! :slight_smile:

“Copy” probably isn’t the right word because that has different implications. It’s just sampling the FBO’s color attachment as it would any other texture.

Looking at your specific problem: do the rects you draw your textures into change in size? I.e. are you drawing high-res textures into really small rectangles without using mipmapping? What you’ve described doesn’t seem to be particularly taxing to the GPU (unless I’m underestimating what you mean by high-res), and in my experience the performance bottlenecks are almost never where you think they are.

If it’s just the noise that’s bothering you, you could always use smcFanControl to force the fans off, though that’s probably a terrible idea :wink:

@lithium But surely it would have to do everything it does now, only offscreen, and then there is a final step to get that into the memory location (somewhere) where it is used for actual display?

Yes, I don’t use any mipmapping (I think) - I create a texture like:

gl::Texture::Format mTextureFormat = gl::Texture::Format()
    .magFilter(GL_LINEAR)
    .minFilter(GL_LINEAR_MIPMAP_LINEAR)
    .maxAnisotropy(gl::Texture::getMaxAnisotropyMax())
    .wrapT(GL_REPEAT).wrapS(GL_REPEAT)
    .target(GL_TEXTURE_2D);

Should I be using mipmapping? I didn’t because I thought it would need more memory on the GPU, and I’m already struggling (the typical memory vs. performance tradeoff).

The rects don’t change in size. I have limited the texture data I load and display on screen to about 10000x10000 pixels combined; any more and my system grinds to a halt.

Thanks for the pointers!
tempted by smcFanControl :slight_smile:

EDIT: Just a thought, but I create a couple of loader threads whose GL contexts are shared with the main one. Do you think it is possible for this texture data to be getting “copied” between the per-context GPU memory, causing massive overuse of GPU memory? The usage does seem to go up suspiciously high when an image has its texture allocated (more so than expected, I think).
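
For reference, the loader threads are set up roughly like this (heavily simplified - “image.jpg” stands in for whatever file is being loaded, and the queue that hands the texture back to the main thread is omitted):

// created on the main thread; shares its resources with the main GL context
gl::ContextRef loaderCtx = gl::Context::create( gl::context() );

std::thread( [loaderCtx]() {
    ci::ThreadSetup threadSetup;   // per-thread Cinder setup
    loaderCtx->makeCurrent();      // the shared context becomes current on this thread
    gl::TextureRef tex = gl::Texture::create( loadImage( loadAsset( "image.jpg" ) ) );
    // ...wait for the upload to finish, then hand tex back to the main thread via a queue...
} ).detach();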

Your minification filter of GL_LINEAR_MIPMAP_LINEAR won’t do anything because you’re not enabling mipmapping with Texture::Format().mipmap(), but since you’re drawing your textures at 100% the mips won’t come into play anyway. GL_LINEAR or GL_NEAREST is fine for this.

The point of rendering to the offscreen buffer was so you only updated it when it was necessary. You will still have to do all the rendering work, just not every frame. What the driver wants to do with the FBO behind the scenes is pretty much opaque to us, but since it’s already stored GPU side, sampling its color attachment is going to be very fast, especially when compared to your normal render pass.

Hang on, you’re loading textures at 10000x10000? What size are you drawing them at? This may invalidate my first point about when the mips come into play.

Yep, but if I understand correctly, during normal operation (when the application is actively generating each frame) this would incur the additional cost of a copy from the offscreen buffer onto the onscreen buffer. I was wondering what the overhead of this would be. As well as the time cost, maybe there is a VRAM cost too (an entire screen-sized buffer)?

I’m drawing images that can be 3000x2000, but there can be many of them. The user can zoom out, and I don’t do anything fancy with displaying thumbnails etc., so the card is drawing lots of big but reduced-in-size textures. I assume that is why, even when doing no allocations on the CPU or GPU, with a screen full of textures displayed I get 20-30% GPU usage, which causes heat, then fans, etc. It’s difficult to figure out what’s going on because (for lack of experience) I am not sure if it’s a crappy Intel driver, a crappy bit of my code, or just that I am expecting too much from the integrated GPU.

It could be the multi-threaded context usage. I read this post (Multiple windows, multiple shared gl contexts per window) from Paul which indicates this approach may be a bad way forward, but it doesn’t go into much detail as to why. I’m not sure of any other approaches though. I need multiple threads so one big file can’t block the others from loading.

Not sure what you mean about invalidating your first point?

Thanks for the info - I’m a bit of a noob still! :slight_smile:

Cheers,
Laythe

Nothing that you’re going to notice. I’ll have to defer to someone who knows more about the underlying process because you’re starting to make me doubt what I thought to be true about some of this stuff :wink:

This is what I meant by the rects changing in size. Once the user zooms out, the texture will be drawn smaller than its native size, so it will be filtered according to the minFilter you supplied. Mipmapping, while taking up more video memory, can provide a massive performance increase by automatically using a smaller version of the texture when it’s drawn small or far away. The speed increase comes from the selected mip level being small enough to stay in the texture cache, which eliminates a butt-ton of cache misses.

Since it’s a one-liner in Cinder, you may as well try it and see if it helps your performance. :wink:
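
i.e. take the format you posted and just switch mipmapping on, something like:

gl::Texture::Format mTextureFormat = gl::Texture::Format()
    .mipmap()                              // the one-liner: actually generate mip levels
    .magFilter(GL_LINEAR)
    .minFilter(GL_LINEAR_MIPMAP_LINEAR)    // now has smaller mip levels to sample from
    .maxAnisotropy(gl::Texture::getMaxAnisotropyMax())
    .wrapT(GL_REPEAT).wrapS(GL_REPEAT)
    .target(GL_TEXTURE_2D);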