Can someone explain whether the GPU driver is obliged to release GPU memory on demand when an application requests it? If not, doesn't that kind of nullify any GPU memory resource management an application might want to do, since the app is a "layer removed" from the actual memory?
I'm allocating lots of textures and then freeing them. While allocating, I can see "GPU System Bytes" (in Process Explorer) rapidly increase, to an amount that hugely surpasses the images' size on disk. Maybe they are stored uncompressed on the GPU and so take much more space?
When freeing the textures, I can see the Texture2D object destructor correctly calls glDeleteTextures, but there is a delay before "GPU System Bytes" drops back toward the original value, and in fact it never quite returns to it; it hovers a bit above.
And if I keep doing this, the GPU memory usage gradually creeps up, even though I am counting my glDeleteTextures calls the whole time, and the memory never quite seems to get recovered.
I am bamboozled, because from the app side it looks clean: x allocated, x freed. Yet my application is becoming a bit of a resource hog, not in my process's own memory space but in GPU memory (Intel HD 6000 GPU).
Thanks - Laythe