Hello,
Firstly, thank you on behalf of myself and the silent majority to all who made Cinder and all who post here. A very useful forum (and the last) and an excellent library.
I'm an old-style C/C++ programmer trying to drag myself into the current decade on the shared-pointer front. I've been deliberately avoiding it until now, and now it's causing me some trouble. I'm not sure I like its apparent opaqueness - no, I am sure: I don't. I prefer managing memory myself, but I will qualify that, as it is not a fully educated opinion yet and I may yet discover why not having to care about memory management appeals.
Anyway, back on point. My application does the following:
1.
Read a whole bunch of textures (encapsulated by the ImageViewer class) with the sole purpose of getting the texture width and height info. I do this for each file, like this:
bool ImageViewer::preLoad()
{
    try
    {
        cinder::ImageSourceRef imageSource = loadImage(loadAsset(mFilename));
        if (imageSource != nullptr)
        {
            mImageWidth = imageSource->getWidth();
            mImageHeight = imageSource->getHeight();
            return true;
        }
    }
    catch (const cinder::Exception &e)
    {
        imageApp::mApplication->log("failed to create texture during pre load for IMAGE = " + mFilename + " (" + e.what() + ")");
        mImageWidth = 0;
        mImageHeight = 0;
    }
    return false;
}
I include the context above because of the following question:
a) Image Dimension Information
I can probably pull the image dimension bytes out of the file totally independently of Cinder, but then I would lose support for other formats (or have to hard-code support for them), so this is not too desirable - see the sketch below for what I mean. Is the above the best way in Cinder to do such a task?
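For illustration, hard-coding a single format would look something like the sketch below (PNG only - width and height sit at fixed big-endian offsets right after the signature and the IHDR chunk header). The helper name is made up, and it shows exactly why I'd rather not go down this road: I'd need one of these for every format Cinder already handles.

#include <algorithm>
#include <cstdint>
#include <fstream>
#include <string>

// Hypothetical helper: pulls the dimensions out of a PNG header without decoding the image.
// PNG stores width/height as big-endian 32-bit values at byte offsets 16 and 20.
bool readPngDimensions(const std::string &path, uint32_t &width, uint32_t &height)
{
    std::ifstream file(path, std::ios::binary);
    if (!file)
        return false;

    uint8_t header[24];
    if (!file.read(reinterpret_cast<char *>(header), sizeof(header)))
        return false;

    static const uint8_t pngSignature[8] = { 0x89, 'P', 'N', 'G', '\r', '\n', 0x1A, '\n' };
    if (!std::equal(pngSignature, pngSignature + 8, header))
        return false;

    width  = (uint32_t(header[16]) << 24) | (uint32_t(header[17]) << 16) | (uint32_t(header[18]) << 8) | uint32_t(header[19]);
    height = (uint32_t(header[20]) << 24) | (uint32_t(header[21]) << 16) | (uint32_t(header[22]) << 8) | uint32_t(header[23]);
    return true;
}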
2.
Once the image sizes have been read, the Cinder app displays a 3D view where these images are loaded on demand based on what is visible. The load sends a request to the texture loader thread, which then does this:
...
try
{
    cinder::ImageSourceRef imageSource = loadImage(loadAssetAbsolute(textureLoaderRequest->mFilename), options);
    cinder::gl::Texture2dRef tex = gl::Texture2d::create(imageSource, mTextureFormat);
    // Lock this from the other thread to ensure multi-thread-safe syncing with OpenGL (just in case!)
    {
        std::lock_guard<std::mutex> lk(mOpenGLClientWaitSyncMutex); // TODO: is this needed?
        // we need to wait on a fence before alerting the primary thread that the Texture is ready
        auto fence = gl::Sync::create();
        fence->clientWaitSync();
        // Identify the frame index in the call
        textureLoaderRequest->mContentViewer->setTexture(tex, requestFrameIndex);
    }
}
catch (const cinder::Exception &e)
{
    ...
}
...
I have the following areas of uncertainty:
a) OpenGL Fences
I must admit I am still chewing over the OpenGL fences subject (the fence->clientWaitSync() bit of code came from a sample done by Andrew - thanks!). At the moment there is only one texture loader thread plus the main thread. If I had multiple texture loader threads, would I need to synchronise the fence->clientWaitSync() calls? (Or am I being daft here - would the clientWaitSync() in each loader thread provide the necessary guards for multiple threads setting textures to be drawn on the main thread?) I did some reading, added the above lock_guard, and did a bit of testing, and it didn't seem to affect anything (but that doesn't usually mean anything, hence my question). A sketch of what I mean by multiple loader threads follows.
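To make the question concrete, below is roughly what I picture a loader thread function looking like if the mutex were dropped, with each thread making its own shared context current and waiting on its own fence (mirroring Andrew's sample). mShouldQuit is a placeholder, and options/requestFrameIndex come from the elided parts of my snippet above - whether the per-thread fence really is enough is exactly what I'm asking.

void imageApp::textureLoaderThreadFn(gl::ContextRef context)
{
    cinder::ThreadSetup threadSetup;  // per-thread Cinder housekeeping
    context->makeCurrent();           // each loader thread owns its own shared context

    while (!mShouldQuit)              // mShouldQuit: hypothetical shutdown flag
    {
        TextureLoaderRequest *textureLoaderRequest = nullptr;
        mTextureLoaderRequests->popBack(&textureLoaderRequest);
        if (textureLoaderRequest == nullptr)
            continue;

        try
        {
            cinder::ImageSourceRef imageSource = loadImage(loadAssetAbsolute(textureLoaderRequest->mFilename), options);
            cinder::gl::Texture2dRef tex = gl::Texture2d::create(imageSource, mTextureFormat);

            // Each thread waits on its own fence for its own GL commands - no shared mutex.
            auto fence = gl::Sync::create();
            fence->clientWaitSync();

            textureLoaderRequest->mContentViewer->setTexture(tex, requestFrameIndex);
        }
        catch (const cinder::Exception &e)
        {
            // log and carry on
        }
    }
}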
b) Any kind of Texture Unloading
This brings me to my main problem. After running for a bit and zooming in and out with thousands of largish textures on display, my poor, already suffering MacBook drops to its knees and keels over once the process's Private Bytes grows larger than the pagefile, and the system doesn't even have the courtesy to give me a blue screen! (Yes, I installed Windows on it because I discovered after I got it that macOS and I don't get on much.) My pagefile ended up getting expanded to 30GB by Windows without it even telling me!
I am at a complete loss as to how to deterministically delete textures. I know the theory - the shared pointer will just "go away" when it falls out of scope - but I can't see how that is a good thing (or even possible) when we are trying to manage the "going away" ourselves, without reshaping the code to suit the language feature. All I know is that the textures are not being deleted anywhere, and I think it may be because the ImageViewers are held within a std::vector<> that (by necessity) hangs around until the application ends. I don't really want to restructure my code just so that it aligns with shared-pointer semantics, if such a thing is even possible. Ideally I am looking for a simple method that just releases the image memory, as symmetric as possible with my load function in the texture loader - something like the sketch below.
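To show what I mean by "symmetric with the load", something along these lines is what I'm after (assuming the ImageViewer keeps its texture in a gl::Texture2dRef member - mTexture is a made-up name here):

void ImageViewer::unload()
{
    // Drop our reference; if nothing else is holding the Texture2dRef,
    // the underlying GL texture should be destroyed at this point.
    mTexture.reset();
}

The catch, presumably, is making sure nothing else (requests still sitting in the loader queue, the main thread's draw list, etc.) holds another reference, otherwise the reset() frees nothing.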
c) Smooth Texture Unloading
The end game would be to have another thread that performs the unloads, to keep the UI smooth. My question is: if I create the texture on the loader thread, which is initialised as follows:
...
mTextureLoaderRequests = new ConcurrentCircularBuffer<TextureLoaderRequest*>(TEXTURE_LOADER_REQUEST_Q_SIZE);
mTextureLoaderBackgroundCtx = gl::Context::create(gl::context());
mTextureLoaderThread = shared_ptr<thread>(new thread(bind(&imageApp::textureLoaderThreadFn, this, mTextureLoaderBackgroundCtx)));
...
would the unloader thread need access to the context that created the texture, and would I therefore need to set up context sharing between the loader and unloader threads/contexts? How would you tackle such a problem? (The sketch below shows the symmetric setup I had in mind.)
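For concreteness, the symmetric setup I had in mind: a second shared context plus a queue of Texture2dRefs whose last reference gets dropped on the unloader thread. All of the unloader names are placeholders, and whether the makeCurrent() is actually required for the deletes to happen is precisely what I don't know.

...
mTextureUnloadRequests = new ConcurrentCircularBuffer<gl::Texture2dRef>(TEXTURE_UNLOADER_REQUEST_Q_SIZE);
mTextureUnloaderBackgroundCtx = gl::Context::create(gl::context());
mTextureUnloaderThread = shared_ptr<thread>(new thread(bind(&imageApp::textureUnloaderThreadFn, this, mTextureUnloaderBackgroundCtx)));
...

void imageApp::textureUnloaderThreadFn(gl::ContextRef context)
{
    cinder::ThreadSetup threadSetup;
    context->makeCurrent();   // needed for the texture deletes? (my question)

    while (!mShouldQuit)
    {
        gl::Texture2dRef tex;
        mTextureUnloadRequests->popBack(&tex);
        tex.reset();          // drop what should be the last reference, off the UI thread
    }
}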
d) Bonus question (not directly Cinder, but resource related).
For applications with potentially large private memory usage, but which also need to run on low-memory hardware as well as hardware with lots of memory, some sort of scheme to detect maximum memory usage must be devised. What, if any, are the best-practice measures for an application to stop itself from allocating too much and making Windows (or any OS) constantly swap the pagefile until it exhausts itself and kills the system (as I described above)? Should we just be monitoring the Page File Size and Private Bytes allocated and literally stop allocating textures when Private Bytes grows to some arbitrary percentage of the Page File Size, as a last resort to avoid the system dying? A rough sketch of the kind of check I mean is below.
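For what it's worth, the kind of crude guard I was imagining (Windows-only, using GetProcessMemoryInfo and GlobalMemoryStatusEx; the function name and the 80% threshold are entirely arbitrary). I'd call something like this before queuing another texture load and simply skip or evict when it says no:

#include <windows.h>
#include <psapi.h>   // GetProcessMemoryInfo; link against Psapi.lib

// Hypothetical guard: true while the process's private bytes are below an
// arbitrary fraction of the system commit limit (physical RAM + page file).
bool canAffordAnotherTexture()
{
    PROCESS_MEMORY_COUNTERS_EX pmc = {};
    if (!GetProcessMemoryInfo(GetCurrentProcess(),
                              reinterpret_cast<PROCESS_MEMORY_COUNTERS *>(&pmc),
                              sizeof(pmc)))
        return true; // if the query fails, don't block loading

    MEMORYSTATUSEX ms = {};
    ms.dwLength = sizeof(ms);
    if (!GlobalMemoryStatusEx(&ms))
        return true;

    const double usedFraction = double(pmc.PrivateUsage) / double(ms.ullTotalPageFile);
    return usedFraction < 0.8; // stop allocating past ~80% of the commit limit (arbitrary)
}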
Apologies for the large post with lots of questions - I get carried away sometimes. I appreciate any and all feedback.
Thanks!, Laythe
EDIT:
I just found out that I can do:
tex.reset();
to make the shared pointer "empty". Yet to benchmark, but I guess this also results in a glDeleteTextures() call (once the last reference is gone).