FPS stuttering that makes my eyes bleed

Sup Embers,

Months ago, I decided to turn on Cinder’s framerate limiter, since I didn’t need more than ~60 FPS. Why needlessly waste computational resources and let my laptop burn through my desk as it heats up?

Recently, I’ve had a lot more timed graphic elements moving about in my application - specifically timed 3D lyric meshes.

While doing some isolated element testing, I noticed how incredibly stuttery they moved. I then decided to disable Cinder’s 60 FPS limiter, and the result is night and day. My understanding is that without the limiter, the hardware vsync kicks in instead, which is set at ~120 FPS with G-Sync.

For a visual of the difference, see this video and watch the top-left for FPS:

(Make sure it’s set to 1080p60fps, and I apologize for the lack of editing; when I tried to put together a labelled version, that fucked with the captured footage.)

So my open-ended question is: has anyone else run into something similar with Cinder, and does anyone have advice on whether it’s possible to limit FPS in Cinder without it looking super jerky?

I’ve only briefly glanced at Cinder’s render loop, and I have an initial understanding of how tricky timing can be, thanks to this phenomenal post about rendering and updating.

Cheers,
Gazoo

You can call setFrameRate( 120 ) for the best of both worlds. My guess is that >60Hz displays weren’t really a thing (or at least 60Hz was the overwhelming default) when that part of cinder was written.

Even better, you could query your display for its refresh rate and call setFrameRate() with that.

On Windows that would look something like this (just handling the current display; it will probably fall over if you’re sending to a different monitor):


#include <Windows.h>

void YourApp::setup ( )
{
    DEVMODE deviceMode = {};
    deviceMode.dmSize = sizeof ( DEVMODE ); // required before calling EnumDisplaySettings
    if ( EnumDisplaySettings ( nullptr, ENUM_CURRENT_SETTINGS, &deviceMode )
         && deviceMode.dmDisplayFrequency > 1 ) // 0 or 1 mean "hardware default"
    {
        setFrameRate ( (float) deviceMode.dmDisplayFrequency );
    }
}

I’m not sure I follow the idea of ‘best of both worlds’ in applying an application-based frame limiter on top of the driver-enforced one?

But that’s also beside the point I’m trying to understand.

What I’m trying to figure out is why the limiter set via Cinder results in such jerky animation. All the text is positioned via Cinder’s Timeline code, and the purple circle at the bottom of the video is animated by an incredibly crude but simple piece of code:

// Once the circle scrolls off the left edge, bump the offset so it wraps
// back around to the right side of the render area.
if (mPosCircle1.x < 0) {
	mPosCircle1Offset += mInternalRenderSize.x;
	mPosCircle1.x = mInternalRenderSize.x;
}

// Position is driven by absolute elapsed time (200 px/s), not per-frame deltas.
mPosCircle1.x = - (ci::app::getElapsedSeconds()*200) + mPosCircle1Offset;

and even that appears to ‘jerk’ forward.

It’s my understanding that it should be perfectly possible to render these things smoothly across the screen whether the limit is 30, 60, 75, etc., but visually it looks like the elements are either slowing down or speeding up every now and again. Almost as if Cinder’s frame-limiting timing code is causing an odd rubber-banding effect…? Or perhaps I’m doing something completely bonkers…?

GPUs will often have multiple frames in flight at any given time (usually 3, I believe, though it’s often configurable in your driver), so there’s already some latency there. But if you’re artificially sleeping your render loop for an amount of time that isn’t in line with vsync, and the loop doesn’t wake up in time, you’re potentially missing an entire v-blank, so the frame you’re currently drawing could take a while before it actually winds up being presented to the display.

Since you’re using frame independent time to position the object, when it finally does wind up hitting the screen it’ll be in the correct position, but the size of the lurch forward will depend on how many frames you ended up missing.
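
If you want to see that in action, you could log the gap between update() calls in your own app. This is purely a sketch on my part (mLastFrameTime is a hypothetical double member you’d add yourself), but it makes the missed v-blanks visible in the console:

#include "cinder/app/App.h"

// Purely illustrative: log whenever the gap between update() calls blows past
// the target interval. mLastFrameTime is a hypothetical member you add yourself.
void YourApp::update ( )
{
    double now   = ci::app::getElapsedSeconds ( );
    double delta = now - mLastFrameTime;      // wall-clock time since the last frame
    mLastFrameTime = now;

    const double target = 1.0 / 60.0;         // whatever you passed to setFrameRate()
    if ( delta > target * 1.5 )
    {
        // Missed at least one v-blank; the next presented frame will have jumped
        // roughly (delta / target) frames' worth of distance in one go.
        ci::app::console ( ) << "Long frame: " << delta * 1000.0 << " ms" << std::endl;
    }
}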

Best of both worlds just means if a user has for some reason explicitly disabled vsync you can still cap your framerate at some sensible value.

Thanks for the follow-up @lithium.

From what I gather, looking at AppImplMswBasic.cpp on Line 107, it would appear that Cinder does sleep for a time befitting the intended framerate, independent of the ongoing vsync.

I am aware that sleeping is an inexact science: there are small increments you can’t ever hope to sleep for (1 ms, perhaps?), and you are equally likely to over- or under-shoot longer increments.

All of this left me a bit puzzled as to why I never see this sort of lurching motion in video games that absolutely do employ application-based frame limiting - typically 30/60, along with a ‘variable/uncapped’ option if the devs aren’t jerks.

I use frame-independent time to determine positioning because the audio dictates the state of the visuals rather than the other way around. I suppose most video games are built differently? I’m having a bit of trouble expressing it properly, but reading between your lines, I guess they perhaps always increment the game world some fixed amount per frame, no matter what? My brain is a bit confused and I’m not sure that makes sense either?
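
To try to make my own confusion concrete: is something like this the pattern you mean? Very much a sketch on my part, with mAccumulator, mPrevTime and stepWorld() just made-up placeholders:

// Sketch of the classic fixed-timestep ("fix your timestep") loop.
// mAccumulator, mPrevTime and stepWorld() are made-up placeholders.
void YourApp::update ( )
{
    const double kStep = 1.0 / 60.0;          // the world always advances in fixed slices
    double now = ci::app::getElapsedSeconds ( );
    mAccumulator += now - mPrevTime;
    mPrevTime = now;

    // However late a frame arrives, the world only moves in kStep chunks,
    // so a long frame becomes several small steps rather than one big jump in logic
    // (though a missed v-blank would presumably still show up visually).
    while ( mAccumulator >= kStep )
    {
        stepWorld ( kStep );
        mAccumulator -= kStep;
    }
}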

I suppose you’re right about putting at least some limiter in place in case a user decides to toss vsync, but at this stage, the way my application is built and the way Cinder limits frames appear to be at odds with one another. I wonder if there is a way I can do application-based frame limiting that doesn’t lead to this jerky motion?
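
For instance, would rolling my own limiter along these lines be saner? This is just a sketch on my part, not anything from Cinder: disable Cinder’s own limiter and, at the end of draw(), sleep most of the way to the next frame and then spin the last bit so the wake-up doesn’t drift past the v-blank. The 2 ms margin is a guess I’d have to tune:

#include <chrono>
#include <thread>

// Sketch of a hand-rolled limiter: coarse sleep, then a short busy-wait so the
// wake-up lands much closer to the deadline than a bare sleep would.
// Call at the end of draw(), with frameStart captured at the top of the frame.
void waitForNextFrame ( std::chrono::steady_clock::time_point frameStart, double targetFrameSeconds )
{
    using clock = std::chrono::steady_clock;
    auto deadline = frameStart + std::chrono::duration_cast<clock::duration> (
                        std::chrono::duration<double> ( targetFrameSeconds ) );

    // Sleep until ~2 ms before the deadline (a guessed safety margin), then spin the rest.
    std::this_thread::sleep_until ( deadline - std::chrono::milliseconds ( 2 ) );
    while ( clock::now ( ) < deadline )
        ; // busy-wait the remainder
}

Presumably I’d pair this with your EnumDisplaySettings suggestion and use 1.0 / refresh rate as the target, but I have no idea if this is how it’s normally done.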

As you can probably tell, I find it quite difficult to wrap my head around this, as I’ve not had to look at timing in this type of scenario before.

I feel like I saw you typing something up @lithium, and then cat got your tongue?

Just very curious to hear if you had any follow-up.

Or perhaps cat got your whole body @lithium O_o