I got a bit of a specialized Cinder question for all you brilliant people. Here’s my conundrum:
I have a ci::CameraPersp, let’s call it the SceneCamera, pointed at a target ci::vec3. (I know that for all intents and purposes, we only need a target point to simplify figuring out the direction the camera is pointing. The camera is actually just looking along a vector and doesn’t give a damn about a ‘target’ point.)
In addition, I have another ci::CameraPersp we’ll call DebugCamera, which connects to Cinder’s amazing ci::CameraUi class. Up until now, this ‘target’ point has been static. The SceneCamera has also been static, always viewing the scene from one fixed location. The DebugCamera is free to be moved and helps me figure out how the scene looks from other perspectives, mostly for debugging purposes. But now things are changing:
I’d like this ‘target’ point to move, and initially the SceneCamera will move along with it in the exact same direction, i.e. no panning or tilting. In other words, the SceneCamera will not change viewing direction at all, but simply translate in a particular direction.
I want the DebugCamera to also move in exactly the same way, but without altering any user-mouse input that’s already been applied.
The question is: How to most practically and simply do this in the Cinder framework?
The best method I’ve come up with so far is simply to make a copy of each camera and apply the movement to the copy:
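Roughly something like this (just a sketch; mOffset is a stand-in for whatever accumulated movement I’m applying):

void MyApp::draw() {
    // Copy the camera, offset only the copy, and render with it,
    // leaving the original (and any CameraUi state) untouched.
    CameraPersp cam = mDebugCamera;
    cam.setEyePoint( cam.getEyePoint() + mOffset ); // mOffset: hypothetical accumulated movement
    gl::ScopedMatrices scopedMatrices; // the ‘camera scope’ I mention below
    gl::setMatrices( cam );
    // ... draw the scene from the offset copy ...
}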
But this feels a bit clunky. Like, perhaps I should apply the movement before the camera scope? Or afterwards? But I’m very uncertain of this type of math. Any ideas and thoughts are welcome!
I think (but haven’t tested this recently) that it should be enough to update both the eye point and the target for both cameras. It looks something like this:
#include "cinder/app/App.h"
#include "cinder/Camera.h"
#include "cinder/CameraUi.h"

using namespace ci;
using namespace ci::app;

class MyApp : public App {
  public:
    void setup() override;
    void update() override;

    CameraPersp mSceneCamera;
    CameraPersp mDebugCamera;
    CameraUi    mCameraUi;
    vec3        mVelocity{ 4, 2, 1 };
};

void MyApp::setup() {
    mCameraUi.setCamera( &mDebugCamera ); // let CameraUi drive the debug camera
    mCameraUi.connect( getWindow() );     // hook it up to mouse events
}

void MyApp::update() {
    // Move the eye and the pivot by the same offset, so the viewing
    // direction never changes: a pure translation.
    auto sceneEye = mSceneCamera.getEyePoint();
    auto sceneLookAt = mSceneCamera.getPivotPoint();
    mSceneCamera.lookAt( sceneEye + mVelocity, sceneLookAt + mVelocity );

    // TODO: only if the user is not interacting with it.
    auto debugEye = mDebugCamera.getEyePoint();
    auto debugLookAt = mDebugCamera.getPivotPoint();
    mDebugCamera.lookAt( debugEye + mVelocity, debugLookAt + mVelocity );
}
By setting both the position and the pivot point, CameraUi does not get confused about what to do with the new situation.
Much appreciate the input. I think an issue with this approach is that mVelocity needs to be scaled by the time that’s passed since the last update(); otherwise the camera just spins out of control very fast (as it keeps adding the same value every frame).
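Something like this, I suppose (a sketch, with a hypothetical mLastTime member that would be initialized in setup()):

void MyApp::update() {
    // Scale the per-second velocity by the elapsed frame time,
    // so movement speed is independent of the frame rate.
    double now = getElapsedSeconds();
    float dt = static_cast<float>( now - mLastTime ); // seconds since the last update()
    mLastTime = now;

    vec3 offset = mVelocity * dt; // units/second * seconds = units
    mSceneCamera.lookAt( mSceneCamera.getEyePoint() + offset,
                         mSceneCamera.getPivotPoint() + offset );
    mDebugCamera.lookAt( mDebugCamera.getEyePoint() + offset,
                         mDebugCamera.getPivotPoint() + offset );
}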
I worry that since the time between update() calls can be super-mega-tiny, imprecision very quickly creeps in. I’m just making cool visuals, so imprecision isn’t a huge concern, but I think it’s worth considering.
Thanks to you, I learned about getPivotPoint(), which I’m assuming is what the ‘target’ ends up being set to!
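For reference, one way to do that scaling without accumulating tiny per-frame deltas (a sketch; mSceneBaseEye and mSceneBaseTarget are hypothetical members captured once in setup()):

void MyApp::update() {
    // Recompute the total offset from the overall elapsed time instead of
    // adding small deltas, keeping the base pose and the movement separate.
    float t = static_cast<float>( getElapsedSeconds() );
    vec3 offset = mVelocity * t; // total displacement since startup
    mSceneCamera.lookAt( mSceneBaseEye + offset, mSceneBaseTarget + offset );
}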
Note to anyone reading along: the scaling above should be more resistant to imprecision creeping in, but it requires keeping the camera’s base positioning and the movement contribution separate.