Why does the Picking3DApp example not work with CameraOrtho?

For my CAD project I need an orthographic camera projection (the CameraOrtho class). For 3D picking I decided to try the approach from the samples, which casts a Ray and checks its intersection with a TriMesh. As it turned out, this technique does not work with CameraOrtho.
I then tried switching the Picking3DApp sample itself to CameraOrtho, but unfortunately I only confirmed that, for some reason, the method fails in that case as well.
What could be the reason?

Hi,

there is no reason why this technique would not work with a CameraOrtho. The main difference between an orthographic camera and a perspective one is that the viewing volume of the former is a box instead of a frustum.

(Image: perspective camera on the left, orthographic camera on the right.)

So everything apart from generateRay should be exactly the same, and if it doesn’t work I’d start looking at the ray casting first.

If you place the orthographic camera at (0,0,10) looking at (0,0,0), the camera’s view space will be aligned with world space and the ray created by generateRay will always run parallel to the z-axis (the x and y components of its direction will be zero). Can you confirm this?
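
Something like this (just a rough sketch, assuming mCam is your orthographic camera and mMousePos the tracked mouse position) should print a direction of roughly (0, 0, -1); if it doesn’t, the ray generation is the culprit:

float u = mMousePos.x / (float) getWindowWidth();
float v = mMousePos.y / (float) getWindowHeight();
Ray ray = mCam.generateRay( u, 1.0f - v, 1 );
// With the camera at (0,0,10) looking at the origin, expect a direction of ~(0, 0, -1).
console() << "origin: " << ray.getOrigin() << "  direction: " << ray.getDirection() << std::endl;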

-Paul

Thanks for the answer, Paul!
I tried that option. It does not work, which seems strange to me. I looked at how ray generation (generateRay) differs between the base Camera and CameraPersp and saw that only an offset by the focal distance is added. I also tried simulating different combinations of view points and camera positions.
In every case intersections with the bouncing cube did occur, but they followed a pattern I could not make sense of.
For example, the intersection triggers when the mouse is in a certain quadrant of the screen, regardless of where the cube being tested actually is.

Here is the code of your example, which I adapted for my experiments.

#include "cinder/app/App.h"
#include "cinder/app/RendererGl.h"
#include "cinder/gl/gl.h"
#include "cinder/CameraUi.h"
#include "cinder/TriMesh.h"
#include "CinderImGui.h"

using namespace ci;
using namespace ci::app;
using namespace std;

class Picking3DApp : public App {
public:
	void setup() override;
	void update() override;
	void draw() override;

	void mouseMove(MouseEvent event) override;

	bool performPicking(vec3 *pickedPoint, vec3 *pickedNormal);
	void drawCube(const AxisAlignedBox &bounds, const Color &color);

private:
	TriMeshRef			mTriMesh;		//! The 3D mesh.
	AxisAlignedBox		mObjectBounds; 	//! The object space bounding box of the mesh.
	mat4				mTransform;		//! Transformations (translate, rotate, scale) of the mesh.

	//! By caching a 3D model and its shader on the GPU, we can draw it faster.
	gl::BatchRef		mWireCube;
	gl::BatchRef		mWirePlane;
	gl::BatchRef		mMesh;

	CameraOrtho  mCam;
	CameraPersp			mCamera;
	CameraUi			mCamUi;

	ivec2				mMousePos;		//! Keep track of the mouse.
};

void Picking3DApp::setup()
{
	// Create the mesh.
	mTriMesh = TriMesh::create(geom::Cube().subdivisions(2));

	// Get the object space bounding box of the model, for fast intersection testing.
	mObjectBounds = mTriMesh->calcBoundingBox();

	// Set up the camera.
	//mCamera.setPerspective(40.0f, getWindowAspectRatio(), 0.01f, 100.0f);
	//mCamUi = CameraUi(&mCamera, getWindow());	

	mCam.setOrtho(-10, 10, -10, 10, -10, 10);
	mCam.lookAt(vec3(0.0f,0.0f, 10.0f), vec3(0,0,0));

	// Create batches that render fast.
	auto lambertShader = gl::getStockShader(gl::ShaderDef().color().lambert());
	auto colorShader = gl::getStockShader(gl::ShaderDef().color());

	mMesh = gl::Batch::create(*mTriMesh, lambertShader);
	mWirePlane = gl::Batch::create(geom::WirePlane().size(vec2(10)).subdivisions(ivec2(10)), colorShader);
	mWireCube = gl::Batch::create(geom::WireCube(), colorShader);
}

void Picking3DApp::update()
{
	// Animate our mesh.
	mTransform = mat4(1.0f);
	mTransform *= rotate(sin((float)getElapsedSeconds() * 3.0f) * 0.08f, vec3(1, 0, 0));
	mTransform *= rotate((float)getElapsedSeconds() * 0.1f, vec3(0, 1, 0));
	mTransform *= rotate(sin((float)getElapsedSeconds() * 4.3f) * 0.09f, vec3(0, 0, 1));
}

void Picking3DApp::draw()
{
	// Gray background.
	gl::clear(Color::gray(0.5f));

	// Set up the camera.
	gl::ScopedMatrices push;
	gl::setMatrices(mCam);

	// Enable depth buffer.
	gl::ScopedDepth depth(true);

	// Draw the grid on the floor.
	{
		gl::ScopedColor color(Color::gray(0.2f));
		mWirePlane->draw();
	}

	// Draw the mesh.
	{
		gl::ScopedColor color(Color::white());

		gl::ScopedModelMatrix model;
		gl::multModelMatrix(mTransform);

		mMesh->draw();
	}

	// Perform 3D picking now, so we can draw the result as a vector.
	vec3 pickedPoint, pickedNormal;
	if (performPicking(&pickedPoint, &pickedNormal)) {
		gl::ScopedColor color(Color(0, 1, 0));

		// Draw an arrow to the picked point along its normal.
		gl::ScopedGlslProg shader(gl::getStockShader(gl::ShaderDef().color().lambert()));
		gl::drawVector(pickedPoint + pickedNormal, pickedPoint);
	}
}

void Picking3DApp::mouseMove(MouseEvent event)
{
	// Keep track of the mouse.
	mMousePos = event.getPos();
}

bool Picking3DApp::performPicking(vec3 *pickedPoint, vec3 *pickedNormal)
{
	// Generate a ray from the camera into our world. Note that we have to
	// flip the vertical coordinate.
	float u = mMousePos.x / (float)getWindowWidth();
	float v = mMousePos.y / (float)getWindowHeight();
	Ray ray = mCam.generateRay(u, 1.0f - v, 1);
	
	// The coordinates of the bounding box are in object space, not world space,
	// so if the model was translated, rotated or scaled, the bounding box would not
	// reflect that. One solution would be to pass the transformation to the calcBoundingBox() function:
	AxisAlignedBox worldBoundsExact = mTriMesh->calcBoundingBox(mTransform); // slow

	// But if you already have an object space bounding box, it's much faster to
	// approximate the world space bounding box like this:
	AxisAlignedBox worldBoundsApprox = mObjectBounds.transformed(mTransform); // fast
	//mObjectBounds.transform(mTransform);
	// Draw the object space bounding box in yellow. It will not animate,
	// because animation is done in world space.
	drawCube(mObjectBounds, Color(1, 1, 0));

	// Draw the exact bounding box in orange.
	drawCube(worldBoundsExact, Color(1, 0.5f, 0));

	// Draw the approximated bounding box in cyan.
	drawCube(worldBoundsApprox, Color(0, 1, 1));

	// Perform fast detection first - test against the bounding box itself.
	if (!worldBoundsExact.intersects(ray))
		return false;

	// Set initial distance to something far, far away.
	float result = FLT_MAX;

	// Traverse triangle list and find the closest intersecting triangle.
	const size_t polycount = mTriMesh->getNumTriangles();

	float distance = 0.0f;
	for (size_t i = 0; i < polycount; ++i) {
		// Get a single triangle from the mesh.
		vec3 v0, v1, v2;
		mTriMesh->getTriangleVertices(i, &v0, &v1, &v2);

		// Transform triangle to world space.
		v0 = vec3(mTransform * vec4(v0, 1.0));
		v1 = vec3(mTransform * vec4(v1, 1.0));
		v2 = vec3(mTransform * vec4(v2, 1.0));

		// Test to see if the ray intersects this triangle.
		if (ray.calcTriangleIntersection(v0, v1, v2, &distance)) {
			// Keep the result if it's closer than any intersection we've had so far.
			if (distance < result) {
				result = distance;

				// Assuming this is the closest triangle, we'll calculate our normal
				// while we've got all the points handy.
				*pickedNormal = normalize(cross(v1 - v0, v2 - v0));
			}
		}
	}

	// Did we have a hit?
	if (distance > 0) {
		// Calculate the exact position of the hit.
		*pickedPoint = ray.calcPosition(result);

		return true;
	}
	else
		return false;
}

void Picking3DApp::drawCube(const AxisAlignedBox &bounds, const Color & color)
{
	gl::ScopedColor clr(color);
	gl::ScopedModelMatrix model;

	gl::multModelMatrix(glm::translate(bounds.getCenter()) * glm::scale(bounds.getSize()));
	mWireCube->draw();
}

CINDER_APP(Picking3DApp, RendererGl(RendererGl::Options().msaa(8)))

Hi,

I believe you have stumbled upon an issue with Cinder: it does not support generating a ray from an orthographic camera. I was actually a little surprised to see that the CameraOrtho class does not override the calcRay method with its own proper implementation.

This is what it should look like:

Ray CameraOrtho::calcRay( float uPos, float vPos, float imagePlaneAspectRatio ) const
{
	calcMatrices();

	float s = ( uPos - 0.5f ) * imagePlaneAspectRatio;
	float t = ( vPos - 0.5f );
	vec3  eyePoint = mEyePoint + mU * s * ( mFrustumRight - mFrustumLeft ) + mV * t * ( mFrustumTop - mFrustumBottom );
	return Ray( eyePoint, -mW );
}

I’ve created a PR for it.

In the meantime, if you don’t want to hack Cinder, you could use the following free functions:

Ray generateRay( const CameraPersp& cam, float uPos, float vPos, float imagePlaneAspectRatio ) { return cam.generateRay( uPos, vPos, imagePlaneAspectRatio ); }
Ray generateRay( const CameraOrtho& cam, float uPos, float vPos, float imagePlaneAspectRatio )
{
	float left, right, top, bottom, near, far;
	cam.getFrustum( &left, &top, &right, &bottom, &near, &far );

	const auto u = glm::rotate( cam.getOrientation(), glm::vec3( 1, 0, 0 ) );
	const auto v = glm::rotate( cam.getOrientation(), glm::vec3( 0, 1, 0 ) );

	const auto s = ( uPos - 0.5f ) * imagePlaneAspectRatio;
	const auto t = ( vPos - 0.5f );
	const auto eyePoint = cam.getEyePoint() + u * s * ( right - left ) + v * t * ( top - bottom );
	return Ray( eyePoint, normalize( cam.getViewDirection() ) );
}

Call it as follows:

Ray ray = generateRay( mCam, u, 1.0f - v, 1 );

-Paul


Great! Thank you very much, Paul! I will check and write about the results! Thanks again for your efforts!


This fix will be merged into the master branch today. I’ve also fixed another issue with CameraOrtho: it did not respect the aspect ratio set by the user. From now on, if you define the size of the orthographic space and then set the aspect ratio (or the other way around), the space will be adjusted accordingly. It will be much easier to set up an orthographic camera that does not stretch your content.
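
For example, something along these lines (just a sketch of the intended post-fix behavior; which side of the volume gets adjusted is an implementation detail):

CameraOrtho cam;
cam.setOrtho( -10, 10, -10, 10, -10, 10 );     // symmetric 20 x 20 ortho volume
cam.setAspectRatio( getWindowAspectRatio() );  // adjusts the volume to the window, so content isn't stretched
cam.lookAt( vec3( 0, 0, 10 ), vec3( 0 ) );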

After this has been merged, you can simply pull the latest Cinder version from GitHub and your ray casting should just work.

Thanks for pointing us at this issue, @starfair.

-Paul


Hi,

I’ve noticed one more thing. Maybe it’s already fixed in a newer version of Cinder (I was using 0.9.0).
I was playing around with the Picking3D sample and noticed that if the camera is shifted horizontally or vertically (lens shift), the user has to adjust the u, v coordinates passed to generateRay manually. Since the ray generation derives from the camera, these coordinates could be adjusted automatically.

For example, instead of
float u = mMousePos.x / (float) getWindowWidth();

the user has to set it like this:
float u = mMousePos.x / (float) getWindowWidth() + horizontalShift * .5f;

A horizontal shift of 1.f shifts the whole image in one direction by half of its original size.
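
Until that happens, here is a rough (untested) sketch of the manual compensation, assuming the shift was set via Camera::setLensShift() so it can be read back with getLensShiftHorizontal()/getLensShiftVertical() (the vertical sign may need flipping):

float u = mMousePos.x / (float) getWindowWidth()  + mCam.getLensShiftHorizontal() * 0.5f;
float v = mMousePos.y / (float) getWindowHeight() - mCam.getLensShiftVertical()   * 0.5f;
Ray ray = mCam.generateRay( u, 1.0f - v, 1 );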


Hi Dave,

this was indeed already fixed in 2016.

~Paul