# Correct method of projecting 3D point to 2D screen coords?

Ahoy

I am trying to project a 3D point to the screen using the formulas defined in `Camera::worldToScreen`, i.e.:

``````cpp
vec2 Camera::worldToScreen( const vec3 &worldCoord, float screenWidth, float screenHeight ) const
{
	// world -> eye space
	vec4 eyeCoord = getViewMatrix() * vec4( worldCoord, 1 );
	// eye -> clip space (still pre-divide, despite the name)
	vec4 ndc = getProjectionMatrix() * eyeCoord;
	// perspective divide: clip -> NDC, each component in [-1, 1]
	ndc.x /= ndc.w;
	ndc.y /= ndc.w;
	ndc.z /= ndc.w;

	// viewport transform: NDC -> window coords, y flipped so (0, 0) is top-left
	return vec2( ( ndc.x + 1.0f ) / 2.0f * screenWidth, ( 1.0f - ( ndc.y + 1.0f ) / 2.0f ) * screenHeight );
}
``````

I’ve set up a magenta dot to be the 2D projection of the 3D position of the car shown here

As you can see, if I zoom out, the dot moves further away from the car (towards the bottom edge of the screen). Why does this happen? It’s behaving as if the car is moving down, when that isn’t the case.

Note: I should mention – this Q is more about 2D/3D geometry than it is about cinder. I just happen to be using cinder’s “implementation” of projection, but I don’t understand the behavior I am getting from it

How are you drawing the dot? Are you setting back to an orthographic projection beforehand? `gl::setMatricesWindow()` for example?

I just threw together a minimal example and it worked fine for me, so is there anything special you’re doing to place the car that wouldn’t be taken into consideration by `Camera::worldToScreen`?

``````cpp
class ForumTestApp : public ci::app::App
{
public:

	void setup ( ) override
	{
		_cam = CameraPersp ( getWindowWidth(), getWindowHeight(), 60.0f, 0.1f, 1000.0f );
		_cam.lookAt( vec3 ( 0, 0, 5 ), vec3 ( 0 ) );

		_camUi = CameraUi ( &_cam, getWindow() );
		_camUi.setMinimumPivotDistance ( 0.0f );
	}

	void draw ( ) override
	{
		static vec3 kCubePos { 0.6, 0.3, 0.1 };

		gl::clear( Colorf::gray ( 0.2f ) );
		gl::setMatrices ( _cam );

		{
			gl::ScopedDepth depth { true };
			gl::drawColorCube ( kCubePos, vec3(1) );
		}

		{
			vec2 p = _cam.worldToScreen( kCubePos, getWindowWidth(), getWindowHeight() );
			gl::setMatricesWindow ( getWindowSize() );
			gl::ScopedColor color { Colorf ( 1, 0, 0 ) };

			gl::drawSolidRect( { p - vec2(4), p + vec2(4) } );
		}
	}

	CameraPersp _cam;
	CameraUi    _camUi;
};

CINDER_APP ( ForumTestApp, RendererGl );
``````

Many thanks for the reply and example!

> How are you drawing the dot? Are you setting back to an orthographic projection beforehand? `gl::setMatricesWindow()` for example?

I am rendering the dot using a custom 2D GUI overlay (nuklear GUI), so no 3D rendering is involved there. Maybe it’s the missing `gl::setMatricesWindow()` then (I wasn’t using that kind of function before).

I was just wondering because the `vec2` returned by `worldToScreen` is in `0..screenSize` but perhaps nuklear renders in bottom up NDC or something. Do you get correct looking results just printing to `stdout`? I guess the next bit to work out is if it’s rendering or maths related.

> I was just wondering because the `vec2` returned by `worldToScreen` is in `0..screenSize`

Yeah, nuklear also follows that convention – (0, 0) is top left corner, +x from left to right, and +y from top to bottom.

> Do you get correct looking results just printing to `stdout`?

Essentially I’m feeding the output of the `worldToScreen` implementation directly into a nuklear circle-drawing function, so any values printed via `cout`/`stdout` are exactly the same values passed to the nuklear call.

FYI, the nuklear function is called like this:

``````c
nk_fill_arc(canvas, carpos2d.x, carpos2d.y, 10.f, 0.f, 3.142f*2.f, nk_rgba(255, 0, 123, 255));
``````

Where `carpos2d` represents the calculated screen pos.
It might be an issue on my side somewhere – maybe I’m messing up certain calcs. I’ll have to check it again. Thanks for the answers!

I’ve fixed it now, I think it was my mistake when reading how to implement the `worldToScreen` function. All is working as expected now!