Raymarching Shader Camera Matrices


I’m trying to combine shaderToy-style raymarching with Cinder’s polygon batch rendering from an fbo. How would I use projection and view matrix uniforms instead of the standard shaderToy camera build? I can feed the shader uniforms just fine; I just don’t know how to reverse-engineer the ray direction from the camera matrices. I’m also having a hard time getting the fbo depth texture and the raymarched distance to match up.
Here’s a typical shaderToy-style camera setup, referenced from Paul Houx’s shaderToy Cinder example:
vec2 q = gl_FragCoord.xy / iResolution.xy;
vec2 p = -1.0 + 2.0*q;
p.x *= iResolution.x / iResolution.y;

// camera (mo = normalized mouse coordinates, iMouse.xy / iResolution.xy)
vec3 ro = 4.0*normalize(vec3(cos(3.0*mo.x), 1.4 - 1.0*(mo.y-.1), sin(3.0*mo.x)));
vec3 ta = vec3(0.0, 1.0, 0.0);
float cr = 0.5*cos(0.7*iGlobalTime);

// build ray
vec3 ww = normalize( ta - ro );
vec3 uu = normalize( cross( vec3(sin(cr), cos(cr), 0.0), ww ) );
vec3 vv = normalize( cross( ww, uu ) );
vec3 rd = normalize( p.x*uu + p.y*vv + 2.0*ww );

// raymarch
vec3 col = raymarch( ro, rd );

I can send the fragment shader my camera matrices, but how would I extract ro and rd from those matrices for the raymarch function? Is rd in the 4th column of the inverseViewMatrix… ? Matrices and I have a hard time getting along sometimes.
Also, I found somewhere that this is the proper world-depth extraction from a ray distance, but it doesn’t seem to match the fbo’s depth texture.

rayDepth = ((gl_DepthRange.diff * rayDistance) + gl_DepthRange.near + gl_DepthRange.far) / 2.0;
float depth = texture( mFboDepth, uvCoord ).r;
if( depth > rayDepth )
    mFboColor = vec4( 0.0, 0.0, 0.0, 0.0 );

// alpha blend
vec4 result = vec4( 1.0 ) * mFboColor + vec4( 1.0 - mFboColor.a ) * rayColor;
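For reference, here’s my current understanding of the standard mapping, written out as plain C++ so I could check the numbers (a sketch, assuming a standard perspective projection, the default glDepthRange of [0, 1], and using the eye-space z of the hit point rather than the raw march distance — they differ by the cosine of the angle to the view axis):

```cpp
#include <cassert>
#include <cmath>

// Window-space depth of a point whose eye-space distance along the view
// axis is `eyeZ` (positive, in front of the camera), for a standard
// perspective projection. This is the value the hardware writes to the
// depth buffer, hence what the fbo depth texture holds.
float windowDepthFromEyeZ( float eyeZ, float zNear, float zFar )
{
    // NDC z in [-1, 1] for a GL-style projection matrix
    float ndcZ = ( ( zFar + zNear ) * eyeZ - 2.0f * zFar * zNear )
               / ( eyeZ * ( zFar - zNear ) );
    // map to the default depth range [0, 1]
    return ndcZ * 0.5f + 0.5f;
}
```

Note how nonlinear it is: a point halfway between the near and far planes already lands above 0.9 in the depth buffer.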

I think it has something to do with having the wrong camera matrices though.
Thanks in advance.


I did it like this for a project:

vec3 cameraPosition = mCamera.getEyePoint();
mat3 cameraOrientation = ci::mat3( mCamera.getOrientation() );

float left, right, top, bottom, near, far;
mCamera.getFrustum( &left, &top, &right, &bottom, &near, &far );

float viewDistance = mFbo->getAspectRatio() / math< float >::abs( right - left ) * near;

mGlsl->uniform( "uAspectRatio", mFbo->getAspectRatio() );
mGlsl->uniform( "uCameraPosition", cameraPosition );
mGlsl->uniform( "uCameraOrientation", cameraOrientation );
mGlsl->uniform( "uViewDistance", viewDistance );
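As a sanity check on that viewDistance formula: for a symmetric frustum it reduces to half the unit image height over tan( verticalFov / 2 ). A small standalone sketch in plain floats (the helper names are illustrative, not Cinder API):

```cpp
#include <cassert>
#include <cmath>

// The viewDistance from above, isolated: with uv.y in [-0.5, 0.5] and uv.x
// pre-multiplied by the aspect ratio, this is the z offset that makes
// normalize( vec3( uv, -viewDistance ) ) match the camera frustum.
float viewDistanceFor( float left, float right, float zNear, float aspect )
{
    return aspect / std::fabs( right - left ) * zNear;
}

// Equivalent closed form for a symmetric frustum:
// half the unit image height over tan( verticalFov / 2 ).
float viewDistanceFromFov( float verticalFovRadians )
{
    return 0.5f / std::tan( verticalFovRadians * 0.5f );
}
```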

glsl side:

uniform float uAspectRatio;
uniform vec3 uCameraPosition;
uniform mat3 uCameraOrientation;
uniform float uViewDistance;


vec2 uv = vTexCoord0 - vec2( 0.5 );
uv.x *= uAspectRatio;

vec3 rayOrigin = uCameraPosition;
vec3 rayDirection = uCameraOrientation * normalize( vec3( uv, -uViewDistance ) );

I have not used depth extraction before, but are you taking into account that fbo depth is not linear?
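To expand on that: the depth texture stores window-space depth, which is heavily skewed toward the near plane, so you can’t compare it against a linear ray distance directly. One way is to linearize the sampled value back to eye space first; a sketch, assuming a standard perspective projection and the default [0, 1] depth range:

```cpp
#include <cassert>
#include <cmath>

// Recover the eye-space distance (along the view axis) from a value sampled
// out of a depth texture written by a standard perspective projection.
float linearizeDepth( float windowDepth, float zNear, float zFar )
{
    float ndcZ = windowDepth * 2.0f - 1.0f;  // back to [-1, 1]
    return 2.0f * zFar * zNear / ( zFar + zNear - ndcZ * ( zFar - zNear ) );
}
```

The same formula drops into a fragment shader almost verbatim; then both sides of the comparison are in eye-space units.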



Thanks Gabor! That was extremely helpful. I uploaded a working example on github for anyone else that could use it.

In the Fbo::Format I had to use depthTexture(), which defaults to GL_DEPTH_COMPONENT24 instead of GL_DEPTH_COMPONENT32F… Does this mean that all the draw calls into the fbo do their depth comparison against the fbo depth texture instead of “screen depth”, and thus use a lower-precision comparison and introduce z-fighting into the Cinder fbo scene?
It makes me wonder if I should leave depthTexture() out of the fbo format and instead use a multitexture output from all of my shaders to draw into a custom depth texture using this technique. That way the lower-precision depth texture would be reserved exclusively for the raymarching shader, leaving the fbo’s own depth comparison alone.
Or even better… is it possible to render a raymarched scene onto a texture and output a custom gl_FragDepth of the raymarched depth? Theoretically, this would let me render the raymarched scene directly onto the screen or fbo with the rest of the scene and let GL do its normal depth comparison… right?


If you’re rendering to an Fbo that has a 24-bit depth buffer, then your draw calls will indeed use 24-bit depth comparison. Whether or not this leads to z-fighting depends primarily on the setting of your near plane and, to a lesser extent, on your far plane and the nature of your content.

In any case, if you prefer having a 32-bit floating point buffer/texture, you can specify this when you create the Fbo. See gl::Fbo::Format::depthBuffer() and gl::Fbo::Format::setDepthBufferInternalFormat().

Regarding your last question: while you could render depth to a separate texture, or even override the pixel depth by writing to gl_FragDepth in your fragment shader, I don’t believe OpenGL allows you to bind more than one depth texture to an Fbo, or to specify separate read and write buffers for depth comparison. But I could be mistaken.
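For the gl_FragDepth route, the fragment shader end could look roughly like this (a sketch; uProjMatrix, uViewMatrix, worldHit and hit are assumed names, not from the code above):

```glsl
// after the march: worldHit = rayOrigin + rayDirection * hitDistance
vec4 clipPos = uProjMatrix * uViewMatrix * vec4( worldHit, 1.0 );
float ndcZ   = clipPos.z / clipPos.w;   // [-1, 1]
gl_FragDepth = ndcZ * 0.5 + 0.5;        // assumes the default glDepthRange
// for rays that miss, write the far plane so the background stays behind
if( !hit ) gl_FragDepth = 1.0;
```

Just be aware that writing gl_FragDepth disables early depth testing for that shader, so there’s a performance cost.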

P.S.: you may want to look into the reversed-Z technique described in the first link.


Apparently, my repo wasn’t the working code. I fixed the repo and included an example of the z-fighting with two cubes: one cube raymarched, the other triangle-faced and drawn white.
Also, I heard somewhere that GTA inverts its depth buffer to get higher accuracy at far distances. It makes sense, since depth is non-linear… but how would one go about that in Cinder?


You’ve asked how to perform reversed-z in Cinder, but since I haven’t tried that myself, I cannot give you proper example code. It’s a good idea for a future sample, though, so thanks.

In the meantime, check out this article on how to do reversed-z in OpenGL. It requires extensions to OpenGL that may not be available on all systems.
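To make the idea slightly more concrete, here is a sketch of a reversed, infinite-far projection matrix in plain C++. It assumes glClipControl( GL_LOWER_LEFT, GL_ZERO_TO_ONE ) from OpenGL 4.5 / ARB_clip_control, together with glDepthFunc( GL_GREATER ) and glClearDepth( 0.0 ) at render time; the function name is illustrative:

```cpp
#include <cassert>
#include <cmath>

// Column-major reversed-z perspective projection with an infinite far plane.
// With glClipControl( GL_LOWER_LEFT, GL_ZERO_TO_ONE ), NDC z is already the
// [0, 1] depth value: the near plane maps to 1.0 and infinity to 0.0, which
// spends float precision where the standard mapping wastes it.
void reversedZInfiniteProjection( float fovyRadians, float aspect, float zNear,
                                  float out[16] )
{
    float f = 1.0f / std::tan( fovyRadians * 0.5f );
    for( int i = 0; i < 16; ++i )
        out[i] = 0.0f;
    out[0]  = f / aspect;
    out[5]  = f;
    out[11] = -1.0f;  // w = -eyeZ
    out[14] = zNear;  // z = zNear, so depth = zNear / distance
}
```

Depth of a point at eye-space distance d works out to zNear / d, so it is exactly 1 at the near plane and falls off smoothly toward 0, with no far plane to clip against.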