Raymarching Shader Camera Matrices

I’m trying to incorporate ShaderToy-style raymarching with Cinder’s polygon batch rendering from an FBO. How would I use projection and view matrix uniforms instead of the standard ShaderToy camera build? I can feed the shader uniforms just fine; I just don’t know how to reverse-engineer the ray direction from the camera matrices. I’m also having a hard time getting the FBO depth texture and the raymarched distance to match up.
Here’s a typical ShaderToy-style camera setup, referenced from Paul Houx’s ShaderToy Cinder example:
vec2 q = gl_FragCoord.xy / iResolution.xy;
vec2 p = -1.0 + 2.0 * q;
p.x *= iResolution.x / iResolution.y;

// camera (mo is the normalized mouse position)
vec3 ro = 4.0 * normalize( vec3( cos( 3.0 * mo.x ), 1.4 - 1.0 * ( mo.y - 0.1 ), sin( 3.0 * mo.x ) ) );
vec3 ta = vec3( 0.0, 1.0, 0.0 );
float cr = 0.5 * cos( 0.7 * iGlobalTime );

// build ray
vec3 ww = normalize( ta - ro );
vec3 uu = normalize( cross( vec3( sin( cr ), cos( cr ), 0.0 ), ww ) );
vec3 vv = normalize( cross( ww, uu ) );
vec3 rd = normalize( p.x * uu + p.y * vv + 2.0 * ww );

// raymarch
vec3 col = raymarch( ro, rd );

I can send the fragment shader my camera matrices, but how would I extract ro and rd from those matrices for the raymarch function? Is ro in the 4th column of the inverse view matrix…? Matrices and I have a hard time getting along sometimes.
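(For reference, one common reconstruction from matrix uniforms, as an untested sketch; uInverseViewMatrix and uInverseProjectionMatrix are assumed uniform names holding inverse(view) and inverse(projection), not anything from Cinder or this thread:)

vec2 ndc = 2.0 * ( gl_FragCoord.xy / iResolution.xy ) - 1.0;

// the camera position sits in the 4th column of the inverse view matrix
vec3 ro = vec3( uInverseViewMatrix[3] );

// unproject the fragment onto the near plane, then rotate into world space
vec4 eyePos = uInverseProjectionMatrix * vec4( ndc, -1.0, 1.0 );
eyePos /= eyePos.w;
vec3 rd = normalize( mat3( uInverseViewMatrix ) * eyePos.xyz );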
Also, I found somewhere that this is the proper way to extract world depth from a ray distance, but it doesn’t seem to match the FBO’s depth texture.

rayDepth = ((gl_DepthRange.diff * rayDistance) + gl_DepthRange.near + gl_DepthRange.far) / 2.0;
float depth = texture( mFboDepth, uvCoord ).r;
if ( depth > rayDepth ) mFboColor = vec4( 0.0 );

// alpha blend
vec4 result = vec4( 1.0 ) * mFboColor + vec4( 1.0 - mFboColor.a ) * rayColor;
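(One likely reason for the mismatch, as a hedged aside: gl_DepthRange maps NDC z, not a world-space ray distance, so the raymarched hit point would first need to be projected. A sketch, with uViewProjection assumed to be projection * view:)

vec4 clipPos = uViewProjection * vec4( ro + rayDistance * rd, 1.0 );
float ndcZ = clipPos.z / clipPos.w;   // NDC z in [-1, 1]
float rayDepth = ((gl_DepthRange.diff * ndcZ) + gl_DepthRange.near + gl_DepthRange.far) / 2.0;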

I think it has something to do with having the wrong camera matrices, though.
Thanks in advance.

I did it like this for a project:

vec3 cameraPosition = mCamera.getEyePoint();
mat3 cameraOrientation = ci::mat3( mCamera.getOrientation() );

float left, right, top, bottom, near, far;
mCamera.getFrustum( &left, &top, &right, &bottom, &near, &far );

// distance to a virtual image plane whose height spans 1.0 in uv units
float viewDistance = mFbo->getAspectRatio() / math<float>::abs( right - left ) * near;

mGlsl->uniform( "uAspectRatio", mFbo->getAspectRatio() );
mGlsl->uniform( "uCameraPosition", cameraPosition );
mGlsl->uniform( "uCameraOrientation", cameraOrientation );
mGlsl->uniform( "uViewDistance", viewDistance );

GLSL side:

uniform float uAspectRatio;
uniform vec3 uCameraPosition;
uniform mat3 uCameraOrientation;
uniform float uViewDistance;

...

vec2 uv = vTexCoord0 - vec2( 0.5 );   // center uv around the origin
uv.x *= uAspectRatio;

vec3 rayOrigin = uCameraPosition;
vec3 rayDirection = uCameraOrientation * normalize( vec3( uv, -uViewDistance ) );

I have not used depth extraction before, but are you taking into account that the Fbo’s depth buffer is not linear?

-Gabor
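(To expand on that last point with a hedged sketch: a standard perspective depth sample can be brought back to linear eye-space depth before comparing it to a raymarched distance. uNear and uFar are assumed uniforms matching the camera used to render the Fbo.)

float linearizeDepth( float d, float uNear, float uFar )
{
    float ndcZ = 2.0 * d - 1.0;   // window depth [0,1] -> NDC z [-1,1]
    return 2.0 * uNear * uFar / ( uFar + uNear - ndcZ * ( uFar - uNear ) );
}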

Thanks Gabor! That was extremely helpful. I uploaded a working example to GitHub for anyone else who could use it.

However…
In the Fbo::Format I had to use depthTexture(), which defaults to GL_DEPTH_COMPONENT24 instead of GL_DEPTH_COMPONENT32F… Does this mean that all the draw calls into the FBO do their depth comparison against the FBO depth texture instead of “screen depth”, and thus use a lower-precision comparison and introduce z-fighting into the Cinder FBO scene?
It makes me wonder if I should leave depthTexture() out of the FBO format and instead use a multiple-render-target output in all of my shaders to write into a custom depth texture using this technique. That way the lower-precision depth texture would be reserved exclusively for the raymarching shader, and the FBO’s own depth comparison format would be left alone.
Or even better… is it possible to render a raymarched scene onto a texture and output a custom gl_FragDepth of the raymarched depth? Theoretically, this would allow me to render the raymarched scene directly onto the screen or FBO with the rest of the scene and let GL do its normal depth comparison… right?

If you’re rendering to an Fbo that has a 24-bit depth buffer, then your draw calls will indeed use 24-bit depth comparison. Whether or not this leads to z-fighting depends primarily on the setting of your near plane and, to a lesser extent, on your far plane and the nature of your content.

In any case, if you prefer having a 32-bit floating point buffer/texture, you can specify this when you create the Fbo. See gl::Fbo::Format::depthBuffer() and gl::Fbo::Format::setDepthBufferInternalFormat().
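(As a sketch of what that might look like; the exact Format calls should be checked against your Cinder version:)

gl::Fbo::Format fmt;
// request a 32-bit float depth texture instead of the default 24-bit one
fmt.depthTexture( gl::Texture2d::Format().internalFormat( GL_DEPTH_COMPONENT32F ) );
mFbo = gl::Fbo::create( getWindowWidth(), getWindowHeight(), fmt );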

Regarding your last question: while you could render depth to a separate texture, or even override the pixel depth by writing to gl_FragDepth in your fragment shader, I don’t believe OpenGL allows you to bind more than one depth texture to an Fbo, or to specify separate read and write buffers for depth comparison. But I could be mistaken.

P.S.: you may want to look into the reversed-Z technique described in the first link.

Apparently, my repo wasn’t the working code. I fixed the repo and included an example of the z-fighting with two cubes: one cube raymarched, the other triangle-faced and drawn white.
Also, I heard somewhere that GTA inverts its depth buffer to get higher accuracy at far distances. It makes sense, since depth is non-linear… but how would one go about that in Cinder?

You’ve asked how to perform reversed-Z in Cinder, but since I haven’t tried that myself, I cannot give you proper example code. It’s a good idea for a future sample, though, so thanks.

In the meantime, check out this article on how to do reversed-Z in OpenGL. It requires OpenGL extensions that may not be available on all systems.
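(The core GL state changes that article describes boil down to roughly this sketch; it needs OpenGL 4.5 or the ARB_clip_control extension:)

glClipControl( GL_LOWER_LEFT, GL_ZERO_TO_ONE );  // clip z in [0,1] instead of [-1,1]
glClearDepth( 0.0 );                             // “far” clears to 0 under reversed-Z
glDepthFunc( GL_GREATER );                       // nearer fragments now have larger depth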

-Paul


Revisiting another issue related to this… in a normal Cinder ShaderToy-esque scenario, if I write to gl_FragDepth I just convert linear depth to sample depth…

// result suitable for assigning to gl_FragDepth
// (zNear and zFar are uniforms matching the camera’s near and far planes)
float depthSample( float linearDepth )
{
    float nonLinearDepth = ( zFar + zNear - 2.0 * zNear * zFar / linearDepth ) / ( zFar - zNear );
    nonLinearDepth = ( nonLinearDepth + 1.0 ) / 2.0;
    return nonLinearDepth;
}

gl_FragDepth = depthSample( dist );

But let’s say we are rendering the rest of the pipeline using reversed-Z depth and a GL_GREATER comparison… how would we convert the linear depth from a typical raymarched distance to reversed-Z? Something like…

float linear_distance = typicalRaymarchedScene();
gl_FragDepth = someMagicalLinearToReverseZFunction( linear_distance );

I think I did a successful test converting a reversed-Z sampler depth into linear space with a super simple function…

float revZsamplerDepth = texture( uSamplerDepth, uv ).r;
float linear_depth = zNear / revZsamplerDepth;

but when I try the inverse of this, something like…

float revZsamplerDepth = linear_distance * zNear;

I’m not getting proper depth at all… I did attempt to put Paul’s reversed-Z Cinder changes into my build of Cinder… Am I anywhere close on this, or did I perhaps port Paul’s reversed-Z fork into the latest release of Cinder wrong?
I should also add that I’m using this function for my reversed-Z camera projection matrix as well…

glm::mat4 MakeInfReversedZProjRH( float fovY_radians, float aspectWbyH, float zNear )
{
    float f = 1.0f / tan( fovY_radians / 2.0f );
    // note: glm matrices are column-major, so each line below is one column
    return glm::mat4(
        f / aspectWbyH, 0.0f, 0.0f,  0.0f,
        0.0f,           f,    0.0f,  0.0f,
        0.0f,           0.0f, 0.0f, -1.0f,
        0.0f,           0.0f, zNear, 0.0f );
}

Solved it!
Turns out it was
float linear_depth = zNear / revZsamplerDepth;
all along…
I tested it with a ShaderToy raytracing scene… gonna try it with raymarching just to make sure…
I went ahead and did a pull request of Paul’s reversed-Z branch for the new Cinder. Here’s a link to my test app demonstrating writing to gl_FragDepth. Cheers. There’s another technique that does this on the back end instead of the front end and can support transparencies and volumetrics. This method only supports solids, but it’s more efficient and easier to maintain.
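(A closing sketch of why that works with the infinite reversed-Z projection above: for an eye-space point the matrix produces clip.z = zNear and clip.w = -z, so the stored depth is clip.z / clip.w = zNear / eyeDepth, and the inverse mapping has exactly the same form, linear = zNear / depth. One hedge: a raymarched distance is measured along the ray, so it should be projected onto the camera’s forward axis first whenever the ray is not parallel to it.)

float reverseZFragDepth( float rayDist, vec3 rd, vec3 camForward, float zNear )
{
    float eyeDepth = rayDist * dot( rd, camForward );  // ray distance -> eye-space depth
    return zNear / eyeDepth;                           // write to gl_FragDepth with GL_GREATER
}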