I’m trying to incorporate shaderToy-style raymarching with Cinder’s polygon batch rendering from an fbo. How would I use projection and view matrix uniforms instead of the standard shaderToy camera build? I can feed the shader uniforms just fine, I just don’t know how to reverse engineer the ray direction using camera matrices. And I’m also having a hard time getting the fbo depth texture and the raymarched distance to match up.
Here’s a typical shaderToy-style camera setup, referenced from Paul Houx’s shaderToy Cinder example:
vec2 q = gl_FragCoord.xy / iResolution.xy;
vec2 p = -1.0 + 2.0*q;
p.x *= iResolution.x/ iResolution.y;
// camera
vec3 ro = 4.0*normalize(vec3(cos(3.0*mo.x), 1.4 - 1.0*(mo.y-.1), sin(3.0*mo.x)));
vec3 ta = vec3(0.0, 1.0, 0.0);
float cr = 0.5*cos(0.7*iGlobalTime);
// build ray
vec3 ww = normalize( ta - ro );
vec3 uu = normalize( cross( vec3(sin(cr),cos(cr),0.0), ww ) );
vec3 vv = normalize( cross( ww, uu ) );
vec3 rd = normalize( p.x*uu + p.y*vv + 2.0*ww );
// raymarch
vec3 col = raymarch( ro, rd );
I can send the fragment shader my camera matrices, but how would I extract ro and rd from those matrices for the raymarch function? Is ro the 4th column of the inverseViewMatrix… ? Matrices and I have a hard time getting along sometimes.
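For reference, here is one common way to rebuild the primary ray from the matrices. This is only a sketch: the uniform names (uInverseViewMatrix, uInverseProjectionMatrix, uResolution) are placeholders for whatever you upload from the app (e.g. the inverses of ci::CameraPersp’s view and projection matrices). The key facts are that the camera position (ro) is the 4th column of the inverse view matrix, and rd comes from unprojecting the fragment’s NDC coordinate and rotating it into world space:

```glsl
// Placeholders -- upload these from the app side:
uniform mat4 uInverseViewMatrix;        // inverse of cam.getViewMatrix()
uniform mat4 uInverseProjectionMatrix;  // inverse of cam.getProjectionMatrix()
uniform vec2 uResolution;

void buildRay( out vec3 ro, out vec3 rd )
{
    // fragment position in NDC, in [-1, 1]
    vec2 ndc = 2.0 * gl_FragCoord.xy / uResolution - 1.0;

    // unproject a point on the near plane (NDC z = -1) into view space
    vec4 viewPos = uInverseProjectionMatrix * vec4( ndc, -1.0, 1.0 );
    viewPos /= viewPos.w;

    // camera position in world space: 4th column of the inverse view matrix
    ro = uInverseViewMatrix[3].xyz;

    // in view space the camera sits at the origin, so viewPos.xyz is already
    // the ray direction; w = 0 rotates it into world space without translation
    rd = normalize( ( uInverseViewMatrix * vec4( viewPos.xyz, 0.0 ) ).xyz );
}
```

With this, the shaderToy-style ww/uu/vv basis construction drops out entirely; the view matrix already encodes it.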
Also, I found somewhere that this is a proper world depth extraction from a ray distance but it doesn’t seem to match the fbo’s depth texture.
rayDepth = ((gl_DepthRange.diff * rayDistance) + gl_DepthRange.near + gl_DepthRange.far) / 2.0;
float depth = texture( mFboDepth, uvCoord ).r;
if (depth > rayDepth) mFboColor = vec4(0.,0.,0.,0.);
// alpha blend
vec4 result = vec4(1.) * mFboColor + vec4(1.0 - mFboColor.a) * rayColor;
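One likely reason this doesn’t line up: that formula maps an NDC z in [-1, 1] to a window-space depth, but a raymarched rayDistance is a Euclidean distance along the ray, not an NDC z (the depth buffer is nonlinear in distance). A sketch of a conversion that should match the FBO depth texture, assuming uViewMatrix/uProjectionMatrix uniforms and a world-space hit point from the march:

```glsl
uniform mat4 uViewMatrix;        // same matrices the FBO pass was rendered with
uniform mat4 uProjectionMatrix;

float rayDepthAt( vec3 worldHit )   // worldHit = ro + rd * rayDistance
{
    // project the hit point exactly as the rasterizer would
    vec4 clip = uProjectionMatrix * uViewMatrix * vec4( worldHit, 1.0 );
    float ndcZ = clip.z / clip.w;   // now in [-1, 1]

    // NDC z -> window-space depth, comparable to texture( mFboDepth, uv ).r
    return ( gl_DepthRange.diff * ndcZ + gl_DepthRange.near + gl_DepthRange.far ) / 2.0;
}
```

Note this only matches if the FBO pass and the raymarch use the same view/projection matrices and the same near/far planes.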
I think it has something to do with having the wrong camera matrices though.
thanks in advance.