Colors off at different viewing angles (e.g. rear side)

Here’s the scene from the starting camera perspective:
[image]

And from behind:
[image]

Why does the color shading appear incorrect from the rear side?
Thanks!

I assume you have enabled the depth buffer and are using Lambert shading?

When rendering transparent objects, make sure to disable writing to the depth buffer. If the scene also contains opaque objects, render them first (with depth testing and writing enabled), then disable depth writing (but keep depth testing enabled) and render your transparent objects.
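
In Cinder, that order would look roughly like this (the two draw helpers are hypothetical placeholders for your own rendering code):

gl::enableDepthRead();         // depth testing on
gl::enableDepthWrite();        // depth writing on
drawOpaqueObjects();           // hypothetical: render opaque geometry first

gl::enableDepthWrite( false ); // keep depth testing, but stop writing depth
gl::ScopedBlendAlpha blend;    // standard alpha blending
drawTransparentObjects();      // hypothetical: transparent geometry, back to front
gl::enableDepthWrite( true );  // restore state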

For best results, sort the transparent objects from back to front. This is important if the objects have different colors, because rendering a yellow object over a red one is not the same as rendering a red object over a yellow one.

Rendering transparent objects is an advanced topic for which there is still no fully satisfactory solution in the computer graphics world.

-Paul

http://www.opengl-tutorial.org/intermediate-tutorials/tutorial-10-transparency/

https://learnopengl.com/Advanced-OpenGL/Blending


Just to add to this: if you hadn’t seen this post already, there is some more advice for the same “problem”.

Good luck with the 3D CA! I don’t want to derail this topic, but on a separate note: your current method of one Batch per cube will suffice for the first couple of hundred cubes. However, when you are ready to take it up a notch, it could be a great exercise to learn about instanced rendering. This basically involves a single cube Batch; you send a list of positions and colours to the GPU separately, and the GPU draws them all in one draw call. Have a look at the InstancedTeapots example if this is of interest to you (though this method also requires more knowledge about the shader pipeline).

F


Thanks, guys. Sorry for so many questions. Transparency seems pretty complicated, so I may avoid it for now.

  1. Enabling additive blending causes nothing to be drawn (a pure white screen), regardless of whether the objects are opaque or transparent.

  2. Disabling the depth test causes interesting shadowing from different angles:
    front: [image]
    angle1: [image]
    angle2: [image]

  3. I turned off all transparency, and I’m now very confused about the lighting with Lambert shading – it seems inconsistent, or maybe I haven’t set the correct lighting directions? If possible, I’d like to read up on how the lighting works.
    front: [image]
    angle1: [image]
    angle2: [image]

If you have a white background you won’t see anything with additive blending, as you can’t add anything to white to make it any other colour (it’s already at peak brightness for all 3 colour channels). Try setting your background to black and using additive blending again.
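
A minimal sketch of that in Cinder (assuming you clear at the top of draw()):

gl::clear( Color::black() );   // additive blending needs a dark background
gl::ScopedBlendAdditive blend; // source and destination colors are summed, accumulating toward white

// ... draw your cubes here, preferably with darker colours so they don't saturate ...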

I made a similar 3D cellular automaton several years ago with additive blending in Processing. You can see it in the first 30s of this video: https://vimeo.com/105969968 NB: additive blending can peak out to white quite quickly if you don’t use darker colours to draw your cubes.


Thanks Felix. I see what you’re saying. I think additive blending has a unique look.
[image]

I’m confused about the lighting as well. It should work.

For my understanding, could you check that:

  • You created the cube using a geom::Cube fed into a VboMesh?
  • You created the GlslProg using:
    gl::getStockShader( gl::ShaderDef().color().lambert() )?
  • You’ve created a Batch from the VboMesh and the GlslProg?
  • You’re drawing the cubes by calling mBatch->draw() in a loop?

If you did all that, the geom::Cube should make sure that vertices and normals are generated for the VboMesh, and the GlslProg will take care of the lighting. I’ll describe what the shader does at the end of this post in case you’re interested.
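
For reference, that checklist corresponds roughly to this sketch (the member and per-cube variable names are hypothetical):

auto shader = gl::getStockShader( gl::ShaderDef().color().lambert() );
auto mesh = gl::VboMesh::create( geom::Cube() );
mBatch = gl::Batch::create( mesh, shader );

// In draw(), for every cube:
gl::ScopedModelMatrix scopedMtx;
gl::translate( cubePosition ); // hypothetical per-cube position
gl::color( cubeColor );        // hypothetical per-cube color
mBatch->draw();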

If you did things differently, could you explain what you did or share some code maybe?

-Paul


How the Lambert shader works

The shader comes in two parts: a vertex shader and a fragment shader.

The vertex shader takes the 24 vertices of every cube (6 sides, 4 corners per side), as well as their normals, and transforms them from model space (the coordinates as defined by the cube) to world space (where the cube is translated and maybe scaled, depending on your calls to gl::translate() and so on) to view space (the coordinates relative to the camera, where the camera position is (0,0,0) and we’re looking down the negative z-axis). The normals are also transformed to view space, using a special matrix that prevents scaling so they remain unit length.

Finally, the vertex shader passes the transformed vertex positions and normals, as well as the drawing color, to the fragment shader. And to fulfill OpenGL’s requirements, it also transforms the vertices to clip space, which is the second-to-last step of the 3D-to-2D projection.

#version 150

uniform mat4 ciModelView; // Provided by Cinder to convert from model space to view space in one go.
uniform mat4 ciProjectionMatrix; // Provided by Cinder to convert from view space to clip space.
uniform mat3 ciNormalMatrix; // Provided by Cinder to convert normals to view space.

in vec4 ciPosition; // Vertex position in model space provided by Cinder.
in vec3 ciNormal; // Normal in model space provided by Cinder.
in vec4 ciColor; // Vertex color (or current draw color) provided by Cinder.

out vec4 vertPosition; // Our own view space vertex position being sent to the fragment shader.
out vec3 vertNormal; // Our own view space normal being sent to the fragment shader.
out vec4 vertColor; // Our own vertex color being sent to the fragment shader.

void main(void)
{
    vertPosition = ciModelView * ciPosition; // Convert from model to view space.
    vertNormal = ciNormalMatrix * ciNormal; // Convert from model to view space without scaling.
    vertColor = ciColor;

    gl_Position = ciProjectionMatrix * vertPosition; // Mandatory: convert from view space to clip space.
}

The fragment shader runs for every pixel of your image. It receives the vertex positions and normals, automatically interpolated for the pixel. It then calculates the direction from the vertex to the light source. For the default Lambert shader, the light source position is hard-coded to be the same as the camera position. It then calculates the so-called Lambert diffuse lighting, which is a fancy term for the simplest form of lighting you can imagine.

If the vertex normal points directly at the camera, the dot product between the normal and the light direction will be 1, resulting in the brightest color. If the vertex normal is perpendicular to the light direction, the result will be 0: completely dark. If it points away from the light, we probably can’t see the vertex anyway because it will be on the rear side of the cube, but the result would be negative (down to -1), so for good measure we clamp it to zero. The fragment shader then multiplies the result of the dot product with the input color and outputs the final color.

#version 150

in vec4 vertPosition; // Our inputs from the vertex shader.
in vec3 vertNormal;
in vec4 vertColor;

out vec4 fragColor; // Our output from the fragment shader: RGBA.

void main(void)
{
    // Calculate the direction to the light source.
    const vec3 kLightPosition = vec3( 0.0 ); // In view space, this is the same as the camera position.
    vec3 L = normalize( kLightPosition - vertPosition.xyz );

    // Re-normalize the normal ('vertNormal' may be slightly longer or shorter than 1 after interpolation).
    vec3 N = normalize( vertNormal );

    // Take the dot product of N and L to find out if our polygon is facing the light.
    float NdotL = max( 0.0, dot( N, L ) );

    // Output the final color. Don't modify transparency.
    fragColor.rgb = vertColor.rgb * NdotL;
    fragColor.a = vertColor.a;
}

As an exercise, see if you can add specular lighting by writing your first shader!
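
In case you want a starting point: a simple Blinn-Phong specular term could be added to the fragment shader like this (the shininess exponent of 50.0 is an arbitrary choice):

// Inside main(), after computing N, L and NdotL:
vec3 V = normalize( -vertPosition.xyz ); // view direction: the camera sits at the origin in view space
vec3 H = normalize( L + V );             // halfway vector between the light and view directions
float specular = pow( max( 0.0, dot( N, H ) ), 50.0 );

fragColor.rgb = vertColor.rgb * NdotL + vec3( specular );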

To answer myself…

Could it be that your top image, where the cubes are solid green, is caused by the fact that we’re looking dead-on at the cube’s corner vertex? In theory, the 3 front-facing planes could then be turned away from the light source at precisely the same angle, and the lighting would therefore be exactly the same.
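
To put numbers on it: looking straight down the cube’s diagonal, the three visible face normals are (1,0,0), (0,1,0) and (0,0,1), and the direction toward the camera (and light) is (1,1,1)/√3. Each face then yields the same dot product, 1/√3 ≈ 0.577, so all three visible faces are shaded identically.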

Hm, interesting. Yes, that makes sense. The original starting position (which causes a solid green color) is perfectly aligned with the corner vertex of the first cube, but I assumed that some of the cubes to the left or right would get different shading due to their offset. If I change the camera perspective even a tiny bit, the sides become visible.

Thanks. I found this article:
https://mottosso.gitbooks.io/cinder/content/book/guide_to_meshes.html
I’ll be reading up on VboMeshes, plus I need to start using drawInstanced() for performance.
I’m checking drawInstanced() in this:

I’m wondering if there’s an easy way to generate a VboMesh from geom::Cube(), but I still need to figure out how to make custom meshes in the future.

In setup():

	auto lambert = gl::ShaderDef().lambert().color();
	gl::GlslProgRef shader = gl::getStockShader(lambert);
	for (int x = 0; x < xdim; ++x) {
		for (int y = 0; y < ydim; ++y) {
			for (int z = 0; z < zdim; ++z) {
				// One Batch per cube, with the translation baked into the geometry.
				auto slice = geom::Cube().size(xsize, xsize, xsize);
				auto trans = geom::Translate(x, y, z);
				int point = x * zdim * ydim + y * zdim + z; // flatten the 3D index
				mSlices[point] = gl::Batch::create(slice >> trans, shader);
			}
		}
	}

In draw():

	gl::ScopedModelMatrix scpModelMtx;
	for (int x = 0; x < xdim; ++x) {
		for (int y = 0; y < ydim; ++y) {
			for (int z = 0; z < zdim; ++z) {
				int point = x * zdim * ydim + y * zdim + z;
				// Saturation and alpha are driven by the cell's status.
				gl::color(ColorA(CM_HSV, 0.5f, float(status[point]), 0.1f, 0.05f + 0.8f * float(status[point])));
				mSlices[point]->draw();
			}
		}
	}

There is:

auto mesh = gl::VboMesh::create( slice );
auto shader = gl::getStockShader(lambert);
mBatch = gl::Batch::create( mesh, shader );

(note: when you use mBatch = gl::Batch::create( slice, shader ), a VboMesh is created under the hood for you).

And to use instanced rendering, try:

std::vector<vec3> positions;
for (int x = 0; x < xdim; ++x) {
  for (int y = 0; y < ydim; ++y) {
    for (int z = 0; z < zdim; ++z) {
      positions.emplace_back( x, y, z );
    }
  }
}

// Create a buffer containing all positions. 
auto instances = gl::Vbo::create( GL_ARRAY_BUFFER, positions.size() * sizeof( vec3 ), positions.data(), GL_STATIC_DRAW );

// Describe the contents of the buffer in a way Cinder will understand.
geom::BufferLayout layout;
layout.append( geom::CUSTOM_0, sizeof( vec3 ) / sizeof( float ), 0, 0, 1 /* per instance */ );

// Append this data to our VboMesh:
mesh->appendVbo( layout, instances );

// When creating the batch, tell Cinder the name of our custom attribute.
mBatch = gl::Batch::create( mesh, shader, { { geom::CUSTOM_0, "iPosition" } } );

Note that you will now have to use a custom shader instead of a stock one. You could begin by copying the shader code from my previous post to two separate files: cubes.vert and cubes.frag. Then load it using:

auto shader = gl::GlslProg::create( loadAsset("cubes.vert"), loadAsset("cubes.frag") );

You will then have to add our custom attribute to the shader…

in vec3 iPosition; // per instance

(note: also keep the line in vec4 ciPosition;, as we will need both)

…and adjust the vertex position accordingly:

vertPosition = ciModelView * vec4( ciPosition.xyz + iPosition.xyz, 1.0 );
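
Putting both changes together, the vertex shader’s main() would look something like this:

void main(void)
{
    // Offset each vertex by its instance position before transforming to view space.
    vertPosition = ciModelView * vec4( ciPosition.xyz + iPosition.xyz, 1.0 );
    vertNormal = ciNormalMatrix * ciNormal;
    vertColor = ciColor;

    gl_Position = ciProjectionMatrix * vertPosition;
}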

Don’t forget to call mBatch->drawInstanced( positions.size() ); instead of mBatch->draw(); :)

Caveat: I wrote this code in Google Chrome, so it might contain errors.

Edit: the cool thing is that you can now easily sort your positions to render the cubes from back to front. Store the positions and the instances Vbo as member variables (e.g. std::vector<vec3> mPositions and gl::VboRef mInstances) and then do this:

void MyApp::update()
{
    const auto camera = mCamera.getEyePoint();

    // Sort by distance from camera, furthest first.
    std::sort( std::begin( mPositions ), std::end( mPositions ),
            [camera]( const vec3 &a, const vec3 &b ) {
        return glm::distance2( a, camera ) > glm::distance2( b, camera );
    } );

    // Update positions.
    auto ptr = (vec3 *)mInstances->mapReplace();
    for( auto &position : mPositions ) {
        *ptr = position;
        ptr++;
    }
    mInstances->unmap();
}