[Solved] Updating multiple attributes with instanced rendering

Hi Everyone,

I’m new to Cinder, coming from a webdev background. I’ve experimented with a bit of WebGL, but I’m trying to expand my horizons.

I’m trying to learn about Instanced Rendering using the InstancedTeapots sample. I understand the general concept, but can’t quite translate that into practical changes to the code.

I want to assign each teapot a different color, but think I’m approaching it incorrectly.

My setup

 // Shader for the 3D instanced object
    mGlsl = ci::gl::GlslProg::create( ci::app::loadAsset( "shader.vert" ), ci::app::loadAsset( "shader.frag" ) );
    ci::gl::VboMeshRef mesh = ci::gl::VboMesh::create( ci::geom::Teapot() );
    // create an array of initial per-instance positions and colors laid out in a 2D grid
    std::vector<ci::vec3> bufferData;
    bufferData.reserve( 2 * (mNumX * mNumY) ); // reserve, not resize: push_back() appends after any resized elements
    for( size_t potX = 0; potX < mNumX; ++potX ) {
        for( size_t potY = 0; potY < mNumY; ++potY ) {
            float instanceX = potX / (float)mNumX - 0.5f;
            float instanceY = potY / (float)mNumY - 0.5f;
            ci::vec3 pos( instanceX * mGridSpacing, 0, instanceY * mGridSpacing );
            bufferData.push_back( pos );
            bufferData.push_back( ci::vec3( ci::randFloat(), ci::randFloat(), ci::randFloat() ) );
        }
    }
    // create the VBO which will contain per-instance (rather than per-vertex) data
    mInstanceDataVbo = ci::gl::Vbo::create( GL_ARRAY_BUFFER, bufferData.size() * sizeof(ci::vec3), bufferData.data(), GL_DYNAMIC_DRAW );
    // we need a geom::BufferLayout to describe this data as mapping to the CUSTOM_0 semantic; the 1 (rather than 0) as the last param indicates per-instance (rather than per-vertex) data.
    // because position and color are interleaved, each attribute's stride spans both vec3s, and the color attribute is offset by one vec3
    ci::geom::BufferLayout instanceDataLayout;
    instanceDataLayout.append( ci::geom::Attrib::CUSTOM_0, 3, 2 * sizeof(ci::vec3), 0, 1 /* per instance */ );
    instanceDataLayout.append( ci::geom::Attrib::CUSTOM_1, 3, 2 * sizeof(ci::vec3), sizeof(ci::vec3), 1 /* per instance */ );
    // now add it to the VboMesh we already made of the Teapot
    mesh->appendVbo( instanceDataLayout, mInstanceDataVbo );
    // and finally, build our batch, mapping our CUSTOM_0 and CUSTOM_1 attributes to the "vInstancePosition" and "vInstanceColor" GLSL vertex attributes
    mInstanceBatch = ci::gl::Batch::create( mesh, mGlsl, { { ci::geom::Attrib::CUSTOM_0, "vInstancePosition" },
        { ci::geom::Attrib::CUSTOM_1, "vInstanceColor" }
    } );

My Update:

 ci::vec3 *sphere = (ci::vec3*)mInstanceDataVbo->mapReplace();
     for( size_t potX = 0; potX < mNumX; ++potX ) {
         for( size_t potY = 0; potY < mNumY; ++potY ) {
             float instanceX = potX / (float)mNumX - 0.5f;
             float instanceY = potY / (float)mNumY - 0.5f;

             ci::vec3 newPos( instanceX * mGridSpacing, 0, instanceY * mGridSpacing );
             *sphere++ = newPos;
             *sphere++ = ci::vec3( 1, 1, 1 ); // try resetting the color
         }
     }
     mInstanceDataVbo->unmap();

vert shader:

#version 150

uniform mat4        ciModelViewProjection;
uniform mat4        ciProjectionMatrix, ciViewMatrix;
uniform mat3        ciNormalMatrix;

in vec4             ciPosition;
in vec2             ciTexCoord0;
in vec3             ciNormal;
in vec4             ciColor;
in vec3             vInstancePosition; // per-instance position variable
in vec3             vInstanceColor;
out highp vec2      TexCoord;
out lowp vec4       Color;
out highp vec3      Normal;

void main( void )
{
    gl_Position     = ciModelViewProjection * ( ciPosition + vec4( vInstancePosition, 0 ) );
    Color           = vec4( vInstanceColor, 1.0 );
    TexCoord        = ciTexCoord0;
    Normal          = ciNormalMatrix * ciNormal;
}

Should I be separating the positions and colors into separate buffers? Or is there a different way to update the buffer instead of mapReplace()?




The StereoscopicRendering sample shows how to do this. The solution was to create a struct for the per-instance data and write the position and color through it:

         data->position = newPos;
         data->color = newColor;

A quick note for future reference: try to structure your data in groups of 4 floats, so that it conforms to the so-called std140 memory layout. This will avoid misaligned data and makes life a bit easier in most cases.

A 3D position and an RGBA color could be packed like this:

struct Data {
  vec3  position; // 3 floats
  float reserved; // 1 alignment float, not containing any meaningful data
  vec4  color;    // 4 floats
};

Some examples of valid groups of 4 floats:

struct Data {
  vec4 a; // obviously :)

  vec3 b; // 3 + 1 float
  float c;

  vec2 d; // 2 + 2 floats
  vec2 e;

  vec2 f; // 2 + 1 + 1 float
  float g;
  float h;

  float i; // 1 + 1 + 1 + 1 float
  float j;
  float k;
  float l;

  vec3 m; // 3 + 1 32-bit integer
  int n;  // 32-bit integers (int32_t) use the same number of bytes (4) as floats
};

Example of invalid packing:

struct Garbage {
  float a; // OpenGL will turn this float into a vec4(!)...
  vec3 b;  // ...and will then turn this vec3 into a vec4!
};

As a rule of thumb: go from big to small.



Thank you @paul.houx, that’s very interesting! I read something about ‘padding’ buffer data; I assume this is what you’re talking about?

One thing that still confuses me is that I don’t have any alpha blending on, and I’ve enabled depthRead and depthWrite, but it still seems like the objects are drawing in the incorrect order, or have some sort of transparency…

Possible fixes:

  • Check your fragment shader to make sure you write 1 to the output alpha. If you’re outputting a vec3, use a vec4 instead and set its 4th component to 1.
  • Check if you do have a depth buffer. If you’re rendering to an Fbo, make sure its depth buffer is enabled and cleared.
  • Try to explicitly set the blend mode to “no blending”. Perhaps the OpenGL state is dirty (the effect in the image looks like some kind of subtractive blending, interesting).

Hi Paul,

Sorry about the delayed response. Explicitly calling gl::disableBlending() solved the problem. Not sure how the state got ‘dirty’ as you mentioned, since I have no other calls to blending in the app…

Hi Dan,

I would not be surprised if alpha blending is enabled by default. I seem to remember a discussion about it among the core Cinder devs. It used to be disabled by default, but then users were confused when their text rendering looked bad. Kind of the opposite of your use case :)