Outline shape2d VboMesh with stroke in PoScene

Hi members,

I’m running into a problem rendering strokes in the Cinder PoScene block. My source graphics are SVGs, which I convert to Shape2ds. Each contour should then be rendered with a different fill colour and black stroke and support transparency. However, at present PoScene lists strokes as a //TO DO item in poShape.cpp.

Under the hood, PoScene converts Shape2d to a VBO mesh. By converting each Shape2d contour to a VboMesh and running two passes over each mesh (one for wireframe and one for fill), I’m able to get shapes with fill and stroke as long as each shape remains opaque. As soon as I introduce transparency, the wireframe shows through the fill. Here are three images to illustrate the problem and the code I’ve added to poShape.cpp so far.


void Shape::render()
{
	ci::TriMesh::Format format = ci::TriMesh::Format();
	format.mTexCoords0Dims = 2;
	format.mPositionsDims = 2;
	format.mNormalsDims = 3;

	int numContours = mCiShape2d.getNumContours();

	// Create a TriMesh and VboMesh for each of our Shape2d contours,
	// then collect all of the contour meshes in a vector of VBO meshes
	for( int i = 0; i < numContours; ++i ) {
		ci::Path2d thisPath = mCiShape2d.getContour( i );
		ci::TriMeshRef triMesh = ci::TriMesh::create( ci::Triangulator( thisPath, (float)mPrecision ).calcMesh( ci::Triangulator::WINDING_POSITIVE ), format );
		mVboMeshes.push_back( ci::gl::VboMesh::create( *triMesh ) );
	}
}

void Shape::draw()
{
	ci::gl::color( ci::ColorA( getFillColor(), getAppliedAlpha() ) );
	std::vector<ci::Color> thisColors = getFillColors();
	float artAlpha = 1.0f;
	float wireframeStrokeWidth = 4.0f;

	for( size_t i = 0; i < mVboMeshes.size(); ++i ) {
		ci::gl::ScopedGlslProg shaderScp( ci::gl::getStockShader( ci::gl::ShaderDef().color() ) );

		// DRAW VboMesh STROKE (wireframe pass)
		ci::gl::color( ci::ColorA( 0, 0, 0, artAlpha ) );
		ci::gl::lineWidth( wireframeStrokeWidth );
		ci::gl::enableWireframe();
		ci::gl::draw( mVboMeshes[i] );
		ci::gl::disableWireframe();

		// DRAW VboMesh FILL
		ci::Color thisColor = thisColors[i];
		ci::gl::color( ci::ColorA( thisColor, artAlpha ) );
		ci::gl::draw( mVboMeshes[i] );
	}
}


To get around the problem, I want to render only the outline of the wireframe. To do so, it seems that I need to calculate the adjacent TriMesh vertices and then construct a VboMesh to create the outline.

I found this post on the Cinder Discourse forum, Adjacency to index buffer from trimesh, where @mettrelapaix managed to solve the problem for an OBJ file.

I read through the post and the logic and code snippets made sense, but I’m not sure how to put it all together, especially integrating the adjacency calculation code and its dependencies from the GLSL 4.0 Cookbook into my Cinder project.

@paul.houx has suggested
“Perhaps we should add this adjacency stuff to Cinder, might come in handy. A gl::VboMeshAdj that takes a TriMesh, gl::VboMesh or even a geom::Source as input, for instance.”

I’d like to support this request. In trying to solve this problem, I’ve come across many people trying to outline VboMeshes, and it appears to be commonly needed functionality.

Meanwhile, can someone please help me put the pieces together to achieve this?

Many thanks,


Cinder 0.9.0 release
VS 2013 Community Edition
Win 7 64 bit


If I understand things correctly, you are drawing the wireframe of the mesh after triangulation. Since you created the mesh from a Shape2d, you already have the “outline” you want to render. So, one solution I can think of is to draw your mesh without the wireframes, followed by drawing the outline (could be GL_LINES). You could create a batch out of these two VBOs?


Hi @bala,

Thank you for the suggestion.

I tried out the batch “CombineMesh” technique suggested in the post “Outline a 2D Shape”. However, I’m still running into the same problem as Max: more complex Shape2ds do not scale down evenly for the fill.

If I understand right, GL_LINES is deprecated. Even though my Nvidia driver may support it right now, I’d like to future-proof my code to a certain extent and make it less GPU-dependent, especially since I’m using a scene graph, which could potentially be deployed on a client machine with an Intel GPU.

That’s another reason something akin to @mettrelapaix’s solution seemed most attractive. Just not grokking it yet. Going to keep experimenting.



Just for the record, the primitive GL_LINES isn’t deprecated, however glLineWidth is.

Perhaps I should clarify a few things from my post that you mentioned here.

The main reason for getting all that adjacency information is to give a geometry shader what it needs to shade contours. This works for 3D objects because the triangles that make up the mesh can be front-facing or not depending on the view angle. The shader code calculates the direction of the face normal and compares it to adjacent triangles. I’m not sure if that works for 2D shapes, since I would guess all the triangles face the viewer.

Hi @lithium,

Thank you for clarifying that GL_LINES is still OK but that glLineWidth is deprecated. I’m going to spend some more time with Paul’s GeometryShaderBasic sample to see if I can apply that approach to my needs.

@mettrelapaix, thank you for giving me more background information on how your solution may only apply specifically to 3D. I’m exploring a lot of new territory in trying to solve what initially looked like a simple problem, so I think I’m suffering from information overload. lol.



Can you tell me how using a scene graph affects the problem?

To clarify, I was not suggesting that you create one mesh out of both the outline & the triangulated mesh, since you need to treat them differently.

  1. Render the TriMesh of the shape, ( wireframes disabled, appropriate transparency ).
  2. Render the Outline on top. You create a vbo for it using the appropriate primitive ( GL_LINES/GL_LINE_STRIP/…). You could use Paul’s geometry shader for this.
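For step 2, the index buffer of the outline can be derived directly from the contour’s point count. Here is a minimal, library-free sketch (the helper name is mine, not Cinder’s or PoScene’s), assuming a single closed contour:

```cpp
#include <cstdint>
#include <vector>

// Index buffer for drawing a closed contour of numPoints vertices as
// GL_LINES: one segment per consecutive pair of points, plus a final
// segment that wraps back to the first point to close the loop.
std::vector<uint32_t> buildClosedOutlineIndices( uint32_t numPoints )
{
    std::vector<uint32_t> indices;
    indices.reserve( numPoints * 2 );
    for( uint32_t i = 0; i < numPoints; ++i ) {
        indices.push_back( i );
        indices.push_back( ( i + 1 ) % numPoints ); // wrap around at the end
    }
    return indices;
}
```

The resulting indices, together with the contour’s points as the position attribute, could then be wrapped in a gl::VboMesh with primitive GL_LINES and drawn on top of the fill pass.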


Hi Bala,

Thank you for outlining the steps I should be taking to get lines and fills working. I will take another run at this today. :)

I tried @lithium’s code since I was curious about how it works. I think it would be very useful for simple geometry like squares, circles and polygons, just not quite right for the complexity of my svgs.

“Can you tell me how using a scene graph affects the problem?”

My goal is to use the scene graph to interact with and preview compositions for output to Cairo. Right now I can change the transparency, scale, rotation and position of my SVG assets in the scene graph by dragging the mouse while pressing modifier keys for the various actions. All of this is working great and I can output my composition to a resolution independent Cairo svg.

I would like to have the same functionality for both stroke weight and stroke color in order to accurately preview my composition before I export the Cairo context.

Since I envision poScene becoming one of my “workhorse” tools, I’m hesitant to use glLineWidth in case Nvidia drops support for deprecated OpenGL, but I need to find a solution that will allow me to use line width in OpenGL for the scene graph.

I hope this makes sense. Thanks again for your help.



In case you haven’t seen this one: https://github.com/BanTheRewind/Cinder-UiTree

And its description: https://forum.libcinder.org/topic/uitree

I found it to be a more flexible solution to be kept in my pocket.

Regarding my geometry shader sample, it serves two purposes:

  • allows rendering of proper line width, even on hardware that no longer supports glLineWidth.
  • more importantly: improves the quality of the resulting line by properly connecting its segments.

If your hardware still supports glLineWidth, I would first try to simply create and render a mesh of type GL_LINES. If that works, and you’d like to improve the quality even more, you can always give the geometry shader in my sample a try.

Regarding adding a gl::VboMeshAdj to Cinder: what I mean by this is that you can feed it another mesh (either GL_LINES, GL_LINE_STRIP, GL_TRIANGLES or GL_TRIANGLE_STRIP) and it will add adjacency information to it, turning it into a GL_LINES_ADJACENCY (or GL_TRIANGLES_ADJACENCY etc.) mesh. You still need to write your own outline shader after that.


Thanks for the additional help, guys.

Bala, I saw Stephen’s post about his Cinder-UiTree a couple of weeks ago. It’s great that he’s built something intended to be a flexible solution, but I don’t think I’m experienced enough to build my necessary scene graph functionality on top of it.

Paul, thank you for the additional information about your geometry shader example and clarifying the functionality and requirements of using gl::VboMeshAdj if that gets added to Cinder at some point.

One thing I’m still confused about is how to modify your geometry shader sample to construct a primitive from arbitrary Shape2d points. Right now, I see that the entire polygon primitive is generated at once by performing the necessary offset calculations inside the shader. It would be great if there were an example showing how to achieve this with a font outline or some other arbitrary shape. I have a feeling this is a much more complicated endeavor, but perhaps I’m just thinking about it the wrong way.

In my case, in order to render the Shape2d outline in a single mBatch->draw() call, I think I’d need to pass the shader an array of floats containing the x,y coordinates of all of my Shape2d path points, bound to a vec2 uniform array half the size of the float array.

Right now Cinder is logging the following complaint:

cinder::gl::GlslProg::logUniformWrongType[955] Uniform type mismatch for “uPositions[0]”, expected FLOAT_VEC2 and received bool

I have a feeling I’m not going about this the right way but here’s the code in case anyone can help




void GeometryShaderIntroApp::draw()
{
	gl::setMatricesWindow( getWindowWidth(), getWindowHeight() );
	gl::translate( getWindowCenter() );

	gl::ScopedGlslProg glslProg( mGlsl );

	// float mPoints[241]; array containing all the x,y coordinates of my Shape2d
	mGlsl->uniform( "uPositions", mPoints ); // this call produces the type mismatch above
}



#version 150

layout( points ) in;
layout( line_strip, max_vertices = 121 ) out;

in vec3 vColor[]; // Output from vertex shader for each vertex
out vec3 gColor;  // Output to fragment shader

uniform mat4 ciProjectionMatrix;

// Hard-coded array size for testing - corresponds to the number of vec2
// points of my Shape2d contour
uniform vec2 uPositions[121];

void main()
{
	gColor = vColor[0];

	for( int i = 0; i < 121; i++ ) {
		vec2 thisPos = uPositions[i];

		vec4 offset = vec4( thisPos.x, thisPos.y, 0.0, 0.0 );
		gl_Position = ciProjectionMatrix * ( gl_in[0].gl_Position + offset );
		EmitVertex();
	}
	EndPrimitive();
}

Hi Ken,

you should read a bit more about geometry shaders in general to understand how they work. It’s important to know they still only operate on a single primitive at a time (e.g. a line segment or a triangle). They can then only create a new primitive and send it to the fragment shader. For example, it can create a triangle strip from a line segment. This is what my shader does. Its inner workings are explained in this post.


Hi Ken,

The right way™ of pushing vertex data into the GPU would be through a VBO. Please go through the VBO example in Cinder to figure out how that works. Essentially, you set up memory layout information once, and on each update, push data through that map. Shader uniforms (like in the code above) are suited to values that are constant over all vertices/pixels in a frame.

In your case, you could set up a VBO which has { position, color, thickness } per vertex. You can then access them in your shaders and use them appropriately. Look at the ciPosition or ciTexCoord0 variable in the examples. Paul’s shader would come in handy here to convert the end points of the line segments you have supplied through the VBO into thicker lines with miter joins.
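As a library-free illustration of this layout (the struct and helper names are mine), the per-vertex data could be interleaved like this before handing it to a VBO:

```cpp
#include <vector>

// One vertex of the proposed line VBO: position, colour and per-vertex
// thickness, stored interleaved so a single buffer feeds all attributes.
struct LineVertex {
    float px, py;     // position  -> ciPosition in the vertex shader
    float r, g, b, a; // colour    -> custom attribute
    float thickness;  // width     -> custom attribute
};

// Flatten the vertices into the raw float array uploaded to the GPU.
std::vector<float> interleave( const std::vector<LineVertex> &verts )
{
    std::vector<float> data;
    data.reserve( verts.size() * 7 ); // 7 floats per vertex
    for( const LineVertex &v : verts )
        data.insert( data.end(), { v.px, v.py, v.r, v.g, v.b, v.a, v.thickness } );
    return data;
}
```

In Cinder, the equivalent layout would be described with a geom::BufferLayout that appends a POSITION attribute plus custom attributes, so the shader can read colour and thickness alongside ciPosition.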

Hi guys,

Thanks again for your patience in helping me wrap my head around this, gentlemen. I was pretty comfortable with how shaders worked in Cinder 0.8.5 but I have a lot of catching up to do in this area with the changes to 0.9.0. Since I focus mainly on 2D Cairo graphics, it’s an area I’ve been avoiding for too long. The day has come…

Paul, thanks for the links to more resources. I actually read the geometry shader tutorial you referenced a couple times today but repetition is clearly required! I’ve spent all day doing tutorials and reading about glBatches and geometry shaders. It’s wiring stuff up correctly in Cinder that has me confused. I’ve got your GeometryShaderApp up and running on 0.9.0 so I’ll use this as a guide.

Bala, thank you for giving more details on the right way to do what I’m attempting. I’ve looked at a bunch of the samples but knowing the right approach to choose has been a challenge. I think I just need to immerse myself in this stuff for a couple weeks to get comfortable with all the components and which technique is appropriate for a given task.

Incidentally, I read an excellent tutorial today that other folks here might find useful: Drawing Lines is Hard. It has some great WebGL examples of how to control line thickness in both 2D and 3D scenarios with sample projects and shader code.

Thanks again, guys. Back to woodshedding!