# Drawing thick lines (continued from old forum)

Hi

I’m not actually using Cinder, but OpenGL in my own framework, and I recently came across Paul Houx’s great algorithm for drawing thick lines.

As mentioned in the last post, I have run into a bit of a problem when trying to draw the lines under non-square (non-uniformly scaled) transforms — not sure how to better explain it. Frankly, my math is pretty limited (I’m more of a framework/library developer), and I was wondering if anyone had any tips on how I might adapt the algorithm to solve this?

thanks

Hi,

thanks for posting here, instead of on the old, archived forums. I’ve read your other post, but don’t completely understand your problem.

You want to draw a line graph inside a 320x160 px rectangle. The data on the x-axis is in the range [1…1000] and on the y-axis it’s in the range [-1…+1].

How do you calculate your vertex positions? Are you, for example, using (1,-1) for the first pair and (1000, 1) for the last pair? And do you then apply a scaling, like `gl::scale( 320.0f / 1000.0f, 160.0f / 2.0f )`?

Or do you apply the scaling while calculating the coordinates, so that the first pair becomes (0, -80) and the last pair becomes (320, 80)?
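For what it’s worth, the equivalence of the two approaches can be checked with plain numbers (a standalone sketch; the `Vec2` struct and the `scaleAfter`/`scaleUpFront` names are made up for illustration, using the 320x160 / [1…1000] x [-1…+1] ranges from this thread):

```cpp
#include <cassert>
#include <cmath>

// Ranges from this thread: x in [1..1000], y in [-1..+1],
// drawn into a 320x160 px rectangle.
struct Vec2 { float x, y; };

// Method 1: keep data coordinates, apply the scale afterwards,
// like gl::scale( 320.0f / 1000.0f, 160.0f / 2.0f ).
Vec2 scaleAfter( Vec2 p ) {
    return { p.x * ( 320.0f / 1000.0f ), p.y * ( 160.0f / 2.0f ) };
}

// Method 2: bake the scale into the coordinates while calculating them.
Vec2 scaleUpFront( Vec2 p ) {
    return { p.x / 1000.0f * 320.0f, p.y / 2.0f * 160.0f };
}
```

Both map the last pair (1000, 1) to (320, 80), which is why, for plain (thin) lines, the resulting graph is identical either way.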

To be honest, both methods should result in the same graph. The vertices are sent to the vertex shader, which then applies the model-view-projection matrix to calculate the window position (a.k.a. clip space) of the vertex. The geometry shader then extrudes the line segments by creating a triangle strip and sends new vertex positions and primitives (triangles instead of lines) to the fragment shader. The OpenGL pipeline then rasterizes the triangles, converting them into a huge number of pixels. Finally, the fragment shader determines the color of each pixel based on interpolated data from the geometry shader.

If this does not answer your question, could you maybe post a screenshot?

-Paul

Hi Paul

It’s rather the former: the coordinates are set according to the units of the graph, i.e. the first one being x = 0 and the last x = 1000, and then the scaling is applied before drawing to the canvas.

I guess the problem is that I am calculating the lines on the CPU; I’m not using a shader. If this were done ‘later’, in the shader, the line width would be computed in pixels rather than in coordinate-space units like mine is, and it would work at all angles.

While I definitely want to move parts of my code to a shader at some point (for other reasons anyway), at the moment it’s not a realistic solution.

So if anyone can suggest a way to adapt the algorithm to address this, it would be highly appreciated.

I’ve pasted the relevant section of my C++ code below. It’s specific to my framework, but it’s the math that is the problem, so I think you will be able to read it. (Basically, it’s just a port of the CPU version of the code you posted in the old forum.)

```cpp
void GLX::Detail::AddPathSegment(Float width, const Point & p0, const Point & p1, const Point & p2, const Point & p3, Array<Point> & points)
{
    if (p1 == p2) return;

    Point line = Normalise(p2 - p1);
    Point normal = Normalise(Point(-line.y, line.x));

    Point tangent1 = (p0 == p1) ? line : Normalise(Normalise(p1 - p0) + line);
    Point tangent2 = (p2 == p3) ? line : Normalise(Normalise(p3 - p2) + line);

    Point miter1(-tangent1.y, tangent1.x);
    Point miter2(-tangent2.y, tangent2.x);

    miter1 *= width / DotProduct(normal, miter1);
    miter2 *= width / DotProduct(normal, miter2);

    Point a = p1 - miter1;
    Point b = p1 + miter1;
    Point c = p2 + miter2;
    Point d = p2 - miter2;

    points.Push(a);
    points.Push(b);
    points.Push(c);

    points.Push(c);
    points.Push(d);
    points.Push(a);
}
```

Well, obviously that would not work, because you are generating triangles in one coordinate space and then stretching them by applying scaling. For proper triangles, your input points `p0`, `p1`, `p2` and `p3` must be pre-scaled to your 320x160 range.

The reason I thought both methods should work is that in my solution the scaling happens in the vertex shader, prior to generating the triangles in the geometry shader. But you are not using my solution at all and instead rolled your own CPU version. Which is fine, but it’s not the same.
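To see why the order matters, consider just the extruded normal under the numbers from this thread (a toy standalone sketch; `scaledNormalLength` is a made-up helper, not part of the code above):

```cpp
#include <cassert>
#include <cmath>

// Non-uniform scale from this thread: a 320x160 px canvas showing
// data in x: [1..1000] and y: [-1..+1].
const float kScaleX = 320.0f / 1000.0f; // 0.32
const float kScaleY = 160.0f / 2.0f;    // 80

// On-screen length of a unit normal (nx, ny) that was extruded *before*
// the scale was applied. If the points had been pre-scaled instead, the
// extruded normal would keep the same length at every angle.
float scaledNormalLength( float nx, float ny ) {
    float x = nx * kScaleX;
    float y = ny * kScaleY;
    return std::sqrt( x * x + y * y );
}
```

A horizontal segment (normal (0, 1)) ends up 80 px thick per unit of width, while a vertical one (normal (1, 0)) ends up only 0.32 px thick: exactly the angle-dependent thickness discussed in this thread.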

hey @paul.houx, I thought I’d post here as it’s relevant to this thread (sort of). I’m trying to run an example based on your geometry shader, but I’m running into an error I can’t diagnose.

```cpp
void LineRenderer::update()
{
    // brute-force method: recreate mesh if anything changed
    if (!mVboMesh) {
        if (mPoints.size() > 1) {
            // create a new vector that can contain 3D vertices
            std::vector<ci::vec3> vertices;

            // to improve performance, make room for the vertices + 2 adjacency vertices
            vertices.reserve(mPoints.size() + 2);

            // first, add an adjacency vertex at the beginning
            vertices.push_back(2.0f * ci::vec3(mPoints[0], 0) - ci::vec3(mPoints[1], 0));

            // next, add all 2D points as 3D vertices
            std::vector<ci::vec2>::iterator itr;
            for (itr = mPoints.begin(); itr != mPoints.end(); ++itr)
                vertices.push_back(ci::vec3(*itr, 0));

            // next, add an adjacency vertex at the end
            size_t n = mPoints.size();
            vertices.push_back(2.0f * ci::vec3(mPoints[n - 1], 0) - ci::vec3(mPoints[n - 2], 0));

            // now that we have a list of vertices, create the index buffer
            n = vertices.size() - 2;
            std::vector<uint16_t> indices;
            indices.reserve(n * 4);

            for (size_t i = 1; i < vertices.size() - 2; ++i) {
                indices.push_back(i - 1);
                indices.push_back(i);
                indices.push_back(i + 1);
                indices.push_back(i + 2);
            }

            // finally, create the mesh
            ci::gl::VboMesh::Layout layout;
            layout.attrib(ci::geom::POSITION, 3);

            mVboMesh = ci::gl::VboMesh::create(vertices.size(), GL_LINES_ADJACENCY_EXT, { layout }, indices.size());
            mVboMesh->bufferAttrib(ci::geom::POSITION, vertices.size() * sizeof(ci::vec3), vertices.data());
            mVboMesh->bufferIndices(indices.size() * sizeof(uint16_t), indices.data());
        }
        else
            mVboMesh.reset();
    }
}
```

The error is being thrown at `mVboMesh = ci::gl::VboMesh::create()` when the VBO is created:

```
Exception thrown: read access violation.

std::_Tree_comp_alloc<std::_Tmap_traits<unsigned int,std::vector<int,std::allocator<int> >,std::less<unsigned int>,std::allocator<std::pair<unsigned int const ,std::vector<int,std::allocator<int> > > >,0> >::_Myhead(...) returned 0x20.

If there is a handler for this exception, the program may be safely continued.
```

`vertices` and `indices` are both valid containers with a size of 4 (at this stage). I’m running Windows 10 with VS2015.

Any thoughts what the issue might be?

Thanks!

Hi,

nothing really stands out; everything looks fine. You use unsigned shorts for the indices, which is correct, and everything else in the setup looks good, too. I haven’t run the code myself, or directly compared it to similar code I wrote in one of my projects, but again: I can’t find anything wrong with it.

Have you tried stepping into the `create` call?

-Paul

Hey Paul,

As always, thank you for the quick response. I had stepped through the `create` call and narrowed it down to something to do with the current context. I couldn’t quite figure out exactly where (it was a deep rabbit hole), but I’ve managed to solve the problem.

I was loading the points for the line into a vector in a callback running on a separate thread, triggered by an HTTP request. I can only assume that this caused some kind of issue when the LineRenderer’s `update()` method ran in the main update() loop. I created a bool flag, moved adding the points to the main thread, and that seems to have solved the issue.
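For reference, that hand-off pattern can be sketched roughly like this (all names are made up; this is not the actual LineRenderer from this thread, just the general bool-flag idea):

```cpp
#include <cassert>
#include <cstddef>
#include <mutex>
#include <utility>
#include <vector>

struct V2 { float x, y; };

// The HTTP callback only stores the points under a lock; the main-thread
// update() is the only place that would touch any GL state.
class PointHandoff {
public:
    // called from the HTTP thread
    void setPoints( std::vector<V2> points ) {
        std::lock_guard<std::mutex> lock( mMutex );
        mPending = std::move( points );
        mDirty = true;
    }

    // called from the main update() loop
    void update() {
        std::vector<V2> points;
        {
            std::lock_guard<std::mutex> lock( mMutex );
            if( !mDirty ) return;
            points = std::move( mPending );
            mDirty = false;
        }
        // ... safe to (re)create the VBO mesh from `points` here ...
        mPointCount = points.size();
    }

    size_t pointCount() const { return mPointCount; }

private:
    std::mutex mMutex;
    std::vector<V2> mPending;
    bool mDirty = false;
    size_t mPointCount = 0;
};
```

The key point is simply that all OpenGL objects are created on the thread that owns the GL context.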

Thanks again.

Hi Paul,

I’m using your geometry shader code for thick lines, thank you so much for that. It’s working great, but I’m running into a problem: I want to draw lots of separate lines without connecting them all, but with the VBO mesh they all get connected. Before using the geometry shader I was using a VertBatch and setting all the vertices manually, which worked exactly how I wanted it to:

```cpp
vertBatch = ci::gl::VertBatch::create( GL_LINES );

// .........setting all points...........
vertBatch->vertex( currX, currY );
vertBatch->vertex( lineToX, lineToY );

// then later on creating a batch
mBatch = ci::gl::Batch::create( *vertBatch, mShader );
```

It’s a little hard to see here, but here is the correct way, with the branches not connecting using a vertBatch:

Any suggestions on how to use the vertBatch method with the thick lines geometry shader, or how not to have all the lines connect with the vbo mesh?

Thanks!

PS Here is how it looks when all the branches connect using the geometry shader and a vbo mesh like you have in your example:

Just a little more info: based on Paul Houx’s thick line geometry shader example, I’m currently creating the VBO mesh with the following function. As mentioned before, this connects every line’s endpoint to the start point of the next line, which is not desired. Any ideas on how to keep each line separate?

```cpp
ci::gl::VboMeshRef Turtle::createMesh( const std::vector<ci::vec2> &mPoints )
{
    // create a new vector that can contain 3D vertices
    std::vector<ci::vec3> vertices;

    // to improve performance, make room for the vertices + 2 adjacency vertices
    vertices.reserve( mPoints.size() + 2 );

    // first, add an adjacency vertex at the beginning
    vertices.push_back( 2.0f * ci::vec3( mPoints[0], 0 ) - ci::vec3( mPoints[1], 0 ) );

    // next, add all 2D points as 3D vertices
    for( std::vector<ci::vec2>::const_iterator itr = mPoints.begin(); itr != mPoints.end(); ++itr )
        vertices.push_back( ci::vec3( *itr, 0 ) );

    // next, add an adjacency vertex at the end
    size_t n = mPoints.size();
    vertices.push_back( 2.0f * ci::vec3( mPoints[n - 1], 0 ) - ci::vec3( mPoints[n - 2], 0 ) );

    // now that we have a list of vertices, create the index buffer
    n = vertices.size() - 2;
    std::vector<uint16_t> indices;
    indices.reserve( n * 4 );

    for( size_t i = 1; i < vertices.size() - 2; ++i ) {
        indices.push_back( i - 1 );
        indices.push_back( i );
        indices.push_back( i + 1 );
        indices.push_back( i + 2 );
    }

    // finally, create the mesh
    ci::gl::VboMesh::Layout layout;
    layout.attrib( ci::geom::POSITION, 3 );

    auto vboMesh = ci::gl::VboMesh::create( vertices.size(), GL_LINES_ADJACENCY_EXT, { layout }, indices.size() );
    vboMesh->bufferAttrib( ci::geom::POSITION, vertices.size() * sizeof( ci::vec3 ), vertices.data() );
    vboMesh->bufferIndices( indices.size() * sizeof( uint16_t ), indices.data() );
    return vboMesh;
}
```

Your index buffer is specifically reaching backwards and forwards to connect the vertices (though this may be due to the adjacency topology). Have a fiddle with how the indices are being added.

I tried doing the indices differently each time a new line was supposed to start, but still got the same results. I do think it has something to do with the adjacency topology the geometry shader needs to create the thick lines, but I don’t know how to circumvent it.

```cpp
for( size_t i = 1; i < vertices.size() - 2; ++i ) {
    if( newLine[i] == true ) {
        indices.push_back( i );
        indices.push_back( i );
        indices.push_back( i + 1 );
        indices.push_back( i + 1 );
    } else {
        indices.push_back( i - 1 );
        indices.push_back( i );
        indices.push_back( i + 1 );
        indices.push_back( i + 2 );
    }
}
```

Here’s the geometry shader from Paul’s example, if it is any help. It does a great job of making thick lines. I did try making a separate mesh for each set of connected lines, but that ended up being thousands of VBO meshes and things slowed to a crawl.

```glsl
#version 150

uniform float THICKNESS;   // the thickness of the line in pixels
uniform float MITER_LIMIT; // 1.0: always miter, -1.0: never miter, 0.75: default
uniform vec2  resolution;  // the size of the viewport in pixels

layout( lines_adjacency ) in;
layout( triangle_strip, max_vertices = 7 ) out;

in VertexData {
    vec3 mColor;
} VertexIn[4];

out VertexData {
    vec2 mTexCoord;
    vec3 mColor;
} VertexOut;

vec2 toScreenSpace( vec4 vertex )
{
    return vec2( vertex.xy / vertex.w ) * resolution;
}

void main( void )
{
    // get the four vertices passed to the shader:
    vec2 p0 = toScreenSpace( gl_in[0].gl_Position ); // start of previous segment
    vec2 p1 = toScreenSpace( gl_in[1].gl_Position ); // end of previous segment, start of current segment
    vec2 p2 = toScreenSpace( gl_in[2].gl_Position ); // end of current segment, start of next segment
    vec2 p3 = toScreenSpace( gl_in[3].gl_Position ); // end of next segment

    // perform naive culling
    vec2 area = resolution * 1.2;
    if( p1.x < -area.x || p1.x > area.x ) return;
    if( p1.y < -area.y || p1.y > area.y ) return;
    if( p2.x < -area.x || p2.x > area.x ) return;
    if( p2.y < -area.y || p2.y > area.y ) return;

    // determine the direction of each of the 3 segments (previous, current, next)
    vec2 v0 = normalize( p1 - p0 );
    vec2 v1 = normalize( p2 - p1 );
    vec2 v2 = normalize( p3 - p2 );

    // determine the normal of each of the 3 segments (previous, current, next)
    vec2 n0 = vec2( -v0.y, v0.x );
    vec2 n1 = vec2( -v1.y, v1.x );
    vec2 n2 = vec2( -v2.y, v2.x );

    // determine miter lines by averaging the normals of the 2 segments
    vec2 miter_a = normalize( n0 + n1 ); // miter at start of current segment
    vec2 miter_b = normalize( n1 + n2 ); // miter at end of current segment

    // determine the length of the miter by projecting it onto the normal and then inverting it
    float length_a = THICKNESS / dot( miter_a, n1 );
    float length_b = THICKNESS / dot( miter_b, n1 );

    // prevent excessively long miters at sharp corners
    if( dot( v0, v1 ) < -MITER_LIMIT ) {
        miter_a = n1;
        length_a = THICKNESS;

        // close the gap
        if( dot( v0, n1 ) > 0 ) {
            VertexOut.mTexCoord = vec2( 0, 0 );
            VertexOut.mColor = VertexIn[1].mColor;
            gl_Position = vec4( ( p1 + THICKNESS * n0 ) / resolution, 0.0, 1.0 );
            EmitVertex();

            VertexOut.mTexCoord = vec2( 0, 0 );
            VertexOut.mColor = VertexIn[1].mColor;
            gl_Position = vec4( ( p1 + THICKNESS * n1 ) / resolution, 0.0, 1.0 );
            EmitVertex();

            VertexOut.mTexCoord = vec2( 0, 0.5 );
            VertexOut.mColor = VertexIn[1].mColor;
            gl_Position = vec4( p1 / resolution, 0.0, 1.0 );
            EmitVertex();

            EndPrimitive();
        }
        else {
            VertexOut.mTexCoord = vec2( 0, 1 );
            VertexOut.mColor = VertexIn[1].mColor;
            gl_Position = vec4( ( p1 - THICKNESS * n1 ) / resolution, 0.0, 1.0 );
            EmitVertex();

            VertexOut.mTexCoord = vec2( 0, 1 );
            VertexOut.mColor = VertexIn[1].mColor;
            gl_Position = vec4( ( p1 - THICKNESS * n0 ) / resolution, 0.0, 1.0 );
            EmitVertex();

            VertexOut.mTexCoord = vec2( 0, 0.5 );
            VertexOut.mColor = VertexIn[1].mColor;
            gl_Position = vec4( p1 / resolution, 0.0, 1.0 );
            EmitVertex();

            EndPrimitive();
        }
    }

    if( dot( v1, v2 ) < -MITER_LIMIT ) {
        miter_b = n1;
        length_b = THICKNESS;
    }

    // generate the triangle strip
    VertexOut.mTexCoord = vec2( 0, 0 );
    VertexOut.mColor = VertexIn[1].mColor;
    gl_Position = vec4( ( p1 + length_a * miter_a ) / resolution, 0.0, 1.0 );
    EmitVertex();

    VertexOut.mTexCoord = vec2( 0, 1 );
    VertexOut.mColor = VertexIn[1].mColor;
    gl_Position = vec4( ( p1 - length_a * miter_a ) / resolution, 0.0, 1.0 );
    EmitVertex();

    VertexOut.mTexCoord = vec2( 0, 0 );
    VertexOut.mColor = VertexIn[2].mColor;
    gl_Position = vec4( ( p2 + length_b * miter_b ) / resolution, 0.0, 1.0 );
    EmitVertex();

    VertexOut.mTexCoord = vec2( 0, 1 );
    VertexOut.mColor = VertexIn[2].mColor;
    gl_Position = vec4( ( p2 - length_b * miter_b ) / resolution, 0.0, 1.0 );
    EmitVertex();

    EndPrimitive();
}
```

Hi Sherwood,

the solution I posted back then was for a very specific use case, to be honest. I noticed that when drawing thick line segments with plain OpenGL, they were not properly connected. You can replicate it if you do something like:

```cpp
gl::lineWidth( 10.0f );
gl::begin( GL_LINE_STRIP );
gl::vertex( 50, 50 );
gl::vertex( 100, 75 );
gl::vertex( 150, 125 );
gl::vertex( 175, 175 );
gl::end();
```

My solution uses the primitive mode `GL_LINES_ADJACENCY`, making the previous and next coordinate of the line strip available to the vertex and geometry shaders. This helps in creating just the right set of vertices to build a nice, thick line.

In your `vertBatch` example, however, you are using `GL_LINES`, so I guess you just want single, straight, unconnected line segments. In that case, you need to rewrite quite a few things.

First, make sure your vertex buffer has the correct data: just a pair of vertices for each line. Do not provide adjacency.

Next, the geometry shader should just take those 2 vertices and construct 4 new ones, based on the thickness of the line. Either pass in the thickness as a uniform, or set a thickness per line and pass it from the vertex to the geometry shader as an attribute. The output of the geometry shader should be a line strip with 4 vertices. You can see in the old post how I calculate a normal and then construct the line, I am sure you can derive the correct implementation from it.

Simply ignore the tangent `t` and miter `m` and just use the normal `n`. The 4 vertices then become (in this order):
`V1 = P0 - n`
`V2 = P0 + n`
`V3 = P1 - n`
`V4 = P1 + n`

Let me know if you need more help.

~Paul

That worked great, Paul! The miters would still be nice to have, but I don’t think they’re needed for my use case. Here is the updated geometry shader code for anyone who might need it in the future:

```glsl
#version 150

uniform float THICKNESS;  // the thickness of the line in pixels
uniform vec2  resolution; // the size of the viewport in pixels

layout( lines ) in;
layout( triangle_strip, max_vertices = 4 ) out;

in VertexData {
    vec3 mColor;
} VertexIn[2];

out VertexData {
    vec2 mTexCoord;
    vec3 mColor;
} VertexOut;

vec2 toScreenSpace( vec4 vertex )
{
    return vec2( vertex.xy / vertex.w ) * resolution;
}

void main( void )
{
    // get the two vertices passed to the shader:
    vec2 p0 = toScreenSpace( gl_in[0].gl_Position );
    vec2 p1 = toScreenSpace( gl_in[1].gl_Position );

    // perform naive culling
    vec2 area = resolution * 1.2;
    if( p1.x < -area.x || p1.x > area.x ) return;
    if( p1.y < -area.y || p1.y > area.y ) return;

    // determine the direction of the segment
    vec2 v = normalize( p1 - p0 );

    // determine the normal of the segment
    vec2 n = vec2( -v.y, v.x );

    // generate the triangle strip
    VertexOut.mTexCoord = vec2( 0, 0 );
    VertexOut.mColor = VertexIn[0].mColor;
    gl_Position = vec4( ( p0 - n * THICKNESS ) / resolution, 0.0, 1.0 );
    EmitVertex();

    VertexOut.mTexCoord = vec2( 0, 1 );
    VertexOut.mColor = VertexIn[0].mColor;
    gl_Position = vec4( ( p0 + n * THICKNESS ) / resolution, 0.0, 1.0 );
    EmitVertex();

    VertexOut.mTexCoord = vec2( 0, 0 );
    VertexOut.mColor = VertexIn[1].mColor;
    gl_Position = vec4( ( p1 - n * THICKNESS ) / resolution, 0.0, 1.0 );
    EmitVertex();

    VertexOut.mTexCoord = vec2( 0, 1 );
    VertexOut.mColor = VertexIn[1].mColor;
    gl_Position = vec4( ( p1 + n * THICKNESS ) / resolution, 0.0, 1.0 );
    EmitVertex();

    EndPrimitive();
}
```

Nice to hear it worked, Sherwood!

One more thing: you really should reduce `max_vertices` to 4, since that is the exact number of vertices you are generating. You should see a nice performance increase, because geometry shaders aren’t cheap when they are declared to generate far more vertices than they actually emit.

~Paul

Ok, thanks! I wasn’t sure if that was just a space allocation or if it was actually generating that many vertices, since it only ever emits 4. I’ve edited the code above to reflect this change.