Drawing thick lines (continued from old forum)


#1

Hi

I’m not actually using Cinder, but OpenGL in my own framework, and I recently came across Paul Houx’s great algorithm for drawing thick lines.

https://forum.libcinder.org/topic/smooth-thick-lines-using-geometry-shader

As mentioned in the last post, I have run into a bit of a problem when trying to draw the lines under non-square transforms (not sure how to better explain it). Frankly, my math is pretty limited (I’m more of a framework/library developer), and I was wondering if anyone has any tips on how I might adapt the algorithm to solve this?

thanks


#2

Hi,

thanks for posting here, instead of on the old, archived forums. I’ve read your other post, but don’t completely understand your problem.

You want to draw a line graph inside a 320x160 px rectangle. The data on the x-axis is in the range [1…1000] and on the y-axis it’s in the range [-1…+1].

How do you calculate your vertex positions? Are you, for example, using (1,-1) for the first pair and (1000, 1) for the last pair? And do you then apply a scaling, like gl::scale( 320.0f / 1000.0f, 160.0f / 2.0f )?

Or do you apply the scaling while calculating the coordinates, so that the first pair becomes (0, -80) and the last pair becomes (320, 80)?
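In code, the difference would look roughly like this (just a sketch; mData is a hypothetical std::vector<ci::vec2> holding the raw pairs):

    #include "cinder/gl/gl.h"
    #include <vector>

    void drawBothWays( const std::vector<ci::vec2> &mData )
    {
    	// method 1: keep the raw data coordinates, apply the scale as a transform
    	{
    		ci::gl::ScopedModelMatrix scope;
    		ci::gl::scale( 320.0f / 1000.0f, 160.0f / 2.0f );
    		// draw the line using (1, -1) ... (1000, 1) directly
    	}

    	// method 2: bake the scale into the vertex positions up front
    	std::vector<ci::vec2> scaled;
    	for( const auto &p : mData )
    		scaled.emplace_back( p.x * 320.0f / 1000.0f, p.y * 160.0f / 2.0f );
    	// draw the pre-scaled positions with an identity model matrix
    }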

To be honest, both methods should result in the same graph. The vertices are sent to the vertex shader, which then applies the model-view-projection matrix to calculate the window position (a.k.a. clip space) of the vertex. The geometry shader then extrudes the line segments by creating a triangle strip and sends new vertex positions and primitives (triangles instead of lines) to the fragment shader. The OpenGL pipeline then rasterizes the triangles, converting them into a huge number of pixels. Finally, the fragment shader determines the color of each pixel based on interpolated data from the geometry shader.
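To make the order of operations concrete, here is a rough CPU-side sketch of what the vertex stage does, using glm (which Cinder bundles); the function name is mine:

    #include <glm/glm.hpp>
    #include <glm/gtc/matrix_transform.hpp>

    // rough sketch of the vertex-stage transform; names are mine
    glm::vec4 toClipSpace( const glm::vec2 &dataPoint )
    {
    	// the graph scaling lives in the model matrix...
    	glm::mat4 model = glm::scale( glm::mat4( 1.0f ),
    	                              glm::vec3( 320.0f / 1000.0f, 160.0f / 2.0f, 1.0f ) );
    	// ...and an orthographic projection maps pixels to clip space
    	glm::mat4 proj = glm::ortho( 0.0f, 320.0f, 160.0f, 0.0f );

    	// the geometry shader only ever sees the result of this transform,
    	// so by the time it extrudes the line, the scaling is already applied
    	return proj * model * glm::vec4( dataPoint, 0.0f, 1.0f );
    }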

If this does not answer your question, could you maybe post a screenshot?

-Paul


#3

Hi Paul

It’s rather the former: the coordinates are set according to the units of the graph, i.e. the first one being x = 0 and the last x = 1000, and then the scaling is applied before drawing to the canvas.

I guess the problem is that I am calculating the lines on the CPU; I’m not using a shader. If it were done ‘later’, in the shader, the width would be computed in pixels rather than in the coordinate-space units I’m using, and it would work at all angles.
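To give a concrete example with the numbers from above: with gl::scale( 0.32f, 80.0f ), a horizontal segment is extruded along its normal (0, 1), which gets multiplied by 80, while a vertical segment is extruded along (1, 0), which gets multiplied by only 0.32. The same width in graph units therefore comes out 250 times thicker for horizontal segments than for vertical ones, with joins at other angles skewed somewhere in between.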

While I definitely want to move parts of my code to a shader eventually (for other reasons anyway), at the moment it’s not a realistic option.

So if anyone can suggest a way to adapt the algorithm to address this, it would be highly appreciated.

I’ve pasted the relevant section of my C++ code below. It’s specific to my framework, but it’s the math that is the problem, so I think you will be able to read it. (Basically it’s just a CPU port of the code you posted on the old forum.)

void GLX::Detail::AddPathSegment(Float width, const Point & p0, const Point & p1, const Point & p2, const Point & p3, Array <Point> & points)
{
	if (p1 == p2) return;

	// direction of the current segment and its perpendicular
	Point line = Normalise(p2 - p1);
	Point normal = Normalise(Point(-line.y, line.x));

	// average the directions of the adjacent segments to get the join
	// tangents; if there is no previous/next segment, use the line itself
	Point tangent1 = (p0 == p1) ? line : Normalise(Normalise(p1 - p0) + line);
	Point tangent2 = (p2 == p3) ? line : Normalise(Normalise(p3 - p2) + line);

	// miter vectors are perpendicular to the tangents, lengthened so the
	// line keeps a constant width across the join
	Point miter1(-tangent1.y, tangent1.x);
	Point miter2(-tangent2.y, tangent2.x);

	miter1 *= width / DotProduct(normal, miter1);
	miter2 *= width / DotProduct(normal, miter2);

	// the four corners of the segment's quad
	Point a = p1 - miter1;
	Point b = p1 + miter1;
	Point c = p2 + miter2;
	Point d = p2 - miter2;

	// emit the quad as two triangles
	points.Push(a);
	points.Push(b);
	points.Push(c);

	points.Push(c);
	points.Push(d);
	points.Push(a);
}
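For completeness, the calling side walks the path and feeds each segment together with its neighbours, roughly like this (a simplified sketch; the Length() accessor and indexing are just my shorthand here):

void BuildThickPath(Float width, const Array<Point> & path, Array<Point> & points)
{
	const int count = path.Length();
	if (count < 2) return;

	for (int i = 0; i < count - 1; ++i)
	{
		// clamp the neighbour indices at the ends; AddPathSegment treats
		// p0 == p1 (or p2 == p3) as "no previous/next segment"
		const Point & p0 = path[(i > 0) ? i - 1 : i];
		const Point & p1 = path[i];
		const Point & p2 = path[i + 1];
		const Point & p3 = path[(i + 2 < count) ? i + 2 : i + 1];

		GLX::Detail::AddPathSegment(width, p0, p1, p2, p3, points);
	}
}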

#4

Well, obviously that would not work, because you are generating the triangles in one coordinate space and then stretching them by applying the scaling afterwards. For proper triangles, your input points p0, p1, p2 and p3 must be pre-scaled to your 320x160 range.

The reason I thought both methods should work is that in my solution the scaling happens in the vertex shader, prior to generating the triangles in the geometry shader. But you are not using my solution at all and instead rolled your own CPU version. Which is fine, but it’s not the same.
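Concretely, something like the following should give you a uniform width (a sketch against your own function; ScalePoint and the hard-coded factors are just for illustration):

    // map the control points into pixel space *before* building the
    // triangles, so the extruded width is uniform in pixels
    static Point ScalePoint(const Point & p)
    {
    	return Point(p.x * (320.0f / 1000.0f), p.y * (160.0f / 2.0f));
    }

    // ... then build the segment from pre-scaled points, and draw the
    // result without applying gl::scale afterwards
    GLX::Detail::AddPathSegment(width,
    	ScalePoint(p0), ScalePoint(p1),
    	ScalePoint(p2), ScalePoint(p3), points);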


#5

hey @paul.houx, I thought I’d post here as it’s relevant to this thread (sort of). I’m trying to run an example based on your geometry shader, but I’m running into an error I can’t diagnose.

    void LineRenderer::update()
    {
    	// brute-force method: recreate mesh if anything changed
    	if (!mVboMesh) {
    		if (mPoints.size() > 1) {

    			// create a new vector that can contain 3D vertices
    			std::vector<ci::vec3> vertices;

    			// to improve performance, make room for the vertices + 2 adjacency vertices
    			vertices.reserve(mPoints.size() + 2);

    			// first, add an adjacency vertex at the beginning
    			vertices.push_back(2.0f * ci::vec3(mPoints[0], 0) - ci::vec3(mPoints[1], 0));

    			// next, add all 2D points as 3D vertices
    			std::vector<ci::vec2>::iterator itr;
    			for (itr = mPoints.begin(); itr != mPoints.end(); ++itr)
    				vertices.push_back(ci::vec3(*itr, 0));

    			// next, add an adjacency vertex at the end
    			size_t n = mPoints.size();
    			vertices.push_back(2.0f * ci::vec3(mPoints[n - 1], 0) - ci::vec3(mPoints[n - 2], 0));

    			// now that we have a list of vertices, create the index buffer
    			n = vertices.size() - 2;
    			std::vector<uint16_t> indices;
    			indices.reserve(n * 4);

    			for (size_t i = 1; i < vertices.size() - 2; ++i) {
    				indices.push_back(i - 1);
    				indices.push_back(i);
    				indices.push_back(i + 1);
    				indices.push_back(i + 2);
    			}

    			// finally, create the mesh
    			ci::gl::VboMesh::Layout layout;
    			layout.attrib(ci::geom::POSITION, 3);

    			mVboMesh = ci::gl::VboMesh::create(vertices.size(), GL_LINES_ADJACENCY_EXT, { layout }, indices.size());
    			mVboMesh->bufferAttrib(ci::geom::POSITION, vertices.size() * sizeof(ci::vec3), vertices.data());
    			mVboMesh->bufferIndices(indices.size() * sizeof(uint16_t), indices.data());
    		}
    		else
    			mVboMesh.reset();
    	}
    } 

The error’s being thrown at mVboMesh = ci::gl::VboMesh::create(), when the VBO is created:

Exception thrown: read access violation.

std::_Tree_comp_alloc<std::_Tmap_traits<unsigned int,std::vector<int,std::allocator<int> >,std::less<unsigned int>,std::allocator<std::pair<unsigned int const ,std::vector<int,std::allocator<int> > > >,0> >::_Myhead(...) returned 0x20.

If there is a handler for this exception, the program may be safely continued.

vertices and indices are both valid containers with a size of 4 (at this stage). I’m running Windows 10 with VS2015.

Any thoughts what the issue might be?

Thanks!


#6

Hi,

nothing really stands out; everything looks fine. You use unsigned shorts for the indices, which is correct, and the rest of the setup looks good, too. I haven’t run the code myself or compared it directly against similar code from one of my projects, but I can’t find anything wrong with it.

Have you tried stepping into the create call?

-Paul


#7

Hey Paul,

As always, thank you for the quick response. I had stepped through the create call and narrowed it down to something to do with the current context. I couldn’t quite figure out exactly where (it was a deep rabbit hole), but I’ve managed to solve the problem.

I was loading the points for the line into a vector from a callback running on a separate thread as part of an HTTP request. I can only assume this caused some kind of issue when the LineRenderer’s update() method ran in the main update() loop. I created a bool flag and moved the actual adding of the points to the main thread, and that seems to have solved the issue.
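For anyone hitting the same thing, the fix looks roughly like this (simplified; I actually just used a bool, and the member names here are approximations of mine):

    #include <mutex>
    #include <vector>
    #include "cinder/Vector.h"

    // sketch of the fix; mPendingPoints, mPointsMutex and mDirty are
    // approximations of my actual members
    void LineRenderer::setPointsAsync( const std::vector<ci::vec2> &pts )
    {
    	// called from the http callback thread: only stash the data
    	std::lock_guard<std::mutex> lock( mPointsMutex );
    	mPendingPoints = pts;
    	mDirty = true;
    }

    void LineRenderer::update()
    {
    	// called on the main thread, where the GL context is current
    	{
    		std::lock_guard<std::mutex> lock( mPointsMutex );
    		if( mDirty ) {
    			mPoints = mPendingPoints;
    			mDirty = false;
    			mVboMesh.reset(); // force the mesh rebuild below
    		}
    	}
    	// ... the existing mesh creation code runs here, on the GL thread ...
    }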

Thanks again.