There was a reply to one of my posts on the old forum and I’ve decided to reply on the new forums instead. It’s about rendering thick lines, a topic that sounds like “didn’t we solve this in the ’60s?”, but is surprisingly hard to pull off if you care about a) the looks and b) performance.
The old post described a way to draw long line strips without the little gaps between the individual segments. OpenGL by default doesn’t do a great job when rendering lines with a thickness > 1, so you have to come up with your own solution. I won’t repeat what I came up with, you can find a full discussion in the old post.
However, recently I was doing a project where I had to draw hundreds, maybe even thousands of quadratic curves per frame. The designers wanted thick (5 pixel) curves, anti-aliased of course, with a nice color gradient from start to end point and an adjustable curvature.
The solution I arrived at is based on this ShaderToy sample:
It calculates a distance to the curve for each pixel and renders a perfectly anti-aliased quadratic curve. It suffers from a few artifacts (e.g. it can’t handle straight lines) and can be optimized a bit more, but it was a good start. My first implementation used simple quads the size of the curve’s bounding box, but that was horribly slow. The calculations don’t come cheap, so you want to minimize the number of affected pixels as much as possible.
The next thing I did was calculate a simple mesh on the CPU for each curve, just so I could experiment with different options; I would later translate this into a GPU-based solution. I found that it’s sufficient to subdivide the curve into roughly 40 segments (even for strongly bent curves) by calculating the position for t = 0.0, t = 0.025, t = 0.050, … up to t = 1.0, then taking the normal of the tangent at each point and extending it outward. The wire frame then looks like this:
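The CPU-side tessellation can be sketched roughly like this. It’s a minimal sketch, not the actual project code: `evalQuadratic`, `tessellate` and the half-width parameter are illustrative names I’ve chosen, and the triangle-strip layout is one possible choice.

```cpp
#include <cmath>
#include <vector>

struct Vec2 { float x, y; };

// Evaluate a quadratic Bezier curve: B(t) = (1-t)^2*p0 + 2(1-t)t*p1 + t^2*p2.
static Vec2 evalQuadratic( Vec2 p0, Vec2 p1, Vec2 p2, float t )
{
    float u = 1.0f - t;
    return { u * u * p0.x + 2.0f * u * t * p1.x + t * t * p2.x,
             u * u * p0.y + 2.0f * u * t * p1.y + t * t * p2.y };
}

// Build a triangle-strip ribbon around the curve: for each sample point,
// emit two vertices offset along the normal of the (approximate) tangent.
static std::vector<Vec2> tessellate( Vec2 p0, Vec2 p1, Vec2 p2,
                                     int segments, float halfWidth )
{
    std::vector<Vec2> strip;
    float dt = 1.0f / segments;
    for( int i = 0; i <= segments; ++i ) {
        float t = i * dt;
        Vec2 a = evalQuadratic( p0, p1, p2, t );
        // Poor man's tangent: direction to the next sample
        // (or from the previous sample, for the last point).
        Vec2 b = ( i < segments ) ? evalQuadratic( p0, p1, p2, t + dt )
                                  : evalQuadratic( p0, p1, p2, t - dt );
        Vec2 d = ( i < segments ) ? Vec2{ b.x - a.x, b.y - a.y }
                                  : Vec2{ a.x - b.x, a.y - b.y };
        float len = std::sqrt( d.x * d.x + d.y * d.y );
        Vec2 n = { -d.y / len, d.x / len }; // unit normal
        strip.push_back( { a.x + halfWidth * n.x, a.y + halfWidth * n.y } );
        strip.push_back( { a.x - halfWidth * n.x, a.y - halfWidth * n.y } );
    }
    return strip;
}
```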
As you can see, this reduces the number of affected pixels drastically. It was already much faster to draw, but only if the total number of curves was low. Drawing lots of curves made the application CPU-bound and the frame rate dropped. So: GPU to the rescue.
I constructed a simple VboMesh containing 80 vertices (39 segments in total). Their positions are completely irrelevant, for we will calculate them in the vertex shader. Instead, I stored the following information in each vertex:
x = t0; // the "t" for this position
y = t1; // the "t" for the next position on the curve (poor man's tangent)
z = side; // either a 1 or a -1
w = offset; // used to extend the mesh to accommodate rounded end points
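Filling that template buffer could look roughly like this. This is a sketch with illustrative names, using the simplest possible encoding: `t1` is just the neighboring sample’s t, so it doesn’t reproduce the negative-t1 trick the shader below uses for the end points.

```cpp
#include <vector>

struct Vert { float t0, t1, side, offset; };

// Build the 80-vertex template strip: 40 sample positions along t, two
// vertices per position (side = +1 and side = -1). Actual screen positions
// are computed later in the vertex shader from the per-curve instance data.
static std::vector<Vert> buildTemplate( int positions = 40 )
{
    std::vector<Vert> verts;
    float dt = 1.0f / ( positions - 1 );
    for( int i = 0; i < positions; ++i ) {
        float t0 = i * dt;
        // The last position has no "next" t, so point backwards instead;
        // a shader can detect t1 < t0 and mirror the direction.
        float t1 = ( i + 1 < positions ) ? t0 + dt : t0 - dt;
        float offset = 0.0f; // non-zero at the ends would extend the caps
        verts.push_back( { t0, t1, +1.0f, offset } );
        verts.push_back( { t0, t1, -1.0f, offset } );
    }
    return verts;
}
```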
To render all the curves, I use instanced rendering. The instance buffer contains the data for each curve:
ci::vec2 p0; // start point in window coordinates
ci::vec2 p1; // control point in window coordinates
ci::vec2 p2; // end point in window coordinates
float t_min; // range: we can animate the curve by
float t_max; // only drawing a portion of it
ci::vec4 col0; // start color
ci::vec4 col1; // end color
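In plain C++ (without Cinder’s `ci::vec2`/`ci::vec4` types), the per-instance record could be sketched like this — a hypothetical layout, assuming tightly packed floats that match the vertex attribute setup:

```cpp
struct Vec2 { float x, y; };
struct Vec4 { float r, g, b, a; };

// One record per curve in the instance buffer. The layout must match the
// instanced vertex attributes declared on the GPU side.
struct CurveInstance {
    Vec2  p0;   // start point in window coordinates
    Vec2  p1;   // control point in window coordinates
    Vec2  p2;   // end point in window coordinates
    float tMin; // range: we can animate the curve by
    float tMax; // only drawing a portion of it
    Vec4  col0; // start color
    Vec4  col1; // end color
};

static_assert( sizeof( CurveInstance ) == 16 * sizeof( float ),
               "instance layout must stay tightly packed" );
```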
The vertex shader then simply calculates the position of each vertex based on this data.
// Evaluate quadratic curve.
vec2 a = evalQuadratic( p0, p1, p2, t0 );
vec2 b = evalQuadratic( p0, p1, p2, ( t1 < 0.0 ) ? 1.0 - t1 : t1 );
if( t1 < 0.0 )
b = a + ( a - b );
// Calculate normal.
vec2 v = 2.0 * uThickness * normalize( b - a ) * sign( t1 );
vec2 n = vec2( -v.y, v.x );
vertPosition = side * n + offset * v + a;
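To sanity-check the shader math, here is a direct CPU transcription for a single interior vertex. It assumes t1 ≥ 0 (the negative-t1 mirroring branch is omitted), and `evalQuadratic` is my own standard Bézier evaluation, since its body isn’t shown in the post.

```cpp
#include <cmath>

struct Vec2 { float x, y; };

// B(t) = (1-t)^2*p0 + 2(1-t)t*p1 + t^2*p2
static Vec2 evalQuadratic( Vec2 p0, Vec2 p1, Vec2 p2, float t )
{
    float u = 1.0f - t;
    return { u * u * p0.x + 2.0f * u * t * p1.x + t * t * p2.x,
             u * u * p0.y + 2.0f * u * t * p1.y + t * t * p2.y };
}

// CPU version of the vertex-shader math for one vertex (t1 >= 0 case).
static Vec2 vertexPosition( Vec2 p0, Vec2 p1, Vec2 p2,
                            float t0, float t1, float side, float offset,
                            float thickness )
{
    Vec2 a = evalQuadratic( p0, p1, p2, t0 );
    Vec2 b = evalQuadratic( p0, p1, p2, t1 );
    Vec2 d = { b.x - a.x, b.y - a.y };
    float len = std::sqrt( d.x * d.x + d.y * d.y );
    // v = 2.0 * uThickness * normalize( b - a ); n is its 90-degree rotation.
    Vec2 v = { 2.0f * thickness * d.x / len, 2.0f * thickness * d.y / len };
    Vec2 n = { -v.y, v.x };
    return { side * n.x + offset * v.x + a.x,
             side * n.y + offset * v.y + a.y };
}
```

For a straight horizontal curve the vertex ends up directly above (or below) the curve point, offset by twice the thickness, which matches the shader’s intent.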
The fragment shader is very similar to the ShaderToy sample, just a bit more optimized, cleaned up and with fewer artifacts (as in: none).
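The core of the anti-aliasing is mapping each pixel’s distance to the curve’s center line to an alpha value. A minimal sketch of that idea follows — this is not the actual ShaderToy code, and the one-pixel fade band is my own choice:

```cpp
#include <algorithm>

// Convert a pixel's distance to the curve center line into coverage.
// Pixels within halfWidth are opaque; a one-pixel band fades to zero,
// which produces the smooth, resolution-independent edge.
static float coverage( float dist, float halfWidth )
{
    // Equivalent to GLSL: 1.0 - smoothstep( halfWidth - 0.5, halfWidth + 0.5, dist )
    float t = std::clamp( dist - ( halfWidth - 0.5f ), 0.0f, 1.0f );
    float s = t * t * ( 3.0f - 2.0f * t );
    return 1.0f - s;
}
```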
The end result looks like this:
We can endlessly zoom in on the curves and they remain perfect. Thickness can be adjusted as well, even to values like 30 pixels or more. Rendering 2048 long curves on a 10800x3840 screen takes less than 0.5ms in total, including uploading the instance data from the CPU.
Hopefully this answers your question, Caleb.