On one hand, we have an SSBO that contains an array of Particle structs with all sorts of fields.
In setup(), we manually set all particle positions to form a plane.
Every time update() is called, this SSBO is accessed by a compute shader that recalculates the particle positions.
Then, from draw(), we render those vertices with gl::drawArrays().
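For context, here is a condensed sketch of that setup (not our actual code; Particle's layout, the 64-wide work group, and names like GRID_W, mParticleSsbo and particles.comp are placeholders):

```cpp
#include <vector>
#include "cinder/app/App.h"
#include "cinder/gl/gl.h"
#include "cinder/gl/Ssbo.h"

struct Particle {
    ci::vec4 pos;   // vec4 rather than vec3, to match std430 alignment
    ci::vec4 vel;
};

// setup(): upload the initial plane of particles and load the compute shader.
std::vector<Particle> particles( GRID_W * GRID_H );
// ... place particles[i].pos on a regular grid ...
mParticleSsbo = ci::gl::Ssbo::create( particles.size() * sizeof( Particle ),
                                      particles.data(), GL_DYNAMIC_DRAW );
mUpdateProg = ci::gl::GlslProg::create(
    ci::gl::GlslProg::Format().compute( ci::app::loadAsset( "particles.comp" ) ) );

// update(): let the compute shader rewrite the positions in place.
ci::gl::ScopedGlslProg prog( mUpdateProg );
mParticleSsbo->bindBase( 0 );   // matches "binding = 0" in the shader
glDispatchCompute( ( GRID_W * GRID_H + 63 ) / 64, 1, 1 );
glMemoryBarrier( GL_SHADER_STORAGE_BARRIER_BIT );

// draw(): gl::drawArrays( GL_POINTS, 0, GRID_W * GRID_H ) over the same buffer.
```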
That works well; however, our “plane” is rendered as just a grid of vertices, so we have no texture, shading, etc.
On the other hand, we have the VboMeshApp sample app, where a VboMesh is created from a geom::Plane and then easily drawn with a texture.
My question is: what is the best/most natural way of combining these two approaches? How can I “wire” a GPU-based physics engine to a VboMesh that was created with a geom::Source object?
Just add an index buffer to your mesh, which will tell OpenGL how to connect the vertices, and set the primitive type to something appropriate, like GL_TRIANGLES. The indices stay the same regardless of the vertex positions, so you don’t need to update them dynamically. If your compute shader then changes the vertex positions, the mesh will deform with it.
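For a row-major grid of vertices, the index buffer can be generated once, with two triangles per cell. A minimal sketch (the function name and parameters are illustrative); you would then draw with gl::drawElements() instead of gl::drawArrays():

```cpp
#include <cstdint>
#include <vector>

// Build a GL_TRIANGLES index buffer for a row-major w x h grid of vertices.
std::vector<uint32_t> makeGridIndices( uint32_t w, uint32_t h )
{
    std::vector<uint32_t> indices;
    indices.reserve( ( w - 1 ) * ( h - 1 ) * 6 );
    for( uint32_t y = 0; y < h - 1; ++y ) {
        for( uint32_t x = 0; x < w - 1; ++x ) {
            uint32_t i = y * w + x;          // top-left vertex of this cell
            indices.push_back( i );          // first triangle
            indices.push_back( i + w );
            indices.push_back( i + 1 );
            indices.push_back( i + 1 );      // second triangle
            indices.push_back( i + w );
            indices.push_back( i + w + 1 );
        }
    }
    return indices;
}
```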
-Paul
Edit: as you can see in the source code, geom::Plane already creates separate vertex and index buffers, so you could create your VboMesh using a Plane as the source, with the number of subdivisions chosen to match your desired number of particles.
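One possible way to wire it up (a sketch, untested; GRID_SUBDIVS, mMesh, mUpdateProg and mTexture are made-up names, and findAttrib() is the Cinder 0.9.x accessor, which may differ in other versions): give the positions their own non-interleaved VBO, then bind that VBO as an SSBO so the compute shader writes straight into the mesh’s own buffer.

```cpp
// Sketch only: two layouts so positions get their own (dynamic) VBO,
// while texcoords/normals stay static.
auto plane = ci::geom::Plane().subdivisions( ci::ivec2( GRID_SUBDIVS ) );

ci::gl::VboMesh::Layout posLayout, staticLayout;
posLayout.usage( GL_DYNAMIC_DRAW ).attrib( ci::geom::POSITION, 3 );
staticLayout.usage( GL_STATIC_DRAW )
            .attrib( ci::geom::TEX_COORD_0, 2 )
            .attrib( ci::geom::NORMAL, 3 );
mMesh = ci::gl::VboMesh::create( plane, { posLayout, staticLayout } );

// Per frame: a buffer object's type isn't fixed, so the position VBO can
// be bound as a shader storage buffer and rewritten by the compute shader.
auto posVbo = mMesh->findAttrib( ci::geom::POSITION )->second; // Cinder 0.9.x
{
    ci::gl::ScopedGlslProg prog( mUpdateProg );
    glBindBufferBase( GL_SHADER_STORAGE_BUFFER, 0, posVbo->getId() );
    const GLuint groups = ( mMesh->getNumVertices() + 63 ) / 64;
    glDispatchCompute( groups, 1, 1 );
    glMemoryBarrier( GL_SHADER_STORAGE_BARRIER_BIT |
                     GL_VERTEX_ATTRIB_ARRAY_BARRIER_BIT );
}

// draw() then stays simple: indices and texcoords come from the mesh.
ci::gl::ScopedTextureBind texBind( mTexture );
ci::gl::draw( mMesh );
```

One caveat: with tightly packed vec3 positions, declare the buffer in the compute shader as a float array rather than vec3[], since std430 pads vec3 arrays to 16 bytes.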
Note: the Plane also creates normals, but when you are deforming the mesh, the normals will no longer be valid or correct. Calculating them on the fly requires access to adjacent vertices, which could be done in your compute shader.
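For example, the normals could be recomputed at the end of the update pass using central differences over the grid neighbors. A GLSL sketch, assuming tightly packed float storage and made-up uniform and binding names:

```glsl
#version 430
// Sketch only: rebuild per-vertex normals from adjacent grid vertices.
layout( local_size_x = 16, local_size_y = 16 ) in;

layout( std430, binding = 0 ) buffer Positions { float pos[]; }; // xyz per vertex
layout( std430, binding = 1 ) buffer Normals   { float nrm[]; };

uniform uint uGridW;
uniform uint uGridH;

vec3 fetchPos( uint x, uint y ) {
    uint i = 3u * ( y * uGridW + x );
    return vec3( pos[i], pos[i + 1u], pos[i + 2u] );
}

void main() {
    uint x = gl_GlobalInvocationID.x;
    uint y = gl_GlobalInvocationID.y;
    if( x >= uGridW || y >= uGridH )
        return;

    // Central differences, clamped at the borders of the grid.
    vec3 dx = fetchPos( min( x + 1u, uGridW - 1u ), y ) - fetchPos( x > 0u ? x - 1u : 0u, y );
    vec3 dy = fetchPos( x, min( y + 1u, uGridH - 1u ) ) - fetchPos( x, y > 0u ? y - 1u : 0u );

    vec3 n = normalize( cross( dx, dy ) ); // flip the cross order if your winding differs
    uint i = 3u * ( y * uGridW + x );
    nrm[i] = n.x; nrm[i + 1u] = n.y; nrm[i + 2u] = n.z;
}
```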