Changing the color of individual vertices in a vertex shader

Hello,

I was tinkering around with the ParticleSphereGPU sample and tweaked it to display two different images depending on application events. The problem I am facing now is that even though the particles move effortlessly into their new positions, I am not able to change their individual colors. I tried changing the value of the color variable in the external shader file based on a uniform variable, but that doesn’t help; the color just stays at the first value (whichever it happens to be) even after the condition changes.

I looked around a bit, but most of the solutions talk about using a uniform variable to set the color, and that would set a single color value for all the vertices, which is not what I want. Each particle’s colors for both images are saved and passed to it at the time of particle creation. I would really appreciate any pointers to help solve this issue.

The external .vs file:

#version 150 core

uniform float uActive;

in vec3   iPosition;
in vec3   iPPosition;
in vec3   iHome;
in vec3   iBase;
in float  iDamping;
in vec4   iColor;
in vec4   iOColor;

out vec3  position;
out vec3  pposition;
out vec3  home;
out vec3  base;
out float damping;
out vec4  color;
out vec4  oColor;

const float dt2 = 1.0 / (60.0 * 60.0);
const float force = 100.0;

void main()
{
	position =  iPosition;
	pposition = iPPosition;
	damping =   iDamping;
	home =      iHome;
	base =      iBase;
	color =     (uActive > 0.0) ? iColor : iOColor;
	oColor =	iOColor;
 	
	if(uActive > 0.0)
	{    
		vec3 dir = base - position;
		float d2 = length( dir );
		d2 *= d2;
		
		float currForce = d2 < 20 ? 0 : (d2 < 40 ? force/8.0 : (d2 < 75 ? force/4.0 : (d2 < 120 ? force/2.0 : force )));
		position += currForce * dir / d2;

		vec3 vel = (position - pposition) * damping;
		pposition = position;
		vec3 acc = (base - position) * 32.0f;
		position += vel + acc * dt2;
	}
	else
	{
		vec3 vel = (position - pposition) * (damping + 0.05f);
		pposition = position;
		vec3 acc = (home - position) * 32.0f;
		position += vel + acc * dt2;
 	}
}

Thanks

hello!

I hope I’m understanding the question correctly, but one thing that might work is to just pass the attribute you want to show through onto the rendering shader for your particles.

Any of your outgoing variables in external.vs (color, oColor, etc.) can be used in your rendering shader.

Is oColor the color that the particles need to end up as?
As a simple example, let’s say I just wanted to make all the particles yellow once they’re rendered.

in external.vs

// attribs, etc..
void main(){
// code

oColor = vec4(1.0,1.0,0.0,1.0);

// more code
}

Now, inside your rendering shader, you would have something like this:

 //vertex shader

    in vec4 oColor;
    out vec4 pColor;
    void main(){
      // boilerplate
      pColor = oColor;
    }

 // fragment shader
    in vec4 pColor;
    out vec4 glFragColor;
    void main(){
        glFragColor = pColor;
    }

and now all the particles should be yellow. Does that help point you in the right direction a bit?

Hello sortofsleepy,

Thanks for the response.

In my case, all the particles are assigned different colors based on their position (basically mapped to the image they are trying to replicate). The idea is to morph them from one image to another based on an application event. If I set their color to either of the two external .vs variables which hold the values for the two different images, then they display that image perfectly. The problem comes when they move around to create the new image: that’s when the color switch is not happening for me, even though the event condition should logically trigger that change.

I am sure that I am missing something very basic, since I am inexperienced at this; that was one of the reasons I started with the ParticleSphereGPU example and tweaked it. Also, that example doesn’t use the typical vertex/fragment shader setup used in the example you gave and in the tutorials.

Glad to hear you got the particles to change to the right color at least!

Hmm, your problem now is a bit tricky, since shaders are kind of hard to debug, as I’m sure you’ve discovered. There’s not too much for me to go on without seeing more code (or the full source I could run myself). Logically though, based on the vertex shader you posted above, and assuming the basic logic didn’t change, you shouldn’t be having any trouble getting it to transition, so I’m not entirely sure what the issue could be.

Maybe someone else more experienced than I will have a better idea.

As to why the shaders you’re looking at are different, are you looking at this tutorial?
https://libcinder.org/docs/guides/opengl/part5.html

For me, I generally keep my shaders in separate files and load them in more or less like the ParticleSphereGPU example, but the constructor for a shader is able to take a raw string as well, which is essentially what the CI_GLSL macro in the guide is doing. Cinder also has default shaders, which can be obtained with gl::getStockShader.
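To make those options concrete, all three of these roughly end up in the same place (treat this as a sketch from memory: the file names are placeholders, and the inline shader is just a bare-bones pass-through):

// 1. Load the stages from files in the assets folder.
auto fromFiles = gl::GlslProg::create(app::loadAsset("shader.vert"), app::loadAsset("shader.frag"));

// 2. Write the source inline with the CI_GLSL macro (its first argument is the GLSL version).
auto fromString = gl::GlslProg::create(gl::GlslProg::Format()
	.vertex(CI_GLSL(150,
		uniform mat4 ciModelViewProjection;
		in vec4 ciPosition;
		void main() { gl_Position = ciModelViewProjection * ciPosition; }
	))
	.fragment(CI_GLSL(150,
		out vec4 oColor;
		void main() { oColor = vec4(1.0); }
	)));

// 3. Grab one of Cinder's stock shaders.
auto stock = gl::getStockShader(gl::ShaderDef().color());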

Hmm… for me the main problem in getting my head around the whole structure is that the ParticleSphereGPU example doesn’t have explicit CI_GLSL macros for the vertex and fragment shader parts, nor does it have external .vert and .frag files to handle the render part. This makes it a bit confusing for me to understand how exactly the information flows between the vertex and fragment shader.

I compared it with the TransformFeedbackSmokeParticles example, and for the render GLSL it’s using a stock shader instead of external shader files:

mRenderProg = gl::getStockShader(gl::ShaderDef().color());

I definitely need to read up a bit more on shaders to get a better idea of how they actually work, but it’s weird that I can make the particles move around any way I want just by controlling the position variables in the external .vs file, while changing the value of the color variables there does nothing. :frowning:

Ah, gotcha. I can try to explain a little bit, but this is the point where it might be more helpful to look up some basic OpenGL tutorials, which will probably explain it better than I could.

This is a good guide
https://learnopengl.com/

And again, I’m sure either I or someone else would be able to help you figure it out if you’d post some code that we could run; it’s hard to be sure what the issue might be based on that vertex shader alone.

Anyways, this is normally how you might set up a shader.

// note that this can all be defined in one line too, like in the ParticleSphereGPU example, but I tend to write it all
// out for clarity, and also because all the IDEs I've tried seem to mess up indentation for some reason when it's written
// on one line
gl::GlslProg::Format format;

// note - you can also input a raw string or use the CI_GLSL macro
format.vertex(app::loadAsset(<path to vertex shader in assets folder>));
format.fragment(app::loadAsset(<path to fragment shader in assets folder>));

auto shader = gl::GlslProg::create(format);

gl::getStockShader essentially wraps all that code and loads in a set of vertex and fragment shaders bundled with Cinder to save some of the trouble of setting up and writing out shaders for common things like setting the color of something or applying a texture to a mesh.
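For example, a couple of common combinations (writing these from memory, so double-check against the docs):

auto colorShader   = gl::getStockShader(gl::ShaderDef().color());            // uses the per-vertex ciColor attribute
auto textureShader = gl::getStockShader(gl::ShaderDef().texture().color());  // texture sample modulated by ciColor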

The shader pipeline generally flows
vertex -> fragment

When it comes to setting up the actual attributes (position, color, etc.) and the data associated with them, that’s handled through Vertex Array Objects, not the shaders themselves; the shaders are essentially a pipe that you’re running content through.
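To make the “pipe” idea concrete, it’s the location number that ties everything together (a stripped-down sketch borrowing the names from the ParticleSphereGPU example and the format object above):

// in the VAO description: attribute location 0 reads 3 floats starting at Particle::pos
gl::enableVertexAttribArray(0);
gl::vertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, sizeof(Particle), (const GLvoid*)offsetof(Particle, pos));

// in the program setup: location 0 is given the name the shader will see
format.attribLocation("iPosition", 0);

// inside the shader, the data then simply arrives as:
// in vec3 iPosition;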

Does that help clarify things at all?

Thanks sortofsleepy, that does help clear up my confusion about the use of the stock shader in the sample.

My confusion regarding the problem with the color change of the particles still remains, though (no doubt because of my lack of understanding of the subject). Let me try to post some code so that it becomes a bit easier to see where I could be going wrong.

The particle struct:

struct Particle
{
	vec3	pos;
	vec3	ppos;
	vec3	home;
	vec3	base;
	ColorA  color;
	ColorA  oColor;
	float	damping;
};

Initialization:

// Create particle buffers on GPU and copy data into the first buffer.
// Mark as static since we only write from the CPU once.
mParticleBuffer[mSourceIndex] = gl::Vbo::create(GL_ARRAY_BUFFER, mParticles.size() * sizeof(Particle), mParticles.data(), GL_STATIC_DRAW);
mParticleBuffer[mDestinationIndex] = gl::Vbo::create(GL_ARRAY_BUFFER, mParticles.size() * sizeof(Particle), nullptr, GL_STATIC_DRAW);

for (int i = 0; i < 2; ++i)
{	
	// Describe the particle layout for OpenGL.
	mAttributes[i] = gl::Vao::create();
	gl::ScopedVao vao(mAttributes[i]);

	// Define attributes as offsets into the bound particle buffer
	gl::ScopedBuffer buffer(mParticleBuffer[i]);
	gl::enableVertexAttribArray(0);
	gl::enableVertexAttribArray(1);
	gl::enableVertexAttribArray(2);
	gl::enableVertexAttribArray(3);
	gl::enableVertexAttribArray(4);
	gl::enableVertexAttribArray(5);
	gl::enableVertexAttribArray(6);
	gl::vertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, sizeof(Particle), (const GLvoid*)offsetof(Particle, pos));
	gl::vertexAttribPointer(1, 4, GL_FLOAT, GL_FALSE, sizeof(Particle), (const GLvoid*)offsetof(Particle, color));
	gl::vertexAttribPointer(2, 4, GL_FLOAT, GL_FALSE, sizeof(Particle), (const GLvoid*)offsetof(Particle, oColor));
	gl::vertexAttribPointer(3, 3, GL_FLOAT, GL_FALSE, sizeof(Particle), (const GLvoid*)offsetof(Particle, ppos));
	gl::vertexAttribPointer(4, 3, GL_FLOAT, GL_FALSE, sizeof(Particle), (const GLvoid*)offsetof(Particle, home));
	gl::vertexAttribPointer(5, 3, GL_FLOAT, GL_FALSE, sizeof(Particle), (const GLvoid*)offsetof(Particle, base));
	gl::vertexAttribPointer(6, 1, GL_FLOAT, GL_FALSE, sizeof(Particle), (const GLvoid*)offsetof(Particle, damping));
		
}

// Load our update program.
// Match up our attribute locations with the description we gave.
mRenderProg = gl::getStockShader(gl::ShaderDef().color());

mUpdateProg = gl::GlslProg::create(gl::GlslProg::Format().vertex(loadAsset("particleUpdate.vs"))
	.feedbackFormat(GL_INTERLEAVED_ATTRIBS)
	.feedbackVaryings({ "position", "pposition", "home", "base", "color", "oColor", "damping" })
	//.feedbackVaryings({ "position", "color", "oColor", "pposition", "home", "base", "damping" })
	.attribLocation("iPosition", 0)
	.attribLocation("iColor", 1)
	.attribLocation("iOColor", 2)
	.attribLocation("iPPosition", 3)
	.attribLocation("iHome", 4)
	.attribLocation("iBase", 5)
	.attribLocation("iDamping", 6)
  );

Update function:

// Update particles on the GPU
gl::ScopedGlslProg prog(mUpdateProg);
gl::ScopedState rasterizer(GL_RASTERIZER_DISCARD, true);	// turn off fragment stage

mUpdateProg->uniform("uActive", activeFlag);

// Bind the source data (Attributes refer to specific buffers).
gl::ScopedVao source(mAttributes[mSourceIndex]);

// Bind destination as buffer base.
gl::bindBufferBase(GL_TRANSFORM_FEEDBACK_BUFFER, 0, mParticleBuffer[mDestinationIndex]);
gl::beginTransformFeedback(GL_POINTS);

// Draw source into destination, performing our vertex transformations.
gl::drawArrays(GL_POINTS, 0, mNumParticles);

gl::endTransformFeedback();

// Swap source and destination for next loop
std::swap(mSourceIndex, mDestinationIndex);

draw function:

gl::ScopedGlslProg render(mRenderProg);
gl::ScopedVao vao(mAttributes[mSourceIndex]);

gl::context()->setDefaultShaderVars();
gl::drawArrays(GL_POINTS, 0, mNumParticles);

Like I mentioned earlier, I am storing the position and color values for the individual particles, relative to the two images, in the corresponding variables in the external .vs. The idea is to morph the particle cluster from one image to the other. It’s kind of working, in the sense that the particles move to the correct positions and the colors are even displayed correctly for the first image. The problem is that even if I change the value of the color variable in the external .vs, it doesn’t change the colors of the particles as they move to their new positions.

I tried following Paul’s advice in an older thread on the old forum, where he suggested being a bit more clever in assigning the color values and using something like the particles’ position to define them, but that also only sets the color the first time and doesn’t update it as the particles move to the new positions for the second image.

I hope that the chunks of shared code helped clarify the situation a bit. Please let me know if some more clarity is needed to help understand the problem better.

Thanks for your help.

Hi,

Instead of setting the color for each vertex from code, you could also set a texture coordinate once for each vertex and then simply look up the color in the vertex shader. You could even do crossfading in the shader by sampling both textures. The big advantage is that the texture coordinates always stay the same and don’t need to be written every frame.
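Something along these lines inside the update shader (untested; the sampler and uniform names are just placeholders):

uniform sampler2D uTexA;   // the first image
uniform sampler2D uTexB;   // the second image
uniform float     uFade;   // 0.0 = show image A, 1.0 = show image B

in vec2 iTexCoord;         // fixed per-particle texture coordinate, written only once

// ...then inside main(), after the position update:
vec4 colorA = texture(uTexA, iTexCoord);
vec4 colorB = texture(uTexB, iTexCoord);
color = mix(colorA, colorB, uFade);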

-Paul

Edit: lol, looks like we already talked about this. But if a fixed texture coordinate doesn’t work, why not derive it from the current position?

Hello again!

Yes, that certainly helps. I know you said you only modded the ParticleGPU example, but I just wanted to be sure nothing in the setup might have changed things. While I don’t see how the movement is happening with the snippets you posted, I think I ran into what you might be running into with using uActive as a flag to swap colors (to be honest, I shoulda caught this sooner haha).

The simplest way, going along with the current code you appear to be writing, would be to add one more attribute storing the current color of a particle, and to store the two colors corresponding to the different images as separate attributes.

You could then do something like

if(uActive > 0.0){
 color = colorOne;
}else{
 color = colorTwo;
}

That being said, @paul.houx 's suggestion is the better way to go(another thing I shoulda thought of doh! haha).

Thanks for the replies guys.

@paul.houx:

I tried setting the color value based on the current position of the particle, but in that case too the color assignment seems to happen only once, while the particles are moving to their initial positions during the initialization animation, so the cluster ends up with a color value which is halfway between the two desired values. The final result looks like the two images superimposed on each other, even though the particle cluster is moving to the correct positions and arranging itself correctly to show the right shape in both cases.

I used a simple ratio-based color calculation in the .vs:

float t = length(position - home) / length(base - home);
color = vec4(mix(colorActive.rgb, colorInactive.rgb, t), 1.0);

Where home and base are the positions assigned for the two different images, and colorActive and colorInactive are the two corresponding color values.

@sortofsleepy

I followed your advice: I added another attribute to store the current color value, stored the two colors corresponding to the two images in separate attributes, and then switched the color value to the corresponding attribute based on the uActive flag like you suggested. But even then it just takes the first value during initialization as the particle color and doesn’t change at all when the events are triggered.

I am sure that I am missing something really basic somewhere. I did add the new attributes to the feedbackVaryings list:

mUpdateProg = gl::GlslProg::create(gl::GlslProg::Format().vertex(loadAsset("particleUpdate.vs"))
			.feedbackFormat(GL_INTERLEAVED_ATTRIBS)
			.feedbackVaryings({ "position", "pposition", "home", "base", "colorActive", "colorInactive", "color", "damping" })
			.attribLocation("iPosition", 0)
			.attribLocation("iColor", 1)
			.attribLocation("iOColor", 2)
			.attribLocation("iPPosition", 3)
			.attribLocation("iHome", 4)
			.attribLocation("iBase", 5)
			.attribLocation("iDamping", 6)
			);

Maybe I am missing some other piece of the puzzle in the setup. I know that I will feel really stupid when I find out what I screwed up, but I would gladly take that over not knowing why it’s not working like it’s supposed to. :frowning:

Ah, you didn’t actually add a new attribute, but a new varying. Even then, it seems you didn’t add any new attributes to the VAOs themselves, unless you just left that part out?

Updating the shader doesn’t exactly add a new attribute; remember, shaders are merely a pipe that transforms data. You still have to add the actual attribute to your Vertex Array Objects first, like you did in the chunks of code you posted above. Then you make the necessary additions to the shader, like adding the varying value and adding a new attribute location.
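Roughly, a new per-particle attribute touches four places; using the names from your own code above as the example (a checklist rather than drop-in code):

struct Particle { /* ... */ ColorA color; /* ... */ };    // 1. the CPU-side struct gains the member

gl::enableVertexAttribArray(7);                           // 2. the VAO describes where it lives in the buffer
gl::vertexAttribPointer(7, 4, GL_FLOAT, GL_FALSE, sizeof(Particle), (const GLvoid*)offsetof(Particle, color));

// 3. the update program maps it: .attribLocation("iCurrColor", 7) plus "color" added to feedbackVaryings()
// 4. the shader declares it:     in vec4 iCurrColor;  and  out vec4 color;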

That being said, @paul.houx 's suggestion of using texture coordinates is the better way to go, as it’s more scalable, but I’ve only explored it briefly and am probably not the one to ask about that technique.

Hmm… just to understand where exactly I am going wrong: I had added a new varying and used two independent attributes to store the color values related to the two different images. I will post chunks from the current state of the code to pinpoint the problem (apologies in advance for the length of the content dump).

So, the local Particle struct looks like this:

struct Particle
{
	vec3	pos;
	vec3	ppos;
	vec3	home;
	vec3	base;
	ColorA  color;
	ColorA  colorActive;
	ColorA  colorInactive;
	float	damping;
};

The particles and shader initialization function:

// Mark as static since we only write from the CPU once.
mParticleBuffer[mSourceIndex] = gl::Vbo::create(GL_ARRAY_BUFFER, mParticles.size() * sizeof(Particle), mParticles.data(), GL_STATIC_DRAW);
mParticleBuffer[mDestinationIndex] = gl::Vbo::create(GL_ARRAY_BUFFER, mParticles.size() * sizeof(Particle), nullptr, GL_STATIC_DRAW);

for (int i = 0; i < 2; ++i)
{	
	// Describe the particle layout for OpenGL.
	mAttributes[i] = gl::Vao::create();
	gl::ScopedVao vao(mAttributes[i]);

	// Define attributes as offsets into the bound particle buffer
	gl::ScopedBuffer buffer(mParticleBuffer[i]);
	gl::enableVertexAttribArray(0);
	gl::enableVertexAttribArray(1);
	gl::enableVertexAttribArray(2);
	gl::enableVertexAttribArray(3);
	gl::enableVertexAttribArray(4);
	gl::enableVertexAttribArray(5);
	gl::enableVertexAttribArray(6);
	gl::enableVertexAttribArray(7);
	gl::vertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, sizeof(Particle), (const GLvoid*)offsetof(Particle, pos));
	gl::vertexAttribPointer(1, 4, GL_FLOAT, GL_FALSE, sizeof(Particle), (const GLvoid*)offsetof(Particle, colorActive));
	gl::vertexAttribPointer(2, 4, GL_FLOAT, GL_FALSE, sizeof(Particle), (const GLvoid*)offsetof(Particle, colorInactive));
	gl::vertexAttribPointer(3, 3, GL_FLOAT, GL_FALSE, sizeof(Particle), (const GLvoid*)offsetof(Particle, ppos));
	gl::vertexAttribPointer(4, 3, GL_FLOAT, GL_FALSE, sizeof(Particle), (const GLvoid*)offsetof(Particle, home));
	gl::vertexAttribPointer(5, 3, GL_FLOAT, GL_FALSE, sizeof(Particle), (const GLvoid*)offsetof(Particle, base));
	gl::vertexAttribPointer(6, 1, GL_FLOAT, GL_FALSE, sizeof(Particle), (const GLvoid*)offsetof(Particle, damping));
	gl::vertexAttribPointer(7, 4, GL_FLOAT, GL_FALSE, sizeof(Particle), (const GLvoid*)offsetof(Particle, color));
}

// Load our update program.
// Match up our attribute locations with the description we gave.
mRenderProg = gl::getStockShader(gl::ShaderDef().color());

mUpdateProg = gl::GlslProg::create(gl::GlslProg::Format().vertex(loadAsset("particleUpdate.vs"))
	.feedbackFormat(GL_INTERLEAVED_ATTRIBS)
	.feedbackVaryings({ "position", "pposition", "home", "base", "colorActive", "colorInactive", "color", "damping" })
	.attribLocation("iPosition", 0)
	.attribLocation("iColor", 1)
	.attribLocation("iOColor", 2)
	.attribLocation("iPPosition", 3)
	.attribLocation("iHome", 4)
	.attribLocation("iBase", 5)
	.attribLocation("iDamping", 6)
	.attribLocation("iCurrColor", 7)
	);

The update function:

// Update particles on the GPU
gl::ScopedGlslProg prog(mUpdateProg);
gl::ScopedState rasterizer(GL_RASTERIZER_DISCARD, true);	// turn off fragment stage

mUpdateProg->uniform("uActive", activeFlag);

// Bind the source data (Attributes refer to specific buffers).
gl::ScopedVao source(mAttributes[mSourceIndex]);

// Bind destination as buffer base.
gl::bindBufferBase(GL_TRANSFORM_FEEDBACK_BUFFER, 0, mParticleBuffer[mDestinationIndex]);
gl::beginTransformFeedback(GL_POINTS);

// Draw source into destination, performing our vertex transformations.
gl::drawArrays(GL_POINTS, 0, mNumParticles);

gl::endTransformFeedback();

// Swap source and destination for next loop
std::swap(mSourceIndex, mDestinationIndex);

The draw function:

gl::ScopedGlslProg render(mRenderProg);
gl::ScopedVao vao(mAttributes[mSourceIndex]);

gl::context()->setDefaultShaderVars();
gl::drawArrays(GL_POINTS, 0, mNumParticles);

The external .vs chunk:

#version 150 core

uniform float uActive;

in vec3   iPosition;
in vec3   iPPosition;
in vec3   iHome;
in vec3   iBase;
in float  iDamping;
in vec4   iColor;
in vec4   iOColor;
in vec4   iCurrColor;
in vec4   ciColor;

out vec3  position;
out vec3  pposition;
out vec3  home;
out vec3  base;
out float damping;
out vec4  color;
out vec4  colorInactive;
out vec4  colorActive;

const float dt2 = 1.0 / (60.0 * 60.0);
const float force = 100.0;

void main()
{
	position =  iPosition;
	pposition = iPPosition;
	damping =   iDamping;
	home =      iHome;
	base =      iBase;
	colorInactive = iOColor;
	colorActive = iColor;
	color = iCurrColor;
	
	if(uActive > 0.0)
	{
		
		vec3 dir = base - position;
		float d2 = length( dir );
		d2 *= d2;
		
		float currForce = d2 < 20 ? 0 : (d2 < 40 ? force/8.0 : (d2 < 75 ? force/4.0 : (d2 < 120 ? force/2.0 : force )));
		position += currForce * dir / d2;

		vec3 vel = (position - pposition) * damping;
		pposition = position;
		
		vec3 acc = (base - position) * 32.0f;
		position += vel + acc * dt2;
		
		
	}
	else
	{
		
		vec3 vel = (position - pposition) * (damping + 0.05f);
		pposition = position;
		vec3 acc = (home - position) * 32.0f;
		position += vel + acc * dt2;
		
	}
	
	float t = length(position - home) / length(base - home);
	color = vec4(mix(colorActive.rgb, colorInactive.rgb, t), 1.0);
	
}

The strange thing is that it does set the color to a mixed one (somewhere midway between the two color values) during the initial animation, but then the color simply stays the same no matter where the particles move, which results in a hybrid of the two images I am trying to display.

Code is helpful, thank you for posting again! While it’d still be useful to know what kind of data is going into your particles, not just the VAO setup, I decided to just build a tiny example anyways from scratch (sort of).

It’s pretty much a straight copy of the ParticleGPU example, but I added two attributes for active/inactive colors and set the uActive variable to flip on a keypress. It should demonstrate how to switch between colors using magenta and yellow: magenta when uActive is false, yellow when uActive is true.
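Something along these lines (the class and member names here aren’t necessarily what’s in the example I attached, just the general idea):

void ParticleApp::keyDown(KeyEvent event)
{
	// toggle between the two color sets on the spacebar
	if(event.getChar() == ' ')
		mActive = !mActive;
}

// ...and in update(), before running the transform feedback pass:
mUpdateProg->uniform("uActive", mActive ? 1.0f : 0.0f);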

I also tried running your shader in it (with some minor alterations, because I was in a hurry and a little lazy) and it seems to move between colors nicely, though again, since I don’t know what data is going into the particles initially, it’s likely not a replica of the movement you set up.

update.glsl is the shader I was testing; update-test.glsl is my attempt at integrating your vertex shader.

Some thoughts:

  • As I was building the example I was reminded of how finicky things can get when using interleaved attributes. I made comments in the file, but essentially, something worth trying might be to make sure you’re calling gl::vertexAttribPointer in the same variable order as your Particle struct is set up.

  • I had a very silly realization as well (I’m getting old, ha): if all you need to do is simply switch colors without any kind of fancy effect, why not manage the colors outside of the shader and just send the current color as another uniform value? A quick sketch of what I mean is below.
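For instance, something like this (the uniform name and colors are just placeholders):

// pick the current color on the CPU and hand it to the update shader every frame
mUpdateProg->uniform("uCurrentColor", activeFlag ? vec4(1, 1, 0, 1) : vec4(1, 0, 1, 1));

// ...and in the update shader:
// uniform vec4 uCurrentColor;
// ...
// color = uCurrentColor;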

Anyways, I hope that helps a bit more! Sorry to have not been quite as helpful so far.


@sortofsleepy

I got it to work! Thanks a lot. You have been very patient and a great help.

I think I was screwing up the order of the vertexAttribPointer assignments somehow, even though I still don’t know what I was doing wrong, since I made sure to match it to the Particle struct. What worked for me was to match the order to the working example you shared, even though that order doesn’t match the order of the variables in the Particle struct.

If I matched the order exactly, then all it gave me was a yellowish cluster of particles which didn’t have any reaction to the application event (in terms of the color change at least; the position was still changing correctly). The moment I switched the order around a bit, I started getting results on the simple test color switches I had set up. Then I just switched the color values to the pixel values from the two images and it worked perfectly. I then uncommented the position-based color assignment and it worked like a charm. :slight_smile:


Current Particle struct:

struct Particle
{
	vec3	pos;
	vec3	ppos;
	vec3	home;
	ColorA  color;
	vec3	base;
	float	damping;
	
	ColorA	colorOne;
	ColorA	colorTwo;
};

Vertex attribute pointer assignments:

for (int i = 0; i < 2; ++i)
{	
	// Describe the particle layout for OpenGL.
	mAttributes[i] = gl::Vao::create();
	gl::ScopedVao vao(mAttributes[i]);

	// Define attributes as offsets into the bound particle buffer
	gl::ScopedBuffer buffer(mParticleBuffer[i]);
	gl::enableVertexAttribArray(0);
	gl::enableVertexAttribArray(1);
	gl::enableVertexAttribArray(2);
	gl::enableVertexAttribArray(3);
	gl::enableVertexAttribArray(4);
	gl::enableVertexAttribArray(5);
	gl::enableVertexAttribArray(6);
	gl::enableVertexAttribArray(7);
	gl::vertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, sizeof(Particle), (const GLvoid*)offsetof(Particle, pos));
	gl::vertexAttribPointer(1, 4, GL_FLOAT, GL_FALSE, sizeof(Particle), (const GLvoid*)offsetof(Particle, color));
	gl::vertexAttribPointer(2, 3, GL_FLOAT, GL_FALSE, sizeof(Particle), (const GLvoid*)offsetof(Particle, ppos));
	gl::vertexAttribPointer(3, 3, GL_FLOAT, GL_FALSE, sizeof(Particle), (const GLvoid*)offsetof(Particle, home));
	gl::vertexAttribPointer(4, 3, GL_FLOAT, GL_FALSE, sizeof(Particle), (const GLvoid*)offsetof(Particle, base));
	gl::vertexAttribPointer(5, 1, GL_FLOAT, GL_FALSE, sizeof(Particle), (const GLvoid*)offsetof(Particle, damping));
			
	gl::vertexAttribPointer(6, 4, GL_FLOAT, GL_FALSE, sizeof(Particle), (const GLvoid*)offsetof(Particle, colorOne));
	gl::vertexAttribPointer(7, 4, GL_FLOAT, GL_FALSE, sizeof(Particle), (const GLvoid*)offsetof(Particle, colorTwo));
}

Shader setup:

mRenderProg = gl::getStockShader(gl::ShaderDef().color());
		
mUpdateProg = gl::GlslProg::create(gl::GlslProg::Format().vertex(loadAsset("glsl_update.vs"))
		.feedbackFormat(GL_INTERLEAVED_ATTRIBS)
		.feedbackVaryings({ "position", "pposition", "home", "color", "base", "damping", "colorOne", "colorTwo" })
		.attribLocation("iPosition", 0)
		.attribLocation("iColor", 1)
		.attribLocation("iPPosition", 2)
		.attribLocation("iHome", 3)
		.attribLocation("iBase", 4)
		.attribLocation("iDamping", 5)
		.attribLocation("iColorOne", 6)
		.attribLocation("iColorTwo", 7)
	);

.vs

#version 150 core

uniform float uActive;

in vec3   iPosition;
in vec3   iPPosition;
in vec3   iHome;
in vec3   iBase;
in float  iDamping;
in vec4   iColor;
in vec4	  iColorOne;
in vec4   iColorTwo;

out vec3  position;
out vec3  pposition;
out vec3  home;
out vec3  base;
out float damping;
out vec4  color;
out vec4  colorOne;
out vec4  colorTwo;

const float dt2 = 1.0 / (60.0 * 60.0);
const float force = 100.0;

//float currForce = 0.0;

void main()
{
	position =  iPosition;
	pposition = iPPosition;
	home =      iHome;
	base =      iBase;
    damping =   iDamping;
	color = iColor;
	colorOne = iColorOne;
	colorTwo = iColorTwo;
	
	if(uActive > 0.0)
	{

		vec3 dir = base - position;
		float d2 = length( dir );
		d2 *= d2;
		
		float currForce = d2 < 20 ? 0 : (d2 < 40 ? force/8.0 : (d2 < 75 ? force/4.0 : (d2 < 120 ? force/2.0 : force )));
		position += currForce * dir / d2;

		vec3 vel = (position - pposition) * damping;
		pposition = position;
		
		vec3 acc = (base - position) * 32.0f;
		position += vel + acc * dt2;
		
	}
	else
	{
		
		vec3 vel = (position - pposition) * (damping + 0.01f);
		pposition = position;
		vec3 acc = (home - position) * 32.0f;
		position += vel + acc * dt2;
		
	}
	
	float t = length(position - home) / length(base - home);
	color = vec4(mix(colorTwo.rgb, colorOne.rgb, t), 1.0);
	
}

Still a bit confused as to why it was not working earlier but at least now I can take my time to go through it.

Thanks again.:slight_smile:


Hi,

It might be beneficial to read a bit more about the std140 layout rules. It will help reduce memory and bandwidth usage and will prevent errors caused by misalignment of data (which I think may be one of the reasons why your code did not work properly). The std140 layout is maybe not directly applicable to your situation, but in general it would not hurt to always stick to its rules.

For instance, instead of using separate arrays for base and damping, you could use a single array of size 4 which contains both. For your information, here’s what the struct should look like in std140 layout:

struct Particle {
    vec3   pos;      // array 0 (3+1 floats)
    float  pad0;     // (reserved)
    vec3   ppos;     // array 1 (3+1 floats)
    float  pad1;     // (reserved)
    vec3   home;     // array 2 (3+1 floats)
    float  pad2;     // (reserved)
    ColorA color;    // array 3 (4 floats)
    vec3   base;     // array 4 (3+1 floats)
    float  damping;  // (part of array 4)
    ColorA colorOne; // array 5 (4 floats)
    ColorA colorTwo; // array 6 (4 floats)
};

The reserved floats are only there to make sure the struct’s size is the same as that of its GLSL counterpart, which is always a multiple of 16 bytes (4 floats). A ColorA is already 4 floats in size. A vec3 and a float together also make 4 floats. Note, however, that a float followed by a vec3 would take up 8(!) floats, due to the std140 alignment rules.
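As a quick sanity check (assuming glm’s vec3 is 12 bytes and a ColorA is 16 bytes, which are the defaults), you could verify the CPU-side layout of the padded struct above at compile time:

// offsetof comes from <cstddef>; the offsets follow from the padded struct above:
// color starts at byte 48, colorOne at byte 80, and the whole struct is 7 * 16 = 112 bytes
static_assert(sizeof(Particle) == 112, "Particle is not a multiple of 16 bytes");
static_assert(offsetof(Particle, color) == 48, "unexpected padding before color");
static_assert(offsetof(Particle, colorOne) == 80, "unexpected padding before colorOne");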

-Paul


Thanks @paul.houx. That’s very informative. I think that you could be right about the reason why it wasn’t working properly earlier. Good to know that there is so much to learn and improve upon.