OpenGL Extensions (Glew, Glee, Gloader)

Hi,

Thanks for replying, Paul.

I just did, but having both #include "cinder\gl\gl.h" and #include <glload/gl_core.hpp> gives me an "Attempt to include auto-generated header after including glew.h" error, whereas NOT including cinder\gl\gl.h gives me namespace ci and gl errors. I also tried including a bunch of other glload .h and .hpp files, but the errors I get are similar.

If you have any ideas, feel free to share :slight_smile:

Thanks!

Hi,

Just jumping in here, but it seems you are trying to pull in glew and glload at the same time. This won't work, since both libraries are trying to do the same thing, so it's one or the other. You should at least try to find where glew.h is pulled in from and remove that reference.

Hi,

Thanks, @petros.

After finding that the only difference in OpenGL extensions is the GL_ARB_compatibility extension, which is present on the OF side but not on the Cinder side, I decided this is not a probable cause.

To summarise what I've found out so far: the GL_INVALID_OPERATION happens on this line:

glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_COMPRESSED_IMAGE_SIZE, &dataLength);

The code leading to this is as follows:

texData.setWidth(1920);
texData.setHeight(1080);
texData.setDataFormat(GL_UNSIGNED_INT_8_8_8_8_REV);
texData.setInternalFormat(GL_COMPRESSED_RGBA_S3TC_DXT5_EXT);
gl::Texture2d::Format fmt;
fmt.setInternalFormat(GL_COMPRESSED_RGBA);
fmt.setTarget(GL_TEXTURE_2D);
mTexture = gl::Texture2d::create(texData, fmt);
glBindTexture( GL_TEXTURE_2D, mTexture->getId()); //we get no gl_error here
glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_COMPRESSED_IMAGE_SIZE, &dataLength);

The same GL_INVALID_OPERATION error happens in:

glCompressedTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, width, height, mTexture->getInternalFormat(), dataLength, mSurface->getData());

which is a few lines further down.
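
For reference, I'm checking glGetError() after the calls above with a small helper along these lines (the helper name is purely illustrative):

#include "cinder/app/App.h"   // for ci::app::console()
#include "cinder/gl/gl.h"

// Drain the GL error queue and print every pending error, tagged with the call it follows.
static void checkGlError( const char *where )
{
    for( GLenum err = glGetError(); err != GL_NO_ERROR; err = glGetError() )
        ci::app::console() << "GL error 0x" << std::hex << err << std::dec
                           << " after " << where << std::endl;
}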

Based on the glGetTexLevelParameteriv specification, I suspect the reason behind the GL_INVALID_OPERATION error is this:

GL_INVALID_OPERATION is generated if GL_TEXTURE_COMPRESSED_IMAGE_SIZE is queried on texture images with an uncompressed internal format or on proxy targets.

However, I've tried a couple of format combinations, with no success.

The specification for glCompressedTexSubImage2D lists more reasons for a GL_INVALID_OPERATION error, half of which I don't really understand at the moment.

I am going to keep poking at it, but am a bit rudderless right now.

If anyone spots anything obviously wrong please let me know.


UPDATE 1: I did the following check:

GLint isCompressed = true;
glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_COMPRESSED, &isCompressed);

After these two lines execute, the isCompressed variable is false on the Cinder side, whereas on OF it stays true. On to discovering why that is and how OpenGL determines whether a texture is compressed or not.


UPDATE 2: another check:

GLint intFormat;
glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_INTERNAL_FORMAT, &intFormat);

returns 6408 (GL_RGBA) on the Cinder side and 33776 (GL_COMPRESSED_RGB_S3TC_DXT1_EXT, i.e. DXT1) on the OF side. Apparently, the internal format is not set correctly on the texture.


UPDATE 3: the latest hypothesis is that somehow glBindTexture(GL_TEXTURE_2D, texID) is not working correctly. For example, texID refers to a texture of 720x480 px, and yet the code below:

glBindTexture(GL_TEXTURE_2D, mTexID);
GLint intFormat;
glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_WIDTH, &intFormat);

sets intFormat to 0, whereas it should be 720.
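
A quick sanity check along these lines (same mTexID as above; just a sketch) should tell whether the texture name even exists in the current context, or whether the bind isn't taking effect:

glBindTexture(GL_TEXTURE_2D, mTexID);

// glIsTexture returns GL_TRUE only for a name that has been bound as a texture
// in this context (or a context sharing with it); GL_FALSE would point to a
// context-sharing problem rather than a broken glBindTexture.
GLboolean isTex = glIsTexture(mTexID);

// Verify the bind actually took effect: the currently bound 2D texture should equal mTexID.
GLint bound = 0;
glGetIntegerv(GL_TEXTURE_BINDING_2D, &bound);

ci::app::console() << "isTexture: " << (int)isTex << ", bound: " << bound
                   << " (expected " << mTexID << ")" << std::endl;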

Any ideas?

Thanks so much.

Best regards,
M

Compressed textures are a bit out of my comfort zone.

In case you didn’t know: to facilitate debugging, it helps if you can step through Cinder’s source code as well. If you’re on Windows, you can add Cinder’s project to your Visual Studio solution. If you then run the Debug target and set a breakpoint, you can see exactly what Cinder is doing under the hood.

  • From the File menu, select Add... and then Existing Project...
  • Browse to <cinder>\proj\vc2015\cinder.vcxproj
  • Click the Open button
  • In the Solution Explorer, right-click on your project
  • Select Build Dependencies and then Project Dependencies...
  • Make sure the cinder project is selected by clicking in the check box. This way, Cinder will be built before the code of your project.
  • Click the OK button

-Paul

Hey,

I think, first, you should try to figure out why the compressed format request is failing.

According to this, GL_COMPRESSED_RGBA_S3TC_DXT5_EXT should be handled properly.

Can you give the following a try:

texData.setWidth( 1920 );
texData.setHeight( 1080 );	
texData.setDataFormat( GL_RGBA );
texData.setInternalFormat( GL_COMPRESSED_RGBA_S3TC_DXT5_EXT );
gl::Texture2d::Format fmt;
fmt.setInternalFormat( GL_COMPRESSED_RGBA_S3TC_DXT5_EXT );
fmt.setTarget( GL_TEXTURE_2D );
mTexture = gl::Texture2d::create( texData, fmt );
{
    gl::ScopedTextureBind stb( mTexture->getTarget(), mTexture->getId() );
    GLint dataLength = 0;
    glGetTexLevelParameteriv( GL_TEXTURE_2D, 0, GL_TEXTURE_COMPRESSED_IMAGE_SIZE, &dataLength );
}

Hello Paul and Petros,

Thanks for the suggestions. I tried both. Paul's suggestion, whilst completely new to me and surely very useful for future challenges, didn't offer any additional info, because I couldn't step into the glBindTexture function any further.

I just tried Petros’ code with some additional checks as shown below:

{
    texData.setWidth(1920);
    texData.setHeight(1080);
    texData.setDataFormat(GL_RGBA);
    texData.setInternalFormat(GL_COMPRESSED_RGBA_S3TC_DXT5_EXT);
    gl::Texture2d::Format fmt;
    fmt.setInternalFormat(GL_COMPRESSED_RGBA_S3TC_DXT5_EXT);
    fmt.setTarget(GL_TEXTURE_2D);
    mTexture = gl::Texture2d::create(texData, fmt);
    {
        gl::ScopedTextureBind stb(mTexture->getTarget(), mTexture->getId());
        GLint dataLength = 0;
        glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_COMPRESSED_IMAGE_SIZE, &dataLength);
        GLint intFormat;
        glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_INTERNAL_FORMAT, &intFormat);
        GLint isCompressed = true;
        glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_COMPRESSED, &isCompressed);
    }
}

Unfortunately, dataLength and isCompressed remain 0 after execution, whereas intFormat is 6408 (GL_RGBA) instead of 33779 (GL_COMPRESSED_RGBA_S3TC_DXT5_EXT).

From my limited knowledge, it seems that OpenGL doesn't pull the data from the texture, whether I use Cinder's ScopedTextureBind (Petros' suggestion) or OpenGL's glBindTexture (original OF code).

P.S. In the meantime I managed to include glew (and glad.h, as a matter of fact), but it didn't rescue me (you guys probably suspected as much).

If you have any additional ideas, don't hold them back :slight_smile:

I’m going to keep digging though.

Not a huge help, but my next move would be to see what you get for glGetString(GL_RENDERER) and glGetString(GL_VERSION) etc. on both OF and Cinder. Perhaps you're getting a different context version that is more or less strict than the other as it pertains to this particular feature?
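
Something like this on the Cinder side should print enough to compare (a minimal sketch; ci::app::console() is just Cinder's log stream, and the OF side can log the same strings):

#include "cinder/app/App.h"
#include "cinder/gl/gl.h"

void printContextInfo()
{
    // glGetString returns null-terminated strings describing the current context.
    ci::app::console() << "GL_RENDERER: " << glGetString( GL_RENDERER ) << std::endl;
    ci::app::console() << "GL_VERSION:  " << glGetString( GL_VERSION ) << std::endl;
    ci::app::console() << "GL_SHADING_LANGUAGE_VERSION: "
                       << glGetString( GL_SHADING_LANGUAGE_VERSION ) << std::endl;
}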


Another thing that might be worth double-checking is that at this point https://github.com/cinder/Cinder/blob/master/src/cinder/gl/Texture.cpp#L2332 (which happens during texture creation with the constructor you are indirectly calling), all the parameters passed are what you would expect them to be.

Hi,

I checked all of the above. I am using Cinder v0.9.0, so the line numbers in Texture.cpp are a bit different, but the code seems the same.

When I stepped into the mTexture = gl::Texture2d::create(texData, fmt); line, I went to Texture2d::create as shown below:
[screenshot: step1]

Then to Texture2d::replace, where all the data seems fine.
[screenshot: step2]

The for loop in Texture2d::replace was completely skipped, though (I don't know if that's intentional).
[screenshot: step3]

The way I understand it, it appears that the data is passed correctly, right?

And to answer @lithium 's comment:
Renderer is the same on both sides, which is GeForce GTX 1070/PCIe/SSE2

However, the OpenGL version differs as it is 4 on OF side

3 on cinder side.

Can you force cinder to use openGL 4? I mean how likely is it that this might be the cause of it anyway?

Thanks for all the suggestions.

You can pass the context version as a parameter to your renderer instantiation in the CINDER_APP macro.

CINDER_APP( YourApp, RendererGl ( RendererGl::Options().version(4, 6) ) )

As for how likely it is to solve your problem, I would suspect that 3.2 and 4.6 behave fairly similarly, but I thought OF may have been using the old fixed-function pipeline (i.e. version 2.1), which could be drastically different.

Thanks. I did that and now the OpenGL version does say '4', but the result is still the same. In @petros' code segment, all the variables stay at their defaults or 0.

It feels like I am either missing something obvious (very likely) or it's a weird bug of some kind.

Has anyone perhaps ventured so far as to test @petros' code in a blank new Cinder project? I did, but the results are the same: no parameters get extracted with the glGetTexLevelParameteriv function.

Hi,

Assuming texData is of type gl::TextureData and following the code, there's not much you can learn from it, apart from the decimal value of the GL_COMPRESSED_RGBA_S3TC_DXT5_EXT constant, which is 33779.

I was able to successfully create that texture, but that’s not a huge surprise because it is not initialized with data, it’s more or less an empty shell.

When calling glGetTexLevelParameteriv, an error is generated which you can retrieve with glGetError(). It’s error 1282 (0x0502), which is GL_INVALID_OPERATION. You already knew this, of course. This is due to the fact that the texture is not compressed.

What’s your data? Did you actually load the compressed data into the texture? I could imagine that for OpenGL to figure out the uncompressed size, it needs to decode part of the data (it’s probably stored in the header). If that data is invalid, or empty, it would generate the above error.

Do you have a compressed file that you are willing to share?

-Paul

Hi Paul,

Thanks for replying.

Yes, texData is defined as gl::TextureData texData;

This whole exercise is part of trying to port SecondStory's ofHAPplayer to Cinder, so the compressed files are basically the HAP sample files you can download from the Renderheads website (HAP and HAP with alpha).

I can imagine that writing code to read those files would be too much work just to help me out, so if I can provide a compressed file in any other (and easier) way, let me know. I would be happy to.

However, it’s quite possible that I might have missed out on ‘loading the compressed data into the texture’. Can you please be more specific?

What I mean is that in my code no data is read until after the glGetTexLevelParameteriv call. The read happens a bit later, here:

player->getPixels( mSurface->getData() ); //this is where we read pixels from memory buffer
glCompressedTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, width, height, mTexFormat, dataLength, mSurface->getData());

And the glCompressedTexSubImage2D call also generates GL_INVALID_OPERATION, even though all the parameters are correct and mSurface->getData() looks correct as well.

Hi,

This is the first time I've played with compressed texture formats in OpenGL, so forgive me if I misunderstand and confuse you even more.

I did a little research and it seems that calling glGetTexLevelParameteriv to retrieve the compressed image size in bytes should be done after making sure the texture is indeed compressed. You can find more info here.
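
In code, the order I mean is roughly this (a sketch):

GLint isCompressed = GL_FALSE;
glGetTexLevelParameteriv( GL_TEXTURE_2D, 0, GL_TEXTURE_COMPRESSED, &isCompressed );

if( isCompressed == GL_TRUE ) {
    // Only meaningful (and legal) once the level really is stored in a compressed format.
    GLint compressedSize = 0;
    glGetTexLevelParameteriv( GL_TEXTURE_2D, 0, GL_TEXTURE_COMPRESSED_IMAGE_SIZE, &compressedSize );
}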

Based on the article, I created a Gist that, for me, results in a compressed texture that I can render. To try for yourself, create an empty Cinder project using TinderBox and copy-paste the code.

I use a neat trick to allow debugging: glEnable( GL_DEBUG_OUTPUT ). You can see all errors as human-readable messages in the output window.
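
For reference, the debug-output setup usually looks something like this (a minimal sketch assuming a GL 4.3+ context; the Gist may wire it up slightly differently):

#include "cinder/app/App.h"
#include "cinder/gl/gl.h"

// Prints every message the driver emits; the APIENTRY calling convention matters on 32-bit Windows.
void APIENTRY debugMessageCallback( GLenum source, GLenum type, GLuint id, GLenum severity,
                                    GLsizei length, const GLchar *message, const void *userParam )
{
    ci::app::console() << "GL debug: " << message << std::endl;
}

void enableGlDebugOutput()
{
    glEnable( GL_DEBUG_OUTPUT );
    glEnable( GL_DEBUG_OUTPUT_SYNCHRONOUS ); // report errors on the offending call, not later
    glDebugMessageCallback( debugMessageCallback, nullptr );
}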

Place breakpoints in the code to step through it.

Let’s first establish that this code works and then take it from there.

-Paul

Hi Paul,

Thank you so much for this. It works! I also managed to make it work in my block, which was a cause for a small celebration.

I had to change glTexImage2D to glCompressedTexImage2D, because HAP already provides compressed frames, so compressing via glTexImage2D was redundant and made the FPS of a FullHD movie drop to about 13.

However, glCompressedTexImage2D needs an imageSize parameter. If the internal format is HapTextureFormat_RGB_DXT1, which in OpenGL parlance apparently equals GL_COMPRESSED_RGB_S3TC_DXT1_EXT, you can calculate the image size with dataLength = width * height / 2;. For the other HAP formats, however, the imageSize on the OF side is calculated as shown below:

if (textureFormat == HapTextureFormat_RGB_DXT1) {
    imageSize = width * height / 2;
} else {
    glBindTexture(GL_TEXTURE_2D, texData.textureID);
    glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_COMPRESSED_IMAGE_SIZE, &imageSize);
}

But since glBindTexture in combination with glGetTexLevelParameteriv apparently doesn't work BEFORE glCompressedTexImage2D, I have to find another way to calculate imageSize.
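
For now, one way around it seems to be computing the size directly from the DXT block layout (a sketch with a hypothetical helper name; S3TC stores 4x4 pixel blocks, 8 bytes per block for DXT1 and 16 bytes per block for DXT3/DXT5, which HAP Alpha and HAP Q use):

// Hypothetical helper: compressed size from dimensions and block size, rounding
// the width and height up to whole 4x4 blocks.
size_t dxtImageSize( int width, int height, GLenum internalFormat )
{
    const size_t blocksWide    = ( width + 3 ) / 4;
    const size_t blocksHigh    = ( height + 3 ) / 4;
    const size_t bytesPerBlock = ( internalFormat == GL_COMPRESSED_RGB_S3TC_DXT1_EXT ) ? 8 : 16;
    return blocksWide * blocksHigh * bytesPerBlock; // e.g. 1920x1080 DXT5 -> 2,073,600 bytes
}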

I’ll keep digging, but if anyone has any ideas, feel free to share. I guess we are moving pretty deep into HAP territory, so I understand if I’m on my own from now on :slight_smile:

Thanks again for all the help so far.
Mitja

1 Like

Hey,

Very nice that there is some progress! I believe the following is the root of your problem when it comes to using only Cinder's API:

If the textureData is empty, then no initial upload will happen, hence the problems afterwards. I believe that if you create the texture as shown below (skip texData, since it's empty either way, and let Cinder do an initial allocation with the appropriate formats), then binding the texture and querying the imageSize should work.

This is based on my understanding after reading the documentation for glCompressedTexImage2D, and more specifically the part describing how the texture image data is specified. That is what makes me believe your problem originates from this call being skipped: because the texture data is empty, no initial upload is happening.

gl::Texture2d::Format fmt;
// You might try GL_COMPRESSED_RGBA also here
fmt.setInternalFormat( GL_COMPRESSED_RGBA_S3TC_DXT5_EXT );
fmt.setTarget( GL_TEXTURE_2D );
mTexture = gl::Texture2d::create( 1920, 1080, fmt ); 
{
    gl::ScopedTextureBind stb( mTexture->getTarget(), mTexture->getId() );
    GLint dataLength = 0;
    glGetTexLevelParameteriv( GL_TEXTURE_2D, 0, GL_TEXTURE_COMPRESSED_IMAGE_SIZE, &dataLength );
}

This should do a default data allocation with the correct formats and hopefully then you can query the info you need with glGetTexLevelParameteriv.
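
After that, uploading each decoded frame should be a matter of something like the following (a sketch with placeholder names; frameData would be a pointer to your decoded HAP frame and dataLength the size you just queried):

// Re-use the allocated storage each frame; the format passed here must match
// the internal format the texture was created with.
glCompressedTexSubImage2D( GL_TEXTURE_2D, 0, 0, 0, 1920, 1080,
                           GL_COMPRESSED_RGBA_S3TC_DXT5_EXT,
                           dataLength, frameData );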

Hope I'm not adding more confusion!

Cheers,
Petros

Hey Petros,

I have so many commented-out code blocks that I hardly know what's different from the initial version, but that actually works! Wow. I've got FullHD HAP, HAP Alpha and HAP Q movies playing back at 60 FPS.

Thanks so much, guys! It looks like we've solved this. I can't thank you enough. We've been poking at this HAP problem every now and then for around a year, and now, finally, it seems we are there.

I'll clean this sample up a bit and share it on GitHub in around two weeks.

One last thing left for me to do is to fix the fact that the texture is flipped both vertically and horizontally. I guess I'll write a simple shader for that, unless you guys can think of a better solution.
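
For example, flipping the quad at draw time with the model matrix might avoid a shader altogether (just a sketch, assuming the usual ci / ci::app namespaces inside draw()):

// Mirror the destination quad around the window centre instead of touching the texture.
gl::ScopedModelMatrix scopedModel;
gl::translate( getWindowCenter() );
gl::scale( vec2( -1, -1 ) );        // flip both axes
gl::translate( -getWindowCenter() );
gl::draw( mTexture, Rectf( getWindowBounds() ) );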

Thanks for all the help, again. You’ve been tremendous!

:rainbow: :beer: :beers: :rainbow:


That's great, Mitja!

The reason this works is that you are no longer supplying empty texture data when creating the texture. With the specific constructor you had been using up until now, Cinder checks the texture data and only allocates storage for the texture if data is actually available. Since you were passing empty data, no texture storage was created.

With the constructor I suggested, Cinder allocates a default (null data) storage at the correct size and format, so you end up with a valid texture that you can actually fill with your data.
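
Roughly speaking (this is just my understanding of what the constructor ends up doing, not Cinder's exact code), the allocation boils down to something like:

// Specify level 0 with a null pointer: storage now exists in the compressed internal
// format, so GL_TEXTURE_COMPRESSED_IMAGE_SIZE can be queried and the real frames can
// be uploaded afterwards with glCompressedTexSubImage2D. textureId is the texture's GL name.
glBindTexture( GL_TEXTURE_2D, textureId );
glTexImage2D( GL_TEXTURE_2D, 0, GL_COMPRESSED_RGBA_S3TC_DXT5_EXT,
              1920, 1080, 0, GL_RGBA, GL_UNSIGNED_BYTE, nullptr );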

Hope this makes sense.

Does the following help?

...
gl::Texture2d::Format fmt;
fmt.loadTopDown( true );
...

It makes perfect sense. Thanks Paul! You are a star.

I'm off on vacation now. I'll flip the texture and post the sample when I get back.

Cheers,

M


Hello again,

My vacation was sadly cut short, but that allowed me to upload a working sample sooner. In the end, using fmt.loadTopDown didn't work for some reason, but I managed to flip the texture another way.

As I mentioned here, the majority of the port was done by @dave.

The sample can be downloaded here.

Thanks again for all your help!
Mitja