Extract/set Channel <--> Texture; quickly fades out

Hi there, I’m trying to repeatedly modify a texture, but it quickly “fades out”. I suspect this has to do with gamma correction, or with the format/linearity of the Channel data differing from the Texture format…? It’s a pretty basic example, though – how can I get the following code to extract a Channel and set it back into a Texture while keeping the data intact?

    ci::Channel chan(256, 256);

    // Initialize it so that it has a vertical gradient, from white to black (top-bottom)
    auto iter = chan.getIter();
    uint8_t v = 255;
    while (iter.line()) {
        while (iter.pixel()) {
            iter.v() = v;
        }
        --v;
    }

    mTex = gl::Texture2d::create(chan);
    // If you draw mTex here, it will look correct (white at top, black at bottom)

    // Extract the data as a Channel, then immediately set it back (unchanged)
    ci::Channel extractChan(mTex->createSource());
    mTex->update(extractChan);
    // If you draw mTex here, it will be very dark (likely a gamma-conversion issue?)

I’m sure I’m missing something basic…but don’t see it from reading through the docs/API.

Thanks,
Glen.

I think the problem seems to be that the constructor ImageSourceTexture() doesn’t recognize a single-channel 8-bit source as special, so it uses the default case with CM_RGB color model and GL_RGBA format.

After much investigation, I tested using Channel32f instead of Channel (aka Channel8u), with a Texture internalFormat of GL_R32F and a dataType of GL_FLOAT, and that seems to work. But the 8-bit greyscale case seems like a “common” (or at least useful) case that should work, no?
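For anyone hitting the same thing, here’s a minimal sketch of the workaround described above – it assumes the standard Cinder `Channel32f`, `gl::Texture2d::Format`, and `Texture2d::update()` APIs, and I haven’t compiled this exact snippet, so treat it as illustrative:

```cpp
// Workaround sketch: use a 32-bit float channel and explicitly request a
// single-channel float texture, so the data round-trips without being
// reinterpreted as RGB.
ci::Channel32f chan( 256, 256 );

// Same vertical gradient, white (1.0f) at top to black (0.0f) at bottom
auto iter = chan.getIter();
float v = 1.0f;
while( iter.line() ) {
    while( iter.pixel() ) {
        iter.v() = v;
    }
    v -= 1.0f / 255.0f;
}

// Ask for a single-component float texture explicitly
auto fmt = ci::gl::Texture2d::Format()
               .internalFormat( GL_R32F )
               .dataType( GL_FLOAT );
mTex = ci::gl::Texture2d::create( chan, fmt );

// Round-trip: extract the data and set it straight back
ci::Channel32f extractChan( mTex->createSource() );
mTex->update( extractChan );
// Drawing mTex here still shows the original gradient
```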

Anyhow, at least now I’ve got a workaround…

Any specific reason you’re not using the good ol’ ping-pong method? ImageSourceTexture::createSource() is extremely expensive; I’ve only ever used it to debug and save a texture to file.

Cheers,
Rich

Thanks for the reply. I’m just creating the texture once, then I want to update it. So my issue is not with performance, but with the way the 8-bit data is interpreted incorrectly (as RGB colour) for single-channel textures.

Or maybe I misunderstand what you’re proposing with your ping-pong comment (I do know about that technique for double buffering). My data is not colour info; it’s numerical data produced by my program (actually audio data) that I want to use on the GPU as a texture (with interpolation, etc.).

Thanks – maybe I misread your suggestion; please let me know. In any case, it works as I want with the 32f version, so I’m happy. I just thought it seemed like a bug with single-component 8-bit textures, but maybe not.

G.