Convert sRGB texture to linear in OpenGL

enthusiastic_3d_graphics_pr... :

I am currently trying to implement gamma correction properly in my renderer. I have set my framebuffer to be sRGB with glEnable(GL_FRAMEBUFFER_SRGB), and now I am left with importing the sRGB textures properly. I know of three approaches to do this:

  1. Convert the value in the shader: vec3 realColor = pow(sampledColor, vec3(2.2));
  2. Make OpenGL do it for me (a full call is sketched after this list): glTexImage2D(..., ..., GL_SRGB, ..., ..., ..., GL_RGB, ..., ...);

  3. Convert the values directly:

    for (GLubyte* pixel = image; pixel < image + size; ++pixel) 
        *pixel = GLubyte(pow(*pixel, 2.2f) + 0.5f);
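
For reference, here is a sketch of what the elided arguments of method 2 might look like, assuming 8-bit RGB pixel data in image with dimensions width and height (those names are mine):

    // Hypothetical upload of an sRGB-encoded 8-bit RGB image. GL_SRGB8 (the
    // sized equivalent of GL_SRGB) tells the GPU the texels are sRGB-encoded,
    // so it linearizes them automatically when the texture is sampled.
    glTexImage2D(GL_TEXTURE_2D, 0, GL_SRGB8, width, height, 0,
                 GL_RGB, GL_UNSIGNED_BYTE, image);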
    

Now I'm trying to use the third approach, but it doesn't work:

  1. It is super slow (I know it has to loop through all the pixels, but still).
  2. It makes everything look completely wrong (see the images below).

Here are some images.

  1. No gamma correction: (screenshot)
  2. Method 2 (correction when sampling in the fragment shader): (screenshot)
  3. Something weird when trying method 3: (screenshot)

So now my question is: what's wrong with method 3? It looks completely different from the correct result (assuming that method 2 is correct, which I think it is).

derhass :

I have set my framebuffer to be SRGB with glEnable(GL_FRAMEBUFFER_SRGB);

That doesn't set your framebuffer to an sRGB format. It only enables sRGB conversion if the framebuffer is using an sRGB format already; the only use of the GL_FRAMEBUFFER_SRGB enable state is to let you disable sRGB conversion on framebuffers which do have an sRGB format. You still have to specifically request your window's default framebuffer to be sRGB-capable (or you might be lucky and get one without asking for it, but that differs greatly between implementations and platforms), or you have to create an sRGB texture or render target if you render to an FBO.
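
For illustration, a minimal sketch of the FBO route, assuming width and height are known (the handle names are mine):

    // Create a color attachment with an sRGB internal format. Only with
    // such a format does glEnable(GL_FRAMEBUFFER_SRGB) actually convert
    // linear shader output to sRGB on write.
    GLuint colorTex, fbo;
    glGenTextures(1, &colorTex);
    glBindTexture(GL_TEXTURE_2D, colorTex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_SRGB8_ALPHA8, width, height, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, nullptr);

    glGenFramebuffers(1, &fbo);
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_2D, colorTex, 0);

    glEnable(GL_FRAMEBUFFER_SRGB);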

  3. Convert the values directly:

    for (GLubyte* pixel = image; pixel < image + size; ++pixel) 
        *pixel = GLubyte(pow(*pixel, 2.2f) + 0.5f);
    

First of all, pow(x, 2.2) is not the correct formula for sRGB. The real one uses a small linear segment near 0 and a power of 2.4 for the rest; using a power of 2.2 is just a further approximation.
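
For reference, a sketch of the exact sRGB-to-linear decode for a normalized channel value in [0, 1] (the function name is mine):

    #include <cmath>

    // Exact sRGB decode per IEC 61966-2-1: a linear segment near zero,
    // then a power of 2.4 on the offset and rescaled value.
    float srgbToLinear(float c) {
        return (c <= 0.04045f) ? c / 12.92f
                               : std::pow((c + 0.055f) / 1.055f, 2.4f);
    }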

However, the bigger problem with this approach is that GLubyte is an 8-bit unsigned integer type with the range [0, 255], and doing a pow(..., 2.2) on that yields a value in [0, 196964.7]. When that is converted back to GLubyte, the higher bits are ignored, which basically computes the result modulo 256, so you will get really useless values. Conceptually, you need 255.0 * pow(x / 255.0, 2.2), which could of course be further simplified.
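
A fixed version of the loop might look like the sketch below, with the caveat explained next: even done right, this destroys precision on 8-bit data.

    #include <cmath>
    #include <cstddef>

    // Hypothetical corrected loop: normalize each 8-bit channel to [0,1],
    // apply the 2.2 approximation, and re-quantize with rounding.
    // GLubyte is just an unsigned 8-bit byte, so unsigned char stands in
    // for it here to keep the sketch self-contained.
    void linearizeInPlace(unsigned char* image, std::size_t size) {
        for (unsigned char* pixel = image; pixel < image + size; ++pixel) {
            float n = *pixel / 255.0f;
            *pixel = (unsigned char)(255.0f * std::pow(n, 2.2f) + 0.5f);
        }
    }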

The big problem here is that by doing this conversion, you lose a lot of precision due to the non-linear distortion of your value range. If you do such a conversion beforehand, you would have to use higher-precision textures to store the linearized color values (like 16-bit half float per channel); just keeping the data as 8-bit UNORM is a complete disaster. That is also why GPUs do the conversion directly when accessing the texture, so that you don't have to blow up the memory footprint of your textures by a factor of 2.
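
If you did want to store pre-linearized data, a sketch of the higher-precision upload might look like this (linearPixels, width, and height are placeholder names):

    // 16-bit half float per channel: enough precision for linearized
    // color, at roughly twice the memory of an 8-bit sRGB texture.
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB16F, width, height, 0,
                 GL_RGB, GL_FLOAT, linearPixels);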

So I really doubt that your approach 3 would be "importing the sRGB textures properly". It will just destroy fidelity even if done right. Approaches 1 and 2 do not have that problem, but approach 1 is just silly considering that the hardware will do the conversion for you for free, so I really wonder why you even consider 1 and 3 at all.
