Since perception happens when you look at the image on your screen, applying an explicit "perception" step in rendering is actually counterproductive: those nonlinearities would compound.
Gamma correction exists to give better contrast resolution at the brightness levels where the eye is most sensitive when you have to compress your color values down to 8 bits per channel. The display then inverts this to produce ordinary linear-space intensities.
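A quick way to see the bit-depth argument is to round-trip a dark linear value through 8 bits with and without a gamma encode. A minimal sketch, assuming a flat 1/2.2 power rather than the exact piecewise sRGB curve (all names here are mine):

```python
def quantize8(x):
    """Store x in [0, 1] as an 8-bit code and read it back."""
    return round(x * 255) / 255

def roundtrip_linear(v):
    # quantize the linear value directly
    return quantize8(v)

def roundtrip_gamma(v, g=2.2):
    encoded = quantize8(v ** (1 / g))  # store gamma-encoded
    return encoded ** g                # display decodes back to linear

v = 0.002  # a dark linear intensity
err_lin = abs(roundtrip_linear(v) - v)
err_gam = abs(roundtrip_gamma(v) - v)
# the gamma-encoded round trip loses far less dark detail
```

Near black the linear round trip snaps to one of only a handful of codes, while the gamma encode spreads many more of the 256 codes over the darks, where the eye can actually tell them apart.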
There's more to the story than that, even. There's also the part where old CRT monitors had an inherent nonlinearity of roughly the same exponent: phosphor light output was approximately the input signal raised to a power of about 2.2, so encoding with the inverse (~1/2.2) cancelled it out. Newer display technologies emulate this nonlinearity so they remain compatible with the same signals/data.
But the point that remains today is not the bits (shaders work with floats internally), nor the response curve of a CRT (almost nobody uses those any more), but the fact that the physics calculations of light operate on linear quantities (proportional to a number of photons), so you had better do those in linear space.
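The classic demonstration is averaging black and white. A minimal sketch, assuming a flat 2.2 display gamma for simplicity (function names are mine, not from any library):

```python
GAMMA = 2.2

def decode(e):           # encoded signal -> linear light
    return e ** GAMMA

def encode(l):           # linear light -> encoded signal
    return l ** (1 / GAMMA)

black, white = 0.0, 1.0  # encoded values

# naive: average the encoded values directly
wrong = (black + white) / 2
# correct: average photons in linear space, then re-encode
right = encode((decode(black) + decode(white)) / 2)
# `wrong` (0.5) displays at 0.5**2.2, about 22% of white's light output;
# `right` (~0.73) displays at the intended 50%
```

The same mismatch shows up in any lighting, blending, or filtering done directly on encoded values.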
Then at the end you must convert to perceptual space, which can be done in a number of ways. I'm not sure how much of a win replacing one pow(color, 1.0/2.2) with a sqrt is at the end of a fragment shader. Especially since pow(color, 1.0/2.2) is already a quick approximation by itself; there are much fancier curves for converting to perceptual space (ones that don't desaturate the darks as much, for instance).
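For what it's worth, sqrt (a 1/2 power) only roughly tracks the 1/2.2 power (~0.4545); sampling the difference across [0, 1] puts the gap at a few percent, worst in the darks. A rough accuracy check, not a shader benchmark:

```python
# compare sqrt (exponent 0.5) against the 1/2.2 power as a
# linear -> perceptual encode, sampled across the unit range
max_err = max(abs(v ** 0.5 - v ** (1 / 2.2))
              for v in (i / 1000 for i in range(1001)))
# max difference is roughly 0.035, i.e. several 8-bit code values
```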
Accurate. The 2.2 and various hacks compensate for the baked-in hardware nonlinearity.
Although perception is nonlinear, the nonlinearity here specifically compensates for the hardware of standard sRGB displays. After all, as noted above, why would you apply a nonlinear correction for perception when the display already has one baked in?
What many miss is that standard LCD technology, despite being inherently linear, has that low-level hardware nonlinearity baked in. Typically it is a flat 2.2 power function, although other displays or modes may use different transfer functions.
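For reference, the actual sRGB encode is not a flat power: it has a small linear toe near black and a 2.4-exponent segment scaled so that the overall curve approximates a 2.2 power. A sketch comparing the two (constants are the published sRGB ones):

```python
def srgb_encode(l):
    """Piecewise sRGB transfer function (linear light -> encoded)."""
    if l <= 0.0031308:
        return 12.92 * l               # linear toe near black
    return 1.055 * l ** (1 / 2.4) - 0.055

def flat_gamma_encode(l, g=2.2):
    return l ** (1 / g)

# the curves agree closely over most of the range...
mid_diff = abs(srgb_encode(0.18) - flat_gamma_encode(0.18))
# ...but diverge near black, where sRGB goes linear
dark_diff = abs(srgb_encode(0.001) - flat_gamma_encode(0.001))
```

The linear toe exists to avoid an infinite derivative at zero, which matters for noise and quantization in the deepest shadows.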
The irony is that the nonlinear 2.2 function adopted in most imaging yields a signal closer to perceptually uniform, which then gets further nonlinear treatment via tonemapping, SMPTE ST 2084, or other nonlinear adjustments made for technical or aesthetic reasons. In the case of raytracing, a bare 2.2 adjustment is woefully inadequate.
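To illustrate the raytracing point: scene-referred radiance is unbounded, so a bare 2.2 encode just clips everything above 1.0. A minimal sketch using Reinhard (x / (1 + x)) purely as a stand-in for a real tonemapper:

```python
def reinhard(x):
    # simple global tonemap: compresses [0, inf) into [0, 1)
    return x / (1.0 + x)

def encode(l, gamma=2.2):
    return l ** (1 / gamma)

hdr = 4.0                        # raytraced radiance, above display range
naive = encode(min(hdr, 1.0))    # clip + 2.2: highlight detail is gone
mapped = encode(reinhard(hdr))   # tonemap first, then encode
```

With the naive path, every radiance value above 1.0 maps to the same white; the tonemap keeps the highlights distinguishable before the display encode is applied.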
PS: The CIE 1931 model, from which the RGB encoding model is effectively derived, was built on visual energy. That is, it doesn't model "reality" so much as the psychophysical byproduct that happens in the brain. Luckily, the base model, XYZ, is linear, with some extremely nuanced caveats. As a result, raytracing with RGB tristimulus models sort of works.
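Concretely, linear-light RGB relates to XYZ by a plain 3x3 matrix, and that linearity is exactly what lets summed light contributions survive the transform. A sketch using the commonly published sRGB/D65 matrix (treat the coefficients as illustrative):

```python
M = [
    [0.4124, 0.3576, 0.1805],  # X row
    [0.2126, 0.7152, 0.0722],  # Y row (luminance weights)
    [0.0193, 0.1192, 0.9505],  # Z row
]

def rgb_to_xyz(rgb):
    """Linear-light sRGB -> CIE XYZ via matrix multiply."""
    return [sum(M[r][c] * rgb[c] for c in range(3)) for r in range(3)]

# linearity: the transform of a sum equals the sum of transforms, which
# is why adding up linear RGB light contributions in a raytracer works
a = rgb_to_xyz([0.2, 0.3, 0.4])
b = rgb_to_xyz([0.1, 0.1, 0.1])
s = rgb_to_xyz([0.3, 0.4, 0.5])
```

The caveats come from spectra: two different spectra can map to the same tristimulus values, so an RGB raytracer models the perceptual result, not the physical spectrum.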