
The problem is that we mix up the physical and the perceptual, including in our language. If you look at the physical stuff, there's nothing about this specific range of EM radiation that is different from UV or IR light (or beyond). The physical stuff is not unique; our reading of it is. Therefore, color is not a physical thing.

And so when I say "color", I mean only the construction that we make out of the physical thing.

We project these constructions back outside of us (e.g. "the apple is red"), but we must not fool ourselves that the projection is the thing, especially when we try to be more precise about what is happening.

This is why I'm saying a 3D color model is very far from modelling color (the brain thing) at all. But it's not purely physical either, otherwise it would just be a spectral band or something. So it's pseudo-perceptual: the physical stuff, tailored to the very first bits of anatomy we use to read that physical stuff. It's stimulus encoding.

If you build a color model, it's therefore always perceptual, and it needs to be evaluated against what you are trying to model: perception. You create a model to predict things. RGB and all the other models based on three values in a vacuum will always fail at predicting color (brain!) when the stimulus's surround is more complex.



There’s a valid point in there somewhere, but you’re also saying some stuff that seems hyperbolic and is getting harder to agree with. You’re right that perception is complicated, and I agree with you when you say 3D models don’t capture all of perception. That is true. It does not follow that people can’t use 3D models for lots of color tasks. Again, it always depends on your goals. You’re making abstract and general claims without stating your goals.

It’s fine for you to think of perception when you say color, but that’s not what everyone means, and therefore, you’re headed for miscommunication when you make assumptions and/or insist on non-standard definitions of these words.

Physical color is of course a thing. (BTW, it seems funny to say it’s not a thing after you introduced the term physical-color to this thread.) Physical color can mean, among other things, the wavelength distribution of light power. A physical color model is also a thing, it can include the quantized numerical representation of a spectral power distribution. Red can mean 700nm light. Some people, especially researchers and scientists, use physical color models all the time. You’re talking about meanings that are more specific than the general terms you’re using, so maybe re-familiarizing yourself with the accepted definitions of color and color model would help? https://en.wikipedia.org/wiki/Color_model

Again, it’s fine to talk about perception and human vision, but FWIW the way you’re talking about this makes it seem like you’re not understanding the specific goals behind 3D color spaces like LAB. Nobody is claiming or fooling themselves to think they solve all perception problems or meet all possible goals, so it seems like a straw man to keep insisting on something that was never an issue in this thread. If you want to talk about 3D models not being good enough for perception, then please be more precise about your goals. That’s why I asked what use cases you’re thinking of, and we haven’t discussed a goal that justifies needing something other than a 3D color model - color constancy illusions do not make that point.


Unfortunately, it seems like we will not reach any agreement here.


Honestly I haven't read the whole thread, but I think you're mixing in stuff like green and blue being called by the same word in some languages, or Ancient Greek completely missing a word for blue.

What I was thinking is along the lines of showing a real life scene to ten random people - like a view of a city park outside of an office window - and then showing them a picture of said scene on a computer screen using only 256 colors (quantization) and asking them if it looks the same.
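The collapsing of nearby shades under quantization is easy to demonstrate. A toy sketch of a fixed 256-color palette, using uniform 3-3-2 bit quantization (real quantizers build adaptive palettes per image, but the loss of nearby shades is the same idea):

```python
def quantize_332(rgb):
    """Map an 8-bit (r, g, b) triple onto a fixed 256-entry palette:
    3 bits for red, 3 for green, 2 for blue. 8 * 8 * 4 = 256 cells."""
    r, g, b = rgb
    return (r >> 5, g >> 5, b >> 6)

def dequantize_332(cell):
    """Reconstruct a representative color from a palette cell."""
    r, g, b = cell
    return (r * 255 // 7, g * 255 // 7, b * 255 // 3)

# Two distinguishable sky-blue shades collapse onto the same entry:
a, b = (100, 150, 220), (110, 158, 230)
print(quantize_332(a) == quantize_332(b))  # True: both land in (3, 4, 3)
```

Viewers asked "does this look the same as the real scene?" are effectively judging how visible these collapses are.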

Or modeling a 3D photorealistic scene of a room in a video game, then switching off the light and asking the player whether the scene still looks realistic after we changed the colors, or whether we stumbled into the uncanny valley.

The simplest hands-on experiment I can think of is putting yourself in the shoes of an oil painter and thinking about creating a gradient between two colors, say blue and green (or any other pair, it doesn't really matter). Now try to imagine that gradient in your mind, then try to recreate it with a graphics program like Photoshop. If you go down this route, the gradient will seem odd. Unnatural.
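Part of that oddness is reproducible in a few lines: most programs interpolate the 8-bit sRGB values directly, ignoring that sRGB is gamma-encoded, so the middle of the gradient comes out too dark. A sketch comparing the naive midpoint with a gamma-correct (linear-light) one:

```python
def srgb_to_linear(c8):
    """Decode an 8-bit sRGB channel value to linear light (0..1)."""
    c = c8 / 255.0
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def linear_to_srgb(x):
    """Encode linear light (0..1) back to an 8-bit sRGB channel value."""
    c = 12.92 * x if x <= 0.0031308 else 1.055 * x ** (1 / 2.4) - 0.055
    return round(c * 255)

def midpoint_naive(c1, c2):
    # What most programs do: average the encoded 8-bit values.
    return tuple(round((a + b) / 2) for a, b in zip(c1, c2))

def midpoint_linear(c1, c2):
    # Gamma-correct: decode to linear light, average, re-encode.
    return tuple(linear_to_srgb((srgb_to_linear(a) + srgb_to_linear(b)) / 2)
                 for a, b in zip(c1, c2))

blue, green = (0, 0, 255), (0, 255, 0)
print(midpoint_naive(blue, green))   # (0, 128, 128) -- a dark, muddy teal
print(midpoint_linear(blue, green))  # (0, 188, 188) -- noticeably brighter
```

Even the linear-light version only fixes the physics of light addition; it still says nothing about how paint mixes or how the gradient is perceived.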

All the common standards we've been using for the last 30 years (RGB, HSL, HSV, etc.) fall flat. They are not so far off that you'd call them "uncanny" (as in "uncanny valley"), but they seem wrong if you look closely enough.

To actually simulate mixing two blobs of oil paint you need arcane algorithms like Kubelka-Munk (yet another groundbreaking discovery in IT made by reading 100-year-old research).
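The core of single-wavelength, two-constant Kubelka-Munk is surprisingly short. A hedged sketch for an opaque paint layer, ignoring real-world refinements like the Saunderson surface correction (reflectances are per-wavelength; a full simulation repeats this across the spectrum):

```python
import math

def k_over_s(reflectance):
    """Kubelka-Munk: convert the reflectance (0..1) of an opaque paint
    layer into its absorption/scattering ratio K/S."""
    return (1.0 - reflectance) ** 2 / (2.0 * reflectance)

def reflectance(ks):
    """Invert a K/S ratio back to reflectance."""
    return 1.0 + ks - math.sqrt(ks * ks + 2.0 * ks)

def mix(r1, r2, c1=0.5, c2=0.5):
    """Mix two paints by concentration: K/S ratios combine (roughly)
    linearly; the reflectances themselves do not."""
    return reflectance(c1 * k_over_s(r1) + c2 * k_over_s(r2))

# A bright paint (R=0.8) mixed 50/50 with a dark one (R=0.1):
print(mix(0.8, 0.1))            # ~0.17 -- far darker than the
print(0.5 * (0.8 + 0.1))        # 0.45 naive average would suggest
```

The asymmetry (dark pigments dominating the mix) is exactly the behavior painters know and naive RGB averaging misses.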

All in all, take a look at this video. I know it's 40 minutes long, but this topic has been a pet peeve of mine for almost 20 years, and it's the best and most comprehensible take on the subject: https://www.youtube.com/watch?v=gnUYoQ1pwes


That video is excellent, thanks for sharing. BTW it does back up the point @subb was making, that the experience of color is a perceptual thing; “light isn’t what makes something a color. As we’ve seen, colors are ultimately a psychological phenomenon.” Which is true.

FWIW I suspect the issue in this thread is that color models and color spaces are not necessarily modeling perception. The word color is overloaded and has multiple meanings. Just because color experience is perception, that doesn’t mean “color” is always referring to perception nor that phrases like “spectral color” or “color model” are referring to perceived experience, and they’re often not.

A color model is any numeric representation that captures the information needed to recreate a color, and it can be a physical or spectral color model, a stimulus model (cone response), or a perception model. Being able to recreate a color does not imply that the information is perceptual. Spectral “color” measurements are just pure physics, and spectral color models are just modeling pure physics.
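As a sketch of that reduction from pure physics to three numbers: a tristimulus value comes from integrating a spectral power distribution against color matching functions. The Gaussians below are toy stand-ins for the real CIE 1931 CMF tables (the real x̄ curve even has a secondary lobe near 450nm), purely for illustration:

```python
import math

def gauss(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2)

# Toy single-Gaussian stand-ins, roughly centered where the real
# CIE 1931 xbar, ybar, zbar curves peak. Use published CMF tables
# for anything serious.
def xbar(nm): return gauss(nm, 600, 50)
def ybar(nm): return gauss(nm, 555, 50)
def zbar(nm): return gauss(nm, 450, 30)

def spectrum_to_xyz(spd, lo=380, hi=780, step=5):
    """Reduce a spectral power distribution (a function of wavelength
    in nm) to a 3D tristimulus value by integrating against the CMFs."""
    X = sum(spd(nm) * xbar(nm) * step for nm in range(lo, hi, step))
    Y = sum(spd(nm) * ybar(nm) * step for nm in range(lo, hi, step))
    Z = sum(spd(nm) * zbar(nm) * step for nm in range(lo, hi, step))
    return X, Y, Z

# The spectrum is a function; the result is just three numbers --
# all remaining spectral detail is discarded (hence metamerism).
X, Y, Z = spectrum_to_xyz(lambda nm: 1.0)  # flat (equal-energy) spectrum
```

The projection is many-to-one: physically different spectra can land on the same tristimulus triple, which is why a 3D representation can recreate a color match without retaining the physics.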

By and large, the color matching experiments that led to our CIE standards mostly measured average cone response for an average observer, and were never intended nor designed to capture effects like adaptation and surround. This is why many of the 3D color spaces that trace their lineage to those experiments, especially the “perceptual” ones, are primarily modeling cone response and not perception. CIE color spaces do involve some kind of very averaged-out perception of color, in a static, unchanging, well-adapted, no-surround kind of way, which is for example why the “red” color matching function goes negative. [1]

There are people doing stuff like adaptation and spatial tone mapping in video games and research, and they’re using more tools than just 3D color spaces for that. That’s the kind of discussion I was hoping @subb would get into, i.e., what specific cases require going beyond the CIE models.

[1] https://yuhaozhu.com/blog/cmf.html



