
To generate a natural-color image of a biological sample using lasers across the human eye's visible range, you'd need a great many lasers covering the entire band. This yields hundreds of datasets, one per wavelength (ignoring issues like laser-induced fluorescence, which can be managed with spectral filtering).

The trick is taking the data from those hundreds of single-wavelength images and assembling them into an image the human brain interprets as natural color, aka colorimetric rendering: mapping the 'hyperspectral cube' onto the three-color-cone system the eye's retina employs (plus a good deal of neural processing). There's a well-defined set of equations for projecting each pixel's spectrum onto an RGB display for human visualization.
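A minimal sketch of that colorimetric rendering step, assuming a NumPy array as the hyperspectral cube. The Gaussian curves below are rough analytic stand-ins for the CIE 1931 color-matching functions (the real curves are tabulated, not analytic), and the XYZ-to-RGB matrix is the standard linear sRGB transform; the synthetic random "sample" is purely illustrative.

```python
import numpy as np

def gaussian(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2)

def cmf(wavelengths_nm):
    """Rough Gaussian approximations of the CIE 1931 x̄, ȳ, z̄ curves."""
    x = 1.06 * gaussian(wavelengths_nm, 599, 38) + 0.36 * gaussian(wavelengths_nm, 446, 19)
    y = 1.01 * gaussian(wavelengths_nm, 557, 47)
    z = 1.78 * gaussian(wavelengths_nm, 449, 23)
    return np.stack([x, y, z], axis=-1)  # shape (N, 3)

def hyperspectral_to_rgb(cube, wavelengths_nm):
    """Collapse a (H, W, N) hyperspectral cube to an (H, W, 3) RGB image.

    cube: per-pixel spectra, one plane per laser wavelength.
    """
    xyz = cube @ cmf(wavelengths_nm)          # integrate each spectrum against the CMFs
    xyz = xyz / (xyz.max() + 1e-12)           # normalize into [0, 1]
    # Linear sRGB transform (XYZ -> linear RGB); clip out-of-gamut values.
    m = np.array([[ 3.2406, -1.5372, -0.4986],
                  [-0.9689,  1.8758,  0.0415],
                  [ 0.0557, -0.2040,  1.0570]])
    rgb = np.clip(xyz @ m.T, 0.0, 1.0)
    return rgb ** (1 / 2.2)                   # rough gamma encoding for display

# Demo: 200 wavelength planes over the visible band, tiny synthetic sample
wl = np.linspace(400, 700, 200)
cube = np.random.default_rng(0).random((4, 4, wl.size))
rgb = hyperspectral_to_rgb(cube, wl)
print(rgb.shape)  # (4, 4, 3)
```

In practice you'd use the tabulated CMF data and a proper white-point normalization, but the structure is the same: a matrix product that collapses hundreds of spectral planes into three tristimulus channels.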

There's a really strange example, the mantis shrimp, which used to be thought to have rich color vision in a narrow band. Now it's thought to be a lot more direct: a kind of color vision with little neural processing involved, each photoreceptor class sampling slightly different wavelengths and signalling directly to the mantis brain, such as it is:

Thoen et al. (2014) – "A different form of color vision in mantis shrimp" (Science)

https://www.science.org/doi/10.1126/science.1245824
