Video is typically transmitted with one of two level ranges. The terminology here is tricky (and this is about value ranges, not colorspaces per se): the usual names are 'studio levels' (also called 'limited range' or 'video levels') and 'full range' (also 'PC levels' or 'computer levels').
Anyway, all you really need to know is that there are two main standards for the range of values in each RGB color channel: 0-255 (full range) and 16-235 (limited range). If you're using the broadcast standard (16-235), 16 is black and 235 is white, and values below 16 or above 235 are nominally out of range. But a 24bpp/32bpp surface on a computer can store the full 0-255 range no matter what, so there are various scenarios where graphics code needs to decide what to do with those values. Discard them? Clip (saturate) them to 16/235? Rescale everything so that the whole buffer fits in 16-235?
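The conversion between the two ranges is just a linear rescale over the 219-step limited span. Here's a minimal sketch in Python (function names are mine, not from any standard API):

```python
def full_to_limited(x: int) -> int:
    """Rescale a full-range (0-255) channel value to limited range (16-235)."""
    # 219 = 235 - 16 is the limited-range span; 0 maps to 16, 255 maps to 235.
    return 16 + round(x * 219 / 255)

def limited_to_full(x: int) -> int:
    """Rescale a limited-range (16-235) channel value to full range (0-255)."""
    # Inputs outside 16-235 would map outside 0-255, so clip (saturate) them.
    return max(0, min(255, round((x - 16) * 255 / 219)))

# Black and white map across as expected:
# full_to_limited(0) == 16, full_to_limited(255) == 235
# limited_to_full(16) == 0, limited_to_full(235) == 255
```

Note the integer rounding: a round trip through the smaller range quantizes values slightly, which is one reason repeated conversions degrade the picture.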
It gets trickier when interacting with other hardware. Your TV might expect 16-235 values, in which case your video card and/or software need to rescale 0-255 values down to 16-235 so they look correct on your TV. Your game console might be putting out 16-235 because it assumes you have a TV like that, and when you run that console through an HDMI capture device, the device might be doing a 16-235 -> 0-255 rescale behind your back to 'fix' your video. Then when you change your console settings to output 0-255, your capture device is still doing the rescale, and now it's clipping everything below 16 to black and everything above 235 to white.
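That capture-device failure mode is easy to see numerically: apply a 16-235 -> 0-255 expansion to a signal that is already full range, and all the shadow and highlight detail gets crushed. A sketch (Python, illustrative only):

```python
def expand_limited_to_full(x: int) -> int:
    """What the capture device does: assume input is 16-235, stretch to 0-255."""
    # Anything that lands outside 0-255 after stretching is clipped.
    return max(0, min(255, round((x - 16) * 255 / 219)))

# The console is set to full range, so it legitimately outputs values like
# 8 and 247. The capture device's unwanted expansion destroys them:
for shadow in (0, 8, 15):
    assert expand_limited_to_full(shadow) == 0       # shadow detail crushed to black
for highlight in (236, 247, 255):
    assert expand_limited_to_full(highlight) == 255  # highlights blown to white
```

Everything in 0-15 collapses to a single black level and everything in 236-255 to a single white level, which is exactly the saturation described above.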
It's a common source of confusion when dealing with video, and if you're unlucky, multiple stages in a display pipeline will each mishandle levels, scaling them to a range that's too small or truncating values, etc.
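To see how quickly a pipeline drifts, here's a sketch (again Python, purely illustrative) of two stages that each wrongly squeeze an already-correct full-range signal down to 16-235:

```python
def compress_full_to_limited(x: int) -> int:
    """One stage wrongly assuming its input is 0-255 and squeezing it to 16-235."""
    return 16 + round(x * 219 / 255)

# After one bogus compression, black sits at 16 and white at 235.
# After a second one, black rises to 30 and white falls to 218:
once_black = compress_full_to_limited(0)     # 16
once_white = compress_full_to_limited(255)   # 235
twice_black = compress_full_to_limited(once_black)
twice_white = compress_full_to_limited(once_white)
# twice_black == 30, twice_white == 218: a washed-out, low-contrast image
```

Each extra mis-applied stage shrinks the usable range further, which is why a picture that has been through several confused levels conversions looks grey and flat.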