
4 bits is ridiculously little. I'm very curious what makes these models so robust to quantization.


Read "The Case for 4-bit Precision": https://arxiv.org/abs/2212.09720

Spoiler: it's the parameter count. As parameter count goes up, precision matters less.

It just so happens that at around 10B+ parameters you can quantize down to 4-bit with essentially no downsides. Models are that big now, so there's no need to waste RAM storing each parameter at unnecessary precision.
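
For intuition, here's a minimal sketch of blockwise absmax round-to-nearest 4-bit quantization in NumPy. The block size, symmetric int4 range, and scaling scheme are illustrative assumptions, not what any particular library does (GPTQ, for example, is considerably more sophisticated):

    import numpy as np

    def quantize_4bit(w, block_size=64):
        # One float scale per block; values map to the symmetric int4 range -7..7.
        w = w.reshape(-1, block_size)
        scale = np.abs(w).max(axis=1, keepdims=True) / 7.0
        q = np.clip(np.round(w / scale), -7, 7).astype(np.int8)
        return q, scale

    def dequantize_4bit(q, scale):
        # Rescale back to float32; rounding error is bounded by half a step per block.
        return (q.astype(np.float32) * scale).ravel()

    w = np.random.randn(4096).astype(np.float32)
    q, s = quantize_4bit(w)
    print("max abs error:", np.abs(w - dequantize_4bit(q, s)).max())

Two int4 values pack into one byte, so the weights shrink roughly 8x versus float32, minus a small overhead for the per-block scales.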


For completeness, there's also another paper demonstrating that you get more accuracy per bit at 4 bits than at any other precision (including 2 and 3 bits).


That's the paper I referenced. But newer research is already challenging it.

'Int-4 LLaMA is not enough - Int-3 and beyond' [0] suggests 3-bit is best for models larger than ~10B parameters when binning is combined with GPTQ.

[0] https://nolanoorg.substack.com/p/int-4-llama-is-not-enough-i...
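
As a back-of-envelope illustration of why the bit width matters at this scale (weights only, ignoring the per-block scale/zero-point overhead that real quantized formats add):

    # Approximate weight memory for a 10B-parameter model at various bit widths.
    params = 10e9
    for bits in (16, 8, 4, 3, 2):
        print(f"{bits:>2}-bit: {params * bits / 8 / 1e9:5.1f} GB")

At ~10B parameters, the step from 4-bit (5.0 GB) to 3-bit (3.75 GB) saves roughly 1.25 GB, which is why squeezing below 4 bits is worth the extra machinery.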



