That's a clever one; I hadn't seen that yet. Thank you.
The hint for me is that the models compress so well; that suggests the information content is much lower than the uncompressed size indicates, which is a good reason to investigate which parts of the model are so compressible and why. I haven't looked at the raw data of these models, but maybe I'll give it a shot. Sometimes you can learn a lot about the structure (built in or emergent) of data just by staring at the dumps.
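If I do, something like this rough sketch is probably where I'd start: compress each tensor separately and compare the ratios. It assumes a local safetensors checkpoint (the path is a placeholder), and a zlib ratio is only a crude proxy for information content.

    import zlib
    from safetensors.numpy import load_file

    # Assumption: a local checkpoint file; adjust the path to whatever model you pulled down.
    tensors = load_file("model.safetensors")

    # Compress each tensor on its own so per-layer differences show up.
    for name, arr in tensors.items():
        raw = arr.tobytes()
        ratio = len(zlib.compress(raw)) / len(raw)
        print(f"{name:60s} shape={arr.shape} -> {ratio:.2%} of raw size")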
That's quite interesting. I hadn't thought of sparsity in the weights as a way to compress models, although it seems like an obvious opportunity in retrospect! I started doing some digging and found https://github.com/SqueezeAILab/SqueezeLLM, though I'm sure there's newer work on this idea.
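I haven't tried it yet, but a quick way to gauge how much of that opportunity exists in a given checkpoint would be to count near-zero weights per tensor. Again just a sketch, assuming a local safetensors file; the 1e-3 cutoff is arbitrary and not something from SqueezeLLM.

    import numpy as np
    from safetensors.numpy import load_file

    # Assumption: a local checkpoint file; swap in the model you want to inspect.
    tensors = load_file("model.safetensors")

    for name, arr in tensors.items():
        # Fraction of weights close enough to zero that a sparse format could skip them.
        near_zero = float(np.mean(np.abs(arr) < 1e-3))
        print(f"{name:60s} {near_zero:.1%} of weights below 1e-3")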