
AI compression is super, super cool... but while standardization is certainly a major issue, isn't the model size a much larger one?

Given that model sizes for decoding seem like they'll be on the order of many gigabytes, it will be impossible to run AI decompression in software; it will need dedicated chips, and ones that are a lot more complex (and expensive?) than today's.

I think AI compression has a good chance of coming eventually, but in 10 years it will still be in research labs. There is absolutely no way it will have made it into consumer chips by then.
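To put the "many gigabytes" worry in rough numbers, here's a back-of-the-envelope sketch in Python (the parameter count and fp16 assumption are mine, not from the thread): just storing a large learned decoder's weights dwarfs an entire conventional codec.

    # Illustrative figures only: a hypothetical 2B-parameter decoder at fp16
    # needs ~4 GB just for its weights, versus a few hundred KB of tables and
    # code for a classic codec.

    def weights_footprint_bytes(num_params: int, bytes_per_param: int = 2) -> int:
        """Rough memory/storage needed just to hold the decoder weights (fp16)."""
        return num_params * bytes_per_param

    print(weights_footprint_bytes(2_000_000_000) / 1e9, "GB")  # -> 4.0 GB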



"Isn't the model size a much larger one?" yap It will probably be different, and systems will have to download the weights and network model, as new models come in, I don't think that we will have a fixed model with fixed weights, the evolution is too fast. Decoding will take place using the AI chip on the device aka "AI accelerator"



