
How can you be so confident when nobody has managed to convincingly beat GPT-4 yet, including OpenAI themselves?

All the evidence is that throwing even textbook-quality data at a model of almost any size just approaches an asymptote a tiny bit above GPT-4.

A better model of some data starts to look increasingly like the data, not like something else beyond the data.
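
For a sense of what "approaching an asymptote" looks like, here is a minimal Python sketch of the Chinchilla-style scaling law from Hoffmann et al. (2022), L(N, D) = E + A / N^alpha + B / D^beta. The constants are that paper's published fits, used purely for illustration; nothing here is a measurement of GPT-4 or of textbook-quality data.

  # Sketch of the Chinchilla scaling-law fit (Hoffmann et al., 2022).
  # Constants are the paper's fitted values, assumed here for illustration only.
  def chinchilla_loss(n_params: float, n_tokens: float) -> float:
      E, A, B = 1.69, 406.4, 410.7    # irreducible loss term and fitted coefficients
      alpha, beta = 0.34, 0.28        # fitted exponents for parameters and data
      return E + A / n_params**alpha + B / n_tokens**beta

  # Doubling the data at a fixed model size gives smaller and smaller loss gains,
  # but the curve only flattens toward the irreducible term E, not a hard wall.
  for tokens in [1e12, 2e12, 4e12, 8e12]:
      print(f"{tokens:.0e} tokens -> loss {chinchilla_loss(70e9, tokens):.4f}")

Under this fit, returns diminish as a power law rather than stopping at a fixed ceiling, which is roughly the disagreement in this thread: whether the flattening we see is a true asymptote or just the slow tail of a power law.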



I don't know about the future. I am only saying that we haven't seen diminishing returns from training transformers on more data / compute yet.



