
LeCun has startlingly bad takes for a major player in the space, to the point that they can only be bad faith and driven by adverse motivations.


This may be part of the reason.

https://www.linkedin.com/posts/yann-lecun_what-meta-learned-...

One might also imagine that as one of the "godfathers of AI" he feels a bit sidelined by the success of LLMs (especially given the above), and wants to project the image of a visionary ahead of the pack.

I actually agree with him that if the goal is AGI and full animal intelligence then LLMs are not really the right path (although a very useful validation of the power of prediction). We really need much greater agency (even if only in a virtual world), online learning, innate drives, prediction applied to sensory inputs and motor outputs, etc.

Still, V-JEPA is nothing more than a pre-trained transformer applied to vision (predicting latent visual representations rather than text tokens), so it is just a validation of the power of transformers, rather than being any kind of architectural advance.
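For what it's worth, here's a rough PyTorch sketch (my own toy code, not Meta's, with made-up shapes and module sizes) of the difference between the two objectives: an LLM-style model scores discrete next tokens with cross-entropy, while a JEPA-style model regresses the latent embeddings of masked patches against a separate target encoder, with no pixel-level reconstruction.

  import torch
  import torch.nn as nn

  DIM, PATCHES, VOCAB = 64, 16, 1000

  # --- LLM-style objective: predict discrete tokens, cross-entropy loss ---
  token_logits = torch.randn(2, PATCHES, VOCAB)           # model output
  target_tokens = torch.randint(0, VOCAB, (2, PATCHES))   # ground-truth tokens
  llm_loss = nn.functional.cross_entropy(
      token_logits.reshape(-1, VOCAB), target_tokens.reshape(-1))

  # --- JEPA-style objective: predict latent embeddings, regression loss ---
  context_encoder = nn.TransformerEncoder(
      nn.TransformerEncoderLayer(d_model=DIM, nhead=4, batch_first=True),
      num_layers=2)
  target_encoder = nn.TransformerEncoder(   # in practice an EMA copy, kept frozen
      nn.TransformerEncoderLayer(d_model=DIM, nhead=4, batch_first=True),
      num_layers=2)
  predictor = nn.Linear(DIM, DIM)           # real predictor also gets target positions

  patches = torch.randn(2, PATCHES, DIM)    # patch embeddings of an image/video clip
  visible = patches[:, :PATCHES // 2]       # context (unmasked) patches
  masked = patches[:, PATCHES // 2:]        # target (masked) patches

  pred_latents = predictor(context_encoder(visible))  # predict in latent space
  with torch.no_grad():
      target_latents = target_encoder(masked)         # no pixel reconstruction
  jepa_loss = nn.functional.smooth_l1_loss(pred_latents, target_latents)

  print(f"token loss: {llm_loss:.3f}  latent loss: {jepa_loss:.3f}")

The backbone in both cases is just a transformer; what changes is the prediction target (discrete tokens vs. continuous latents) and the loss.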



