
"At some point the AI will be good enough to do the tasks that some humans do (it already is!). That's all that really matters here."

What matters to me is whether the "AGI" can reliably solve the tasks I give it, and that also requires reliable learning.

LLMs are far from that. It takes actual human general intelligence to train them into making progress.



> What matters to me is whether the "AGI" can reliably solve the tasks I give it, and that also requires reliable learning.

How many humans do you know who can do that?


Most humans can reliably do the job they are hired to do.


Usually they require training and experience to do so. You can't just drop a fresh college grad into a job and expect them to do it.


But given enough time, they will figure it out on their own. LLMs cannot ever do that.

Once they can, I am open to revisiting my assumptions about AGI.


They may require training, but that training is going to look vastly different. We can start talking about AGI when AI can be trained with as few examples and as little information as humans are, when it can replace human workers 1:1 (in everything we do), and when it can self-improve over time just as humans can.



