
Re your first point: it's not conscious. It has no understanding. It's perfectly possible the model could successfully answer an exam question but fail to reach the same or a similar conclusion when it has to reason its own way there based on the information provided.


Great point. LLMs won't be great at groundbreaking law... but most lawyers aren't either. That is to say, most law isn't cutting edge; the law is mostly a day-to-day administrative matter.


Careful, there are plenty of True Believers on this website who really think these "guess the next word" machines have consciousness and understanding.


I incline towards your view on the subject, but if you call it guessing, you open yourself up to all sorts of rebuttals.



