
Yes: the reason is that the model assigns words positions in a context-dependent vector space and evaluates how they relate by their proximity in that space. The reply it gives is itself drawn from that space, with the "why" in the question weighting it toward producing an "answer."

Video series on the topic: https://www.3blue1brown.com/topics/neural-networks

Which is to say: the "why" it gives those answers is that, within its training data, it is statistically likely that when the words "why did you connect line and log with paper" appear, the text that follows could be "logs are made of wood and lines are in paper." But that is not a specific relation among the three words in the model itself, which is just a complex vector space.
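A toy sketch of the "proximity in a vector space" idea (the vectors below are made up for illustration, not taken from any real model, which would use thousands of learned dimensions): relatedness between words is measured as cosine similarity between their embedding vectors.

```python
import math

# Hypothetical 3-d embeddings; the dimensions have no fixed meaning.
embeddings = {
    "log":   [0.9, 0.8, 0.1],
    "wood":  [0.8, 0.9, 0.0],
    "paper": [0.7, 0.6, 0.3],
    "line":  [0.1, 0.2, 0.9],
}

def cosine(a, b):
    """Cosine similarity: dot product divided by the product of norms."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm

# In this toy space, "log" sits much closer to "wood" than to "line".
print(cosine(embeddings["log"], embeddings["wood"]))
print(cosine(embeddings["log"], embeddings["line"]))
```

Real models refine these positions layer by layer with context, but the basic move of comparing points in a high-dimensional space is the same.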



I definitely think it's doing more than that here (at least inside the vector-space computations). The model probably encodes the paper-wood-log association directly.



