Hacker News

We are not talking about just a vector, but rather vectors trained in a specific way along with cosine similarity. You could easily argue that this system can tell you that one word has a different meaning from another word, or that two words have similar meanings. Further, it learned this without these relationships being explicitly stated.
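The idea can be sketched in a few lines. The toy vectors below are made up for illustration, not trained embeddings; real systems (word2vec, GloVe, and the like) learn vectors with hundreds of dimensions from co-occurrence statistics, but the similarity comparison works the same way.

```python
import math

def cosine_similarity(a, b):
    # Cosine of the angle between two vectors: dot(a, b) / (|a| * |b|).
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Hypothetical 4-dimensional "embeddings" chosen by hand for illustration.
vectors = {
    "cat": [0.9, 0.8, 0.1, 0.0],
    "dog": [0.8, 0.9, 0.2, 0.1],
    "car": [0.1, 0.0, 0.9, 0.8],
}

# Similar words end up with a high cosine similarity,
# dissimilar words with a low one.
print(cosine_similarity(vectors["cat"], vectors["dog"]))
print(cosine_similarity(vectors["cat"], vectors["car"]))
```

No meaning was ever stated explicitly; the geometry of the vectors alone encodes which words are alike.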

Once you start trying to define aspects of cognition explicitly, things get ambiguous very quickly. These conversations also usually go something like this:

1. State a definition.

2. Observe that the computer matches it.

3. Decide the definition is wrong after all and revise it so the computer can't match it.

4. Repeat until we find a definition that excludes the computer.

I think it's a fascinating topic, but the above pattern is fairly disappointing.


