
I have always found neural network diagrams like the RNN one here vague and even slightly misleading. What does it mean that h_t loops onto itself? I know it means "also take h_{t-1} as input", but the diagram itself does not convey that to its primary audience, i.e. someone trying to learn the architecture. See the sketch below.
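
Concretely, the loop just means the update is h_t = f(x_t, h_{t-1}): the hidden state from the previous step is a second input at every step. A minimal NumPy sketch of what the self-loop computes (the weight names, sizes, and the tanh nonlinearity are my own assumptions, not from the article):

    import numpy as np

    def rnn_step(x_t, h_prev, W_xh, W_hh, b_h):
        # The "loop" in the diagram is just this second argument:
        # h_t depends on the current input x_t AND on h_{t-1}.
        return np.tanh(x_t @ W_xh + h_prev @ W_hh + b_h)

    rng = np.random.default_rng(0)
    W_xh = rng.normal(size=(4, 8))   # input -> hidden
    W_hh = rng.normal(size=(8, 8))   # hidden -> hidden (the recurrent edge)
    b_h = np.zeros(8)

    h = np.zeros(8)                  # h_0
    for t in range(5):               # unrolled in time, the loop becomes a chain
        x_t = rng.normal(size=4)
        h = rnn_step(x_t, h, W_xh, W_hh, b_h)  # h_t computed from h_{t-1}

Unrolling the loop over time, as in the for-loop above, is exactly the "t-1, t, t+1" chain-of-nodes picture that the compact diagram hides.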


I came to post the same comment. I was confused by the lack of "t+1" or "t-1" nodes, and then it took me a while to realize I had to connect the "h_t" node back to itself.


If something takes two inputs (e.g. an adder), I'd expect it to have two separate connectors, not a loop that "also" plugs in the second one.



