
There are examples quite far from neural networks; the ones I can think of are broadly optimisation problems:

Many physics problems involve trying to find a function which minimises something (energy, entropy, action), or a state which makes the difference between two quantities zero. Sometimes adjusting many parameters slowly down the gradient is a good way to find these.
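
As a concrete (and deliberately toy) illustration of that pattern, here is a minimal JAX sketch: a row of beads joined by springs, hanging under gravity between two pinned ends, where gradient descent on the total potential energy finds the equilibrium sag. The constants k, g and n are arbitrary values chosen for the example.

    import jax
    import jax.numpy as jnp

    k, g, n = 50.0, 9.8, 20   # spring stiffness, gravitational strength, number of free beads

    def energy(y):
        # Elastic + gravitational potential energy of the bead heights y.
        full = jnp.concatenate([jnp.zeros(1), y, jnp.zeros(1)])  # pin both ends at height 0
        elastic = 0.5 * k * jnp.sum((full[1:] - full[:-1]) ** 2)
        gravitational = g * jnp.sum(y)
        return elastic + gravitational

    grad_E = jax.grad(energy)   # autodiff gives dE/dy directly from the code above

    y = jnp.zeros(n)
    for _ in range(2000):       # plain gradient descent toward the equilibrium shape
        y = y - 1e-3 * grad_E(y)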

In Bayesian statistics, the basic problem is to sample from a distribution which you know only indirectly, by some kind of Monte Carlo method. But the space to sample can be enormous. If I understand right, advanced ways of doing this (such as Hamiltonian Monte Carlo) exploit the gradients of the functions defining the distribution to try to choose samples efficiently.
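
To make "exploit the gradients" concrete, here is a small JAX sketch using unadjusted Langevin dynamics, one of the simpler gradient-informed samplers (Stan's HMC/NUTS is more sophisticated, but relies on the same ingredient: the gradient of the log-density, obtained by autodiff from the model code). The target distribution and step size here are arbitrary choices for illustration.

    import jax
    import jax.numpy as jnp

    def log_prob(x):
        # Log-density (up to a constant) of a correlated 2-D Gaussian.
        cov = jnp.array([[1.0, 0.8], [0.8, 1.0]])
        return -0.5 * x @ jnp.linalg.solve(cov, x)

    grad_logp = jax.grad(log_prob)

    def langevin_step(x, key, step=0.05):
        # Drift along the gradient of the log-density, plus matched noise.
        noise = jax.random.normal(key, x.shape)
        return x + 0.5 * step * grad_logp(x) + jnp.sqrt(step) * noise

    key = jax.random.PRNGKey(0)
    x = jnp.zeros(2)
    samples = []
    for _ in range(1000):
        key, sub = jax.random.split(key)
        x = langevin_step(x, sub)
        samples.append(x)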

People have hacked TensorFlow to do all sorts of things its creators didn't intend, or written tools specialised for one particular domain (like Stan). I guess the excitement is that instead of re-inventing the wheel in each domain, maybe this can be pushed down to become a language feature which everyone above uses.
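
The "language feature" point is easy to demonstrate with a toy example: in JAX (just one of several possible tools here), ordinary numerical Python can be differentiated directly, with no framework-specific graph-building.

    import jax

    def my_model(theta):
        # Any plain numerical code: loops, helper functions, etc.
        total = 0.0
        for i in range(1, 4):
            total = total + theta ** i / i
        return total

    # d/dtheta of (theta + theta^2/2 + theta^3/3) at theta = 2 is 1 + 2 + 4 = 7
    print(jax.grad(my_model)(2.0))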


