
I tried this with a friend last year (using 3-input majority gates and a variety of other gates). We couldn't get backprop to work past one layer, probably because there isn't enough bit precision. Training without backprop turned out to be genuinely hard to pull off, so we gave up.

Does anyone have a decent method to train neural networks without backprop? I think the information bottleneck sort of works, but it's hard to estimate the mutual information without a neural network.
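One family of answers is gradient estimation from forward passes only, e.g. simultaneous perturbation stochastic approximation (SPSA): perturb every parameter at once with random signs and estimate a descent direction from two loss evaluations. A minimal sketch on a hypothetical toy problem (fitting y = 2x + 1 with one linear unit; all names here are illustrative, not from the thread):

```python
import random

random.seed(0)

# Toy regression data: y = 2x + 1.
xs = [0.0, 1.0, 2.0, 3.0]
ys = [2 * x + 1 for x in xs]

def loss(w, b):
    """Mean squared error of a single linear unit."""
    return sum((w * x + b - y) ** 2 for x, y in zip(xs, ys)) / len(xs)

w, b = 0.0, 0.0
lr, eps = 0.02, 0.1
initial = loss(w, b)

for _ in range(2000):
    # Perturb all parameters simultaneously with random +/-1 signs.
    dw = random.choice((-1.0, 1.0))
    db = random.choice((-1.0, 1.0))
    # Two forward passes per step, regardless of parameter count.
    g = (loss(w + eps * dw, b + eps * db) -
         loss(w - eps * dw, b - eps * db)) / (2 * eps)
    # Project the scalar difference back through each sign
    # (dividing by +/-1 equals multiplying by it).
    w -= lr * g * dw
    b -= lr * g * db

final = loss(w, b)
```

The appeal for low-precision hardware is that SPSA never needs to propagate gradients through layers, only to evaluate the loss twice, though the gradient estimate gets noisy as parameter count grows.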


