This is one of the best explanations of an ML topic that I've read in a long time. It strikes a great balance of being approachable for non-experts and being in-depth enough to give a reader a feeling that they understand how things work.
There's a grad student from Stanford who describes ML and NNs using circuits and a "forces" analogy, which may be useful for CS folks. I found it illuminating; a rough sketch of the idea is below.
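If I understood it right, the picture is roughly this (a minimal sketch in Python, my own toy example rather than code from the guide): each gate in the circuit computes a value on the forward pass, and the gradient it sends back to each input acts like a "force" telling that input which way to move to increase the output.

```python
# Toy illustration of the "circuit + forces" intuition (hypothetical example):
# a single multiply gate. The gradient on each input is the "force" pulling it
# in the direction that increases the gate's output.

def multiply_gate(x, y):
    # Forward pass: compute the gate's output.
    out = x * y
    # Backward pass: d(out)/dx = y, d(out)/dy = x -- the "forces" on the inputs.
    grad_x, grad_y = y, x
    return out, grad_x, grad_y

x, y = -2.0, 3.0
out, fx, fy = multiply_gate(x, y)

# Nudge each input a small step along its force; the output should go up.
step = 0.01
x += step * fx
y += step * fy

new_out, _, _ = multiply_gate(x, y)
print(out, new_out)  # new_out (-5.87...) > out (-6.0): the forces pushed the output up
```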
It's not unexpected, but it's still strange that the patent wars have already started on the way to AGI. The work pulls together ideas from earlier researchers and seems like something that will be crucial for robotics in the future.
Also, I always thought that smart kitchen robots that can peel potatoes for me and do a limited set of other things would eventually arrive in my home, but judging from these algorithms it seems the hardware is behind the software, so even the first good kitchen robots will be general enough to learn any motion (though not precise enough for everything a human can do).
I have one question. Did they train a different network for each game, i.e., does each game have its own trained network, or were they able to train a single network to play two or more games?