Merely scaling up a GAN and optimizing its network structure and training procedure allowed GANs to create near-photorealistic high-resolution faces. If you can generate realistic faces, then you can likely also generate realistic action and thought sequences simply with an even larger model.
An action/thought sequence is just a 4D tensor with some outputs controlling actuators. Thinking is just the production of actions while the actuator output neurons are inhibited, which can be implemented simply as a product with some sigmoid-activated gating neurons.
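A minimal sketch of that inhibition idea (the gate logit and sizes here are made up for illustration): a sigmoid gate near zero multiplies the raw actuator commands away, so the network can keep "acting" internally without moving anything.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)
raw_actions = rng.normal(size=4)      # what the network "wants" to do
gate_logit = -10.0                    # strongly negative => "thinking" mode

gate = sigmoid(gate_logit)            # near 0: actuators inhibited
actuator_output = gate * raw_actions  # elementwise product gates the output

# With the gate open (large positive logit) the same product passes
# the raw actions through almost unchanged.
open_output = sigmoid(10.0) * raw_actions
```

With `gate_logit` itself produced by a neuron, the network learns when to act and when to merely think.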
Coherent combinations of such sequences can be produced by feeding both the current sensory inputs and the preceding internal state as a conditioning vector to both the generator and the discriminator.
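In code, that conditioning is just concatenation before the first layer. A toy linear "generator" under assumed dimensions (all sizes and the weight matrix are illustrative, not from any real model):

```python
import numpy as np

def generator(z, cond, W):
    # Concatenate latent noise with the conditioning vector
    # (sensory input + previous internal state), then apply one layer.
    x = np.concatenate([z, cond])
    return np.tanh(W @ x)

rng = np.random.default_rng(1)
z = rng.normal(size=8)            # latent noise
sensory = rng.normal(size=6)      # current sensory inputs
prev_state = rng.normal(size=10)  # preceding internal state
cond = np.concatenate([sensory, prev_state])

W = rng.normal(size=(10, 8 + 16)) * 0.1
out = generator(z, cond, W)       # next action/thought chunk
```

The discriminator would receive the same `cond` alongside the sequence it judges, so "realistic" means realistic *given* that context.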
You simply need to find a way to train the discriminator not only to tell real from fake, but also to estimate the value of the generator's outputs, and to backpropagate those values through time over several generated episodes via temporal-difference (TD) learning.
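The TD part, stripped to its bare mechanics (a hypothetical tabular stand-in for the discriminator's value head, with a made-up three-step episode): reward at the end of an episode propagates backwards through the value estimates of earlier states.

```python
# TD(0): V(s) <- V(s) + alpha * (r + gamma * V(s') - V(s))
values = {s: 0.0 for s in range(4)}
episode = [(0, 0.0, 1), (1, 0.0, 2), (2, 1.0, 3)]  # (state, reward, next)
alpha, gamma = 0.5, 0.9

for _ in range(100):  # replay the episode until values settle
    for s, r, s_next in episode:
        td_error = r + gamma * values[s_next] - values[s]
        values[s] += alpha * td_error
```

At the fixed point the reward of 1.0 at the last step has been discounted back to the earlier states (V(2)≈1.0, V(1)≈0.9, V(0)≈0.81), which is the "backpropagate those values in time" the comment gestures at.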
As the GAN is conditioned on its own previous state, it can learn by trial and error how to combine the short action and thought sequences it produces, and can thus learn to produce coherent ("real") language and logic.
Based on such intuitions, I'd say it is impossible to tell exactly when AGI will come, but current technology looks damn promising.
> If you can generate realistic faces, then you can likely also generate realistic action and thought sequences simply with an even larger model.
I'd really be willing to take bets that, for the next thirty years (paid out at the end of that period), nothing remotely like AGI will happen. Should fund my retirement pretty nicely.
AGI being at least on par with humans when it comes to creativity and invention? I.e. writing great novels, coming up with compelling philosophy, coming up with new, good mathematics etc.
Current technology doesn't look very promising unless we somehow come up with a "computer" architecture that is similarly scalable and energy-efficient as a brain. Machine learning and deep learning aren't exactly new. The big change that made them practical is the availability of faster hardware. If transistor density increases stop before we reach AGI, or even just a dumbed-down version of it, then we might never reach AGI at all.
> At the moment, equipment and products that involve glass and metal are often held together by adhesives, which are messy to apply and parts can gradually creep, or move. Outgassing is also an issue - organic chemicals from the adhesive can be gradually released and can lead to reduced product lifetime.
Just checked my feed of recommended videos and it is still full of the Tetris and excavator videos that I've been into lately. It would be awesome to be surprised with some extraordinary lectures or documentaries similar to the stuff I watched 5 years ago or so.
How about employing 10 people at this massive platform to curate some channels that people can easily subscribe to? They could literally be advertised as kid-friendly, not dumbed down, etc.