My assumption about AGI is that it needs all the capabilities of an ASI, but is resource-constrained enough that it cannot reach the potential an ASI would have.
In practice, this means an AGI must at least be capable of incorporating new information into its model, outside of its context window, in such a way that the information becomes part of the model in GPU memory and can be used as efficiently as the pretrained weights and biases.
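To make that requirement more concrete, here is a minimal toy sketch of the difference between holding a fact in context and folding it into the weights. Everything in it is illustrative and not from the original text: a tiny language-model-like module is nudged by a few gradient steps so that a new fact lives in its parameters and is recalled at the same inference cost as the pretrained weights.

```python
# Toy illustration (assumed, not the author's method): storing a new fact in
# the weights themselves instead of in the context window.
import torch
import torch.nn as nn

vocab_size, dim = 100, 32

class TinyLM(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.head = nn.Linear(dim, vocab_size)

    def forward(self, tokens):
        # Returns next-token logits for each input token.
        return self.head(self.embed(tokens))

model = TinyLM()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

# "New information" to incorporate: token 3 should be followed by token 7.
prompt = torch.tensor([3])
target = torch.tensor([7])

# A few gradient steps write the fact into the parameters, so no extra
# context tokens are needed to recall it later.
for _ in range(50):
    loss = nn.functional.cross_entropy(model(prompt), target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

with torch.no_grad():
    print(model(prompt).argmax(dim=-1))  # tensor([7]) once the fact is learned
```

This only shows the mechanism in miniature; doing the same thing continuously, at scale, and without degrading previously learned weights is the hard part the paragraph above is pointing at.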
I assume this kind of AGI could also be simulated, perhaps even with the tools we have today, but that such a simulation could not be considered real AGI.