They may require training, but that training is going to look vastly different. We can start chatting about AGI when an AI can be trained with as few examples and as little information as a human needs, when it can replace human workers 1:1 (in everything we do), and when it can self-improve over time just like humans can.
What matters to me is whether the "AGI" can reliably solve the tasks I give it, and that also requires reliable learning.
LLMs are far from that. It still takes that special human kind of AGI to train them and make progress.