I think that's possibly true, but it's not certain. We're only just getting through the door on LLM capabilities, and we're already seeing surprising emergent behavior, such as models internally representing the board state of Othello games[1], so it's very possible that continued scaling and refinement will open up entirely new realms of capability.
Toxic and empty. Full of promises and delusions about saving time and effort.