But per the wording "noarchive - Prevents Google from showing the Cached link for a page" - it seems like it technically just avoids showing the cached link.
Copyright infringement is a tort though, so it's down to content owners to sue Google if they feel damaged by this "caching".
I think countries added workarounds for computer caching, allowing transient copies. But Google's "cache" is more of a short-term archive; I'd guess they called it "cache" to semantically sidestep the issue of it being an infringing copy.
Yes, actually it does. So do AU, US, and CA laws. Well, not expressly, but they do say a caching service must respect recognized industry standards for updating, removing, and excluding content from being cached. That covers HTML meta elements, HTTP caching headers, /robots.txt files, etc.
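For concreteness, the "recognized industry standards" mentioned above look roughly like this (the specific values are illustrative; the `noarchive` directive is the one Google documents for suppressing the cached copy):

```
<!-- HTML meta element in the page's <head> -->
<meta name="robots" content="noarchive">

# /robots.txt - exclude crawlers from a path entirely
User-agent: *
Disallow: /private/

# HTTP response header - tell caches not to store the response
Cache-Control: no-store
```

A caching service that honors these signals is generally what the safe-harbor provisions have in mind.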
You can stop me copying all your works, just find me and ask nicely. So, I can never be successfully sued for copyright infringement now, because it's easy to "disable" my potential infringement. Yay. /s
So we return to the original comment: "Google also caches and serves everything else its robot finds, so if this was a problem it was already a problem long before AMP."
This is where the caching exceptions in copyright laws come into play. A service can automatically cache content passing through it and not be held liable. Caching is defined really broadly, so just about anything can be considered caching.