Having the server contain the "single true source" is important for many reasons, most of which are the main benefits of the web in the first place: automatic backups of everything, automatic sync of everything, etc. And if you're backing everything up to the server anyway (e.g. every action requires a REST call to the server), then the server is acting as the source of truth in any case, which means it can also do some business logic processing.
That's not to say the client is "dumb" by any means, but you'll be talking to the server anyway.
That's not a valid assumption for truly ambitious web applications.
For example, I have an Ember application running on both mobile and desktop that gives the user complete read/write access to their data even when the network goes down. When they get a connection again, everything synchronizes automatically.
This is an important requirement for us, and once we implemented it we realized that it has very useful side benefits as well: the entire application feels dramatically faster, because you're very rarely waiting for the server. Changes take effect instantly.
It also makes our backend infrastructure simpler. First, because we can take it down for a bit and nobody will even notice. Second, because once you have a true distributed synchronization algorithm running, adding another redundant server is no more complicated than adding another client. They're actually quite symmetric in many respects, running the same codebase.
Any time you are showing a user a representation of data, be it in a web page or a client-side rendered template, it is potentially out of sync.
The only atomic source is probably your database, and from the millisecond that you query it, it could be stale.
It is not the same thing as the class of bug I am describing, where you have multiple copies of the same object in memory, leading to confusing state errors.
There are some objects that you just don't need to refresh constantly. In Ember, nothing stops you from calling refresh when you enter a route: it gives you the object and leaves it up to you. If you think it's important to refresh it as the route changes, by all means do so!
What we're really talking about here is an advanced form of caching. "Long-lived objects" doesn't mean you do no validation, or that you don't hit the server to save something permanently.
The point is that due to the difference in architecture between an SPA and a more traditional all-server-side app, there's no reason to hit the server every single time. Isn't this the advantage of writing so much JavaScript? If you're going to make a bunch of requests on every page, why not just ditch the JavaScript and write it all server side?