1) You can simply have vector clocks, and resolve differences like anything else. Most of the time the game's prediction would be correct. Sometimes there would be a small adjustment.
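Roughly speaking, a vector clock here is just a per-client counter map, and "resolving differences" means checking causal order between events. A minimal sketch of the comparison (illustrative names, not from any real engine):

```python
def happened_before(a: dict, b: dict) -> bool:
    """True if event a causally precedes event b (vector-clock order)."""
    keys = set(a) | set(b)
    return (all(a.get(k, 0) <= b.get(k, 0) for k in keys)
            and any(a.get(k, 0) < b.get(k, 0) for k in keys))

def concurrent(a: dict, b: dict) -> bool:
    """Neither event precedes the other: the server must break the tie."""
    return not happened_before(a, b) and not happened_before(b, a)

# Example: two shots fired with no knowledge of each other.
shot_p1 = {"p1": 4, "p2": 2}
shot_p2 = {"p1": 3, "p2": 3}
assert concurrent(shot_p1, shot_p2)  # server applies a deterministic tiebreak
```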
Except you don't have access to game state in these cases, so there's no adjustment that can be made.
You can only move forward in the frames you display. You can't walk back in time to evaluate a player's action in relation to their latency and then reconcile game state with all other clients.
Yes you can. When the server sends the "official" record of moves made, if the client detects any discrepancy with expected results, it resets the game to a slightly earlier time and replays the "real" moves to reach the current point. All this is done instantaneously, and only at the end is the interface refreshed.
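For concreteness, that rollback-and-replay loop might look something like this, assuming a deterministic per-tick `step` function and per-tick state snapshots (all names here are hypothetical):

```python
def step(state, inputs):
    """Deterministic one-tick simulation (stub; the real engine goes here)."""
    return state

def reconcile(snapshots, predicted, official, current_tick):
    """snapshots: tick -> state saved before that tick was simulated.
    predicted/official: tick -> inputs. Returns the corrected current state."""
    for tick in sorted(official):
        if predicted.get(tick) != official[tick]:
            state = snapshots[tick]                   # rewind to the divergence
            for t in range(tick, current_tick):
                state = step(state, official.get(t, predicted.get(t)))
                snapshots[t + 1] = state              # refresh later snapshots
            return state      # interface is redrawn only after the replay
    return snapshots[current_tick]                    # no discrepancy found
```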
> resets the game to a slightly earlier time and replays the "real" moves to reach the current point.
That's exactly what properly engineered network games do. However, we're talking about video streaming here, in which you don't have access to any game state. It's not possible to do this without the developer going in and adding specific support for your streaming service.
Can you go into more detail with (1)? It's unclear to me how vector clocks could help with, say, a fast-paced FPS. The "small adjustment" might often be bringing a dead character back to life, if his bullet packet's vector clock claims he shot first. (Hack potential, too...)
There is always hack potential. All the server can do is make sure that the claimed move is legal (e.g. fits the rules of chess) and realistic (check a string of inputs to see whether they could plausibly have been made by a human).
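As a sketch, those two checks could look like this; the `board.legal_moves()` API and the 5 ms threshold are made up for illustration:

```python
def is_legal(move, board) -> bool:
    """Rule check: the claimed move must fit the rules of the game."""
    return move in board.legal_moves()   # hypothetical board API

def is_humanly_plausible(timestamps: list[float]) -> bool:
    """Heuristic check on a string of inputs: reject streams no human
    could produce. The 5 ms gap threshold is an example value only."""
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    return all(g > 0.005 for g in gaps)
```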
As for the adjustment... Yes, bringing a character back to life after a momentary error, in one place on the map, as seen by one person, is a small adjustment that happens rarely.
It doesn't defeat the purpose of streaming. You don't have to emulate everything in the game, just everything in your vicinity that can possibly affect what you see and hear.
That is how client-server games already work. The server has the full world state, and each client has a subset necessary to render the world for that player. Ideally the absolute minimum subset, so as to reduce the potential for cheating (wallhacks, etc.).
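That filtering is usually called interest management. A toy version of the server-side filter (illustrative only):

```python
import math

def visible_subset(entities, player_pos, radius=50.0):
    """Server-side filter run per client each tick: send only the entities
    close enough to affect what that player can see or hear."""
    px, py = player_pos
    return [e for e in entities
            if math.hypot(e["pos"][0] - px, e["pos"][1] - py) <= radius]
```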
The purpose of streaming is to remove the need for the simulation and rendering at all. The client is a dumb terminal that just renders frames and records input.
The moment you start making the client smart again — making it aware of a subset of the world state, making it do its own rendering based on the input — you've just reinvented current client-server gaming.
That would be extremely complicated and would require additional support from every game. Streaming right now takes every frame, compresses it, and forwards it; it's as simple as that, and it works for everything.
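At its simplest, that server loop is just the following; `capture_frame` and `encode` are placeholders for the real framebuffer grab and hardware video encoder:

```python
import time

def capture_frame(game):      # placeholder: grab the rendered framebuffer
    return game.framebuffer

def encode(frame):            # placeholder: hardware video codec (e.g. H.264)
    return bytes(frame)

def stream_loop(game, sock, fps=60):
    """Every frame: capture, compress, forward. No game-specific code."""
    while game.running():
        sock.send(encode(capture_frame(game)))
        time.sleep(1.0 / fps)   # naive pacing, for the sketch only
```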
The cost you are talking about is only for physics and logic in multiplayer games; it doesn't apply to single-player games. And even then it is negligible compared to the biggest cost in gaming: rendering. Prediction requires rendering to be done on the client, which requires a hefty video card and defeats the purpose of streaming.
Yes, you are right, but in this case we have a central server farm AND clients can time out. The server can drop the vector clock entries of all connected clients for tick X once every client has received the tick X updates or has timed out for tick X.
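A sketch of that pruning, assuming the server tracks per-tick acknowledgements (the data structures here are hypothetical):

```python
def prune_clock_entries(pending, acks, connected, now, timeout=0.25):
    """pending: tick -> (send_time, clock_entries); acks: tick -> set of
    client ids. An entry drops once all clients ack tick X or it times out."""
    for tick in sorted(pending):
        sent_at, _ = pending[tick]
        all_acked = acks.get(tick, set()) >= connected   # superset check
        if all_acked or (now - sent_at) > timeout:
            del pending[tick]        # this tick's vector-clock data is dropped
        else:
            break                    # keep later ticks until this one resolves
```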
2) Yes, anything real-time is like this.