But... if you’re sending the timestamp down to sub-second precision, the receiving computer still has to compare that number to its own time (which is itself based on an epoch). So an epoch is involved either way. It sounds like a convoluted solution to a problem that doesn’t need to exist.
Hmm? This timestamp is known to be local time, used for measuring durations between events within a single session, and not used for cross-session reporting. Time is only ever compared from one event to the next.
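A minimal sketch of that pattern (class and method names are hypothetical, not from our actual client): durations within a session come from a monotonic clock, so wall-clock skew and epoch questions never enter into it.

```python
import time

class SessionTimer:
    """Measures durations between consecutive events in one session.

    Uses time.monotonic(), which only moves forward and is unaffected
    by wall-clock adjustments (NTP steps, the user changing the date).
    """

    def __init__(self):
        self._last = time.monotonic()

    def mark(self):
        """Return seconds elapsed since the previous event."""
        now = time.monotonic()
        elapsed = now - self._last
        self._last = now
        return elapsed

timer = SessionTimer()
time.sleep(0.05)
delta = timer.mark()  # elapsed time since the timer was created
```

The absolute value of `time.monotonic()` is meaningless across sessions, which is exactly the point: it is only good for differences, matching the "duration between events, not cross-session reporting" use described above.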
Cross-session reporting (such as calculating DAUs or MAUs) uses the timestamp from the ingestion service, which is applied at the batch level before a batch of events is placed on the event hub for routing (events are batched and transmitted off the client every 16ms). Typically you can get away with treating everything that happened in the same simulation frame as having happened instantaneously... though for processing we use a monotonically incrementing, guaranteed-unique sequence identifier with a high-water mark (one stream is at-least-once and sequentially guaranteed with retries, while the other is lower priority and at-most-once, sequentially guaranteed but without retries).
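The high-water-mark idea can be sketched roughly like this (function and field names are hypothetical, not our actual schema): because the at-least-once stream may redeliver events on retry, the consumer drops anything at or below the highest sequence id it has already processed.

```python
def process_stream(events, handle):
    """Consume an at-least-once, sequentially-ordered stream,
    deduplicating retried deliveries with a high-water mark.

    `events` yields (seq, payload) pairs, where seq is a
    monotonically incrementing, guaranteed-unique identifier;
    retries may redeliver a seq we have already processed.
    """
    high_water = -1  # highest seq processed so far
    for seq, payload in events:
        if seq <= high_water:
            continue  # duplicate delivery from a retry; skip it
        handle(payload)
        high_water = seq
    return high_water

# Example: seq 2 is delivered twice by a retry.
seen = []
final = process_stream([(1, "a"), (2, "b"), (2, "b"), (3, "c")], seen.append)
# seen == ["a", "b", "c"]; final == 3
```

Because the ids are guaranteed unique and sequential, a single integer is all the state the consumer needs; the at-most-once stream can skip the check entirely since nothing is ever redelivered.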
No amount of ISO spec will help with client clocks being completely arbitrary. :D The ingestion-service clocks are NTP-synced to datacenter-hosted atomic clocks, but we still see a few seconds of skew here and there.
I've even done research into how often client clocks drift backwards in time (in practice they rarely do, since NTP slews the system clock gradually rather than stepping it backwards, or applies any large correction at startup before the game launches).