You can if you just run PTP (almost) entirely on your NIC. The best PTP implementations take their packet timestamps at the MAC on the NIC and keep time based on that. Nothing about CPU processing is time-critical in that case.
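For what it's worth, on Linux "timestamping at the MAC" is something you explicitly enable per interface. A minimal sketch using the standard SIOCSHWTSTAMP ioctl (the interface name "eth0" is an assumption, and driver support varies):

```c
/* Sketch: ask a Linux NIC to timestamp PTP packets in hardware.
 * Interface name "eth0" is an assumption; not all drivers support this. */
#include <linux/net_tstamp.h>
#include <linux/sockios.h>
#include <net/if.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int sock = socket(AF_INET, SOCK_DGRAM, 0);
    if (sock < 0) { perror("socket"); return 1; }

    struct hwtstamp_config cfg = {
        .tx_type   = HWTSTAMP_TX_ON,              /* stamp outgoing packets at the MAC */
        .rx_filter = HWTSTAMP_FILTER_PTP_V2_EVENT /* stamp incoming PTP event messages */
    };
    struct ifreq ifr;
    memset(&ifr, 0, sizeof(ifr));
    strncpy(ifr.ifr_name, "eth0", IFNAMSIZ - 1);
    ifr.ifr_data = (char *)&cfg;

    if (ioctl(sock, SIOCSHWTSTAMP, &ifr) < 0)
        perror("SIOCSHWTSTAMP");  /* NIC or driver lacks PTP timestamping */
    else
        printf("hardware timestamping enabled on eth0\n");
    close(sock);
    return 0;
}
```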
Well, if the goal is for software running on the host CPU to know the time accurately, then it does matter. The control loop for host PTP benefits from regularity. In any case, NICs that support PTP hardware timestamping may also use PCI LTR (latency tolerance reporting) to instruct the host operating system to disable high-exit-latency sleep states, and popular operating systems respect that.
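LTR itself is negotiated in PCIe hardware, not something you toggle from userspace, but Linux does expose the related host-side lever as a global ASPM policy knob. A small sketch that just reads it (the sysfs path is kernel-dependent, and this is an illustration of the host-side knob, not the LTR mechanism itself):

```c
/* Read the Linux global PCIe ASPM policy; the active policy is shown
 * in brackets, e.g. "default performance [powersave] powersupersave".
 * Writing "performance" to this file disables ASPM entry. */
#include <stdio.h>

int main(void)
{
    FILE *f = fopen("/sys/module/pcie_aspm/parameters/policy", "r");
    if (!f) { perror("pcie_aspm policy"); return 1; }

    char buf[128];
    if (fgets(buf, sizeof buf, f))
        printf("ASPM policy: %s", buf);
    fclose(f);
    return 0;
}
```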
> The control loop for host PTP benefits from regularity.
How much regularity? If you send PTP packets with 5 milliseconds of randomness in the scheduling, does that cause real problems? Each packet still carries an accurate timestamp.
> instruct the host operating system to disable high-exit-latency sleep features
Why, though? You didn't explain this. As long as the packet got timestamped when it arrived, the CPU can ask the NIC how many nanoseconds ago that was, and correct for how long it was asleep. Right?
> PTP packets with 5 milliseconds of randomness in the scheduling
This should not matter, unless you are a 5G telecom operator running PTP at a very high sync rate. Gaussian noise in the master's scheduling is not important to PTP. Being a master is easier than being a slave.
If you are running PTP at 128 sync messages per second like a telecom, the interval between messages is only about 7.8 ms, so 5 ms of jitter is on the order of the interval itself; delays that large might lead to slaves resetting their state machines, which would blow the whole thing up.
> The CPU can ask the NIC how many nanoseconds ago that was
The CPU can indeed ask the NIC what time it is, but then the CPU has to estimate how long ago the NIC answered the question. If the PCIe link is in L1, it will take tens to hundreds of microseconds (there is no hard upper bound; it could be forever) to train back up to L0. The host has to determine this delay and compensate for it, because the link-state transition is far longer than the error budget for PTP. The easiest way is to repeatedly read the NIC's clock while timing each read from the host side, discard the outliers, and halve the remaining round-trip delay to estimate the one-way latency. This technique is used by various realtime Ethernet stacks. You will note that it is effectively the same as disabling ASPM. It is also why PCIe PTM (Precision Time Measurement) was invented.
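A minimal sketch of that probe, assuming a Linux PTP hardware clock exposed at /dev/ptp0 (the device path and sample count are assumptions; real stacks such as linuxptp use the PTP_SYS_OFFSET ioctl to take these bracketed readings in one call):

```c
/* Bracket each read of the NIC's clock (the PHC) with host clock reads,
 * keep the sample with the smallest round trip (discarding L1-exit
 * outliers), and halve that round trip to place the PHC read in time. */
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <time.h>
#include <unistd.h>

/* Linux convention for turning a /dev/ptpN fd into a dynamic clock id. */
#define FD_TO_CLOCKID(fd) ((clockid_t)((((unsigned int)~(fd)) << 3) | 3))

static int64_t ts_ns(struct timespec t)
{
    return (int64_t)t.tv_sec * 1000000000LL + t.tv_nsec;
}

int main(void)
{
    int fd = open("/dev/ptp0", O_RDONLY);   /* device path is an assumption */
    if (fd < 0) { perror("open /dev/ptp0"); return 1; }
    clockid_t phc = FD_TO_CLOCKID(fd);

    int64_t best_rtt = INT64_MAX, offset = 0;
    for (int i = 0; i < 16; i++) {          /* repeatedly read the time */
        struct timespec t1, tp, t2;
        clock_gettime(CLOCK_REALTIME, &t1); /* host clock, before  */
        clock_gettime(phc, &tp);            /* NIC clock over PCIe */
        clock_gettime(CLOCK_REALTIME, &t2); /* host clock, after   */

        int64_t rtt = ts_ns(t2) - ts_ns(t1);
        if (rtt < best_rtt) {               /* outliers (e.g. L1 exits) lose */
            best_rtt = rtt;
            /* assume the PHC read landed at the midpoint of the bracket */
            offset = ts_ns(tp) - (ts_ns(t1) + rtt / 2);
        }
    }
    printf("phc - host offset: %lld ns (best rtt %lld ns)\n",
           (long long)offset, (long long)best_rtt);
    close(fd);
    return 0;
}
```

PTM removes the guesswork by having the hardware measure the link delay at each hop instead of inferring it from round trips.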
I see nothing in your pair of unnecessarily belligerent comments that actually contradicts what I said. There are host-side features that enable the clock discipline you are observing, even if you are apparently not aware of them.
This is a really helpful contribution - if only everyone could be as smart as you.
If mine are somehow too belligerent for you, which is hilarious given how arrogant and belligerent your initial comment and responses come off (maybe you are not aware?), then perhaps you'd like to meaningfully engage with any of the other comments that point out how wrong you are?
Or are those too belligerent as well?
Because you didn't respond to any of those, either.