Latency isn't everything, but that doesn't mean it's irrelevant either. I'm OK with a metric that accurately represents latency, with the caveat that feel or other factors may be more important. If key and/or switch design impacts latency in practice, shouldn't we measure that?
I guess that is an open question - perhaps virtually all the variance in latency due to physical design is tied up in fundamental tradeoffs between feel, feedback, sound, and preference. If so - then sure: measuring the pre-activation latency is pointless. On the other hand, if there are design choices that meaningfully affect latency without meaningfully impacting other priorities, or even cases where gains in latency matter more than (hypothetically) small losses elsewhere - then measuring that would be helpful.
I get the impression that we're still at the stage where this isn't a trivially settled question - i.e. where it's worth at least having the data and only _then_ deciding how much we care (and how to interpret whatever patterns arise).
Ideally of course we'd have both post-activation-only and physical-activation-included metrics, and we could compare.
I'm fine with wanting to measure a keyboard's travel time, but that really shouldn't be hidden inside the latency measurement. Each measure (travel time and latency) is part of the overall experience (along with many other things), but they are two separate things, and wanting to optimize one for delay isn't necessarily the same as wanting to optimize both for delay.
I.e. I can want a particular feel from a keyboard that prioritizes comfort over minimizing travel distance, independent of wanting the keyboard to have low latency when sending the triggered signal. I can also type differently than the tester, and that should show up as a difference in travel times in comparisons, not in the latencies.
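To make the distinction concrete, here's a minimal sketch - purely illustrative, with made-up timestamps and field names rather than anyone's actual test rig - of how the two metrics (post-activation-only vs. physical-activation-included) could be computed side by side from per-keystroke event times:

```python
from dataclasses import dataclass

@dataclass
class Keystroke:
    # Hypothetical per-keystroke timestamps, all in milliseconds.
    finger_contact: float    # finger first touches the keycap
    switch_actuation: float  # switch electrically actuates (activation point)
    host_receipt: float      # host registers the key event (e.g. USB report arrives)

def travel_time(k: Keystroke) -> float:
    """Pre-activation component: finger contact until the switch actuates."""
    return k.switch_actuation - k.finger_contact

def post_activation_latency(k: Keystroke) -> float:
    """Latency counted only after actuation (scan, debounce, firmware, transport)."""
    return k.host_receipt - k.switch_actuation

def physical_activation_latency(k: Keystroke) -> float:
    """Latency including pre-activation travel: finger contact to host receipt."""
    return k.host_receipt - k.finger_contact

# Example with made-up numbers: 4 ms of travel plus 9 ms of post-activation delay.
k = Keystroke(finger_contact=0.0, switch_actuation=4.0, host_receipt=13.0)
print(travel_time(k), post_activation_latency(k), physical_activation_latency(k))
# -> 4.0 9.0 13.0
```

Reporting the travel and post-activation numbers separately, rather than only their sum, is what would let comparisons show a difference in typing style or switch design as a travel-time difference instead of folding it into the latency figure.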