I'm not sure what you are trying to say, given the context of what you quoted. Binary search has optimal worst-case runtime complexity, but for small tables it is, on real computers, still slower overall than a linear scan, which effectively has the worst worst-case runtime complexity of any sensible approach (unless you get pathological). This is because the constant factor in front of the runtime complexity, the one that O-notation explicitly ignores, starts to matter: branch prediction and cache locality come into play.
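To make the comparison concrete, here is a minimal sketch of the two approaches in C (function names are my own). The binary search does a data-dependent branch per step, which the predictor can't learn, and hops around memory; the linear scan is one predictable loop over contiguous memory, which is why it tends to win on small tables despite doing more comparisons:

```c
#include <stddef.h>

/* Binary search on a sorted array: O(log n) comparisons, but each
 * step is a data-dependent branch plus a scattered memory access.
 * Returns the index of key, or -1 if absent. */
static int binary_search(const int *a, size_t n, int key) {
    size_t lo = 0, hi = n;
    while (lo < hi) {
        size_t mid = lo + (hi - lo) / 2;
        if (a[mid] < key)
            lo = mid + 1;
        else
            hi = mid;
    }
    return (lo < n && a[lo] == key) ? (int)lo : -1;
}

/* Linear scan: O(n) comparisons, but a single well-predicted loop
 * over contiguous, cache-friendly memory. */
static int linear_search(const int *a, size_t n, int key) {
    for (size_t i = 0; i < n; i++)
        if (a[i] == key)
            return (int)i;
    return -1;
}
```

Where the crossover sits depends on the element size, the comparison cost, and the cache, so it's something to measure rather than assume.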
What exactly do you mean by "writing faster search and sort algorithms"?
Those are neat optimizations of the constant, but even with that, a linear scan can still be much faster under the right (and not unrealistic) circumstances.
But I see now you were referring to the branch predictor part specifically, so all good.