You're looking for the tiniest blocks of code that run an exceptionally large number of times.
For instance, I used to work on graphics renderers. You'd find the bit that was called the most (writing lines of pixels to the screen) and try to jiggle the order of the instructions to decrease the number of cycles used to move X bits from system RAM to graphics RAM.
When I was doing it, branching (usually checking an exit condition on a loop) was the biggest performance killer. The CPU couldn't queue up instructions past the check because it didn't know whether the check would come out true or false until it got there.
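The classic workaround was to unroll the loop so that one exit check covers several pixels. Here's a minimal C sketch of the idea; the function names and the 32-bit pixel format are my own assumptions for illustration, not code from any actual renderer:

    #include <stdint.h>
    #include <stddef.h>

    /* Naive span copy: one exit-condition branch per pixel moved. */
    void copy_span(uint32_t *dst, const uint32_t *src, size_t n)
    {
        while (n--)
            *dst++ = *src++;
    }

    /* Unrolled by 4: one branch now covers four moves, so the CPU
       stalls on the exit check a quarter as often. */
    void copy_span_unrolled(uint32_t *dst, const uint32_t *src, size_t n)
    {
        while (n >= 4) {
            dst[0] = src[0];
            dst[1] = src[1];
            dst[2] = src[2];
            dst[3] = src[3];
            dst += 4; src += 4; n -= 4;
        }
        while (n--)          /* mop up the remaining 0-3 pixels */
            *dst++ = *src++;
    }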
Don't modern, or even just not-ancient, CPUs use branch prediction to work past a check, knowing that the vast majority of the time the check yields the same result?
All the little tricks the CPU has to speed things up, like branch prediction, out-of-order execution, parallel branch execution, etc., are mostly a worse deal than not having to rely on them in the first place. Branch prediction in particular shouldn't be leaned on too heavily, since it's actually quite a fragile optimization that can cause relatively large performance swings with seemingly meaningless changes to the code.
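One way to sidestep the predictor entirely is to make the hot path branchless. A minimal C sketch, assuming the usual arithmetic right shift on signed integers (which mainstream compilers provide):

    #include <stdint.h>

    /* Branchy clamp-to-zero: the predictor has to guess the sign of
       every input, and random signs mean ~50% mispredicts. */
    int32_t clamp_branchy(int32_t x)
    {
        if (x < 0)
            return 0;
        return x;
    }

    /* Branchless equivalent: x >> 31 is all-ones when x is negative
       (assuming arithmetic shift), so the mask zeroes out negatives.
       Same result, nothing for the predictor to get wrong. */
    int32_t clamp_branchless(int32_t x)
    {
        return x & ~(x >> 31);
    }

Whether the branchless form actually wins depends on the data: on highly predictable inputs the branchy version is often faster, which is exactly the fragility mentioned above.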
Branch prediction is great for predictable branches, which is often what you have, or a good approximation to it. I forget the exact criteria, but even quite old chips could learn, e.g., all repeating patterns of length up to 4, most repeating patterns of length up to 8, and fixed-length loop patterns (n YESes followed by 1 NO) of any length.
Quite often, though, you don't have predictable branches, and then you'll pay half the misprediction cost each time on average. If you're really unlucky, you could hit inputs where the branch predictor gets it wrong more than 50% of the time.
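The classic demonstration is summing the large elements of an array before and after sorting it: the data is identical, but sorting makes the branch predictable. A quick C sketch (timings vary by machine, and at higher optimization levels the compiler may turn the branch into a conditional move and flatten the difference):

    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    #define N (1 << 20)

    /* 50/50 branch on random bytes, always-taken-then-not on sorted ones. */
    static long sum_big(const unsigned char *a, size_t n)
    {
        long s = 0;
        for (size_t i = 0; i < n; i++)
            if (a[i] >= 128)
                s += a[i];
        return s;
    }

    static int cmp(const void *p, const void *q)
    {
        return *(const unsigned char *)p - *(const unsigned char *)q;
    }

    int main(void)
    {
        unsigned char *a = malloc(N);
        for (size_t i = 0; i < N; i++)
            a[i] = rand() & 0xFF;

        clock_t t0 = clock();
        long s1 = sum_big(a, N);      /* random order: ~50% mispredicts */
        clock_t t1 = clock();

        qsort(a, N, 1, cmp);          /* sorted: the branch flips once */
        clock_t t2 = clock();
        long s2 = sum_big(a, N);
        clock_t t3 = clock();

        printf("random: %ld (%ld ticks)\n", s1, (long)(t1 - t0));
        printf("sorted: %ld (%ld ticks)\n", s2, (long)(t3 - t2));
        free(a);
        return 0;
    }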