For a lot of video codecs, encoding optimally is NP-hard, so encoders rely on heuristics. So you could certainly say that some approaches take a lot of compute power to encode but are much easier to decode.
The codecs aren't NP-hard; rather, finding the "perfect" encode is. That's where the heuristics come into play. The codec just specifies what the stream can look like; encoders have to work out how to write that language, and decoders how to read it.
Decoders are relatively simple bookkeepers/transformers.
Encoders are complex systems with tons of heuristics.
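To make that split concrete, here's a toy sketch in Python. Nothing in it comes from a real codec (the 8x8 blocks, the SAD cost, the search window, and the greedy walk are all invented for illustration), but it shows the shape of the asymmetry: the encoder searches, the decoder just copies.

```python
import numpy as np

B = 8  # toy block size; no constant here comes from a real codec

def sad(a, b):
    """Sum of absolute differences -- the cost a motion search minimizes."""
    return int(np.abs(a.astype(np.int32) - b.astype(np.int32)).sum())

def full_search(ref, block, y, x, radius=8):
    """Exhaustive motion search: optimal within the window, O(radius^2) probes."""
    best_mv, best_cost, probes = (0, 0), sad(block, ref[y:y+B, x:x+B]), 1
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            yy, xx = y + dy, x + dx
            if 0 <= yy <= ref.shape[0] - B and 0 <= xx <= ref.shape[1] - B:
                probes += 1
                cost = sad(block, ref[yy:yy+B, xx:xx+B])
                if cost < best_cost:
                    best_cost, best_mv = cost, (dy, dx)
    return best_mv, best_cost, probes

def greedy_search(ref, block, y, x, radius=8):
    """Heuristic hill climb: far fewer probes, may stop at a local minimum."""
    dy = dx = 0
    cost, probes = sad(block, ref[y:y+B, x:x+B]), 1
    improved = True
    while improved:
        improved = False
        for sy, sx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            ny, nx = dy + sy, dx + sx
            yy, xx = y + ny, x + nx
            if max(abs(ny), abs(nx)) > radius:
                continue
            if not (0 <= yy <= ref.shape[0] - B and 0 <= xx <= ref.shape[1] - B):
                continue
            probes += 1
            c = sad(block, ref[yy:yy+B, xx:xx+B])
            if c < cost:
                cost, dy, dx, improved = c, ny, nx, True
                break
    return (dy, dx), cost, probes

def decode_block(ref, y, x, mv):
    """The decoder's whole job for this block: follow the vector and copy."""
    dy, dx = mv
    return ref[y+dy:y+dy+B, x+dx:x+dx+B]

# Smooth synthetic frame so the hill climb has a gradient to follow.
gy, gx = np.mgrid[0:64, 0:64]
ref = ((np.sin(gy / 6.0) * np.cos(gx / 9.0) + 1) * 127).astype(np.uint8)
block = ref[22:30, 27:35]  # "current" block = ref shifted by (+2, +3)

print(full_search(ref, block, 20, 24))    # finds the true (2, 3) at cost 0, ~290 probes
print(greedy_search(ref, block, 20, 24))  # a handful of probes; heuristics can settle for worse
```

Now multiply that one decision by partitioning, prediction modes, and quantization for every block of every frame, all coupled through the bit budget: that joint search is where the NP-hardness lives, and it's why real encoders are stacks of heuristics.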
This is also why hardware decoders tend to be in everything and are relatively cheap, with output quality equal to their software counterparts. On the flip side, hardware encoders are almost always worse than their software counterparts in output quality (while being significantly faster).
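And decode quality can't even differ: the spec fully determines the output for a given stream, so any two conformant decoders agree bit for bit; hardware vs. software only changes speed and power. A tiny made-up illustration (this run-length "format" is invented, not from any standard):

```python
# Two structurally different decoders for the same toy (count, value)
# run-length format. Because decoding is a pure function of the stream,
# they must produce identical bytes -- quality is fixed by the spec.

def decode_loop(tokens):
    out = bytearray()
    for count, value in tokens:
        out.extend([value] * count)
    return bytes(out)

def decode_flat(tokens):
    return bytes(b for count, value in tokens for b in [value] * count)

stream = [(3, 7), (1, 255), (5, 0)]
assert decode_loop(stream) == decode_flat(stream)  # bit-exact, always
```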
> The codecs aren't NP-hard; rather, finding the "perfect" encode is.
That's what I meant by my first sentence.
And I'll throw out there that the vast majority of 'hardware codecs' are in fact software codecs running on a pretty general-purpose DSP. You could absolutely reach the same quality as a high-quality software encoder given the right organizational impetus from the manufacturer; they're simply focused on hitting a specific real-time bitrate at a given resolution rather than overall quality. By the time they've hit that, there's a new SoC with its own DSPs and its own Jira cards that needs attention. If these cores were more open, I'm sure you'd see less real-time-focused encoder software targeting them as well.
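For flavor, here's a hedged sketch of the kind of single-pass control loop that "hit the datasheet bitrate" focus produces. The bit-production model and every constant are made up, and real firmware is far more elaborate, but the objective is the same: land on a bits-per-frame budget in one pass, with no look-ahead or second pass to spend on quality.

```python
# Toy single-pass rate control: encode, measure, nudge QP, move on.

TARGET_BPS = 8_000_000     # e.g. the "1080p30 at 8 Mbps" datasheet bullet
FPS = 30
BUDGET = TARGET_BPS / FPS  # bits each frame is allowed to cost

def frame_bits(qp, complexity):
    """Stand-in bit model: busier frames and finer quantization cost more."""
    return complexity / (qp ** 1.2)

def encode(complexities, qp=30.0):
    """No look-ahead, no second pass: just chase the per-frame budget."""
    for cx in complexities:
        bits = frame_bits(qp, cx)
        # Overshot the budget? Coarsen quantization (cheaper, uglier) for
        # the next frame; undershot, spend a little more. Clamped to an
        # H.264-style 1..51 QP range purely for flavor.
        qp = min(51.0, max(1.0, qp * (bits / BUDGET) ** 0.5))
        yield bits, qp

for bits, qp in encode([2.0e7, 6.0e7, 1.5e7, 4.0e7]):
    print(f"{bits / 1e6:5.2f} Mbit  next qp = {qp:4.1f}")
```

A software encoder with no real-time deadline can instead look ahead or run a second pass and redistribute bits across the whole sequence, which is a big part of the quality gap.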