It does seem like a fad. Kudos to those who can create a new field and profit from it. On the one hand, "chaos engineering" seems a bit like "we don't understand our architecture well enough to know what its failure modes are, so let's just poke it and see what happens," but on the other hand, it seems at least a little bit analogous to fuzzing, which is certainly a technique that yields useful results that would otherwise have been overlooked until it was too late.
My first instinct was to agree with this, but in my experience it's extremely difficult to communicate failure modes completely and reliably across different teams in very large organizations. Fuzzy dependencies arise, for example, when a service A proxies data for client service B from some other service C. It doesn't help that the way teams are organized in a company often severs lines of communication between teams that explicitly don't have dependencies on each other but implicitly do, so information gets lost along the way. Having a last line of defense in the form of a "chaos engineering" team may actually be the natural response of large organizations to the inherent messiness produced by bureaucracy.
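A toy sketch of the implicit-dependency problem described above. The service names A, B, and C match the comment; everything else (function names, the fallback value) is invented for illustration:

```python
# Hypothetical sketch: B only declares a dependency on A, but because A
# proxies data from C, an outage of C surfaces as a failure in B.

class ServiceUnavailable(Exception):
    pass

def service_c_fetch(key):
    # Pretend C is down; in a real system this would be a network call.
    raise ServiceUnavailable("C is unreachable")

def service_a_proxy(key):
    # A proxies data from C on behalf of its clients; A's API hides
    # the fact that C is involved at all.
    return service_c_fetch(key)

def service_b_handler(key):
    # B never talks to C directly, yet still has to survive C's outage.
    try:
        return service_a_proxy(key)
    except ServiceUnavailable:
        return "fallback"  # degrade gracefully

print(service_b_handler("user-42"))  # -> "fallback"
```

No dependency graph drawn from declared interfaces would show the B→C edge, which is exactly the kind of thing a chaos experiment surfaces.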
It also has implications for the development team. "Chaos engineering" shifts the developers' mindset: as a developer you now expect things to fail. You know that the "make it work first, make it resilient later" approach will bite you sooner rather than later, so you think about resilience from the first line of code.
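One concrete form "expect things to fail" takes is routing every remote call through a retry-with-backoff wrapper instead of assuming success. A minimal sketch, with all names and parameters invented for illustration:

```python
import time

def call_with_retries(fn, attempts=3, base_delay=0.01):
    """Call fn, retrying transient failures with exponential backoff."""
    for attempt in range(attempts):
        try:
            return fn()
        except ConnectionError:
            if attempt == attempts - 1:
                raise  # out of retries; let the caller decide
            time.sleep(base_delay * 2 ** attempt)

def make_flaky_call(fail_times=2):
    # Stand-in for a flaky dependency: fails a few times, then succeeds.
    state = {"remaining": fail_times}
    def flaky_call():
        if state["remaining"] > 0:
            state["remaining"] -= 1
            raise ConnectionError("transient failure")
        return "ok"
    return flaky_call

print(call_with_retries(make_flaky_call()))  # -> "ok"
```

The point isn't the wrapper itself but the habit: failure handling is part of the first draft, not a later hardening pass.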
There are a million different ways a computer can fail. I think we're asking too much of people to be able to know all the pitfalls of every system they create.
But this 'new field' also seems like something we've already been doing, just under a different name. You're kind of expected to make sure your system can work if the computer suddenly shuts off, or a dependency is lost, or the network is slow. Have we not been doing this??
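What the branded version adds is mostly the systematic injection of those failures. A toy fault injector in that spirit, with every name invented for illustration (real chaos tooling works at the infrastructure level, not as an in-process wrapper):

```python
import random

STORE = {"a": 1}

def lookup(key):
    return STORE.get(key, 0)

def chaotic(fn, failure_rate=0.3, rng=random.Random(0)):
    """Wrap fn so a fraction of calls raise ConnectionError."""
    def wrapped(*args, **kwargs):
        if rng.random() < failure_rate:
            raise ConnectionError("injected fault")
        return fn(*args, **kwargs)
    return wrapped

def resilient_lookup(key, backend):
    try:
        return backend(key)
    except ConnectionError:
        return 0  # degrade gracefully instead of crashing

# Hammer the resilient path with injected faults and check it always answers.
chaos_lookup = chaotic(lookup)
results = [resilient_lookup("a", chaos_lookup) for _ in range(100)]
assert all(r in (0, 1) for r in results)
```

Whether you call that chaos engineering or just testing your error paths is arguably the whole debate in this thread.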