Can’t you quantify the risks and add them as inputs to those models, then?
Same with the code quality you mentioned: you’re effectively saying that the ‘feel’ or intuition about code quality, and about when to act on it, is more accurate than quantifying that quality, quantifying its effects on everything else, and then using that as one of the model’s inputs.
I’m asking all this because I rely on intuition a lot myself, but I feel like I always hit a communication barrier when intuition is all I have; i.e. how do I explain my reasoning and decisions if all the risks and potential upsides are just weights in my head (that’s what I imagine intuition is)?
The formalisms of systems thinking don't really work when you try to incorporate uncertainty. You have to frame things in terms of stocks and flows, but risk and uncertainty resolve in sudden leaps, not incrementally. If you assign a 30% probability to "will have an incident", it doesn't smoothly climb from 30% to 100% over time; you roll the dice and it jumps discontinuously to 100%. I'm not saying that attempting to quantify code quality and its impact isn't useful, I'm saying the tools of systems thinking don't add anything to that exercise. Even if you gather data on the quality of your code, you still use expertise and intuition to judge those metrics, because they are always weak proxies.
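If it helps to make the "discontinuous" point concrete, here's a minimal toy simulation (my own framing, with an arbitrary 12 time steps; not anything from a real model): a 30% incident risk doesn't accumulate smoothly, each run either stays clear or snaps to "incident" in one step and stays there.

```python
import random

INCIDENT_PROBABILITY = 0.30  # the 30% figure from the discussion
PERIODS = 12                 # arbitrary number of time steps

def run_once() -> list[int]:
    """Return 0/1 per period; once the dice come up bad, it's 1 from then on."""
    # Spread the 30% across periods so the per-period hazard compounds to ~30%.
    per_period = 1 - (1 - INCIDENT_PROBABILITY) ** (1 / PERIODS)
    happened = 0
    history = []
    for _ in range(PERIODS):
        if not happened and random.random() < per_period:
            happened = 1
        history.append(happened)
    return history

if __name__ == "__main__":
    random.seed(0)
    for trace in (run_once() for _ in range(3)):
        print(trace)  # each trace is all 0s, or 0s then a sudden jump to 1s
```

No trace ever sits at "30% of an incident"; the probability is only visible across many runs, which is why a stocks-and-flows picture doesn't capture it.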
To explain intuition, I do think it works to list the pros and cons and be explicit about which ones you judge to be low, medium, or high likelihood and low, medium, or high impact. Then people can disagree with specific parts of your richer model.
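Something like this is what I mean, sketched as data rather than prose (the Consideration structure and the refactor items are made up purely for illustration):

```python
from dataclasses import dataclass

@dataclass
class Consideration:
    description: str
    kind: str        # "pro" or "con"
    likelihood: str  # "low" | "medium" | "high"
    impact: str      # "low" | "medium" | "high"

# Hypothetical case: arguing for a refactor of a flaky module.
refactor_case = [
    Consideration("Fewer incidents from the flaky module", "pro", "medium", "high"),
    Consideration("Onboarding new hires gets easier", "pro", "high", "medium"),
    Consideration("Two sprints of feature work delayed", "con", "high", "medium"),
    Consideration("Refactor introduces regressions", "con", "low", "high"),
]

for c in refactor_case:
    print(f"[{c.kind}] {c.description}: likelihood={c.likelihood}, impact={c.impact}")
```

The point isn't the format, it's that each weight in your head becomes a line someone can point at and argue with.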