
A couple years ago I did a deep dive into systems thinking. Two people whose ideas I highly respect, John Cutler and Will Larson, reference ideas from systems thinking, which is what piqued my interest. In the end, I found that it wasn't that useful for the problems I face as an engineering manager, and I'm skeptical that it's useful in any formal sense, as opposed to a very casual one.

The key work in systems thinking is drawing causal loop diagrams to identify potential feedback loops, some of which tend toward stability and some of which tend toward instability. The way you draw these has almost infinite degrees of freedom, so while they can sometimes help elicit your own ideas, the outcome is ultimately heavily shaped by your preconceptions about what's important and about the relevant scope of the exercise. In working with it, I never had a sudden realization that some neglected factor was the key to everything and would provide previously unexpected leverage. I never identified a feedback loop that offered outsized control over the process which I couldn't already identify and attempt to resolve with traditional tools. So, as an individual analytical tool, I don't think it added much beyond deep thought, an outliner, and a notepad.
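
For anyone who hasn't worked with them: a causal loop diagram is basically a signed directed graph, and the usual rule is that a loop with an even number of negative links is reinforcing (tends toward instability) and a loop with an odd number is balancing (tends toward stability). A minimal sketch, with the node names and link signs invented purely for illustration, and notice how much the result depends on which nodes and links you chose to include in the first place:

    # A causal loop diagram as a signed directed graph.
    # Node names and link signs are invented purely for illustration.
    links = {
        ("pressure_to_ship", "shortcuts_taken"): +1,  # more pressure -> more shortcuts
        ("shortcuts_taken", "code_quality"):     -1,  # more shortcuts -> lower quality
        ("code_quality", "velocity"):            +1,  # higher quality -> higher velocity
        ("velocity", "pressure_to_ship"):        -1,  # higher velocity -> less pressure
    }

    def loop_polarity(cycle):
        """Product of link signs: positive = reinforcing, negative = balancing."""
        sign = 1
        for a, b in zip(cycle, cycle[1:] + cycle[:1]):
            sign *= links[(a, b)]
        return "reinforcing" if sign > 0 else "balancing"

    print(loop_polarity(["pressure_to_ship", "shortcuts_taken",
                         "code_quality", "velocity"]))
    # -> "reinforcing" (even number of negative links), i.e. a vicious cycle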

The case studies and the literature about the practice emphasize its importance as a tool for communication and collaboration. It seems to me that many practitioners mostly use it as a rhetorical tool to win arguments whose bottom line they've already decided. But I think in a business context it fails at this in lightweight terms (i.e. without a major top-down organizational push) because the idea is so foreign to others. First you would have to teach them what a causal loop diagram is, which is itself a quite nuanced topic, then you'd have to convince them that your particular construction and emphasis is the one most relevant to the decision. The "success stories" here make a lot of money for consultants who, like this author, get to run training for five layers of management, but no one ever adopts it and then credits their important decisions to the incredible causal loop diagrams they drew.

Two additional issues. First, systems thinkers sometimes emphasize the importance of quantitatively modeling the feedback loops. For almost all the things I care about, that's either impossible or admits the same explosion of degrees of freedom as the loop structure. If you decide that code quality is a concern, or that a deteriorating dev experience could be hurting velocity, you could try to find metrics that capture those, but finding metrics that capture those AND act as inputs or outputs to further nodes of the causal loop is pretty much impossible. Qualitative aspects of a system are of critical importance, and you ignore them at your peril.
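
To be concrete about what quantifying the loop would even look like, here's a stock-and-flow style sketch where every coefficient and functional form is a number I would simply have to make up, which is basically my point:

    # Every coefficient below is invented; that's the degrees-of-freedom problem.
    quality, velocity = 0.8, 10.0                      # arbitrary starting "stocks"
    for week in range(8):
        shortcuts = max(0.0, 1.0 - velocity / 10.0)    # made-up relationship
        quality  -= 0.05 * shortcuts                   # made-up coefficient
        velocity  = 10.0 * quality                     # made-up coefficient
        print(f"week {week}: quality={quality:.2f}, velocity={velocity:.1f}")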

Second, systems thinkers are not very good at thinking about probability and risk. The causal models let you reason about what happens assuming you know the inflows and outflows of systems and processes, but they can't readily be combined with an understanding of your own limited knowledge, or of the risks that are present in every decision. I found thinking about risks quantitatively, even in ballpark terms, far more useful to my decision-making than all the time I spent thinking about feedback loops. Knowing whether your confidence that an improvement will work is 30% or 70% is directly useful, even if it's only your informal probability, assuming you are reasonably well calibrated.
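
As a toy illustration (the costs and payoffs here are invented), even a rough expected-value check flips sign depending on whether that confidence is 30% or 70%:

    # Invented numbers: the decision flips between a 30% and a 70% confidence
    # that the improvement will actually work.
    cost = 4       # engineer-weeks to attempt the improvement
    payoff = 10    # engineer-weeks saved if it works

    for p in (0.3, 0.7):
        ev = p * payoff - cost
        print(f"confidence {p:.0%}: expected value {ev:+.1f} engineer-weeks")
    # 30% -> -1.0 (probably skip it), 70% -> +3.0 (probably do it)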



Systems Thinking goes beyond just causal loop diagrams, which are one useful tool in soft-systems modelling. Systems Thinking is more of a foundational mindset than a set of tools, and I think that's where I disagree with the author.

https://sebokwiki.org/wiki/What_is_Systems_Thinking%3F

Systems Engineering uses Systems Thinking as the root of the approach but will use any tool available for actually modelling/designing the system. A big one used a lot for hard systems is SysML (an offshoot of UML, I know), which is again somewhat impenetrable to outsiders. I think that's ultimately a trap for all domain-specific tools: they need some way of being abstracted for the layman.

For an Engineering Manager, a combination of tools from the Operations Management discipline and Enterprise Systems Engineering might be a bit more useful to you.


Can’t you quantify the risks and add them to the inputs of those models, then? Same with the code quality you mentioned: you’re effectively saying that a ‘feel’ or intuition for the code quality, and for when to act on it, is more accurate than trying to quantify the code quality, quantify its effects on everything else, and then use that as one of the inputs to the model.

I’m just asking; I rely on intuition a lot myself, but I feel like I always hit a communication barrier when all I have is intuition. I.e., how do I explain my reasoning/decisions if all the risks and potential upsides are just weights in my head (that’s what I imagine intuition is)?


The formalisms of systems thinking don't really work when you try to incorporate uncertainty. You have to frame things in terms of stocks and flows, but risk and uncertainty resolve in sudden leaps, not incrementally. If you assign a 30% probability to "will have an incident", it doesn't smoothly climb from 30% to 100% over time; you roll the dice and it jumps discontinuously to 100%. I'm not saying that attempting to quantify code quality and its impact isn't useful, I'm saying the tools of systems thinking don't add anything to that exercise. Even if you gather data on the quality of your code, you still use expertise and intuition to judge those metrics, because they are always weak proxies.
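
Rough sketch of that mismatch (the flow rate and the weekly probability are invented): the stock accumulates smoothly every week, but the risk only ever resolves as a discrete jump when the dice come up against you:

    import random

    # Invented numbers: debt flows smoothly, the incident arrives all at once.
    tech_debt = 0.0
    p_incident_per_week = 0.03
    for week in range(52):
        tech_debt += 0.5                       # smooth stock-and-flow accumulation
        if random.random() < p_incident_per_week:
            print(f"week {week}: incident (debt stock was {tech_debt:.1f})")
            break                              # risk resolved discontinuously
    else:
        print("no incident this year; the 3% per week never 'partially' happened")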

To explain intuition, I do think it works well to list the pros and cons and be explicit about which ones you think are low/medium/high likelihood and low/medium/high impact. Then people can disagree with your richer model.
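
For example (the entries here are invented), the explicit version of those "weights in your head" can be as simple as:

    # Invented example of making the implicit weights explicit.
    risks = [
        ("migration breaks a downstream consumer", "medium", "high"),
        ("team loses two weeks to tooling churn",  "high",   "low"),
        ("perf regression in the hot path",        "low",    "high"),
    ]
    for claim, likelihood, impact in risks:
        print(f"- {claim}: likelihood={likelihood}, impact={impact}")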


Yup, this stuff is not for managers. It's on par with expecting managers to work out Maxwell's equations by themselves. It's not what managers are hired to do. The better option is to hand the data over to academics/experts, and do what you can until they return with better models.



