
I bought this book on Audible based on a random recommendation, not really knowing anything about it. On the page to buy it, it said it was released in 2020. I spent the first 80% of the book thinking that it was written in the 2010s, but intentionally written as if it were the 1980s. That is, with explanations that didn't presage the development of the Internet, and using analogies that would be understood by the people of that era. I was impressed with how perfectly the author was able to channel that era without anachronism, and even told a couple friends about this.

When I learned it was written in the 1980s, I wasn't exactly shocked. But then, I learned it was written by the Klein bottle guy, and that really was shocking. It's become one of my favorite books.


The cargo a plane can carry isn't limited only by the volume of its interior, but also by the weight it can safely lift. Airlines balance the mix of dense and non-dense (a.k.a. volumetric) cargo against both constraints by charging for each shipment based on the greater of its weight cost and its volume cost.
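
The billing rule, roughly (a sketch with illustrative numbers; the 6000 cm^3/kg divisor is a common convention, but actual tariffs vary):

    # Chargeable-weight pricing sketch (illustrative, not any airline's tariff).
    # The shipper is billed for the greater of actual weight and "volumetric
    # weight", where volume is converted to kg via a divisor.
    def chargeable_weight_kg(actual_kg, length_cm, width_cm, height_cm,
                             divisor=6000):
        volumetric_kg = (length_cm * width_cm * height_cm) / divisor
        return max(actual_kg, volumetric_kg)

    # A 10 kg box of pillows measuring 100x60x60 cm bills as 60 kg.
    print(chargeable_weight_kg(10, 100, 60, 60))  # 60.0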

Even if it turned out to be practical to double the volume of cargo you could carry, it seems unlikely that it would allow you to double the weight of the cargo, since the engines and the airframe have all been designed around the same set of engineering requirements. The best case scenario would be a decrease in the cost of volumetric cargo, with dense cargo staying the same.


The article is about having a completely separate plane (glider, really) that is towed by a conventional plane. The glider gets to ride in the powered plane's wake, and presumably the fuel cost to tow the cargo plane is less than the cost of flying two independent planes.

This isn't about making a larger conventional plane that has more volume to put cargo in.


Right, but if a plane has a maximum takeoff weight of 600k lb, I doubt it can safely take off while weighing 600k lb AND towing another x00k lb in a glider, which is what would be required to reduce cargo costs by 65%.


That is not at all in conflict with the post. The selection effect is one of the sources of the empirical diseconomies of scale the author mentions.


My first read was that the author was claiming that smaller companies are at least as competent at moderation as big platforms.

On that first read, I took it to mean from a process-and-inputs standpoint.

But reading it again, I suppose the author could be taking a more consequentialist point of view. They are at least as good (or much better), largely because they don't have to do much.

I suppose that's fair, but also a lot of words in the OP to state a truism :-)


It's pronounced like "charisma" without the "cha" or the "m."


In WW2, British intelligence analysts inferred that a secret Nazi radio navigation system used only a single beam based on its code name: Wotan (Odin), a god with one eye. For a long time, I have occasionally referenced this example, but today I learned that it was actually sheer luck: the prior system, which used two beams, was also codenamed Wotan.

https://en.wikipedia.org/wiki/Battle_of_the_Beams#Y-Ger%C3%A...


Best practices are not actually the best practice in many situations. Best practices by their nature are legible and acceptable in a broad variety of contexts. What are the chances that in your situation, the legible, broadly acceptable practice is actually going to be optimal?


I was curious what such a tax would actually come to. Assuming a full offset with expensive direct air capture of CO2 ($238 / tonne [0]), the carbon tax would only be $2.12 per gallon, based on 8,887 grams of CO2 per gallon of gasoline [1].

[0]: https://www.iea.org/commentaries/is-carbon-capture-too-expen...

[1]: https://www.epa.gov/greenvehicles/greenhouse-gas-emissions-t...
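
For anyone who wants to check the arithmetic:

    # Back-of-envelope: DAC offset cost per gallon of gasoline.
    dac_cost_per_tonne = 238.0        # USD per tonne CO2, from [0]
    co2_per_gallon_g = 8887.0         # grams CO2 per gallon of gasoline, from [1]
    cost_per_gallon = dac_cost_per_tonne * co2_per_gallon_g / 1_000_000
    print(round(cost_per_gallon, 2))  # 2.12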


Capturing the carbon is only part of the cost... The other part is sequestering it for millions of years. Concrete is a great place to do that, where many other 'sequestering' techniques might only last tens or hundreds of years, requiring the capture fee be paid again and again and again.


A couple years ago I did a deep dive into systems thinking. Two people whose ideas I highly respect, John Cutler and Will Larson, reference ideas from systems thinking, which is what piqued my interest. In the end, what I found is that it wasn't that useful for the problems I face as an engineering manager, and I am skeptical that it's useful in any formal sense, as opposed to a very casual one.

The key work in systems thinking is drawing causal loop diagrams to identify potential feedback loops, some of which tend toward stability and some toward instability. The way you draw these has almost infinite degrees of freedom, so while they can sometimes help elicit your own ideas, the outcome is ultimately heavily shaped by your preconceptions about what's important and about the relevant scope of the exercise. In working with it, I never had a sudden realization that some neglected factor was the key to everything and would provide previously unexpected levels of leverage. I never identified a feedback loop that provided outsized control over the process which I couldn't have identified and attempted to resolve with traditional tools. So, as an individual analytical tool, I don't think it added much beyond deep thought, an outliner, and a notepad.
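
To make "stability vs. instability" concrete, here's a toy illustration of my own (not from the systems thinking literature): a balancing loop closes the gap to a target, while a reinforcing loop compounds on itself.

    # Toy feedback loops: balancing (stabilizing) vs. reinforcing (destabilizing).
    def simulate(stock, flow, n=10):
        for _ in range(n):
            stock += flow(stock)
        return stock

    # Balancing loop: flow proportional to the gap from a target -> converges.
    print(simulate(0.0, lambda s: 0.5 * (100 - s)))  # ~99.9, approaching 100

    # Reinforcing loop: flow proportional to the stock itself -> explodes.
    print(simulate(1.0, lambda s: 0.5 * s))          # ~57.7, growing without bound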

The case studies and the literature about the practice emphasize its importance as a tool for communication and collaboration. It seems to me that many practitioners mostly use it as a rhetorical tool for winning arguments whose bottom line they've already decided. But I think in the context of a business, it fails at this in lightweight terms (i.e. without a major top-down organizational push) because the idea is so foreign to others. First you would have to teach them what a causal loop diagram is, which is itself a quite nuanced topic; then you'd have to convince them that your particular construction and emphasis is the one most relevant to the decision. The "success stories" here make a lot of money for consultants who, like this author, run training for 5 layers of management, but no one ever adopts it and then credits important decisions to the incredible causal loop diagrams they drew.

Two additional issues. First, systems thinkers sometimes emphasize the importance of quantitatively modeling the feedback loops. For almost all the things I care about, that's impossible, or admits the same explosion of degrees of freedom as the loop structure. If you decide that code quality is a concern, or that a deteriorating dev experience could be impacting velocity, you could try to find metrics that capture those, but finding metrics that capture those AND act as inputs or outputs to further nodes of the causal loop is pretty much impossible. Qualitative aspects of a system are of critical importance, and you ignore them at your peril.

Second, systems thinkers are not very good at thinking about probability and risk. The causal models let you think about what happens assuming you know the inflows and outflows of systems and processes, but they can't readily be combined with an understanding of your own limited knowledge, or the risks that are ever present in every decision. Thinking about risks quantitatively, even in ballpark terms, I found way way way more useful to my decision-making than all the time I spent thinking about feedback loops. Knowing whether your confidence that an improvement will work is 30% or 70% is directly useful, even if it's only your informal probability, assuming you are reasonably well calibrated.
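
As a made-up example of why the 30% vs. 70% distinction is directly actionable:

    # Hypothetical numbers: an improvement costs 2 engineer-weeks and saves
    # 10 engineer-weeks if it works. Expected value at two confidence levels:
    for p in (0.3, 0.7):
        print(f"p={p}: EV = {p * 10 - 2:+.1f} engineer-weeks")
    # p=0.3: EV = +1.0 engineer-weeks (marginal; maybe not worth the risk)
    # p=0.7: EV = +5.0 engineer-weeks (clearly worth doing)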


Systems Thinking goes beyond just causal loop diagrams, which are a useful tool in soft-systems modelling. Systems Thinking is more of a foundational mindset than a set of tools, and I think this is where I disagree with the author.

https://sebokwiki.org/wiki/What_is_Systems_Thinking%3F

Systems Engineering uses Systems Thinking as the root of the approach but will use any tool available for actually modelling/designing the system. A big one used a lot for hard systems is SysML (an offshoot of UML, I know), which is again somewhat impenetrable to outsiders. I think that's ultimately a trap for all domain-specific tools: they need some way of being abstracted for the layman.

For an Engineering Manager, a combination of tools from the Operations Management discipline and Enterprise Systems Engineering might be a bit more useful to you.


Can’t you quantify the risks and add them to the inputs of those models, then? Same with the code quality you mentioned: you’re effectively saying that the ‘feel’ or intuition about code quality, and about when to act on it, is more accurate than trying to quantify the code quality, quantify its effects on everything else, and then use that as one of the inputs to the model.

I’m just asking all this; I rely on intuition a lot myself, but I feel like I always hit a communication barrier when all I have is intuition. I.e., how do I explain my reasoning/decisions if all the risks and potentially positive outcomes are just weights in my head (which is what I imagine intuition is)?


The formalisms of systems thinking don't really work when you try to incorporate uncertainty. You have to frame things in terms of stocks and flows, but risk and uncertainty resolve in sudden leaps, not incrementally. If you assign 30% probability to "will have an incident", it doesn't smoothly climb from 30% to 100% over time; you roll the dice and it jumps discontinuously to 100%. I'm not saying that attempting to quantify code quality and its impact isn't useful, I'm saying the tools of systems thinking don't add anything to that exercise. Even if you gather data on the quality of your code, you still use expertise and intuition to judge those metrics, because they are always weak proxies.
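
A toy contrast, with made-up numbers: a stock accumulates smoothly week by week, while a risk sits at zero until the dice roll against you.

    import random

    debt = 0.0            # a stock: accumulates smoothly, model-friendly
    incident_week = None  # a risk: resolves all at once, or not at all
    for week in range(1, 53):
        debt += 1.0       # smooth, incremental flow
        if incident_week is None and random.random() < 0.05:
            incident_week = week  # discontinuous jump from "no incident" to "incident"
    print(f"debt after a year: {debt}, incident in week: {incident_week}")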

To explain intuition, I do think it works to list the pros and cons and be explicit about which ones you think are low/medium/high likelihood and low/medium/high impact. Then people can disagree with specific parts of your richer model.


Yup, this stuff is not for managers. It's on par with expecting managers to work out Maxwell's equations by themselves. It's not what managers are hired to do. The better option is to hand the data over to academics/experts, and do what you can until they return with better models.


Perplexity.ai processes Bing search results through GPT-3. It works really well, and you can see the prompt they use at https://blog.ouseful.info/2022/12/09/combing-chatgpt-and-sea...
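
The general pattern is simple enough to sketch (this is my own minimal version with a hypothetical function and template; the actual prompt Perplexity uses is in the linked post): splice the search results into the prompt ahead of the question.

    # Sketch of search-augmented prompting. The function name and template
    # are hypothetical; see the linked post for the real prompt.
    def build_prompt(question, results):
        snippets = "\n".join(f"[{i + 1}] {r['title']}: {r['snippet']}"
                             for i, r in enumerate(results))
        return ("Using only the web search results below, answer the question "
                "and cite sources like [1].\n\n"
                f"{snippets}\n\nQuestion: {question}\nAnswer:")

    # The resulting string is then sent to a completion model such as GPT-3.
    print(build_prompt("Who described the Klein bottle?",
                       [{"title": "Klein bottle - Wikipedia",
                         "snippet": "Felix Klein described it in 1882."}]))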


No one in this thread (nor in the article) has made the obvious observation that this means there is a massive variety of genuinely safe foods that could be made via currently banned practices. You can only get an exception, at great cost, for foods with an existing cultural practice. Real innovations, like the original invention of Peking duck, would today be basically prohibited.

