The problem is that it takes a really long time until technology is so good and so reliable that you really don't need to understand it to be able to operate it.
Take, for instance, the "Yes, let's all go back to coding in assembly!" line -- The thing is: for a really, really long time after high-level languages had become mainstream, you really did still have to know assembly to be a programmer, even if you did most of your work in, say, C or Pascal. That's because compilers for high-level languages and their debugging tools were initially a "leaky" abstraction. When your programme failed, you had to know assembly to figure out what went wrong and work your way back from that to what you could do in your high-level language to fix the problem. Nowadays, compilers and debugging tools have become so good that those days are mostly gone, and you really don't need to know assembly any more (for most practical intents and purposes).
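A small illustration of what "leaky abstraction" means in practice, using JavaScript (chosen because it comes up later in this thread): the language presents numbers as ordinary decimals, but the IEEE 754 doubles underneath leak through, and explaining the behaviour requires knowing the layer below.

```javascript
// JavaScript numbers look like decimals, but the abstraction leaks:
// you need to know about IEEE 754 binary doubles to explain this.
const sum = 0.1 + 0.2;
console.log(sum === 0.3); // false
console.log(sum);         // 0.30000000000000004

// Working around the leak means stepping below the abstraction,
// e.g. comparing with a tolerance instead of exact equality:
console.log(Math.abs(sum - 0.3) < Number.EPSILON); // true
```

The point being: the happy path ("numbers just work") is fine, but the moment something surprising happens, the developer is debugging the layer they were promised they wouldn't need to understand.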
But the problem we have today: we pile layer upon layer upon layer of leaky abstraction without ever giving any of it the time it needs to mature. We're designing to shorten the amount of time a developer spends getting something done, under the completely misguided assumption that the developer will never leave the "happy path" where everything works as designed. This neglects the fact that a developer spends most of their time debugging the situations that don't work as designed. Usually, if you make the "happy path" more productive with a side effect of making the "unhappy path" less productive, that amounts to a net negative, and that's the big problem.
If you do something even slightly unusual you can quickly get into a situation where you spend more time debugging your toolchain than writing code.
I really hate the state of modern software. We have so many layers of utterly unknowable abstractions that it isn't even possible to understand what your code actually ends up doing.
And that's how we ended up with Electron, which I think is the pinnacle of shitty software driven by the unsustainable paradigm of libraries on abstractions on libraries.
Pilots must still understand how planes work. That’s called aviation. Most software devs have absolutely no idea how their software platform works. Most are overpaid API monkeys.
Tell that to the passengers of those 737 MAX flights where the pilots did not know how to disable the failing AoA correction...
I wouldn't bet that most pilots know more about how the planes they fly work than software devs know about their computers. For one, most planes today rely heavily on computers. Do they teach electronics in "aviation"?
As I understand it, the 737 MAX issue was not because the pilots didn't know how their plane worked, but because they were lied to about a feature that was installed to try to reduce costs. Had they been told it was there, the outcomes could well have been different.
Although the pilots cannot be blamed for not understanding the undocumented controls of the 737, at least one incident was mitigated because very experienced pilots understood the fundamentals of the plane. Think of the pilots' knowledge as a second line of defense against an incident that should never have happened (again -- not the pilots' fault, but the additional knowledge helped).
I recommend reading "Flying Blind" for more detailed accounts of the precursor Lion Air flight that almost crashed.
Not sure I understand your point. I read "it's not that they did not know how to disable the feature, it's just that they were not told how to disable the feature".
Or are you saying that they knew, but somehow did not do it?
I know. My original point was that it is not completely clear that pilots know better how planes work than software engineers know how computers work.
Not at all saying that they were incompetent. On the contrary, passenger planes today are IMO much more complex than one desktop computer loading a web page: passenger planes are a group of many computers doing safety-critical stuff in order to keep a giant machine up in the air.
I don't see how one can say that software engineers don't really understand computers, but pilots do really understand planes.
If I recall correctly, the crew of the Ethiopian flight that crashed were very experienced and they did understand what was happening. They just couldn't mitigate it in the short time they had.
Note that my point was not that the crew was inexperienced or incompetent. My point was that those flying machines are crazy complex, and actually made of tons of safety-critical computers.
I just did not find it fair to say "pilots know how planes really work, but software engineers don't know how computers really work". Both are waaaay too complex for one person to actually understand fully.
There's a limit to how much about a plane the pilot understands (e.g. how much about the electronics, circuitry, software in the cockpit is understood? what about the chemical composition or manufacturing process behind the rubber in the tires? or the subatomic physics that helps explain why air and the plane interact the way they are known to in aviation?). I don't disagree that poor practitioners exist in every field, but that's a sign of what the technology permits you to get away with not knowing.
I think we have to distinguish between operating and manufacturing when referring to knowing "how it works". The pilot needs to understand how a plane will behave on a fundamental level when given a set of instructions. A developer should have that understanding as well (and that's coming from a person who has been through CS but has lost a lot of that understanding).
But what the vast majority of programmers are "operating" are programming languages, runtime environments, and operating systems—which generally treat the hardware and the CPU architecture as implementation details. The people who use programming languages and those who create/maintain them might as well be in different industries, like the pilot and the aerospace engineer.
I think that misses the point. A programmer who understands both their language and the environmental constraints under which that language, and its capabilities, execute likely understands enough to write and maintain original applications in that language.
As a JavaScript/Web/Fullstack developer I don’t live in that world. I live in a world of giant stupid frameworks. The only purpose of these frameworks is to supply an architecture in a box and put text on screen in a web browser. If a task cannot be performed using only the API provided by that framework then it must not be worth doing as it’s clearly far beyond the capabilities of the developer. There is far more to this software platform than merely putting text on screen in a web browser, for example: accessibility, security, performance, test automation, A/B testing, network messaging, architecture, content management, and so on.
God forbid you take the giant stupid frameworks away. It’s like castrating a person in public and then laughing at their great embarrassment. Many developers, some of whom shouldn’t be in this line of work to begin with, have built their entire careers around some framework API and absolutely cannot write code without it. The emotional insecurity is very real, as is the complete inability to write original applications.
I think that's a highly dismissive and ignorant view of what software development, as a value-creation endeavor, actually is.
The responsibility of a software engineer is not mapping high-level constructs to low-level details. The responsibility of a software development engineer is to implement systems that meet the business requirements, and to operate on those systems at the abstraction level that makes sense for the problem domain.
It is entirely irrelevant what machine code is running, or even what machine is running the code, just like being able to model fluid flow over the control surfaces of an airplane is entirely irrelevant to steer the plane. A pilot needs to know how to control the plane using the plane's interfaces. Being able to whip out a computational fluid dynamics model is entirely irrelevant for a pilot if all they want to do is turn left/right.
High-level languages and abstraction layers are the key to simplify and speed up delivering value. No one should care about what pages of virtual memory their application is writing to if their goal is to serve a webpage in multiple continents.
Software has one and only one purpose: automation. The degree to which a software developer strives towards that one purpose determines their employer’s return on investment completely irrespective of the business requirements. Unnecessary abstractions exist not to simplify any return on investment but to ease candidate selection from amongst a pool of otherwise unqualified or incapable candidates.
> Unnecessary abstractions exist not to simplify any return on investment but to ease candidate selection from amongst a pool of otherwise unqualified or incapable candidates.
This take is outright wrong. One of the most basic business requirements is turnaround time for features, bugfixes, and overall maintenance, which ultimately means minimizing operational costs.
All production-ready application frameworks are designed to provide standardized application structures out-of-the-box that hide the implementation details that don't change and make it trivial to customize the parts that change more often. Backend frameworks are designed around allowing developers to implement request handlers, and front-end frameworks are designed around allowing developers to build custom UI elements from primitive components, provide views to present data, and fill in handlers to respond to user interactions. Developers adopt these frameworks because they don't have to waste time reinventing the wheel poorly and instead can focus on the parts of the project that add value.
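The division of labor described above -- the framework owns the plumbing that doesn't change, the developer fills in the handlers that do -- can be sketched in a few lines of plain JavaScript. This is a toy illustration of the pattern, not any real framework's API:

```javascript
// Toy "backend framework": routing/dispatch is the fixed plumbing;
// application developers only supply request handlers.
function createApp() {
  const routes = new Map();
  return {
    // The part that changes per project: register a handler for a path.
    get(path, handler) { routes.set(path, handler); },
    // The part that doesn't change: dispatch a request to its handler.
    handle(path) {
      const handler = routes.get(path);
      return handler ? handler() : { status: 404, body: 'not found' };
    },
  };
}

// Application code is reduced to filling in the handlers.
const app = createApp();
app.get('/hello', () => ({ status: 200, body: 'hello, world' }));

console.log(app.handle('/hello')); // { status: 200, body: 'hello, world' }
console.log(app.handle('/nope'));  // { status: 404, body: 'not found' }
```

Real frameworks layer on middleware, routing parameters, error handling, and so on, but the economic argument is the same: the `createApp` half is written once and shared, and only the handler half is project-specific.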
At least in JavaScript land all production ready frameworks only solve two problems: architecture in a box and put text on screen. These are trivial to achieve at substantially lower effort without the frameworks, but it requires a more experienced or better trained developer.
What you describe is a training failure, but your thoughts on the matter are an economic failure. The goal of software is eventual cost reduction via automation. I say eventual because software is always a cost center and its value is not immediately realized.
What you describe is employment, which is not the same thing. The least-friction path to employment is to turn candidates into commodities, easing selection and the risk of rejection post-selection. Once employed, a candidate's perceived value is often measured in the things you describe, which rarely translate into any kind of value add for the business. Churn is burn: it increases employee engagement but almost always increases operational costs. The way to decrease operational costs is with automation, which includes things like CI/CD, static analysis, test automation, and so forth. These automation efforts are not measured in churn.
That contrast is why many software developers are API monkeys: it's what they are hired for and what they are rewarded for. That is why software developer return on investment is not defined by business requirements. Many employers need people to perform low-effort work and do not wish to invest in formal training. This is all measurable.
> At least in JavaScript land all production ready frameworks only solve two problems: architecture in a box and put text on screen.
I don't think you have a very good grasp on the issue.
All production-ready application frameworks are designed to provide standardized application structures out-of-the-box that hide the implementation details that don't change and make it trivial to customize the parts that change more often.
That's what they are used for: to ensure developers do not have to reinvent the wheel poorly, and to provide very flexible ways to change the things that are expected to change the most often.
Front-end frameworks are used to help develop user interfaces. Describing user interfaces as "put text on screen" already shows you have a very poor grasp on the subject and are oblivious to fundamental requirements.
Unwittingly, you're demonstrating one of the key ways frameworks create value: they gather requirements and implement key features that meet them, so that people like you and me, who are oblivious to those requirements, don't need to re-architect our ad-hoc frameworks to support them as an afterthought.
It should be noted that those who make the same accusations you've made regarding complexity aren't really complaining about complexity. Instead, they are just demonstrating that they are oblivious to key requirements, and because they are oblivious to them, they believe they can leave huge gaps in basic features without any consequence.
Strange. You are almost verbatim repeating what I wrote, expanding upon it, and then elaborating upon your expansion to justify the same conclusion that I wrote in about 4 words. To me this sounds like virtue signaling.
These frameworks provide value to the employer, not the developers, because it eases candidate selection and turns otherwise unqualified developers into less capable commodities. In that regard the value is entirely regressive because it requires more of the less capable people to perform equivalent work that does not achieve disproportionate scale, which is the economic goal of automation. If the given developers only return value directly proportional to their manual efforts they are merely overpaid configuration experts on top of data entry.
Some developers believe everything is always a framework, or that any attempt to avoid frameworks just creates a new framework. I cannot help these people. It's "any non-religion is a cult" type nonsense -- an affirming-the-consequent fallacy.
Software devs have zero control over the tooling, languages, networking and hardware.
It looks like they have a choice because the map of available options constantly shifts... but they're metaphysically locked into near-identical options in the universe of possible approaches.
As long as computing is largely US based, it will always be this way. It's treason to go off-piste, in a large way.
Ask a pilot to recite Navier-Stokes from memory. They won't even know what you're talking about.
Building a gas turbine engine without training? Forget it.
The only electrical engineering a pilot needs to know is the difference between volts and amps, and what it means when a breaker pops. The EEs who design the avionics are not so fortunate.
Nobody is talking about building the engine lol, you're all making the exact "just program everything in assembly" comment the author was taking the piss out of.
You have probably 80% of people using React who have absolutely no idea how the three main functions in its API work. That's the equivalent of a 747 captain having no idea what the TOGA button does and only knowing that that's the one you press to do a take-off or a go-around.
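As a concrete example of the kind of understanding being argued for: `useState` is often treated as magic, but its core mechanism -- hook state stored outside the component, indexed by the order in which hooks are called during a render -- can be sketched in plain JavaScript. This is a toy model to illustrate the idea, emphatically not React's actual implementation:

```javascript
// Toy model of a useState-style hook: state lives in an array outside
// the component, keyed by call order within each render. (This is why
// real React forbids calling hooks conditionally.)
const hookStates = [];
let hookIndex = 0;
let rerender = () => {};

function useState(initial) {
  const i = hookIndex++;
  if (hookStates[i] === undefined) hookStates[i] = initial;
  const setState = (value) => {
    hookStates[i] = value;
    rerender(); // trigger a re-render, as React schedules one
  };
  return [hookStates[i], setState];
}

// "Render" a component: reset the hook index, then call it.
function render(component) {
  hookIndex = 0;
  rerender = () => render(component);
  return component();
}

// A component using the toy hook.
let lastOutput;
function Counter() {
  const [count, setCount] = useState(0);
  lastOutput = `count: ${count}`;
  Counter.increment = () => setCount(count + 1);
  return lastOutput;
}

console.log(render(Counter)); // "count: 0"
Counter.increment();
console.log(lastOutput);      // "count: 1"
```

A developer who has seen even a sketch like this understands *why* hook order matters and what a state update actually triggers, rather than treating the button as "the one you press".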
To support your point, having a locked TOGA mode can be extremely dangerous even if the pilots know exactly what it does. The equivalent would be a risk of death if you just once wrote a React component that had an infinite loop in its render function; I would venture a guess that if there was that kind of risk, most programmers would resign in very short order!
Developers for whom the above argument is clever and persuasive are essentially equating themselves with cab drivers. Their libraries, frameworks, and hardware are black boxes to them that they manipulate in prescribed ways (and sometimes just plain cargo-culting) to achieve a desired result. When their abstractions break down, they have to call in specialists to diagnose and repair them. Of course, they get very agitated and defensive when people point this out and, very much unlike what a hacker would do, try to diminish the value of expertise and skill and call it unnecessary. And, okay, for them, it is.
Yes, a cab driver does not need to understand automotive engineering because a cab driver is, in the non-pejorative, technical sense of the word, unskilled labor. Is that really the analogy you want to make though?