I feel that I made the transition from junior to mid-level developer when I was able to come into some new code and judge it instantly, pointing out what was wrong, how it could be improved, and how it should've been built. I feel that I went from mid-level to senior when I would enter that same situation and try to understand the why behind the code (even though I probably still thought it was poorly architected) before passing judgement or trying to "fix" it with a refactor.
You transition from senior to manager when you first try to understand the importance of the code to the business and whether anyone will ever touch it before bothering to understand the code.
Your short comment really triggered my management PTSD. I wrote a huge ass post but decided to delete it and just summarize:
* Senior tradesman -> manager is not a natural transition;
* Senior tradesmen take business into account without needing to transition;
* Both technical and business understandings are attributed to management while business is subtracted from senior tradesmen. Hilarious.
I'm glad that at some point in my life I'll have the option to transition to this magical role where all of a sudden I will understand everything, including whether the potential users of this potential idea might potentially touch the potential application.
I wish I could have written a clearer statement instead of this empty half-rant, but as it stands, I am but a senior engineer so am forced to come to grips with my own limitations. If only I would transition... and after, maybe I'll naturally transition to president of everything.
Senior developers are the ones who understand the code AND the business. One of their jobs is to keep managers from making stupid decisions that would harm the business. In many orgs, managers did NOT come from the ranks of senior developers.
I always get triggered when a new team member comes in and starts renaming variables in pull requests and planning huge refactorings. I'm like: OK, you are five minutes into this code, without any knowledge of why it is shaped and named like this. It's not that the person is or isn't correct; it's how fast people are willing to change working software because they feel they can make it better in an instant. The second reason I get triggered is that I think it is quite rude to the programmers who came before. I have had to swallow a lot of pride over the years, because of course one's code isn't without fault. But to have that pointed out by a newbie on the team who maybe only wants to prove that they can code? Super hard for me ;)
I still cringe at the memory of my first webdev job out of university. I thought it would be a good idea to alphabetize all of the methods of the behemoth User.rb class of an ancient Rails application.
The other developers let me do it and, looking back, they obviously didn’t want to rain on my parade as a junior dev that was eager to help out and “improve” things, god bless them.
On the other hand, it's also bad when a new person comes in who treats all existing code as gospel and is afraid to fix things that are clearly messy. Or doesn't even see the need.
I think I actually prefer your type, at least they care about improving things, and they're going to make it theirs.
I almost always welcome such initiatives because the new team member brings a fresh perspective and is unencumbered by our existing knowledge. And they don't have that many responsibilities yet, thus they can focus on the code itself. But good test coverage and thorough reviews are necessary to do this.
No, don't get me wrong. I see this as well and have an inner fight whenever the typical "let's clean this up" PR pops up. What I mean are changes that have no real value other than being a change from person X in file Y. Yes, one can rename tons of methods and variables, but maybe those names exist for a reason. I have one example where the variable names were deliberately generic in nature, and the change request renamed them according to the concrete implementation. You can argue back and forth about this :)
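To make that trade-off concrete, here is a hypothetical sketch of the kind of rename in question (the helper and all names are invented for illustration, not taken from any real codebase):

```python
# A helper written with deliberately generic names: it works for ANY
# pair of record collections, which is exactly what the names signal.
def sync_records(source, target):
    """Append to target every record from source that it is missing."""
    for record in source:
        if record not in target:
            target.append(record)
    return target

# The proposed rename would tie it to one concrete caller, e.g.
# sync_user_rows_to_billing_table(user_rows, billing_rows) -- clearer
# at the current call site, but it hides that the helper is reusable.
```

Neither naming is wrong: the generic names document intent ("this is reusable anywhere"), while the concrete names document current usage. Hence the back-and-forth.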
I actually think jumping straight in and refactoring is one of the best ways to learn a new codebase. If it's set up consensually and as a learning tool, the combo of refactoring + review from older team members is a great way to learn, even if your changes ultimately don't make it to prod.
The broadest version is that you can let a junior do that if the system is well managed: there are tests, the reviewer knows the code well, there are good rollback procedures, good metrics exist; whatever is appropriate for that system is in place and functioning.
As long as you play around in your own branch and never merge your changes, what's the risk? You can try and make changes purely for the purpose of getting to know the code.
yeah, fair I suppose. Personally, I find that changing things and watching the tests fail gives me a lot more insight than just running the code, but that's just me I suppose.
I agree, but I also worry about a certain "Stockholm Syndrome" with large codebases. Sometimes it's the fresh pair of eyes that can see exactly how bad some things are. Once you've learned to understand something, you tend to have a rosier view of it.
And sometimes the harder it was to understand something, the more invested you become in it and the more you want to defend it.
The trust to refactor needs to be earned. There is a social dimension to refactoring. People that come to a codebase and immediately start refactoring don’t understand this. They believe they can just make objective arguments and others will accept them.
You will become a master developer when you can explain why the thing never should have been made in the first place and then lead an effort to tear it out and throw it away.
You become a guru developer when you realise that judging any code based on any metric without knowing the full context is often misleading. Yes, in general removing code is good, but does that mean removing the full project and committing that is the best change you can do?
Imagine you're developing a storage system with an S3-compatible API that is meant for public consumption, but you don't actually have anything to store. Then yeah, it's a bad idea to write some sort of storage system.
Point being, yes, of course you can come up with 1000s of examples where it's good/bad to delete all code. The point of my comment wasn't "It's never good to remove all code for a project" but rather "If this change is good or not depends on variables from outside the Git repository".
The problem is that this is always, always harder to sell than a new feature/launch/whatever addition. You might be a master developer but you'll not be the highest paid / highest level.
This is why I like to ask people to talk me through a change that I think I don't like, or might be wrong, before passing judgement. One of two things happens in the process of them explaining:
- they know something about the context that I was unaware of or have an insight that I hadn't thought of and so I learn something and change my mind
- they realise they've missed or misunderstood something, or were unaware of something which I can communicate to them, so they learn something and change their mind. Sometimes this can be as mundane as "this isn't idiomatic, we should prefer the community style."
It's almost always a learning experience for one of us, frequently both. I've learnt a lot over the years of reviewing other people's code.
In the cases where neither of these things happen it's because it's a question of personal taste, there's nothing wrong with the code, I just wouldn't have written it that way. In those cases I leave it alone.
Hrrrm, so that comment was badly formatted for some reason, I think I should have put a blank line between the two cases in the second paragraph which should have been a two item list :/
A non-prod-breaking scenario I have seen: only towards the end of a fairly long refactoring exercise, when tests for certain edge cases start failing, do you finally understand the full purpose of the code you are refactoring.
Not to fall into the obvious trope too much, but: Your legacy code has tests for edge cases? I'm happy when it has any tests. More often, it's untested and I write the tests myself before refactoring the implementation.
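The write-tests-first workflow described above is often called characterization testing: pin down what the legacy code does today, quirks included, before touching the implementation. A minimal sketch, where the function and its quirky rule are entirely hypothetical:

```python
# Hypothetical legacy function whose exact behavior we want to pin
# down before refactoring it.
def legacy_discount(total, customer_type):
    # Quirky rule inherited from the old system: "vip" customers get
    # 20% off, but only on totals strictly above 100; everyone else
    # gets a flat 5% off.
    if customer_type == "vip" and total > 100:
        return round(total * 0.8, 2)
    return round(total * 0.95, 2)

# Characterization tests record what the code DOES, not what we think
# it SHOULD do, so a refactor can't silently change behavior.
assert legacy_discount(200, "vip") == 160.0
assert legacy_discount(100, "vip") == 95.0  # edge: not strictly above 100
assert legacy_discount(50, "basic") == 47.5
```

Once these pass against the old implementation, you can refactor freely and rerun them to catch exactly the kind of edge-case surprise mentioned above.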
I thought the whole point of a true refactoring was that it should not lead to any changes in outwardly visible behavior. If you're changing what the code does, that should absolutely be pointed out, regardless of whether it's phrased as "clean up".
Transition to pro TL: keep your solution to yourself, let everyone speak, and piggyback on a team member whose idea comes close to yours, without forcing it. If no solution matches, present yours too, with the added benefit of weighing the pros and cons of the ones before it.
I work in a lot of old systems and this is something I have to remind myself of constantly: the people who were working on these systems before me were not necessarily dumber than me. If they made these choices, they may have had good reasons that are not immediately clear to me, and if I change things there may be far-flung consequences that I am just not aware of.
If a change has far-flung consequences that aren't obvious, the code is of poor quality. If the reasons for the decisions are non-obvious, the code is of poor quality. I've been bitten by assuming that people thought things through the first time too many times. If it's not in the code base or documentation, I'm assuming it was done for expedience. Not because they were dumb, but because that part of the code base wasn't important when it was written.
Trying to understand everything passively prevents you from actually understanding things; trying to change something will either teach you about the underlying issues or show that there are no issues.
Zoom out a little. Code can work perfectly fine technically yet have far-flung consequences in terms of business cases, human workflows, etc, etc.
So it's not as easy as saying "there's always a good reason" or "there's never a good reason". Make a fair effort at uncovering the reason and then it's a judgment call with insufficient data.
Exactly. Code can pass 100% of tests--but the tests only cover what the test writer knows about. Code, especially old code, can encapsulate TONS of weird human and business-specific behavior that you don't know about or that doesn't seem sensible at a quick glance. Don't be so quick to dismiss old code--people rarely set out to build something that's shitty, and code, especially code that seems completely asinine, is rarely just in there for the lulz.
Chesterton's Fence sounds like it was retooled by Jordan Peterson in his definition of truth (broadly: anything evolutionarily useful, Making Sense podcast w/ Sam Harris).
Peterson takes it to weird places, IMO, and Chesterton's Fence works better.
"Don't remove a fence until you know why it was put there."
It was a rhetorical device to support conservatism from the beginning. Chesterton was using it to defend Catholicism, but it works just as well for racial segregation, slavery, voting rights restrictions, any status quo.
For the counterpoint from the late great Joe Armstrong,
> Rewrite from scratch with zero external dependencies - always the fastest way (or at least the most fun way) - it increases understanding. [1]
I don't have a permanent viewpoint on this, I kind of bounce back and forth. I definitely think that whole rewrites are important for getting the module boundaries right, and I think it's important to have seen a problem from a bunch of different perspectives, ideally some of them really bad and borderline outrageous, before you commit to one solution. (“This problem is NP-complete, you just want to brute force it?!” -- yeah it turned out that in practice n was no greater than 9 and you could in fact easily hardcode an upper limit and try all possibilities to find the best one. But even if you don't get lucky, the fact that ridiculous ideas came to mind suggests that all the good ideas were on the table at least once.)
Right now I am more inclined to enjoy rewrites because I am working on a project which itself was a rewrite, and I would like to rewrite it back to something more like what it used to be, hah. If I do get my way you can bet I will turn coat, “we can't rewrite this now, don't you know how expensive rewriting is?!”
I'd suggest there's a difference between "I don't understand how this is supposed to work" to "I do understand how this is supposed to work, and I don't understand why it isn't working."
Seat belt story seems to be Type 1. But what if a lot of "stupid" design decisions are actually Type 2?
And the reasons may or may not be good - somewhere on a scale from real budget and/or time constraints, lack of insight, indifference, penny pinching economics, to passive aggressive user hostility.
When Apple removed Magsafe no doubt there were perfectly good internal justifications for that decision. But ultimately the people outside the company who said it was a poor move from a user POV turned out to be right.
I think it's a case of defining what you know, and asking questions about what you don't know. But of course that's easier said than done: maybe you chase a red herring, maybe you ask the wrong questions. I think asking the right questions comes from expert understanding of the fundamentals of a field, which is really hard, and for a lot of new fields maybe impossible, because no one has established the fundamentals yet, unless you're really smart. Or maybe the thing has been permuted, or joined with another thing, so it really becomes its own new thing.
I personally hate the non-magsafe charger because I trip over it and yank the laptop around, but I can see the trade off in being able to use a single charger for everything. Because of that, I see the magsafe issue being a permutation, because by changing the environment that laptop chargers operate in, with the addition of usb-c, you've changed the game.
> I personally hate the non-magsafe charger because I trip over it and yank the laptop around, but I can see the trade off in being able to use a single charger for everything.
So why the *ck didn't they just include a little bit of magsafe-to-USB-C cable with the phones, and keep the wall-plug-to-magsafe charger? Then they could have transitioned the laptops over to USB-C at any time without inconveniencing users, added the safety of magsafe to the phones, and allowed users to mix-and-match chargers and cables.
But no, that's exactly what they didn't do. Funny that, for this vaunted "customer-centric" company...
> But ultimately the people outside the company who said it was a poor move from a user POV turned out to be right.
Survivor bias? I'm sure there were people outside the company who said removing the headphone jack was a poor move, removing the CD drive was a poor move, moving to Intel was a poor move, removing the HDMI was a poor move, removing the F-keys was a poor move etc.
- you can please some of the people, some of the time.
- some mistakes will be made along the way. That's good, because at least some decisions are being made - when we find the mistakes, we'll fix them.
At some point it boils down to profits. The MagSafe decision was made to increase profits. Apple's assumptions turned out to be wrong, and it realized the change didn't increase profits.
Sorry, you lost me there. It was a bit of an annoyance at first, but I am all-in on the USB-A to USB-C/DP/Thunderbolt/PD unified port transition. I plug in my laptop to a Thunderbolt 5K monitor and also get power for the laptop as well as a bridge to USB devices plugged into the monitor hub. I have zero interest in going back to USB-A.
The monitor has three USB-C ports on the back. I have a couple of tiny USB-C to USB-A adapters in my backpack. No dongle, just an adapter that connects between the A cable and the C port. However, I am using the adapters less and less. I only bought USB-C peripherals for the last 2-3 years.
I think we have the opposite problem too. People ascribe way too much intentional design to shit they don't understand even where no such intentional design exists.
Like 99% of the stuff you encounter on a daily basis is how it is because of some 3-way trade between aesthetically pleasing, the economic realities of producing that object and "how they've always been." It is not engineered for performance in the slightest other than some basic "yeah that should do" napkin analysis.
I think of this whenever I see coat hooks with dual hooks, the top one extending further. I've always assumed that they were designed when people regularly wore hats.
In my house, we use the bottom hooks for "outdoor base" and the longer top hooks for "outdoor top layer". It means you're never putting dry base layers on top of a wet waterproof layer.
I think it also provides value for people to hang purses.
But as other comments mention they found utility in it through layering. It may have been for hats once, but it’s certainly kept around because people found new uses.
> it’s certainly kept around because people found new uses.
You sure that's the reason? I mean, sure, it's possible... But my money is on the "how they've always been" / "the original constraints made sense at the time but they've been forgotten and all we're left with is the end product" mentioned above.
It's happened to me more than once, in a new project, to come across some old code I didn't like, and on trying to understand it, to find out that the old team doesn't understand it either and just does things out of inertia. Now what is one supposed to do here, except start refactoring to get a feeling for what dragons there are?
So to play devil's advocate: a seatbelt harness is actually much safer than the seatbelt in a mass-produced automobile.
The reason seatbelts release tension is a compromise in safety made for the average driver (i.e. the lowest common denominator). The engineers presumably decided to compromise on safety because otherwise most people are not capable of, or willing to, operate a harness. Aircraft and high-performance automobiles use a design that would make perfect sense to a nine-year-old.
Correct. Multi-point racing harnesses can only be used with a cage. The standard diagonal shoulder belt design allows your body to lean over sideways (towards the center of the car) if the roof of the vehicle caves in on your head and pushes you down. If you're in a harness without a rollcage and the roof caves in, the harness locks your spine upright and you end up with a head/neck/spine injury that's very likely to be fatal. This whole line of reasoning snowballs from there, because beyond that:
If you do install a well-designed cage just so you can safely use a harness, now you also need to wear a helmet. A cage without a helmet is begging for a bad head injury, because your head bounces around a little during an accident, and those bars are hefty, rigid, immobile objects mere inches away from your skull.
Even in a fairly low-speed incident that wouldn't cause other injuries, the bars would put a dent in your naked skull. You're also going to want a neck restraint system for that helmet (it's like a collar under the harness with little straps to the helmet, so that it can't move far), in part because of the added head weight (momentum) from the helmet and how rigidly your torso is locked in place, to prevent neck injuries.
Now that you have all those parts down: the combination of the harness locking down your torso, the neck restraint, and the limited window of visibility from the helmet means you've lost a good chunk of your peripheral vision and can't turn your head/neck to check things either. With your vision more or less locked in straight forward, you can't see all the things you really need to see to drive in normal street conditions safely. The limited vision works fine on a racetrack under racing conditions because everyone's trained the same way and operating in a sanitized environment, with no "intersections" or cross traffic, you have flag workers and/or radios to warn you of some things you can't see, etc.
The bottom line is that both street and race car safety systems are a whole complex of things that are engineered together in concert. They're well-tuned to the appropriate environment, and foolishly mixing ideas from the two generally makes you less safe, not more.
Random context video showing a race crash from the inside and outside perspective. Loss of brakes at the end of a high speed straight at COTA (he was still moving at 136 mph at the wall impact, after all the attempts to slow it down). The driver walked away fine, thanks to all the safety gear that the cockpit video shows off nicely near the end: https://www.youtube.com/watch?v=dQitOyEyRd0 .
Adding to that, it helps to think about how the types of accident are dramatically different.
1. In a regular car, most accidents are at relatively low speed, and those at high speed are generally front collisions (for at least one of the cars), so cars are designed so that the front and back are crumple zones that absorb the shock and limit energy transfer to the passengers. So after the belt protects passengers from flying out of (or within) the car, airbags are there to prevent whiplash from front/rear hits.
2. In a racing car, impacts are usually at high speed, and in basically any direction. Accelerations and decelerations will be significantly more brutal, and particularly for rally cars, vehicle rollover is really common. Hence the rigid cage + harness to reduce direct damage to the driver in those events. But even so, since the helmet's weight and the rigidity of the vehicle can cause whiplash and basilar skull fractures, there are also helmet restraints (like HANS[0]) to reduce this risk.
This is the reason I love HN, and I couldn't have come up with a better example of exactly what the article talks about than this comment and its parent. :)
What amazes me most about that crash video is that there are no airbags deployed. You explained that racing safety systems don't belong in street cars. But I didn't expect the reverse as well.
I'm not familiar with this category of motorsport, but at least in F1, all safety systems need to allow the driver to leave the car in less than X seconds in case of accident (for example, if the driver needs to get out of the car in the case of a fire).
So maybe, and this is a guess, airbags are not activated in lateral impacts like this one as they could make this escape more difficult than the damage they are preventing.
Looks like the driver was intentionally skidding the car sideways to increase friction too. That's some very quick thinking I would not expect from a regular person on the street...
The seatbelt is an excellent example since there are multiple mechanisms with varying observability contributing to its effectiveness. Even after discovering the dynamic locking behaviour, a naive yet inquisitive passenger remains blind to the pretensioning feature that is observable only during a collision.
Another is why they lock when you pull them out all the way: I discovered this accidentally as a kid, but just learned the lesson "don't do that" and didn't really think about it further.
It wasn't until I had to install a child seat for the first time when suddenly it became clear this was a very useful feature.
It's an ingenious design. Given a seat belt that cannot be intentionally locked in this way, how would you address complaints from users who want to arbitrarily lock their seat belts? To me, "pull it out all the way" isn't an obvious choice, but is such an elegant solution.
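The pull-it-all-the-way-out behavior can be modeled as a tiny two-mode state machine. This is an illustrative toy model of the dual-mode (emergency-locking / automatic-locking) retractor, not a real control spec:

```python
class SeatBeltRetractor:
    """Toy model: the belt normally extends and retracts freely;
    pulling it out fully switches to automatic-locking mode, where
    webbing can only retract, until a full rewind resets the lock."""

    def __init__(self, max_length=100):
        self.max_length = max_length
        self.extension = 0
        self.locked_mode = False

    def pull(self, amount):
        if self.locked_mode:
            return False  # in locked mode, no more webbing comes out
        self.extension = min(self.extension + amount, self.max_length)
        if self.extension == self.max_length:
            self.locked_mode = True  # full extraction engages the lock
        return True

    def retract(self, amount):
        self.extension = max(self.extension - amount, 0)
        if self.extension == 0:
            self.locked_mode = False  # full rewind resets to free mode
```

The elegance is that the mode switch needs no extra button: "pull it out all the way" is a gesture a user can only make deliberately, so it doubles as the control input.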
Thank you! I have wondered this myself and mostly just fine with “it’s annoying so I make sure not to ratchet it anymore,” but since I don’t have kids (yet?) — thank you for the explanation!
Kinda related. When I was like 11-12ish, I remember waiting in my Dad's car with my younger brother when he went into a store for like 20 mins. We started playing with the steering wheel and then suddenly I couldn't turn it anymore. It was locked. We were shit scared that we broke something and how will Dad drive the car now ? I remember being quiet when my Dad got back to the car (I had warned my younger brother to deny that we did anything) and the moment he put the key in ignition, boom. I face palmed in my mind and learned how steering wheel locking works :)
One particularly prominent special case of this is when one engineer makes a snap judgment about another engineer's work that it's "overcomplicated". Oftentimes this means that the person making the judgment hasn't thought deeply enough about the problem to be able to see all the complexities in the problem that justify all the complexities in the solution.
I feel like, at this point in my career, and in the specific context where I work, this has now become the bottleneck to my career progress.
The problem is: It works as a self-fulfilling prophecy based on the fallacies of social proof and fundamental attribution error. People reach for the snap judgment "this is overcomplicated" because they expect me to just be the kind of person who just overcomplicates things. And they feel easily justified in that judgment given that other people also tend to make that judgment about me. This means they budget even less time for trying to understand the complexities in the problems I solve before reaching for the easy judgment of "this is overcomplicated".
So: The guy who always overcomplicates things, thus draining the company's cognitive resource needlessly and just generally being a nuisance, ends up looking much the same from the outside as the guy who could just be a company's strongest engineering asset. He could be the guy to put the company ahead of the competition, always understanding the engineering problems more deeply and solving them more thoroughly than others, including the competition, in a way that the competition can't easily replicate. -- ...if only the people tasked with passing judgment could overcome biased and lazy decision-making.
> Oftentimes this means that the person making the judgment hasn't thought deeply enough about the problem to be able to see all the complexities in the problem that justify all the complexities in the solution.
This is often the case, but even more often I think my annoyance with the overcomplexity of the code isn't that the code solves the problem in a too complex way, but that the code solves a much too complex problem.
Most of my job as a developer is fighting complexity. Doing it in code is an uphill battle. If you have to solve complex problems with non-complex code, you are fighting a losing battle. The struggle to keep complexity out of the code happens in the dialogue with stakeholders (Project managers, customers, team etc).
Now, there are of course cases where the conclusion after the complexity debate might have been that "yes, the requirement is for the program behavior to really be this complex, meaning the complexity of the code is required". But this is extremely rare. The reason I get annoyed when seeing complex code is because through the decades I have learned that almost invariably, someone took a complex requirement at face value and implemented a complex solution to the requirement.
I'm often reminded of the saying by H. L. Mencken "Explanations exist; they have existed for all time; there is always a well-known solution to every human problem — neat, plausible, and wrong."
Now, I do agree that 80% of all engineering problems that one comes across in practice are amenable to the kind of problem solving tactic where instead of solving problem X, you instead look for opportunities to solve adjacent problem Y which is of lesser complexity.
But sometimes, problem X is just really the problem you actually have and actually need to solve. And you need to add to that the fact that I specifically seek out these situations, because craftsmanship ranks high among my personal values.
To give you a bit of context: Using data from the stack overflow developer survey, I rank in the 87th percentile among developers in terms of years of my life that I've been coding, 70th percentile in terms of years of coding professionally when I don't count my Ph.D. as "professional" coding. The Ph.D. puts me in the 97th percentile of educational attainment, and happens to be from a top-5-ranked university. I spend 50% of my time at work doing boring mundane things like implementing ETLs, which I've done professionally since I was 15 years old (I'm now 37), and I don't even complain about that.
With the other 50% of my time, I try to get my psychological needs around craftsmanship met, and seek out the toughest engineering problems we have in the company where I work. And then those are precisely the sorts of situations, where I have 20-somethings who are my bosses tell me that I always overcomplicate things. This feels extremely humiliating to me, and is a huge lost opportunity for the company.
> Now, I do agree that 80% of all engineering problems that one comes across in practice are amenable to the kind of problem solving tactic where instead of solving problem X, you instead look for opportunities to solve adjacent problem Y which is of lesser complexity.
The problem with these "overcomplicators" is that they see those 80% of problems as the 20% (and in my experience it's more like a 95/5 split).
> To give you a bit of context: ...
If I gave one of my coworkers (or reports) feedback that they had overcomplicated a solution, and their response was to note that they're in the 97th percentile of programmers with a PhD from a top-5-ranked university, I don't think that would dissuade me in any way; in fact, probably the opposite. It's an ad hominem defense: defending _you_ rather than the decisions you've made.
> With the other 50% of my time ... seek out the toughest engineering problems... where I have 20-somethings who are my bosses tell me that I always overcomplicate things.
With all due respect, they might have a point. If everywhere you go smells, maybe it's time to check your shoes.
> If everywhere you go smells, maybe it's time to check your shoes.
...well it could also be the fundamental attribution error I mentioned earlier.
The very way this thread is unfolding reflects my initial point: From the outside it looks the same.
Telling which is which really requires looking at the specific situation, thinking about it very deeply, and resisting reasoning on an ad hominem basis, resisting the fundamental attribution error, and resisting superficial analogies, like your shoe analogy, or "I've been in lots of situations with engineers who overcomplicate things, and usually they actually do overcomplicate things, so I'm just going to treat it as if that's always the case."
Regarding the paragraph about percentiles: I did not write that to suggest that I'm always right, and others are always wrong. I wrote it because I think it's natural that 90th-percentile engineers would seek out 90th-percentile problems to tackle. And it's precisely that point where reasoning from "usually..." becomes invalid reasoning.
In fact the seatbelt example is the perfect example, because if you show somebody a seatbelt mechanism, their initial reaction is indeed likely to be "This is overcomplicated. Why not just use a fixed rope?"
> it could also be the fundamental attribution error I mentioned earlier.
It absolutely could be, you're right. On the balance of probabilities if it's _consistently_ happening to you, maybe it's not as simple as "I'm smarter than everyone else in the room".
> "This is overcomplicated. Why not just use a fixed rope?"
The answer to that is clearly demonstrable, and the analogy holds true. If you are an engineer designing a solution to a problem, you should be able to articulate _why_ this solution is necessary and what problems it solves. If you're consistently being told that it's over-engineered, and aren't able to refute that (and you imply that this is humiliating for you), then maybe the solution is unwarranted.
Reducing the "overcomplication" problem to a "communication" problem doesn't help, because it suffers from the same shortcoming where these Dunning-Kruger-esque paradoxes are concerned.
All too often the pattern is the judge saying or thinking something like this: Since the subject matter is amenable to proof, and since I, the judge, am such a smart fellow that I would certainly be able to follow any proof presented to me with little effort, my reasoning now works as follows:
Step 1: I expend little effort when passing judgment. For example, I shall feel free to declare something "overcomplicated" if it comes from a person about whom I hold the opinion that he tends to overcomplicate stuff (fallacy).
Step 2: The burden of proof to dissuade me from that judgment then falls on the engineer. When he presents that proof, I shall expend only a little effort in trying to follow it, and if I can't, then it surely must be because, on top of overcomplicating his engineering solutions, this particular engineer is also a bad communicator. Note that on top of piling one piece of fallacious reasoning on top of another, this pattern now also starts to take on the flavour of special pleading.
Step 3: I am then no longer under any professional or moral obligation to listen to pretty much anything that person has to say. And I will not let this stop me from using my influence to prevent this person from having a career in the company where I work.
> Oftentimes this means that the person making the judgment hasn't thought deeply enough about the problem to be able to see all the complexities in the problem that justify all the complexities in the solution.
While this is true, oftentimes it's true that the person who solved that problem solved it through a lens of their own knowledge (blindspots included). Some of the biggest, most difficult to use messes have come from the "expert" developers who solve a whole bunch of problems that are tangentially related with one solution that is 50% neat and 50% duct tape because they _didn't_ think about the complications when they designed it.
The reverse is also quite true.
You can get a thing which just seems too slow/fiddly/buggy/etc., but it's in an area you are unfamiliar with, so you pass it off as just your misunderstanding of the thing. When you investigate it more, you find that there are ways people use it that avoid the problems you are having with it, or that pieces of the thing make a certain amount of sense the way they are.
Then as you learn more you begin to understand the underlying reason that the thing is the way it is, and you realise that it's not you. Poor decisions were made in the design of the thing initially (or at least decisions which once valued the right things, but now do not), and have become baked into the DNA of the thing so it could not be fixed without major changes that nobody wants to do.
It's at that point, you realize that the emperor has no clothes, and certainly didn't deserve the benefit of the doubt.
I find this is particularly the case when the design flaw is particularly bad: the developers end up adding complexity to work around the flaw, which has the effect of hiding it, making it a lot less obvious to the novice.
This reminds me of a quote from John Carmack that gnaws on the back of my brain during day to day development - "A large fraction of the flaws in software development are due to programmers not fully understanding all the possible states their code may execute in." Seems it would also be relevant to most of modern inventions.
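To make the quote concrete, here's a toy sketch (the feature and its flags are invented for illustration) of how quickly the state space outruns what anyone actually thought about:

```python
import itertools

# A hypothetical feature with three independent boolean flags already has
# 2**3 = 8 states; bugs tend to hide in the combinations nobody tested.
def handled(state):
    # A naive implementation that only considered the "obvious" cases.
    logged_in, offline, draft_pending = state
    if logged_in and not offline:
        return True   # normal online editing
    if not logged_in:
        return True   # show login screen
    return False      # logged in while offline: never considered

unhandled = [s for s in itertools.product([True, False], repeat=3)
             if not handled(s)]
print(len(unhandled), "of 8 states unhandled")  # 2 of 8 states unhandled
```

Three flags is trivial; add a network connection, a cache, and a retry loop, and "all the possible states their code may execute in" stops fitting in anyone's head.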
Echoing another replying comment: and it's not possible for them to understand it all either.
Fun (/s) example:
- Outlook email client in the browser (Firefox in my case)... typing a new email up. Hold down backspace to delete a bunch of characters... and occasionally my browser will navigate backwards. ("smart" text suggestion lagging, causing keypresses not to be always captured by the email text input box, perhaps?) At least Outlook auto-saves drafts reliably!
- I (probably) can't be mad at the individual programmer though. I'm the oddball that has reverted the browser setting for "treat backspace as browser-back, outside of text fields". I bet that never made it into a test-case anywhere.
- I could be mad at the browsers that changed that default behavior several years ago (I feel like Chrome did it first, but could be wrong), though they did it for the valid reason of protecting the (less aware, imo) users!
- I guess I could be mad at all the websites that poorly implemented form-filling, letting users get burned by accidentally going back a page and losing their input. But I can sympathize with too many "features" and not having time to implement that edge case!
Sigh, my ideals of software quality are all doomed, aren't they?
For a long while now, I've always had a mouse button mapped to backspace. Good for browsing and for deleting. ... Upon testing, it seems I no longer do this, and forgot. Guess that button's set to "browser-back" now.
The whole point of layers and abstractions is to reduce the number of states you have to worry about. The way that people talk about abstractions like they're a bad thing makes me think they've never really stopped and thought about all the many, many good abstraction layers they use. It makes sense: a great abstraction is basically invisible, so you don't even notice it unless you pay attention. But everyone who says "too many abstractions" has clearly not stopped and paid attention. You don't need fewer abstractions, you need better abstractions.
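A mundane sketch of that point in Python: the context-manager abstraction makes a whole family of "forgot to close / early return / exception" states unrepresentable.

```python
# Counting lines in a file. The manual version forces you to track the
# "opened / exception raised / closed" states yourself on every code path;
# the `with` version delegates all of that to the abstraction.

def count_lines_manual(path):
    f = open(path)
    try:
        return sum(1 for _ in f)
    finally:
        f.close()  # easy to forget, easy to get wrong with early returns

def count_lines(path):
    with open(path) as f:           # close is guaranteed, even on exceptions
        return sum(1 for _ in f)
```

The second version isn't shorter because it does less; it's shorter because a good abstraction absorbed the states you would otherwise have to reason about.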
An abstraction layer makes itself most visible when it fails. Avoiding failure requires better, or fewer, abstraction layers. In many cases I don't trust others to build correct abstraction layers: good UI libraries, reliable GPU drivers that don't corrupt memory when sleeping the system, etc. (Filesystems and virtual memory are largely reliable in the common case, though it's still common to avoid filesystems, allocations, and swapping in real-time code, and schedulers cause grief when they make media playback stutter while I'm running compiler jobs.) Often I trust myself to work with fewer abstraction layers more than I trust others to build better ones.
The other side of this coin is that some ignorance can be beneficial to break out of local minima.
> How do you overcome schlep blindness? Frankly, the most valuable antidote to schlep blindness is probably ignorance. Most successful founders would probably say that if they'd known when they were starting their company about the obstacles they'd have to overcome, they might never have started it. Maybe that's one reason the most successful startups of all so often have young founders.
Re: "young founders," this is simply not true, at least not anymore, if it ever was:
> More broadly, 2018 research published in the Harvard Business Review found that the average age at which a successful founder started their company is 45. That’s “among the top 0.1% of startups based on growth in their first five years,” according to the report.
Also, "most successful startups". OK, let's do a quality-of-life scale: 0 for destitute, 10 for billionaire. Millionaire would be 9.8 on the scale. Hell, maybe it's the other way around: being a millionaire is better than being a billionaire (because of the problem of fame).
A lot of people reach that by doing mundane things like starting a tyre fitting shop, or janitorial business, or flipping houses, or very niche software businesses.
Depends what we mean by success. There is the 'make investors rich' kind of success that an investor is looking at encouraging people to do.
I think the key distinction here is "the most successful startups", not "a successful startup". If you look at the biggest successful startups in the last ~20yrs it definitely seems like the founders skew extremely young.
Older entrepreneurs are quite successful as well; the difference is they aren’t in tech, so you don’t hear about them. They’re in real estate, construction, logistics, infrastructure and so on. They don’t found startups, they start small businesses, usually with debt or cash, and they don’t sell equity, so they keep ownership of the business. They are a big fraction of the >22 million Americans who have a net worth greater than $1M.
> They don’t found startups, they start small businesses.
You (probably intentionally?) put the finger on something that's been annoying me about this whole "Startup!" malarkey for a while now: The pretentious terminology. "Startups" are actually just small businesses.
My “other side of the coin” is: as a designer of a system, never explain how it works; just tell people it is smart and that they should not assume otherwise, since they don’t know how it works. And then go back to swimming in your giant vault of money saved by skimping on security.
I think the harder part is determining how much you actually know about something, to know how much you can judge.
My default nowadays is I never know enough, even in the fields I'm specialized in. Part of it is I think just a limit of how much information I can accurately refer to/pull up in my brain at any given time too.
Sometimes the engineers are "too" clever. In my car, seatbelts have an additional mechanism which causes them to stop if the car is braking (with a certain acceleration, perhaps, or actual detection of the brake - I haven't determined). This seems very sensible until you live near an intersection which is not at right angles, which requires the driver to lean forward upon arrival at the intersection in order to look around the A pillar for oncoming traffic.
What's interesting is that I now know something that the engineers designing the system should have known but apparently didn't, or knew and didn't care about. So sometimes (or much of the time?) users do know something the engineers don't.
> In my car, seatbelts have an additional mechanism which causes them to stop if the car is braking (with a certain acceleration, perhaps, or actual detection of the brake - I haven't determined). This seems very sensible until you live near an intersection which is not at right angles, which requires the driver to lean forward upon arrival at the intersection in order to look around the A pillar for oncoming traffic.
This is a tradeoff.
You can have a pure pay-out based locking retractor (centrifugal clutch). Or you can have a pendulum-based mechanism, or a combination of the two.
If you have the pendulum-based one, you'll be less likely to false positive based on the movement of the occupant but more likely to false positive based on deceleration of the vehicle.
The combination is best, because you can be relatively insensitive to individual occupant movement and vehicle deceleration and still actuate reliably in a crash. But it will still false-positive.
In any case, "not annoying the user" is prioritized beneath "saving lives in a crash" by regulators, and hence the auto industry.
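A caricature of that tradeoff in code (every threshold here is invented for illustration; real retractors are mechanical, not software):

```python
def webbing_lock(payout_rate):
    # Centrifugal-clutch style: locks when the belt pays out quickly
    # (occupant lunging forward).
    return payout_rate > 1.5          # m/s, made-up threshold

def pendulum_lock(vehicle_decel):
    # Pendulum style: locks when the vehicle decelerates hard (braking).
    return vehicle_decel > 4.0        # m/s^2, made-up threshold

def combined_lock(payout_rate, vehicle_decel):
    # With both signals available, each individual threshold can be set
    # higher (fewer false positives from either signal alone) while still
    # locking reliably in a crash, where both signals spike at once.
    return (payout_rate > 2.5 and vehicle_decel > 1.0) or vehicle_decel > 8.0

# Leaning forward while braking at an angled intersection:
print(pendulum_lock(5.0))            # True  - the pendulum-only design locks
print(combined_lock(2.0, 5.0))       # False - the combined design does not
# An actual crash: both signals spike.
print(combined_lock(6.0, 12.0))      # True
```

Even here you can see the residual false positive: lean forward hard enough while braking hard enough and the combined design locks too, which is exactly the annoyance described above.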
I'm guessing you brake rather abruptly in this situation if this is something you routinely encounter.
I recently upgraded vehicles from a 2009 to a current one with all the new safety features. In my research I read lots of complaints about emergency/auto braking activating too much and other features of the car stepping in to assist the driver. Since switching vehicles, on the incredibly rare occasions something has activated, it's always been valid and a quicker reaction than my own.
Your tag line reminds me of all the times I've pondered how bad these complaining drivers must be.
Odd that you should mention this: tonight my car decided it needed to ABS brake (very alarming when unexpected) while I was reversing at less than 5 mph, apparently because a car >20 feet away was turning into a parking spot, well out of my path of travel. The car proudly informed me it had averted a collision, when in fact the application of the brake was unnecessarily violent enough that it quite possibly did damage on its own, and could have caused injury. Progress!
This is one of the things I love about open source software. When I want or need to, I can attempt to modify something to better suit the behavior I want. Some fraction of the time it ends up being too much effort, or I learn that my idea didn't take into account some secondary factor. But often enough I can make a change, and maybe even contribute it back. I'd love to see more open source in physical products - 3D printing seems like a good start.
It's always terrified me how one experience or one anecdote can completely change a person's view on a topic and make them blind to the overwhelming evidence they've previously experienced and accepted (and ignore future evidence).
A single rude salesperson and they won't set foot in the entire chain; a bad cop or doctor and the whole police force or medical establishment is corrupt. Listen to a politician giving a speech and people become lifelong supporters, even when their policies change radically.
There is something deeply embedded in the human mind that makes us very susceptible to some stories - against all evidence, if it fits our preconceived perception or coincides with something in our memory.
> a bad cop or doctor and the whole police force or medical establishment is corrupt.
I’m not sure these are the greatest examples.
If “good” cops stand by watching “bad” cops abusing their position including stealing, lying, assaulting, murdering or torturing people, then maybe we are speaking of institutional issues blurring the lines between behaviors typical for organized crime and ones observed with the police force. Especially if whole police teams threaten to stop doing their jobs if their insanely inflated budgets get reduced.
Of course there are good cops. You can often hear about them getting pushed out of the police force for reporting the crimes of the bad ones.
If doctors are allowed 20 minutes per patient visit and are constantly pressured to prescribe medication and procedures for profit then we might have an issue there, too.
One judgment error that people often make is to assume it's about "cops" or "doctors" or "politicians" or whatever. Like, take any large group of people - some of them will be good, some will be bad, some will be weasely, some will be cowards, some will be heroes, some will be psychos.
If your reaction to group X is "it needs to be disbanded because <incident>"... I suggest that this is not a wise reaction. In this sense, all "establishment" is corrupt, to some degree, by definition. To try to create brand new "uncorrupted" replacement is often a recipe for disaster (especially if you only rely on "new people", not on actual new systematic ways of weeding out said corruption).
With this theory, you are making it impossible to talk about institutional failure and about culture within institutions. It also makes it impossible to reform those.
Because whenever they're caught, bad behavior will always be framed as individual failure, despite those individuals being encouraged to act that way by their bosses and peers.
No. I'm saying that institutional failure should be addressed/solved "in place", rather than blindly hoping that disbanding said institution would solve anything.
- There's a high likelihood that the institution was created because it's useful. Before arguing for its complete removal, the onus is on you to prove that you understand why it was originally needed, and why/how the circumstances changed in a way that makes it non-useful/not needed anymore.
- There's a high likelihood that if it's needed, just disbanding it and creating a new one will not solve stuff in the long run. Before replacing the institution one needs to show that they understand what are the challenges of reforming it (why reforming will be impossible/more difficult than recreation; and why the new institution will avoid having the same fate).
It's in a way similar to software: many engineers will jump at the opportunity to declare that "old code is garbage and should be rewritten from scratch"; but that is seldom a good choice, and once you start on that path, if you're simply trying to replicate the same functionality you are more likely than not to end up with new code that is still garbage.
Perhaps I should have said rude or lazy cop or doctor rather than bad - of course it's the fault of the police force or 'medical establishment' if bad police or doctors are allowed to continue - but there's a lot of scope for occasional mistakes, rudeness, and just having a bad day that is hard for the organisation to get rid of.
I think there is an innate human bias to weight negative experiences more strongly than positive ones.
For example, I will forget a compliment a day later, but I will never forget that one time I was harshly criticized.
It makes sense from an evolutionary perspective as well, the worst possible experience (death) is far worse than the best possible experience (food,sex,comfort,etc).
Re: your example about politics. Politicians exploit this by talking about how bad the others are, rather than how good they are.
I like the human-relatable story behind it. I think many kids experience a similar situation that sparks this life lesson.
My younger self condemned my father's printer as "dumb" for printing a multi-page print job in reverse order. It took me a couple of days to understand that it does that so the pages are sorted when you pick them up (the printed side faces up).
By that time I had already publicly declared the stupidity of that printer. I think the shame after my enlightenment deepened this life lesson.
In the spirit of humility that this article promotes, can anyone explain to me why so many doors have handles that stick out when they’re push doors, plus a sign that says "push", instead of having no handle and, say, a protruding flat push surface that requires no sign and intuitively indicates "push", even for the sight-impaired?
No good reason. The book he mentioned here talks about it.
> Sometimes things really are poorly designed (check out The Design of Everyday Things)
They're sometimes called Norman doors after the author of that book. (Seems weird to me to name something after the person who pointed out how bad it is, but whatever...)
Saves manufacturing costs. When I make a door I want to make one model that will function for every use case. The "pull bar" can be used for push and pull so I put it on both sides and be done with it.
My household was a little more conspiratorial, so my childhood version of “the engineers must be stupid” was “they figured out it was a good way to rip you off.”
I would add the caveat that, when doing so, be aware of, and vocalize, your current state of familiarity. Also, I find that asking questions leads to better outcomes than making statements.
My experience expands a bit on this 'mental rule'.
I judge something 'proportional to how much I know about it' and in the context of when it was developed.
Designers take things outside of the design into consideration. The design may remain alive and in use (but not necessarily the original intended use) way past the original, outside environment.
Yep. Sometimes: "this is no longer how we would handle this today, but at the time it made sense because of X" is a common finding if you dig deep enough.
When I was a kid I decided that test pilots must be idiots.
My logic:
Being a test pilot is dangerous. Only idiots do dangerous things. Therefore test pilots must be idiots.
Later (embarrassingly later) I learned that many test pilots were also engineers. This made me reconsider my opinion. I learned to be very careful when judging intelligence, and also the limits of inference.
Being an idiot is a varied and nuanced thing. Plenty of very intelligent people are idiots in one way or another. Being drawn to excessively risky behavior is just one aspect of idiocy.
I think these comments highlight to me how difficult it is to reason and talk about intelligence.
Intelligence and stupidity suffer from fundamental attribution error.
The main thing I learned from reflecting on my error is that making any sweeping conclusions about people is foolhardy.
So yes, engineers can be idiots, test pilots can be idiots, smart people can decide to do dangerous things on purpose, idiots can do dangerous things on purpose, idiots can do dangerous things on accident, so on to absurdity.
Putting people in buckets based on some observed criteria is rarely advisable; everyone's story is different.
Then don't judge anything as 'stupidly designed'. What does that mean? Just say "I think the design flaw is XYZ".
If I find a stupidly designed product, it is often because it was the cheap option. Spending more saves money in the long run. Water bottles spring to mind - most of them leak or get damaged by dish-washing sooner or later. Spending $20 on a water bottle is cheaper than spending $5 ten times. Although a high price isn't a guarantee of quality either!
If you are a 9-year-old developer, this makes a lot of sense. Most people in a professional situation are closer in skill level, and this kind of disparity indicates a lack of discipline when hiring. I see it all the time, so it is not unusual.
I drew a similar conclusion about people's observations in general: the faster the conclusion, the less knowledge about the topic - which is similar to Kahneman's System 1 (intuition).
I first tried out the WWW when it had fewer than 2000 pages, and didn't see the point. It's my constant reminder that if something initially looks useless to me, it's probably me.
ffff. Can confirm, people setting up traffic lights are idiots. Sometimes they're set up based on 15 minutes of observation. I don't care how many weeks you run your crappy simulations; they're so inaccurate that it's laughable.
Source: I built a startup in this industry to prove it.
Edit: I kid you not, I once went to a conference and a traffic engineer for a sizable city got on stage to present the best idea they had and it went like this:
> So once traffic gets to 80% on the main road, we're just going to change all the lights to green and everybody on the side streets can just suck eggs for 5-7 minutes.
That's it. That's the best their simulations could come up with. These people use interns with clipboards to get their data.
I worked at a local department of roads and motor vehicles where their traffic light management software was written in the late 80s to early 90s and looked like it.
Mind you, they had modernised it! It worked on 32-bit!
If anyone thinks that this Windows 95-era application had any kind of smarts in it at the same level as modern machine learning, AI, or even basic queuing theory, they would be sorely mistaken.
I believe all it had were some basic weekday-weekend and peak-afterhours scheduling capabilities. It also had sensor integration, but only at a few hundred key intersections around the city.
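For flavor, that level of "smarts" fits in a dozen lines. This is a hypothetical reconstruction of what such a plan table might look like, not the actual software; all the green-split numbers are invented.

```python
from datetime import datetime

# Fixed time-of-day signal plans: seconds of green per cycle for the main
# road vs. the side street. Every number here is made up for illustration.
PLANS = {
    ("weekday", "peak"):       {"main": 50, "side": 20},
    ("weekday", "afterhours"): {"main": 35, "side": 25},
    ("weekend", "any"):        {"main": 30, "side": 30},
}

def plan_for(now: datetime):
    # weekday() returns 0-4 for Mon-Fri, 5-6 for Sat-Sun.
    if now.weekday() >= 5:
        return PLANS[("weekend", "any")]
    period = "peak" if now.hour in (7, 8, 16, 17, 18) else "afterhours"
    return PLANS[("weekday", period)]

print(plan_for(datetime(2023, 6, 5, 8, 15)))   # a Monday rush hour
print(plan_for(datetime(2023, 6, 10, 14, 0)))  # a Saturday afternoon
```

No queuing theory, no sensors beyond the schedule itself; a clipboard-and-intern calibration of a lookup table.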
I see your Chesterton's fence and raise a Gell-Mann amnesia. Yes, the phenomenon in the article is real, but the opposite is also true. Many times I have seen something, thought it is silly, wrong, or could be done better, but I reserved judgement because I didn't know much about the subject. And then in the end, I was right.
Be it some technical choice at work that had obvious flaws. Or when we were renovating our house and as a layperson I noticed a serious problem the experts didn't see. Or when I was reading about poststructuralism, or critical theory at university. I had a feeling it was just a lot of word games around a couple important ideas - then I put in the work and read books and went to courses, and yupp, that was basically true.
Looking at it from the other side, as an expert on some topics, I know there are a lot of things we do that are not justified by the "subject matter" but we just do them because we have always been doing them, or because a pointy haired boss decided so. Or we have operational blindness and can't notice the flaws anymore.
Likewise, most things are the way they are for a reason, especially within rigorous domains such as engineering, math, physics and programming. It is not to be expected in the safe spaces of BS vendors, in subjects like politics, economics and social science.
I'm really surprised by the pushback this sentiment and the related idea of Chesterton's Fence gets from a lot of folks at HN. It seems eminently sensible to me. I understand how some people might think it would promote inertia, tech debt, and a kind of conservatism that can kill companies and institutions, but approached in good faith, I think it can probably prevent a lot of headaches. Surely the point is not to uncritically preserve the status quo, but to interrogate deeply why the status quo is the way that it is, to better decide whether or how a thing should be changed. And surely there is peril in the opposite impulse.
There are two parts to this 1) willingness to challenge the status quo 2) intellectual humility enough to seek to understand why the status quo exists.
There is some conflation of these two concepts in the comment threads.
Probably a larger fraction of software is objectively bad than of the other things people make, so we are used to looking into the details and immediately making things 10x or 100x better.
I think the cavalier attitude stems from the fact that a lot of people work on software that’s really not that important in the grand scheme of things, so it’s not a huge deal to break things, make mistakes, and ship buggy code.
There are a lot more people programming social media sites than there are programming train traffic systems or missile controls—I have a strong feeling those that work on these sorts of projects have a much different attitude.
One of the best documents about software development I've ever encountered was the JSF Air Vehicle C++ Coding Standards by Lockheed Martin [1] - turns out that if you are writing code for multi-million-dollar, nuclear-first-strike-capable weapon platforms, "fail fast, ship updates often" doesn't quite cut it.
>- turns out that if you are writing code for multi-million-dollar, nuclear-first-strike-capable weapon platforms, "fail fast, ship updates often" doesn't quite cut it.
I would try not to write in C++. There are languages that are much safer.
I work in defense for a company called Anduril. I can tell you the code that Anduril ships is buggy, hacked together, and has basically no test coverage.