Hacker News | mmillin's comments

>Worse, many behaviors of the system don’t necessarily have a lot of conscious intent behind them (or any)

This is one of the key reasons why I’ve ingrained the “You Ain’t Gonna Need It” principle into my development philosophy. Many times I see engineers build a system that can support a dozen different use cases, but only two are used in practice. A few years down the road, when several other new features have been added and the codebase needs a refactor, the extra complexity of those ten unused features becomes a hurdle. And anyone who knows better has moved on or been reorg’d away. I’ve lost too many hours bending over backwards to support a feature that I learn much later was never used.

Note that this applies to features that increase complexity. Sometimes you can support more with less complexity through a simpler abstraction, which I’m all in favor of.


>I briefly try to think of which chucklefuck I could blame this design on, but truth be told I rubber-stamped enough questionable pull requests in my time here that a fair amount of this situation is a mess of my own damn making.

This one really rings true. Every individual change you don’t push back on makes things slowly, incrementally worse until you end up with a pile of garbage. But do you really want to block someone’s change because they wrote some awkward, hacky code? After all, they’ve solved some problem for the business and it might only take an hour to clean up later.

Later never comes. Hacks get built on top of other hacks and that one hour improvement would now be a week long refactor. No one can justify that.

After a few rounds of this you start to become the type of person who blocks changes for code clarity and things others view as nitpicks. Now you’re the asshole stopping people from getting work done. You ask yourself if it’s really so bad to let it slide this one time.

Repeat.


>This one really rings true. Every individual change you don’t push back on makes things slowly, incrementally worse until you end up with a pile of garbage. But do you really want to block someone’s change because they wrote some awkward, hacky code? After all, they’ve solved some problem for the business and it might only take an hour to clean up later.

That depends on whether the hacky code is nicely contained or spreads its bad influence all over the place.

I would totally ask to add a reference to the tech debt ticket right there in the code, so it's clear it's not a good practice to follow, and to make sure the ticket is actually created rather than "scheduled to be created" by tossing an async job into a memory hole.

Half the time, just asking for that is enough for stuff to be straightened out right there, as it's easier than creating a ticket.

>Later never comes. Hacks get built on top of other hacks and that one hour improvement would now be a week long refactor. No one can justify that.

But that's sometimes okay; it's the lifecycle of software. If the business doesn't schedule maintenance, then maintenance schedules itself. The job of a good engineer is to keep the business types in the know and let them decide.


> But that's sometimes okay; it's the lifecycle of software. If the business doesn't schedule maintenance, then maintenance schedules itself. The job of a good engineer is to keep the business types in the know and let them decide.

It's complicated. I've worked on a product that had accumulated 15 years of tech debt. This all happened for very good reasons. The previous leadership often needed to ship features to get contracts signed and to make payroll.

However, paying that debt off had become very expensive, and getting meaningful returns from those improvements took a long time. The most direct value came from making tests more comprehensive and faster. Beyond that, benefits were only tangible over months to years, and only if you worked in the right code area. In a large corporation, leadership tenure in a role is often shorter than that, so the personal incentive for much of leadership was to just ignore the tech debt.

It's an extreme example but it's where you can get yourself.

Edit: I am honestly not sure I have a strong recommendation from this, other than "watch the tech debt and pay it off when you get breathing room". But then the original company leadership AFAIK never paid most of the cost (if any?) of the accumulated debt, and AFAIK had two(!) nice exits from it.


> I would totally ask to add a reference to the tech debt ticket right there to the code

99 times out of 100, they create the tech debt ticket, they add the comment, and multiple years later no follow-up has ever been scheduled; the ticket eventually gets resolved as "won't fix" or is otherwise never looked at again.


I learned a simple trick. Document all the shit, and invent a metric that shows the difference between a painful, shitty service-component-whatnot and a good one. The next time you are asked about the feasibility of a project touching the blob of pain, point to the document listing its status as "needs maintenance". Guess what: time to make it right is allocated more often than not, and if shit goes south, I will be taken seriously the next time.

That's just business. When a tram line has to be constructed and there is a leaking something something that will sabotage it if not taken care of, it's taken care of and included in the funding.


> But that's sometimes okay; it's the lifecycle of software. If the business doesn't schedule maintenance, then maintenance schedules itself. The job of a good engineer is to keep the business types in the know and let them decide.

Or sometimes the code continues to encrust itself, like a pearl - or more likely, like a kidney stone - not fatally, but causing pain all the goddamned time.


This doesn’t just apply to code - it applies to organisations just as easily.

I ate a lot of cat turds as a kid. Ate a whole bunch through university, and then I signed up to work for a cat turd factory in London.

Sucked. Decided to stop eating cat turds and go start a candy factory instead.

For the first few years, we made candy. Good candy. Then, slowly, gradually, over the course of a decade, we realised that our clients wanted cat turds, so little by little we started putting more turd and less candy in the product, and as part of that we of course had to start eating cat turds again. I would distribute them to my staff, our investors would check in to ensure everyone was still eating their turds, and I would eat as many of them as I could to save the staff from indigestion.

Fast forward a decade and we had a full scale cat turd factory going. I’d get up every day, ram thousands of the things down my throat, spent my nights vomiting them up, for it turns out you really can only eat so many, and then repeat.

Eventually, my body started failing. It turns out you can’t live on cat turds alone.

So I quit, and only snack on a cat turd occasionally these days, when the state mandates it.

The factory is still going strong. After my departure they got rid of the staff who didn’t realise how many turds they’d have to eat after I left, hired coprophages, and switched over to producing a successful line of cat diarrhoea.


> But do you really want to block someone’s change because they wrote some awkward, hacky code? After all, they’ve solved some problem for the business and it might only take an hour to clean up later.

Isn't this precisely the time to do it? You reject with a note that says the idea is good, but code needs to be improved. "Doesn't fit the style" type of response. Make the contributor make the update so that it doesn't need an hour later. Take the hour now.


In my experience these hacky changes are usually a consequence of some impedance mismatch in the codebase. Something is poorly factored, the interface (whether technical or personal) isn't suitable for purpose.

It's easy to push back on a hacky change if there's an elegant solution close at hand. But often the business needs and the architecture of the codebase are at odds with each other.


It might take me an hour, but communicating the idea to the engineer, convincing them it’s better, guiding them to the solution, and then having them build it will take far longer wall clock time.


Devs with this line of thinking are precisely why there's a backlog when the work can only be done by one person. Instead, creating style guides or other structured documentation that can be referenced in the rejection would be time much better spent. It helps contributors become better contributors, and it makes contributors want to continue contributing. Having every submission rejected gets to the point of not wanting to contribute at all.


The biggest cause of these issues isn’t style-related or something that can be easily documented ahead of time.

A recent example: an engineer wrote a custom caching layer for a service call, then called the service wrapper every time the data was needed, relying on caching to avoid hitting our dependency too often. I suggested fetching the data once and passing it around, but the engineer pushed back, citing the feature’s launch deadline and the effort required to update multiple interfaces. Their solution ultimately has more failure modes and is harder to test (requiring mocking of the service wrapper in several places), but it isn’t terrible and we probably won’t encounter cache overflow issues.
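A rough sketch of the trade-off (all names hypothetical, not the actual service in question): the cache-backed wrapper makes every call site depend on the wrapper, while fetch-once-and-pass keeps the dependency at the edge.

```python
class ServiceWrapper:
    """Stand-in for the real service client; names are made up."""

    def __init__(self):
        self.calls = 0       # counts actual dependency hits
        self._cache = {}

    def get_data(self, key):
        # Approach 1: every call site hits the wrapper and relies on
        # the cache to avoid hammering the dependency.
        if key not in self._cache:
            self.calls += 1
            self._cache[key] = {"key": key, "value": 42}
        return self._cache[key]


def handle_request_cached(svc):
    # Each step re-calls the wrapper; correctness depends on the cache,
    # and every unit test of these steps must mock the wrapper.
    a = svc.get_data("user")
    b = svc.get_data("user")
    return a, b


def step_one(data):
    return data["value"]


def step_two(data):
    return data["value"] + 1


def handle_request_passed(svc):
    # Approach 2: fetch once at the boundary, pass the value through.
    # step_one/step_two can now be tested with a plain dict, no mocks.
    data = svc.get_data("user")
    return step_one(data), step_two(data)
```

With the pass-through version, only the boundary function touches the service at all, which is exactly why it has fewer failure modes and is easier to test.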

Another persistent example is function parameter length. Inevitably, there’s always “just one more argument” needed until we hit our configured linter limit. About 70% of the time, engineers add a suppression and push the change through as-is. Refactoring to reduce parameters can require significant work (and simply stuffing them into a parameter object doesn’t solve the underlying problem).
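To illustrate the two routes engineers usually take (hypothetical function and field names; the suppression comment uses Ruff's PLR0913 code as an example linter rule): both silence the tooling, and neither reduces how many inputs the function actually depends on.

```python
from dataclasses import dataclass
from typing import Optional


# Route 1: suppress the linter and keep growing the signature.
def create_order(user_id, sku, qty, coupon, gift_wrap,
                 locale, channel, priority):  # noqa: PLR0913
    ...


# Route 2: stuff the arguments into a parameter object. The linter is
# satisfied, but the function still takes the same eight inputs --
# the coupling has just been moved into the dataclass.
@dataclass
class CreateOrderParams:
    user_id: int
    sku: str
    qty: int
    coupon: Optional[str]
    gift_wrap: bool
    locale: str
    channel: str
    priority: int


def create_order_v2(params: CreateOrderParams):
    ...
```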

I could respond to these patterns by expanding our coding standards guide. It could document that caching shouldn’t be relied upon within a single request’s scope, or reinforce that functions should have fewer than 7 parameters (already enforced by our linter). But in my experience, these guidelines are rarely consulted before contributing. As the parameter example shows, people often push for exceptions rather than follow standards rigidly.

I do think standards guides can work well for open source projects, where contributions should never block someone from delivering value. Contributors always have the option to fork and use their changes as-is, which undermines arguments for “just getting something in.” Internal service codebases don’t have that luxury. When you’re changing a service to launch your feature before a major sales event, delays have real costs, and there’s no “I’ll fork this and maintain it myself” alternative.


Ahh man, I just dealt with this. Reviewed some code, approved it; another PR comes in and the same code from before has been moved around. So much diff. I'm like, "why was this moved?" Then I just stop caring. Stuff is not indented right. So many things to point out (yes, use a linter/formatter). It's tough when other people just give me the LGTM treatment and I'm the annoying one doing actual reviews. This one got less review since it's a vibe-coded POC, but yeah.


I think about this a lot, 'normalization of deviance'. https://en.wikipedia.org/wiki/Normalization_of_deviance

Paid to, supposedly. Utterly frustrating to be hired for Reliability and told endlessly to lower quality. 'Controlled opposition' it is, then.


Did Anthropic ever use the term unlimited? I understand the general frustration with the pattern, but it seems weird to put unlimited in quotes when it wasn’t the way Claude was sold.


Nope.


>In 2024, 551 tech companies laid off nearly 152,922 employees, according to data from Layoff.fyi. The pace has accelerated dramatically this year. In just the first six months of 2025, 151 tech companies have already laid off over 63,823 people. On average a tech company cut 277 workers in 2024. If that rate is maintained for the rest of the year, the average number of layoffs per tech company in 2025 would soar to 851, roughly three times the 2024 average.

Am I missing something with this analysis? It seems like 2025 is on pace for fewer workers laid off, not more.
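Running the quoted figures through the arithmetic (a sketch; doubling the half-year total is my assumed annualization) bears this out: the annualized 2025 head-count comes in below 2024's total, and the per-company average only jumps because the 2025 sample contains far fewer companies.

```python
# Figures as quoted from the article above.
laid_off_2024 = 152_922
companies_2024 = 551
laid_off_h1_2025 = 63_823
companies_h1_2025 = 151

# Annualize the first-half 2025 total by doubling it.
laid_off_2025_pace = laid_off_h1_2025 * 2            # 127,646 -- below 2024

# Per-company averages, which is what the article compares.
avg_2024 = laid_off_2024 / companies_2024            # ~277.5 per company
avg_2025_pace = laid_off_2025_pace / companies_h1_2025
# ~845 per company; close to the article's 851 (they presumably
# annualized slightly differently)

print(laid_off_2025_pace, avg_2024, avg_2025_pace)
```

So the tripled per-company average is an artifact of dividing by 151 companies instead of 551, not evidence of more total layoffs.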


They are counting people per company per unit of time.


Yeah, but statistics don't work like that. Are those companies a representative sample? Did only 150 companies lay workers off in 2025? How did they extrapolate for 2025?


Reading this felt very similar to watching “Pitch Perfect 237” (Anna Kendrick 9/11 conspiracy) https://youtube.com/watch?v=MiC9X_MoE1M


A significant amount of context is missing if you're not aware of Room 237: https://en.wikipedia.org/wiki/Room_237

A documentary about the various conspiracy theories developed around The Shining. The music in the YT video is from the documentary.


There are important reasons to have domestic manufacturing even if automated and not creating jobs.


Sure but that's not enough to give people enough jobs.


> incrementally builds my credit rating

Most BNPL plans are not reported to credit agencies and therefore don’t build credit. But “financing” by paying with a credit card and paying the balance in full would build your credit, still with no interest.


> But “financing” by paying with a credit card and paying the balance in full would build your credit, still with no interest.

The person you replied to is doing exactly this.


Aren't most BNPL plans credit cards? How many credit card accounts are floating around out there compared to any other type of plan?


I’m curious whether .NET can compare here. Though I have limited experience with Rails and ASP.NET, both seem to give you a lot to work with. The overlap of Rails devs with .NET devs seems minimal, though.


I learned to code professionally in Ruby but wrote C# .Net for almost 10 years. I've probably forgotten more about .Net than I ever learned about Ruby at this point so take what I say with a grain of salt.

.Net has tons of configuration and boilerplate, so I can't say that it's exactly the same in that sense, but the more meta theme is that just as there is a Rails way to do things, there is a Microsoft way to do things. Unlike Java, where you're relying on lots of third-party packages that, while well maintained, aren't owned by the same company that makes the language, framework, ORM, database, cloud provider, IDE, and so on. Having a solid, well-documented default option that will work for 99% of use cases takes a lot of the load of decision making off your shoulders, and it also means you'll have plenty of solid documentation and examples to follow. I've been in JVM land for the past couple of years and it just can't compare.

I know Java people will come fight with me after this but I just don't think they know any better.


I don't want to fight you, because I don't know .Net well enough to have an opinion.

But I just want to say that I have the same feeling when I develop with Spring Boot. I am extremely productive and seldom have to pull in dependencies outside Spring for 80% of what I make.


I think the Java people would say that if you want one way to do things, go do it the Microsoft way :)

But I guess Spring tried to do that, but probably didn't have the resources that Microsoft does.


I wish I could :)


I still don't get why .NET barely ever gets mentioned in these threads. Even new or niche frameworks like Phoenix, loco.rs and others get mentioned, but almost never .NET. It's as "convention over configuration" as it gets.


Platform support and open-sourcedness. The Phoenix 1.0 release predates the first open-source and Linux-supported .NET release by a year, for example. .NET is just now starting to shake off its association as a closed-source, Windows-only thing.


.NET has been open source for a decade


Yes, like I said before, Phoenix 1.0 predates the first open-source and Linux-supported .NET release by a year.

https://news.ycombinator.com/item?id=10135825

https://github.com/dotnet/core/blob/main/release-notes%2F1.0...

People aren't just going to jump onto something recently open-sourced by a company that popularized the phrase "embrace, extend, extinguish". For the last decade, they've had to earn people's goodwill, while with a project like Rails, there is no "we used to be closed source but now we're not, use our thing!" to overcome. So in the 20 years since Rails has been released, it has only ever needed to demonstrate its usefulness.

Now that a decade has passed, that negative association is starting to wash away a little.


And Ruby on Rails was released 20 years ago


ASP.NET Core is actually pretty simple to stand up and bang something out with. Stick to the Microsoft docs and most patterns are handled.

I can't really say how the web UI side holds up to alternatives, tho.


I hated ASP.NET MVC, I think due to the large number of layers and the boilerplate that had to be banged out.


FWIW MVC is completely optional nowadays.

Take a look at this: https://learn.microsoft.com/en-us/aspnet/core/fundamentals/m...


Thanks for that link. If I'm not wrong, it doesn't change the fact that, compared to Rails or Django, there's a lot of boilerplate that needs to be written to get a database-driven web app running.


What makes you think this way?

I’m pretty sure even back then Entity Framework was very capable, it has only improved since then.

https://learn.microsoft.com/en-us/ef/core/querying/complex-q...

It’s probably better to compare with other statically typed, compiled languages, since both Ruby and Python are in a different class (and an order of magnitude slower, with more painful dependency management, etc.).


The only thing I don't like about ASP.NET is all the dependency injection, last time I looked it seemed unavoidable.


A small amount of lead makes its way out of the chamber in the form of dust and fumes after shooting. It’s quite easy to breathe this in or get it on your hands and accidentally ingest it. Not enough to matter for someone who occasionally goes to the range, but significant for those shooting nearly daily, especially indoors.


The obvious solution is to switch to depleted uranium ammo.


>Just look at the millionaire CEO of your own company. Do they have 6 kids? They probably don't even have 4 kids.

Couldn’t this also be evidence that money is the issue? i.e. you can’t get to that millionaire CEO spot if you have a lot of kids you need to pay for. Instead the successful are those having fewer kids.

