Hacker News

Cool article. My two takeaways: 1) If your deployment process hurts, do it more often (small, frequent releases are easier than a few big-bang releases). 2) Don't commit to being x days behind upstream; then, if there is a big workload on your team (upgrades, vacations, etc.), you keep the flexibility to delay upgrades and reduce the stream of incoming issues.


This is true for your codebase and your dependencies as well.


Yes, one of those lessons they don't really teach you in school, but it becomes obvious once you take note of the simple reality that integration and testing effort does not scale linearly with the amount of change. Twice the change is not twice the testing effort but more like five times. The more change you allow to pile up, the bigger a hurdle testing and integrating become. At some point the integration and testing work becomes the dominant activity. Usually, a good way out of that is simply increasing the release frequency.
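One back-of-the-envelope way to see why the effort grows superlinearly (my illustration, not a claim from the thread): if any pair of changes in a batch can interact, the number of interactions you may have to debug grows quadratically with batch size.

```python
def interactions(n_changes: int) -> int:
    """Pairs of changes that can interact: n choose 2."""
    return n_changes * (n_changes - 1) // 2

# Doubling a batch from 4 to 8 changes more than quadruples
# the pairwise interactions to worry about:
print(interactions(4))  # 6
print(interactions(8))  # 28
```

So doubling the batch is roughly a 4-5x increase in potential interactions, which lines up with the "twice the change, five times the testing" intuition above.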

That's also why agile processes work. Simply shortening feedback loops reduces the amount of work related to integrating and testing. Nothing magical about it. CI/CD works great too: deploy automatically when tests pass instead of when the calendar says it is time to release. Get your feedback early instead of weeks after a change.

A good way to fix a poorly performing team is simply to shorten their sprints. People hate this (because of all the meetings) but it makes each sprint easier. Of course, getting rid of sprints entirely is optimal. I actually prefer Kanban-style processes and tend to separate planning iterations from day-to-day development work and releases. That leads to much more relaxed teams, and it's also a lot easier on remote teams.


Fast deploys are good for many things, but spending that five-times testing effort means the few releases you do are of higher quality. When it has to be perfect, as with many embedded systems, you don't do many releases.

Of course, the above assumes you actually do the five-times testing before release. Most companies skipped that, and it showed.


Automated tests are key for this. If you have them, they make deploying frequently much safer: there is only so much that can break in a small delta. That typically also enables very targeted manual testing if you need it.

Many companies have the wrong reflex of releasing less often when things don't go smoothly so they can test more, not realizing that if they released more often, they could get away with less testing, because there would be less new stuff to break.


Automated tests are good, but even a large suite misses things. End-to-end, full-integration issues, for example, are very hard to write automated tests for, and those tests take a lot of time to run.


Any sort of integration works this way, but some can be broken down into steps more easily than others. About a decade ago, one of my roles at the startup I was at was keeping our browser, built atop Chromium, up to date with Chromium trunk. I would normally merge in all of Chromium's changes each morning. Some weeks I just had other stuff to do, and a full week would go by without merging.

Merging a full week of Chromium changes all at once meant too many merge conflicts to deal with.

So I'd simulate having merged daily by doing 5-7 separate merges instead, which worked out well enough.
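A sketch of that catch-up loop (the function name, the injectable `run`, and the exact `git rev-list --before` usage are my assumptions, not the commenter's actual setup): instead of one giant merge, walk upstream one day at a time so each conflict set stays small.

```python
import subprocess

def catch_up(upstream="chromium/main", days=7, run=subprocess.run):
    """Merge upstream in daily steps instead of one big merge.

    Picks the last upstream commit before each missed day boundary
    and merges them oldest-first. `run` is injectable for testing.
    """
    merged = []
    for days_ago in range(days - 1, -1, -1):
        # Last upstream commit at least `days_ago` days old.
        rev = run(["git", "rev-list", "-1",
                   f"--before={days_ago} days ago", upstream],
                  capture_output=True, text=True).stdout.strip()
        if rev and rev not in merged:
            run(["git", "merge", "--no-ff", rev])
            merged.append(rev)
    return merged
```

Each merge then only has one day's worth of upstream churn to reconcile, which is the same trick as releasing more often: many small integrations instead of one big one.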


This. Remaining on a known-good release is a form of technical debt: use it judiciously and pay it down when convenient. Beta testing everything immediately can be a favor to the community, but it's probably not your main job.


Actually, it's the opposite in my experience. By the time most software is released, the beta testing has already been done. At that point your technical debt is all the known and fixed issues you still have, the features you can't yet benefit from, the performance issues you still have, the newly deprecated APIs you are still using, etc. Opting into all that without good reason is a bad idea.

If you are afraid things will break, the best way to find out is to just try it. Worst case, you have to roll back a change, but at least then you know and you can plan a fix. IMHO, anything that isn't on a current version should either have a documented reason why or be fixed ASAP. I rarely see this on projects. Usually that means nobody cared enough to try, which is just a form of negligence.

On most of my projects I update dependencies whenever I can. On my older projects, whenever I do maintenance, I don't even touch the source code until I've updated the dependencies. Typically either nothing breaks, I need some minor fixes, or I need to temporarily roll back a dependency and plan a bigger change. The thing is, if a new version is going to require any kind of non-trivial work, I want to know about it as early as possible, especially if it is a lot of work. If you wait a year, you are looking at a lot of unknown work, which is basically technical debt you did not even know you had. I don't like that kind of uncertainty.

Mostly, staying on top of things minimizes the work you actually have to do, and you get to benefit from all the fixes early. A lot of projects I join are hopelessly outdated; it's usually the first thing I fix. It's rare to find documented reasons why a particular thing can't or shouldn't be updated. If it's not documented, I'm just going to go ahead and update it, and if that doesn't work, I'll document why.


> If you are afraid things will break, the best way to find out is to just try it.

How much of the team is awake, sober, and on the grid at that moment? Some times are worse than others for an outage.


Shouldn't need to be anyone's job. Automate:

Daily cron:

- Branch the repo, auto-update one dependency (ideally a smarter way to batch up groups)

- Run CI

- Auto-merge commit if CI passes, else discard commit.

- Loop to create a new branch for the next dependency waiting to be auto-updated.
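The steps above can be sketched as a small driver script (a hypothetical sketch: `update`, `ci_passes`, and the branch naming are stand-ins for whatever tooling you actually use, and `run` is injectable so the loop can be tested without git):

```python
import subprocess

def auto_update(deps, update, ci_passes, run=subprocess.run):
    """Daily loop: branch, bump one dependency, run CI,
    merge on green, discard on red."""
    merged, discarded = [], []
    for dep in deps:
        branch = f"auto-update/{dep}"
        run(["git", "checkout", "-b", branch])    # branch the repo
        update(dep)                               # bump one dependency
        if ci_passes():                           # run CI
            run(["git", "checkout", "main"])
            run(["git", "merge", branch])         # auto-merge on green
            merged.append(dep)
        else:
            run(["git", "checkout", "main"])
            run(["git", "branch", "-D", branch])  # discard on red
            discarded.append(dep)
    return merged, discarded
```

One branch per dependency keeps failures isolated: a red CI run on one bump never blocks the others, which is the batching concern the first bullet hints at.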

Otherwise, not having the right testing or CI is technical debt, as the grandparent commenter suggests.



