Widdershin's comments | Hacker News

One of the other bidders, like when Yahoo won the bidding in the early 2010s.


Try adding it to the Gemfile of a modern Rails project: the dependencies are very out of date and it won't install.


Docs say

> Please don't put mailcatcher into your Gemfile. It will conflict with your applications gems at some point.


Commenter was just making the fair point that the dependencies are out of date.

Maintenance doesn't always mean UI redesigns or non-compatible config changes. Sometimes it is just fixing bugs and updating or replacing old dependencies.


I always run Mailcatcher as a standalone Docker image (I'm already using Docker Compose for development), with no issues.
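
A minimal Compose service for this looks something like the following sketch (the sj26/mailcatcher image name is just the image I happen to reach for, and the ports are MailCatcher's defaults):

    services:
      mailcatcher:
        image: sj26/mailcatcher   # assumption: substitute whichever MailCatcher image you prefer
        ports:
          - "1080:1080"   # web UI (MailCatcher default)
          - "1025:1025"   # SMTP (MailCatcher default)

Point your app's development SMTP settings at localhost:1025 and browse the caught mail at http://localhost:1080.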


It's not designed to go in your Gemfile:

gem install mailcatcher

:-)


This is the sort of thing people said about Blender for a long time, and my understanding is that it’s now often used in commercial contexts.

Not a given by any means, but it’s happened before and it will happen again.


Gonna get even worse once this sort of work is outsourced to language models en masse


Sure, but a lot of studios still maintain and upgrade bespoke game engines to ship projects.

All it would really take is a company of decent scale deciding they want to base their engine around Godot and having an appetite to upstream some meaningful work.


>and having an appetite to upstream some meaningful work.

That's the part that probably won't happen, unfortunately. Not without specific contracting. Blind Squirrel (the devs behind the botched Sonic Colors remaster) didn't even credit Godot as the engine it based its proprietary engine on.


Tree-sitter-based highlighting can handle nested languages in some cases; maybe that's available for your editor?


Never considered that Memento was a cinematic adaption of Reflections on Trusting Trust.


Can you give any examples of times you’ve easily anticipated X when a whole field of subject matter experts have demonstrably overlooked it?


I don't really agree with the OP, but I do think there is at least one such example, possibly two. The pretty clear one is nutrition: the vast majority of studies and recommendations made over the years are pure bullshit, and quite transparently so. They either study a handful of people in detail, or a huge swathe of the population in aggregate, and end up with so many confounding variables that there is zero explanatory power in any of them. This is quite obvious to anyone, but the field keeps churning out papers and making official recommendations as if it knew anything more about nutrition than "missing certain key nutrients can cause certain diseases, like scurvy from missing vitamin C".


Nutrition in particular is a field where major corporations willfully hid research about sugar for years and funded research attacking fat content instead, which, it turns out, is actually pretty benign. Perfect example.


Is that an example of "the experts didn't actually think of [simple explanation]" though?


Can't speak for OP, but I've had more than a few similar experiences (from both sides of the fence FWIW).

I can think of one example: software deployment frequency. The observation (many years ago) was that deploying software is painful and risky (and therefore expensive), so we should do it as infrequently as the market will allow.

Many companies used to be on annual release schedules, some even longer, and many organizations still resist deploying software more often than every few weeks.

~15 years ago, I was working alongside the other (obviously ignorant) people who believed that when something is painful, slow and repetitive, it should be automated. We believed that software deployment should happen continuously as a total non-event.

I've had to debate this subject with "experts" over and over and over again, and I've never met a single person who, once migrated, wanted to go back to the nightmare of slow, periodic software deployments.


I don't see why a slow deployment cadence is a nightmare. When I've worked in that setting, it mostly didn't matter to me when something got deployed. When it did (e.g. because something was broken), we had a process in place to deploy only high priority fixes between normal releases.

Computers mostly just continue to work when you don't change anything, so that meant after the first week or so after a release, the chance of getting paged dropped dramatically for 3 months.


Good question. The nightmare was mostly organizational.

The amount of politicking was incredible when it came to which features would make the next push and which would slip: the planning meetings, the arguments, the capability slashing, the instability that came from all these political decisions. It was not great, and this enormous amount of churn literally disappeared when they moved to daily pushes.


That's more "the experts had a (wrong) opinion on something" than "the experts overlooked something obvious". They didn't overlook it, they thought about it and came to a conclusion.

And if by "many years ago" you refer to a period where software deployment was mostly offline and through physical media, then it was indeed painful and risky (and therefore expensive). The experts weren't wrong back then.


Great points, I agree with you.


This isn't to agree with the parent comment, but wouldn't this situation itself be an answer to your question (assuming the claim is true)? Laymen like me easily anticipated mass divergence, but purportedly scientists have been surprised by it.


The procedure of multiple weights being calibrated against a single standard is _predicated_ on anticipated mass divergence.

The mystery being discussed is that, even after the obvious sources of error are allowed for, there is still a discrepancy, and it's not easy to determine how much of that discrepancy is with the weights being recalibrated vs the test standard they're being calibrated to. None of which is shocking to anyone involved, just puzzling.


I think the surprising bit is that they've measured it and seen it happen but don't have an exact reason. Anything else is a good guess, and of those, people have plenty.


This comment chain is getting circular. We can't use this as an example for itself by assuming that it is true.


* [insert every example of "15 year old unknown vulnerability in X found" here]

* I have to be a bit vague here, but while working as a research scientist for the US Department of Defense I regularly witnessed and occasionally took part in scenarios where a radical idea turned "expert advice" on its head, or some applied thing completely contradicted established theoretical models in a novel or interesting way. Consistently, the barrier to such advancements was always "experts" telling you that your thing should not / could not work, blocking your efforts, withholding funding, etc., only to be proven wrong. Far too many experts care more about maintaining the status quo than actually advancing the field, and a concerning number are actually on the payroll of various corporations or private interests to actively prevent such advancements.

* over the last 30 years in the AI field, there have been a few major inflection points. One was Yann LeCun's convolutional neural networks, and his more general idea that ANNs stood to gain something by loosely mimicking the complexity of the human brain, for which he was originally ridiculed and largely ignored by the scientific community until convolution revolutionized computer vision. Another was the rise of large language models, which came out of natural language processing, a branch of AI research that had been disregarded for decades and was definitely not seen as something that might ever come close to AGI.

* going back further in history there are plenty of examples, like quantum mechanics turning the classical model on its head, Galileo, etc. The common theme is a bunch of conservative, self-described experts scoffing at something that ends up completely redefining their field and making them ultimately look pretty silly and petty. This happens so frequently that I think it should just be assumed at all times in all fields, as this dynamic is one of the few true constants throughout history. No one is an expert, no one has perfect knowledge of everything, and the next big advancement will be something that contradicts conventional wisdom.

Admittedly, I derived these beliefs from some of the Socratic teachings I received very early in life, around 6th grade or so back in the late 90s, but they have continually borne fruit for me. Question everything. When debugging, question your most basic assumptions first: "Is it plugged in?", etc.

It's at the point these days where, if you want to find a fruitful research idea, it's probably best to just browse through the conventional wisdom on your topic and question the most fundamental assumptions until you find something fishy.


I missed this comment first time around, but I really appreciate this write-up.

I apologize for being a bit snide in my original challenge; I'm fairly sensitive to the "why don't you just" attitude, but I agree with pretty much everything you have to say here.

I have a very similar approach around enumerating and testing assumptions when the going gets tough, and similarly have found that it has enabled me to solve a handful of problems previously claimed impossible.

I think the tautological issue with our initial framing is that if you're able to easily identify these problems you probably are a subject matter expert. In many ways it's the outsider art of analytical problem solving - established wisdom should not be sacred.


This has happened a few times to me.

The first was some downhill skateboarding projects: a bushing recommendation system and a site that let me search all NZ skate shops from one place.

A popular US skate shop posted on Reddit looking for interns, but they weren't interested in hiring someone so remote.

Fast forward a week and the CTO got in touch to say that he’d interviewed a bunch of dud candidates, and meanwhile had been watching me commit exactly the code they were looking for.

Ended up contracting with them for a bit, building an internal equivalent of the search tool as well as bushing recommendations integrated with their listings.

The next is my work in the Cycle.js community (a niche FRP JS framework). I mostly worked on trendy dev tools, but also did some valuable work on improving the speed, reliability, and clarity of async UI tests that is still arguably close to best-in-class for JS.

That resulted in multiple job offers and an approach from Manning for a possible book deal, but none of it was that good of a fit.


Props to the writing in Endless Sky; it probably has the most nuanced and interesting plot of any open-source, community-built game I've played.


‹tries that karate move now, just for kicks›

