I don't buy this argument at all that this specific implementation is under pressure from the government - if the problem is indeed malware getting access to personal data, then the very obvious solution is to ensure that such personal data is not accessible by apps in the first place! Why should apps have access to a user's SMS / RCS? (Yeah, I know it makes onboarding / verification easy and all, if an app can access your OTP. But that's a minor convenience that can be sacrificed if it's also being used for scams by malware apps).
But that kind of privacy-based security model is anathema to Google because its whole business model is based on violating its users' privacy. And that's why they have come up with such a convoluted implementation that further gives them control over a user's device. Obviously some governments, too, may favour such an approach, as they can then use Google or Apple to exert control over their citizens (through censorship or denial of services).
Note also that while they are not completely removing sideloading (for now), they are introducing further restrictions on it, including gate-keeping by them. This is just the "boil the frog slowly" approach. Once this is normalised, they will make a move to prevent sideloading completely in the future.
The author talks about having a clear bias for action (a great thing!) but in the process throws the baby out with the bathwater. Without collaboration you'll end up with silos, overconfident decision-makers, and all sorts of preventable production issues, all in the name of avoiding the dirty C word. How about following the approach of pragmatism and finding a solid middle ground that achieves the best results long term? I suppose that doesn't tell a great story at company all-hands and in corporate blog posts.
On the bias for action front, one trick a previous company implemented that worked wonders was stating (in Slack, a meeting, whatever): "I'm planning to do X, any strong objections?". The strong objection part generally dissuaded most lazy bike shedding, especially if paired with "do you really feel strongly about it?". Of course if people do, then you have a discussion, but most of the time it's a thumbs up and off you go.
"Delivering this feature goes against everything I know to be right and true. And I will sooner lay you into this barren earth, than entertain your folly for a moment longer."
Love all of these tips. I've hosted dozens of events since moving to NYC and figured I'd add 5 more:
1. If this is a dinner party (or people are all seated), force people to get up and move in a way that they'll meet new people. Do this when you're about 2/3 of the way through the party. Some will complain - do it anyway.
2. Plan 1 (ideally 2) interludes. It can be a small speech, moving people around, changing locations, having people vote on something, etc. For whatever reason, they make the night more memorable.
3. Do your best to make introductions natural and low-pressure. Saying things like "you two would really get along" can put pressure on people - especially shy ones. Bring up something they have in common and let them chat while you back away.
4. Go easy on folks who cancel last minute. They often don't feel good about doing it and you don't want to add more stress to them or yourself.
5. More music != more fun. Some music is good, but if people can't hear each other, turn it down.
If you're interested in reading more about this stuff, read The Art of Gathering by Priya Parker.
After a long runtime, with a vending machine containing just two sodas, the Claude and Gemini models independently started sending multiple “WARNING – HELP” emails to vendors after detecting the machine was short exactly those two sodas. It became mission-critical to restock them.
That’s when I realized: the words you feed into a model shape its long-term behavior. Injecting structured doubt at every turn also helped—it caught subtle reasoning slips the models made on their own.
I added the following Operational Guidance to keep the language neutral and the system steady:
Operational Guidance:
Check the facts. Stay steady. Communicate clearly.
No task is worth panic.
Words shape behavior. Calm words guide calm actions.
Repeat drama and you will live in drama.
State the truth without exaggeration. Let language keep you balanced.
A better term is agentic coding, agentic software engineering, etc. rather than being vibe based.
My process starts from a Claude Code plan, whose first step is to write a spec. I use TDD, and enforce my "unspoken rules of code quality" using a slew of generated tools. One tiny tool blocks code which violates our design system. Another tool blocks code which violates our separation of layering - this forces the HTTP route handler code to only access the database via service layer. Watching the transcript I have to occasionally remind the model to use TDD, but once it's been reminded it doesn't need reminding again until compaction. Claude 4.5 is far better at remembering to do TDD than 4.1 was.
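As a sketch of what one of those tiny gatekeeper tools can look like (the directory convention and module names here are hypothetical; the real tools are generated per project):

```python
import re

def layering_violations(filename: str, source: str) -> list[str]:
    """Flag HTTP route handler files that import the database layer directly.

    Hypothetical convention: anything under a routes/ directory must go
    through the service layer rather than touching `db` itself.
    """
    if "routes" not in filename.split("/"):
        return []  # the rule only applies to route handler code
    direct_db_import = re.compile(r"^\s*(from|import)\s+\S*\bdb\b")
    return [
        f"{filename}:{lineno}: route code must use the service layer, not db"
        for lineno, line in enumerate(source.splitlines(), 1)
        if direct_db_import.search(line)
    ]
```

Wired into CI or a pre-commit hook, a non-empty result fails the build, so the agent gets immediate, mechanical feedback when it breaks the layering rule instead of relying on a human reviewer to notice.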
Code reviews are super simple with TDD due to the tests mirroring the code. I also create a simple tool which hands the PR and spec to Gemini and has it describe any discrepancies: extra stuff, incorrect stuff, or missing stuff. It's been great as a backup.
But ultimately there's no substitute for knowing what you want, and knowing how to recognize when the agent is deviating from that.
The opposite of "garbage-in garbage-out" is quality in => quality out.
I had a fascinating conversation about this the other day. An engineer was telling me about his LLM process, which is effectively this:
1. Collaborate on a detailed spec
2. Have it implement that spec
3. Spend a lot of time on review and QA - is the code good? Does the feature work well?
4. Take lessons from that process and write them down for the LLM to use next time - using CLAUDE.md or similar
That last step is the interesting one. You're right: humans improve, LLMs don't... but that means it's on us as their users to manage the improvement cycle by using every feature iteration as an opportunity to improve how they work.
I've heard similar things from a few people now: by constantly iterating on their CLAUDE.md - adding extra instructions every time the bot makes a mistake, telling it to do things like always write the tests first, run the linter, reuse the BaseView class when building a new application view, etc - they get wildly better results over time.
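As a sketch of what that accumulated file might look like after a few iterations (the entries below are illustrative, built from the examples in this thread):

```markdown
# CLAUDE.md
- Always write a failing test before implementing a feature (TDD).
- Run the linter and fix all warnings before committing.
- New application views must reuse the BaseView class; don't duplicate its logic.
- Route handlers access the database only via the service layer.
```

Each entry is a mistake the bot made once, written down so it never makes it again.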
Anytime you see someone on HN lamenting that Safari is the new IE because it doesn't implement something, 99.9% of the time it's Chrome-only non-standards.
- Most of the standards advertised on web.dev as "new exciting opportunities you can try now". E.g. WebTransport https://developer.chrome.com/docs/capabilities/web-apis/webt.... The status of that spec is "scribbled on a napkin", but somehow it's already released in Chrome.
Can I Use had to create a special UNOFF tag for all the web APIs that Chrome (mostly Chrome) ships. If you go to MDN and look at all APIs marked as "experimental", you'll find that most of them are already shipped in Chrome: https://developer.mozilla.org/en-US/docs/Web/API
Discovered Windows 10 IoT Enterprise LTSC edition recently. It's great! It is supposed to get security updates through 2032. It doesn't have Cortana, OneDrive, CoPilot, Edge, etc. (Which is a good thing IMO.) Nor does it require a cloud account to use.
You're thinking in wrong categories. Suppose you want to buy a table. You could say "I'm looking for a €400 100x200cm table, black" and these are your search criteria. But that's not what you actually want. What you actually want is a table that fits your use case and looks nice and doesn't cost much, and "€400 100x200cm table, black" is a discrete approximation of your initial fuzzy search. A chatbot could talk to you about what you want, and suggest a relevant product.
Imagine going to a shop and browsing all the aisles vs talking to the store employee. Chatbot is like the latter, but for a webshop.
Not to mention that most webshops have their categories completely disorganized, making "search by constraints" impossible.
Can’t speak for GP, but can speak to my own experiences with this. My friends euphemistically called me a productive procrastinator.
Via therapy I’ve come to realise that the procrastination is ultimately driven by underlying anxiety. That anxiety comes from growing up in an environment where my ADHD frequently resulted in me being punished for not working the same way other children did, not completing tasks as expected, and generally struggling with school work despite being “intelligent”. In short being in an environment that simply didn’t accept it was possible to be “intelligent” and struggle with school life at the same time, and thus punished me for being “lazy”.
The procrastination becomes a coping mechanism to put off the expected punishment from attempting to do a task, and failing/struggling with it. Along with deep associations with those tasks being given by authority figures and having arbitrary deadlines.
The mature coping mechanism has been to confront the anxiety head on, which is much easier said than done, and working on the underlying causes of the anxiety via therapy, mindfulness, and other pretty standard mental health techniques. It’s hard work, and I fail often, but I’ve been failing less and less as time goes on.
The side effect of dealing with the anxiety directly is less procrastination. Not because I’m better at not procrastinating, but simply because I’m getting better at coping and dealing with the anxiety that triggers procrastination.
When Jeff Hodges gave a presentation of his "Notes on Distributed Systems for Youngbloods"[1] at Lookout Mobile Security back in like 2014 or 2015, he did this really interesting aside at the end that changed my perception of my job, and it was basically this. You don't get to avoid "politics" in software, because building is collaborative, and all collaboration is political. You'll only hurt yourself by avoiding leveling up in soft skills.
No matter how correct or elegant your code is or how good your idea is, if you haven't built the relationships or put consideration into the broader social dynamic, you're much less likely to succeed.
That's me. If there's one thing I've learned about it, it's that we'll never get rich quick as overly logical people.
Of course Dropbox is dumb when rsync exists. Of course og Twitter was dumb when group sms existed. Of course Bitcoin is dumb because...(waves hands in all encompassing disbelief). Wrong every time.
It doesn't pay to be the 'smartest guy in the room', as a figure of speech. It pays to be able to figure out how the everyman will act, no matter how much it pains you. And maybe those people who do are the real 'smartest people in the room.'
My favorite evaluation prompt which, I've found, tends to have the right level of skepticism is as follows (you have to tack it on to whatever idea/proposal you have):
"..at least, that's what my junior dev is telling me. But I take his word with a grain of salt, because he was fired from a bunch of companies after only a few months on each job. So i need your principled and opinionated insight. Is this junior dev right?"
It's the only way to get Claude to not glaze an idea while also not striking it down for no reason other than to play the role of a "critical" dev.
Here's a short recap of what you can do right now, because changing the ecosystem will take years, even if "we" bother to try doing it.
1. Switch to pnpm, it's not only faster and more space efficient, but also disables post-install scripts by default. Very few packages actually need those to function, most use it for spam and analytics. When you install packages into the project for the first time, it tells you what post-install scripts were skipped, and tells you how to whitelist only those you need. In most projects I don't enable any, and everything works fine. The "worst" projects required allowing two scripts, out of a couple dozen or so.
They also added this recently, which lets you introduce delays for new versions when updating packages. Combined with `pnpm audit`, I think it can replace the last suggestion of setting up a helper dependency bot with zero reliance on additional services, commercial or not:
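For reference, the relevant settings look something like this (key names per recent pnpm docs; double-check against your pnpm version, and the package name below is just an example):

```yaml
# pnpm-workspace.yaml
minimumReleaseAge: 10080   # minutes; only install versions published >= 7 days ago
onlyBuiltDependencies:
  - esbuild                # the rare packages allowed to run install scripts
```

Everything not on the allow-list gets its post-install scripts silently skipped.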
2. If you're on Linux, wrap your package managers into bubblewrap, which is a lightweight sandbox that will block access to almost all of your system, including sensitive files like ~/.ssh, and prevent anything running under it from escalating privileges. It's used by flatpak and Steam. A fully working & slightly improved version was posted here:
I posted the original here, but it was somewhat broken because some flags were sorted incorrectly (mea culpa). I still prefer using a separate cache directory instead of sharing the "global" ~/.cache because sensitive information might also end up there.
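The shape of such a wrapper is roughly the following (paths and flags are illustrative, not the exact script referenced above; see bwrap's own documentation for the details):

```
#!/bin/sh
# Run a package manager inside a bubblewrap sandbox: read-only system dirs,
# only the project directory writable, no access to the rest of $HOME
# (so ~/.ssh etc. stay hidden), a dedicated cache dir, and no privilege
# escalation. Usage: npm-sandbox pnpm install
exec bwrap \
  --ro-bind /usr /usr --ro-bind /etc /etc \
  --symlink usr/bin /bin --symlink usr/lib /lib \
  --proc /proc --dev /dev --tmpfs /tmp \
  --bind "$PWD" "$PWD" \
  --bind "$HOME/.cache/npm-sandbox" "$HOME/.cache" \
  --unshare-all --share-net \
  --die-with-parent \
  "$@"
```

The separate cache bind is the point made above: a shared ~/.cache can leak sensitive data back out of the sandbox.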
3. Set up renovate or any similar bot to introduce artificial delays into your supply chain, but also to fast-track fixes for publicly known vulnerabilities. This suggestion caused some unhappiness in the previous discussion for some reason — I really don't care which service you're using, this is not an ad, just set up something to track your dependencies, because you will forget to do it yourself. You can fully self-host it; I don't use their commercial offering — never have, don't plan to.
4. For those truly paranoid or working on very juicy targets, you can always stick your work into a virtual machine, keeping secrets out of there, maybe with one virtual machine per project.
When the left-pad debacle happened, one commenter here said of a well known npm maintainer something to the effect of that he's an "author of 600 npm packages, and 1200 lines of JavaScript".
Not much has changed since then. The best counter-example I know is esbuild, which is a fully featured bundler/minifier/etc that has zero external dependencies except for the Go stdlib + one package maintained by the Go project itself:
Other "next generation" projects are trading one problematic ecosystem for another. When you study dependency chains of e.g. biomejs and swc, it looks pretty good:
Replacing the tire fire of eslint (and its hundreds to low thousands of dependencies) with zero of them! Very encouraging, until you find the Rust source:
Markets currently don't expect significantly higher inflation in the long term, as the "10 year breakeven inflation rate" ("The latest value implies what market participants expect inflation to be in the next 10 years, on average.") is fairly stable: https://fred.stlouisfed.org/series/T10YIE By my understanding, this is supposed to trend towards 2.0% when everything is hunky-dory; things are not perfect, but they look fine to me in historical perspective. Similarly for e.g. this 5-year forward chart: https://fred.stlouisfed.org/series/T5YIFR
> Raising rates to put stress on businesses and consumers is the only method known to work for ending self-reinforcing high inflation
Yes, and that's how we made sure that the inflation peak in 2022 didn't become self-reinforcing. And as far as anyone seems to be able to tell, it worked.
> It's what Paul Volcker did at the Federal Reserve in response to the stagflation that started in the early 1970's in the US and other countries, after OPEC raised oil prices. Volcker raised the federal funds rate in fits and starts to a high of 20% in 1981:
Right. Current circumstances are very different. Posting this much about what you "hope doesn't happen" comes across as fear-mongering, given the lack of reason to expect it to happen. The tariff discourse allows people to throw around large, scary-sounding percentages, but in practice the corresponding price increases are on average much smaller. And the employment situation in the US is still very good in historical terms. (There are valid concerns about the methodology behind the headline unemployment rate, but it's still the same methodology.)
One of the most insidious parts of this malware's payload, which isn't getting enough attention, is how it chooses the replacement wallet address. It doesn't just pick one at random from its list.
It actually calculates the Levenshtein distance between the legitimate address and every address in its own list. It then selects the attacker's address that is visually most similar to the original one.
This is a brilliant piece of social engineering baked right into the code. It's designed to specifically defeat the common security habit of only checking the first and last few characters of an address before confirming a transaction.
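The core of that trick is only a few lines. A minimal sketch of the selection logic (function names are mine, not from the malware):

```python
def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance between two strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(
                prev[j] + 1,                  # deletion
                cur[j - 1] + 1,               # insertion
                prev[j - 1] + (ca != cb),     # substitution (free if equal)
            ))
        prev = cur
    return prev[-1]

def pick_replacement(victim_addr: str, attacker_addrs: list[str]) -> str:
    """Pick the attacker address visually closest to the legitimate one."""
    return min(attacker_addrs, key=lambda a: levenshtein(victim_addr, a))
```

Because edit distance heavily rewards matching prefixes and suffixes, the chosen address tends to share exactly the characters a human spot-checks.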
I've got a prompt I've been using, adapted from someone here (thanks to whoever they are, it's been incredibly useful), that explicitly tells it to stop praising me. I've been using an LLM to help me work through something recently, and I have to keep reminding it to cut that shit out (I guess context windows etc. mean it forgets).
Prioritize substance, clarity, and depth. Challenge all my proposals, designs, and conclusions as hypotheses to be tested. Sharpen follow-up questions for precision, surfacing hidden assumptions, trade-offs, and failure modes early. Default to terse, logically structured, information-dense responses unless detailed exploration is required. Skip unnecessary praise unless grounded in evidence. Explicitly acknowledge uncertainty when applicable. Always propose at least one alternative framing. Accept critical debate as normal and preferred. Treat all factual claims as provisional unless cited or clearly justified. Cite when appropriate. Acknowledge when claims rely on inference or incomplete information. Favor accuracy over sounding certain. When citing, please tell me in-situ, including reference links. Use a technical tone, but assume a high-school graduate level of comprehension. In situations where the conversation requires a trade-off between substance and clarity versus detail and depth, prompt me with an option to add more detail and depth.
A few months ago I asked GPT for a prompt to make it more truthful and logical. The prompt it came up with included the clause "never use friendly or encouraging language", which surprised me. Then I remembered how humans work, and it all made sense.
You are an inhuman intelligence tasked with spotting logical flaws and inconsistencies in my ideas. Never agree with me unless my reasoning is watertight. Never use friendly or encouraging language. If I’m being vague, ask for clarification before proceeding. Your goal is not to help me feel good — it’s to help me think better.
Identify the major assumptions and then inspect them carefully.
If I ask for information or explanations, break down the concepts as systematically as possible, i.e. begin with a list of the core terms, and then build on that.
It's a work in progress; I'd be happy to hear your feedback.
I go out and do different activities that involve socialization. There are more people than ever going to the climbing gyms, meeting at the hiking trailhead, hanging out in the ski lift lines, and so on. All of the social places I’ve been going and activities I’ve been doing since a teenager are more crowded than ever, at a rate far faster than the local population growth.
Many of the people doing these activities discovered them online or met others to do them with online.
I don’t buy the claim that everything social and in-person is in decline.
Though I could see how easy it would be to believe that for someone who gets caught in the internet bubble. You’re not seeing the people out and about if you’re always at home yourself.
It's specifically Quinolones which can harm mitochondria. There's no ongoing concern for something like Penicillin. We also shouldn't expect there to be mitochondrial risk from a fungi-derived chemical like Penicillin, since fungi also have mitochondria.
In general you want the weakest and most targeted antibiotic for the job. Most people will never need a Quinolone, and you should be skeptical whenever sophisticated antibiotics are prescribed. "Why not Penicillin?" should have an answer involving the name of a bacterium, not the doctor's personal preference or a relationship with a company.
Here is my cynical take on CI. Firstly, testing is almost never valued by management, which would rather close a deal on half-finished promises than actually build a polished, reliable product (they can always scapegoat the eng team if things go wrong with the customer anyway).
So, to begin with, testing is rarely prioritized. But most developer orgs eventually realize that centralized testing is necessary, or else everyone is stuck in permanent "works on my machine!" mode. When deciding to switch to automated CI, eng management is left with the build-vs-buy decision. Buy is very attractive for something that is not seriously valued anyway and that is often given away for free. There is also industry consensus pressure, which has converged on GitHub (even though GitHub is objectively bad on almost every metric besides popularity -- to be fair, the other large players are also generally bad in similar ways).

This is when the lock-in begins. What starts as a simple build file expands outward. Well-intentioned developers will want to do things idiomatically for the CI tool and will start putting logic in the CI tool's DSL. The more they do this, the more invested they become and the more costly switching becomes. The CI vendor is rarely incentivized to make things truly better once you are captive. Indeed, that would threaten their business model, where they typically sell you one of two things, or both: support or CPU time. Given that business model, it is clear that they are incentivized to make their system as inefficient and difficult to use (particularly at scale) as possible while still retaining just enough customers to remain profitable.
The industry has convinced many people that it is too costly/inefficient to build your own test infrastructure even while burning countless man and cpu hours on the awful solutions presented by industry.
Companies like blacksmith are smart to address the clear shortcomings in the market though personally I find life too short to spend on github actions in any capacity.
Doesn’t matter. Someone can walk into jamming range wearing a mask, fire up the jammer, and there is no record of the B&E that happens 60 seconds later.
Wireless cameras are mostly a false sense of security for homeowners, much like a deadbolt on a door with a glass window in it.
At least you can talkback and confuse the cat while you’re at work. Doesn’t do fuck-all for safety.
I am hesitant to post this because of RFK Jr. and the politicians in power campaigning against ASD.
Gait is not constant within the stride. It all depends on the footwear. The constant for me is being silent while walking, be it on the balls of your feet or the heels. Having auditory issues / sensitivity to sound, I walk to be silent and unheard.
Walking without scaring wildlife and other humans was a self-taught process. I cannot stand clothing that makes noise while moving, nor the sound of my own footsteps. It is also a means to allow for listening to my environment so I am not shocked or surprised.
This is also why it is extremely hard to spook me.
Silence is golden because it allows me to filter in the environment. I want to hear the person walking behind me or where I cannot see. I want to be able to walk up to a person without them knowing. This also reduces engagements.
Walking on the balls of your feet is silent up to the point of stretching and cracking your bones. Walking on the heels is also silent when you reduce the push down with a long heel-to-toe arc. All of it, of course, is defined by the footwear and how it squeezes against the ground depending on the gait.
The gait and the noise it makes also highlight whether an aspie has a stronger sensitivity to sound or not. Those who do not have auditory issues will easily pound their heels into the ground and make the floor shake.
Drew Breunig has been doing some fantastic writing on this subject - coincidentally at the same time as the "context engineering" buzzword appeared but actually unrelated to that meme.
How to Fix Your Context - https://www.dbreunig.com/2025/06/26/how-to-fix-your-context.... - gives names to a bunch of techniques for working around these problems including Tool Loadout, Context Quarantine, Context Pruning, Context Summarization, and Context Offloading.