
This works if you're on call for your systems. In many situations (ranging from small startups to big tech), you're also on call for the systems of sister teams.

Not that there aren't other ways to fix that. But fixing the erroring service isn't practical in all cases.


It's quite easy, actually. We did this at work recently.

Bun is two things. It's best known as a competitor to the Node runtime, which is of course a very invasive change to adopt. But it can also be used purely as a package manager, with Node as your runtime: https://bun.sh/docs/cli/install

As a package manager, it's much more efficient, and I would recommend switching over. I haven't used pnpm, though--we came from yarn (v2; I've used v1 in the past).

We still use Node for our runtime, but package install time has dropped significantly.

This is especially noticeable when switching branches on dev machines: if one package in the workspace has changed, yarn retries all packages (even though yarn.lock exists), whereas Bun only downloads the new one.


A bit aggressive. No, wouldn't connecting to a slow 3g tower affect ping times to all global servers proportionately?

The proposal has other flaws, but phone-to-tower latency isn't one of them.


> No, wouldn't connecting to a slow 3g tower affect ping times to all global servers proportionately?

Yep. Per the article (last point under "How it works"):

> Users with a high latency to all servers can be excluded from polls, as this is a strong indicator of a VPN/proxy usage

Something seems off about how they're measuring latency (which seems to be "fetch various AWS Lambda endpoints"), since their system seems to think that I have hundreds of milliseconds of latency even to the nearest AWS region (even though in practice it should be an order of magnitude lower), and multiple seconds to the other side of the world.

edit: well, if the slowness is just on last-mile delivery, then it should be a fixed amount of overhead added to each connection (rather than a multiplier). For instance, I have about 8ms of latency added by my ISP just by the first hop into their network. But it's that same 8ms overhead whether I'm connecting to a server on the other side of town, or on the other side of the world.
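
To make the additive-versus-multiplicative point concrete, here's a rough sketch of how you could separate a fixed last-mile overhead from per-path latency. The probe URLs are placeholders (not the article's real Lambda endpoints), and this is just an illustration of the model, not their measurement code:

    // Rough sketch only (Node 18+ / browser fetch; placeholder URLs).
    // The point: a slow last mile adds the same fixed overhead to every
    // measurement instead of multiplying the far-away ones.
    const endpoints = [
      "https://probe.us-east-1.example.com/ping",      // hypothetical
      "https://probe.eu-west-1.example.com/ping",      // hypothetical
      "https://probe.ap-southeast-2.example.com/ping", // hypothetical
    ];

    async function rtt(url: string): Promise<number> {
      const start = performance.now();
      const res = await fetch(url);
      await res.arrayBuffer(); // include the full transfer, not just headers
      return performance.now() - start;
    }

    async function main() {
      const times = await Promise.all(endpoints.map(rtt));
      const nearest = Math.min(...times); // ~ last-mile overhead + shortest path
      times.forEach((t, i) => {
        // The difference from the nearest region is what distance actually
        // costs; the shared baseline (e.g. that 8ms first hop) cancels out.
        console.log(endpoints[i], "distance cost ~", (t - nearest).toFixed(1), "ms");
      });
    }

    main();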


An interpretation that makes sense to me: humans are already non-deterministic black boxes at the core of complex systems. So in that sense, replacing a human with an AI is not unreasonable.

I’d disagree, though: humans are still easier to predict and understand (and trust) than AI, typically.


With humans, we have a decent understanding of what they are capable of. I trust a medical professional to provide me with medical advice and an engineer to provide me with engineering advice. LLMs, however, can be unpredictable at times, and they can make errors in ways that you would not imagine. Take the following examples from my tool, which show how GPT-4o and Claude 3.5 Sonnet can screw up.

In this example, GPT-4o cannot tell that GitHub is spelled correctly:

https://app.gitsense.com/?doc=6c9bada92&model=GPT-4o&samples...

In this example, Claude cannot tell that GitHub is spelled correctly:

https://app.gitsense.com/?doc=905f4a9af74c25f&model=Claude+3...

I still believe LLMs are a game changer, and I'm currently working on what I call a "Yes/No" tool, which I believe will make trusting LLMs a lot easier (for certain things, of course). The basic idea is that the "Yes/No" tool will let you combine models, samples and prompts to come to a Yes or No answer.

Based on what I've seen so far, a model can easily screw up, but it is unlikely that all will screw up at the same time.
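
As a very rough sketch of the idea (not the actual GitSense implementation--askModel here is a stub for whatever API each provider exposes, and the thresholds are made up):

    // Sketch of the consensus idea only.
    type Verdict = "yes" | "no";

    async function askModel(model: string, prompt: string): Promise<Verdict> {
      // Placeholder: call the provider's API here and map its answer to yes/no.
      throw new Error(`askModel(${model}) is a stub in this sketch`);
    }

    async function yesNo(
      prompt: string,
      models: string[],
      samplesPerModel = 3,
      threshold = 0.8,
    ): Promise<Verdict | "undecided"> {
      const votes: Verdict[] = [];
      for (const model of models) {
        for (let i = 0; i < samplesPerModel; i++) {
          votes.push(await askModel(model, prompt));
        }
      }
      const yes = votes.filter((v) => v === "yes").length;
      // Only commit to an answer when the models clearly agree; a single model
      // screwing up (as in the GitHub spelling examples above) gets outvoted.
      if (yes >= votes.length * threshold) return "yes";
      if (votes.length - yes >= votes.length * threshold) return "no";
      return "undecided";
    }

    // e.g. yesNo('Is "GitHub" spelled correctly here?', ["gpt-4o", "claude-3.5-sonnet"]);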


It's actually a great topic - both humans and LLMs are black boxes. And both rely on patterns and abstractions that are leaky. And in the end it's a matter of trust, like going to the doctor.

But we have had extensive experience with humans, so trust in them is better defined; LLMs will be better understood as well. There is no central understander or central truth, and that is the interesting part: it's a "Blind men and the elephant" situation.


We are entering the nondeterministic programming era, in my opinion. LLM applications will be designed with the idea that we can't be 100% sure, and whatever solution provides the most safeguards will probably be the winner.


In general, the way a walled garden wins is by providing everything its villagers need inside.

And Apple’s products seem to create walled gardens in order to prioritize [first creative, then economic] control.

Based on the demographic that a significant portion of their marketing seems targeted towards (artists and creative types), I think your theory sounds likely.


This sounds depressing. I’m sorry that you had that experience.

It’s a frustrating position to be in, and you can feel quite helpless.

In my experience, it's less about "do only what's asked" and more about "say no".

I.e. explain “I can do X, but if I do that then Y will suffer, and Y is a priority”. (Y being another company priority, or even your own mental health). Stated in these terms, it’s easier to negotiate your time with your coworkers.


Thing is, that may not depend on you.

I did this but I was surrounded by coworkers who were stupidly running straight into burnout themselves and said yes to anything.

Well, upper management felt I wasn't doing enough in comparison and pressured/harassed me. Ultimately, I was the first to burn out.

Of course, in hindsight, I should have left way before it happened, but when you are in it, you have no hindsight. Sometimes you can't grasp the surrounding toxicity before being hurt.


Go bootstraps itself (Go is compiled by Go) but Gofmt does not format itself (an even simpler job).


Nice find. I think you're right.

FWIW, I don't think the code style in [2] is less simple (it's slightly more readable to use `if (X) {} else {}` rather than `if (!X) {} else {}`, for example).

So to me, this reads as the author of [1] overcorrecting by adding process, when some test cases or code review would've been more helpful in preventing whatever incident [2] caused.


Stay tuned! We’re building this out soon. :)


Real users have more CPU than a literal toaster (or smart air fryer, or IP camera, or many other common botnet devices).


Not only that, real users actually want to use the service, not overload it. A real user might only make one request a second, while a botnet device is trying to make a thousand requests per second to overload the server. Even if each botnet node has the same CPU as a normal user, it can now only make about as many requests per second as a real user, or the user can outbid it.
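
For the curious, the kind of scheme being discussed boils down to something like this hash proof-of-work sketch (a generic illustration, not any particular project's implementation; the difficulty value is made up):

    // Each request must include a nonce whose hash has N leading zero bits,
    // so every request costs real CPU time, capping requests/second per node.
    import { createHash } from "node:crypto";

    const DIFFICULTY_BITS = 20; // tune so solving takes ~1s on a typical device

    function leadingZeroBits(buf: Buffer): number {
      let bits = 0;
      for (const byte of buf) {
        if (byte === 0) { bits += 8; continue; }
        return bits + Math.clz32(byte) - 24; // clz32 counts from bit 31
      }
      return bits;
    }

    // Client side: burn CPU until a valid nonce is found.
    function solve(challenge: string): number {
      for (let nonce = 0; ; nonce++) {
        const digest = createHash("sha256").update(`${challenge}:${nonce}`).digest();
        if (leadingZeroBits(digest) >= DIFFICULTY_BITS) return nonce;
      }
    }

    // Server side: verifying is a single hash, so it stays cheap for the service.
    function verify(challenge: string, nonce: number): boolean {
      const digest = createHash("sha256").update(`${challenge}:${nonce}`).digest();
      return leadingZeroBits(digest) >= DIFFICULTY_BITS;
    }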


^ this guy gets it

