Hacker News | eh_why_not's comments

What's up with the TeamYouTube account advising him to delete his X post for security reasons because the post contains a channel ID? As if a channel ID weren't public information, but some secret private key or something?

https://x.com/TeamYouTube/status/1985378776562168037


I think it's diminishing evidence that they still use humans for customer interactions.

In a discussion of an article about encouraging fact-checking in writing, I wish you had made your quotes informative by replacing "many wise people" with the names of the people who actually said them.

For everyone else: the first paragraph appears to be a quote of C.S. Lewis around 1945 [0], and the second, of Thomas Jefferson in 1807 [1].

[0] https://www.goodreads.com/quotes/502048-why-you-fool-it-s-th...

[1] https://press-pubs.uchicago.edu/founders/documents/amendI_sp...


In case it's still not clear with the "while second to Waymo" phrase: "others" refers to contenders other than Waymo.


https://en.wikipedia.org/wiki/Kosmos_482

> Its landing module, which weighs 495 kilograms (1,091 lb), is highly likely to reach the surface of Earth in one piece as it was designed to withstand 300 G's of acceleration and 100 atmospheres of pressure.

Awesome! I don't know how you can design for 300 G's of acceleration!


Overbuild everything. For things that might be fragile-ish, like surface-mounted electronics, cast the whole thing in resin. As a sibling poster has mentioned, we shoot things out of artillery tubes these days that undergo way harsher accelerations than 300 g.


300 g is nuts. Electronics in a shell is one thing; this is a landing craft. In a prior life my designs had to survive 12 g aerial drop loads, and we had to make things pretty robust.


It also blew my mind that a human being, John Stapp, survived about 46 g of deceleration in a rocket sled. I believe it was the deceleration, rather than the acceleration, that hurt him the most.


Gun scopes are minimum 500 G rated. Apparently that's the ballpark for recoil (the reaction force from the barrel becoming a rocket engine, and/or the bolt/carrier bottoming out).


There are electronics and gyroscopes designed for >9,000 G loads, in guided artillery shells.

Aerospace is awesome.


88.2 m/s^2

For well under a second, though; typical artillery muzzle velocity is, what, two to three thousand feet per second?

Still, it’s wild that guidance electronics and control mechanisms can survive that sort of acceleration.


According to https://en.wikipedia.org/wiki/M777_howitzer (typical howitzer):

- barrel length (x): 5.08 meters

- muzzle velocity (v): 827 m/s

Assuming a constant acceleration γ, x = γ * t² / 2 and v = γ * t

Hence:

- t = 2 * x / v = 12.29 ms

- γ = v / t = 67316 m/s² ≈ 7000 G

A bit lower than 9000 G, but in the same ballpark.
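If you want to sanity-check the arithmetic, here is the same constant-acceleration estimate as a quick Python sketch (values taken straight from the figures above):

    # Constant-acceleration estimate of in-barrel g-load for the M777.
    x = 5.08     # barrel length, m
    v = 827.0    # muzzle velocity, m/s
    g0 = 9.81    # standard gravity, m/s^2

    t = 2 * x / v      # from x = v * t / 2  -> ~12.29 ms
    gamma = v / t      # acceleration        -> ~67316 m/s^2

    print(f"t     = {t * 1000:.2f} ms")
    print(f"gamma = {gamma:.0f} m/s^2 = {gamma / g0:.0f} g")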

Certain rounds, like Excalibur (https://en.wikipedia.org/wiki/M982_Excalibur) or BONUS (https://en.wikipedia.org/wiki/Bofors/Nexter_Bonus), are sophisticated and are able to cope with such accelerations.


That vacuum tubes(!) were part of that package, and were able to be that robust, still floors me every time I think about it.


> 88.2 m/s^2

Isn't that more like 9g?


Yes, thanks, I meant to write 88.2 kilometres / second squared.


If anyone wants to try and see it, the orbit is listed.

https://www.n2yo.com/passes/?s=6073


Nitpicking, but wouldn't it be 300 Gs of deceleration? I know the math is basically the same, but technically the words mean different things.


Acceleration is a vector, so if you apply the "deceleration" long enough you'll eventually be accelerating in the opposite direction. Without a frame of reference it's all the same. Even with a frame of reference you're still accelerating; it's just in the opposite direction of the current velocity.
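A tiny numeric sketch of that point (arbitrary units; the "deceleration" is just a constant negative acceleration, and the velocity eventually flips sign):

    # v(t) = v0 + a * t with a constant negative acceleration
    v0, a = 10.0, -2.0
    for t in range(0, 11, 2):
        print(t, v0 + a * t)   # prints 10, 6, 2, -2, -6, -10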


I fly through the tram in completely different directions depending on whether it accelerates or decelerates. So for sure a system's design must consider more than just the magnitude of acceleration.


When you go around a tight corner and are thrown to one side, what term would you use for the tram's change in motion then?

Deceleration is a useful but non-technical term, like vegetable. A tomato is a fruit, which is a tightly defined concept, but it sits in this loose category of things called vegetables. It's still useful to be able to call it a vegetable.

From a physics perspective, all changes in motion (direction and magnitude) are acceleration, and it's correct to say the designers had to consider acceleration in most (all?) directions when designing the tram. That includes gravity's, which is why they tend to give you seats to sit on rather than velcro panels and straps like on spaceships.

It is useful to say to your friend in the pub that you got thrown out of your seat due to the tram's heavy deceleration, rather than give a precise vector.


Without looking out the window, how would you tell the difference between acceleration and deceleration? You can't.

And if you say "well, one way I fly to the back of the tram and the other way the front", you're arbitrarily associating "front" with decelerating and "back" with accelerating.

300 g is 300 g regardless of the direction of the acceleration vector.

> So for sure a system's design must consider more than just the magnitude of acceleration.

What else would you need to consider? Acceleration up? Down? Left? 20%x,30%y,40%z? There’s an infinite number of directions.


Well to be fair, the person you reply to has a point. There’s a continuous range of directions, but even though I’m no spaceship engineer, I suspect they’re probably engineered to withstand acceleration better in some directions than others, given that pretty much only their thrust method, as well as gravity at source and destination, will actually be able to apply any acceleration.


You can tell because they typically accelerate faster than they decelerate.


“The enemy's gate is down.”


They tend to do this with spacecraft by turning the whole craft so the acceleration always comes through the floor.


I think this is a case where “technically” the words mean the same thing but “generally” they mean different things.


This is wrong when talking about the physics of something. Deceleration is acceleration. Acceleration is just a change in velocity.


Acceleration, deceleration; the point is: something is going to apply 300 g in a certain direction, and that has to be designed for.

It's not like you can tell whether you're going slow or fast, in one direction, the other direction, or even just standing still, if you close your eyes.


Sure you can. You just need a luminiferous aether detector.


Of course, my bad. Otherwise the speed of light would have to be constant in any reference frame, and that would just be ridiculous.


It’s just a minus sign.


What is deceleration but acceleration in the opposite direction? /s


There's no need for the "/s" on the end there. Deceleration, especially in this case with a natural frame of reference, is just negative acceleration.


More stringently, deceleration is decreasing the magnitude of the velocity vector, I would say.

If acceleration can be negative, so can speed. A negative speed with negative acceleration would not imply deceleration?


The magnitude of the velocity vector is dependent on the frame of reference.

If you measure the same object's velocity from a spaceship traveling through the solar system, you'll get a different answer from what we measure from Earth.

That's why physics doesn't distinguish between acceleration and deceleration. What looks like acceleration in one frame looks like deceleration in a different frame.


Speed is not a vector, it is a scalar. You are thinking of velocity.


Flip your phone upside down, brah


The ChatGPT session he links [0] shows how powerful an LLM can be in aiding and teaching programming: a patient, resourceful, effective, and apparently deeply knowledgeable tutor! At least for beginners.

[0] https://chatgpt.com/share/68143a97-9424-800e-b43a-ea9690485b...


I'm constantly shocked by the number of my coworkers who won't even try to use an LLM to get stuff done faster. It's like they want it to be bad so they don't have to improve.


Maybe they have tried and found it lacking?

I have an on again off again relationship with LLMs. I always walk away disappointed. Most recently for a hobby project around 1k lines so far, and it outputs bugs galore, makes poor design decisions, etc.

It's ok for one off scripts, but even those it rarely one shots.

I can only assume people who find it useful are working on different things than I am.


Yeah I'm in the holding it wrong camp too. I really want LLMs to work, but every time I spend effort trying to get it to do something I end up with subtle errors or a conclusion that isn't actually correct despite looking correct.

Most people tell me I'm just not that good at prompting, which is probably true. But if I'm learning how to prompt, that's basically coding with more steps. At that point it's faster for me to write the code directly.

The one area where it actually has been successful is (unsurprisingly) translating code from one language to another. That's been a great help.


I have never been told I'm bad at prompting, but people swear LLMs are so useful to them I ended up thinking I must be bad at prompting.

Then I decided to take people up on their offers to help me with a couple of problems I had and, surprise, LLMs were indeed useless even when piloted by people who swear by them, in the pilot's area of expertise!

I just suspect we're indeed not bad at prompting but instead have different kinds of problems that LLMs are just not (yet?) good at.

I tend to reach for LLMs when I'm (1) lazy or (2) stuck. They never help with (2) so it must mean I'm still as smart as them (yay!) They beat me at (1) though. Being indefatigable works in their favor.


My experience tracks your experience. It seems as if there are a few different camps when it comes to LLMs, and that’s partly based on one’s job functions and/or context that available LLMs simply don’t handle.

I cannot, for example, rely on any available LLM to do most of my job, because most of my job is dependent on both technical and business specifics. The inputs to those contexts are things LLMs wouldn’t have consumed anywhere else. For example specific facts about a client’s technology environment. Or specific facts about my business and its needs. An LLM can’t tell me what I should charge for my company’s services.

It might be able to help someone figure out how to do that when starting out based on what it’s consumed from Internet sources. That doesn’t really help me though. I already know how to do the math. A spreadsheet or an analytical accounting package with my actual numbers is going to be faster and a better use of my time and money.

There are other areas where LLMs just aren't "there yet" in general terms, because of industry or technology specifics they're not trained on, or that require some actual cognition and nuance that an LLM trained on random Internet sources isn't going to have.

Heck, some vendors lock their product documentation behind logins you can only get if you’re a customer. If you’re trying to accomplish something with those kinds of products or services then generally available LLMs aren’t going to provide any kind of defensible guidance.

The widely available LLMs are better suited to things that can easily be checked in the public square, or to help an expert summarize huge amounts of information, and who can spot confabulations/hallucinations. Or if they’re trained on specific, well-vetted data sets for a particular use case.

People seem to forget or not understand that LLMs really do not think at all. They have no cognition and don’t handle nuance.


Don’t get them to make design decisions. They can’t do it.

Often, I use LLMs to write the V1 of whatever module I’m working on. I try to get it to do the simplest thing that works and that’s it. Then I refactor it to be good. This is how I worked before LLMs already: do the simplest thing that works, even if it’s sloppy and dumb, then refactor. The LLM just lets me skip that first step (sometimes). Over time, I’m building up a file of coding standards for them to follow, so their V1 doesn’t require as much refactoring, but they never get it “right”.

Sometimes they’ll go off into lalaland with stuff that’s so over complicated that I ignore it. The key was noticing when it was going down some dumb rabbit hole and bailing out quick. They never turn back. They’ll always come up with another dumb solution to fix the problem they never should have created in the first place.


I do the designing, then I write a comment explaining what happens, and the LLM then adds a few lines of code. Write another comment, etc.

I get very similar code to what I would normally write but much faster and with comments.


I use LLMs often - a few times a week. Every time I gain confidence in a model I get burned. Sometimes verifying takes longer than doing the task myself, so “AI” gets a narrower and narrower scope in my workflow as time goes by.


And I'm constantly shocked by the number of people still shilling for it, despite it hallucinating constantly.

Plus, having used it in a JetBrains IDE, it makes me sad to see them ditching their refactorings for LLM refuctoring.


The normal refactorings are still there AFAICT.


That implies that they were there in the first place. For some IDEs the refactorings are essentially "rename" and "buy the JetBrains AI plugin".


Then don't complain about them going away?


I didn't complain about them going away. I complained about them using LLMs as an upsell rather than implementing refactorings like they used to for their previous IDEs (e.g. IntelliJ).


You did, actually.


I complained about them no longer being added, not about them being removed (at least not yet). Look at CLion's refactorings and compare them to IDEA and Rider, which preceded the LLM enshittification.

For C++, there should be quite a few refactorings on account of it being OOP like Java.

Even IDEA and Rider didn't add any new refactorings, despite Java advancing quite a bit.


A lot of people just don't have the dexterity. Doesn't mean they're stupid necessarily (although the two do rhyme)


Some people just don't want to use AI and there are very legitimate reasons for that.

Why are you so willing to teach a program how to do your job? Why are you so willing to give your information to an LLM that doesn't care about your privacy?


I agree there can be very legitimate reasons for personally not wanting to use AI. At the same time, I'm not sure I find either of those questions to be related to particularly convincing reasons.

Teaching a program how to do your job has been part of the hacker mindset for many decades now, I don't think there is anything new to be said as to why. Anyone here reading this on the internet has long since decided they are fine preferring technical automations over preserving traditional ways of completing work.

LLMs don't inherently imply anything about privacy handling, the service you select does (if you aren't just opting to self host in the first place). On the hosted service side there's anything from "free and sucks up everything" to "business data governance contracts about what data can be used how".


> Anyone here reading this on the internet has long since decided they are fine preferring technical automations over preserving traditional ways of completing work.

Well, that's a huge unsubstantiated leap. Also, it's not about "preserving traditional ways of completing work." It's just about recognizing that humans are much better at the vast majority of real world work.


> Well, that's a huge unsubstantiated leap.

I suppose that might depend on how you read "preferring". As in "is what one would ideally like", then sure, it's a bit orthogonal. As in "is what one decides to use" is what I mean: by nature of being here, we are willing to try and use technical automations over traditional means, even if a face-to-face conversation would be higher quality or an additional mailman would be employed.

> Also, it's not about "preserving traditional ways of completing work." It's just about recognizing that humans are much better at the vast majority of real world work.

While an interesting topic I'm not sure this really relates to why people are willing to teach a program how to do their job. It would be more "why people don't bother to", which is a bit of the opposite assumption (that we should if it were worth it).

The most interesting thing about recognizing humans are much better at the vast majority of real-world work is that it doesn't define where the boundary currently sits or how far it's moving. I suspect people will continue to be the best option for the majority of work for a very long time to come, given our tendency to stop considering automated things "work". "Work" ends up being "what we're employed to do" rather than "things that happen". Things like lights, electricity, HVAC, dishwashers, washers/dryers, water delivery and waste removal, instances of music or entertainment performances, and so on used to require large amounts of human work, but now that the majority of work in those areas is automated we call them "expenses", and "work" is having to load/unload the washer instead of cleaning the clothes, and so on.

So, by one measure, I'd disagree wholeheartedly: machine automation is responsible for more quality production output than humans, if for nothing else because of the sheer volume of output, rather than being better at a randomly chosen task. By another measure I'd agree wholeheartedly: the things we define ourselves as being better at tend to be the things worth us doing, which become the things we still call "work". Anything where the majority is truly done better (on average) by machines becomes an expense.


Seeds were also the first thing that came to my mind.

I've always found it fascinating that I could plant many spice seeds (e.g. mustard), as long as their container said "not irradiated", and they would sprout and grow just fine several years after buying them. That is, they are still technically alive, and can stay that way for many years, which is just amazing resilience of life.

That said,

> ...except that as these organisms are simpler than seeds...

I wouldn't say any animal that can move around is simpler than seeds. IMHO, by any definition, animals are a big jump up in complexity over plants.


Plants in general have much larger genomes than animals, and that's clearly a definition of complexity.


That just means they have less selective pressure to reduce it, possibly because they are simpler. Genome size isn't correlated much with complexity. Obviously it provides an upper bound, but a lot of genes are repeats.


Yes. You'll find that plants generally survive better after being irradiated, indicating that a lot of these genes are apparently not important.


Large software systems also often have significant chunks of code that are only historical and/or "accidental complexity" and can be removed. But we would typically say that removing them reduces the system's complexity, rather than that it wasn't complex.


Genes are not maintained by people


> I wish they’d spin off Firefox and related stuff..., and abandon the rest of their “mission”.

I wish the community (I don't have the technical skills myself) would fork Firefox back into a privacy-focused browser, strip out all the Mozilla "products" code that has snuck into it, and manage the development in a non-profit organization, like how the Linux kernel gets developed.


LibreWolf?


no! bring back the IceWeasel! (oh, apparently it still exists, but it's called IceCat now, which explains its lack of popularity)


The Iceweasel in Debian was Firefox without branding; IceCat (which was called Iceweasel before Debian created its Iceweasel) is a GNU fork (and there are Thunderbird and SeaMonkey equivalents).


Can you elaborate on what objections you have to Librewolf and why IceCat is better?


> In tort.

New word for me.

> A tort is a civil wrong, other than breach of contract, that causes a claimant to suffer loss or harm, resulting in legal liability for the person who commits the tortious act. Tort law can be contrasted with criminal law, which deals with criminal wrongs that are punishable by the state. While criminal law aims to punish individuals who commit crimes, tort law aims to compensate individuals who suffer harm as a result of the actions of others.

https://en.wikipedia.org/wiki/Tort


What's a good way to be an "Archivist" on a low budget these days?

Say you have a few TBs of disk space, and you're willing to capture some public datasets (or parts of them) that interest you, and publish them in a friendly jurisdiction - keyed by their MD5/SHA1 - or make them available upon request. I.e. be part of a large open-source storage network, but only for objects/datasets you're willing to store (so there are no illegal shenanigans).
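For the content-addressing part, something like this minimal Python sketch is what I have in mind (the "datasets" directory name is just a placeholder):

    # Compute MD5/SHA1 keys for local archive files, so they can be
    # published and looked up by content hash.
    import hashlib
    from pathlib import Path

    def content_keys(path, chunk_size=1 << 20):
        md5, sha1 = hashlib.md5(), hashlib.sha1()
        with path.open("rb") as f:
            while chunk := f.read(chunk_size):
                md5.update(chunk)
                sha1.update(chunk)
        return md5.hexdigest(), sha1.hexdigest()

    for p in Path("datasets").rglob("*"):
        if p.is_file():
            md5, sha1 = content_keys(p)
            print(sha1, md5, p)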

Is this a use case for Torrents? What's the most suitable architecture available today for this?


I’m not an expert in such things, but this seems like a good use case for IPFS. Kinda similar to a torrent except that it is natively content-addressed (essentially the key to access is a hash of the data).


https://wiki.archiveteam.org/index.php/The_WARC_Ecosystem

Set up a scrape using ArchiveTeam's fork of wget. It can save all the requests and responses into a single WARC file. Then you can use https://replayweb.page/ or some other tool to browse the contents.
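If you want to inspect the WARC programmatically instead, a minimal sketch using the warcio library (my suggestion, not part of the toolchain above; the filename is a placeholder) would look like:

    # Iterate over response records in a WARC file (pip install warcio).
    from warcio.archiveiterator import ArchiveIterator

    with open("crawl.warc.gz", "rb") as stream:
        for record in ArchiveIterator(stream):
            if record.rec_type == "response":
                url = record.rec_headers.get_header("WARC-Target-URI")
                body = record.content_stream().read()
                print(url, len(body))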


In my experience, to archive effectively you need a physical datacenter footprint, or to rent capacity of someone who does. Over a longer timespan (even just 6 months), having your own footprint is a lower total cost of ownership, provided you have the skills or access to someone with the skills to run Kubernetes + Ceph (or something similar).

> Is this a use case for Torrents?

Yes, provided you have a good way to dynamically append to a distributed index of torrents, and users willing to run that software in addition to the torrent software. Should be easy enough to define in container-compose.


> All current China CDN customers must complete the transition to our Partners’ solution by June 30, 2026...

Can anyone here who works in the field shed some light on why it takes a whole 1.5 years for such a change to take effect?

What's involved in a CDN transition that can't be done in, say, 6 months?


My enterprise sized day job is an Akamai customer. Nothing in China directly so we won't be directly impacted.

As a hypothetical, if we got told that we had to switch providers to stay in a region, we'd need to rebuild pipelines, EdgeWorkers, edge caching rules, origin routing configurations and probably more I'm not aware of. Plus testing all of those changes in a non-breaking way across the entire enterprise. Along with all the normal business delivery priorities.

It'd probably take a solid year for us to fully execute it.


This. People tend not to realize how sprawling enterprise software stacks tend to be, how many implicit dependencies have to be untangled, etc. Even simple things can take years and complicated things often just don’t get done at all.


Yes, dealing with the mess that is your software stack, the mess that is your corporate structure, and the mess that is your change management process means that things a couple dudes in your startup could accomplish in a week would take a year at a crusty enterprise.


There's also a finance process. Akamai deals mostly with enterprise customers, which means step 1 may be technical validation, but step 2 is negotiating an appropriate contract with another provider, which may take weeks on its own without a clear go/no-go answer in the meantime.


Sounds fragile and pretty exposed.

(Also a complete layman here)


It's actually the opposite. I thought the same thing before working in big enterprise though so I definitely understand how you could think that.

In reality everything takes 10x longer because things are done in a very thorough way and typically with significant redundancy (high availability). The code bases are typically shite and personally I'd rather eat nails than work on them, but they are reasonably well tested and changes are typically done very conservatively. Big enterprise devs are also really good at not breaking production. As much as I detest that environment, I do think startups in general could learn a great deal about not breaking production from the big enterprise people.


What I meant is being that dependent on a single essential service, with that much difficulty in overcoming it. It smells like a lot of trust and hope (that things will stay as they are for a long time) put into the architecture. It is nice that there is a workaround, and a will for a workaround, in this situation for instance.


No, quite the opposite. Big companies likely also have other big(ger) companies as partners/customers, all of which want stability and to see things keep working. Therefore companies need careful planning, execution, and testing to ensure there is minimal disruption.

Startups and a certain company can move fast and break things. But not everyone can do this.


I don't think it's particularly fragile. Big systems have big dependencies, and moving those dependencies takes time if you want to minimise risk.


Vendor validation alone can take months, and that's before you start the technical process of migrating.

This is a company that sits in front of your business; do you trust them?

I expect a lot of businesses will take the opportunity to send the contract out for tender.


It’s the difference between all hands on deck for 6 months and a reasonable pace over 18 months.

If you are just running a single website with DNS fronting that’s not an issue.

But large customers tend to have more advanced connectivity on L3/4 and asymmetric routing.

Then there is the CDN part itself: are you only using the basic auto caching? That's not a problem, but if you manually manage it then all of that needs to be converted as well, and there is no guarantee that the partner's API will be compatible or even have the same functionality as your current CDN.


It's specifically because Akamai wants to preserve its reputation among enterprise customers paying $$$ that it's giving such a long lead time. And I predict it won't be enough for many.


No doubt there will be a lot of companies who don't finish until nearly the end. Barring legal reasons, I'm guessing there will actually be an extension, because enough corps won't be ready at that point. I'd also expect Akamai to offer extended support beyond that date (for a significant cost) on a customer-by-customer basis.


Because Akamai is substantially more than a CDN (arguably, CDN is now a smaller – albeit not small – part of their business): it is also certificate management, WAF, web app/API protection, IAM, edge DNS, edge workers, complex CDN rules, analytics and a whole bunch of other stuff.

Enterprise customers also typically use Akamai mostly in non-CDN scenarios, so they will have a hard time migrating off it if need be, especially if they have invested heavily in Akamai.


CDNs are much more than dumb HTTP/1.1-compliant content caches these days. And every CDN has a huge number of integrations and features, all of which have a different implementation and behavior for different providers. It's probably a good analog to an infrastructure ("cloud") migration, and even harder to test, validate, and switch the actual service provider, as they _are_ "the front end."


Also consider that every company has their existing roadmap. Getting the work scheduled can be difficult.


Anything less would be throwing your customers under the bus.

Of course, there are well-known companies out there closing services with only a couple months' notice; but that's not an example to follow.

