
How is this mode not a standard part of their disaster recovery plan? Especially in SF and the Bay Area, they need to assume an earthquake is going to take out a lot of infrastructure. Did they not take into account that this would happen?


> While we successfully traversed more than 7,000 dark signals on Saturday, the outage created a concentrated spike in these requests. This created a backlog that, in some cases, led to response delays contributing to congestion on already-overwhelmed streets. We established these confirmation protocols out of an abundance of caution during our early deployment, and we are now refining them to match our current scale. While this strategy was effective during smaller outages, we are now implementing fleet-wide updates that provide the Driver with specific power outage context, allowing it to navigate more decisively.

Sounds like it was and you’re not correctly understanding the complexity of running this at scale.


Sounds like their disaster recovery plan was insufficient, intensified traffic jams in already congested areas because of "backlog", and is now being fixed to support the current scale.

The fact this backlog created issues indicates that it's perhaps Waymo that doesn't understand the complexity of running at that scale, because their systems got overwhelmed.


What about San Francisco allowing a power outage of this magnitude and not being able to restore power for multiple days?

This kind of attitude indicates to me a lack of experience building complex systems and responding to unexpected events. If they had done the opposite and been overly aggressive in letting Waymos manage themselves at dark signals, would you be the first in line criticizing them when some accident happened?

All things being considered, I’m much happier knowing Waymo is taking a conservative approach if the downside means extra momentary street congestion during a major power outage; that’s much rarer than being cavalier with fully autonomous behavior.


DR always stands for "didn't realize" in the aftermath of an event.

That's what they're learning and fixing for in the future to give the cars more self-confidence.


They probably do, they just don't give a shit. It's still the "move fast and break things" mindset. Internalize profits but externalize failures to be carried by the public. Will there be legal consequences for Waymo (i.e. fines?) for this? Probably not...


What Waymo profits?

They're one-of-one still. Having ridden in a Waymo many times, there's very little "move fast and break things" leaking into the experience.

They can simulate power outages as much as they want (testing), but the production break had some surprises. This is a technical forum... most of us have been there: bad things happened, plans weren't sufficient, and we can measure them by how they respond to production insufficiencies in the next event.

Also, culturally speaking, "they suck" isn't really a working response to an RCA.


Waymo cars have been proven safer than human drivers in California. At the same time, 40k people die each year in the US in car accidents caused by human drivers.

I'm very happy they're moving fast so hopefully fewer people die in the future


Both things can be true. They can be safer, but at the same time Waymo can still externalize stuff to the public...


Who cares? Honestly?


…the public?


"Move fast and break things" is a Facebook slogan. Applying it to Google or Waymo just doesn’t fit. If anything, Waymo is moving too slow. 100 people are going to die in seven days from drunk drivers and New Years in the US.

How's that for a real world trolley problem?


The most effective way of decreasing traffic deaths is safer driving laws, as the recent example of Helsinki has shown. That and better public transportation infrastructure. If you think that a giant, private, for-profit company cares about people's lives, you are in for a ride.


> The most effective way of decreasing traffic deaths is safer driving laws

This is almost hilariously false. "Oh yeah, those words on paper? Well, they actually physically stopped me from running the red light and plowing into 4 pedestrians!"

> If you think that a giant, private, for-profit company cares about people's lives, you are in for a ride.

I honestly wonder how leftists manage to delude themselves so heavily? I'm sure a bunch of politicians really have my best interests at heart. Lol


> This is almost hilariously false. "Oh yeah, those words on paper? Well, they actually physically stopped me from running the red light and plowing into 4 pedestrians!"

It's very clearly proven that hitting a pedestrian at 50 km/h is exponentially more dangerous than hitting them at 30 km/h. It's very clearly proven that physically separated bike lanes prevent deaths. It's very clearly proven that other measures like speed bumps, one-way streets, and smart traffic routing prevent deaths.

And I am not even going to respond to your idiotic "leftist" statement.


It's very clearly proven that murder is dangerous, yet people still commit it. You still have not explained how laws stop things from happening, as if by magic.

> And I am not even going to respond to your idiotic "leftist" statement.

This says more about you than it does me. Taking the most cynical view possible, at least a for profit company has a profit motive to keep me alive unlike a bureaucrat. A bureaucrat doesn't lose their salary if traffic deaths go up. In fact, if a problem gets worse, they often receive more funding to fix it. If a government road is dangerous, you cannot easily fire the government and switch to a competitor's road.

The success you mentioned in Helsinki wasn't a triumph of law; it was a triumph of engineering. The question is not whether we want safety, but which system—a state monopoly with no financial penalty for failure, or a private entity that faces financial ruin if it kills its customers—is more likely to engender it.


If the onboard software has detected an unusual situation it doesn't understand, moving may be a bad idea. Possible problems requiring a management decision include flooding, fires, earthquakes, riots, street parties, power outages, building collapses... Handling all that onboard is tough. For different situations, a nearby "safe place" to stop varies. The control center doesn't do remote driving, says Waymo. They provide hints, probably along the lines of "back out, turn around, and get out of this area", or "clear the intersection, then stop and unload your passenger".

Waymo didn't give much info. For example, is loss of contact with the control center a stop condition? After some number of seconds, probably. A car contacting the control center for assistance and not getting an answer is probably a stop condition. Apparently here they overloaded the control center. That's an indication that this really is automated. There's not one person per car back at HQ; probably far fewer than that. That's good for scaling.


> For example, is loss of contact with the control center a stop condition?

Almost certainly no - you don’t want the vehicle to enter a tunnel, then stop halfway through due to a lack of cell signal.

Rather, areas where signal dropouts are common would be made into no-go areas for route planning purposes.


Relying on what is essentially remote dispatch to resolve these error states is a disaster.


Fundamentally, is there anything you can't write in Rust and must write in C? With AI, languages should mostly be transposable, even though right now they are not.


Why rewrite something with multiple decades of validation and bugs-now-depended-upon-features?

Word and Excel almost certainly have 30 year old C++ code that #must-not-be-touched, bugs and all.


So they're going to port Microsoft Edge web browser to Rust?

Are they going to upstream their changes to the Google Chrome codebase?


Mozilla seems to be doing well after inventing Rust and rewriting Firefox using Rust.


According to https://4e6.github.io/firefox-lang-stats/ , only 12.3% of Firefox is written in Rust.


Security. I know it's boring for most, but important for those who need to handle cybersecurity issues.


Some of the “algorithms” libraries in C++ are very difficult to express in safe Rust and might require proc macros.

Most anything related to “intrusive linked lists” is difficult or outright impossible in safe Rust, but is commonly used in operating system code.


To be fair, one of the main reasons linked lists are used is that more advanced data structures are too hard at the OS level with C.


Intrusive linked lists have performance and robustness benefits for kernel programming, chiefly that they don't require any dynamic memory allocation and can play nice with scenarios where not all memory might be "paged in" - the paging system itself needs data structures to track memory state. Linked lists for this type of use also have consistently low latency, which can matter a lot in scenarios such as network packet processing: loading a struct into L1 cache also loads its member pointer to the next struct, saving an additional lookup in some external data structure.


Great video on how the public is getting screwed on energy deals.

Basically large tech companies have the deep pockets to push up prices at electricity auctions. But why bid in public when you can do those deals in private. That's the first problem. All that needs to be out in the open.

What really irks me is that the market is so manipulated that we can't do anything about it. Think about NEM 3.0 vs 2.0. Putting data centers in their own rate class does make sense as the first step.


>Basically large tech companies have the deep pockets to push up prices at electricity auctions. But why bid in public when you can do those deals in private.

Public utilities can't do the same? Moreover if the implication is that large tech companies are somehow getting great prices at the expense of residential users, what does that mean for the electric generators on the other end of this transaction? Why are they leaving money on the table by selling to large tech companies for cheap?


Watch the video.

These companies are regulated and can only charge for the costs they incur, plus a flat profit of 10% or so on top.

The datacenters allow them to justify building a lot more capacity to serve them. That increases costs, which means that 10% added for profit is now a bigger number, and they can give bigger returns to their shareholders. But those profits are extracted from the existing customers, who now see higher bills to cover the costs of expanding capacity to serve the datacenters.

It's a question of aligning incentives.


(Why) are the datacenters not (also) charged (pun intended) for this?


Because they are given a sweetheart deal to attract them to that specific area. And that deal happens behind closed doors.


Can you link the .PDF with this evidence?


The whole capped profit creates the distortions you illustrate.

The effect has a name: the Averch–Johnson effect, named after the Harvey Averch and Leland Johnson paper: "Behavior of the Firm Under Regulatory Constraint"

Fun stuff: https://en.wikipedia.org/wiki/Averch%E2%80%93Johnson_effect


Private deals - do you mean like a Power Purchase agreement? That doesn't fall out to the public cost domain and certainly shouldn't be a public good.


NEM 2.0 was completely unsustainable and extremely regressive, punishing people who couldn't afford panels. 3.0 is a much better system.


A number of years back I got bored during covid and decided to reverse engineer as much of the Wyze Cam V2 camera I could and make some custom firmware for it. Right now that lives at https://github.com/openmiko/openmiko

That said it's really hard to make long term supportable open source camera software/firmware. And when picking cameras it is even harder because the market as it stands now does not let you have it all. You need to pick what facets you really care about.

Also keep in mind even the above code is not really open source all the way: I still had to load the driver binaries. Not sure that source will ever be released. The kernel is also old as heck.

What I do feel good about though is saving these old cameras from the dumpster if Wyze ever stops supporting them. The firmware works for simple cases: just load it up and you can start curl'ing frames. I used it in scripts to put together timelapse videos with ffmpeg. No need to screw around with authentication, phone apps, email, etc.


Learning to do something like this (reverse engineer electronics and flash them with custom firmware) from scratch is one of my life dreams!

Having read https://github.com/openmiko/openmiko/blob/master/doc/develop... -- is there anywhere that you document how you learned to do this / how you got started with this project?

I would love to find a "zero to hello world, from scratch" type tutorial for putting custom firmware on a camera not supported by one of the existing projects (or a similar writeup detailing how one of these projects got started in the first place).


Hey, Openmiko is a nice project. With your depth of knowledge, I would love to see you contributing to Thingino as well. While we still depend on binary blobs from the manufacturer SDK, there is work on alternatives to replace what is replaceable with an open stack. Join the team, have fun.


So basically you trust something because you have a long chain of assurances that you trusted it before? Kinda like certificate pinning.


Is the send function considered non-blocking?


Why would it have a completion callback if it wasn't?


Visiting the site, it is easy to see why they got demoted. It's all fluff and no substance. Almost all the links I clicked through could have been generated by an LLM.

Lots of ads. I might be old school, but I still hold up something like AnandTech back in the day as the quality bar: something a real human spent time on because they were genuinely interested in the topic and decided to write about it. The bar is so low these days I'm not sure there even is a bar.

This site is supposed to be a men's lifestyle magazine, but it barely has any content that isn't just filler and fluff.


I love it. Could definitely see more features where you can see the results of the jury questions.


^ this. It gets boring pretty fast being a juror, but it would be a lot more compelling if you could see the verdict.


It's so dumb to assign it a CVSS score of 10.

Unless you are blindly accepting parquet formatted files this really doesn't seem that bad.

A vulnerability in parsing images, xml, json, html, css would be way more detrimental.

I can't think of many services that accept parquet files directly. And of those usually you are calling it directly via a backend service.


Unless you're logging user input without proper validation, log4j doesn't really seem that bad.

As a library, this is a huge problem. If you're a user of the library, you'll have to decide if your usage of it is problematic or not.

Either way, the safe solution is to just update the library. Or, based on the link shared elsewhere (https://github.com/apache/parquet-java/compare/apache-parque...) maybe avoid this library if you can, because the Java-specific code paths seem sketchy as hell to me.


It’s incredibly common to log things that contain text elements coming from a user request. I’ve worked on systems that do that hundreds of thousands of times per day. I’ve literally never deserialized a parquet file that came from someone else, even a single time, and I’ve used parquet since it was first released.


> Unless you're logging user input without proper validation, log4j doesn't really seem that bad.

Most systems do log user input though, and "proper validation" is an infamously squishy phrase that mostly acts as an excuse. The bottom line is that the natural/correct/idiomatic use of Log4j exposed the library directly to user-generated data. The similar use of Apache parquet (an obscure tool many of us are learning about for the first time) does not. That doesn't make it secure, but it makes the impact inarguably lower.

I mean, come on: the Log4j exploit was a global zero-day!


> Most systems do log user input though, and "proper validation" is an infamously squishy phrase that mostly acts as an excuse

That's my point: if you start adding constraints to a vulnerability to reduce its scope, high CVE scores don't exist.

Any vulnerability that can be characterised as "pass contents through parser, full RCE" is a 10/10 vulnerability for me. I'd rather find out my application isn't vulnerable after my vulnerability scanner reports a critical issue than let it lurk with all the other 3/10 vulnerabilities about potential NULL pointers or complexity attacks in specific method calls.


> Any vulnerability that can be characterised as "pass contents through parser, full RCE" is a 10/10 vulnerability for me

And I think that's just wildly wrong sorry. I view something exploited in the wild to compromise real systems as a higher impact than something that isn't, and want to see a "score" value that reflects that (IMHO, critical) distinction. Agree to disagree, as it were.


The score is meant for consumption by users of the software with the vulnerability. In the kind of systems where Parquet is used, blindly reading files in a context with more privileges than the user who wrote them is very common. (Think less "service accepting a parquet file from an API", more "ETL process that can read the whole company's data scanning files from a dump directory anyone can write to".)


I get the point you’re making but I’m gonna push back a little on this (as someone who has written a fair few ETL processes in their time). When are you ever ETLing a parquet file? You are always ETLing some raw format (csv, json, raw text, structured text, etc.) and writing into parquet files, never reading parquet files themselves. It seems a pretty bad practise to write your ETL to just pick up whatever file in whatever format from a slop bucket you don’t control. I would always pull files in specific formats from such a common staging area, and everything else would go into a random “unstructured data” dump where you just make a copy of it and record the metadata. I mean, it’s a bad bug and I’m happy they’re fixing it, but it feels like you have to go out of your way to encounter it in practice.


Vendor CVSS scores are always inherently meaningless because they can't take into account the factors specific to the user's environment.

Users need to do their own assessments.


This comment overgeneralises the problem and is inherently absurd. There are key indicators in scoring that explain the attack itself, which isn't environment specific.

I do agree that in most cases the deployment specific configuration affects the ability to be exploited and users or developers should analyse their own configuration.


Thank you for providing this.

He mentions right after that "empathy is good but you need to think it through and not just be programmed like a robot".

That quote is clearly taken out of context and is specifically chosen as click/rage bait.

Like all things, nuance and context are always key.


You may find it utterly nullifying, not just mitigating.

I, of free and sound mind, do not.

I find an argument that it could be nullifying to be quite challenging to make. (really, I can't figure out how I'd make it)

Additionally, if I could, I'd still have to wrestle with that life isn't full of cartoon villains, and people usually hedge.

