vollbrecht's comments | Hacker News

A standard where you have to pay to play. The cheapest option is $3,000 per product and $500 annually.

Unless you're running Home Assistant and open source nodes. You can build a Matter+Thread node that works on an nRF52840 MCU; there are examples in the Nordic SDK. But then, why would you bother with Matter, which is so bloated it doesn't even fit in flash properly? The only example that works on the nRF52840 requires external flash to hold the B partition for OTA updates :)

So I'm using ESPHome for everything that can be wall-powered, and BTHome (with those same nRF52840 chips; you can buy boards for like $2 on Aliexpress) for everything that needs to run on battery.


> Unless you're running Home Assistant and open source nodes.

I think the parent is referring more to manufacturers than end users.

It would suck to have fewer low-cost competitors, especially from Chinese manufacturers.


More information here:

https://wizzdev.com/blog/how-much-does-it-cost-to-launch-mat...

But, yes, Matter/Thread is more expensive than Zigbee by a lot.


That's not bad compared to Bluetooth. Also, you will need FCC certification by law, and probably some UL certs if you actually want to sell your product anywhere, so you are already looking at tens of thousands even if you choose ZigBee. I would love to live in a world where indie hardware could launch wireless products without huge cert costs, but that's not the world we live in.

Use a pre-certified module and do your own unintentional radiator testing. It's not that hard.

You don't need UL for smart home sensors.


I don't think you can do your own testing like you suggest. You can self-declare, but you still need to include test data in that declaration, so unless you happen to own all of the highly expensive calibrated test equipment, you will need to pay an external test lab.

You don't need UL if you are just selling directly via your own website. However, if you want to sell the product in stores, most stores are going to require it.


This is a double-edged sword.

Zigbee's issue was that anyone could make devices and modify the protocol. Tons of devices are vendor-locked to their first-party hub. Philips attempted to do this recently with a firmware update and only backed off due to extremely bad PR.

Z-Wave has the same "problem" as Matter. You have to pay the consortium per product. Part of what that pays for is testing, and cross-vendor compatibility is mandatory. As a consumer you are guaranteed that a Z-Wave device will work with any hub (and therefore with Home Assistant / completely locally). You own Z-Wave devices.

I ran both in my old home, and used Zigbee devices where possible (Z-Wave devices are often more expensive).

I would much rather have it the way of Z-Wave and Matter. It is the lesser of two evils.


I wasn't aware of that. One other concern I have with Matter is that, if I understand correctly, Thread+Matter devices get their own IP address with internet access, whereas with Zigbee all of that has to be controlled by the gateway.

In theory that's a win for Matter, but I'm a little concerned about the security and enshittification problems that might cause. I kinda like the idea that I can buy a cheap IoT lock off Temu and, as long as my Zigbee gateway is secure, there's very little chance of that decision coming back to bite me...


Others have pointed out I might be wrong about this. See: https://news.ycombinator.com/item?id=45837052

Having network access is my primary concern. The protocol was developed by the largest adware companies on the planet...

I'm sure someone will chime in and say you can set up a VLAN and restrict all Matter devices from the internet, yada yada...

You don't have to do that with Z-Wave or ZigBee. And with ESPHome you know exactly what the device is doing because you have 100% control over it.


This is, to me, one of the absolute biggest selling points for ZigBee and Z-Wave.

I can get some random ZigBee sensor from a vendor I've never heard of, and I know it won't do anything rogue on the internet because it doesn't have any way of getting to the internet.

Also, ZigBee is extremely power efficient compared to WiFi. With ZigBee, I don't mind putting a sensor in the crawlspace or somewhere a pain to get to. It won't need the batteries changed for a year or two anyway.

I know Matter can work over more efficient means than WiFi, but most of the cheaper devices I find are WiFi. A cheap ZigBee device is still ZigBee.


Many Matter products are running on Thread, which uses the same radio as Zigbee and has the same power savings.

Thread devices don't get a publicly accessible IP address. Thread uses IPv6 with the ULA (unique local address) space, which is not routable on the public internet.
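Concretely, ULAs occupy the fc00::/7 prefix. A minimal sketch of that prefix check in Rust (my illustration, not from any Matter/Thread SDK):

  use std::net::Ipv6Addr;

  // Unique local addresses live in fc00::/7, i.e. the top 7 bits of the
  // first segment are 0b1111110.
  fn is_unique_local(addr: Ipv6Addr) -> bool {
      (addr.segments()[0] & 0xfe00) == 0xfc00
  }

  fn main() {
      // "fd11:22:33::1" is a made-up mesh-style address for illustration.
      assert!(is_unique_local("fd11:22:33::1".parse().unwrap()));
      assert!(!is_unique_local("2001:db8::1".parse().unwrap()));
  }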


As I said, my experience has been that the cheaper products run on WiFi. I also don't like that a product advertising "Matter" doesn't answer the question of whether it uses WiFi or not.

I much prefer that a $3 ZigBee temperature and humidity sensor definitely doesn't use WiFi, rather than having to dig to see whether a cheap Matter sensor uses WiFi.

I also much prefer the prices of ZigBee.


Is there anything preventing a Matter product from also requesting an IP address from your DHCP server and getting a route out to the internet?

Neither Matter nor Matter-over-Thread require Internet access.

We really should be yelling for advancements in simple-to-configure, dedicated, restricted VLANs and SSIDs for IoT devices instead of yelling about how inappropriate we think using IP is.

(Historically, IP wins in these conundrums anyway. IP has been a succession of grand successes for decades.

Resistance is futile. We should work to prepare for the eventuality of what is to come.)


> Neither Matter nor Matter-over-Thread require Internet access.

The protocols themselves might not, but as a warning to people looking for “Matter” as an indicator that they can have local-only control: apparently the Matter spec doesn’t require a local-only setup. I bought Honeywell’s new Matter thermostat, and in order to get the QR code and keys you need to register it with a Matter controller, you first have to download their app and connect the thermostat to their cloud, so that you can get the keys from the app. So the Matter capabilities are still useless.


> We really should be yelling for advancements in simple-to-configure, dedicated, restricted VLANs and SSIDs for IoT devices instead of yelling about how inappropriate we think using IP is.

What is the lay of the land for typical consumers in this respect? Any products you've worked with or would recommend?

I've recently started with Home Assistant and have been adding devices to my single network. The ISP-provided eero modem/router doesn't provide VLAN capability.


I don't use consumer off-the-shelf routers enough (these days) to know the lay of the land very well. But when I do get my hands on them (usually when a friend wants help with something), I do have a look through the config options just to see what functions they expose. And I don't see that kind of thing available in the configs of the stuff I've recently had my hands on.

In my own little world at home, I just use OpenWRT (on a now-old Raspberry Pi 4), Mikrotik access points, and some random switches that grok 802.1Q wherever they are useful. This has let me do whatever I've imagined wanting so far with VLANs, SSIDs, routing, firewalling, ...

And a person can also use a one-box solution running OpenWRT (the OpenWRT One is such a box) or Mikrotik's RouterOS (like their succinctly-named L009UiGS-2HaxD-IN).

But all of that is drifting pretty far from the concept I'd like to see, which is:

Person walks into Wal-Mart. Person buys a router, and some Matter wifi light bulbs. As a part of setting them up, they're walked through a simple process of making an isolated network for those light bulbs.

And we don't seem to be anywhere near there yet.

(And that may seem like a far-reaching goal to some, but similar things have been accomplished in the past. A router from Wal-Mart used to boot up out of the box and Just Work -- while providing a completely unfettered, unencrypted network named "linksys" or "NETGEAR" for anyone within earshot to participate in.

Things are no longer that way these days. Consumer routers have tended to provide secure-by-default wireless networks for a rather long time now. At least in that one little, important aspect of consumer goods, sanity did eventually prevail.)


The pressing question is: how much £ per £ lost needs to be invested in grid infrastructure to reduce this number?


A lot, and fixing the grid is full of other complexities - but that's not actually the best fix here. The UK could change its wholesale energy pricing model to something that encourages usage to move closer to generation (zonal or nodal pricing).

Currently, customers using cheap wind power are essentially punished if there is gas-backed generation elsewhere in the UK, and the energy companies reap the profit.


Nice gimmick that many elements inside that explainer image link directly to the respective source code they refer to.


You know who also misses the point? Qualcomm. Why? Well, just read the headline Qualcomm itself provides.


Rust now has a donated spec, provided by Ferrocene. The spec's style was influenced by the Ada spec. It is now publicly available at https://rust-lang.github.io/fls/ .

This is part of Ferrocene's effort to provide a safety-certified compiler. And such compilers are already available now.


This is only meaningful if Rust compiler devs give any guarantees about never breaking the spec and always being able to compile code that adheres to this spec.


Why so?

Specs for other languages are also for a specific version/snapshot.

It's also a specific version of a compiler that gets certified, not a compiler in perpetuity, no matter what language.


That's not how it works for most language standards, though. Most language standards are prescriptive, while Rust's is descriptive.

Usually the standard comes first, compiler vendors implement it, and between releases of the spec the language is fixed. Using Ada as an example, there was Ada 95 and Ada 2003, but between 95 and 2003 there was only Ada 95. There was no in-progress version, the compiler vendors weren't making changes to the language, and an Ada95 compiler today compiles the same language as an Ada95 compiler 30 years ago.

Looking at the changelog for the Rust spec (https://rust-lang.github.io/fls/changelog.html), it's just the changelog of the language as each compiler version is released, and there doesn't seem to be any intention of supporting previous versions. Would there be any point in an alternative compiler implementing "1.77.0" of the Rust spec?

And the alternative compiler implementation can't start implementing a compiler for version n+1 of the spec until that version of rustc is released because "the spec" is just "whatever rustc does", making the spec kind of pointless.


> Usually the standard comes first, compiler vendors implement it, and between releases of the spec the language is fixed.

This is not how C or C++ were standardized, nor most computer standards in the first place. Usually, vendors implement something, and then they come together to agree upon a standard second.

When updating standards, sometimes things are put in the standard before any implementations, but that's generally considered an antipattern for larger designs. You want real-world evaluation of the usefulness of something before it's been standardized.


Because otherwise the spec is just words on paper, and the standard is just "whatever the compiler does is what it's supposed to do". The spec codifies the intentions of the creators separately from the implementation.

In Rust, there is currently only one compiler, so it seems like there's no problem.


There being only one compiler is exactly the problem.


How is this different from the existing situation, where Rust has remained compatible since Rust 1.0 over a decade ago?


Rust doesn’t have quite as strong compatibility guarantees. For example, it’s not considered an NC-breaking change to add new methods to standard library types, even though this can make method resolution ambiguous for programs that had their own definitions of methods with the same name. A C++ implementation claiming to support C++11 wouldn’t do that; they’d use ifdefs to gate off the new declarations when compiling in C++11 mode.
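A toy illustration of the ambiguity mechanism (hypothetical trait names, not the actual std case): with two traits in scope that both provide a method of the same name for a type, plain method-call syntax stops compiling:

  trait MyExt { fn describe(&self) -> String; }     // your extension trait
  trait NewStdExt { fn describe(&self) -> String; } // imagine this landing in a later stdlib

  impl MyExt for u32 { fn describe(&self) -> String { format!("mine: {self}") } }
  impl NewStdExt for u32 { fn describe(&self) -> String { format!("new: {self}") } }

  fn main() {
      // let s = 5u32.describe(); // error[E0034]: multiple applicable items in scope
      let s = MyExt::describe(&5u32); // fully qualified syntax disambiguates
      println!("{s}");
  }

Adding `NewStdExt` later breaks the previously compiling `5u32.describe()` call, which is exactly the kind of change a strict backwards-compatibility guarantee would forbid.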


That's a good point about the #ifdefs, thanks.


Too late to edit but I meant BC not NC


Thanks, that was easily the most confusing thing, and I was like, well... I understand everything else; if it's very important what exactly "NC-breaking" means, I'm sure I will realise later.


By that criterion, there's no meaningful C++ compiler/spec.


How so? There are compiler-agnostic C++ specs, and compiler devs try to be compatible with them.

What the GP is suggesting is that the Rust compiler should be written first and then a spec codified after the fact (I guess just for fun?).


> compiler devs try to be compatible with it.

You have to squint fairly hard to get here for any of the major C++ compilers.

I guess maybe someone like Sean Baxter will know the extent to which, in theory, you can discern the guts of C++ by reading the ISO document (or, more practically, the freely available PDF drafts; essentially nobody reads the actual document, no, not even Microsoft bothers to spend $$$ to buy an essentially identical PDF).

My guess would be that it's at least helpful, but nowhere close to enough.

And that's ignoring the fact that the popular implementations do not implement any particular ISO standard. In each case their target is just C++ in some more general sense; they might offer "version" switches, but they explicitly do not promise to implement the actual versions of the ISO C++ programming language standard denoted by those switches.


Thanks for trying to explain this to us.

You are insisting here on talking about the "handle" part, though isn't the crucial part of the complete chain whether we use did:web or did:plc?

So, as you outlined yourself in the article: if you a) use did:web and b) ever lose access to that domain, you are cooked. No amount of handle changes can help here. If one can lose a handle domain, one can lose a did:web domain too, so that just moves the problem to a more opaque place.

So your identity is always either a) attached to a domain you might lose, or b) tied to some PLC provider that might stop working for you.

Please correct me if I get anything wrong here, as that is just how I understand it.


The PLC is being spun out into a separate entity independent of Bluesky. The intention is for it to become something analogous to ICANN. So hopefully, with time, even Bluesky shutting down wouldn’t affect it.

It’s also very simple software and is not difficult to run. People are already running PLC mirrors (e.g. https://plc.wtf/). So if push comes to shove it should be possible to figure out the next step. Although this does require trust and coordination across the ecosystem which is tricky.

See https://updates.microcosm.blue/3lz7nwvh4zc2u for some thoughts on that.
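For a sense of how simple resolution is, here's a hedged sketch that fetches a DID document from a PLC directory. It assumes the directory serves GET /<did> (as plc.directory does) and uses the ureq crate's 2.x API:

  // Hedged sketch, not official atproto tooling: resolve a did:plc
  // identity by fetching its DID document from a PLC directory.
  fn main() -> Result<(), Box<dyn std::error::Error>> {
      let did = "did:plc:..."; // fill in a real did:plc identifier
      let doc = ureq::get(&format!("https://plc.directory/{did}"))
          .call()?
          .into_string()?;
      // The JSON document carries the rotation keys, handle, and PDS endpoint.
      println!("{doc}");
      Ok(())
  }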


What would happen in practice in a two-user scenario where user A replied to user B, and later user B's repository gets completely deleted?

We have this cache thing via wss connections. Do they invalidate these messages from user B? Is user A's worldview now completely dead?

Owning a thing on the internet is a complicated topic, I guess.

Preserving past information by copying what a user said so that it does not get lost may also be in the interest of some users (equivalent to the Web Archive). I understand that this contradicts the whole "owning your data" premise, but fundamentally, since it was open in the first place, the thing can always be copied, right?

Whatever content is produced in this "open social" network, some of it may have long-lasting "value" to an individual. Is there anything to make sure that what they interacted with cannot be completely broken by the other party?


If the user chooses to delete their account, it is a separate event on the network, which well-behaved apps should respect (and update their caches accordingly). So an app like Bluesky would display this as a reply to a deleted post.

If the user's repo just goes down (e.g. the host is down), then indeed it won't be available upstream and only cached versions will remain. It might be that the user is having problems, and the repository will be up on a different host later. It's up to each application how to handle this, but it seems reasonable to keep serving cached content since there was no explicit deletion instruction. E.g. I presume Bluesky would keep showing both replies in the conversation.

> I understand that this contradicts the whole "owning your data" premise, but fundamentally, since it was open in the first place, the thing can always be copied, right?

Yeah this is a tricky thing. The general guideline is that the user expresses intent (e.g. can delete post or entire repo) and well-behaved apps respect that intent. But of course there can be non-well-behaved apps that don't, or that permanently archive everything ever emitted.


Hmm, I just tested the claim that the following Rust code would be rejected (Example 4 in the paper).

And it seems not to be the case on the stable compiler version?

  fn write(x: &mut i32) { *x = 10 }
  
  fn main() {
      let x = &mut 0;
      let y = x as *mut i32;
      //write(x); // this would use the mentioned implicit two-phase borrow
      *x = 10; // this should not, and should therefore be rejected by the compiler
      unsafe { *y = 15 };
  }


Stacked Borrows is Miri's runtime model. Run it under Miri (`cargo +nightly miri run`) and you will see the error reported for the `*x = 10;` version but not the `write(x);` version: "Undefined Behavior: attempting a write access using [...] but that tag does not exist in the borrow stack for this location".

rustc itself has no reason to reject either version, because `y` is a `*mut` and thus has no borrow/lifetime relation to the `&mut` that `x` is, from a compile-time/type-system perspective.


Ah, that makes sense. Thanks for clarifying.


The paper mentions that the authors implemented tree borrows in Miri. Is this change likely to be adopted by Miri as the default model going forward?


The paper is describing the behavior under the proposed Tree Borrows model, not the current borrow checker implementation which uses a more limited analysis that doesn't detect this particular conflict between the raw pointer and mutable reference.


You are probably right that this is not the case right now. 25 years ago you could have said the same about Google employees. Incentives change with time, and once infrastructure is in place, it's nearly impossible to get rid of it again.

So one had better make sure that it has no potential to further introduce gatekeepers, where such gatekeepers will later realize that, in order to continue to live, they need to make a profit above everything else, and then everything is out the window.


You can find a general overview of the language at hand in "The Rust Reference"[1]. For a more formal document, you can have a look into the Ferrocene Language Specification's list of undefined behaviour[2]. From there you can jump to the different sections and see the legality rules and undefined behaviour notes for each.

The Ferrocene language spec was recently donated to the Rust Foundation.

[1] https://doc.rust-lang.org/reference/behavior-considered-unde... [2] https://spec.ferrocene.dev/undefined-behavior.html
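To make that concrete, one class listed in both documents is producing an invalid value. A minimal sketch of my own (not an example taken from either document), which Miri also flags:

  fn main() {
      // A bool must be 0 or 1, so transmuting 2u8 into bool is undefined
      // behavior ("producing an invalid value").
      let b: bool = unsafe { std::mem::transmute::<u8, bool>(2) };
      println!("{b}"); // under Miri this is reported as UB before anything prints
  }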

