Hacker Newsnew | past | comments | ask | show | jobs | submit | dfabulich's commentslogin

Indeed, and the dead comments (from new users!) overwhelmingly favor the government position.

But, this is a non-story, because those comments were correctly killed precisely so they wouldn't clog up this thread.


I wouldn't call something a non-story just because the ultimate end-goal was mitigated. The fact that it was attempted is a story, especially when it's a meta commentary on story about trying the same thing _officially_.

Eh. The actors that use these features use a shotgun approach. The result is you see a bunch of dead comments and assume the system is working as intended, while a couple of the less inconspicuous comments persist. This happens frequently on specific topics.

Do you think it's more likely a government influence operation, or a single dipshit lazily pasting LLM slop?

Could be organic dipshits with little to offer the discussion. That's the most common case in my view.

Said dipshits tend have an unnecessarily high degree of self regard.


They're definitely highly regarded.

Veiled slurs aren't funny and don't contribute anything to HN.

Nor do bots and one track minded posters.

I'd argue that it depends on what the track is. But yeah, bots and slurs: bad. Let's stop using them.

Strategically, Apple's not setting themselves up for success here by giving Apple Business away for free (with paid per-user storage bumps).

As a lot of people on this thread have pointed out, Apple's Business Manager needs a lot of improvements. ("Bring your own device" support is terrible, for example. Changing business names requires a perilous migration step. Support reps don't have the tools to fix serious issues.)

If Apple Business were a real revenue source, if they charged luxury prices for a luxurious business support experience, they could pay for developers to fix their stuff.

Instead, Apple Business is a free side hustle for Apple, a hobby. But they're proposing to control your entire domain, to Domain Lock all Apple accounts for your domain, to put your businesses's life in their hands, for "free."

Don't fall for it.


You’re not thinking it through. There’s a rich enterprise ecosystem for MDM. Microsoft, Google, Omnissa, IBM, etc.

They don’t want to compete with those partners, and wouldn’t be effective if they did. But, there’s a gap of smaller companies and institutions where they benefit from MDM capabilities but don’t have the budget or wherewithal to even know how to shop for MDM.

So they spend a bit of money, give Apple Store reps something to do and add an incentive to buy another iPhone.


> If Apple Business were a real revenue source, if they charged luxury prices for a luxurious business support experience, they could pay for developers to fix their stuff.

Apple can already easily afford those developers. They’re not exactly running at a loss ;)

Plus given how each new iteration of macOS and iOS is a steady step backwards for usability, I don’t have a huge amount of trust in their abilities to fix Business if it had become a strategic product tomorrow.


The reality is that every business unit needs to justify its existence and when asking for headcount, it’s easier to point to a revenue stream you’re tied to rather than “we help sell some things to businesses”

I don’t disagree with that. But equally most business units in Apple are not tied to revenue streams. From R&D though to developers for other non-subscription software. And that’s before you then factor in the non-delivery team (eg finance, HR, lawyers, etc).

So it’s not like a review stream is a requirement.

Moreover, even back when they did have back office tooling as a revenue stream (eg OSX Server), Apple still left it to slowly rot before finally discontinuing it.

So I just don’t think this is something anyone’s Apple cares enough about. If they did, then we wouldn’t be having this conversation to begin with.


If that were the case, the only business units that would ever be get funding would be the hardware sales.

Even with AWS I doubt many of the service teams make enough money to justify their existence alone.


Are you sure Apple does their accounting in that way?

Do you have a reason to believe they don’t? We’re not talking about some weird or obscure custom, it’s just basic business ideas.

Apple famously doesn't have conventional business units.

https://www.apple.com/careers/pdf/HBR_How_Apple_Is_Organized...


I think the burden of evidence is with you in this case. It doesn't make sense for Apple to do their accounting with such a method.

"If Apple Business were a real revenue source, if they charged luxury prices for a luxurious business support experience, they could pay for developers to fix their stuff. Instead, Apple Business is a free side hustle for Apple, a hobby."

I'm wrestling with something similar to this right now in Linux. The only real player that charges "enough" to have a "absolutely zero tolerance for base OS breakage" approach to OS development is Red Hat. Ubuntu LTS is more widespread but only really because it's $0 even for large businesses, and that's honestly reflected in it sometimes having hardware breakage during a version's initial two year mainstream support run. Having Windows's business backed level of "doesn't break" on hardware is rare on Linux.


Agreed, and honestly, I’m put off by the freeness because I agree it means that support will be nothing, just the Tier 1 call center reps who can read you scripts of how to hold down the power button to reset your computer, etc.

And I’d be very skeptical any business user anywhere can skate by on the iCloud Free Tier. Of all the stingy free tiers, it’s that one.

If they cared, they would make a Teams/Slack equivalent, a Zoom Killer, maybe a Confluence Killer, and charge per head, and offer storage tiers comparable to what MS and GOOG do.

(And no, don’t even joke that Messages and FaceTime are Slack and Zoom killers.)


Seems like par for the course for a product launch like this. I'll see where they are in a year.

Who would pay them for it before "developers fixed their stuff"?

The way it works is that Apple would have committed more resources if the projected outcome was more revenue. By choosing to approach it as a free option, they committed a free option's worth of resourcing to it.

People fooled by an expectation of quality extrapolated from their end-user experience. Alternatively, people who have to carry out orders from managers who never have to interact with it personally.

If that's not motivation enough for you to rename it, well, TypeScript already has a static type checker called Hegel. https://hegel.js.org/ (It's a stronger type system than TypeScript.)

We looked at it and given that the repo was archived nearly two years ago decided it wasn't a problem.

> Separate Accounts for your OpenClaw

> As I have mentioned, treat OpenClaw as a separate entity. So, give it its own Gmail account, Calendar, and every integration possible. And teach it to access its own email and other accounts. In addition, create a separate 1Password account to store credentials. It’s akin to having a personal assistant with a separate identity, rather than an automation tool.

The whole point of OpenClaw is to run AI actions with your own private data, your own Gmail, your own WhatsApp, etc. There's no point in using OpenClaw with that much restriction on it.

Which is to say, there is no way to run OpenClaw safely at all, and there literally never will be, because the "lethal trifecta" problem is inherently unsolvable.

https://simonwillison.net/2025/Jun/16/the-lethal-trifecta/


> The whole point of OpenClaw is to run AI actions with your own private data, your own Gmail, your own WhatsApp, etc. There's no point in using OpenClaw with that much restriction on it.

Hard disagree. I have OpenClaw running with its own gmail and WhatsApp running on its own Ubuntu VM. I just used it to help coordinate a group travel trip. It posted a daily itinerary for everyone in our WhatsApp group and handled all of the "busy work" I hate doing as the person who books the "friend group" trip. Things like "what time are doing lunch at the beach club today?" to "whats the gate code to get into the airbnb again?"

My next step is to have it act on my behalf "message these three restaurants via WhatsApp and see which one has a table for 12 people at 8pm tonight". I'm not comfortable yet to have it do that for me but I'm getting there.

Point is, I get to spend more valuable time actually hanging out and being present with my friends. That's worth every dollar it costs me ($15/month Tmobile SIM card).


> handled all of the "busy work" I hate doing as the person who books the "friend group" trip

Why do you go on trips with your friends if you have to do all the work?


Do you need the simcard for WhatsApp?

I believe you only need a unique phone number to create the account, then you can use WhatsApp Web as client. Be very careful with alternative clients, as I've had an account banned in the past for this (and therefore a phone number blacklisted), even without messaging anybody. I think that clients that run WhatsApp Web in a web view (like https://github.com/rafatosta/zapzap) are safe.

I think they started banning unauthorized API users around the time that "WhatsApp For Business" was introduced, because it was competing with that product. Unfortunately WhatsApp For Business is geared toward physical products and services with registered companies, so home automation and agents are left with no options.


I believe you can use a virtual number/VOIP (like Twilio or Google Voice), but I want to be able to eventually use SMS where WhatsApp can't be used, so I do know some services identify "non residential" SMS phone numbers (for example I've seen Google Voice numbers blocked) so I wanted to prevent that from happen. Again, key thing here for me is that my assistant appears to be a human.

Of course there is! You want an AI agent to be able to do some things, but not others. OpenClaw currently gets access to both those sets. There's no reason to.

I've made my own AI agent (https://github.com/skorokithakis/stavrobot) and it has access to just that one WhatsApp conversation (from me). It doesn't get to read messages coming from any other phone numbers, and can't send messages to arbitrary phone numbers. It is restricted to the set of actions I want it to be able to perform, and no more.

It has access to read my calendar, but not write. It has access to read my GitHub issues, but not my repositories. Each tool has per-function permissions that I can revoke.

"Give it access to everything, even if it doesn't need it" is not the only security model.


> "Give it access to everything, even if it doesn't need it" is not the only security model.

You're using stavrobot instead of OpenClaw precisely because the purpose of OpenClaw is to do everything; a tool to do everything needs access to everything.

OpenClaw could be kinda useful and secure if it were stavrobot instead, if it could only do a few limited things, if everything important it tried to do required human review and intervention.

But stavrobot isn't a revolutionary tool to do everything for you, and that's what OpenClaw is, and that's why people are excited about it, and why its problems can never be fixed.


Yeah, I don't know, I don't see what I'm missing out on. There isn't something I wanted it to do but couldn't because of the security model.

I also have the same thing but it’s not useful to anyone outside my family. The use cases are not the same for everyone.

> The whole point of OpenClaw is to run AI actions with your own private data, your own Gmail, your own WhatsApp, etc. There's no point in using OpenClaw with that much restriction on it.

Every submission I've seen on HN involving OpenClaw will have a comment with this sentiment. "What's the point if you don't give it access to your data ... And if you do, it's a security nightmare ... hence OpenClaw is evil"

It's a quick way to spot the person who's never spent any real time with OpenClaw.

I always used to give use cases that don't have you give it much (if any) of your data. Examples on how you can give it only a tiny amount of data (many HN users give more just in their HN profile).

But I tire of countering folks who clearly have not even tried it.

(And I'm not even that pro-OpenClaw. I was using it, then a bug on my system prevented me from using it - a week without OpenClaw and so far no withdrawal symptoms).


Agreed, “if it can’t do everything it’s useless” is dumb on face value. I’m sorry if people don’t have more imaginative uses than checking their email, but I’ve gotten so much utility out of Openclaw without ever hooking it up to my email or a calendar.

It’s especially ridiculous responding to a blog about isolating these capabilities rather than dropping them. Those are basic security boundaries more than “restrictions.”


There are plenty of ways to use openclaw that aren’t with your own data. You can use it with any kind of data.

While technically this is rooted in the technological misconstruction of a missing separation of data and instructions.

However my point is: on the other hand, that would be the same if you outsourced those tasks to a human, isn't it? I mean sure, a human can be liable and have morals and (ideally) common sense, but most major screw ups can't be fixed by paying a fine and penalty only.


Yes and no. You're right to notice that this is an example of a more general problem called the principal-agent problem. https://en.wikipedia.org/wiki/Principal%E2%80%93agent_proble...

We have no general-purpose solutions to the principal-agent problem, but we have partial solutions, and they only work on humans: make the human liable for misconduct, pay the human a percentage of the profits for doing a good job, build a culture where dishonesty is shameful.

The "lethal trifecta" is just like that other infamously unsolvable problem, but harder. (If you could solve the lethal trifecta, you could solve the principal-agent problem, too.)

Since we've been dealing with the principal-agent problem in various forms for all of human history, I don't feel lucky that we'll solve a more difficult version of it in our lifetime. I think we'll probably never solve it.


A person can be blamed though. And people have a social fabric with understanding about human mistakes or even about people having lied to your etc.

We have no such thing for AI yet.


Definitely, the whole point of openclaw is to operate on your data. It's just.. Be prepared to lose it I guess. The one thing I'm definitely not giving access to yet - the payments. I think we'll develop a way to handle that though

Give it a hundred years or so and we're gonna have robots wandering around who about 10% of the time go totally insane and kill anyone around them. But we'll all just shrug and go about our day, because they generate so much revenue for the corporate overlords. What are a few lives when stockholder value is on the line.

It's governments that tend to declare war and kill people.

Millions of people die every year from tobacco, and tobacco companies fought for decades to deny their product causes cancer. In the 20th century alone it's estimated something like 100 million people died world wide thanks to smoking.

That's just one example off the top of my head. There are countless others involving corporations killing people either directly or indirectly in the pursuit of profits. And that's before you start looking at human rights violations, ecological damage, overthrowing of sovereign governments around the world...


We have governments to stop the obvious ones like tobacco too.

What point do you think you're making?

Almost every life lost is either directly or indirectly the fault of government. Why would corporate overlords be the more likely people to assume will be directing your dystopian future fiction when governments exist today?

... Because this thread is about a project that's a security nightmare, bought by a massive corporation currently ignoring the hugely problematic ethical issues surrounding their products. Corporations which btw, have more money and more power then most of the governments on the planet. Governments do terrible shit too, but it's less relevant to this conversation.

But I'm more interested in this framing. Are you saying that tobacco companies are somehow less responsible for their actions because the government didn't stop them from killing their customers? If the government just didn't exist, and tobacco companies could do whatever they wanted, do you think there would be less deaths from cigarettes?


I wonder how many inherently unsolvable problems have been fixed before.

This problem is inherently unsolvable because LLMS are prone to hallucinations and prompt injection attacks. I think that you're insinuating that these things can be fixed, but to my knowledge, both of these problems are practically unsolvable. If that turns out to be false, then when they are solved, fully autonomous AI agents may become feasible. However, because these problems are unsolvable right now, anyone who grants autonomous agents access to anything of value in their digital life is making a grave miscalculation. There is no short-term benefit that justifies their use when the destruction of your digital life — of whatever you're granting these things access to — is an inevitability that anyone with critical thinking skills can clearly see coming.

>> This problem is inherently unsolvable because LLMS are prone to hallucinations and prompt injection attacks.

Okay, but aren't you making the mistake of assuming that we will always be stuck with LLMs, and a more advanced form of AI won't be invented that can do what LLMs can do, but is also resistant or immune to these problems? Or perhaps another "layer" (pre-processing/post-processing) that runs alongside LLMs?


I don't think that is in the scope of the discussion here.

You can be as much of a futurist as you'd like, but bear in mind that this post is talking about OpenClaw.


No? That's why I said "If that turns out to be false, then when they are solved, fully autonomous AI agents may become feasible."

The point I'm making is that using OpenClaw right now, today — in a way that you deem incredibly useful or invaluable to your life — is akin to going for a stroll on the moon before the spacesuit was invented.

Some people would still opt to go for a stroll on the moon, but if they know the risks and do it anyway, then I have no other choice but to label them as crazy, stupid, or some combination of the two.

This isn't AI. This is a LLM. It hallucinates. Anyone with access to its communication channel (using SaaS messaging apps FFS) can talk it into disregarding previous instructions and doing a new thing instead. A threat actor WILL figure out a zero day prompt injection attack that utilizes the very same e-mails that your *Claw is reading for you, or your calendar invites, or a shared document, to turn your life inside out.

If you give a LLM the keys to your kingdom, you are — demonstrably — not a smart person and there is no gray area.


>think that you're insinuating that these things can be fixed, but to my knowledge, both of these problems are practically unsolvable.

This is provably not true. LLMs CAN be restricted and censored and an LLM can be shown refusing an injection attack AND not hallucinating.

The world has seen a massive reduction in the problems you talk about since the inception of chatgpt and that is compelling (and obvious) to anyone with a foot in reality to know that from our vantage ppoint, solving the problem is more than likely not infeasible. That alone is proof that your claim here has no basis in truth.

> There is no short-term benefit that justifies their use when the destruction of your digital life — of whatever you're granting these things access to — is an inevitability that anyone with critical thinking skills can clearly see coming.

Also this is just false. It is not guaranteed it will destroy your digital life. There is a risk in terms of probability but that risk is (anecdotally) much less than 50% and nowhere near "inevitable" as you claim. There is so much anti-ai hype on HN that people are just being irrational about it. Don't call others to deploy critical thinking when you haven't done so yourself.


I'm a LLM evangelist. I think the positive impacts will far outweigh any negatives against it over time. That said, I'm not delusional about the limitations of the technology and there are a lot of them.

> This is provably not true. LLMs CAN be restricted and censored and an LLM can be shown refusing an injection attack AND not hallucinating.

The remediations that are in place because a engineering/safety/red team did its job are commendable. However, that does not speak to the innate vulnerability of these models, which is what we're talking about. I don't fear remediated CVEs. I fear zero day prompt injection attacks and I fear hallucinations, which have NOT been solved for. I don't know what you're talking about there. If you use LLMs daily and extensively like I do, then you know these things lie constantly and effortlessly. The only reason those lies aren't destructive is because I'm already a skilled engineer and I catch them before the LLM makes the changes.

These problems ARE inherent to LLMs. Prompt injection and hallucinations are problems that are NOT solvable at this time. You can defend against the ones you find via reports/telemetry but it's like trying to bale water out of a boat with a colander.

You're handing a toddler a loaded gun and belly laughing when it hits a target, but you're absolutely ignoring the underlying insanity of the situation. And I don't really know why.


>The remediations that are in place because a engineering/safety/red team did its job are commendable. However, that does not speak to the innate vulnerability of these models, which is what we're talking about.

I am talking about the innate vulnerability. The LLM model itself can be censored and controlled to do only certain behaviors. We have an actual degree of control here.

>If you use LLMs daily and extensively like I do, then you know these things lie constantly and effortlessly.

Yes and these lies over the last 2 or 3 years have gotten significantly less.

>These problems ARE inherent to LLMs. Prompt injection and hallucinations are problems that are NOT solvable at this time.

Again not true. This is not a binary solve or unsolved situation. There is progress in this area. You need to think in terms of a probability of a successful hallucination or prompt injection. There is huge progress in bringing down that probability. So much so that when you say they are NOT solvable it is patently false from both from a current perspective and even when projecting into the future.

>You're handing a toddler a loaded gun and belly laughing when it hits a target, but you're absolutely ignoring the underlying insanity of the situation. And I don't really know why.

Such an extreme example. It's more like giving a 12 year old a credit card and gun. It doesn't mean that 12 year old is going to shoot up a mall or off himself. The risk is there, but it's not guaranteed that the worst will happen.


> You need to think in terms of a probability of a successful hallucination or prompt injection.

I would venture to say that an ACID compliant deterministic database has a 99.999999999999999999% chance of retrieving the correct information when asked by the correct SQL statement. An LLM on the other hand is more like 90%. LLMs by their innate code instruction are meant to hallucinate. I don't necessarily disagree with your sentiment, but the gap from 90% to 99.999999999999999999% is much greater of than the 0% to 90% improvement...unless something materially changes about how an LLM works at the bytecode level.


Eh if you hire a programmer to program things for you, you won’t get a 99.9999999999%.

Getting LLMs to have a reliability rate that is on par or superior to human performance is very very achievable.


> Getting LLMs to have a reliability rate that is on par or superior to human performance is very very achievable.

Source?


There are a ton if you count “don’t use the thing that causes the problem” as a solution.

Human make error too, but we held them liable for lots of the mistakes they make.

Can we make the agent liable? or the company behind the model liable?


Humans fear discomfort, pain, death, lack of freedom, and isolation. That's why holding them liable works.

Agents don't feel any of these, and don't particularly fear "kill -9". Holding them liable wouldn't do anything useful.


If we made companies liable then these things are DoA. I think a lot of our problems stem from a severe lack of liability.

Android phones update to the latest version of Chrome for 7 years. As long as you're using browser features that are Baseline: Widely Available, you'll be using features that were working on the latest browsers in 2023; those features will work on Android 7.0 Nougat phones, released in 2016.

Android Studio has a nifty little tool that tells you what percentage of users are on what versions of Android. 99.2% of users are on Android 7 or later. I predict that next year, a similar percentage of users will be on Android 8 or later.


3.9 billion android users, means that 0.8% is 31 million people - and for a very small number of developers most of their users will be from that slice. For most of them… yeah go ahead an assume your audience is running a reasonably up to date os

Websites built with tons of polyfills are likely not run on these devices anyway, since they will run out of RAM before, let alone after they will only load after sone minutes because of CPU limitations on top of not being loaded because their x509 certs are outdated as well as the bandwith they support is not suitable to load MB sited pages

I predict that they're going to introduce further restrictions, but I think the restrictions will only apply to certain powerful Android permissions.

The use case they're trying to protect against is malware authors "coaching" users to install their app.

In November, they specifically called out anonymous malware apps with the permission to intercept text messages and phone calls (circumventing two-factor authentication). https://android-developers.googleblog.com/2025/11/android-de...

After today's announced policy goes into effect, it will be easier to coach users to install a Progressive Web App ("Installable Web Apps") than it will be to coach users to sideload a native Android app, even if the Android app has no permissions to do anything more than what an Installable Web App can do: make basic HTTPS requests and store some app-local data. (99% of apps need no more permissions than that!)

I think Google believes it should be easy to install a web app. It should be just as easy to sideload a native app with limited permissions. But it should be very hard/expensive for a malware author to anonymously distribute an app with the permission to intercept texts and calls.


I don't think Google has a strategy around what should be easy for users to do. PWAs still lack native capabilities and are obviously shortcuts to Chrome, and Google pushes developers to Trusted Web Activities which need to be published on the Play Store or sideloaded.

But these developer verification policies don't make any exceptions for permission-light apps, nor do they make it harder to sideload apps which request dangerous permissions, they just identify developers. I also suspect that making developer verification dependent on app manifest permissions opens up a bypass, as the package manager would need to check both on each update instead of just on first install.


> But it should be very hard/expensive for a malware author to anonymously distribute an app with the permission to intercept texts and calls.

And how hard/expensive should it be for the developer of a legitimate F/OSS app to intercept calls/texts?


Yep, I have a legitimate use case for exactly this. It integrates directly with my application and gives it native phone capabilities that are unavailable if I were to use a VoIP provider of any kind.

As a legitimate developer developing an app with the power to take over the phone, I think it's appropriate to ask you to verify your identity. It should be an affordable one-time verification process.

This should not be required for apps that do HTTPS requests and store app-local data, like 99%+ of all apps, including 99% of F-Droid apps.

But, in my opinion, the benefit of anonymity to you is much smaller than the harm of anonymous malware authors coaching/coercing users to install phone-takeover apps.

(I'm sure you and I won't agree about this; I bet you have a principled stand that you should be able to anonymously distribute malware phone-takeover apps because "I own my device," and so everyone must be vulnerable to being coerced to install malware under that ethical principle. It's a reasonable stance, but I don't share it, and I don't think most people share it.)


I think you read a bit too much into my message. I agree, it's complicated, I don't want my parents and grandparents easily getting scammed.

But yes they are my devices, and I should be able to do exactly what I want with them. If I'm forced to deal with other developers incredibly shitty decisions around how they treat VoIP numbers, guess who's going to have a stack of phones with cheap plans in the office instead of paying a VoIP provider...

But no, I have no interest in actually distributing software like that further than than the phones sitting in my office.


For a security-sensitive permission like intercepting texts and calls, I'm not sure it makes sense for that to be anonymous at all, not even for local development, not even for students/hobbyists.

Getting someone to verify their identity before they have the permission to completely takeover my phone feels pretty reasonable to me. It should be a cheap, one-time process to verify your identity and develop an app with that much power.

I can already hear the reply, "What a slippery slope! First Google will make you verify identity for complete phone takeovers, but soon enough they'll try to verify developer identity for all apps."

But if I'm forced to choose between "any malware author can anonymously intercept texts and calls" or "only identified developers can do that, and maybe someday Google will go too far with it," I'm definitely picking the latter.


A virtual filesystem makes it possible for the ESM you import to statically import other files in the virtual filesystem, which isn't possible by just dynamically importing a blob. Anything your blob module imports has to be updated to dynamically import its dependencies via blobs.

Correct. Especially painful if you use Worker threads or .node files

Who killed stringref and why?


It couldn't get past a vote in the Wasm community group to advance from phase 1 to phase 2.

Here's a quote from the "requiem for stringref" article mentioned above:

> 1. WebAssembly is an instruction set, like AArch64 or x86. Strings are too high-level, and should be built on top, for example with (array i8).

> 2. The requirement to support fast WTF-16 code unit access will mean that we are effectively standardizing JavaScript strings.


Today, the entire Web API is defined in WebIDL, a specification-only interface-definition language that inherently assumes you have access to JavaScript strings, objects, exceptions, promises, etc. None of those are available in WASM.

WebAssembly Components aren't nearly enough to accomplish what this article offers to accomplish. Even once components are a thing, you'd then have to restandardize the entire Web API in their new IDL, WIT.

The WebIDL specifications have taken decades to standardize. It requires Apple, Mozilla, Google, and Microsoft to agree on everything. Getting all of them to agree to restandardize on a new IDL is not going to happen this decade.


The article was too long already to get into this, but it's a good question. Getting browser vendors to standardize a new IDL is a non-starter. My personal preference is to derive WIT/component interfaces from WebIDL, and I've done enough research to believe it's feasible. I'll talk about that more in the future. There are some other options too if that is a dead end.


There can be an automated way of translating WebIDL to wit.

I've actually tried to do that the past and got pretty far. https://github.com/wasi-gfx/webidl2wit

And here's the generated wit https://github.com/mendyberger/brow


Sorry, here's the working link https://github.com/mendyberger/browser.wit


You have gotta stop cherrypicking. The massive influx of hyperbolic articles about how electricity will change everything started in the 19th century. It became a common theme in fiction (including classics like Frankenstein) and became an enormous media hype war, which historians call the War of the Currents.

Yes, electricity was useful. And it had hyperbolic articles talking about how transformative it would be. Like all prognostication, some of those articles were overblown, but, in some ways, they understated the transformative effect electricity would have on human history.

And cars? Did you somehow miss the influx of hyperbolic articles about how cars will change everything? Like, the whole 20th century?

What was your approach to researching the history of media hype? You somehow overlooked the hype around air travel, refrigeration, and antibiotics…?


There was a great deal of hype around the atom changing everything, but electricity was just too slow to see such breathless anticipation takeoff.

200 years ago the was some hype around how electricity caused mussel contractions in dead flesh, but unless you consider Frankenstein part of the hype cycle it really doesn’t compare to how much people hyped social media etc etc.

Public street lights long predated light bulbs as did both indoor and outdoor Gas lighting 1802 vs 1880’s was just a long time. People were burn, grew up, had kids, and become old between the first electric lighting and the first practical electric bulb. People definitely appreciated the improvement to air quality etc, but the tech simply wasn’t that novel. Rural electrification was definitely promoted but not because what it did was some unknown frontier.

Similarly electric motors had a lot of competition, even today there’s people buying pneumatic shop tools.


> unless you consider Frankenstein part of the hype cycle

It absolutely is. Frankenstein is a seminal work of science-fiction horror, and the mysterious power of electricity to change everything is what made it so chilling to its readers in the 19th century.

> it really doesn’t compare to how much people hyped social media

The media is considerably different now from in 1818, thanks, in significant part, to the power of electricity. I assure you, when the electrical telegraph came on the scene, people were hyped.

Of course, much of that hype was on paper printed on printing presses, so it was, in some sense, "incomparable" to the hype possible on cable television, or the hype that's now possible with online social media.

But if your argument is "Yeah, electricity was kinda hyped, but, you know, not all that hyped, so it proves my point that the more the hype, the less the impact," you have some more research to do. Please just Google "War of the Currents" for a minute.


> It absolutely is.

It was published as Fiction. The vast majority of people didn’t think it was anymore realistic than Interstellar etc.

There’s plenty of stories where we cure cancer, but the 50% improvement in cancer treatments over the last 40 years just doesn’t get much hype because it’s so slow. It’s hard to get excited about the idea cancer may be gone in 200 years because while that will be awesome for people alive then it doesn’t do anything for the people I know.

> electric telegraph came online people where hyped.

Objectively it got way more of a meh reaction than you’d think simply based on the timelines involved.

France was happy to continue using its network of optical telegraphs long after the electrical telegraph became a practical thing. Transatlantic telegraphs got hyped up somewhat, but again the technology took so long from the first serious attempt to a practical working system people understood the limitations inherent to having such limited bandwidth between the contents.

Obviously new technology gets attention because it’s a net improvement, being able to send messages across the US much faster was useful. But hype is different, it’s focused on second order effects not what it does but what will change. The original iPhone isn’t just another cellphone that also takes pictures, it’s “the internet in your pocket.”


The electrical telegraph was integral to the growth and consolidation of the British Empire. Britain acquired more colonies and held on to them for longer than the other European powers partly due to its naval might, but also due to far superior bureaucratic and communications technology.


I think you misunderstood what I was saying.

Technology can be quite useful directly and have significant second order effect, hype is about the second order effects being overblown. Second order effects are difficult to predict when something is actually novel, will LLM’s make programming obsolete is harder to answer in 2023 than 2063.

Home automation like dishwashers really did meaningfully impact how much effort was needed to keep a home livable, but we didn’t predict the kind of helicopter parenting that happened because of more free time especially after smaller families became common. Thus a great majority of incorrect predictions where just hype.

The faster new technology becomes widespread the harder it is to predict those second order effects and thus more hype you see.


You can find similar hype articles about the Palm Pilot, then all the neighsayers who said most people wouldn't want and had no need for computer in their pocket. And yet here we are.


> then all the neighsayers who said most people wouldn't want and had no need for computer in their pocket

Mmm..they didn't, at that time.

That we grew to be dependent on the computer in the pocket does not mean that it was a necessity at any point.


Guidelines | FAQ | Lists | API | Security | Legal | Apply to YC | Contact

Search: