Could we please just fix native software development, so it doesn't suck, rather than adding the kitchen sink to web browsers?
Web pages and JavaScript are the only universally trusted medium for sharing information and simple software. Making it possible for news websites to touch ethernet/usb/serial/gpu compromises that. Same goes for optimizations like JIT'ing and WebAssembly which have certainly done a great job making it possible for Gawker to reprogram the microcode in AMD K8 CPUs. Has the long tail of browser innovations made the web go faster? No. It went faster in 2010 IMHO. Whatever performance glut the geniuses who work on Chrome end up creating, it'll just get gobbled by more aggressive display ads and things like wild-eyed web component frameworks.
I think WASM will eventually produce the solution (hear me out before you tune out). I've been thinking about this for a while. So we have WASI[0], which doesn't yet address the UI issue, but there's half a dozen at least projects attempting to create runtimes for WASM. If most languages wind up compiling for WASM / WASI I think eventually a good effort could be done towards a WASI-UI or WASUI.
Imagine if major OS platforms had a common UI framework that any language could produce a UI for. Every major browser already supports WASM, so why not a WASI-based common runtime that would benefit every OS that supports it? I haven't seen anyone else discussing this; I've just been contemplating it for a while, and I'm not sure what it would look like short of recreating a cross-platform X window system.
However, as crazy as I sound my reason for this is simple: the issue with a lot of UI frameworks is that they tend to be language specific and are mostly useful to the languages they officially or directly support. If you can build a UI compatible stack that any language can target, you can gain a lot more adoption. I could easily see Rust, D, Go, etc supporting a common UI runtime.
Electron sells itself rather easily because most devs are familiar with HTML / CSS / JS. What we need is a cross-platform, cross-language solution that isn't XML based but something a language can implement rather simply, or maybe even something a compiler / standard library could manage.
> Imagine if major OS platforms had a common UI framework that any language can produce a UI for
This has been attempted a number of times, both in the browser and without, and it always comes down to people complaining that the widget set isn't "native" in look and feel and operation, and the pendulum swings back the other way.
People used to complain that browser widgets didn't conform to the OS's UI guidelines. Then you could style them with CSS and things really started looking crappy; now you have web UI frameworks that forgo all that customization (or at least discourage it in the name of ease of development).
Umm - a large group of the most popular productivity apps are cross-platform and look nothing like native - from dev tools like IDEA and VSCode, to Slack and MS Office, to multimedia tools like the Adobe suite, almost all 3D modeling software like Maya and Max, and audio software like Ableton and FL. None of the apps I use on a day-to-day basis look native (i.e. like the apps provided by Apple).
You are confusing cross-platform UI with frameworks that have programmer-art widgets and no cross-platform polish - e.g. GTK+, FLTK, and even to some extent Qt widgets (and many more). When people say widgets don't look native they usually mean "the widgets suck"; you can create beautiful cross-platform apps - Electron has quite a few because it allows standard designer tools from web dev.
Flutter is another promising development, but the desktop port seems underwhelming - it's obvious the widgets were intended for mobile apps, they will probably need a custom widget set to cover desktop UI - but the approach is sound.
The native apps you mention have heavy customization because they are meeting advanced professional needs. But these apps still participate in the platform's UI conventions. Menus, windows, etc. work like Mac or Windows users expect.
Electron apps are also heavily customized, but ignore platform UI conventions. The buttons may have lovely drop shadows, but basic interactions (menus, undo, drag and drop...) are routinely broken or work in some discordant, unfamiliar way. This isn't meeting professional needs; it's that web tech sucks for building actual apps.
And even if web tech improved, there's still the cultural bias towards re-invention and churn. Web dev will always cons up a button from a div and an onClick handler, no matter what the framework provides. There's no mechanism to get web apps onto a shared UI platform because the gravity of the web is dispersive.
If MS Office was truly cross platform in look, feel, and operation, then finance people who insist on using Excel wouldn't also insist on using Windows just to use Excel.
All those specialized tools can get away with having completely-unlike-anything-else UI/UX because people are paying for them and they are considered best (and only) of breed.
Electron apps are cross-platform only in the sense of looking alike on different platforms, because they use the same widget set. If Slack was a truly full-native app on OS X, it wouldn't look like it does.
In my experience, finance people who insist on using Excel only on Windows generally do so not because of any reliance on "native controls", but on longstanding dependencies upon VBA macros with hard-coded dependencies on specific legacy COM components.
I don't know enough about Excel but I highly doubt it's about the looks - I have Office installed on my MBP and the ribbon looks like I remember it from Windows.
I don't understand your point - non-native widgets are not a deal-breaker and there are plenty of examples, Electron being the most common non-native cross-platform framework recently (it has technical limitations, but even the subpar performance doesn't make it a deal-breaker). So it's possible to create cross-platform UI frameworks, and most of the most successful apps I can think of are using some version.
I remember when people used to tell me VB6 was slow... but I bet you a lot of VB6 apps are miles faster than Electron apps and use significantly fewer resources and less storage, despite being single-threaded by comparison.
WebAssembly is basically the latest in the line of Java Applets and Macromedia Flash. Maybe it'd be useful if someone built a tiny wasm emulator for x86 linux stdio binaries and posted it on a google cdn. I haven't seen anyone do that yet.
I'm pretty happy with web pages as a GUI framework. If I needed a desktop GUI, I certainly would not want my binaries to be 200megs with Electron. If we're OK trading away drop-down boxes, wizards, and installers, then it's actually possible to build 10kb static native win32+linux+bsd+mac terminal programs using ape. https://justine.storage.googleapis.com/ape.html
You asked me how to avoid the overhead of Electron and I suggested you try αcτµαlly pδrταblε εxεcµταblε since the distributables are 10,000x smaller. If it's not possible to meet your requirements using a command line c program, or a web application that runs in the browser, then could you help me understand why you need electron?
I'm not advocating for Electron... I think either you're misunderstanding me or we're misunderstanding each other (quite possible as well).
I'm advocating for some small cross-platform, cross-language API / runtime / framework (not sure exactly what it looks like). Since most languages are targeting WASM, I'd rather build on that than on solutions that are platform- or language-specific - and when I say platform-specific, lump the web (which means Electron too) in that bucket. If the WASM angle is what's making you think I'm advocating for Electron, then you missed my commentary about people making standalone WASM runtimes.
What most people want is to be able to use their own preferred language and do a UI without having to import a bunch of C or C++ libraries that will add additional overhead over their language.
Edit: Just saw the post you were referring to; it would have made more sense had you linked to it:
While this answers the one-binary problem (while producing, in many cases, a rather massive executable larger than Electron's), it still doesn't solve the problem I'm mentioning: we need a cross-platform, cross-language UI solution. If you want to do away with things like Electron, you need to consider all platforms and all languages.
>Could we please just fix native software development, so it doesn't suck, rather than adding the kitchen sink to web browsers?
Only in the sense that we can fix child hunger rather than spending money on, e.g., condo development.
That is, technically yes, but nobody with real means cares enough to do it, and those without means don't care enough or have enough power to pressure them...
> Could we please just fix native software development, so it doesn't suck, rather than adding the kitchen sink to web browsers?
People have been trying to fix native software development forever, from the JVM to electron. But the web remains the only open platform for distributing software that works on every device. Fixing native software development will eventually mean making the web native, so that your OS is just a browser.
> Making it possible for news websites to touch ethernet/usb/serial/gpu compromises that.
There's a simple and obvious solution for that, which is to make them ask for permission, just like with desktop notifications.
> Same goes for optimizations like JIT'ing and WebAssembly which have certainly done a great job making it possible for Gawker to reprogram the microcode in AMD K8 CPUs.
What kind of argument is this? Someone who doesn't need a technology uses it anyway, so ... it's worthless?
Do you want to 1) do difficult and frustrating plumbing work to fix what's already here and slightly broken or 2) design and make a new generation of things with your goals and your own tools? Which one sounds more like a career that results in dying young due to extreme boredom and head-desk trauma?
"Note that this capability is already available to Chrome Apps and Extensions and in no scenario will we be handing it out like candy to any website that asks nicely; [the API] will come with a higher barrier to use."
Involving Google as a gatekeeper, of course.
Google is trying to establish the level of control on the Web it has on Android.
On Android they already crawl your app with various devices when you upload it to Google play and let you know about any crashes or accessibility issues. Nothing to stop them capturing the content too if they wanted.
I think Chromebooks kind of pivoted and now cater to the school market and mass deployments rather than individuals. They must have realized they're not gonna blow proprietary drivers and native x86-based archs out of the water, so they offered the low-cost wholesale devices to education instead. It's probably a good strategy when you get them vertically integrated with G Suite for Education (which is free) and just lure all the students in US schools into getting used to Google products instead of all schools teaching with MS Office. Long way to go before they can actually replace MS Windows or macOS for games, photo editing, video editing, coding (although that's close) or such.
> Google just wants browsers to become more powerful cuz then more people will use Chromebooks.
This has little to do with Chromebooks. Google just wants to be the single purveyor of the web, period. They already have the most popular browser. Firefox and Safari fought them on quite a few fronts [1]. Then Edge became Chrome. Now Mozilla has basically laid off everyone. Safari only exists on macOS and iPhones (a large market, yes, but small in the grand scheme of things).
Chrome will only accelerate its blatant disregard of anyone and push more and more internally developed barely tested crap.
[1] https://mozilla.github.io/standards-positions/ - scroll down to "harmful". Of course, many of those considered harmful are already implemented in Chrome: WebUSB, enabled by default in Chrome 61; Signed HTTP Exchanges, enabled by default in Chrome 73; Media Feeds, enabled by default in Chrome 85. And so on.
It's the other way round. Google wants people to use Chromebooks so that no other company controls the distribution. This is still about the search engine. Google doesn't want people to search some kind of app store; they want people to search the web - with Google, of course.
But what is that prompt? An IP and a port? A local (m)DNS name? Is it stored? What happens if the network changes? Do most people even know the names/IPs of things on their network?
That sounds like basic security to me. Apps and extensions are already "verified as ok" by the user (whether fully informed or not) vs random websites. Not doing so sounds like a gross deficiency in access control.
For god's sake, if a website has to ask for permissions then the browser is the "gatekeeper", I guess, and if the browser is made by Google then in some sense Google is the "gatekeeper" - in the same sense in which Google is the "gatekeeper" for desktop notifications. Does this really justify the term "gatekeeping", or speculation about dark plans of world conquest on Google's part? Proposing an API for which the website will have to ask the user for permission?
The web platform has lots of tech now for fully offline web apps - the currently used term for this is a 'PWA' or Progressive Web Application. On Android and iOS in many cases there's a little option in your browser to install a website as a PWA, which puts it on your home screen and locally caches the assets. IIRC Twitter and Google Maps both support this for their website.
Of course, writing a PWA is still a huge pain in the ass, and it doesn't give you those guarantees you mention from native apps - auditability, etc. But maybe it could get there, who knows...
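For what it's worth, the "install a website" flow hangs off a small piece of metadata: a web app manifest the browser reads before offering the home-screen install. A minimal sketch (all names, paths, and icons here are made up):

```json
{
  "name": "Example PWA",
  "short_name": "Example",
  "start_url": "/",
  "display": "standalone",
  "background_color": "#ffffff",
  "icons": [
    { "src": "/icon-192.png", "sizes": "192x192", "type": "image/png" }
  ]
}
```

The offline caching part is handled separately by a service worker, which is where most of the pain-in-the-ass lives.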
As a gopher dweeb forced to write hackarounds like an external helper to get Firefox into Gopherspace again (either a proxy server: https://addons.mozilla.org/en-US/firefox/addon/overbitewx/ or an actual executable: https://addons.mozilla.org/en-US/firefox/addon/overbitenx/), I'm torn. I would love to see the browser able to speak raw TCP again like I could when XUL/XPCOM extensions were still a thing. But this should really be restricted to browser extensions rather than arbitrary websites, because we know someone is going to abuse it, and if we go full Windows Vista as proposed here, asking for every connection to be confirmed, it will eventually annoy people enough that security will be slackened (and then someone is going to abuse it).
The next Thunderbird will contain built-in OpenPGP support, because they are finally getting rid of XUL addon support and Enigmail would not work with it.
Until widespread code-signing exists on the web this seems incredibly reckless to expose. Even if you require https and put it behind a modal, history has shown that both of those measures are not impenetrable barriers to attackers and raw sockets open up a whole new set of attacks.
HTTPS is basically useless for authenticating dangerous code: There are thousands of https domains out there that you can easily put your own code onto, and many of them have EV/DV certificates with reputable names like GitHub attached to them so there won't be any obvious red flags other than the text of the domain name. A prompt with a textbox in it is a barrier but it's not hard to get users to paste random stuff into text fields (just look at the big message that appears when you open devtools in Discord warning people not to paste stuff in there) and I agree with the concern that eventually the text box will be made friendlier.
Many existing Dangerous APIs have worst-case attacks that just do things like talk to a USB device from a small list of authorized devices or talk to a MIDI keyboard. Raw Sockets are an entire new world of attacks: Hit a router or PC on the local network with a zero day and get persistent root on it, then use that to spread through the network and perform a ransomware attack, etc. The blink team needs to accept reality and catch up with where native apps have been for over a decade: Require code signing with revocable certificates and pair that with a mechanism to ensure the code was signed by the owner of the domain.
> Until widespread code-signing exists on the web this seems incredibly reckless to expose. [...] HTTPS is basically useless for authenticating dangerous code: There are thousands of https domains out there that you can easily put your own code onto, and many of them have EV/DV certificates with reputable names like GitHub attached to them so there won't be any obvious red flags other than the text of the domain name
How would code-signing be any better than HTTPS-certificates?
That sounds like just another authority you would have to pay fees to.
You can sign the code on your own computer (which might even be offline) before deploying on some other company's server.
So, it's harder for an attacker to get their code signed using your certificate. Hacking into the website would be useless; they'd have to get the code into the development environment.
Or to put it another way, an HTTPS server automatically signs any file that it serves. That's too easy.
"an HTTPS server automatically signs any file that it serves" is a perfect way to put it.
When it comes to getting access to code signing certificates, in many cases the state of the art is to have the signing certificate escrowed so aggressively that the only way to sign a binary is for it to go through a full build pipeline, because the central build servers are the only thing with access to the certificate. That significantly reduces the risk of someone managing to get your signing certificate and sign malware, which means you can focus on other vulnerabilities in your pipeline like your revision control server or your library dependencies.
My understanding is Signed Exchanges only solve "I want to allow another server to handle requests for my HTTPS server" and not "I want to guarantee the integrity of content coming from my HTTPS server". Did they expand the spec to somehow address the latter?
The closest thing to code-signing on the web is subresource integrity, a feature that adds an integrity hash to script and style tags.[0] There was a directive in Content-Security-Policy that allowed you to require them for everything on a page, but that looks like it's been deprecated.[1]
How would code signing be helpful? How would the key end up being trusted?
I do think that content pinning/notarization of web apps could be powerful. We are building some of those ideas here, and I have an interest in how this could be used to pin critical apps to an audited/approved version: https://transparencylog.com
The attack scenario with current https is "get a file on the server". That's all you have to do in order to get full permissions because we're gating features behind https.
If features require combination of https + code signing, you now need to not only get a file on the server, but you need to sign it. For high privilege APIs you would likely also want to do certificate approval like Windows does, where it shows you the certificate and asks whether you want to approve and whether you want to trust the signer. In many cases websites are loading code off a CDN, so the domain name of the website isn't actually important: the identity of the code's author is important and its integrity is important.
You could also configure your server to pin a given code signing certificate or set of certificates, and thus even if someone manages to get code onto your server that they signed, it wouldn't run.
Modern web platform APIs do have a way to specify expected hashes when loading external resources, so you can protect yourself against a third party (like a CDN) being compromised - but this does nothing if your server gets compromised, because they can just change the hash.
> Why does discord even let users access dev tools? This is trivially easy to disable in prod builds.
Even if you disable the usual way to pop the dev tools open in desktop electron mode, this is a dangerous attitude. A malicious client can always edit the DOM, send arbitrary data to your server, read whatever you send back, etc. "Disabling the dev tools" is never the correct solution to any security problem.
I didn’t say that they should do this for security of the execution environment of the code, I’m not sure how you got to that interpretation. The GP said that there is output to tell users not to compromise themselves by blindly pasting unsafe code, which is easily prevented by disabling dev tools altogether, and there is no reason for an end user to have access to dev tools in an electron app. The other replies are probably what the GP meant and were related to the web version of discord.
This is also present in the Electron version of Discord.
> there is no reason for an end user to have access to dev tools in an electron app.
Certainly there is - there are various user-made applications that take advantage of Discord being built on Electron to allow you to write your own custom styles and scripts.
I assume the parent is referring to the web version, where a big warning is output on the console of the browser’s built-in dev tools. I don’t know of any way for a website to disable the browser’s dev tools.
I don't know; I understand the cynicism here. But can we assume an optimistic scenario where vendors are able to properly address the security implications and this enables web developers to build all kinds of new applications? I for one would love to be able to write web applications that could, say, control the lights in my house or smart devices. I hate interfacing with my phone to control these types of actions; I don't have the same bandwidth with my phone as I do with a full-blown computer, and therefore use my phone as little as possible. Therefore, I am closed off from really having fine-grained control over the devices on my LAN. Using the browser for this type of behavior would be awesome, IMO.
It’s funny - in the arguments about Apple and the App Store, I’ve seen plenty of people making the opposite case - ‘it’s not scary out there’, the security benefits of the App Store are ‘bullshit’, etc.
No, we have decades of evidence (including state of the art tech, right now, today) that argues against your optimistic scenario. In fact, Chrome is basically a piece of spyware as is. Google has no incentive to protect your privacy. How much worse will it get if the browser can reach into your LAN?
Yeah I agree. The browser does seem to be the preeminent attack vector since it's the easiest way for actors to reach into your systems. I can imagine the next generation of ransomware where your smart devices are held hostage for small-ish ransoms.
To your point about privacy, yeah I also agree. I think the real evil here is how implicit the exploitation of trust is with using Chrome and Google services. Ideally the privacy disclaimers should be more pronounced and end users should know what exactly they're giving up by using these products.
I come from the Chinese mindset where privacy doesn't have the capital P like it does in the US. Most Chinese are fine with the government and private industry infiltrating their lives if the net effect is that their lives are improved. I think a lot of them have not fully weighed the cost-benefit analysis properly, and who knows if there ever will be a tipping point event where people decide technology has too deep a hand in their inner-lives.
I think most people in the US have yet to encounter the potentially dire consequences of the slow erosion of their privacy. Who knows, maybe the net effect for most consumers will always be positive. I'm still using Chrome, Gmail, and am locked into Google's net, but I like to think it's something I'm cognizant of, and paying attention to from far away.
Hahah. That made me laugh out loud. The idea of having a more open standard to be able to talk to devices in other people's houses is a Pandora's box; but I could also see a lot of interesting use cases. I'm just toying with the idea of warming the color temperature of the lights in my parents' house at night to help them sleep (since I can't get them to care enough themselves). Or think about some type of live poster that your circle of friends streams content to. I'm sure there are tons of interesting possibilities.
This is good, I guess. I have to develop relatively complex desktop-like applications on the Chromium platform, and I still feel that there are quite a few gaps until it can become a full desktop-capable development platform.
What's really missing for me is an Android-like permission system where a website could request permissions to access webcam, bluetooth, microphone, local storage, notifications, audio playback etc...
At the moment, most of the access is provided by a number of different APIs, and, in some instances, it is a heuristics-based decision (looking at you, audio playback).
I would have loved this in 2008 for the Beijing Olympics. My team created the point of sale, which included printing specialized tickets on specialized printers. These were going to be deployed across 1,000+ Bank of China locations, including many in pretty remote regions.
Instead, we created an applet to handle the IO with the printer. It was a nightmare. All sorts of little differences between Windows XP and the JRE made it ridiculously hard to debug. While the banks were given exact specs, the culture was such that whenever they had a problem, they would insist the machine was the right spec, which wasn't true 90% of the time.
Have they even fixed websocket portscanning, canvas/audio/font fingerprinting, their crippled request blocking API etc. etc. before coming up with this new privacy and security nightmare?
Can we maybe have a breather?
There must be something about the term sandboxed environment that makes people want to poke holes in it.
I am kind of against all of this, as it will inevitably lead to better fingerprinting. I already think it's a serious breach of privacy that browsers supply hardware information such as connected joysticks and game controllers etc. without any kind of permission dialog.
This is even worse because it allows for malicious websites to absolutely pwn the living sh*t out of the end user.
You thought that eBay's port scanning through websockets was bad? Then just wait to see what they will use this for.
Guys, "The Birth & Death of JavaScript" was supposed to be a joke. In all seriousness though, I'm beginning to get tired of webdevs reading OS books and going "oh, this would be a great addition to The Web™". Slap "in the browser" after a feature all operating systems have supported for decades and you'll hit the front page of HN in no time. What's old is new again and all that.
Security concerns here are overblown, and this is IMHO more useful than WebRTC. This is how it should have been done to begin with, instead of the ugly hack known as WebSockets.
There should be some restrictions. No ports under 1024 without asking the user would go a long way.
Anyone saying “the web isn’t an application platform” needs to just accept reality. That ship sailed almost 20 years ago.