While I agree those areas are much nicer, most people still need to commute to, work in, and spend a considerable amount of time in SoMa, FiDi, etc.
I used to live in one of those areas you talk about, yet I was still attacked and threatened several times in the span of 6 months during my commute to our office on Market Street.
Hi HN! Excited to showcase a fun browser extension we built over the weekend. Lenses is a browser extension for Twitter that transforms all the tweets in your feed to make them match your mood.
heard you guys on the logrocket podcast btw, thank you for taking on this challenging but unexplored market. do you have any design patterns for (as geoffreylitt said) “stably addressing dom elements” in twitter and gmail? i noticed there was a gmail specific extension platform launching here the other day, have yet to try it out tho
We have a lot of ideas on how to make this problem a lot easier. It's a big reason why extensions aren't as big as they could be. Browser extension devs having to tinker and even think about the DOM or any other website-specific quirk is a leaky abstraction.
If it's an issue, the usual approach is to create a mapping like "twitter-sidebar : some XPath selector" on the backend and have the extension query that mapping whenever it wants to interact with the web page.
They'll have e2e tests to check whether their selectors or XPaths still work. If not, they'll figure out what changed and modify the XPath. Some automate this step.
It's tricky because A/B tests are a thing, websites might change based on geographical region, and a bunch of other stuff leads to users seeing different things in the DOM compared to the e2e test. Logging errors to something like Sentry mitigates some of this, but the complexity is still quite large.
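The mapping-plus-fallback pattern above could be sketched roughly like this (the entry names and selectors are made up; in a real extension the map would be fetched from the backend so it can be updated without shipping a new extension version):

```typescript
// Hypothetical selector map with ordered fallbacks: try the selector for
// the current markup first, then older variants.
type Query = (selector: string) => object | null;

const selectorMap: Record<string, string[]> = {
  "twitter-sidebar": [
    '[data-testid="sidebarColumn"]',  // assumed current markup
    'div[aria-label="Sidebar"]',      // assumed older fallback
  ],
};

// `query` is injected (in the extension it would be
// (s) => document.querySelector(s)) so the resolution logic
// can be unit-tested outside a real DOM.
function resolveElement(name: string, query: Query): object | null {
  for (const selector of selectorMap[name] ?? []) {
    const el = query(selector);
    if (el) return el; // first selector that still matches wins
  }
  return null; // nothing matched: report to Sentry etc. so breakage is visible
}
```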
It stems from the fact that frontends only consider humans, not robots. You either need to make your robot super resilient to change (above approach), make your robot act more like a human, have the website consider the robot ("Connect to Wallet", etc), or use the web app's backend API if it exists.
We're experimenting with all four of these approaches this year and seeing which ones are the most valuable to people, so stay tuned for more exciting extension stuff!
Blocking TLS 1.3 would mean that nothing modern (no modern web sites) works. Modern browsers and servers both speak TLS 1.3, and if they can't they give up. Some things don't work in China, but China wouldn't have a thriving economy if nothing was working. So no, they did not block TLS 1.3, although it's interesting how this rumour seems to have self-popularised. China blocks certain popular sites, but it does not block whole protocols or protocol versions.
This is actually a small triumph for the people responsible for RFC 8446. With previous iterations of TLS it was always discovered shortly after release that idiots broke stuff and so a "fallback" was necessary to allow you to speak the previous version. Such a fallback is dangerous because an adversary can forcibly downgrade you to an older protocol, and thus attack the old protocol even if the new protocol is safe.
How is it done? That is, how does TLS 1.3 avoid downgrade attacks?
When a TLS 1.3 server finds itself talking to somebody over TLS 1.2 (for example, maybe a rather archaic web browser is connecting), it scribbles over the last eight of the bytes labelled "random" in its Hello message. It scribbles 44 4F 57 4E 47 52 44 01, the first seven bytes of which spell "DOWNGRD" in ASCII.
Those bytes don't mean anything special in TLS 1.2; they'd just be a strange coincidence. But if you're a TLS 1.3 client, seeing those bytes means a downgrade attack was attempted. So you immediately give up: you are being attacked.
So you might think: well, a bad guy could just blindly change those bytes, right? Nope. The "random" field is used by both parties to derive parameters they're going to verify in a moment to check everything is safe. If you change the bytes, those values won't match and the connection fails anyway.
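The client-side check described above can be sketched as a simple byte comparison (a simplified illustration of the RFC 8446 mechanism, not a full TLS implementation):

```typescript
// Per RFC 8446, a TLS 1.3 server that negotiates TLS 1.2 sets the last
// eight bytes of its 32-byte ServerHello random to this sentinel.
const DOWNGRD_TLS12 = Uint8Array.from([
  0x44, 0x4f, 0x57, 0x4e, 0x47, 0x52, 0x44, 0x01, // "DOWNGRD" + 0x01
]);

// A TLS 1.3 client that ended up on TLS 1.2 checks the tail of the
// server random and aborts the handshake if the sentinel is present.
function isDowngradeSignalled(serverRandom: Uint8Array): boolean {
  if (serverRandom.length !== 32) {
    throw new Error("ServerHello random must be exactly 32 bytes");
  }
  const tail = serverRandom.subarray(32 - DOWNGRD_TLS12.length);
  // compare byte-by-byte against the sentinel
  return tail.every((byte, i) => byte === DOWNGRD_TLS12[i]);
}
```

RFC 8446 also defines a sibling sentinel (ending in 00 instead of 01) for downgrades to TLS 1.1 or below.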
In his most recent testimony, Mudge mentioned that he was in the leaked OPM database with his details and clearance level exposed, which implies he held a clearance.
That doesn't necessarily mean Mudge had a Secret clearance or something. For all we know, he could have had a Public Trust position, which meant he handled sensitive but unclassified information. Anybody in IT or infosec would have that kind of clearance.
Did he mention a clearance level, or just being in the OPM breach? My understanding is that the OPM breach included plenty of uncleared employees as well.
(I’m not trying to be stubborn! If he really did hold a clearance as a DARPA PM, then I’m wrong in his case.)
Yeah, that’s the part I’m curious about: there are plenty of “public trust” or SBU roles that I’d expect to have been leaked with the OPM breach that are “cleared” in the pop sense of the word, but are not actual clearances in the US Government’s sense.
That's an interesting way to respond, given the context. NodeJS doesn't run on the frontend, so your argument here would be an even _stronger_ reason to favor using Chrome as your runtime...
Surely you are trolling? That's a very, very common setup. Nothing complicated or out of the ordinary requiring specific tooling for the dev environment.
The output that runs in the browser at runtime has the same sort of constraints of its own: a modern browser version, etc.
First of all, don't do that. It's obnoxious. Secondly:
> That's a very, very common setup.
So? Falling back on the argument that, essentially, "lots of people do this" is about as worthy as attempting to counter by saying that the other person/what they are doing is weird[1]. (It's actually slightly more respectable, but that's only because of how unrespectable the call-them-a-weirdo path is.) You either have an argument for $THING that will hold up under scrutiny without appealing to how weird/anti-weird $THING is, or you don't.
Thirdly, you are not compelled to comment. (What makes your decision to join in even more mystifying is that you were not the person being addressed—at least their impulse to do so would have made sense, even if the argument was still a bad one.) If you don't actually have an answer, why bother commenting at all?
We use Parcel under the hood and are huge fans! Our framework has opinionated abstractions on top of it that we think help improve the extension development experience considerably.
Features we've built so far:
- manifest.json is generated automatically. If you want to create a content script, you name a file content.ts, and it'll auto-gen the right manifest key-value pair for it. Same with background.ts. [1]
- Mounting a React component to the popup or options page is similar. You create a popup.tsx or options.tsx file, export a default React component, and it'll automatically associate it in the manifest and mount the component automatically for you.
- We support environment variables with .env files [2]
- We just released support to automatically inject a shadow DOM into a webpage and mount a React component from a content script [3]
- We have remote code bundling that automatically fetches URL-based imports (like Google Analytics) at build time to mitigate issues with MV3 not allowing remote code [4]
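For a content.ts and background.ts pair, the auto-generated manifest entries would presumably look something like this (the exact match patterns and output filenames are assumptions about the framework's defaults, not its documented behavior):

```json
{
  "content_scripts": [
    {
      "matches": ["<all_urls>"],
      "js": ["content.js"]
    }
  ],
  "background": {
    "service_worker": "background.js"
  }
}
```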
Thanks! We wrote a custom Parcel runtime [1] inspired by Parcel's HMR runtime (which was too bloated and buggy for us) that injects a web socket listener into the development build of the extension.
Whenever a bundle change happens, Parcel sends it the refresh message and it either does `chrome.runtime.reload()` or `location.reload()` depending on the context.
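The reload dispatch described above could look roughly like this (the message shape and the injected callbacks are assumptions about how such a dev runtime could work, not Parcel internals):

```typescript
// The two reload strategies, injected so the dispatch is testable:
// chrome.runtime.reload() in the background context reloads the whole
// extension; location.reload() is the fallback for content scripts/pages.
type ReloadCtx = {
  runtimeReload?: () => void; // present only in the background context
  pageReload: () => void;
};

function handleDevMessage(raw: string, ctx: ReloadCtx): "runtime" | "page" | "ignored" {
  const msg = JSON.parse(raw);
  if (msg.type !== "refresh") return "ignored";
  if (ctx.runtimeReload) {
    ctx.runtimeReload(); // reload the whole extension
    return "runtime";
  }
  ctx.pageReload(); // content scripts can only reload their own page
  return "page";
}

// In the injected dev listener, this would be wired to the web socket:
//   new WebSocket(devServerUrl).onmessage = (e) => handleDevMessage(e.data, ctx);
```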