It pretends to be. But in reality it's always been a VC honey pot.
I've stopped commenting here. I've made it a personal rule to only speak out against this tyranny and never talk about tech fluff, which is 100% of the front page of HN. I don't give two solid fucks about SQLite when the US government is throwing people in death camps in El Salvador.
This site is straight tech bro fascism. People are finally realizing that Elon isn't the guy his PR team created. He's not Tony Stark.
You have a lot of faith that Big Balls hasn't been compromised. Because surely none of them are using their personal smartphones or laptops, and surely they're all following strict access protocols. Seeing as they're so, so careful with everything else they've been doing.
I feel like this is a bad episode of the Twilight Zone.
One of the more bizarre things with this whole saga is seeing people act as though the existing government employees are any different. People throwing around "vetted" like it means something meaningful.
No, “vetting” basically means they checked to see if you ever got caught embezzling money, or in the case of clearances, if you lied about committing any crimes (committing them is ok). They are regular people and getting them to abide by sensible IT policies is a giant nightmare and compliance is poor.
Heck, have people already forgotten Trump’s tax returns were leaked by politically motivated “vetted” people working for the IRS? Not the first time that happened either. And they didn’t even find anything interesting!
"Had previously been fired from a job for leaking sensitive company data" tends to be the sort of thing that stops you from getting jobs where you work with extremely sensitive data.
I'm gonna go out on a limb and say no, not without first going through a change management process and going through a privileged session management system, except in the case of an emergency break-glass scenario where using those emergency creds throws all kinds of big DANGER alerts across the org if the access was unexpected. I can't speak to the Treasury and IRS specifically, but that's kinda standard across large orgs, especially ones that get audited regularly on their handling of sensitive data.
Some systems protect against that. The philosophy behind IBM RACF is :《 A key security principle is the separation of duties between different users so that no one person has sufficient access privilege to perpetrate damaging fraud.》
> The philosophy behind IBM RACF is :《 A key security principle is the separation of duties between different users so that no one person has sufficient access privilege to perpetrate damaging fraud.》
I am so primed to parse emoticons eagerly that I thought that the philosophy was :《
> No, “vetting” basically means they checked to see if you ever got caught embezzling money, or in the case of clearances, if you lied about committing any crimes (committing them is ok). They are regular people and getting them to abide by sensible IT policies is a giant nightmare and compliance is poor.
However little is involved in vetting, it's something that has been done for regular government employees and hasn't been done for these employees. I'd rather have minimal safeguards than none.
The deal still has me scratching my head. They tossed out the brand name and logo. Elon already had the X name and domain. For much less than $44B I feel like you could clone Twitter and come up with a strategy to acquire users. Hell, for $1 billion you could probably pay a good number of influencers to move to your platform. $44 billion is an absolute fuckton of money to kill Twitter and move those people to your pet project.
> $44 billion is an absolute fuckton of money to kill Twitter and move those people to your pet project.
What's the point of having Fuck You Money if you can't say "Fuck You?" Your value assessment isn't taking into account the value of destroying old Twitter, of removing a major bullhorn in the information environment away from people that Musk probably considers adversaries at best, and malevolent actors at worst. Simply standing up his own competing platform would not have accomplished this.
It's weird that everyone and their mum "could" clone Twitter easily. And yet the only products of note that are more than dismissive Hacker News comments or slideware, with something in production at similar scale, are Meta's Threads, which is still inferior in terms of search and discoverability and scaffolded on the guts of Instagram, and Bluesky, which has the advantage of being founded by Jack Dorsey and has been around for years now. For all the big talk from Musk et al., I'm not sure they could actually have built a clone. You can pay influencers, but if we look at how dominant TikTok is/was, making buzz and content is more an outcome of a bunch of incentivisations and the kind of non-technical community management people dismiss as marketing than of throwing money at all the big names you can find.
I know it's fashionable to use flatpak, Docker, etc. but I'd still rather not have 30 instances of Gtk running for every GUI app I decide to run. Consider that we still run on Raspberry Pi, etc.
> aren’t these shared libraries a supply chain attack vector
Not any more than the apps themselves. If you're downloading a static binary you don't know what's in it. I don't know why anyone trusts half the Docker images that we all download and use. But we do it anyway.
I think what you mean when you say "instance of Gtk" is a copy of the Gtk library in memory?
That's not how flatpak works; identical libraries will share the same file on disk and will only be loaded once, just like non-flatpak apps. And because Gtk is usually part of the runtime most apps will use one of a few versions.
There was also Kahn, which was a similar competitor.
I remember playing Duke3d over the internet. I was completely giddy as my friends and I all flew around with jetpacks on, trying to kill each other with pipebombs.
The downside was that those games were obviously not optimized for internet latency and there wasn't much you could do about it. But I definitely had a blast.
EVs are the worst proposition for a car rental. When you rent a car, you're planning on driving it. Much more than the car that sits on your driveway and only takes you to work and back every day.
But it doesn't help that Tesla is a tech company and their product is sold as a tech product. No one wants to buy a 3 year old used iPhone, either.
I can understand why people wanted that, and the benefit of doing that.
With that said, I also see benefit in having limitations. There is a certain comfort in knowing what a tool can do and cannot do. A hammer cannot become a screwdriver. And that's fine because you can then decide to use a screwdriver. You're capable of selection.
Take PostgreSQL. How many devs today know when it's the right solution? When should they use Redis instead? Or a queue solution? Cloud services add even more confusion. What are the limitations and weaknesses of AWS RDS? Or any AWS service? Ask your typical dev this today and they will give you a blank stare. It's really hard to even know what the right tool is today, when everything is abstracted away and put into fee tiers, ingress/egress charges, etc. etc.
tl;dr: limitations and knowledge of those limitations are an important part of being able to select the right tool for the job
I see zero benefit in having artificial functionality limitations. In my hypothetical example, imagine that `sed 's/foo/bar/'` works but `sed 's/foo/bark/'` does not because it's 1 character too long. There's not a plausible scenario where that helps me. You wouldn't want to expand sed to add a fullscreen text editor because that's outside its scope. Within its scope, limitations only prevent you from using it where you need it. It would be more like a hammer that cannot be made to hammer 3 inch nails because it has a hard limit of 2.5 inches.
Those are the kinds of limits GNU wanted to remove. Why use a fixed-length buffer when you can alloc() at runtime? It doesn't mean that `ls` should send email.
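To make the contrast concrete, here's a minimal C sketch (the buffer size and input handling are illustrative, not taken from any actual GNU tool): a fixed-length buffer bakes in an arbitrary ceiling at compile time, while dynamic allocation lets the real limit scale with available memory.

```c
#define _POSIX_C_SOURCE 200809L
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    /* Hard-coded limit: lines longer than 255 chars get silently truncated
       or split, and the "right" size is a guess frozen at compile time. */
    char fixed[256];
    if (fgets(fixed, sizeof fixed, stdin) != NULL)
        printf("fixed buffer read: %s", fixed);

    /* Dynamic allocation: getline() grows the buffer as needed, so the only
       real limit is available memory, which scales with the machine. */
    char *line = NULL;
    size_t cap = 0;
    if (getline(&line, &cap, stdin) != -1)
        printf("dynamic buffer read: %s", line);
    free(line);

    return 0;
}
```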
There's a major benefit: you can test that a program with an artificial limit works up to the limit, and fails in a well-defined manner above the limit. A program without any hardcoded limit will also fail at some point, but you don't know where and how.
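A hedged sketch of what that looks like in practice; the helper, the limit, and the error code here are hypothetical, and the point is only that the boundary is explicit, documented, and trivial to test on both sides:

```c
#include <errno.h>
#include <string.h>

#define MAX_NAME_LEN 64  /* hypothetical documented limit */

/* Copies src into dst. Returns 0 on success; above the limit it fails
   predictably with errno = ENAMETOOLONG, so the boundary case is easy
   to exercise in a test suite. */
int set_name(char dst[MAX_NAME_LEN + 1], const char *src)
{
    size_t n = strlen(src);
    if (n > MAX_NAME_LEN) {
        errno = ENAMETOOLONG;
        return -1;
    }
    memcpy(dst, src, n + 1);
    return 0;
}
```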
Imagine using a program that can only allocate 4GB of ram because it has 32-bit address space. There's no benefit to that limitation, it's an arbitrary limit imposed by the trade-offs made in the 80s. It just means that someone will need to build another layer to their program to chunk their input data then recombine the output. It's a needless waste of resources.
The benefit of not having a limitation is that the real limits scale with compute power. If you need more than 4GB of memory to process something, add more memory to the computer.
> Imagine using a program that can only allocate 4GB of ram because it has 32-bit address space. There's no benefit to that limitation
You're looking at isolated parts of a system. In a system, an artificial "limit" in one component becomes a known constraint that other components can leverage as part of their own engineering.
In the example of memory addresses, it might be "artificial" to say that a normal application can only use 32-bit or 48-bit addresses when the hardware running the application operates in 64-bits, but this explicit constraint might enable (say) a runtime or operating system to do clever things with those extra bits -- security, validation, auditing, optimization, etc.
And in many cases, the benefits of being able to engineer a system of constrained components are far more common and far more constructive than the odd occasion that a use case is entirely inhibited by a constraint.
That's not to say that we should blindly accept and perpetuate every constraint ever introduced, or introduce new ones without thoughtful consideration, but it's wrong to believe they have "no benefit" just because they seem "artificial" or "arbitrary".
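To make the "extra bits" point above concrete, here's a purely illustrative C sketch of pointer tagging, assuming a platform where user-space addresses fit in the low 48 bits (true on typical x86-64 systems), so a runtime can stash metadata in the high bits as long as it masks them off before dereferencing. None of these helpers come from a real runtime; they just show the kind of constraint-exploiting trick being described.

```c
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

#define TAG_SHIFT 48
#define ADDR_MASK ((1ULL << TAG_SHIFT) - 1)

/* Pack a 16-bit tag into the unused high bits of a 48-bit address. */
static void *tag_ptr(void *p, uint16_t tag)
{
    return (void *)(((uintptr_t)p & ADDR_MASK) | ((uintptr_t)tag << TAG_SHIFT));
}

/* Strip the tag before dereferencing. */
static void *untag_ptr(void *p)
{
    return (void *)((uintptr_t)p & ADDR_MASK);
}

static uint16_t ptr_tag(void *p)
{
    return (uint16_t)((uintptr_t)p >> TAG_SHIFT);
}

int main(void)
{
    int *x = malloc(sizeof *x);
    *x = 42;

    /* The tag (here a hypothetical type or audit marker) rides along with
       the address at no extra storage cost. */
    int *tagged = tag_ptr(x, 0x7);
    printf("tag=%u value=%d\n", (unsigned)ptr_tag(tagged), *(int *)untag_ptr(tagged));

    free(x);
    return 0;
}
```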
You can have ten comments about the name of a variable, but no one bats an eye at a new npm package being introduced. Also, devs who wrote code that Google depends on can't pass the leetcode gate check to get a job there.
The last sentence is an overreach to me, but I have experienced much of the same bike-shedding during code reviews. 95% of them are useless. Read that twice; I am not joking, sadly. I am not against code reviews, but in my experience(!), the reviewers are not incentivized to do a thorough job. Seriously, if you deliver a new feature vs. do a deep, difficult code review, which one benefits you more? To repeat: I don't like it; I am only pointing out the perverse incentives to rush during code reviews.
One more thing that I don't see enough people talking about here: Writing good software is as much about human relationships (trust!) as it is about the computer code itself. When you have a deeply trusted, highly competent teammate, it is normal to put less effort into the review. And to be clear, I am talking about the 99% scenarios of writing the same old CRUD corporate code that no one gets excited about here. Please don't reply to this post with something like, "Well, I work on Dragon at SpaceX and all branches have 2x code coverage and the QA team has 600 years of combined experience... Yada..."
One hour doing code review is not really stolen from doing feature work, is it? For the vast majority it's stolen from playing video games or some other non-work.
That heavily depends on the individual developer and the organization in question.
In general, the most highly skilled developers who are most capable of doing a thorough code review are also the ones who are most likely to be genuinely over capacity as is.
I'm not sure about your experience, but companies which have strong code review practices also have strong controls on third-party code. In terms of review granularity, it makes more sense to be critical of maintainability/readability for code you actually own and maintain. Third-party code has a lower bar, and should, although I also believe it still needs to be reviewed.
Yeah, I think this is the common case. We usually trust that dependency A took a look at its dependencies B and C before releasing a new version of A. And even if we properly review our bump of A, how often do we check out the changes in B and C?
Edit: yes for FAANG-ish companies this is usually a bit different, for this reason. And licenses..
You say all this after recent documents were revealed about Facebook intercepting and analyzing Snapchat encrypted traffic via a man-in-the-middle conspiracy.