Responding to you and fullshark, I'm not criticizing, only observing. Just as there is some evolutionary pressure causing carcinization, it's interesting to consider what pressure pushes things in the directions of email and LLMs.
I don't know what it is, but would love to hear others' ideas.
I think "email" is a bit of an overly specific term, but if we take a small step back, communicating with other humans is usually the most important part of any piece of software.
I have a feeling these things will spend 99% of their processing time reading other LLMs' outputs.
Resumes written by LLMs and read by LLMs
PR summaries written by LLMs and read by LLMs
Emails written by LLMs and read by LLMs
...
Everything could just be a few bullet points... these things were already 90% posturing and trying to sound fancy by using convoluted sentences and big words. Now that it's been automated, what's the point?
Exactly right. It's totally plausible that someone could build a mental health chatbot that results in better outcomes than people who receive no support, but that's a hypothesis that can and should be tested and subject to strict ethical oversight.
I feel this way about some of the more extreme effective altruists. There is no room for uncertainty or recognition of the way that errors compound.
- "We should focus our charitable endeavors on the problems that are most impactful, like eradicating preventable diseases in poor countries." Cool, I'm on board.
- "I should do the job that makes the absolute most amount of money possible, like starting a crypto exchange, so that I can use my vast wealth in the most effective way." Maybe? If you like crypto, go for it, I guess, but I don't think that's the only way to live, and I'm not frankly willing to trust the infallibility and incorruptibility of these so-called geniuses.
- "There are many billions more people who will be born in the future than those people who are alive today. Therefore, we should focus on long-term problems over short-term ones because the long-term ones will affect far more people." Long-term problems are obviously important, but the further we get into the future, the less certain we can be about our projections. We're not even good at seeing five years into the future. We should have very little faith in some billionaire tech bro insisting that their projections about the 22nd century are correct (especially when those projections just so happen to show that the best thing you can do in the present is buy the products that said tech bro is selling).
The "longtermism" idea never made sense to me: So we should sacrifice the present to save the future. Alright. But then those future descendants would also have to sacrifice their present to save their future, etc. So by that logic, there could never be a time that was not full of misery. So then why do all of that stuff?
At some point in the future, there won't be more people who will live in the future than live in the present, at which point you are allowed to improve conditions today. Of course, by that point the human race is nearly finished, but hey.
That said, if they really thought hard about this problem, they would have come to a different conclusion:
Actually, you could make the case that the population won't grow over the next thousand years, maybe even ten thousand years, but that's the short term and therefore unimportant.
To me it is a disguised way of saying the ends justify the means. Sure, we murder a few people today, but think of the utopian paradise we are building for the future.
From my observation, that "building the future" isn't something any of them are actually doing. Instead, the concept that "we might someday do something good with the wealth and power we accrue" seems to be the thought that allows the pillaging. It's a way to feel morally superior without actually doing anything morally superior.
A bit of longtermism wouldn't be so bad. We could sacrifice the convenience of burning fossil fuels today so that our descendants have a habitable planet.
But that's the great thing about Longtermism. As long as a catastrophe is not going to lead to human extinction or otherwise specifically prevent the Singularity, it's not an X-Risk that you need to be concerned about. So AI alignment is an X-Risk we need to work on, but global warming isn't, so we can keep burning as much fossil fuel as we want. In fact, we need to burn more of them in order to produce the Singularity. The misery of a few billion present/near-future people doesn't matter compared to the happiness of sextillions of future post-humans.
Well, there's a balance to be had. Do the most good you can while still being able to survive the rat race.
However, people are bad at that.
I'll give an interesting example.
Hybrid cars. Modern proper HEVs[0] usually benefit their owners, both through better fuel economy and, in most cases, by being overall more reliable than a normal car.
And, they are better on CO2 emissions and lower our oil consumption.
And yet most carmakers, as well as consumers, have been very slow to adopt. On the consumer side, we are finally to the point where we have hybrid trucks getting 36-40 MPG while capable of towing 4,000 pounds or hauling over 1,000 pounds in the bed [1]; hybrid minivans capable of 35 MPG for transporting groups of people; hybrid sedans getting 50+ MPG; and small SUVs getting 35-40+ MPG for people who need a more normal 'people' car. And while they are selling better, it's insane that it took as long as it did to get here.
The main 'misery' you experience at that point is that you're driving the same car as a lot of other people, and it's not as exciting [2] as something with more power than most people know what to do with.
And hell, as they say in investing, sometimes the market can stay irrational longer than you can stay solvent. E.g., was it truly worth it to Hydro-Quebec to sit on LiFePO4 patents the way they did, versus just figuring out licensing terms that got them a little bit of money and then properly accelerated adoption of hybrids/EVs/etc.?
[0] - By this I mean something like Toyota's HSD-style setup used by Ford and Subaru, or Honda's or Hyundai/Kia's setups where there's still a more normal transmission involved.
[1] - Ford advertises up to 1500 pounds, but I feel like the GVWR allows for a 25 pound driver at that point.
[2] - I feel like there's ways to make an exciting hybrid, but until there's a critical mass or Stellantis gets their act together, it won't happen...
> [2] - I feel like there's ways to make an exciting hybrid, but until there's a critical mass or Stellantis gets their act together, it won't happen...
Many hybrids are already way more exciting than a regular ICE because they provide more torque, and many consumers buy hybrids for exactly that reason.
Not that these technologies don't have anything to bring, but any discussion that still presupposes that cars/trucks(/planes) (as we know them) still have a future is (mostly) a waste of time.
P.S.: The article mentions the "normal error-checking processes of society"... but what makes them so sure cults aren't part of them?
It's not like society is particularly good at this either, or immune from groupthink (see the issue above) - and who do you think is more likely to kick-start a strong enough alternative?
(Or are they just sad about all the failures? But it's questionable whether the "process" can work (with all its vivacity) without the "failures"...)
It goes along with the "taking ideas seriously" part of [R]ationalism. They committed to the idea of maximizing expected quantifiable utility, and imagined scenarios with big enough numbers (of future population) that the probability of the big-number-future coming to pass didn't matter anymore. Normal people stop taking an idea seriously once it's clearly a fantasy, but [R]ationalists can't do that if the fantasy is both technically possible and involves big enough imagined numbers to overwhelm its probability, because of their commitment to "shut up and calculate".
"I should do the job that makes the absolute most amount of money possible, like starting a crypto exchange, so that I can use my vast wealth in the most effective way."
Has always really bothered me because it assumes that there are no negative impacts of the work you did to get the money. If you do a million dollars worth of damage to the world and earn 100k (or a billion dollars worth of damage to earn a million dollars), even if you spend all of the money you earned on making the world a better place, you aren't even going to fix 10% of the damage you caused (and that's ignoring the fact that it's usually easier/cheaper to break things than to fix them).
> If you do a million dollars worth of damage to the world and earn 100k (or a billion dollars worth of damage to earn a million dollars), even if you spend all of the money you earned on making the world a better place, you aren't even going to fix 10% of the damage you caused (and that's ignoring the fact that it's usually easier/cheaper to break things than to fix them).
You kinda summed up a lot of the post-industrial-revolution world there, at least as far as things like toxic waste (Superfund, anyone?) and climate change go. I mean, for goodness' sake, just think about TEL and how they knew ethanol could work, but it just wasn't 'patentable'. [0] Or the "we don't even know the dollar amount because we don't have a workable solution" problem of PFAS.
[0] - I still find it shameful that a university is named after the man who enabled this to happen.
And not just that, but the very fact that someone considers it valid to try to accumulate billions of dollars so they can have an outsized influence on the direction of society, seems somewhat questionable.
Even with 'good' intentions, there is the implied statement that your ideas are better than everyone else's and so should be pushed like that. The whole thing is a self-satisfied ego-trip.
There's a hidden (or not so hidden) assumption in the EA's "calculations" that capitalism is great and climate change isn't a big deal. (You pretty much have to believe the latter to believe the former).
It may help prevent linkjacking. If an old URL no longer works but the goo.gl link is still available, it's possible that someone could take over the URL and use it for malicious purposes. Consider a scenario like this:
1. Years ago, Acme Corp sets up an FAQ page and creates a goo.gl link to the FAQ.
2. Acme goes out of business. They take the website down, but the goo.gl link is still accessible on some old third-party content, like social media posts.
3. Eventually, the domain registration lapses, and a bad actor takes over the domain.
4. Someone stumbles across a goo.gl link in a reddit thread from a decade ago and clicks it. Instead of going to Acme, they now go to a malicious site full of malware.
With the new policy, if enough time has passed without anyone clicking on the link, then Google will deactivate it, and the user in step 4 would now get a 404 from Google instead.
Goo.gl was a terrible idea in the first place because it lends Google's apparent legitimacy (in the eyes of the average "noob") to unmoderated content that could be malicious. That's probably why they at least stopped allowing new ones to be made. By allowing old ones, they can't rule out the Google brand being used to scam and phish.
e.g. Imagine an SMS or email saying "We've received your request to delete your Google account effective (insert 1 hour's time). To cancel your request, just click here and log into your account: https://goo.gl/ASDFjkl"
This was a very popular strategy for phishing and it's still possible if you can find old links that go to hosts that are NXDOMAIN and unregistered, of which there are no doubt millions.
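Spotting those dead targets is, at its core, a DNS check. A minimal Python sketch (the function name is mine; a real scanner would also need to follow the shortener's redirect first and handle timeouts):

```python
import socket

def host_resolves(hostname: str) -> bool:
    """Rough NXDOMAIN check: True if DNS can resolve the hostname."""
    try:
        socket.gethostbyname(hostname)
        return True
    except socket.gaierror:
        # Resolution failed: the domain may be unregistered and up for grabs.
        return False
```

The `.invalid` TLD is reserved (RFC 2606) and is guaranteed never to resolve, which makes it handy for testing the failure path.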
Only insofar as Google might wish to prevent it since their brand was on the shortened url you clicked to get there. And people not having malware is surely good for Google indirectly.
Presumably ACME used the link shortener because they wanted to put the shortened link somewhere, so someone’s going to click things like these. If Google can just delete a lot of it why not?
+1. Users don't ask for such things. They can't formulate them in writing (unless they are designers, artists, visual editors, or UX experts themselves) and don't have time for that.
There are a few reasons. The biggest one, IMO, is that it lets non-technical users change things quickly without having to go through the engineering team. Obviously there are limits to that, but in many cases, a product or marketing team wants to modify a form or test a few variations without having to put it into a backlog, wait for engineers to size it, wait for an upcoming sprint, then wait another two weeks for it to get completed and deployed. (Even in more nimble organizations, cutting out the handoff to engineering saves time, eliminates communication issues, and frees up the engineering team to do more valuable work.)
On the technical side, these form builders can actually save a decent amount of development effort. Sure, it's easy to build a basic HTML form, but once you start factoring in things like validation, animations, transitions, conditional routing, error handling, localization, accessibility, and tricky UI like date pickers and fancy dropdowns, making a really polished form is actually a lot of work. You either have to cobble together a bunch of third-party libraries and try to make them play nicely together, or you end up building your own reusable, extensible, modular form library.
It's one of those projects that sounds simple, but scope creep is almost inevitable. Instead of spending your time building things that actually make money, you're spending time on your form library because suddenly you have to show different questions on the next screen based on previous responses. Or you have to handle right-to-left languages like Arabic, and it's not working in Safari on iOS. Or your predecessor failed to do any due diligence before deciding to use a datepicker widget that was maintained by some random guy at a web agency in the Midwest that went out of business five years ago, and now you have to fork it because there's a bug that's impacting your company's biggest client.
Or, instead of all that, you could just pay Typeform a fraction of the salary for one engineer and never have to think about those things ever again.
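The "show different questions based on previous responses" part is a good example of how the scope creeps: routing turns a flat list of fields into a small state machine. A hypothetical sketch (all question ids and prompts are made up):

```python
# Each question names the next question based on the answer given.
FORM = {
    "start": {"prompt": "Do you own a car?",
              "next": lambda answer: "model" if answer == "yes" else "end"},
    "model": {"prompt": "Which model?",
              "next": lambda answer: "end"},
    "end": None,  # terminal node
}

def route(answers):
    """Walk the form and return the question ids actually shown."""
    qid, shown = "start", []
    while FORM[qid] is not None:
        shown.append(qid)
        qid = FORM[qid]["next"](answers[qid])
    return shown
```

Even this toy version ignores validation, back-navigation, and saving partial progress, which is where most of the real effort goes.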
But presumably you could have built it before, just slower, which is the point. For now, that speed-up just looks like a win because it’s novel, but eventually the speed-up will be baked into people’s expectations.
The biggest issue at most companies is making a lot of heat and noise while not actually delivering effectively, so I expect AI to make this problem worse.
Usually, companies benefit more from slowing down and prioritizing, not ‘going faster’.
> For now, that speed-up just looks like a win because it’s novel, but eventually the speed-up will be baked into people’s expectations.
It will still be a win: the rewards for the new productivity have to go somewhere in the economy.
Just like other productivity improvements in the past, it will likely be shared amongst various stakeholders depending on a variety of factors. The workers will get the lion's share.
Labour's share of GDP has held roughly constant through the ages, even as we saw massive productivity increases since the dawn of the industrial revolution.
You are comparing apples to oranges here. These are two completely different things.
Imagine the labour share of GDP could be a constant 100%, but perhaps the top 1% of workers (e.g. CEOs) get all the rewards and the other 99% get nothing.
(That's not meant to be realistic, just to illustrate that you can have a very unequal distribution despite a high or even growing labour share of GDP. No opinion expressed on whether the statistics you cite are any good.)
That means that some (few) workers may be getting the lion's share (but note that "income" isn't limited to workers, so it may also be none), not workers in general. People are interested in the latter, not the former.
Normally, if you compile the same code twice on the same machine, you'll get the same result, even if it's not truly reproducible across machines or large gaps in time. And differences between machines or across time are usually small enough that they don't impact the observed behavior of the code, especially if you pin your dependencies.
However, with LaTeX, the output of the first run is often an input to the second run, so you get notably different results if you compile it once vs. compiling twice. When I last wrote LaTeX about ten years ago, I usually encountered this with page numbers and tables of contents, since the page numbers couldn't be determined until the layout was complete. So the first pass would get the bulk of the layout and content in place, and then the second pass would do it all again, but this time with real page numbers. You would never expect to see something like this in a modern compiler, at least not in a way that's visible to the user.
(That said, it's been ten years, and I never compiled anything as long or complex as a PhD thesis, so I could be wrong about why you have to compile twice.)
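The mechanism can be illustrated with a toy Python simulation (this is not LaTeX's actual implementation, just the fixpoint idea: pass 1 records label positions into an "aux" table while emitting "??" for unresolved references, and pass 2 re-runs with that table filled in):

```python
def compile_once(source, aux):
    """One 'compilation pass' over a list of (kind, name) events."""
    out, new_aux, page = [], {}, 1
    for item in source:
        if item[0] == "label":
            new_aux[item[1]] = page          # record where the label landed
        elif item[0] == "ref":
            out.append(str(aux.get(item[1], "??")))  # unresolved on pass 1
        elif item[0] == "newpage":
            page += 1
    return out, new_aux

# A reference on page 1 pointing at a label on page 2:
source = [("ref", "concl"), ("newpage",), ("label", "concl")]
out1, aux = compile_once(source, {})   # first pass: reference prints "??"
out2, _ = compile_once(source, aux)    # second pass: reference prints "2"
```

Deeply nested cases (e.g. a resolved reference that changes the page count) can even need a third pass, which is why tools like latexmk rerun until the aux file stops changing.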
I wrote my PhD (physics) in LaTeX and I indeed needed to compile twice (at least) to have a correct DVI file.
That was 25 years ago, though, but apparently this part has not changed.
This said, I was at least sure that I would get an excellent result and not be like my friend who used MS Word and one day his file was "locked". He could not add a letter to it and had to retype everything.
Compared to that my concern about where a figure would land in the final document was nothing.
Did you feel that way in 2005 when the Xbox 360 was released and $60 was the new standard? Because $60 in 2005 has the same buying power as $97.41 today. This game, in real terms, is cheaper than Xbox 360 games were.
The 1993 version of Doom was $40 plus shipping and handling. Let's ignore the shipping and handling.
That's $90 today.
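The adjustment in both cases is just the ratio of consumer price indices. A quick sketch (the CPI-U index values below are approximate annual averages, used only for illustration):

```python
# real_price_today = nominal_price * (CPI_now / CPI_then)
CPI = {1993: 144.5, 2005: 195.3, 2025: 320.0}  # approximate CPI-U values

def in_2025_dollars(price, year):
    """Convert a nominal price from `year` into approximate 2025 dollars."""
    return price * CPI[2025] / CPI[year]

xbox_era_game = in_2025_dollars(60, 2005)  # roughly $98
doom_1993 = in_2025_dollars(40, 1993)      # roughly $89
```

The small differences from the figures quoted above come from which month's index you pick; the conclusion is the same either way.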
I don't play very many video games anymore though I did play the hell out of Doom 30 years ago.
What I have noticed, as an outside observer looking in, about people who play video games today is that they seem to be among the most entitled people on the planet who love bitching about everything.
My theory on this is that gamers are no longer a niche demographic, but rather to some extent a large cross section of the population, having been raised with games and many still playing well into adulthood. Since the Internet is an extremism machine that further amplifies the loudest voices, it’s allowed the actions of the most obnoxious members of the group to become the representative voices in social media.
I think like most groups there is a silent majority getting on with it, and a loud minority picking fights online. That said, every time I have tried to take up a competitive online game with voice chat, I have regretted it.
There are some people who make gaming their whole identity, and when something happens that they don’t like, they act like gaming is a fundamental right instead of a luxury.
I understand this, but the economics of software in general is that you have high upfront costs and then the marginal costs are minimal. Better tooling has helped keep these upfront costs from growing too much (developing a game in 2025 is MUCH easier than in 2005), the distribution costs have shrunk too, and the size of the market has exploded. Given all that, is it really unreasonable for consumers to expect prices to stay flat?
The economics of video games is that they are enterprise software, at least major releases from large companies are. They have large teams made up of intercommunicating subteams, large budgets, even bigger marketing budgets, corporate mission criticality (failure of a game can break a company or studio), and significant server infrastructure that must be kept online and maintained. These days they're even usually written as customizations to existing frameworks (called Unity or Unreal rather than Java EE or Spring).
So there are significant upfront and ongoing costs to releasing a game like Mario Kart World. $60 per copy just isn't going to cover those costs. The only options are to charge more upfront or introduce purchasable cosmetics and the like to extract that value from the customer another way.
Nintendo pricing is unique because they barely do sales. If a $60 Xbox 360 game was too expensive, you could just be patient and let the price creep downward all the way to the bargain bin if desired. OTOH, the last Mario Kart game from 8 years ago (which was a re-release of a Wii U game from 11 years ago) still retails for $50 to this day, even as the sequel is about to drop.
I don't necessarily think Ive is going to succeed, but if you're going to make a lot of bets, taking one bet on someone who succeeded before seems pretty reasonable. He wouldn't be the first person to rise to great heights, fall, and rise again, even in the Apple world.
I absolutely agree right up until we start talking about price. Obviously this deal was all in stock from someone who has a history of creative corporate control structures, but nevertheless the on-paper cost was $6.4 billion. That's a hell of a bet.