An Interview with Marc Andreessen about AI and How You Change the World (stratechery.com)
32 points by thm on June 29, 2023 | 75 comments


Seeing VCs like a16z shift so quickly from crypto to AI has made me doubt how smart these guys really are. When I started my career I thought, wrongly, that VCs were the real smart guys in the room when startups were founded. They were already rich, they provided great advice to early companies, they could see the future and knew what direction tech was going, they were essential.

As I've progressed in my career my view of them has changed dramatically. They're the money men who in many cases got lucky a few times. Sometimes really just once.

Now I don't mean to completely diminish their role. I've pretty much only worked at VC-funded startups in my career, so imo there's no overstating the importance of having some cash in the bank so you can work. But the people providing that cash are not visionaries. They are simple investors chasing returns. Whatever they think is the current thing that will generate returns, that's what they want to talk about. Whether it's social media, new media, cryptocurrencies, or now AI, they're all just chasing returns and trying to hype the space their portfolio companies are in. It doesn't really feel so visionary or special after all.


Yes, the tech world puts these folks on pedestals but they are really just trend followers. The club is small and unless you went to an elite high school and then an elite university you’re probably not getting in. But the difference between these VCs and the average technologist is just a lot of polish and elite backgrounds.


VCs' main alpha is that, by network/relationships, physical location, cultural convention, and investment mandate, they have access to pre-IPO tech companies that most regular Joes have to wait until IPO to invest in (years later, at greatly inflated prices by then).


I've dealt with many different types of people, in many different professions, from the bottom of the ladder to the top.

One thing that's very obvious is that you can't really predict how smart a person is based on their level of business success, what their profession is, how wealthy or poor they are, etc.

What you can predict is what their skillset is, but being "smart" and being highly skilled are two different things.


I get that feeling hearing him debate. His recent interview with Sam Harris made him look rather foolish. He may be smart but he's not great at getting that across to an audience.


“This is the only time I’ve ever said this is like the internet. If you go back through all my historical statements, one could imagine that with my experience I could have said this like 48 times. I’ve never made the comparison before. I’ve never said it about any other kind of technology. I never said it about anything else between the original internet and then the emergence of crypto, because I just wanted people to know like I don’t take the comparison lightly.” - Marc Andreessen on blockchain in 2022

So, yeah, Marc Andreesen says lots of things.


It is way too soon to really have an opinion on whether or not crypto will live up to the hype.

FWIW, I've been recommended a lot of videos from the recent Bloomberg Invest summit. There are tons of people from traditional finance (e.g. Goldman Sachs), Hedge Funds, Private Equity, people from the Fed, SEC, etc. The main questions they've been asking are related to inflation and interest rates but crypto comes up in almost every single conversation. Mostly crypto comes up because of the FTX fiasco and questions about regulation (also regulation related to recent bank collapses like SVB). But the constant refrain is "crypto is clearly here to stay".

The prevailing narrative seems to be that the adults are going to come in and take it over. Finance moves very slowly, but like a glacier it also has tremendous mass and therefore power. What is absent in those conversations is the snide derision and outright dismissal I often see on Hacker News.

I think we focus here on NFTs, shit coins and other token IPO scams. But the finance industry seems to accept some kind of digital currency is inevitable. There is going to be insanely big money to be made in that space.


Ty still makes Beanie Babies.


Never forget, Marc Andreessen explaining what Web3 is:

https://www.youtube.com/watch?v=Jwyp0wogOrw


I guess you mean that he has been proven wrong before, but since that quote is from 2022, I'm sure he still stands by that statement.


He has openly professed his belief in "strong opinions, weakly held."


Which is a horrible strategy for personal growth. The strength of your opinions should be modulated by the strength of their justifications. My opinion about the shape of the earth is and should be stronger than my opinion about the hilliness of São Tomé and Príncipe.

It also leads to situations where people with skills unrelated to truth-finding can be more persuasive than they should be; e.g. charismatic or comedic people will be able to persuade you more readily than people without those skills, even though nothing about those skills leads anyone toward more correct opinions.


wow I dislike him more already


Andreessen first says AI is the most transformative technology since fire:

> If I’m right about that and that’s how this is going to play out, then this is the most important technological advance with the most positive benefits, basically, of anything we’ve done probably since, I don’t know, something like fire, this could be the really big one.

And then, when existential risk is mentioned, saying that AIs are "math and code" and "you're completely capable of understanding what it is":

> The thing is, what we’re dealing with here is something that you’re completely capable of understanding what it is. What it is it’s math and code. You can buy many textbooks that will explain the math and code to you, they’re all being updated right now to incorporate the transformer algorithm, there’s books already out on the market.

How can a technology be as transformative as fire, and simultaneously have implications that are easy to understand in the moment?

And for someone in Andreessen's position, who is positioned to profit enormously from unrestrained AI growth, what could explain this dissonance except for motivated reasoning?


He is also missing the point, perhaps intentionally. Of course we can understand how the learning algorithms work, since those are code written by us. But not how the models work, since those are written by the learning algorithms as they churn through vast amounts of training data. At billions of parameters, models are black boxes.
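The distinction the comment is drawing can be made concrete with a toy sketch (purely illustrative, not any real system): the learning algorithm below is a few readable lines of gradient descent, but everything the "model" knows lives in the learned weight values, not in the code.

```python
import numpy as np

# The "learning algorithm" is a few transparent lines: gradient descent
# on a tiny linear model fit to synthetic data.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))                      # toy inputs
true_w = np.array([1.5, -2.0, 0.5])                # hidden ground truth
y = X @ true_w + rng.normal(scale=0.1, size=100)   # toy targets

w = np.zeros(3)                                    # the "model" is just these numbers
for _ in range(500):
    grad = 2 * X.T @ (X @ w - y) / len(y)          # gradient of mean squared error
    w -= 0.1 * grad                                # the entire learning rule

print(w)  # close to true_w; the weights, not the loop, encode what was learned
```

With 3 parameters we can still read the weights off directly; the comment's point is that at billions of parameters that reading-off stops being possible, even though the training loop stays this simple.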


Such a weird direction to take it, total non-sequitur. It's like responding to concern about risks of nuclear war or bio-engineered pathogens with "pft, you can understand that tech, it's all in textbooks, it's just physics and engineering". Like... OK? So? That has nothing to do with it.


AI is going to kill us all! Unless I can make a ton of money from it, then it's going to save us all!


Thank you for stating this in black-and-white.


Why go all the way back to fire when AI is dependent on electricity and microchips? Doubt a mechanical computer could have scaled up to run machine learning models.


Fire is a such a fitting example, it made me chuckle.


After publishing the essay "Time to build", a16z made substantial investments in crypto, and he and his wife went full NIMBY when it came to building housing in Atherton (their hometown). So I wouldn't call that a seminal essay unless it is meant sarcastically.


Yeah I can't take Marc Andreessen seriously any more after that incident, no matter how hard I try. He wrote that surreal letter objecting to building multifamily/affordable housing, claiming it would immensely reduce the property value of his walled mansion in Atherton. It just reeked of pettiness and hypocrisy.


Agreed, although I wrote him off much earlier than that. I can't decide if he's actually disconnected from reality or if he's just saying whatever he feels will make him more money.

The difference between the two is purely academic, though.


Habitually doing the second inevitably leads to the first.


It has to be both.

You don't get to be connected and lust after 'moar money'. The two are incompatible.


I don't know why anyone listens to this guy anymore. He's a very talented grifter.


He's a very rich talented grifter. That means he commands a vast sum of capital and can therefore shape our society disproportionately. His opinions may be completely detached from reality, but his actions have direct consequences in reality.


Because there are vast swaths of lesser talented grifters and aspirational grifters who operate in the wake of very talented grifters.


First you buy worthless tokens, hype them through the roof, and dump them on retail investor suckers.


When HN discussed https://www.safe.ai/statement-on-ai-risk (thread: https://news.ycombinator.com/item?id=36123082) a few weeks ago, many people were adamant that the many experts who signed this statement were hopelessly conflicted due to their personal interest in the success of AI.

If you were one of those people, do you feel the same way about Marc downplaying the dangers posed by technology that a16z has heavily invested in?


I don't think the argument goes both ways. People who are heavily invested in AI naturally want to see their investment play out, so they are going to hype up the dangers of AI when pushing for regulation that benefits them and downplay the dangers when seeking investment and public opinion that benefits them. In both cases, the core idea they are pushing is that AI is important, and therefore their work is important.


Marc Andreessen on the Sam Harris podcast yesterday arguing that we don't need to worry about AI:

> The moral of every story is "the good guys win"

https://www.youtube.com/watch?v=QMnH6KYNuWg&t=3064s


I listened to this too. As Sam mentions during the introduction, Marc is on the board of Meta and is invested in AI startups; from that moment on, his views made sense to me.

I thought he had very, very weak arguments and defenses, especially the "thermodynamic" argument he keeps going on about. It was a very stark contrast with his interview with Lex Fridman, who really didn't give him much more than a platform, with zero pushback. Sam made him work a lot harder to explain his often naive and overly optimistic views.

Side note: It actually made me think that Sam's podcast is probably worth paying for. Glad people like that exist.


Watching Lex interview two sides of an issue with VIPs is such a letdown. He's like the average YouTube commenter: whatever argument was last put before him is the one he's agreeing with. If you spend 95% of your interview agreeing with Yudkowsky, then offer those rebuttals to Marc Andreessen; don't spend Andreessen's interview 95% agreeing with him too. You're offering nothing, so Yudkowsky and Andreessen should just be debating each other without you there.


He was also on the Lex Fridman podcast a week ago. Seems like he's doing something of a roadshow related to AI.

https://www.youtube.com/watch?v=-hxeDjAxvJ8


He's got egg on his face now that blockchain failed to pan out as the next big thing. AI is the new hype/grift cycle.


You know, AI-will-be-super-positive people deride AI doomerism as the religious one. But the only religious appeals seem to come from the positivity camp. Another one is from Copilot's creator:

"Doomerism can be dispelled.

Believe that this universe is the one, maybe the only one in the multiverse, where things work out.

Otherwise humans, life, earth would be long gone by now."

https://twitter.com/alexgraveley/status/1553863686732775425


Famously this happens by good guys saying “there’s no problem here because good guys always win.”


I listened to the Sam Harris interview (the free part). I was disappointed that Marc Andreessen seemed to misunderstand the AI safety concerns of the alignment problem (undesirable subgoals of AI) and does not acknowledge the possibility that AI will use deception. I still think his essay is a good counter to the generally negative coverage of AI in the media and from effective altruism communities. I think catastrophic AI alignment is a much smaller risk than the people who own the AIs being unaligned (arms races with hostile countries, terrorists, other antisocial people using AIs).


The correct way to explain this concept of course is to note that in every story with a moral, the winners are designated as the good guys.


>> The moral of every story is "the good guys win"

> The correct way to explain this concept of course is to note that in every story with a moral, the winners are designated as the good guys.

That doesn't salvage it. His statement (and your rephrasing) is demonstrably false, as illustrated by many if not most cautionary tales.


>To listen to this interview as a podcast, click the link at the top of this email to add Stratechery to your podcast player.

Notwithstanding that this text appeared on a web page, no such link is evident there. Is the audio formatted version re-hosted somewhere?


https://open.spotify.com/show/1jRACH7L8EQCYKc5uW7aPk

I'm listening in preview mode. Ben says he prefers open standards for the podcast, but it appears paywalled and personalized on top of an open standard.


Call me crazy, but AI will not be as transformative as the hype claims it will be. We’ll get plenty of AI enabled tools that will help us do our jobs better/faster, but they’re not going to do the jobs for us.


You just described AI helping... but not doing the job. I think that's the definition of a job: anything the person is left doing is the job part, and anything machines or AI do is not the job anymore. As machines help more, jobs become different.


So you are saying that for some mysterious reason, machines cannot become as smart as humans. Mysterious.


I just heard him on Sam Harris' podcast. It was a weird experience because he spoke with the patience and respect of someone who is intellectually honest and quite articulate, but at the same time, his reasoning struck me as really naïve. Some of his arguments include "even if AI becomes super smart, it won't be a threat because the smarter humans aren't even in charge as it is", "We can talk to it (GPT-4), so we can just ask it what its intentions are", and my favourite: "Meh, the good guys always win anyway".

I think it's great to hear counter points to the dystopic fear mongering, in fact, I would love to hear more of it. But "It's not alive, it's like your toaster!" just isn't very convincing when talking about what is probably the most disruptive technology since fire, by his own words.


Is there anyone prominent in the tech world who is pessimistic? I am tired of this relentless optimism regarding new technologies.


When's the last time you saw a pessimistic car salesman? Same thing.

Now, a mechanic....


A lot of the worst x-risk AI doomers are... selling AI (Altman at OpenAI being the most visible).

X-risk, of course, is leveraged as a powerful argument for regulation which narrows the field to a small number of favored firms which have privileged mutual relations with government, with a difficult on-ramp for anyone else, and minimum public transparency.


Both Altman (OpenAI) and Hassabis (DeepMind) have said existential risk from AI is real.


But their actions say that they don't really believe it. Or that they're sociopathic. Pick your poison.


I don't think they do. You might disagree but I think Altman's stance is that this will happen no matter what, so it's better for "good guys" (for some definition of that word) to take the lead to minimize the threat.


Yes, I understand his stance. I think that stance is, at best, misguided. But given how the worldcoin thing shook out, it seems to me that he doesn't have a great deal of concern over how his actions impact others, and I question his ability to judge who are the "good guys" and "bad guys". I am trying to view him in the most positive possible light here.


Musk is pessimist about AI.


Two of the three so-called "Godfathers of AI," Yoshua Bengio and Geoffrey Hinton, as well as Sam Altman and Bill Gates, have signed the Statement on AI Risk.

https://www.safe.ai/statement-on-ai-risk#open-letter


If I recall correctly, sometime during the pandemic Marc Andreessen lamented how all that tech was good for nothing.

Changing the world requires changing the software of society. We already have all the tech we need for that. But it's not happening.


I thought web3 was going to change the world, Marc.


The use case for AI is spam.


TL;DR: A fluffy conversation nominally about AI, with ample I'm-not-like-the-others disclaimers from both parties, spends more time Monday-morning-quarterbacking COVID policy and generally back-patting than saying anything meaningful about AI.


Why does anybody still pay attention to this guy?


Because outside of the tech crowd, nobody really knows anything about him beyond that he's wealthy and in tech. At most, some might remember him as being a developer on Mosaic.


One funny thing about MA is how happily he broadcasts his white supremacist tendencies on Twitter (by following lots of creepy ones). Makes me hope he won't be in charge of changing my world.

EDIT: e.g. @FistedFoucault, @empathyhaver, @0x49fa98, see also: https://theoutline.com/post/6708/marc-andreessen-twitter-fav...


> One funny thing about MA is how happily he broadcasts his white supremacist tendencies on Twitter (by following lots of creepy ones).

Can you go into more details about that? Who exactly are the "creepy ones" he's following on Twitter?

> Makes me hope he won't be in charge of changing my world.

He won't be in charge, but he'll have a disproportionate influence. After all, he's rich and powerful.


> Can you go into more details about that? Who exactly are the "creepy ones" he's following on Twitter?

A while ago, someone wrote an article on it: https://theoutline.com/post/6708/marc-andreessen-twitter-fav...

But that's very out of date. I found out (before reading that) by following him, and then for a month my "for you" feed was full of cringy far right accounts (not Robert Spencer-level, but not too subtle either). Just some random ones I saw recently: @FistedFoucault, @empathyhaver, @0x49fa98


He follows ~21,700 accounts. Do you think he shares or endorses the opinions of all those accounts?


I've never noticed him following any of the left-leaning accounts I know, and I've had tons of far right stuff that he follows recommended to me by Twitter after following him, so it seems like he's leaning one way.


Oh, so according to you, one side is merely "left leaning" and the other is "creepy" and "far right". Do you have any substantial counterarguments to any things he actually said, apart from guilt-by-association smearing him because you don't like politics of some accounts he is following?


I didn't exactly claim that he said something objectionable. I did accuse him of broadcasting white supremacist tendencies through his follows. You're free to interpret the significance of those follows differently. I personally would be uncomfortable following accounts that tweet and retweet clearly racist statements. You can go to @empathyhaver for examples literally from the last couple of days.


> I did accuse him of broadcasting white supremacist tendencies by his follows.

Simply following someone doesn't count as "broadcasting". If you don't like things being recommended by the Twitter algorithm, just use the "following" tab. If you don't, that's on you.


What does it count as to follow a bunch of racist tweeters? Are you saying it's not indicative of anything?


If you can think of a better word than “creepy” to describe Milo Yiannopoulos, I'd like to hear it.


If you have factual arguments instead of ad hominem attacks, I'd like to hear them.


I can't stand the guy, but if you're going to make accusations like that you should bring receipts.


Fair point, added.



