They are researchers, not professional presenters. I promise you, if I told you to do a live demo on stage for 20 minutes, going back and forth between scripted and unscripted content, in front of an audience of at least 50 million people, then unless you do this a lot, you would do the same or worse.
I know this because this is what I do for a living. I have seen 1000s of "normal" people be extremely awkward on stage. Much more so than this.
It's super unfortunate that, because we live in the social media/YouTube era, everyone is expected to be this perfect person on camera. Because why wouldn't they be? That's all they see.
I am glad that they use normal people who act like themselves, rather than hiring actors or taking researchers away from what they love to do and telling them they need to become professional in-front-of-camera people because "we have the GPT-5 launch." That would be a nightmare.
It's a group of scientists sharing their work with the world, but people just want "better marketing" :\
I think they're copping this criticism because it's neither one thing nor the other. If it was really just a group of scientists being themselves, some of us would appreciate that. And if it was inauthentic but performed by great actors, most people wouldn't notice or care about the fakeness. This is somewhere in the middle, so it feels very unnatural and a bit weird.
You're describing low-skilled presenters. That is what it looks like when you put someone up in front of a camera and tell them to communicate a lot of information. You're not thinking about "being yourself"; you're thinking about how to not forget your lines, not mess up, not think about the different outcomes of the prompt that you might have to deal with, etc.
This was my point. "Being yourself" on camera is hard. This comes across, apparently shockingly, as being devoid of emotion and/or robotic.
Yeah, but I disagree with you a bit. If it were less heavily scripted, it may or may not be going well, but it would feel very different from this and would not be copping the same criticisms. Or if they unashamedly leant into the scriptedness and didn't try to simulate normal human interaction, they would be criticised for being "wooden" or whatever, but it wouldn't have this slightly creepy vibe.
I think for me, just knowing what is probably on the teleprompter, and what is not, I am willing to bet a lot of the "wooden" vibe you are getting is actually NOT scripted.
There is no way for people to remember 20 minutes of dialogue, so when they are not looking at the camera, that is unscripted, and vice versa.
I agree with your take, makes a lot of sense. I think most of the criticism I see directed at the presenters seems unfair. I guess some people expect them to be both genius engineers and expert on-screen personalities. They may feel a little stiff or scripted at times, but as an engineer myself I know I’d do a hell of a lot worse under these circumstances. Your view seems like a reasonable one to me.
You are acting like there aren't hundreds of well-received talks given at programming conferences every year, or that being a good presenter is not a requirement in academic research.
Also, whether OpenAI is a research organization is very much up for debate. They definitely have the resources to hire a good spokesperson if they wanted.
I don't know how many conferences you have been to, but most talks are painfully bad. The ones that get popular are the best, given by people who love speaking, hence why you are seeing them speak (selection bias at its finest).
They do have the resources (see WWDC); the question is whether you want to take your technical staff off of their work for the amount of time it takes to develop the skill.
It's better marketing, and more credible, to have the researcher say "We think GPT-5 is the best model for developers; we used it extensively internally. Here, let me give you an example..." than it is for Matthew McConaughey to say the same.
They shouldn't be presenting if they can't present.
"Minimal reasoning means that the reasoning will be minimal..."
Jakub Pachocki at the end is probably one of the worst public speakers I've ever seen. It's fine, it's not his mother tongue, and public speaking is hard. Why make him do it then?
Totally. I mean at this point Elon has 1000s of hours of practice doing interviews, pitches, presentations, conferences, etc. See Sam Altman in this context.
Have to disagree on this. Watching Elon trying to get a thought out always makes me cringe. Something about his communication style is incredibly frustrating for me.
If I'm not mistaken, Musk has admitted he has Asperger's. His speech resembles that of someone with a mild autism spectrum disorder.
Still, I'd rather patiently wait for him to serialize his thoughts, than to listen to some super fluent person saying utter nonsense, especially if it's a pitch talk. It's all about _what_ is being said, not _how_.
Yes, I think that’s right, but I find _what_ he says to be equally intolerable. I’m not interested in waiting patiently to hear someone overpromise again and again, contradict themselves on almost everything, pick public fights, or toss out provocative bad takes. This is a guy who hyped Dogecoin, called a guy he didn’t like a pedo, announced Tesla would take Bitcoin and then reversed it, dismissed the pandemic as “dumb,” amplified far-right talking points, lashes out at anyone who challenges him, the list goes on and on. Honestly, it’d be better if he just stopped talking.
Looks like we're listening to different Elons. One is a tech guy, the other is a politician, so to speak. A long time ago I decided for myself that I never trust words based solely on their origin. However, I take into account what the person is known for and their profile of competence. I think no one would dispute that Elon is competent in technology. Yes, Elon time is a thing, but again, what he does is not just another college project, you know. Space is hard, and so are AI and the human brain. There is a fair amount of uncertainty in the domain itself, which makes all predictions estimates at most.
Sure, you can ask your grandma for investment advice and that would probably be a poor decision, unless she worked in finance her whole life. On the other hand, she would probably be more than competent to answer how to make cookies and pies.
Long story short: use your brain, don't trust media, do your own research. Even Nobel winners are often known for some crazy or unscientific stuff. People are people, after all. You can't be an expert in everything. Otherwise we'd cancel literally everyone.
It seemed like good performances from people whose main skillset is not this.
For me, it's knowing what we know about the company and its history that gave me an eerie feeling, in combination with the sterility.
When they brought on the woman who has cancer, I felt deeply uncomfortable. My dad also has cancer right now. He's unlikely to survive. Watching a cancer patient come on to tell her story as part of an extended advertisement, expression serene, any hint of discomfort or pain or fear or bitterness completely hidden, ongoing hardship acknowledged only with a few shallow and euphemistic words, was hard for me to watch.
Maybe this person enthusiastically volunteered, because she feels happy about what her husband is working on, and grateful for the ways that ChatGPT has helped her prepare for her appointments with doctors. I don't want to disrespect or discredit her, and I've also used LLMs alongside web searches in trying to formulate questions about my father's illness, so I understand how this is a real use case.
But something about it just felt wrong, inauthentic. I found myself wondering if she or her husband felt pressured to make this appearance. I also wondered if this kind of storytelling was irresponsible or deceptive: designed to describe technically responsible uses of LLMs (preparing notes for doctor's visits, where someone will verify the LLM's outputs against real expertise), but to suggest in every conceivable implicit way that ChatGPT is actually capable of medical expertise itself. Put alongside "subject-matter experts in your pocket" and talk of use in medical research and practice (where machine learning has a dubious history of deception and methodological misapplication problems), what are people likely to think?
I thought also of my mom, who drives herself crazy with anxiety every time my dad gets a new test result, obsessively trying to interpret them herself from the moment they arrive until his doctor's visit a week or two later. What impression would this clip leave on her? Does the idea of her using an LLM in this way feel safe to me?
There's a deeper sense that OpenAI's messaging, mission, and orientation are some mixture of deceptive and incoherent that leaves viewers with the sense that we're being lied to in presentations like this. It goes beyond stiff performances or rehearsed choices of words.
There's something cultish about the "AGI" hype, the sci-fi fever dream of "safety" problems that the field has mainstreamed, the slippage of OpenAI from a non-profit research institution to a for-profit startup all while claiming to be focused on the same mission, the role of AI as an oracle so opaque it might as well be magic, the idea of finding a sacred "rationality" in predictions founded purely on statistics without communicable/interrogable structural or causal models... all of it. It's against this backdrop that the same kind of stiffness that might be cute or campy in an infomercial for kitchen gadgets becomes uncanny.
Not even 10 seconds after I started watching the stream, someone said how much more human GPT-5 is, while the people sitting and talking about it don't seem human at all, and it's not an accent/language thing. Seems they're strictly following a dialogue script that is trying to make them seem "impromptu" but the acting isn't quite there for that :)
I use LLMs to get answers to queries, but I avoid having conversations with them because I'm aware we pick up idiosyncrasies and colloquialisms from everyone we interact with. People who spend all day talking to their GPT voice will adjust their speaking style to be more similar to the bot.
I developed this paranoia upon learning about The Ape and the Child where they raised a chimp alongside a baby boy and found the human adapted to chimp behavior faster than the chimp adapted to human behavior. I fear the same with bots, we'll become more like them faster than they'll become like us.
One woman who went through her calendar with GPT did a good job of acting that the GPT reply had surfaced impromptu information (an email she needed to answer), and someone staged GPT-5 making a landing page for a French-learning website, which butchered its own design in the second run; but that's all the good acting for a "candid presentation" that I could find.
I laughed my ass off immediately after it gave that output, until the presenter made clear that it was a flash card for learning the words "the cat" in French, and backed it up.
I don’t blame them, they aren’t actors. And yes, it’s clearly not impromptu, but I am trying to not let that take away from the message they are communicating. :)
Presenting where you have to be exactly on the content with no deviation is hard. To do that without sounding like a robot is very hard.
Presenting isn't that hard if you know your content thoroughly, and care about it. You just get up and talk about something that you care about, within a somewhat-structured outline.
Presenting where customers and the financial press are watching and parsing every word, and any slip of the tongue can have real consequences? Yeah, um... find somebody else.
Steve Jobs was meant for moments like this. He would have explained everything crystal clear. Everyone else pales in comparison. I wish he were here to explain the current state of AI.
interesting how they put this effort into making us feel psychologically at ease, with everyone wearing blue shirts, open body language, etc., just to give off sterile robotic vibes. also noticed a dude reading off his hand 45 minutes in; you'd think they would have brought in a few teleprompters.
this is just the way the american middle and upper classes are going. this kind of language/vibe is the default outside of a specific type of WASP, IME at least.