Hacker News

This comment seems short-sighted. Code that can generate a working set of interlinked page elements from natural English? How on earth is that a gimmick? This is incredible. This is the Pong to the Crysis of 25 years from now, when we can describe entire applications into existence.

I can't fathom the mindset that dismisses this as a gimmick. Is the first step on every journey meaningless? And this isn't even the first step -- this is miles in!



It's only a tiny step above searching Github and copy-pasting.

Searching and copy-pasting isn't the hard part of software development. Guaranteeing validity, safety and performance is the hard part.

This doesn't get us any closer to the goal. (In fact it's the opposite, a big step back.)
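The validity point above can be made concrete with a small sketch (the function and its bug are invented for illustration, not taken from any actual GPT output): code that looks right and passes a happy-path check can still be wrong in ways that searching and pasting never surfaces.

```typescript
// A plausible-looking "found on GitHub" median: sorts the input and
// takes the middle element, but silently mishandles even-length arrays.
function median(values: number[]): number {
  const ordered = [...values].sort((a, b) => a - b);
  return ordered[Math.floor(ordered.length / 2)];
}

// The happy-path check passes...
console.log(median([3, 1, 2])); // 2

// ...but the even-length case is wrong: the median of [1, 2, 3, 4]
// should be 2.5, and this returns 3. Producing the code was easy;
// guaranteeing its validity is the part no copy-paste step solves.
console.log(median([1, 2, 3, 4])); // 3, not 2.5
```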


This is something nobody's ever done before, and which nobody could have done before. How can you dismiss it so easily? If this is a step back, what does a step forward even look like?

Generating code with guaranteed validity, safety, and performance is a worthy goal, but how can you work on it without having an AI that can generate any code in the first place?

I'm just endlessly confused by this mindset. If you wanted a palace and someone built you the foundation, would you deride it as being nothing like a palace and in fact a step AWAY from a palace?


> If you wanted a palace and someone built you the foundation, would you deride it as being nothing like a palace and in fact a step AWAY from a palace?

Depends on whether it's actually possible to build a palace on top of the foundation. If I know the foundation is going to crumble under the extra weight of the walls, then yes, it's a step backwards.

This is a fantastic achievement, and I think people who are saying it's nothing new are kind of being obstinate. But there is real debate over whether GPT is a foundation we can build on.

It's not as simple as just saying, "generate the code, and then we'll come up with ways to make sure the code is correct." Fundamentally, that might require us to generate the code differently than GPT does. GPT's foundation might crumble under the weight when we try to put walls on top of it.

This is particularly worrisome with GPT because it's still a very active area of research, so we don't know for sure that GPT's weaknesses aren't intrinsic to its design. We could end up devoting a ton of time to pushing GPT to its limits only to find out that the entire process has to be scrapped and that we'll need to start over from the beginning.

I think people have a tendency to see something new and either only see the capabilities or only see the weaknesses. There have been some startlingly impressive things coming out of GPT-3, in particular the 'infinite' text adventure someone posted a while ago. But all of those projects have also had substantial weaknesses, and the weaknesses are forming a pattern across all of the projects. There are certain tendencies that GPT seems to universally have -- recycling content, going off on weird asides -- stuff that should have proponents at least slightly worried, even while they rightly praise its advances.


> We could end up devoting a ton of time to pushing GPT to its limits only to find out that the entire process has to be scrapped and that we'll need to start over from the beginning.

This is perfectly fine. In fact it's the only way forward currently. Unless you have some alternative which is more promising? GPT models are like large convolutional networks in 2012 - they were so much better than all existing CV approaches at the time that it didn't make any sense to keep working on those other approaches.


Totally agree with you! I'm definitely not arguing that GPT-3 is necessarily the foundation; I'm arguing that it's part of the research process by which we build the foundation.


Don’t know if you’ve noticed, but many day jobs are exactly “searching/copy+pasting.”

Think of this as a new IDE, not an end-to-end CI/CD pipeline.

Non-coders can compose React components, vet them, and commit them to an artifact repo for further use/analysis.
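As a sketch of the "compose, vet, commit" workflow being described (everything here is hypothetical: the component name is invented, and a plain function returning markup stands in for a real React component):

```typescript
// Hypothetical generated "component": a plain function rendering an
// HTML string, standing in for a real React component for simplicity.
function GreetingCard(props: { name: string }): string {
  return `<div class="card"><h2>Hello, ${props.name}!</h2></div>`;
}

// A non-coder "vetting" step: check the rendered output actually
// contains what the natural-language prompt asked for, before the
// component is committed to an artifact repo.
const html = GreetingCard({ name: "Ada" });
console.log(html.includes("Hello, Ada")); // true
```

The vetting check is deliberately shallow -- it verifies the visible output, not correctness in general, which is exactly the gap the parent comments are debating.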

Oh yes, sure, more abstraction is needed “down below,” along with testing.

Also, has much of anything gotten us closer to safety? Have you proved that the code you import as deps is safe all the way down?

And yet... still works.

The clean-room, perfect implementation you’re chasing can never exist. Much of this comes down to imagination more than provably correct math.


Predicting the future of one technology from the development of another doesn’t make much sense, imo. Where are our fusion reactors and flying cars? ;-)



