GPT-3 can generate a React app from natural language description (twitter.com/sharifshameem)
44 points by andxor on July 21, 2020 | hide | past | favorite | 28 comments


Soon machines will steal all the copy-pasting from Stackoverflow jobs.

But really, this is super impressive for ML. It's not software development, though. It's analyzing a corpus and performing a search.

The nasty bit is that if that is all it takes to do most software dev jobs (and hey, we might find that most software dev really is just pattern matching and regurgitating those patterns), then as the project gets more complex, the specifications will have to get more complex too, and we'll end up mostly back where we started.


This is great for all the people who hate gluing together boilerplate and pipes. In my opinion, the hardest part of development is the "natural language description": gathering requirements, making sure everyone is on the same page about what is being requested, etc. If we're able to quickly generate prototypes from that language, it greatly reduces the cycle time from ideation to prototype. What I foresee is that companies will need fewer developers overall, but more companies will bring in technical talent they wouldn't have otherwise, knowing it's that much easier to get working code.


The trick will be whether the code generation works well enough most of the time, or whether it generates some 1% weirdness that becomes a needle in a haystack. It seems to me it will be great for product manager/business person prototypes, and potentially helpful as "hyper auto-complete" for a developer, but likely not for automating mission-critical code.


we just need an adversarial network to generate unit tests


I don't really understand. How is GPT-3 able to do this? Understanding English is one thing, but how did it learn JavaScript, HTML, CSS, and the React framework?


It doesn't understand English. It knows how to make up text that looks a lot like the text that it read. It read a lot of examples of code, so it knows how to make up code that looks a lot like what it read. Some of this code works, since some of what it read works. But if it read too much non-working code from Stack Overflow questions, it might write code that doesn't work.

Edit: apparently it read a lot of Github repositories, so most of what it mimics is working code.


Apparently it was trained over GitHub repositories.

https://youtu.be/fZSFNUT6iY8


What I'm wondering is how it matched up the natural English description with the repository content, though.


The prompt contained a few react examples with annotated English descriptions that were similar enough. You’re not seeing the entire prompt and as impressive as this is, it’s likely that you’d end up writing more code trying to get it to do what you want than if you just wrote it yourself. Future GPTs may of course be better, but there’s a bit of “magic” to these demos that the author isn’t super upfront about.
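A minimal sketch of the kind of few-shot prompt described above: annotated description/code pairs, ending with the new description for the model to complete. The example pairs here are invented for illustration; the demo's actual prompt isn't public.

```python
# Hypothetical few-shot prompt for "description -> React code".
# The example pairs are invented, not the demo's real prompt.
EXAMPLES = [
    ("a button that says hello",
     '<button onClick={() => alert("hello")}>hello</button>'),
    ("a red heading that says welcome",
     '<h1 style={{color: "red"}}>welcome</h1>'),
]

def build_prompt(new_description):
    # Each example becomes a "description:/code:" pair; the prompt
    # ends mid-pair so the model's continuation is the new code.
    parts = [f"description: {desc}\ncode: {code}" for desc, code in EXAMPLES]
    parts.append(f"description: {new_description}\ncode:")
    return "\n\n".join(parts)

print(build_prompt("a blue paragraph that says hi"))
```

The point of the trailing `code:` is that the model's most likely continuation of the pattern is the code itself; everything before it is the "magic" the viewer doesn't see.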


I'd like to see this done with tests as the input, and the working app as the output. Then writing an app would be as simple as defining the tests it must pass.


> Then writing an app would be as simple as defining the tests it must pass

In most cases, if you can define the tests, you have already figured out 80% of the solution.
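A toy version of the "tests as spec" idea: the spec is just a set of input/output pairs, and a candidate implementation (standing in for generated code) is accepted only if it satisfies every one.

```python
# Toy "the tests are the specification": accept a candidate
# implementation only if it matches every input/output pair.
SPEC = [((2, 3), 5), ((0, 0), 0), ((-1, 1), 0)]

def satisfies_spec(candidate):
    return all(candidate(*args) == expected for args, expected in SPEC)

def candidate_add(a, b):  # hand-written stand-in for generated code
    return a + b

print(satisfies_spec(candidate_add))  # True
```

Even this trivial spec illustrates the parent's point: choosing the pairs already encodes most of the design.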


Probably through tutorials and verbosely-commented code examples?


The twist to this is that no matter what you ask the model, it will make you a TODO app because that’s all it’s been trained on.


Fun gimmick, but nothing more.


This comment seems short-sighted. Code that can generate a working set of interlinked page elements from natural English? How on earth is that a gimmick? This is incredible. This is the Pong to the Crysis of 25 years from now, when we can describe entire applications into existence.

I can't fathom the mindset that dismisses this as a gimmick. Is the first step on every journey meaningless? And this isn't even the first step-- this is miles in!


It's only a tiny step above searching Github and copy-pasting.

Searching and copy-pasting isn't the hard part of software development. Guaranteeing validity, safety and performance is the hard part.

This doesn't get us any closer to the goal. (In fact it's the opposite, a big step back.)


This is something nobody's ever done before, and which nobody could have done before. How can you dismiss it so easily? If this is a step back, what does a step forward even look like?

Generating code with guaranteed validity, safety, and performance is a worthy goal, but how can you work on it without having an AI that can generate any code in the first place?

I'm just endlessly confused by this mindset. If you wanted a palace and someone built you the foundation, would you deride it as being nothing like a palace and in fact a step AWAY from a palace?


> If you wanted a palace and someone built you the foundation, would you deride it as being nothing like a palace and in fact a step AWAY from a palace?

Depends on whether it's actually possible to build a palace on top of the foundation. If I know the foundation is going to crumble under the extra weight of the walls, then yes, it's a step backwards.

This is a fantastic achievement, and I think people who are saying it's nothing new are kind of being obstinate. But there is real debate over whether GPT is a foundation we can build on.

It's not as simple as just saying, "generate the code, and then we'll come up with ways to make sure the code is correct." Fundamentally, that might require us to generate the code differently than GPT does. GPT's foundation might crumble under the weight when we try to put walls on top of it.

This is particularly worrisome with GPT because it's still a very active area of research, so we don't know for sure that GPT's weaknesses aren't intrinsic to its design. We could end up devoting a ton of time to pushing GPT to its limits only to find out that the entire process has to be scrapped and that we'll need to start over from the beginning.

I think people have a tendency to see something new and either only see the capabilities or only see the weaknesses. There have been some startlingly impressive things coming out of GPT-3, in particular the 'infinite' text adventure someone posted a while ago. But all of those projects have also had substantial weaknesses, and the weaknesses form a pattern across all of them. There are certain tendencies GPT seems to universally have -- recycling content, going off on weird asides -- stuff that should have proponents at least slightly worried, even while they rightly praise its advances.


> We could end up devoting a ton of time to pushing GPT to its limits only to find out that the entire process has to be scrapped and that we'll need to start over from the beginning.

This is perfectly fine. In fact it's the only way forward currently. Unless you have some alternative which is more promising? GPT models are like large convolutional networks in 2012 - they were so much better than all existing CV approaches at the time that it didn't make any sense to keep working on those other approaches.


Totally agree with you! I'm definitely not arguing that GPT-3 is necessarily the foundation; I'm arguing that it's part of the research process by which we build the foundation.


Don’t know if you’ve noticed but many day jobs are exactly “searching/copy+pasting.”

Think of this as a new IDE. Not an end to end CI/CD.

Non-coders can compose React components, vet them, commit to artifact repo for further use/analysis.

Oh yes, sure, more abstraction is needed "down below", and more testing.

Also, has much of anything gotten us closer to safety? Have you proved that the code you import as deps is safe all the way down?

And yet... still works.

The clean room, perfect implementation you’re chasing can never exist. Much of this is up to imagination more so than provably correct math.


Predicting the future of one technology from the development of another technology doesn’t make much sense imo. Where are our fusion reactors and flying cars ;-)?


It looks like a gimmick but it certainly is not. This is a highly underrated demo.

GPT-3 incorporated into IDEs could provide a productivity boost. Imagine this coupled with a testing framework. I would not trust a GPT-3-produced code base on its own, since it does not "understand" what it is doing, but if it passes all the tests, I might be able to. Would you?
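A hedged sketch of that "trust it only if it passes the tests" coupling: a stand-in generator proposes candidate sources, and we keep the first one the test suite accepts. A real system would sample candidates from a model; here they are a fixed, invented list.

```python
# Generate-and-gate sketch: keep the first candidate that passes
# the test suite. CANDIDATES stands in for model samples.
CANDIDATES = [
    "def double(x): return x + 2",  # plausible-looking but wrong
    "def double(x): return x * 2",  # correct
]

def passes_tests(source):
    # Execute the candidate in a scratch namespace and run the suite.
    scope = {}
    exec(source, scope)
    double = scope["double"]
    return double(2) == 4 and double(0) == 0 and double(-3) == -6

def first_verified(candidates):
    for source in candidates:
        if passes_tests(source):
            return source
    return None  # nothing passed

print(first_verified(CANDIDATES))
```

Note the gate is only as trustworthy as the suite: the wrong candidate above survives the `double(2)` check and is only caught by the others.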


For now... :)


I did not see that your comment was a reply to another comment, and thought it referred to the post title. :-D


That could very well be the reality. Everyday more people are training this thing on god knows what tasks.


This will work really well for schema design and generation: a very structured problem with a reasonably small set of good solutions. This is something you can already do with schema design tools, where you draw a schema and the tool produces it. But doing it through a textual description sounds even more impressive, especially if it can auto-magically fill in all the fields I might possibly need. Again, these are not seismic game changers, but productivity enhancers.
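A toy stand-in for "schema from a description", just to show the input/output shape such a tool implies. This is a rule-based lookup, nothing like GPT-3, and the field names and type mapping are invented.

```python
# Toy description-to-DDL sketch: invented field names are mapped to
# column types and emitted as CREATE TABLE statements. A model-based
# tool would infer the fields from free text instead.
TYPE_HINTS = {
    "id": "INTEGER PRIMARY KEY",
    "name": "TEXT",
    "email": "TEXT",
    "created_at": "TIMESTAMP",
}

def schema_from_fields(table, fields):
    cols = ",\n  ".join(f"{f} {TYPE_HINTS.get(f, 'TEXT')}" for f in fields)
    return f"CREATE TABLE {table} (\n  {cols}\n);"

print(schema_from_fields("users", ["id", "name", "email", "created_at"]))
```

The "auto-magically fill in the fields" step is exactly the part the lookup table fakes here and a model would have to get right.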


I've seen numerous tweets from this guy and they frankly annoy me. If you are about to bring down the hammer on my career, using, by the way, exclusive access to an API that you have through connections, then just do it already. Stop with this teaser-video bullshit that we've seen many times, as if you were stirring up a frenzy at a games con with demo reels. Show me the money, or take your smug Twitter avatar and gtfo.



