I've tried it multiple times, but even after spending 4 hours on a fresh project I don't feel like I know what the hell is going on anymore.
At that point I'm just guessing at what the next prompt should be to make it work.
I have no critical knowledge about the codebase that makes me feel like I could fix an edge case without reading the source code line by line (which at that point would probably take longer than 4 hours).
I don't understand how anyone can work like that and have confidence in their code.
> I have no critical knowledge about the codebase that makes me feel like I could fix an edge case without reading the source code line by line (which at that point would probably take longer than 4 hours).
Peter Naur argues that programming is fundamentally an activity of theory building, not just program text production. The code itself is merely the artifact of the real work.
You must not confuse the artifact (the source code) with the mind that produced the artifact. The theory is not contained in the text output of the theory-making process.
The problems of program modification arise from acting on the assumption that programming is just text production; the decay of a program is a result of modifications made by programmers without a proper grasp of the underlying theory. LLMs cannot obtain Naur's Ryleian "theory" because they "ingest the output of work" rather than developing the theory by doing the work.
LLMs may _appear_ to have a theory about a program, but this is an illusion.
To believe that LLMs can write software, one must mistakenly assume that the main activity of the programmer is simply to produce source code, which is (according to Naur) inaccurate.
I agree with this take, but I'm wondering what vibe coders are doing differently?
Are they mainly using certain frameworks that already have a rigid structure, thus allowing LLMs to not worry about code structure/software architecture?
Are they worry-free, just running with it?
Not asking rhetorically, I seriously want to know.
This is one of the most insightful thoughts I've read about the role of LLMs in software development. So much so, indeed, that its pertinence would remain pristine even after removing all references to LLMs.
There isn't a whole lot of theory building when you're writing a shell script or a Kubernetes manifest... You're just glad to get the damn thing working.
I recently made a few changes to a small personal web app using an LLM. Everything was 100% within my capabilities to pull off. Easily a few levels below the limits of my knowledge. And I’d already written the start of the code by hand. So when I went to AI I could give it small tasks.
Create a React context component, store this in there, and use it in this file. Most of that code is boilerplate.
Poll this API endpoint in this file and populate the context with the result. Only a few lines of code.
Update all API calls to that endpoint with a view into the context.
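Those steps could be sketched roughly like this. This is a framework-agnostic sketch, not the commenter's actual code: `Store` stands in for the React context, and `fetchOnce` stands in for the real API call, all names hypothetical.

```typescript
// A tiny observable store, standing in for a React context:
// components would "view into" it via subscribe().
type Listener<T> = (value: T) => void;

class Store<T> {
  private listeners: Listener<T>[] = [];
  constructor(private value: T) {}

  get(): T {
    return this.value;
  }

  set(next: T): void {
    this.value = next;
    this.listeners.forEach((l) => l(next));
  }

  subscribe(l: Listener<T>): () => void {
    this.listeners.push(l);
    return () => {
      this.listeners = this.listeners.filter((x) => x !== l);
    };
  }
}

// Poll an endpoint `times` times and push each result into the store.
// `fetchOnce` is a placeholder for the real API call.
async function pollInto<T>(
  store: Store<T>,
  fetchOnce: () => Promise<T>,
  times: number
): Promise<void> {
  for (let i = 0; i < times; i++) {
    store.set(await fetchOnce());
  }
}
```

Each step is small and mechanical, which is presumably why it hands off to an LLM so cleanly: the structure is already decided, and only boilerplate remains.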
I can give the AI those steps as a list and go adjust styles on the page to my liking while it works. That kind of parallelism isn't what I've typically found with LLMs, though. Often you're stuck figuring out a solution, and in that case the AI isn't much help. But some code is mostly boilerplate, and some is really simple. Just always read through everything it gives you and fix up the issues.
After that sequence of edits I don’t feel any less knowledgeable about the code. I completely comprehend every line and still have the whole app mapped in my head.
Probably the biggest benefit I’ve found is getting over the activation energy of starting something. Sometimes I’d rather polish up AI code than start from a blank file.
It's interesting that this is a similar criticism to what was levelled at Ruby on Rails back in the day. I think generating a bunch of code - whether through AI or a "framework" - always has the effect of obscuring the mental model of what's going on. Though at least with Rails there's a consistent output for a given input that can eventually be grokked.
A framework provides axioms that your theory-building can work on top of. It works until there are bugs or inconsistencies in the framework that mean you can't trust those axioms.