The sad reality is that this is probably not a solvable problem. AI will improve more rapidly than the education system can adapt. Within a few years it won't make sense for people to learn how to write actual code, and it won't be clear until then which skills are actually useful to learn.
My recommendation would be to encourage students to ask the LLM to quiz and tutor them, but ultimately I think most students will learn a lot less than say 5 years ago while the top 5% or so will learn a lot more.
> AI will improve more rapidly than the education system can adapt
We’ll see a new class division scaffolded on the existing one around screens. (Schools in rich communities have no screens. Students turn in their phones and watches at the beginning of the day. Schools in poor ones have them everywhere, including everywhere at home.)
Every school here in Colorado has students work off their Chromebooks, regardless of how rich the community is. This started with the Covid lockdowns and is pretty much standard now.
> most students will learn a lot less than say 5 years ago while the top 5% or so will learn a lot more
If we assume that AI will automate many/most programming jobs (which is highly debatable and I don't believe is true, but just for the sake of argument), isn't this a good outcome? If most parts of programming are automatable and only the really tricky parts need human programmers, wouldn't it be convenient if there are fewer human programmers but the ones that do exist are really skilled?
Well, as a college student planning to start a CS program, I can tell you that it actually sounds fine to me.
And I think that teachers can adapt. A few weeks ago, my English professor assigned us an essay where we had to ask ChatGPT a question and analyze its response and check its sources. I could imagine something similar in a programming course. "Ask ChatGPT to write code to this spec, then iterate on its output and fix its errors" would teach students some of the skills to use LLMs for coding.
This is probably useful and better than nothing, but the problem is that by the time you graduate, it's unlikely that reading LLM output will still be a useful skill.
Tons of devs (CS-grad devs, that is) have made their careers writing basic CRUD apps, iOS apps, or Python stuff that probably doesn't scratch the surface of all the CS coursework they did in their degree. It's just like everyone cramming for leetcode interviews but never using that stuff on the job. Being familiar with LLMs today will give you an advantage when they change tomorrow; you can adapt with the technology after college is over. Granted, there will likely be fewer devs needed, but demand for the highly skilled ones could move upwards as demand for this new AI tech increases.
Fair point. Perhaps I'm just too pessimistic or narrow-minded, but I don't believe that LLMs will progress to that level of capability any time soon. If you think that they will, your view makes a great deal of sense. Agree to disagree.
Right, but if AI gets to the point where it can replace developers (which includes a lot of fuzzy requirement interpretation, etc.), then it will replace most other jobs as well, and it wouldn't have helped to become a lawyer or doctor.
It's not cruel, it's stupid. Why would we organize our society in such a way that people would be drawn towards such paths in the first place, where your comfort and security are your first concerns and taking risks, doing something new, is not even on your mind?
> where your comfort and security are your first concerns and taking risks, doing something new, is not even on your mind?
Because individually, lots of people seek low-risk, high-return occupations. Systemically, that doesn’t exist in the long run.
Societies do better when they take risks. Encouraging the population to embrace that risk-taking has been a running theme in successful societies, from the Romans and Chinese dynasties through American commerce and jugaad.
It may highlight some "frauds" (I don't know how to say it in English... you know, people who fake the job so hard but are just clowns, don't produce anything, are basically worthless, and are just there to grab some money for as long as the fraud keeps working).
You can still argue that LLMs won't replace human programmers without downplaying their capabilities. Modern SOTA LLMs can often produce genuinely impressive code. Full stop. I don't personally believe that LLMs are good enough to replace human developers, but claiming that LLMs are only capable of writing bad code is ridiculous and easily falsifiable.
> AI will improve more rapidly than the education system can adapt.
is entirely obvious, and:
> Within a few years it won't make sense for people to learn how to write actual code, and it won't be clear until then which skills are actually useful to learn.
is not obvious, but quite clear from how things are going. I expect actual writing of code "by hand" to be the same sort of activity as doing integrals by hand - something you may do either to advance the state of the art, or recreationally, but not something you would try to do "in anger" when faced with a looming project deadline.
> I expect actual writing of code "by hand" to be the same sort of activity as doing integrals by hand - something you may do either to advance the state of the art, or recreationally, but not something you would try to do "in anger" when faced with a looming project deadline.
This doesn’t seem like a good example. People who engineer systems that rely on integrals still know what an integral is. They might not be doing it manually, but it’s still part of the tower of knowledge that supports whatever work they are doing now. Say you are modeling some physical system in Matlab - you know what an integral is, how it connects with the higher level work that you’re doing, etc.
An example from programming: you know what process isolation is, and how memory is allocated, etc. You’re not explicitly working with that when you create a new python list that ends up on the heap, but it’s part of your tower of knowledge. If there’s a need, you can shake off the cobwebs and climb back down the tower a bit to figure something out.
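For instance (a small, self-contained illustration of my own, not anything specific to the comment above): even without touching C, you can peek one level below the list abstraction and watch CPython over-allocate the backing array on the heap.

```python
# Peek below the "Python list" abstraction: sys.getsizeof reports the size of the
# list object plus its internal pointer array (not the elements themselves), and
# the jumps show CPython over-allocating heap space so appends stay amortized O(1).
import sys

xs = []
last = sys.getsizeof(xs)
print(f"len= 0  container bytes={last}")
for i in range(32):
    xs.append(i)
    size = sys.getsizeof(xs)
    if size != last:
        print(f"len={len(xs):2d}  container bytes={size}")
        last = size
```

Nothing in day-to-day Python requires this, but having the mental model is exactly the "tower" being described: you know why the numbers jump, even if you never normally look.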
So here’s my contention: LLMs make it optional to have the tower of knowledge that is required today. Some people seem to be very productive with agentic coding tools today - because they already have the tower. We are in a liminal state that allows for this, since we all came up in the before time, struggling to get things to compile, scratching our heads at core dumps, etc.
What happens when you no longer need to have a mental model of what you’re doing? The hard problems in comp sci and software engineering are no less hard after the advent of LLMs.
Architects are not civil engineers and often don't know the details of construction, project management, structural engineering, etc. For a few years there will still be a role for a human "architect", but most of the specific low-level stuff will be automated. Eventually there won't be an architect either, but that may be 10 years away.
An LLM is a tool, and it's just as mad as slide rules, calculators, and PCs (I've seen them all, although slide rules were being phased out in my youth).
Coding via prompt is simply a new form of coding.
Remember that high level programming languages are "merely" a sop for us humans to avoid low level languages. The idea is that you will be more productive with say Python than you would with ASM or twiddling electrical switches that correspond to register inputs.
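As a small illustration of those layers (standard CPython, nothing beyond the point above), the `dis` module lets you watch one step of the descent: the human-friendly source is compiled to bytecode for a virtual machine that is itself implemented in C over the real instruction set.

```python
# Disassemble a tiny function to see the bytecode layer beneath the Python source.
import dis

def add(x, y):
    return x + y

dis.dis(add)
# Typical output (exact opcodes vary by CPython version):
#   LOAD_FAST  x
#   LOAD_FAST  y
#   BINARY_OP  0 (+)      <- BINARY_ADD on older versions
#   RETURN_VALUE
```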
A purist might note that using Python is not sufficiently close to the bare metal to be really productive.
My recommendation would be to encourage the tutor to ask the student how they use the LLM and to school them in effective use strategies - that will involve problem definition and formulation, and then an iterative effort to solve the problem. It will obviously involve learning how to spot and deal with hallucinations. They'll need to start discovering model quality for differing tasks, and all sorts of things that would have looked like sci-fi to me 10 years ago.
I think we are, with LLMs, at the "calculator on a digital wristwatch" stage that we had in the mid '80s, before the really decent scientific calculators rocked up. Those calculators are largely still what you get nowadays, too, and I suspect that LLMs will settle into a similar role.
They will be great tools when used appropriately, but they will not run the world, or if they do, not for very long - bye!
> Remember that high level programming languages are "merely" a sop for us humans to avoid low level languages.
High-level languages are deterministic and reliable, making it possible for developers to be confident that their high-level code is correct. LLMs are anything but deterministic and reliable.
You keep saying this, but have you used an LLM for coding before? You don't just vibe-code up some generated code (well, you can, but it will suck). You ask it to iterate on code and multiple artifacts at the same time (like tests) in many steps, and you are providing feedback, getting feedback, providing clarifications, checking small chunks of work (because you didn't just have it do everything at once), etc. You just aren't executing “vibecode -d [do the thing]” like you would with a traditional one-shot code generator.
It isn’t deterministic, in the same way a real programmer isn’t deterministic, and that’s why iteration is necessary.
Not all code written by humans is deterministic and reliable. And a properly guard-railed LLM can check its own output; you can even employ several, for higher consensus certainty. And we're just fuckin starting.
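A rough sketch of what such guard-rails could look like, purely illustrative: `generate_patch` and each `reviewer` below are hypothetical wrappers around whatever LLM calls you actually use, and the gates are the project's test suite plus a simple majority vote across independent reviewer models.

```python
# Illustrative sketch only: `generate_patch` and each `reviewer` are hypothetical
# LLM wrappers; assume generate_patch() applies its change to the working tree
# and returns a description of it.
import subprocess
from collections import Counter

def tests_pass() -> bool:
    """Guard-rail 1: the project's own test suite must pass on the candidate change."""
    return subprocess.run(["pytest", "-q"]).returncode == 0

def consensus_accepts(patch: str, reviewers, threshold: float = 0.66) -> bool:
    """Guard-rail 2: several independent models review the patch; accept it only
    when a clear majority approve (higher consensus certainty)."""
    votes = Counter(reviewer(patch) for reviewer in reviewers)  # each returns "approve"/"reject"
    return votes["approve"] / max(1, sum(votes.values())) >= threshold

def guarded_iteration(generate_patch, reviewers, max_rounds: int = 5):
    """Iterate with the LLM, but only keep output that clears both gates."""
    for _ in range(max_rounds):
        patch = generate_patch()  # hypothetical LLM call that edits the working tree
        if tests_pass() and consensus_accepts(patch, reviewers):
            return patch          # accepted under the guard-rails
    return None                   # no acceptable result; hand back to a human
```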
Unreliable code is incorrect and thus undesirable. We limit the risk through review and through understanding what we're doing, which is not possible when delegating both the code generation and the review.
Checking output can be done by testing, but test code can itself be unreliable, and testing in itself is no guarantee of correctness.
The only way reliable code could be produced without a human touching it would be to use formal specifications: have the LLM write a formal proof alongside the code and use some software to validate the proof. The formal specification would have to be written in some kind of programming language, and then we're somewhat back to square one (but perhaps with a new, higher-level language where you only define the specs formally rather than how to implement them).
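A minimal sketch of what that could look like, using Lean 4 (the names `double` and `double_spec` are just illustrative toys, and it assumes a recent toolchain where the `omega` tactic is available): the specification is a theorem statement, and the proof checker, not a human reviewer, decides whether the implementation satisfies it.

```lean
-- Toy example: imagine the implementation below was LLM-generated.
def double (n : Nat) : Nat := n + n

-- Formal specification: `double n` must always equal `2 * n`.
-- If the implementation above were wrong, this proof would fail to check.
theorem double_spec (n : Nat) : double n = 2 * n := by
  unfold double
  omega  -- decision procedure for linear arithmetic over Nat/Int
```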
But we as humans still have a need to understand the outputs of AI. We can't delegate this understanding task to AI, because then we wouldn't understand the AI and thus could not CONTROL what it is doing, or optimize its behavior so that it maximizes our benefit.
Therefore, I still see a need for high-level and even higher-level languages, but ones which are easy for humans to understand. AI can help, of course, but the challenge is how we can unambiguously communicate with machines and express our ideas concisely and understandably, both for us and for the machines.
> My recommendation would be to encourage the tutor to ask the student how they use the LLM and to school them in effective use strategies
It's obviously not quite the same as programming, but my English professor assigned an essay a few weeks ago where we had to ask ChatGPT a question and then analyze its response, check its sources, and try to spot hallucinations. It was worth about 5% of our overall grade. I thought that it was a fascinating exercise in teaching responsible LLM use.
> Coding via prompt is simply a new form of coding.
No, it isn't. "Write me a parser for language X" is like pressing a button on a photocopier. The LLM steals content from open source creators.
Now the desperate, capital-starved VC companies can downvote this one too, but be aware that no one outside of this site believes the illusion any longer.
Maybe pointless, but I for one disagree with such rulings. Existing copyright law was formed as a construct between human producers and human consumers. I doubt that any human producers prior to a few years ago had any clue that their work would be fed into proprietary AI systems in order to build machines that generate huge quantities of more such works, and I think it fair to consider that they might have taken a different path had they known this.
To retroactively grant proprietary AI training rights on all copyrighted material, on the basis that it's no different from humans learning, is, I think, misguided.
> Maybe pointless, but I for one disagree with such rulings.
That's a fair position: laws are for the nation (and in a democracy, that's supposed to mean the people), and the laws we make are not divine or perfect.
But until the laws change, it is what it is.
> To retroactively grant proprietary AI training rights on all copyrighted material, on the basis that it's no different from humans learning, is, I think, misguided.
I would say it's not retroactive; it's the default consequence of what already is. Changing the law so this kind of thing is no longer allowed in the future is one thing, but it would be retroactive to say it had always been illegal.
I say retroactive not because the law changed, but because the law was never written with AI training in mind. I don't think existing copyright laws fit this situation, and I feel applying this interpretation to works already under copyright, when the creators of those works surely never envisioned this outcome, is an unfair interpretation.
there isn’t a company in the United States of 50 or more people which doesn’t have daily/weekly/monthly “AI” meetings (I’ve been attending dozens this year, as recently as Tuesday). comments like yours exist only on HN, where a select group of people love talking about bubbles and illusions while the rest of us are getting sh*t done at a pace we could not fathom just a year or so ago…
I am sure that "AI" is great for generating new meetings and for creating documentation how valuable those meetings are. Also it is great at generating justifications for projects and how it speeds up those projects.
I am sure that the 360° performance reviews have never looked better.
Your experience is contradicted by the usually business-friendly Economist:
this is the same as polling data when Trump is running - no one wants to admit they will vote for DJT, much like no one wants to admit these days that “AI” is doing (lots of) their work :)
jokes aside, I do trust that the Economist’s heart is in the right place, but it’s misguided IMO. “the investors” (much like many here on HN) expected “AI” to be a magic thing and are dealing with some disappointment that most of us are still employed. the next stage of “investor sentiment” may just be “shoot, not magic, but productivity is through the roof”
the numbers I could provide are just from what I have been involved with, and we are at 2.5-3.0x, points-wise, versus 16 months ago. my team decided to actually measure productivity gains, so we kept the estimation process the same (i.e. if we AI-automated something, we still estimated it as if we had to do it manually). we are about to stop this on Jan 1.
since you referenced the trusted Economist, here’s much-more-we-know-what-we-are-talking-about MIT saying 12% of the workforce is replaceable by AI (I think this is too low) - https://iceberg.mit.edu/
I’m not interested in point estimates. I’m interested in real verifiable numbers. If AI is speeding up software teams by 3x, that will show up somewhere outside of internal metrics.
So far no verifiable metrics show any hint of a 3x productivity boost.