This reply may sound disrespectful, but I don't intend it to be; rather, I hope it sparks thinking along an alternate path.
As the world changes, education can be slowest to adapt. My father did his math on a slide rule. I was in high school as we transitioned to using calculators.
My personal take on your approach is that you're seeing this from the wrong side. Creating an artificial environment for testing suggests to me you're testing the wrong thing.
Of course most school, and college, classes devolve into testing memory. "Here's the stuff to learn; remember it long enough to pass the exam." And I get it, this is the way it's always been, regardless of the uselessness of the information. Who can remember when Charles I was beheaded? Who can't Google it in an instant?
Programming on paper without online reference tools isn't a measure of anything, because in the real world those tools exist.
Indeed, the very notion that we should even be testing "ability to write code" is outdated. That the student can create code should be a given.
Rather, an exam should test understanding, not facts. Here are two blocks of code: which is better, and why? Here's some code: what about it concerns you?
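To make that concrete, here's a sketch of the kind of question I mean (the code is my own invented example, not anything from your course): two functions that produce the same output, where the interesting answer is about trade-offs rather than recall.

```python
# Exam-style prompt: both functions return the unique items of a list,
# in order of first appearance. Which is better, and why?

def unique_a(items):
    # Straightforward, but each "in result" membership test scans the
    # result list, so the overall cost grows as O(n^2).
    result = []
    for item in items:
        if item not in result:
            result.append(item)
    return result

def unique_b(items):
    # Tracks seen items in a set, where membership tests are O(1) on
    # average, keeping the whole pass O(n) while preserving order.
    seen = set()
    result = []
    for item in items:
        if item not in seen:
            seen.add(item)
            result.append(item)
    return result

print(unique_a([1, 2, 1, 3]))  # [1, 2, 3]
print(unique_b([1, 2, 1, 3]))  # [1, 2, 3]
```

A student who can explain *why* the second version scales better, and when the first is still perfectly fine, is demonstrating exactly the understanding no amount of memorization fakes.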
Instead of treating the use of AI (or Google, or online help, or that giant C reference book I had) as "cheating", perhaps teach and assess in a world where AI exists.
I truly do get it. Testing comprehension is hard. Testing understanding is hard. Testing to sift wheat from chaff is hard. But, and I know I'm being harsh here, testing memory as a proxy for intelligence, or hand-written code as a proxy for understanding code, is borderline meaningless.
Perhaps in the age of AI the focus switches from 'writing code' to 'reading code'. From the ability to write to the ability to prompt, review, evaluate and so on.
Perhaps the skill that needs to be taught (to the degree that community college seeks to teach skills) is programming with AI, not against it.
I say all this with respect for how hard your job is, and with my thanks that you do it at all. I also say it understanding that it's a huge burden on you that you didn't necessarily sign up for.
The problem is that tools like AI are useful if and only if you have the prerequisite knowledge, otherwise they are self-destructive.
It's similar to a calculator. We give students graphing calculators, but ONLY after they have already graphed by hand hundreds of times. Why? Because education does not work like other endeavors.
Efficiency, in education, is bad. We don't want to solve problems as fast as possible, we want to form the best understanding of problems possible. When I, say, want to book an airplane ticket, I want to do that in the fastest way possible. The most efficient manner. I care not about how an airport works, or how flight numbers are decided, or how planes work.
But efficient education is bad education. We could skip 99% of education if we wanted. We could, say, spend one year studying only for the SAT and skip the other 12 years of schooling entirely.
Will you get an acceptable score on the SAT this way? Maybe. Will you be intelligent? No, you will be functionally illiterate.
If we use AI for programming before we can program, then we will be bad programmers. Yes, we can pass a test. Yes, we can pass a quiz. But we don't know what we're doing, because education is cumulative. If we skip steps, we lose. If we cut corners, we lose. It's like trying to put a roof on a house when the foundation isn't even poured.