> By pre-1990 standards about 20% of the students should have been failed.
Interesting, I find that number quite low.
The failure rates for most CS courses at my BSc/MSc university (VU Amsterdam) usually hovered around 50% in the 2000s, especially for the hard-line courses such as computer networks, finite fields and data structures. We were allowed an unlimited number of retakes (with six months in between), but still over a third of the students dropped out before completing their BSc.
When I started my PhD at a British university I was obliged to take some MSc courses. Basically every course started with: no one will fail this course. At first that seemed like a comfortable idea, since I didn't want to be distracted from my research. Soon I learned that it actually means the course is going to be boring as hell.
I don't believe the Dutch system works better than the British one. We've had our share of choice fanatics. And despite Andy Tanenbaum raking in millions, my current department is a whole lot better funded than the VU.
I think it has more to do with attitude than politics. Tanenbaum basically created the department with his bare hands, writing Minix and his well-known books in the process. He's no longer actively involved in teaching, but he created a culture of outstanding education. There's still an enormous amount of attention on making courses more interesting, challenging and up-to-date. It's easy to get lazy and lower your standards, but not impossible to say no and become a better university.
I don't understand the value of failing students in this way. At medical school I failed some exams but was allowed to retake them, and received coaching for one of the re-takes. (Relax, I had to prove I'd learned the material before I could work with patients.)
During my brother's economics PhD, however, the school (can you guess which it is) took pride in failing 50% of the class. Why? And when the students are going into so much debt, why is this appropriate? Pick students who can pass your exams in reasonable numbers, reject those who cannot, and then invest in teaching the ones you chose.
Bravado over failing students to demonstrate how difficult your course is amounts to a waste of human talent.
It's not about failing students. Sure, students could be failing because your education is poor. That was not the case. It's about challenging students to the point where those with the intellectual prowess and discipline succeed, and the rest go on to study something that more closely matches their abilities (and the university should help them with that).
Debt is never a serious problem in the Netherlands. The government interest rates are so low that if you borrow the maximum and put the money in bonds or a high-interest savings account you'll actually turn a profit.
It might be the socialist in me, but schools should always be public. No business should be involved in educating the next generation.
I think of universities more like an R&D department for the general public. (Which is also why I think all software developed in universities should be Free.)
Almost every country in the world is more socialist than the (former) U.S. None of those countries' university systems come close to that of the U.S. And what are the best universities in the U.S.? MIT, CalTech, Harvard, Stanford, Yale, Princeton, Dartmouth, etc. (private) vs. UC Berkeley and UCLA (public). The latter two are soon to feel the financial collapse of California.
This would be true if the business were selling degrees.
If the business were selling an education, then failing students would actually result in increased revenue, since students would have to stay around longer to learn the required material.
Lots of businesses sell education instead of certifications. The problem here is the government stepping in and mucking around with standards, creating financial incentives that looked good in theory but don't work in the real world.
This happens a lot with well-meaning politicians trying to "help".
To be fair to the time, in 1999, there was the dot-com bubble going on and a lot of students were going into CS for the wrong reasons so there could have been more at play than government objectives. I suspect there were a lot of CS students who did not enjoy programming in such programs, thus accounting for seniors who couldn't do projects. I know CS and programming are not the same, but I liken it to studying Shakespeare and not enjoying reading or writing.
Our school (UWaterloo) noticed a huge drop in CS enrollment soon after that, between 2002 and 2004 (roughly), at which point business programs saw a spike in enrollment. I wonder whether, had this professor stuck around, he would have noticed an improvement once the stories of 24-year-old billionaires were removed from the headlines.
I am an American, so forgive me, but I was under the impression that UWaterloo was also regarded as one of, if not the best, CS schools in Canada. I would argue that many of the more elite universities have managed to beat back this trend for the most part, as I do not believe that Stanford or MIT are graduating substandard computer science students.
I agree with this statement. Other schools (perhaps not as highly regarded as the ones you mentioned) such as Texas A&M do hardcore, low-level C++ courses, compiler stuff, etc. for both CS and CE students as core requirements. They lightened the requirements several years back to compete with other schools, but changed back to them when they realized how soft their students had become. Of course, Bjarne Stroustrup is the chair of the CoE CS program at TAMU... so that may have a lot to do with it.
I live in what you'd call a third-world country (Mexico), and this stuff makes me think about why empires fall and others rise...
The rising stars get to watch how an empire falls ;-)
One of my dreams is to open a school that's much better than most of the schools in Mexico, using the good and bad examples of schools in countries like the US and the UK. My 2 cents to leave to the world.
And probably a lot easier in the modern day as more and more people get access to the web. I'm very interested in web learning and am trying to put something together for it too.
I've had a long conversation with someone who is very much into this, and while there are some challenges, it definitely is possible.
Already there are tons of lectures online in Flash video format. It's amazing the stuff you can learn about, but actually making lesson material and tests work in an online environment is far from simple (cheating, for instance!).
I imagined something along entirely different lines. Putting videos online is just doing it the same way over a new medium. I am thinking of a way to take advantage of what the web has to offer.
Yes, I got that, sorry for not being clearer. The videos are a simple starting point, there is no interaction and so on, it's just like watching TV (only worse quality...).
The subject of the discussion was basically a complete virtual education from grade school level all the way up to university level. Interactive lectures, web based self-study units the works.
We realized that this is a very complex undertaking and that it would require fairly massive funds to be executed properly as well as an enormous amount of expertise.
It might be worth looking at http://www.khanacademy.org/ for an effort in web learning. At this point it's mostly short YouTube videos (quite a few of them), each concentrating on one concept, spanning math, physics, economics and some others. You can also create an account and get a personal "map" to see how you are progressing (where you go from basic concepts and then build on them). The guy in charge wants to have videos about every possible subject and one such knowledge map for each. Eventually, I think he also wants to have online tutors helping individual students.
I realize my explanation probably doesn't do it any justice so I instead refer you to http://www.youtube.com/user/khanacademy (the video in there) which should give a better overview of it all. There is also a longer video somewhere that has even more details.
I just thought I would share, maybe someone here will like it or find it useful.
Ah, my mistake. I think the ideal case is going to be a combination of both, but at some level using technology will become a more viable alternative, especially for those who do not have access to teachers.
In that case, you should look more closely at elite schools in the US (I have no idea how Cambridge and Oxford have changed in the UK). Most of the top 15-20 schools in the United States have succeeded in continuing to graduate competent students.
All of the elite universities in the UK are under sustained attack by the government in the name of "equality". Oxford and Cambridge in particular are under intense pressure to take substandard students from failing State comprehensive schools.
The Russell Group should take the plunge and break away from government control altogether.
That is a dream I share, as a disillusioned graduate student in the US. I'm sure a lot of people, students and junior faculty especially, have the same dream.
We can't generalize. Not all students are like the ones described in this article. I work with CS and CE graduate students 40 hours a week. I help them conduct research. The best of these students have an eagerness and desire to learn. They work on projects at home in their spare time, they learn new languages (just for the sake of learning them), etc. They want to learn C and C++. They love it.
Since I work directly with these kids and have seen dozens come and go, it's easy for me to quickly pick out the good ones. They are the most fun to work with too, as they teach me as much as or more than I teach them.
I see the average/bad ones too. The ones who do just barely enough to get by. The ones who "code by Google". They are far more common than the good ones. Their work speaks for itself. Poorly written, plagiarized and seldom delivered on time.
Hopefully employers have methods in place to weed out the bad and average kids. And incentives to hire and more importantly keep the best ones.
I think comments like these are borderline trolling in a community like HN.
The languages used in various parts of a CS curriculum are one component of the whole. Holding up that component as the reason the sky is falling is, at best, disingenuous. I can construct terrible CS curricula that start with C, and terrific ones that start with Java.
If those who construct the curriculum want the beginning courses to focus on algorithmic thinking, I think it's fair to use a language that abstracts away much of the physical machine. The abstractions can be peeled away in later courses.
If, instead, they want the beginning courses to focus on the realities and difficulties of dealing with computer systems, it makes sense to start with something like C. They can then introduce the abstractions that let people manage those difficulties in later courses.
I think both approaches are valid, as long as a student gets a view of the important points of the field. I can even see arguments why one approach might be better than the other. But claiming that one approach represents the failure of our CS academic system is zealotry.
You have to go meta one level. The reason C was taught is that it's close to the machine without actually being assembly language. That's pedagogically useful because it exposes the student to the properties of the underlying machine without having to be too specific about implementation (not that MIPS assembly isn't also a good choice). Similarly, the reason to teach Smalltalk is to teach OO, and the reason to teach Lisp is to teach the lambda calculus. The only reason Java is taught is because students demand it: they've heard that if you know Java you can get a good job.
Now, that doesn't imply that Java shouldn't ever be taught. But the reasons for choosing any language should be academic reasons. Particularly, it is bizarre to see academia trailing industry in language adoption.
You claim that "the only reason Java is taught is because students demand it," and it appears your objection is based on this claim. Do you have any support for this claim outside of your personal belief? My undergrad made the switch to Java right before I graduated, and they are a counter-example to your claim.
Also, I doubt your claim about C is true. C was used because it was close to the machine without actually being assembly. I suspect C was then taught because it was used everywhere.
I think his claim is based on the belief that Java has no academic merit (in the context of the other languages he mentioned) - it's just useful for development.
One step further: if you think that learning a specific language at university is going to get you a job, you are not studying computer science; you should be going to a trade school.
A CS degree is universal; it should be language agnostic.
One computer language or another, it doesn't matter one bit; they're all functionally equivalent. Just as a chef has 30 knives to choose from, you have a palette of languages to choose from to solve a given problem.
If you really understand computers then the languages are just a means to an end.
I disagree with your statement that choice of language "doesn't matter one bit." Some languages are better at some tasks than others. Appealing to functional equivalence ignores the relative cost (in time and characters typed) of expressing the same idea in different languages.
I suspect you're abusing the term "theoretical computer science," which is surprisingly common in this crowd. I assume you actually mean basic computer science concepts relevant to programming.
To address your question: Java abstracts away memory management. It's not just dynamic memory allocation; common off-by-one mistakes also result in a runtime exception. In C it's possible, but unlikely, that you'll get a segfault. You probably own the memory just past your array, so you're more likely to get strange errors because you're invisibly overwriting values.
If you want to focus on algorithmic thinking, and not the realities of a computer, this is a win. As another poster pointed out above, I think other languages are better suited for this, but Java is still valid.
"If those who construct the curriculum want the beginning courses to focus on algorithmic thinking, I think it's fair to use a language that abstracts away much of the physical machine. The abstractions can be peeled away in later courses."
I don't know that Java does this all that much better than C. The problem with C for a beginning student isn't so much that you have to manage memory manually--it generally takes a few weeks to even get to malloc() in a C-based introductory course--but that C gets in your way with explicit typing, #includes, etc. Java does away with some of that but introduces its own OO scaffolding to get in your way too. Instead of having to write main()s and #include's, the Java student has to enclose their functions in a class and so forth. Let's compare Hello World in C, Java, and C#.
C:
#include <stdio.h>
int main(void)
{
printf("Hello, World!\n");
return 0;
}
Java:
class HelloWorldApp
{
public static void main(String[] args)
{
System.out.println("Hello World!"); // Display the string
}
}
C#:
class HelloWorldApp
{
static void Main()
{
System.Console.WriteLine("Hello World!");
}
}
The Java and C# examples are even more cluttered than the C example when it comes to superfluous tokens: each has a class declaration, the method signature is unnecessarily elaborate, and the print command has like three levels of object drill-down in it. When you get to the simple procedural programs that a beginning student will write, this mysterious crud remains unexplained for longer. It's not enough to explain typing as you would in C or Pascal; you have to talk about object-oriented programming before you get into problems complex enough to justify that level of abstraction.
If you really want an abstract language to enforce algorithmic thinking, pick one that doesn't have all that extra mental burden when you first approach it.
Perl 5.8
print "Hello World!\n"
Perl 5.10
say "Hello World!"
Python
print "Hello World!"
Ruby
print "Hello World!"
The cool thing is that these languages still have subroutines and classes and so forth, but they don't force you to declare a class, declare a subroutine, and call an object method just to code "hello world".
Java has advantages over C. These advantages don't include "letting beginning programmers focus on algorithmic thinking by using high level abstractions". Java's higher level than C in that it protects you from naked pointers and lets you do OOP, but that's not the type of high-level abstraction that helps a beginning programmer, especially not when it comes at the cost of forcing them to put everything in classes and methods.
If those who construct the curriculum want the beginning courses to focus on algorithmic thinking, I think it's fair to use a language that abstracts away as much as possible. We have no shortage of good interpreted languages to accomplish this.
Actually, I'd argue that with C you have to start managing memory manually before you even get to malloc. The abstraction advantage of Java over C (not that I think Java is necessarily a better intro language) is that you can generally explain the syntax in abstract concepts and then use it like you'd expect. With C, you're far more likely to encounter scenarios that don't fit a simple model of understanding.
For example, unless you're concerned with specific performance issues, you're not likely to care how a string is implemented in Java. It's difficult to use strings in C without understanding memory. Without understanding when strings are mutable and when they aren't, what null-terminated means, how "%s" works, and such you will quickly run into some unexpected behavior and will likely just trial and error until you get something that seems to work. When you understand that C is a syntax for allocating and manipulating memory, it tends to make a lot more sense.
I purposefully phrased my statement to allow for dynamic language like Python - which is what I would probably choose for a starting language. I constructed my comment to also address the same arguments that have popped up in the "MIT switched to Python" discussions.
I've taught intro Java labs to undergrads, and I did get questions on the scaffolding required. I answered their questions, but also told them they didn't need to understand that yet. I'd rather not have to do that.
I would argue that Scheme fulfills the same purpose we discussed--getting out of the way and letting students focus on algorithms--except it emphasizes expressing those algorithms in a functional style.
I think the "Java <-> C"-debate is an instance of the problem "Should studying computer science be mindwreckingly hard or should studying computer science make you able to program things?". Plus, you can also bash Java in this special instance, which is always nice coughs.
Other instances include "Compiler construction or not?", "Theoretical computer science or not?", "Assembler or not?".
And then you had to debug something which resulted from corrupted pointers into the stack, partially being still good data and partially being nonsense.
>I think the "Java <-> C"-debate is an instance of the problem "Should studying computer science be mindwreckingly hard or should studying computer science make you able to program things?".
Your wording is heavily loaded. In any case, learning computer science should absolutely not be about learning to program.
You make a good point - I was thinking that Java shouldn't be taught in introductory CS classes for people planning to be a CS major, but obviously didn't get that across.
It would be fairly easy for someone to learn Java once they've been taught C. Professors should not have to spend time going over pointers and memory management in an OS class, however.
I think OS classes are the best place to cover pointers and memory management, assuming you are covering how an OS works as opposed to an intro-to-UNIX class.
Granted, in our OS theory class we spent most of our time using and talking about ASM, but C could also work fairly well.
> This course has four purposes. First, you will learn about the hierarchy of abstractions and implementations that comprise a modern computer system. This will provide a conceptual framework that you can then flesh out with courses such as compilers, operating systems, networks, and others. The second purpose is to demystify the machine and the tools that we use to program it. This includes telling you the little details that students usually have to learn by osmosis. In combination, these two purposes will give you the background to understand many different computer systems. The third purpose is to bring you up to speed in doing systems programming in a low-level language in the Unix environment. The final purpose is to prepare you for upper-level courses in systems.
> This is a learn-by-doing kind of class. You will write pieces of code, compile them, debug them, disassemble them, measure their performance, optimize them, etc.
At my school, between an intro class that used C and a class where we programmed microcontrollers in assembly, pointers and bit-twiddling were already pretty well established by the time we got to OS classes.
At my university, Intro to Prog. is in C and C++ (yes, both), and the course that immediately follows is in Scheme, a made-up language (we had to build an interpreter for it in Scheme) and Haskell (for the motivated students).
No. Java and Python and Scala are not improvements. Why? They're too easy. Pointers are hard (relatively). Compiler design is hard. Complexity theory is hard. A loss in any one of these areas is a loss to the degree as a whole. The ACM ICPC helps a little in promoting intelligent problem solving (every CS student should be able to write a program using Dijkstra's algorithm in under 20 minutes from memory), but it's the university's fault in the end. Fortunately, in the United States some of our top universities seem to have (thus far) escaped the treatment that others got. Stanford, Yale, Caltech, MIT, CMU, et al. all continue to teach very much the same curriculum that they taught intro CS students 10 years ago. Systems is a required course and taught in C. Intro programming is a required course and taught in Scheme/Lisp. Unfortunately, many schools did not fare so well and now teach Java or some such exclusively. I think this is partially to blame for the number of unfortunately bad computer science students with degrees. How to change this, I have no idea.
See? This syntax trouble is already lesson #1 you learn from pointers: The _address_ of some value and the _value itself_.
If you have a pointer (that is, the address), then you need to prefix it with a * in order to get the value and do useful things with it. If you have a value, you need to prefix it with & in order to get the value's address and pass it around more efficiently (at least it will be more efficient if it is some large data blob).
Did anyone ever mention addresses and values and their difference when looking at Java from a user's point of view? Not to me, to be honest.
Pointers are easy. I taught several people in our CS program how to use them in ~2 hours. What's hard for most people is thinking abstractly about the gap between what the code looks like and what happens when you run it.
Pointers are simply the first thing that forces most coders to consider that split. But a reasonably competent Java developer moving to C can pick them up in little time. The problem is reading other people's C code that looks more like line noise than structure. Pointers are a tiny step along that path.
Sadly, you are not entirely correct. Stanford's intro course uses Java (though learning C is also required) and MIT just switched from Scheme to Python.
I would agree that low-level programming should be taught at some point. But one course is sufficient and it could be one that is taken in second or third year.
Low-level memory issues should be avoided whenever possible by using a higher level language. There's a reason why garbage collection was invented.
Depends on what you're studying. System programming is and should always be done in C/assembly. Personally I am doing virtual machine design in C at the moment.
I'd rather see C taught than Scheme. Hardware matters, and a good CS-education should be centered in languages that recognize that.
Software engineering, on the other hand, is too important a subject to be left to the schools. Let them learn Scheme on their own. They'll appreciate it more that way.
They should learn both. Any CS student who does not understand simple hardware concepts such as page faults does not deserve a degree. Similarly, every CS student should learn functional and imperative programming. Period.
But this would produce a generation of students that know how to do things we already know how to do really well. Maybe it would be better if we skipped page faults and taught things that would advance CS. CS has moved on to more interesting areas than simple page faults. Think about how massively parallel systems run, or how distributed databases handle consistency, or high-volume scaling, or NLP, or any of the other areas of CS that are far more interesting than page faults.
Language details are all academic, pointless debates to be had by people who like one over another where the differences are often trivial (C++/Java). If you have a great mind and can understand what languages are doing then either will do you just fine.
I'd rather not teach the next generation of CS undergraduates the same old stuff that I learned (and yes, that included how the VAX-11/750 - one of the first to do so, if I recall correctly - didn't have to have all of a program's data or code in memory...)
You need to understand what has come before so that you can build upon it effectively. Taking past progress for granted will not help develop the future; if anything, you'll get a lot more reinvented wheels.
Among the higher-level problems you mentioned, how many of them are interrelated? I'd rather see professors cover material that is useful in 80% of cases, even if they are well-trodden topics - students can specialise in the rest. In my CS course (UK) we studied parallel systems, including MapReduce; distributed databases and NLP can be studied optionally along with several other topics of much higher level than mere page faults. That said, not all CS courses are created equal.
You do not need to understand how your compiler works, or how De Morgan's law is responsible for all those NOT gates in your CPU, to produce useful and meaningful CS.
Page faults illustrate the memory hierarchy, which is one of the most important ideas in computer architecture. Programmers who don't understand the memory hierarchy write slow code. And you can never build massively parallel systems, distributed databases that handle consistency, high-volume scaling, NLP, or any of those other areas without virtual memory, paging, and page faults.
You're confusing what's important to you with what's actually important.
"Programmers who don't understand the memory hierarchy write slow code" is just nonsense.
Most programmers today use languages where they aren't even aware of, or able to manipulate in any way, how and when memory is managed. And this trend will continue as the hardware their programs run on is increasingly virtualized.
Let's face it, virtual memory is a done deal; it's now fundamental, at least until we find hardware architectures whose address spaces never exceed the physically addressable memory. With virtualized kernels, it's possibly a better solution to boot operating systems that have no concept of constrained memory resources and let the virtualizer do the paging.
C would be my first preference as well, but I also appreciate that Scheme (or another functional language) would provide better opportunities than C or Java for students to learn more "advanced" algorithmic techniques such as recursion.
The problem with Java (and C, to a lesser extent) in basic problem solving is that you can almost brute-force-code and get a working solution that will get full credit, but after writing it a more elegant (and efficient) solution isn't always obvious. In functional languages, that "more correct" solution almost always seems to stand out more, at least to me.
Recursion is only considered an "advanced" algorithmic technique because people are taught to think of it that way. Recursion is actually pretty straightforward once you have a bit of practice using it.
Well, yes, most things do become easier with a bit of practice.
If a functional language was taught in beginner CS classes I think it would be easier for students to see how it works. It certainly "clicked" for me once I'd learned a bit of Scheme in my first AI class.
Certainly, there's loads of stupid "research" in academia—cranking out papers that interest no one, in order to justify tenure and more grants.
However, "90% of everything is crap". The relevant question, before you throw out the entire institution, is: Are there real opportunities to do good work?
Good work takes two main forms in academia: research and teaching. Is good research work being done? Is good teaching being done?
"The relevant question, before you throw out the entire institution, is: Are there real opportunities to do good work?"
I disagree. The relevant question is, "What is the best possible way to do good work, including practical concerns taking into account the behavior of real people and not just theoretical people, and is that what we have? If not, how can we get there?"
The way you ask the question is basically the same thing as the sunk cost fallacy. Yes, we've got a lot of investment in the current system, but if it isn't working, it's time to change it to something that will. Whatever that may be. And, again, the question is whether it is working with real people, not hypothetical people who are custom-designed to work with the system you want to be ideal.
Indeed, comparison with other opportunities is most important. I didn't make that clear. If you can do better teaching and/or better research outside the university, then go for it. (I am wrestling with this decision right now.) That would be a good reason to resign professorship, not the fact that a lot of university people are doing mediocre work.
My understanding, BTW, is that there is lots and lots of fantastic research being done in universities. You have to find it and hang around with the right crowd. Dr. Tarver's problem may simply be that he didn't find the opportunities.
He does have a point, but something is missing in most such lamentations of watered-down academic standards: education, in the past, was not meant to be vocational training, and therefore the economic case for broadening access just wasn't there.
Knowing classic literature was not (functionally) why upper class kids of past times went on to earn much more than their lower class peers. It was just part of the symbolic glue that allowed members of that class to recognise each other.
Throughout history, access to good education was by birth, not by talent.
Most analyses featuring Mozart are flawed. It's probably because nobody is quite sure whether the purpose of education systems is to find those rare individuals with innate genius, or rather to teach a large number of reasonably intelligent people something so they are more capable of solving particular problems as a result. I don't think there is one approach that is suitable for doing both.
Mozart is often introduced into the debate by those demanding more stringent standards of admission. However, if it's about finding the Mozarts of the world, radical and indiscriminate broadening of access must be the top priority.
It's the nature of innate genius that it occurs just as likely in illiterate African street kids as in the offspring of English professors. Taking on a large number of totally unprepared kids and exposing them to interesting stuff is probably the most beneficial approach to finding the Mozarts. Demanding good preparation and high entry standards is a social filter, not a talent filter.
So I think universities must commit themselves to actually teaching students something instead of whining about low entry standards. I fully understand that the author of the article does more than that, and I agree with his other points.
But let's face it, universities are there to bring as many people as possible onto a high level of knowledge and skill. The goals are largely economic ones and that should not be lamented. They are no longer the kind of upper class culture club they once were, and they are not primarily an endeavour to find the Mozarts and Einsteins.
There are lots of problems with the Mozart analogy. For one, Mozart was trained by his father. His education in music began at a very young age (which is common among many if not most of the highly successful classical concert musicians in the world).
For another, despite the fact that Mozart's music was creative (I've loved every Mozart piece I've ever heard performed well, and most of those that weren't), from a superficial perspective many of his pieces are also remarkably similar. If you aren't ready to appreciate subtle differences, you aren't ready to appreciate Mozart for many of the reasons his music remained popular for 200 years after his death.
Which brings me to the next point, which is that he talks about Mozart's music existing in a sort of historical vacuum, and doesn't bother to compare Mozart's work to the work of his contemporaries [1]. Most of his contemporaries have been forgotten by the mainstream, for various reasons, but while they were alive they still produced a lot of music.
Finally, while there is creativity in science and one can take a scientific approach to creating (or "discovering") music, it's difficult to argue that scientific rigor is not more important in a paper about Algorithms for Mesh Analysis than in a Piano Concerto in E-flat major.
I can't comment on the US experience, but as a student in the UK in the 80s and Canada in the 90s I concur with Mark Tarver.
This is not just a CS issue, it's universal, and the real damage is being done in the second-tier establishments where grade inflation and dumbed-down courses are endemic. The tier-one establishments (Oxbridge, Ivy League etc.) can still use their own screening methodologies for aptitude and smarts to minimize the problem. As a result---and to continue Tarver's analogy of the Cultural Revolution---the 'party' is looking after its own whilst the rest goes to rot.
> This is not just a CS issue, it's universal and the real damage is being done in the second tier establishments where grade inflation and dumbed-down courses are endemic.
It's worse than that, because the dumbing down doesn't just affect universities. Secondary school science education has also been badly hit. The links that follow contain actual questions from GCSE science exams (note for non-British people: a GCSE is an exam typically taken by 16-year-olds).
"For example, in a biology exam, a question asked whether you see with your eye, ears, nose or mouth -- "
No it doesn't. The question is "which organ contains light receptors?", which, although simple, does require some knowledge of scientific jargon. And this is on the foundation paper, for which the maximum mark is a C - IIRC, the intermediate and higher papers don't include questions this easy.
This type of question is not for A-Level candidates, it's for distinguishing between the weaker students. Of course it's worthwhile keeping track of what kind of questions appear in exams as a benchmark of educational standards, but only looking at the worst examples you can find does not give a clear snapshot.
> although simple, does requires some knowledge of scientific jargon
Knowledge of jargon isn't the same as knowledge of science.
The problem is not so much that the questions are too easy, it is that they are not science questions, because they don't test knowledge of scientific concepts.
It would be easy to ask questions that are science questions but that are also easy questions (for less able or younger examinees).
For example, in physics one could ask: A man went to the top of a tall building and threw a glass from it. The glass landed on concrete 30 m below. What happened to the glass when it landed? He then dropped a rubber ball; what happened to the ball when it landed?
Or a simple biology question: A woman wanted to breed striped cats. She had a male cat and a female cat. Both cats were coloured black all over. She painted white stripes on both cats, then got them to have sex. Is this likely to produce striped kittens?
These are very easy questions, yet to answer them one needs to understand important concepts about science. Teaching science is about teaching concepts, not about rote learning of definitions. Unfortunately the bumbling incompetents who're in charge of education don't seem to understand that.
I had very similar experiences as a Computer Science undergraduate not many years ago at a prestigious British university. The expectations were so low that eventually I stopped attending lectures and got a full-time programming job and I still kept getting quite good grades. The problem is so widespread that it's not just about undergraduates; as an undergraduate I have met several PhD students who struggled with even basic programming or mathematical concepts.
Still, in a class of 300 people, there are about 10 very talented and bright students who would rather be writing Prolog interpreters (and are not afraid to) and doing similarly interesting and cool stuff. There should be a way these students could learn advanced concepts while being mentored by professors.
Yeah, I worry about the fact that this discussion became an emotional one about programming languages. There is a lot of evidence that students learn Java because of the job market. I worked in a school as a physics teaching assistant for a year (while at grad school), and the huge problem in Ireland, the UK and the USA is that people assume that students know best about choice etc. The fact is students are rubbish at telling what they'll enjoy, and most students will shy away from the harder sciences. But the responsibility is to keep up the standards of academic rigor. Some wise young Cambridge scholar said to me recently: 'When you audit something you change the standards and the aims of it. League tables made schools focus on results, modularisation makes students focus on results, the job market makes people look for certain skills.' And that is true. My most useful course in philosophy at university was a course I despised at the time. Perhaps we need to just let professors set the courses, or we get to the scenario that scares me the most: Physics graduates who don't understand what a partial derivative is.
A CS graduate who doesn't know what the Lambda calculus is, is perhaps the same?
A really interesting read and very much echoes some of the reasons I chose not to pursue a career in academia. When you realise you're at a fork in the road, and down one route lies being a supervisor's paper-churning-out slave, researching something that doesn't even interest you so the department can get more funding...grnnnh.
Edit: To clarify, this was my problem, as a PhD student with a PhD supervisor (not sure what the US term is; adviser?) who was forcing me to do the wrong thing all in the name of academia.
It seems to me that you have it exactly backwards.
First, as a university professor (in the U.S.), I don't have a supervisor in the sense the term is used in business. Yes, my department has a chair, but this is a "first among equals" position, periodically elected from among the members of the department. Governance is mostly collegial in nature, and much of the evaluation & promotion process is handled by committees. (Note: all this varies a fair amount from one university to another.)
Second, I do whatever research I want; no one tells me what to research. My work needs to be able to get through a peer-review process and be published, but that is not much of a restriction on topics.
Certainly academia has its downside; things like political nastiness, low pay and other budgetary woes, and excessive paperwork are often cited, along with various troubles with the tenure system. More related to what the article is talking about, there is the question of the value of research. If research topics are chosen on the basis of interest, then there is not much incentive to do something that makes the world a better place, generates money for someone, or anything like that.
You don't have a supervisor. You have many and they hide behind anonymity -- namely your peers who evaluate your papers, grant proposals, etc.
In a way this is much worse than having a supervisor in industry. At least your supervisor wants you to succeed. Your peers not so much unless it suits their agenda.
The freedom you have is only an illusion. What good is this freedom if your grant proposals / papers get rejected?
Moreover, your freedom is severely restricted by the scientific approach demanded by computer science. It is hard to fund/publish novel ideas that are not easy to evaluate.
In startups, you can more easily explore novel ideas and change them without worrying about evaluation methodology. Ultimately, your criteria for success are simple: how many users do you have? how much money are you making? etc.
Well, it sounds like we want to live our lives in different ways. Which is fine. In any case, as I said, academia does have its downside. So, I think, does working for a large company, or starting a small one. Life is full of trade-offs.
> In startups, you can more easily explore ideas and change them without worrying about evaluation methodology. Ultimately, your criteria for success are simple: how many users do you have? how much money are you making? etc.
I find this comment odd. First, how can I explore something in a startup that isn't going to make anyone some money? That sounds like a severe restriction to me. Second, it seems strange to name your evaluation criteria ("how much money ..."), but also say you don't need to worry about it. Of course you need to worry about it. A company that doesn't make any money is going to die.
> Moreover, your freedom is severely restricted by the scientific approach demanded by computer science. It is hard to fund/publish novel ideas that are not easy to evaluate.
What? That's the point -- science is hard. If it weren't hard it wouldn't be worth doing. I guess you could argue that academia is bad because you don't get to sit around and jerk off and collect a pay check, and you'd be right, but you would completely be missing the point.
For me, academia is about examining the limits of our knowledge in a field and then working to expand it.
> The freedom you have is only an illusion. What good is this freedom if your grant proposals / papers get rejected?
Fantastic! It means you are not good enough! (or your idea isn't anyway) The scientific community does not accept substandard research or conclusions and rightly so. Academia is not fun and games but it can be rewarding in its own way.
Academia can often be a good place to do that, but it depends on exactly what you are working on. I don't know of anywhere else where I could get paid to work on what I find interesting (but I am not a computer scientist).
One thing you're missing here is the collaborative aspect of academia. Most people work much better when they're constantly in contact with other people who are experts in their areas of interest. In many subjects, you just have to be in an academic environment to get that, since there isn't a culture of online discussion. For example, there is very little high quality discussion of philosophy online. (For whatever reason, the best philosophers just don't engage in it.)
If you're trying to do serious intellectual work, don't kid yourself that you can work out for yourself whether you've got it right or not. You need other experts to tell you when you're full of crap.
While there is some truth to what you say, I've seen a few startups fail because they lost focus. While there may be fewer people to oppose you if you want to experiment with truly novel ideas in a startup, you're still increasing your risk by doing so. Startups should avoid experimenting until after they are profitable.
I wouldn't say that is an entirely accurate characterization. I don't know anyone in my department doing research that doesn't interest them. Students generally love the research they are doing and are probably a bit more autonomous than your comment seems to imply. I've taken both directions (industry then academia) and in industry I was a code-churning-out slave working on something that barely interested me. Now I work on my own on a topic that really interests me!
At the university I studied at, I wasn't the only student facing years of working on my supervisor's pet project in the name of funding. Some people stuck it out, some of us looked elsewhere for academic gratification, some switched supervisor; generally we all ended up doing something we loved, but the general situation at the university wasn't exactly set up to maximise student happiness from the get-go; instead, it maximised publication numbers and grants.
Not at all. Not a CS guy, but research was some of the most fun (and trying) times of my life. Figuring how things work, or how to make them work better is a ton of fun.
It's not for everyone, sure, but all scenarios have their pros and cons
I am certainly not ruling out a startup at some point. I was an early yc interviewee (and rejectee :-)) and afterwards just figured I'd enjoy my work for a while instead of worrying about the dollars and other minutiae. In reality, I'm just waiting for the right opportunity. "Life's a marathon, not a sprint." An extremely successful local entrepreneur once told me that and it has stuck with me since.
I think most people are missing the point here. What he is saying is that CS (and the university model itself) has stopped being a correct vehicle for preparing students for a technical society.
I happen to agree with the guy. The model is broken, and we should move on to something much more libertarian, where individuals are in control of their own learning.
As a completely self-taught software engineer, I'm inclined to agree. Except for one big doubt: A large percentage of people do not excel on their own. They rely on social standards to specify what counts as "good enough" and to provide them a roadmap through the learning process.
Of course, that's what vocational schools are for, not universities. Universities got confused when they started pretending to offer job training.
Should be "The Decline and Fall of a British University".
The only thing this article tells us is the quality of the students that study at Leeds. It's a bit of a stretch to extrapolate from experience of one department in one university to claim that the same is happening everywhere.
I originally graduated from one of the better-known universities in 1983, and returned to post-graduate study and teaching in 1989. I experienced an even worse scenario than he did - teaching as a part-time temporary lecturer at several of the 3rd-tier universities (polytechnics).
Even when students turned in work that was totally incoherent (unfinished sentences and half the size of the minimum word count), I wasn't allowed to fail them. This was on modular degree courses, where most (if not all) of the assessment was of such submitted work.
For years I witnessed half the students turn up to class with no recriminations against the non-attendees, and with only half of the actual attendees having read the 1 or 2 articles which were required reading for that class (photocopies of which they had been given previously, so they didn't have to find the original journals themselves).
The overall level of education and comprehension was appalling. I realised that pursuing such a career was a path to frustration.
In 1997 I decided I couldn't be part of this sham any more. I have friends who had gone through the same experiences as I, and they too decided to quit and find new careers. The story of one of them was even written up in a national newspaper ten years ago.
I have other friends who teach only at post-graduate level. They are shocked by the lack of basic maths, basic grammar and basic essay-writing skills of the graduates they teach. And in their institutions they too provide the students with all the reading matter, so that these post-graduate students don't have to find their way round a library or find out anything for themselves.
Degrees from UK universities were now meaningless, and it was only a matter of time before the wider world found out.
What will probably happen is that employers and/or students will realise that in the majority of work-related degrees the degree itself doesn't indicate anything in terms of skills or competencies.
Several of my nieces and nephews have gone to university. Whilst I went there to study, their principal purpose is to party.
My partner is Thai, and I know several of his family and friends who graduated from Thai universities. Some have completed post-graduate qualifications. Yet they would never read a book that was more complex than Harry Potter (even in Thai). Nor would they ever go to a museum or art gallery.
Degrees have just become something that most people do either as a way to move out of home safely and/or a stepping stone to a job. For most graduates there is no sense of education being important in itself.
Just looked up Leeds on Wikipedia - I'd assumed it was a fairly mid-range institution (and so I thought there'd be other institutions the same or worse, but many which were better). Just found out it's in the Russell Group. Christ. I (partially) retract my earlier responses.
I study CS (just finished first year) in the UK at an institution that usually places in the top 10... now you mention it, a lot of students do match those described by you and others. However, it's easy to ignore these people, since they never go to lectures (I have friends on other courses who say they attend on average 1 lecture a term...). It's trite, but with university, I feel you get out what you put in.
The opportunity to mix with the people who are bright and motivated (and they do exist) is great, as is having a library full of free programming books. For every student who only goes to party, there's one in the CS building late at night writing a Scheme parser in Haskell. (OK, the ratio's probably more like 3:1, but you get my drift).
On the other hand, I feel like I've learned about as much from pursuing independent projects as I have from my course, though the resources on campus make such projects easier.
BTW, a large part of me is thinking about dropping out of university at some stage and starting a startup - read too much pg - so I've been thinking about the value of university lately.
CS is a bit of an odd subject in this regard - with a lot of subjects*, I'd guess a majority of students don't expect their degree to be useful in their future careers (how many history grads become history teachers?), so it's understandable that many would just see their degree as a career ticket. I do actually hope to learn useful stuff on my course, but I'm not sure if I'd learn more by dropping out and starting up.
*Exceptions I can think of: Law, Medicine, and possibly foreign languages, Engineering, and Economics. Also more 'vocational' subjects like Nursing.
The gentleman is right on; in the US too it's far from a new story. The decline started in secondary and then spread up through higher education.
But does the fault really start with the students, as he claims, or is that blaming the victim? Certainly no one wants to work as hard as people used to ... that's a fact. The old culture (50 to 100 years ago) also supported a much lower population, and there were many more resources - many trees to chop, dams to build, necessities to invent ... all that's now paved over.
Many more people wanted to attend college to get 'the good jobs'. Only, there was only so much demand on the high end. You wind up with an over-educated population, deep in debt, with Great Expectations that just won't be happening. Those who can't be anaesthetized with TV or video games or web-surfing inevitably turn their frustrated analytical skills upon the powers-that-be.
So, well, here we are. Clearly colleges are in for a deep re-organization. We need to recreate education - but to what end? What future will we re-tool to create? And how will we provide opportunities for all that 'computing power' waiting to be harnessed?
Everyone. Although, really, this goes beyond being just a large headache for everybody involved; it's more of a startling clusterfuck. I mean, honestly, how does a system get to be so broken that they graduate CS students who can't program a computer? Or simply stop offering core classes like compilers?