The problem with most information jobs is that it's very hard to measure performance. Basically, I don't know how long a particular task should take, since every task is so different. It's very easy to notice when a guy on an assembly line is slacking.
But if Tom takes 3 months to complete a new feature/analysis and turns in reams of code/analysis, there's no easy way for me to tell whether he was super-productive and finished 6 months of work in 3, or over-engineered a solution and stretched a month's work into three.
Even worse, Tom doesn't even know how productive he is. He'll think that whatever solution he ends up with is the best possible.
Code reviews, pair programming, etc. all work to alleviate this information black hole. But they just end up stratifying information workers, since over time workers will rise to better companies in accordance with their competence. And this will just worsen the information disparity: if most of your coworkers are at the same level of competence as you, no one will be able to tell you how good you really are.
Incidentally, based on anecdotal evidence, I believe this stratification will start at the bottom and work its way up, and that it is now about 25% complete. I've actually visited a company whose core business is sorting (mailing industry), where not a single employee had a reasonable understanding of Big-O notation, any standard sorting algorithms, or any standard threading techniques. I've visited other companies (healthcare industry) that have over a million lines of VB6 (and they like it!) with no plans to transition for at least a decade. Another does most of its work on IBM mainframes (VSE), and in spite of constantly running up against the limits of the platform, had never even heard of or investigated z/VM, LPARs, or HiperSockets (they thought I was making stuff up when I started telling them how all these features would make their lives easier). These companies scare away most decent programmers, and people who become decent run away. And all the engineers at these places think they're great.
Anyways, done ranting. I just finished interviewing around the country (eventually found a great company), and was really surprised that companies with dozens or hundreds of programmers apparently didn't have a single good programmer.
Stratification to this extent makes me think that a "good" programmer, one who has had the opportunity to learn even a little from programmers at reasonably high levels, will always be able to find a job, even if it's just a crap job with good pay at a company where he has to lead the effort to raise the status quo.
First let us postulate that the computer scientists succeed in developing intelligent machines that can do all things better than human beings can do them. [...] As society and the problems that face it become more and more complex and machines become more and more intelligent, people will let machines make more of their decisions for them, simply because machine-made decisions will bring better results than man-made ones. [...] because human work will no longer be necessary the masses will be superfluous, a useless burden on the system.
As technology evolves, the only employment available to an increasing portion of the population will be workfare.
Ted is assuming - without evidence - that machines will be able to make decisions. Today that is not true, and for the foreseeable future it won't be true either.
Someone needs to program all those machines to crunch the data needed to help humans make good decisions.
...and machines become more and more intelligent...
They have never done so in the past, so why do you think they'll start now? Machines became faster, but not more intelligent.
Take chess as an excellent example: humans do not evaluate every possible move - somehow they just think of what the best move is. Machines cannot do that, and probably never will. So far they can only do what was pre-programmed for them.
Q. Are computers the right kind of machine to be made intelligent?
A. Computers can be programmed to simulate any kind of machine.
Q. Are computers fast enough to be intelligent?
A. Some people think much faster computers are required as well as new ideas. My own opinion is that the computers of 30 years ago were fast enough if only we knew how to program them. Of course, quite apart from the ambitions of AI researchers, computers will keep getting faster.
Q. What about parallel machines?
A. Machines with many processors are much faster than single processors can be. Parallelism itself presents no advantages, and parallel machines are somewhat awkward to program. When extreme speed is required, it is necessary to face this awkwardness.
A. Alexander Kronrod, a Russian AI researcher, said "Chess is the Drosophila of AI." He was making an analogy with geneticists' use of that fruit fly to study inheritance. Playing chess requires certain intellectual mechanisms and not others. Chess programs now play at grandmaster level, but they do it with limited intellectual mechanisms compared to those used by a human chess player, substituting large amounts of computation for understanding. Once we understand these mechanisms better, we can build human-level chess programs that do far less computation than do present programs.
Unfortunately, the competitive and commercial aspects of making computers play chess have taken precedence over using chess as a scientific domain. It is as if the geneticists after 1910 had organized fruit fly races and concentrated their efforts on breeding fruit flies that could win these races.
But he neglects the reason for it: no one has yet been able to take that step, the one where "once we understand these mechanisms better, we can build human-level chess programs that do far less computation than do present programs."
A significant part of artificial intelligence deals with autonomous planning or deliberation for systems which can perform mechanical actions such as moving a robot through some environment. This type of processing typically needs input data provided by a computer vision system, acting as a vision sensor and providing high-level information about the environment and the robot. Other parts which sometimes are described as belonging to artificial intelligence and which are used in relation to computer vision are pattern recognition and learning techniques. As a consequence, computer vision is sometimes seen as a part of the artificial intelligence field or the computer science field in general.
It's the capacity to self-program, if you like - a human can learn anything; just tell him about the subject. A program cannot.
So, if an artificial information-sensing/learning program is not as strong at sensing information as the strongest information-sensing/learning machine that has ever existed - the human brain - then it cannot sense and learn information at any level? Wouldn't that be a strawman argument?
You seem to be claiming that if artificial intelligence does not exist at a human level, that it therefore does not exist at any level. Why are you arbitrarily selecting human-level general intelligence as the only intelligence that could possibly exist? Your line of reasoning seems to imply that human olfactory nodes (sub-processors which help humans sense, and learn, smells) do not represent any level of intelligence at all, since they are not at the level of current human general-intelligence.
Is a single-celled protozoan (e.g. amoeba) intelligent? An invertebrate? A vertebrate? A lower mammal? A non-human primate? Do you deny the possibility of biological evolution by natural selection?
You seem to be claiming that if artificial intelligence does not exist at a human level, that it therefore does not exist at any level.
Right now it doesn't - not at any level. There are no AIs that can self-program. So it's hardly a strawman argument. There are some that can learn the specifics of pre-programmed concepts, but none that can learn a new concept on their own. That is something most animals can do, but computers are not even close.
Yes, human olfactory nodes do not have any intelligence at all - but not because "they are not at the level of current human general-intelligence"; it's because they can only do the single function they were created to do, and cannot learn anything else.
There are lines in intelligence - it's not a continuum. The first line, to use your example, is being able to recognize any human face after being programmed with a sample of human faces. The second line is to be able to recognize the concept of face of a different species, i.e. the concept of the "front" of a creature.
AI has not even reached the second line.
A next major line (I'm sure there are others before it) is being able to learn to self-program. This means: using only the program's I/O, teach it something new, without sending any commands that change its programming. I don't think AI will ever reach this, even though animals can do it to some degree.
A further line is being able to create something new, not simply respond to requests. This does not mean a refinement of an existing technique, but something totally new. Not even all humans actually do this, it's in the realm of genius.
I skipped the line separating animals from humans. But it is a line, it's not a matter of degree - there is a jump from non-sapient to sapient.
Even the least intelligent human is noticeably different from the smartest animal.
If you program it to recognize faces, it will not, on its own, learn to recognize cows.
So, if you walked into a VC's office to pitch a startup, you might say with a straight face that you believe the market for cow recognition exceeds the market for human facial recognition? Do you realize that to serve the cow recognition market, we simply clip RFID tags to their ears? http://images.google.com/images?q=cow+ear+rfid+tag
The artificial-machine face-recognition market is fueling the development of special intelligence in that area. There is relatively quick-return money in that area, so that is an area where intelligence is being developed first. Smell-recognition is another area where development of special intelligence can pay off relatively quickly, and therefore is providing impetus for development. So far, there are at least these relatively-fast-payoff markets for which special intelligences are being developed:
The imaging studies revealed significant face-selective activity in brain regions known to make up the distributed cortical face-processing network in humans. Further study showed distinct patches of activity in a region known as the fusiform gyrus - the primary site of face-selective activity in humans - when chimps observed faces.
Do you deny that the modern human brain probably resulted from evolution by natural-selection, rather than from Intelligent Design?
This post totally confuses me. What does the market for VC funding have to do with AI?
Are we even talking about the same things? Are you an AI? You are sort of acting like one (in this post), not responding to the points, but picking nouns from my post and finding info about them. (I'm joking - sort of.)
"....for which special intelligences" - is your definition of intelligence different from mine? Because none of the items on your list are forms of intelligence.
Yes, I am aware of the "special sub-processor" for recognizing faces.
What does the market for VC funding have to do with AI?
Technologies develop where they can earn a reasonable return on investment. If AI is developing first as facial recognition, instead of cow or everything recognition, it's because that is where the money is, relative to the amount of investment required. The return on an everything recognition AI investment might be higher, but the investment required would be higher still, implying a poor ROI.
a human can learn anything, just tell him about the subject.
Is there really no evidence of any human being unable to learn a given subject or concept or rule, given ample time and resources to learn it? Have you heard of Piaget? http://en.wikipedia.org/wiki/Conservation_(psychology)
Preoperational children have an inability to conserve liquid volume. If you give a preoperational child a glass of milk in a tall, thin glass, they will think they have more milk than if it were in a short, fat glass. The child will focus only on the dimensions of the glass, not on the volume of the liquid inside. [...] this confusion [is] born from a pre-operational child’s inability to understand the notion of reversibility; the ability to see the reversal of a physical transformation as well as the transformation itself.
Sometimes I find that after a few months working in the same job there's little left to learn, which causes boredom to set in.
The problem is that you either settle into it and plug away, or move on to something new and challenging.
If the pay and benefits are very good this counteracts the will you might have to leave, which I guess is what the company is trying to do. They don't want to lose the people who can do the job (perhaps even if you're coasting) so they pay you a salary/benefits to make you stay.
It seems like an easy question of efficiency to me. Can passion scale to large corporations? I don't think so, though perhaps large complex systems can be abstracted into a hierarchy of simple modules. The solution is not performance reviews, it's "small businesses". ('Middle management' is just a stereotype from poor implementations of the hierarchy... I hope.)
from my own comment on the blog:
Except that the talented people are going to the companies that give them good salaries and benefits and treat them with respect.
The important thing for employers is to hire good people who respect the company and themselves, and thus do good work. The important thing for employees is to work for good companies that respect them, and thus provide decent responsibilities (job ownership) and power (salary, benefits).
I believe that he's saying you shouldn't pay people well enough that they'll stay even if they really dislike the work. Instead, find someone who's excited about the work, and then find out how much they want.
There's a lot I don't know about business, but here's an idea I haven't seen elsewhere: pay people very little, but with immediate raises. I think the common wisdom is to avoid raises until you have revenue, but if you're going to spend, say, 60K/yr on someone and have funding for one year, you could structure the 60K so as to provide 5 or 10 percent increases in paycheck throughout the year. Since people are happier about getting a raise than about having any given amount, this might maximize payroll satisfaction.
You'd have to start them off at a pretty low salary, though (like 30K, if my back-of-the-envelope math-fu is good today). Would you take a job at a lower pay rate if they claimed to offer weekly raises? Seems like this would shift too much power to your employer.
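For what it's worth, the back-of-the-envelope math roughly checks out. Here's a minimal sketch (assuming, purely for illustration, monthly paychecks and a 10% raise each month; the function name and parameters are made up) of the starting paycheck that fits a 60K annual budget:

```python
def starting_paycheck(annual_budget, raise_rate, periods):
    """First paycheck amount such that paychecks growing geometrically
    by raise_rate each period sum exactly to annual_budget."""
    # Geometric series: p * ((1 + r)^n - 1) / r = budget, solved for p
    growth = (1 + raise_rate) ** periods
    return annual_budget * raise_rate / (growth - 1)

# 60K/yr budget, 12 monthly paychecks, 10% raise every month
p = starting_paycheck(60_000, 0.10, 12)
print(round(p, 2))    # first monthly paycheck
print(round(p * 12))  # annualized starting salary: a bit under 34K
```

So the 30K guess is in the right ballpark; with weekly raises compounding at a few percent each, the starting salary would drop further still.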
What happens in year 2 on the job, when it's no longer possible to keep raising the pay on a regular basis?
I don't think the idea is to promise the pay raises. Where would the production-incentive be, if the employees thought they had earned the pay-raises up front?
Actually, my idea was to promise the pay increases. This assumes that knowing you're going to get them won't affect the joy of getting them very much. But that could be wrong.
Also, the idea is not to prompt working harder because people actually think they're getting raises, but to make people you've already hired happier, on the assumption that happier people produce more/better.
Actually, my idea was to promise the pay increases.
OK. I stand corrected.
What do you do if, halfway through the year, the corporate board tells you to cut spending? Do you keep your promise to the employees, and risk offending the board? I've had similar continuous-pay-increase ideas, myself, but I'm not sure about promising them up front.
I like the idea of paying partly in equity, by the way. That way the employees are actually part of the company (co-owners), and they can genuinely feel they are building something that is theirs. Maybe the equity payments could be increased over time.
"What do you do if, halfway through the year, the corporate board tells you to cut spending?"
Whatever you would otherwise have done. If you have to lay people off, you lay people off. Since the yearly pay is known in advance, the only odd bit is the schedule of payments... if you have to lay someone off, you've actually spent less than you would have if all their paychecks are the same. But the details of all this aren't really the point -- I'm sure they could be worked out, if someone wanted to try it. I may if I hire employees at some point.
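The "spent less" point is easy to verify with toy numbers. A quick sketch (reusing the hypothetical 60K/yr budget with 10% monthly raises; the function name is made up for illustration):

```python
def cumulative_paid(first_paycheck, raise_rate, periods_elapsed):
    """Total paid out after some number of periods, with a geometric
    raise of raise_rate applied each period (0.0 means a flat salary)."""
    return sum(first_paycheck * (1 + raise_rate) ** k
               for k in range(periods_elapsed))

annual_budget, rate, periods = 60_000, 0.10, 12
flat = annual_budget / periods  # 5000/month on a flat schedule
# Starting paycheck for a rising schedule with the same annual total
first = annual_budget * rate / ((1 + rate) ** periods - 1)

# Lay someone off after 6 months: the rising schedule has paid out
# noticeably less than the flat one, even though both total 60K/yr
print(cumulative_paid(flat, 0.0, 6))           # 30000.0
print(round(cumulative_paid(first, rate, 6)))  # well under 30000
```

Both schedules converge to the same 60K by month 12; the rising one simply back-loads the spend, which is exactly why a mid-year layoff costs less.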
Paying partly in equity seems nice, but it seems even less sustainable than distributing an employee's paycheck unevenly.
Paying partly in equity seems nice, but it seems even less sustainable
In a company that were not growing, it would indeed be unsustainable to pay in equity (unless the value of the equity were continuously shrinking). But why would a startup company not be growing (valuation rising)? Unless it is a failure, isn't it normally growing?
I think he's saying that a cushy job can lead to a mediocre career. Once you get used to high pay, perks, and having it easy, it gets extremely hard to leave for a risky (but likely more rewarding) opportunity. That situation is the norm at many large corporations in highly regulated industries (government contracting, aviation, etc.). My first job was just like that, and I'm so glad I left.