Artificial Intelligence and the Future of Labor Demand [pdf] (economics.mit.edu)
146 points by Dowwie on March 25, 2019 | 33 comments


I think he's trying to make the argument that AI need not be limited to automating tasks that directly replace human jobs, but can also augment human labor in a way that enables people to do new things that couldn't previously be imagined. It's a variation on the old economics story of how the bulldozer does not replace humans with shovels; it enables those humans to build skyscrapers instead of being limited to building shacks.

He's making a valid argument in the abstract. However, like so many authors from outside the field, his concept of AI is primarily informed by works like The Singularity Is Near and Superintelligence, and completely disconnected from the current state of the art and its development for the foreseeable future.


Can you elaborate on how his concept of AI is disconnected from reality?


Currently AI is built around optimizing a specific objective, but AGI has no single thing to optimize for. Human intelligence has a lot of issues, mental health for example, because it's not so easy to optimize for general intelligence.

Fiction glosses over that and generally paints AI as some extremely rational, extremely intelligent system. But rationality may be more of a social construct than we assume, with many or even most AGIs turning to criminal or deviant behavior.
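
To make the first point concrete, here is a minimal sketch (my own toy example, not anything from the paper or the comment above) of what "optimizing something" means for current ML: one explicit, hand-chosen loss function driven down by gradient descent, and nothing outside that objective exists for the system.

    # Toy illustration: current ML minimizes one explicit, hand-chosen objective.
    # Here we fit y = w * x by gradient descent on mean squared error.
    data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2)]  # (x, y) pairs, y roughly 2 * x

    w = 0.0    # the single parameter being learned
    lr = 0.01  # learning rate

    for step in range(1000):
        # gradient of the MSE loss with respect to w
        grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
        w -= lr * grad  # follow the gradient of the one and only objective

    print(round(w, 2))  # roughly 2.0; nothing outside this loss exists for the system

An AGI would have no such single scalar to descend on, which is the gap the comment above is pointing at.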


His concept of AI seems to be more AGI in the sense of abstract reasoning and creative problem solving applied to problems in one domain, where the AI can then accumulate the lessons learned and apply them to another domain, as opposed to the current reality, which is mainly stringing classifiers together.

I work as a consultant, and I get these sorts of questions from my clients all the time. "Can't we just load the (unlabeled) data into the AI solution and just have it 'figure things out' without training it?" "Can't we take the model that we already trained (on one very narrow domain) and use it (on some completely different domain) without going through all that again?" People want unsupervised transfer learning with a whole lot of a priori knowledge baked into it.
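
For a rough picture of what that second question actually involves in practice, here is a hedged sketch of standard transfer learning with PyTorch/torchvision (my own illustration; the two-class head and new_domain_loader are hypothetical stand-ins): even when a pretrained model is reused, it still has to be fine-tuned on labelled examples from the new domain.

    # Illustrative transfer-learning sketch, not anyone's production setup.
    # Reusing features learned on one domain (ImageNet) for another still
    # requires labelled data from the new domain.
    import torch
    import torch.nn as nn
    from torchvision import models

    model = models.resnet18(pretrained=True)   # general features learned on ImageNet

    for p in model.parameters():               # freeze the pretrained features
        p.requires_grad = False

    num_classes = 2                            # assumed: a small, narrow new task
    model.fc = nn.Linear(model.fc.in_features, num_classes)  # fresh, trainable head

    optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()

    # The step clients hope to skip: labelled examples from the *new* domain.
    # new_domain_loader is hypothetical; without it, nothing gets "figured out".
    # for images, labels in new_domain_loader:
    #     optimizer.zero_grad()
    #     loss = loss_fn(model(images), labels)
    #     loss.backward()
    #     optimizer.step()

In other words, the "a priori knowledge" clients are hoping for is really a labelled data set from the new domain, plus a human deciding what the new head should predict.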


Interesting paper. They say that markets fail to bring forward some types of "reinstating" AI, and they give three examples, education being one. But I think otherwise when it comes to education AI. Lots of people are trying to develop that kind of thing: an AI that assesses and tailors to individual learners' needs. And I think so many are trying to do that because they recognize what a huge market there will be if they can deliver results. I think the market will yield that one. Then they mention healthcare, which is such a quagmire of regulations and legislatively enabled special interest groups that one can hardly call it a market. If it really were a market, it would already look very different than it does now and might even incentivize the "right kind of AI".


I don't see where the assertion made in the abstract, that current labor demand stagnation and wealth inequality are due to AI, is evidenced in the body of the paper. And I'd find it very surprising if it were true.

Anecdotally, I'd also argue against the thrust of the paper. New roles are being created as a consequence of AI. One could argue that a significant proportion of data science is a direct consequence, for example.

That's not to say that it won't be problematic. It's not at all obvious that the occupants of jobs that will likely disappear through AI automation (call centres and drivers in the immediate future) are the people that will be employed in the new fields. However, this strikes me as a version of the "Detroit problem" rather than something new.


There is also the matter of scale. The number of jobs being automated away is vastly greater than the number being created. See Yang2020.


pg.3 - this section also references, amongst others, this paper by the same authors: https://economics.mit.edu/files/15254


This paper fails to separate out the disemployment effects caused by offshoring, muted demand, and robots.

I've seen this in a few other papers on this topic too (e.g. Ball State University, 2017).

I'm also a bit suspicious of their rationale for not looking at German data (German adoption of automation isn't much different, but the disemployment levels are wildly different).


I saw that, and it would be fair if they were saying that automation was a principal driver (which seems reasonable). It would also gel well with saying that more focus on automation will just make things worse.

It's the statement that AI is causing the current issues that I find unsubstantiated, unless they're including current manufacturing robotics as AI, which is quite a stretch.


> Most AI researchers and economists studying its consequences view it as a way of automating yet more tasks. No doubt, AI has this capability, and most of its applications to date have been of this mold — e.g., image recognition, speech recognition, translation, accounting, recommendation systems, and customer support. But we do not need to accept this as the primary way that AI can be and indeed ought to be used.

I don't agree with the premise that automation is necessarily "the wrong kind of AI". Empowering workers in the way the article describes, as a means of creating "many new, high-productivity tasks for labor", mostly requires automating simpler routine tasks so they can be delegated. In that sense, I think the main problem is making AI tools accessible to the same people whose tasks are being automated in the first place.


I know Daron from his famous book 'Why Nations Fail'. He is an exceptional writer and has an extensive background in economics. It looks like he's also updating himself for this paradigm change, and his paper looks very interesting.


I am sympathetic to those who are already working and may be left stranded by advances in AI. But I am skeptical of economic incentives that may lead to the birth of too many people in an attempt to preserve the status quo. So assume for now that AI will indeed lead to a decrease in the demand for human labor. Eventually that decrease will lead to a decrease in the human population.

Right now the explosion of the human population, which will peak sometime in this century or the next before declining, is destroying earth's natural habitats. I see the decrease in demand for human labor as a good thing in the long run, although there are lots of unfortunate short-term effects which hopefully we can mitigate.

The decrease in global human population is already being test-run in a few countries, and the effects are mild so far: very low unemployment rates, stable or declining property prices. One worry is pay-as-you-go pension systems, but that is a small price to pay for what we have to do in the end. I simply do not understand the fear of a falling demand for labor and a falling population.


"... although there are lots of unfortunate short-term effects which hopefully we can mitigate."

You paper over a world of demons and degradation with such neutral words. People want to live. People want to have children, see them grow, see them thrive, and have children of their own.

Your neutrality would only make sense if the rewards of a new age were more equally spread out. But both the social and hedonic treadmills drive human valuation insane if an elite is constantly pumping out a vision and an aspiration that is mocking when compared with ground-level humanity.

What you are really saying is "I hope there is a massive war so excess human capital stock is cleared and the relative valuation of each human is raised for a generation or two."

What else is there for superfluous human capital to do but to shred itself apart when even entry into the games of social mobility seem closed off?

To hammer the point home: humans can be literally "richer" but much more miserable than in a society where everyday cognition is less stressed by the need to push forward in a rat race.


> What you are really saying is "I hope there is a massive war so excess human capital stock is cleared and the relative valuation of each human is raised for a generation or two."

Your biases are showing, or you're being hyperbolic to an unreasonable degree. The person you're referencing simply says they believe there could be a future where the world needs less human labor, and they hope we can mitigate the problems that would come with shifting from a labor-based economy to whatever comes next. To assume they meant massive war instead of something like Universal Basic Income is silly; they suggested no such thing.


> What you are really saying is "I hope there is a massive war so excess human capital stock is cleared and the relative valuation of each human is raised for a generation or two."

They referenced Japan in their post (a country with a fairly steady but slowly declining population due to slowing birth rates), so to insinuate they were hoping for war to decrease the population is way off the mark.


I found your comment here lucid and insightful. Do you blog anywhere?


Although I agree that the human population will decline at some point in the next two centuries (mainly due to wars and even mandatory birth control policies), don't think for a second that the current overpopulation is because of human labor demand. I would attribute it to people loving to have sex, better access to medicine, and not enough education about the consequences of not using some form of birth control.


I think you are more likely to see many more policies directed at encouraging people to have children than "mandatory birth control". Only a few countries remain with a very high reproduction rate, and it drops fast with economic development; you can see some lectures by Hans Rosling on the topic. If anything, underpopulation will be a far more pressing issue than overpopulation.


> So assume for now that AI will indeed lead to a decrease in the demand for human labor. Eventually that decrease will lead to a decrease in the human population.

I am not sure why human labor is related to human population. Suppose automation replaces ALL necessary human labor; does that mean the human population should go to 0? Human labor doesn't exist to justify humanity; it is there only to provide for some necessities and for self-actualization. If the necessities are provided by automation, then more power to humans.


The McKinsey Global Institute has published/invested a lot on the topic and has a good set of podcasts: https://www.mckinsey.com/mgi/overview/the-new-world-of-work-...


>>> recently, the US government has been more frugal in its support for research and more timid in its determination to steer the direction of technological change

Decoupling innovation from incentive. The danger is that hard-tech funding potentially forms a "black hole", with limitless funds poured in without any further illumination of the problem.


Is unemployment the problem? Or the redistribution system itself?

It's sad that Universal Basic Income and 20-hour work weeks have been theorized for decades but have never reached the political debate.



Is this completely speculative, contingent on the far-fetched notion of AGI? Which sectors of the labor market have until now been appreciably affected by ML? How many image labelers has ML put out of business? Quite the contrary, right? Lots of people in Southeast Asia are now working in labeling sweatshops.

The only industry I see potentially affected (within ~10 years) is commercial driving.


The paper doesn't cover speculative AGI stuff at all (or driving, for that matter) but focuses on applying existing technologies (image recognition, ML classification, robotics) to different fields.

The general conclusion is that some technologies will replace labour and some will augment it, increasing human employees' usefulness, and that public policy should be directed towards favouring labour-augmenting technologies over labour-replacing ones.

(It's not that long and quite readable for an economics paper, although it doesn't aim to provide any evidence for its assertions either.)


And packaging/warehouses, probably... but the problem is that even this limited change will have a much wider effect on the economy because, for instance, self-driving trucks don't stop to eat in restaurants.


It will be a while until we get to the point of self-driving trucks, at least judging by my brother's job (he's a trucker who has been driving all over Europe, from Spain up to Turkey).

For one, how do you deal with theft? An unattended truck stopped in the middle of nowhere, carrying goods potentially worth tens if not hundreds of thousands of euros, is going to attract a lot of bad people. Second, how do you deal with the myriad of specific unknown unknowns that happen daily in the life of a truck driver? How would an AI truck navigate the unmapped dirt roads of Normandy in order to get to a potato field (a job my brother had to carry out last year)? And there are countless other examples like this.

And I'd add a third point: a truck driver is more than just a plain driver; he has to deal with lots and lots of paperwork, especially in a place like Europe, which still has plenty of borders in place (even the "soft" borders are not that "soft" when you're carrying goods around).


How does a truck driver deal with theft currently? Are potential thieves discouraged by the presence of a human because the penalties for harming humans are higher than those for mere theft?


> How does a truck driver deal with theft currently

I can tell you what he did when some fuel thieves tried to steal fuel from his truck in the middle of nowhere in Bulgaria: he took an iron bar from his cabin and started chasing them, accompanied by a Hungarian fellow truck driver who had noticed the thieves in the first place.

> Are potential thieves discouraged by the presence of a human because the penalties for harming humans are higher than those for mere theft

Yeah, that’s how things stand right now in these parts of the world (I live in Romania). An unattended house in a village will most certainly attract some unwanted eyes, much more so compared to a house where people live day-to-day. The same goes for trucks, or for anything with value, really.


It's interesting to see a marxist/materialist analysis of labor and technology in a mainstream academic paper.

Technology and automation increase the total productivity in an economy, but only people who own these technologies benefit from the profits.

For example, say that Uber succeeds with creating self-driving cars in a particular city (but Lyft does not). People in that city might benefit from having cheaper Uber rides - but is that benefit offset by the number of people who can no longer make a living driving for Uber? Let's say those drivers now only drive for Lyft - in the economy of this city, these Lyft drivers now compete with self-driving Ubers that have a much lower cost of operation because Uber doesn't have to pay minimum wage.

The paper mentions a couple of areas where AI could help increase productivity and lower costs in a way that benefits working people as a whole - healthcare, education, and augmented reality. I think AI could also be very useful for people who are disabled.

However, in a capitalist economy, the biggest incentive for creating AI is to reduce labor costs - compare something like a warehouse worker, who must be paid every week, with a warehouse AI robot. After the initial investment developing this warehouse AI, the company that owns this warehouse AI has a source of productive labor with a very low operational cost, much lower than employing warehouse workers.

The incentive for developing AI that can replace physical human labor is huge - and humanity as a whole could reap the benefits if we collectivized ownership of these technologies, rather than allowing a few shareholders to extract profit from their perpetual labor machines.
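
To put very rough numbers on the warehouse comparison above (entirely made-up figures of mine, not anything from the paper), the incentive boils down to a simple break-even calculation: a one-time cost for the machine against a recurring cost for the worker.

    # Entirely hypothetical numbers, only to illustrate the incentive described above.
    worker_weekly_wage = 800.0      # assumed fully loaded weekly labor cost
    robot_upfront_cost = 250_000.0  # assumed one-time development/purchase cost
    robot_weekly_opex = 50.0        # assumed weekly power + maintenance

    weekly_saving = worker_weekly_wage - robot_weekly_opex
    breakeven_weeks = robot_upfront_cost / weekly_saving
    print(f"break-even after ~{breakeven_weeks:.0f} weeks")  # ~333 weeks, about 6.4 years

Past the break-even point, the weekly saving accrues entirely to whoever owns the machine, which is exactly the ownership question raised above.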


>>Technology and automation increase the total productivity in an economy, but only people who own these technologies benefit from the profits.

That is not true. It lowers consumer costs, reducing living expenses and making it easier for more people to come to own technology. Look at smartphone ownership rates since 2007: they went from 40 million people owning one to 2.5 billion today.

Thanks to the proliferation of automation, in the form of factories with mechanized manufacturing processes, smart phones are much more affordable today than in 2007.

A smart phone is a computer with a camera and microphone that enables and automates many tasks, meaning the automation of smart phone manufacturing is increasing the automation technology that people own.

>>However, in a capitalist economy, the biggest incentive for creating AI is to reduce labor costs - compare something like a warehouse worker, who must be paid every week, with a warehouse AI robot.

Lower labor costs per unit of output mean more people can afford services and products, including products that allow them to automate their own personal production.


Interesting topic, but I would have liked it more if it had included even a single trend graph.





