“A Nobel laureate on the economics of artificial intelligence” (MIT Technology Review, March/April 2025):
For all the talk about artificial intelligence upending the world, its economic effects remain uncertain. But Institute Professor and 2024 Nobel winner Daron Acemoglu has some insights.
Despite some predictions that AI will double US GDP growth, Acemoglu expects it to increase GDP by 1.1% to 1.6% over the next 10 years, with a roughly 0.05% annual gain in productivity. This assessment is based on recent estimates of how many jobs are affected—but his view is that the effect will be targeted.
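For context, a minimal back-of-the-envelope sketch (my arithmetic, not the article's): a cumulative GDP gain of 1.1 to 1.6 percent over ten years implies only about 0.11 to 0.16 percent of extra growth per year, consistent with the small productivity figure Acemoglu cites.

```python
# Illustrative arithmetic only: convert a cumulative 10-year GDP gain
# into the annual growth rate it implies (assuming compounding).
def annualized(cumulative_gain: float, years: int = 10) -> float:
    """Annual rate implied by a cumulative fractional gain over `years` years."""
    return (1 + cumulative_gain) ** (1 / years) - 1

low = annualized(0.011)   # 1.1% over 10 years -> roughly 0.11% per year
high = annualized(0.016)  # 1.6% over 10 years -> roughly 0.16% per year
print(f"{low:.4%} to {high:.4%} per year")
```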
The full paper is available for download as a PDF.
The news gets better:
“We’re still going to have journalists [especially in Gaza where food, health care, education, and shelter are all paid for by US/EU taxpayers via UNRWA?], we’re still going to have financial analysts, we’re still going to have HR employees,” he says. “It’s going to impact a bunch of office jobs that are about data summary, visual matching, pattern recognition, etc. And those are essentially about 5% of the economy.”
If “artificial intelligence” includes self-driving, I’m not sure that the effects on the economy will be small. As of 2016, supposedly about 3 percent of jobs were for drivers per se (CNBC). As anyone who has taken an Uber or Lyft can attest, many of these folks speak no English. If their driving jobs disappear, at least some percentage of them will be on track for the lifetime full welfare lifestyle (public housing, Medicaid, SNAP/EBT, and Obamaphone).
Related: Mindy the Crippler is preparing for the stock market panic when people realize that AI is fizzling…
Related:
- “Did Paul Krugman Say the Internet’s Effect on the World Economy Would Be ‘No Greater Than the Fax Machine’s’?” (Snopes; answer: Yes, in 1998)
Garbage in (pseudo-code prompts), garbage out (Stack Overflow) so far. Someone has to develop ever more complicated pseudo-code prompts, and someone has to keep putting examples on Stack Overflow. If Teslas are just replaying recordings of humans driving, how is the algorithm going to keep improving once no humans are driving anymore?
It’s a relatively well-understood problem, and there are methods to combat it. For example, Google developed a model which diagnoses diabetic retinopathy (if I remember correctly) from a picture of the eye better than human doctors.
Do you think it’s fair to compare the AI results for diabetic retinopathy detection to the average of a group, G, of human doctors? I’m assuming that’s what you mean.
I think that it must be compared to the result of a discussion among a group of human doctors, because that’s what the AI has for training.
In fact, the right way would be to take a group of doctors, have them discuss the case for some time, let them reflect on the discussion, and then have them come to a single conclusion. Only if the AI beats that result can we say the AI is better than G’s intelligence.
I do agree that in practice a doctor gets to look at the eyes for only a short time before coming to a conclusion, and in that setting, as you mentioned, the AI outperforms every individual in G. But, IMO, that type of comparison doesn’t pit AI against human intelligence, because it ignores the fact that humans communicate with each other, change each other’s opinions, and can reflect on things to improve our understanding.
Of course it’s a more than fair comparison. This test costs $0.50 for the AI and $5,000 for the consortium of doctors you propose. (You assume that the consortium is better, by the way; I don’t know whether it actually is.)
There is a new capability, recently demonstrated and a harbinger of things to come: AI learned to distinguish gender from retinal photos. Humans don’t know how to do that.
SK –
Since gender is fluid, eyes must also be fluid if AI is able to detect gender that way.
Other than shorting Nvidia, how can we short “AI”, that is, Large Lying Machines (LLMs)?
It is quite easy for LLMs to replace economists from academia.
lion and Thersnoboks, I have some thoughts on other AI shorts (if that is your inclination), but I’m wondering if you could articulate your short thesis a bit. Please educate those of us who don’t work directly in the industry but who are very interested in why you are negative on AI generally. A balanced upside/downside thought process would be super interesting (I’m guessing your downside arguments are more likely and/or will prevail, based on your comments, but it would be great to hear you articulate both sides). I will follow up with shorts.
Nice try, ChatGPT
It’s not even funny anymore. People in IT are getting fired right now because AI is replacing them. There will be 0 jobs, nada.
Every month it gets better and better. To see where this leads, read AI 2027; they actually modeled it, and the results are grim.
Rearranging the deck chairs on our AI super-yacht? Personally, I can’t wait for Eliezer’s new book “If Anyone Builds It, Everyone Dies: Why Superhuman AI Would Kill Us All”.
https://www.amazon.com/gp/product/B0DZ1ZTPSM
Available September 16, 2025? He should have used AI to write it and finish yesterday!
Microsoft and Google now say that 30% of their new software is AI-generated.
Acemoglu has a big reputation, but all his stuff seems dubious to me.