The general question of AI's effect on economic growth is out there in the world. In this case I'm responding specifically to the recent debate in Asterisk magazine between Matt Clancy and Tamay Besiroglu. They defined explosive growth as "a minimum of tenfold the annual growth rate observed over the past century, sustained for at least a decade." Besiroglu argued explosive growth was possible, Clancy said it was unlikely. James Pethokoukis did an excellent summary of their debate on his Substack.
To set terms, by their definition of explosive growth we're talking about whether AI can generate growth in GDP per capita of about 20% per year for 10 years. That isn't going to happen. I even think Clancy's allowance for something like a 10-20 percent chance of explosive growth is wildly optimistic.
My reasoning revolves around competing feedback loops. Working in AI's favor is a positive feedback loop: AI adds to our ability to do R&D, which increases productivity, which lets us apply more AI to R&D (in addition to other things), which increases productivity, and so forth. Working against AI is a set of factors that create big negative feedback loops. I'll get into the details of those, but generally speaking they mean that an increase in the stock of ideas lowers the growth rate of new ideas. Because AI is bound to some (even minor) extent by physical constraints (servers, energy), there is something of a cap on the positive feedback loop, and I'll argue that the scale of the negative feedback loops is quite large.
Beyond the feedback loops, there is the additional issue of demand. Even if the positive feedback loop wins out, it isn't obvious to me that explosive growth in ideas translates into explosive growth in measured economic output. This is not a mechanical relationship.
AI and ideas
Besiroglu and Clancy work within a standard framework that ideas drive economic growth, and that ideas depend on resources put into R&D. By standard I mean that this is literally the textbook model of idea generation and growth (by the way, 4th edition of that textbook coming very soon). By "resources put into R&D" I mean really any efforts to generate new ideas, and not just activities that get formally classified as R&D expenses by firms. Scouting locations for a new Starbucks is idea generation. Deciding to re-assign a manager to a struggling branch is an idea. "Doing R&D" is just shorthand for "spending time thinking about how to use inputs more productively."
Connecting ideas to R&D efforts is standard. But as I'll get to below, there is an additional step connecting ideas to productivity and hence to economic growth. For now, focus on the R&D resources to ideas connection.
If you want to increase the growth rate of ideas by a factor of 10, then you need to scale up the resources applied to R&D by a factor of 10, roughly. Normally we think of those resources as the number of people assigned to doing R&D work (or scouting locations, etc.), which makes scaling up by a factor of 10 very hard. There are about 1.5 million people in the US now classified as doing R&D. Scaling up by a factor of ten to 15 million means shifting 13.5 million people out of their existing jobs/tasks and into R&D. And that doesn't even count the people doing informal R&D we'd have to shift around. This kind of scaling seems very unlikely.
The potential benefit of AI is that you can substitute chips for people. Adding 13.5 million instances of ChatGPT seems way more plausible than adding 13.5 million researchers. This still requires real resources. The AI instances "live" on GPUs that require electricity, have to be racked somewhere, and require physical internet connections to each other and to us. We already know that things like ChatGPT have a per-query cost thousands of times higher than a standard Google search, and that OpenAI is buying NVIDIA GPUs as fast as they can come off the assembly line. Moreover, a bunch of AIs talking to themselves does us no good. We need people who are capable of interacting with them to pose interesting questions, tweak prompts, and extract answers.
Regardless, there is scope for a positive feedback loop here. Spinning up millions of AIs to assist with R&D means a higher growth rate of ideas. Faster growth of ideas means faster growth in productivity, and faster growth in productivity means we can produce more of the resources needed to spin up AIs, like chips, energy, server racks, and AI engineers. That speeds up idea generation, productivity growth, and so on, allowing for the possibility of explosive growth.
Negative loops
Working against this positive feedback loop are several negative factors.
The first thing I have in mind limits the scope of the positive feedback. This is the issue of duplication in R&D. These AIs are going to be spun up by firms using them to pursue profits. A lot of those firms are going to ask their AI researchers similar if not identical questions. Both Eli Lilly and Pfizer are going to want to invent a next-generation statin (or whatever). They are not going to share their findings with one another, nor even admit to one another this is precisely what they are working on. Which means the loser in this process will have wasted a bunch of AI resources.
That kind of duplication is true now, of course, as human R&D workers often chase down similar ideas. But the problem exists nonetheless, and arguably might be worse with AI. One of the odd features of having time-limited humans run R&D is that they are idiosyncratic in what they have time to read and research, meaning that they are less likely to chase down identical paths. But if all these AIs are getting trained up on similar data (e.g. all of PubMed) then they are all operating with the same set of information. Even with perturbations, this seems likely to mean more similar answers. Even if I'm wrong about this (and that's very possible), the duplication due to firms being proprietary is real. This puts a cap on how strong the positive feedback loop can get. It doesn't eliminate the positive loop, just dampens it.
An actual negative feedback loop comes from the "ideas-are-getting-harder-to-find" problem. Bloom, Jones, van Reenen, and Webb (2020) looked at several distinct industries and found that despite vast increases in R&D resources, the growth rate of ideas didn't go up. Think of Moore's Law. Since 1971, R&D spending by chip companies went up by a factor of 78. But the growth rate of transistors on computer chips has remained almost perfectly in line with Moore's prediction of doubling every two years. 78 times the resources put towards generating new ideas, but the growth rate of ideas has not budged. The implication is that each time we doubled the number of transistors, this made the next doubling even harder to figure out. Bloom and co-authors find this to be true across lots of industries (pharma, ag, tech, etc.).
AIs don't make this go away. They allow you to throw more resources at the problem more quickly (the positive loop), but that just means the next idea is going to require even more resources. The negative feedback loop appears to be strong. Compared to the entire economy, the situation with transistors and chips is relatively benign! A back-of-the-envelope calculation from their paper would suggest that keeping idea growth at an explosive 20% per year might require R&D resources to grow at something like 100% per year. You might think, sure, AI can do that! Yes, the theoretical power of an AI might double each year, but we're talking about doubling the real resources (chips, energy, space, technicians, bandwidth, etc.) that these AIs require to run. That is a massive commitment of real resources, and it isn't obvious it can be met or maintained.
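To put a rough formula behind that back-of-the-envelope claim, here is a sketch in the spirit of the Bloom et al. setup, where $A_t$ is the stock of ideas and $S_t$ is the real R&D resources thrown at finding new ones (the exponent value below is my own calibration, chosen only to match the 20%-requires-100% figure, not their estimate):

$$\frac{\dot{A}_t}{A_t} = \alpha \, \frac{S_t}{A_t^{\beta}}, \qquad \beta > 0$$

Holding idea growth constant at some rate $g_A$ means the denominator $A_t^{\beta}$ grows at $\beta g_A$, so $S_t$ has to grow at $\beta g_A$ just to keep pace. With $\beta \approx 5$, sustaining $g_A = 20\%$ per year requires real R&D resources to grow at roughly 100% per year - doubling the chips, energy, and people every single year, forever.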
The real problem
That brings us to a key problem, which is something like a negative feedback loop. While you might be able to generate explosive growth in ideas, that does not mean you can generate explosive growth in productivity. The translation of ideas to productivity matters, and there are reasons to believe that this translation is not proportional. Increasing the growth rate of ideas to 20% doesn't mean productivity growth will go to 20%. It might not even push productivity growth to 2%.
Last year I reviewed a paper on "additive growth" by Thomas Philippon. He documented that productivity appears to grow additively rather than exponentially. His work doesn't invalidate the idea that R&D resources lead to idea growth, or that ideas grow exponentially. His work provides a clue to the apparent relationship between ideas and productivity. Additive growth implies that as ideas grow so does productivity, but that there are severe diminishing returns to those ideas.
More to the point here, productivity growth is inversely proportional to the level of productivity itself. This is a stark negative feedback loop. Even if the positive feedback loop of AI on idea generation overcomes the negative feedback loops of duplication and ideas-get-harder-to-find, there is still this looming issue that the productivity growth rate appears to decline as productivity goes up. Why does this happen? Philippon doesn't offer a precise reason, just documents the facts.
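A minimal way to see the negative loop he documents, writing $A_t$ for the level of productivity and $c$ for the roughly constant additive step:

$$A_{t+1} = A_t + c \quad \Longrightarrow \quad g_t = \frac{A_{t+1} - A_t}{A_t} = \frac{c}{A_t}$$

The increment is constant but the base keeps rising, so the growth rate mechanically falls as productivity goes up. Even a permanent jump in $c$ - more and better ideas arriving every year - raises the growth rate only temporarily, because the higher level of productivity it creates pulls the rate right back down.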
You have to be careful here. This additive growth problem isn't saying that AI can't generate new ideas, or that those new ideas are not productive. It isn't saying that AI can't increase the growth rate of ideas. It is telling us that historically turning those ideas into productive use is hard. I speculated in the earlier post that this is because integrating new ideas into the network of existing ideas gets harder and harder as ideas get more complex. I suspect that a significant reason is also just human inertia and uncertainty. We presume that firms and people will obviously accept and implement new ideas to make themselves more productive immediately. But a passing familiarity with people will suggest that this is not the case.
Before I went to grad school, I worked for several years as a "consultant", meaning I was a contract programmer. I worked on a job for a big airline at one point while they were introducing e-tickets. These are a great idea compared to paper tickets. Incorporating them into the existing booking and accounting systems of the airline was a multi-year job. Why? In part because we had to fold this good idea into a morass of legacy systems, each of which had been a good idea at the time. We were taking electronic records from the shiny brand-new electronic ticketing system and forcing them into a mainframe system written in COBOL. It took us a year just to map out and understand the hand-processing procedures for paper tickets that were taking place in a giant industrial park so that we could replicate the procedure for e-tickets. This whole project probably could have gone faster, but the number one objective of the airline people overseeing the project was to not break any existing systems. The airline wasn’t going to shut down for a year while we figured out an elegant solution to their e-ticket accounting system.
The fact that AI can do things faster than a person, in ways a person cannot wrap their head around, generates a lot of possible benefits. It is also going to be the reason a lot of firms and people deliberately slow the implementation process down. AI doesn't solve the human inertia problem, which means this kind of negative feedback loop from additive productivity growth will almost certainly keep growth from becoming explosive.
Bonus pessimism
The last issue with explosive growth in AI doesn't really come from a negative feedback loop. It comes from the fact that economic growth depends on ideas and preferences. There are two ways to "spend" an increase in productivity driven by new ideas. You can use it to produce more goods and services given the same amount of inputs as before, or you can use it to reduce the inputs used while producing the same goods and services as before. If we presume that AI can generate explosive growth in ideas, a very real choice people might make is to "spend" it on an explosive decline in input use rather than an explosive increase in GDP.
Let's say AI becomes capable of micro-managing agricultural land. There is already a "laser-weeder" capable of rolling over a field and using AI to identify weeds and then kill them off with a quick laser strike. Let's say AI raises agricultural productivity by a factor of 10 (even given all the negative feedback loops mentioned above). What's the response to this? Do we continue to use the same amount of agricultural land as before (and all the other associated resources) and increase food production by a factor of 10? Or do we take advantage of this to shrink the amount of land used for agriculture by a factor of 10? If you choose the latter - which is entirely reasonable given that worldwide we produce enough food to feed everyone - then there is no explosive growth in agricultural output. There isn't any growth in agricultural output. We've taken the AI-generated idea and generated exactly zero economic growth, but reduced our land use by around 90%.
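In the simplest possible accounting, with food output equal to productivity times land (ignoring every other input for the sake of the sketch), the same tenfold idea can be spent either way:

$$Y = A \cdot L: \qquad A \to 10A,\ L \text{ fixed} \ \Rightarrow\ Y \to 10Y \qquad \text{or} \qquad A \to 10A,\ Y \text{ fixed} \ \Rightarrow\ L \to L/10$$

Take the second option and measured agricultural output doesn't move at all, while land use falls by 90%.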
Which is amazing! This kind of productivity improvement would be a massive environmental success. But ideas don't have to translate into economic growth to be amazing. More important, amazing-ness does not necessarily lead to economic growth.
Another way to conceive of this limited translation of ideas to economic growth is as a manifestation of the heterogeneous impact of AI. AI can ramp up R&D, but some industries will be more amenable to AI-fueled innovations than others. Maybe pharma and some tech applications can really lean into AI and accelerate their own idea growth rates. But for others it won't have as big an impact. How exactly is the restaurant industry going to leverage AI to increase productivity at 20% a year? Don't forget that the government accounts for about a fifth of economic activity. While it might hypothetically have a lot of gains from AI, what are the chances it actually applies them?
Regardless, let's say that 50% of the economy is subject to AI-based explosive growth in ideas. And let's say that this actually involves a 20% growth rate in productivity. The other 50% of the economy experiences the normal 1%-ish gain in productivity. What's the aggregate growth rate of productivity? About 10.5%, which just barely gets you over the line for the definition of explosive growth. The actual calculation is a little more subtle than that, as it depends on the impact of each industry on others, but this is the right scale.
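The arithmetic behind that number is just a spending-share-weighted average of the two sectors' productivity growth rates:

$$g_{\text{agg}} = s \, g_{\text{AI}} + (1 - s) \, g_{\text{other}} = 0.5 \times 20\% + 0.5 \times 1\% = 10.5\%$$

The same formula with a 25% AI-fed share is where the 5.75% figure further down comes from.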
What happens next? Well, that depends on what we do as consumers in response to the dramatic change in relative prices that this disparity in productivity growth creates. The AI-fed products will become very cheap relative to the non-AI products. To keep growth explosive, or to ramp it up even further, the response would have to be that we spend more on AI-fed products and less on non-AI products. In other words, we have to be willing to treat AI-fed products as substitutes for non-AI products.
I find that hard to believe. As we've seen with historical declines in the relative price of things like clothes, cars, and appliances, we use the savings to spend less on them and more on relatively expensive things. We are all William Baumol.
The non-AI industries are likely going to continue to increase their share of spending in the economy as we take advantage of the AI-fueled savings on those products. What happens then is that the weights in the aggregate productivity calculation tilt away from AI. While AI-fed stuff starts at 50% of the economy, it quickly drops to 40%, 30%, 20%, and so on. The faster the productivity growth, the faster the decline. This then drags down the aggregate growth rate of productivity. When AI-fed products are only 25% of the economy, for example, then even keeping productivity growth at 20% in that area leads to only 5.75% aggregate growth in productivity. No longer explosive.
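To see that drag in motion, here is a toy simulation of my own (a sketch, not anything from the debate): the AI-fed sector's productivity grows at 20% per year and the rest at 1%, sector prices move inversely with productivity, and spending shares follow CES demand with an elasticity of substitution below one, so the fast, cheap sector keeps losing weight in the aggregate.

```python
# Toy Baumol-style drag on aggregate productivity growth (illustration only).
# Assumptions are mine: competitive pricing (price inverse to productivity) and
# CES spending shares with an elasticity of substitution below one.

G_AI, G_OTHER = 0.20, 0.01   # sector productivity growth rates
SIGMA = 0.5                  # elasticity of substitution (< 1: poor substitutes)
YEARS = 10

a_ai, a_other = 1.0, 1.0     # sector productivity levels, normalized to 1
share_ai = 0.50              # initial spending share of the AI-fed sector

for year in range(1, YEARS + 1):
    # Aggregate productivity growth is (roughly) the share-weighted average.
    g_agg = share_ai * G_AI + (1 - share_ai) * G_OTHER
    print(f"year {year:2d}: AI-fed share = {share_ai:.2f}, aggregate growth = {g_agg:.3f}")

    # Productivity grows; prices fall in proportion to productivity gains.
    a_ai *= 1 + G_AI
    a_other *= 1 + G_OTHER
    p_ai, p_other = 1 / a_ai, 1 / a_other

    # CES expenditure shares with equal taste weights:
    # each share is proportional to price ** (1 - SIGMA).
    x_ai = p_ai ** (1 - SIGMA)
    x_other = p_other ** (1 - SIGMA)
    share_ai = x_ai / (x_ai + x_other)
```

In this parameterization the AI-fed share falls from 50% to roughly 30% within a decade, and aggregate productivity growth drifts down from 10.5% to around 7%, heading toward the 5.75% neighborhood as the share keeps shrinking.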
It's all good!
5.75% growth in aggregate productivity would be A-MAZE-BALLS. Even 2% growth in aggregate productivity would be incredible! I'm not dismissing the chance that we could see substantial growth in aggregate productivity. But the concept of explosive growth described in the debate between Clancy and Besiroglu is just not plausible.
For every positive feedback loop kicked off by AI, there are significant negative feedback loops in play. These are not things that AI can muscle through, because they are either inherent to the scope of ideas or are based on the fact that ideas have to get implemented and used by people. Further, AI doesn't change the fact that our choices on how to "spend" the benefits of AI are just that, choices. We may not want to translate those ideas into growth; we may want to translate them into fewer work hours or into saving natural resources.
This does nothing to deny that AI could generate incredible ideas that fundamentally change our lives. But the translation of those ideas into economic growth is not straightforward and is unlikely to reach explosive rates.
So happy to see someone else write on this topic, and obviously I share your view more than I disagree. Still, having talked with proponents of the explosive growth view a lot, I wonder what you would say to the following?
1. Doesn't "ideas are getting harder to find" already implicitly include the leaky connection between ideas and productivity? And quantitatively, if you can 1000x your research population, doesn't it kind of blow this issue out of the water, at least for a significant transition period?
2. Would your answer change if we also had robots that have basically human ability to sense their environment and move objects in 3D space (and assuming these robots are not that costly to build)?
An important point here is that R&D isn't really where I'd expect to find growth from AI; *automation* is. One of the most common use cases for AI is *quality inspection*, another is *document processing*. Also, I think asking ChatGPT for ideas is a dumb model of both innovation and AI. One thing ChatGPT will definitely not give you is a new idea. That's on you.
One really important thing AI systems might help a lot with is *testing* new ideas through simulation. This is a big big deal; there's a very good paper about how the apparent slowdown in pharma innovation may be because our technology has improved immensely in identifying possible drug targets and generating candidate molecules but much less in evaluating them, not least because researchers' incentives are skewed towards discovery more than evaluation:
https://www.nature.com/articles/s41573-022-00552-x.epdf?sharing_token=UAd7xkgoc3sGOe1KIkhqh9RgN0jAjWel9jnR3ZoTv0NCj65ouIhd_KrJ7CxCFmbJ2TFq0lOfa404SWvMspmI5HUyItjPqmmnyWXClFZb-miSYwYal_WrrGSIEXhlXlOsdbeagcaR77R65JnT5n-db_cugkiD4npkm_W7d_Bvdqk%3D
Another one is that there was nothing about Gordon Moore's original prediction that said it wouldn't require lots more R&D effort. The R&D effort is what delivered the results. Moore's law is some combination of an empirical observation, a planning assumption, advertising copy, and a source of mystical inspiration (especially if you work for Intel, which treats it like Sikhs do their holy book - the book itself is considered a guru). But really importantly, Moore originally stated it in terms normalized *for cost*, the number of transistors for a given expenditure. It took a lot of R&D but the gains were worth it; transistors have not become more expensive.