10 Comments

So happy to see someone else write on this topic, and obviously I share your view more than I disagree. Still, having talked with proponents of the explosive growth view a lot, I wonder what you would say to the following?

1. Doesn't "ideas are getting harder to find" already implicitly include the leaky connection between ideas and productivity? And quantitatively, if you can 1000x your research population, doesn't it kind of blow this issue out of the water, at least for a significant transition period?

2. Would your answer change if we also had robots that have basically human ability to sense their environment and move objects in 3D space (and assuming these robots are not that costly to build)?

An important point here is that R&D isn't really where I'd expect to find growth from AI; *automation* is. One of the most common use cases for AI is *quality inspection*; another is *document processing*. Also, I think asking ChatGPT for ideas is a dumb model of both innovation and AI. One thing ChatGPT will definitely not give you is a new idea. That's on you.

One really important thing AI systems might help a lot with is *testing* new ideas through simulation. This is a big, big deal; there's a very good paper about how the apparent slowdown in pharma innovation may be because our technology has improved immensely at identifying possible drug targets and generating candidate molecules, but much less at evaluating them, not least because researchers' incentives are skewed towards discovery rather than evaluation:

https://www.nature.com/articles/s41573-022-00552-x.epdf?sharing_token=UAd7xkgoc3sGOe1KIkhqh9RgN0jAjWel9jnR3ZoTv0NCj65ouIhd_KrJ7CxCFmbJ2TFq0lOfa404SWvMspmI5HUyItjPqmmnyWXClFZb-miSYwYal_WrrGSIEXhlXlOsdbeagcaR77R65JnT5n-db_cugkiD4npkm_W7d_Bvdqk%3D

Another one is that there was nothing in Gordon Moore's original prediction that said it wouldn't require lots more R&D effort. The R&D effort is what delivered the results. Moore's law is some combination of an empirical observation, a planning assumption, advertising copy, and a source of mystical inspiration (especially if you work for Intel, a company that treats it the way Sikhs treat their holy book: the text itself is considered a guru). But really importantly, Moore originally stated it in terms normalized *for cost*, the number of transistors for a given expenditure. It took a lot of R&D, but the gains were worth it; transistors have not become more expensive.

Great piece.

The only part I disagree with is "We may not want to translate those ideas into growth, we may want to translate them into fewer work hours or saving natural resources."

Was it Keynes who predicted we'd have a 15-hour work week by now?

I read a working paper that assessed the history of the industrial revolution(s) and found it took about 60 years for a scientific discovery to become a substantial industry. In the case of current AI systems, we have a remarkably well-defined date for when they were first shown to work on sample problems. This suggests it will be around 2080 before they contribute significantly to productivity.

The example I saw was interchangeable parts. Eli Whitney scammed the federal government with his rigged demo in the late 18th century. Thirty years later, his son and Sam Colt managed to deliver, and thirty years after that the machining industry started to adopt what they called "armory practice". Nowadays, the big breakthrough has been DRM-limited, non-interchangeable parts.

Reality is structured like a well-balanced videogame

I'm liking this post for the excellent use of a truncated superlative.

This is a bit simplistic. The problem isn't a lack of ideas. We have a surfeit of ideas. We have more ideas than ever. Read any academic research blog about the problem of getting funding to try out new ideas, the caution of funding agencies, the slow growth of funding, the neglect of less trendy areas of research, and so on. If all AI can do is help us generate ideas, it is, at best, worthless.

Look at GLP-1 drugs. They were first developed in the 1990s and demonstrated in trials to help diabetics and people wanting to lose weight. Why did it take another 30 years before they became widely available? That's easy: someone at Pfizer made a resource-based decision and decided that non-injectable diabetes drugs would get more resources and GLP-1 would get less. Right now, GLP-1 drugs seem like a good idea, but in the early 1990s they didn't. At no time was the problem a lack of ideas.

The best uses of AI I've seen are in using AI as an interpolation tool. When you have lots of data, it does a pretty good job of interpolating. So, it's good for protein folding and optimizing alloys. It might improve research productivity thanks to this, but that's about trying out ideas, not coming up with them.
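
A minimal sketch of the interpolation-versus-extrapolation point (my own illustration, not anything from the post; it assumes scikit-learn and a made-up sine-curve dataset): a model fit on data from one range predicts well inside that range and poorly outside it.

```python
# Toy example: fit on x in [0, 5], then compare a prediction inside that
# range (interpolation) with one outside it (extrapolation).
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
x_train = rng.uniform(0, 5, size=(500, 1))
y_train = np.sin(x_train).ravel() + rng.normal(0, 0.05, size=500)

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(x_train, y_train)

# Inside the training range the prediction tracks sin(x) closely; outside it,
# the model mostly repeats what it saw near the boundary of the data.
print("interpolate at x=2.5:", model.predict([[2.5]])[0], "vs true", np.sin(2.5))
print("extrapolate at x=8.0:", model.predict([[8.0]])[0], "vs true", np.sin(8.0))
```

Protein folding and alloy optimization fit that pattern: there is plenty of existing data to interpolate over, and the gain comes from evaluating candidates faster, not from generating the ideas.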

Great post, Dietz, as always! Here's a background paper that works through some of these issues in some simple models, with Philippe Aghion and Ben Jones: https://web.stanford.edu/~chadj/papers.html#AI

Ideas are like shoelaces: the more you have, the less you use of what you have, unless you are also growing extra legs. Of course!

Thanks for directing us to the asterisk debate. It was a pure pleasure. Fun to watch Clancy and Besiroglu channel Solow and Romer (and Say) - all the more so because I was pretty sure, after following this blog for a long time, that a Coach Corso imitation was forthcoming and the subjects of demand and I/O-level interactions and runaway duplication might come up. Great stuff.

Very glad to hear that a 4th Ed. is forthcoming. Can't wait.
