Which data points qualify as true recession indicators?
The yield curve, a comparison of short- and long-term interest rates, was scary recently, but doesn’t suggest super high recession risk at the moment. The Sahm Rule raises alarms when there’s a sudden relative spike in unemployment (specifically, when the three-month average unemployment rate climbs half a percentage point above its low from the past year), and it hasn’t technically been triggered yet despite incrementally worsening employment numbers in recent months.
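For the mechanically curious, here’s roughly what those two indicators boil down to: a minimal sketch in Python, using made-up yields and unemployment numbers rather than real Treasury or Bureau of Labor Statistics data.

```python
# A minimal sketch of the two indicators above, with hypothetical numbers.
# Not a real data pipeline; just the arithmetic.

# Yield curve: long-term yield minus short-term yield. A negative spread
# (an "inverted" curve) is the classic recession warning.
ten_year, three_month = 4.1, 3.9  # hypothetical Treasury yields, in percent
spread = ten_year - three_month
print(f"Yield curve spread: {spread:+.2f} "
      f"({'inverted' if spread < 0 else 'not inverted'})")

# Sahm Rule: triggered when the 3-month moving average of the national
# unemployment rate rises 0.50 points or more above its minimum over the
# previous 12 months.
def sahm_reading(monthly_unemployment):
    """Latest Sahm Rule reading from at least 15 monthly rates (percent)."""
    if len(monthly_unemployment) < 15:
        raise ValueError("need at least 15 months of data")
    averages = [
        sum(monthly_unemployment[i - 3:i]) / 3
        for i in range(3, len(monthly_unemployment) + 1)
    ]
    # Current 3-month average vs. the lowest of the prior 12 averages.
    return averages[-1] - min(averages[-13:-1])

# Hypothetical series: unemployment creeping up, but no sudden spike.
rates = [4.0, 3.9, 4.0, 4.0, 4.0, 4.1, 4.1, 4.1,
         4.2, 4.2, 4.2, 4.3, 4.3, 4.3, 4.4]
reading = sahm_reading(rates)
print(f"Sahm reading: {reading:.2f}")  # 0.37 here: rising, but not triggered
print("Triggered" if reading >= 0.50 else "Not triggered (yet)")
```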
But what about the Bloomberg Rule? You know, the one that tracks how many times a Bloomberg Q&A with a normie economist can contain variations on the word “worry”? Thursday’s installment with Jason Furman, which is specifically about an AI bubble, contains a deeply troubling 14 worries. I don’t want to be alarmist, but I’m not liking the data, folks.
Jason Furman is as normal as economists get: He’s a Harvard professor. He was chairman of the White House Council of Economic Advisers under President Barack Obama. In October he was on the podcast of conservative New York Times opinion columnist Ross Douthat to talk about this same topic, and that interview contained only one “worry.”
So why the spike?
Furman says “I’m more worried about the financial valuation bubble than I am a technological bubble” in his first answer. This seemingly gets at some kind of granular distinction—as if the tech might be fantastic, but the companies can be overvalued anyway, and the second thing is the real problem. But what he says next sort of makes it sound like we should all worry about both equally:
“To justify financial valuations, you basically need two things: the technology works really, really well, and you can make a profit from that. The two threats to valuations are that we hit diminishing returns and a lot of the different scaling laws that have applied to date don’t apply in the future. Moreover, I don’t know that every scaling law translates economically. Every time your microchip in your computer gets two times as fast, you don’t write Word documents two times as fast or respond to emails two times as fast. In fact, a lot of that is almost like excess capacity that is building up in our computers, and that could be what happens in AI, even if it follows the law.”
That arguably describes one of the biggest AI stories of the year. When OpenAI released the GPT-5 model in August, whether or not it was a worthwhile incremental step, ChatGPT users clearly didn’t see enough of an upside to balance out the fact that they didn’t enjoy talking to it. OpenAI upgraded the model people were using as a friendship substitute, and it didn’t suddenly get exponentially warmer and more insightful. It arguably just had “excess capacity.”
If you’re still confused as to where the line is between a tech bubble and a valuation bubble, don’t worry, because Bloomberg interviewer Shirin Ghaffary says she is too. Furman elaborates a bit, saying that separately from valuations, there are also “hundreds of billions of dollars a year being spent on data centers, energy and the like,” and that this is “an actual, real activity.” He compares this to internet infrastructure being built out during the dot-com craze. But he continues:
The thing that would worry me is if it just ended up not working and adding to productivity. Right now, we’re seeing AI mostly on the demand side of our economy.
He later adds:
We do not have a US economy that is firing on all cylinders. We have a US economy that is firing on one cylinder right now.
These are two important things to keep in mind about how normie economists see AI right now. Saying AI is on the demand side may feel silly—how much AI do you demand on a day-to-day basis? If you’re like me, zero to very little. But that’s not the demand he’s talking about. Think of the global economy as one giant, worryingly empty Home Depot. AI being on the demand side means it’s the one voracious customer in that store, buying enough drills, cement bags, and ladders to keep the place in business for the time being.
But AI can’t just be the only big-spending customer in the global Home Depot forever, and whatever it builds with all that stuff has to generate enough economic activity to draw other customers—in fact, more customers than ever—into the store so they can build things too.
Going back to that ChatGPT incident from this year: while companionship is a big part of why people use consumer-grade AI, it turns out not to be a particularly good example of the type of use case Furman thinks could drive growth. He also downplays the idea that AI is going to clobber employment in pursuit of efficiency, or that this is a major risk now or ever could be (“At every point in time that people have thought that in the past about this employment question, they’ve been wrong,” he says).
Instead, Furman’s crystal ball contains a very murky image of what AI is supposed to do to keep the economy afloat:
“People in the wild are just slow and kind of complicated and figure out one use case this year and the next use case the next year, and want to test it eight different ways before they deploy it. Different businesses, different industries, different sectors will figure this out at different times, as opposed to you waking up one day and there’s a big bang. I should say that is a best guess, with an extreme caveat that anything could happen.”
Your mileage may vary on how reassuring that is, but my interpretation of what Furman is saying is, basically, that AI will be genuinely useful at as-yet-unknown times, to as-yet-unknown people. That’s not a totally unconvincing prediction. The really worrying part, though, is that it has to be true.
