Avi Goldfarb | The Disruptive Economics of AI

Date

Thursday, Jun 05, 2025

Time

10:00 a.m. PT

Location

San Francisco, CA

Topics

Artificial Intelligence, Labor Markets

Transcript

The following transcript has been edited lightly for clarity.

Kevin Ortiz:

Welcome everyone, thank you for joining us today for our next EmergingTech Economic Research Network, or EERN, event. I’m Kevin Ortiz and I serve as co-head of EERN, along with Huiyu Li, our senior advisor in our economic research department. I’m pleased to kick off today our third EERN event of 2025.

For us here at the Fed, the EERN initiative gives us insights into how developments from emerging technologies, and particularly artificial intelligence, can affect productivity and the labor market across various sectors of the economy. For example, I recently had the opportunity to lead a roundtable with senior business leaders in the healthcare industry. They taught us how advances in artificial intelligence are changing the medical field, with early observations suggesting improvements in patient outcomes and optimized clinician workflows. These intelligence-gathering efforts help inform the Fed’s understanding of emerging technologies and, together with these types of events, can provide important information about the future economy.

Now, if you haven’t been to one before, EERN events are opportunities to exchange ideas, learn about research, and share insights with those who are interested in studying the economic impacts of emerging technologies. Past events have explored such topics as job matching in the age of AI and AI’s impacts on real-world productivity. Today’s installment of our EERN event series will explore the disruptive economics of AI. To understand AI’s policy challenges and its potential impacts on society, we’ll hear from Avi Goldfarb, the Rotman Chair in Artificial Intelligence and Healthcare at the University of Toronto. Following his presentation, Professor Goldfarb will discuss the research with our host moderator, Huiyu.

Now, as a reminder, this event is being recorded and can be accessed on our EERN website following the discussion. Finally, please note that the views you will hear today are those of our speakers and do not necessarily represent the views of the Federal Reserve Bank of San Francisco or the Federal Reserve System. So with that, let’s begin. Over to you, Huiyu.

Huiyu Li:

All right. Thank you, Kevin. I’m very glad that we can have Avi here to share his insights on AI. Avi has been a researcher in this area for a long time, both advising on policy and looking at business cases in addition to his research. So over to you, Avi.

Avi Goldfarb:

Okay, thanks so much. It’s great to be here as part of this. I know you talked about the disruptive economics of AI policy. The reason you guys are listening and the reason you’re here is because we slowly and then suddenly have been inundated with hype and excitement around AI. We might be on the verge of a multi-trillion dollar opportunity. It might fast-track productivity growth. And maybe we should worry about an AI-generated revolution. Now, no matter what side of that hype you’re on, it’s important to recognize what we’re talking about when we’re talking about artificial intelligence and the business opportunities in artificial intelligence here in 2025.

Now, there’s an optimistic view where we are on the verge of machines that can do just about everything we can do and listen to us and make our lives much better, maybe as in older science fiction. Or we’re on the verge of machines that can do everything we can do and they don’t listen to us. And that’s where we get the dystopian science fiction like The Terminator or The Matrix. Now, I don’t have a strong point of view on how imminent artificial general intelligence, machines that can really do almost all of human work, is. At the same time, I think it’s important to recognize what the technology is under the hood of both today’s artificial intelligence and the AI that’s likely to be diffusing in the near future.

So when you hear today about businesses adopting artificial intelligence, what they’re adopting is prediction technology. It’s important to recognize that under the hood, when you hear artificial intelligence, the advances are driven by advances in machine learning, which in turn is a field of computational statistics, and so it’s prediction technology. What do we mean by prediction? I mean it in the statistical sense, as in any time you’re filling in missing information, that’s prediction.

So yes, it could be good old-fashioned statistics problems, but it could also be a large number of other problems that are really about filling in missing information. So for example, in our book Prediction Machines, my coauthors and I frame this in an economics lens as a drop in the cost of prediction. Machine prediction has gotten better, faster, and cheaper. And from econ 101, we know that when the price of something falls, we do more of it; demand curves slope downward. And so as machine prediction gets cheaper, we should expect more and more machine prediction as part of our everyday lives.

And that’s exactly how it’s played out. The first applications of machine learning in business were things that were obviously prediction problems, like whether a borrower is going to pay back a loan. That’s arguably the oldest prediction problem in business: is somebody going to default or not? And increasingly, lenders, banks and others, have been using machine learning tools to predict whether someone’s going to pay back a loan. The insurance industry, they’re also in the business of pricing risk. They’re in the business of prediction. And increasingly, over the past decade or two, they’ve been using machine learning tools for underwriting.
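To make the “default as prediction” framing concrete, here is a minimal, hypothetical sketch in which a standard statistical model fills in the missing information about a new applicant. The features, data, and model choice below are illustrative assumptions, not a description of any lender’s actual system.

```python
# A minimal, hypothetical sketch of "default risk as a prediction problem".
# The features, data, and model are illustrative assumptions only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Toy borrower features: income (in $10k), debt-to-income ratio, years of credit history.
X = rng.normal(loc=[6.0, 0.3, 10.0], scale=[2.0, 0.1, 5.0], size=(500, 3))
# Toy labels: 1 = defaulted, 0 = repaid (generated at random here just so the code runs).
y = (rng.random(500) < 0.1).astype(int)

model = LogisticRegression(max_iter=1000).fit(X, y)

# The "prediction" is simply a probability that fills in the missing information:
# will this new applicant repay or not?
new_applicant = np.array([[5.5, 0.35, 4.0]])
print("Predicted probability of default:", model.predict_proba(new_applicant)[0, 1])
```

The point of the sketch is only that the lender’s question, will this borrower repay, is statistically the same kind of missing-information problem as the newer applications discussed next.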

Now, what’s changed in the past few years is we’ve started to recognize a number of things that we didn’t used to think of as prediction can be solved with machine prediction. Like medical diagnosis is prediction. What does your doctor do when they diagnose you? They take in data about your symptoms and they fill in the missing information, the cause of those symptoms. That’s prediction. Language translation is now understood to be solvable with machine prediction. When you’re trying to translate from English to French, what you’re doing is predicting the set of words in French that match the meaning of the set of words in English. And increasingly we’re seeing that is solvable with AI.

What OpenAI is doing, and what the other generative models are doing, is also, under the hood, prediction. What ChatGPT is doing is predicting the set of words that’s most helpful, honest, and harmless in response to your query. And the image generation tools are doing the same thing. If you ask for an astronaut on a horse in the style of Andy Warhol, you’ll get an image like this. Importantly, to understand the role of the underlying data and how the machine generates this kind of image, recognize that it’s not a search engine. It’s not that there was some database that it pulled this image from. Instead, it was trained with images of horses, images of astronauts, and images in the style of Andy Warhol. And it combined those to predict what you’re looking for when you ask for an image of an astronaut on a horse in the style of Andy Warhol.

So as we recognize this, we’re seeing more and more applications of AI, of machine prediction, to the point where many of the applications aren’t obviously prediction problems anymore. Nevertheless, the reason it’s important to understand that under the hood it’s still computational stats is to understand the role of humans in work, and the essential complements to AI, to prediction machines.

So just like when the price of coffee falls, we buy more cream and sugar, when the price of machine prediction falls, we start using machine prediction in all sorts of places we might not have imagined to be fundamentally prediction problems. We are recognizing the value of the inputs into those machine predictions; one complement is data. And we are recognizing the value of the human role in understanding which predictions to make and what to do with them once we have them. So there’s a human role in recognizing opportunities for the AI tool and then for taking the output of the AI and figuring out what action to take. That’s what we call judgment.

To understand specifically what we mean by judgment, let me tell you a vignette from this old science fiction movie “I, Robot.” So, “I, Robot” stars Will Smith; he plays Detective Spooner. He’s living in a world where there are robots all over the place. But Detective Spooner, he hates robots. And there’s this flashback scene about why he hates robots. It turns out that he and a little girl are in a car accident and they both start sinking into a river. And it’s pretty clear that both Detective Spooner and this little girl are about to drown. Then a robot comes along and saves him and not the girl. And that’s why he hates robots.

It turns out that because it was a machine, he could audit it. He could figure out, well, why did the robot save him and not the girl? And the robot predicted that the adult man had a 45% chance of survival and that the little girl only had an 11% chance of survival. And so the robot saved him and not the girl because 45 is more than 11. And Detective Spooner goes on to say, but 11% was more than enough, and a human being would’ve known that. That’s what we mean by judgment. Judgment is about what we value: not the prediction, but what we do with those predictions.

Even in this fictional world, there was some team of human engineers and ethicists and whoever else, they didn’t tell us, who encoded the values into the machine that a life is a life. The decision of that machine to save the adult man and not the little girl is a consequence of the humans who embedded certain values into the machine. That’s essential for understanding how to work with machines, the remaining opportunities for humans, and how to think about a world where machines are taking actions and responsibility in that world.
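To make the prediction-versus-judgment split concrete, here is a stylized sketch of how the same two survival predictions from the vignette lead to different rescues depending on the values encoded in the decision rule. The probabilities come from the movie; the value weights are purely illustrative assumptions.

```python
# A stylized sketch of "prediction vs. judgment" from the I, Robot vignette.
# Survival probabilities come from the story; the value weights are assumptions.

predictions = {"adult": 0.45, "child": 0.11}

def choose(predictions, value_weights):
    """Pick whom to rescue: prediction supplies the probabilities,
    judgment supplies the weights placed on each outcome."""
    return max(predictions, key=lambda who: predictions[who] * value_weights[who])

# Judgment A: "a life is a life" -- equal weights. This is the robot's choice.
print(choose(predictions, {"adult": 1.0, "child": 1.0}))   # -> adult (0.45 > 0.11)

# Judgment B: a (hypothetical) society that weights rescuing the child far more heavily.
print(choose(predictions, {"adult": 1.0, "child": 5.0}))   # -> child (0.55 > 0.45)
```

The predictions never change between the two runs; only the encoded values do, and that is what flips the decision.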

The key point is it’s not machines making decisions; machines are making predictions. Humans are embedding our values into those machines, sometimes after the prediction shows up, but often in an automated way by pre-specifying and encoding those values, what matters, into the machine. Now, that means whether you’re choosing to use Grok or OpenAI or Anthropic or Cohere, whatever company you’re using, the values in that language model are the values of the leadership of those companies, I suppose, embedded into the models. And so you will get different results for the same sorts of questions in some cases, because certain values have been embedded into the models.

Now, I want to talk about the disruptive economics of AI policy. And I thought a useful starting point is: what are we worried about, at least as economists? I don’t know if you remember, a couple years ago there was this letter, this petition to say we should stop doing AI research. An interesting aspect of that petition is it listed four specific harms that were backing up the idea that we should slow things down. The first harm: should we risk loss of control of our civilization? The second harm: should we develop non-human minds that might eventually outnumber, outsmart, obsolete, and replace us? Third, should we let machines flood our information channels with propaganda and untruth? And fourth, should we automate away all the jobs, including the fulfilling ones?

Okay, so these are phrased as questions, not harms, but it’s pretty clear that these were meant to be rhetorical questions. And the answer to all of them, according to the drafters of this petition, is clearly no. What I’m going to spend the next few minutes doing is going through these one by one and discussing what economics has to say about them. And in general, it’s not completely obvious that the answer is no.

Now, with a caveat: should we risk loss of control of our civilization? Here is what economics has to say about that question. In many ways, I don’t think economics has good models about what it means to have control of our civilization. And to the extent that there is rigorous economic work about it, it doesn’t say much one way or the other. So, this one is a blank slate. But for the other three questions, the answer is almost always going to be maybe, or it depends.

So should we develop non-human minds that might eventually outnumber, outsmart, obsolete, and replace us? It depends what we mean by that. And there’s a number of research papers that suggest that machines that can innovate will be fantastic for productivity growth and therefore likely to be fantastic for many, many humans. There’s questions about the implications for distribution that we’ll get to later, but a machine that can replace us or outsmart us, in the sense of making innovation faster, will improve total factor productivity and mean more outputs, more of what we want with less inputs.

Now, there is a question, going back to the first thing about control of civilization, on how good does the machine have to be? How much better off do we need to be in order to accept some existential risk? Major technological change often comes with extraordinary risk. And so if AI’s potential is as big as it could be to massively improve productivity, to give us much more of what we want, cure cancer, improve other aspects of healthcare, et cetera, et cetera, et cetera, what kind of a risk are we willing to take?

And what Charles Jones does in this paper is he takes our macro models very seriously about welfare. He acknowledges it’s in some sense too seriously, but he says, given our macro models, how much risk should we accept? And his first answer is a little unsatisfying, which is that it depends on the functional form. So there are reasons to take some existential risk, or maybe not, depending on what you assume. No surprises there. But where the model offers additional insight is that, to the extent that AI could massively improve our health and allow people to live longer, the part of our models that says, with diminishing marginal utility, how much better off do we really need to be, goes away.

So, to the extent that this technology is going to not just improve productivity but improve outcomes in healthcare, then we should be willing to accept much more risk than we otherwise would. So again, should we accept machines that outnumber, outsmart, obsolete, and replace us? Depends on what the machines do and maybe we can even accept some risk if the upside is big enough.
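As a purely illustrative back-of-the-envelope calculation, and not Professor Jones’s actual model, here is one way to see that logic: under an assumed utility function, compute the largest existential risk a society would accept in exchange for higher consumption, and then again when the technology also extends healthy lifespan. Every functional form and number below is an assumption, including the normalization that utility after extinction is zero.

```python
# A toy calculation, loosely inspired by the discussion above; it is NOT Jones's model.
# How much existential risk delta would society accept to multiply consumption by g,
# and how does that change if AI also extends healthy lifespan?
import math

def max_acceptable_risk(u_now, u_ai):
    """Largest delta with (1 - delta) * u_ai >= u_now, i.e. delta = 1 - u_now / u_ai.
    Utility after extinction is normalized to zero (an assumption)."""
    return max(0.0, 1.0 - u_now / u_ai)

# Case 1: gains in consumption only, log utility, baseline consumption c = 10, 50 years of life.
c, g, years = 10.0, 5.0, 50
u_now = years * math.log(c)
u_ai = years * math.log(g * c)
print("Consumption gains only:", round(max_acceptable_risk(u_now, u_ai), 3))

# Case 2: the same consumption gains, but AI also extends healthy life from 50 to 80 years (hypothetical).
u_ai_health = 80 * math.log(g * c)
print("Consumption gains plus longer life:", round(max_acceptable_risk(u_now, u_ai_health), 3))
```

The specific numbers are meaningless; the point is the direction of the comparison: the acceptable risk depends on the assumed utility function, and it rises when the upside includes longer, healthier lives rather than consumption alone.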

Next, should we let machines flood our information channels with propaganda and untruth? To be clear, on this one there’s nothing net good about it, but the likely scenario is not as doomsday as it might appear. So first, what’s likely to emerge is something like a babbling equilibrium. To the extent that we can no longer trust images that we see online and videos that we see online, pretty soon we just won’t trust them. It’s not that we’re going to have massive amounts of blackmail, as Joshua posted some time ago on Twitter. Instead, it’s going to be much harder to blackmail anybody because no one will believe you. It’s useful to have reliable sources of information, let’s be clear here, but it’s not the doom and gloom of fraud everywhere.

Similarly, there’s a lot of worry that there’s this near-term risk of large-scale theft, that people are going to be getting into bank accounts, et cetera. That’s a worry at a small scale, but it doesn’t mean the whole financial system is going to collapse, and it almost surely won’t. And what I mean by that is what’s already been happening: we’ve seen an increase in new tools and innovation in tools to detect whether it’s actually you trying to get into your bank account.

So to the extent that AI enables mimicking voices, then verbal phone confirmations are going to go away. Now, that will add an extra friction relative to the ease we might be used to of just logging into our bank accounts from our devices, but it’s not an existential risk, although it is still sufficiently costly that the institutions themselves are going to have to invest in these technologies.

To the extent that misinformation is going to create not just smaller frictions like I just described, but first-order problems, it might arise from our own flaws. So there’s this work on motivated reasoning and how humans like to read things that they believe and that could lead to increased polarization. And actually in a conference volume that we have coming out, Daron Acemoglu and James Siderius have an excellent article on that risk that AI could increase polarization in large part through motivated reasoning.

Now, the question that we economists have the most to say about is the question on jobs. No surprise here. Let’s start with the first thing you learn about labor economics in econ 101. And the first thing you learn about labor economics in econ 101 is that you have a trade-off between work and leisure, or more precisely, a trade-off between consumption and leisure. If you want to consume more, you have to work more, which means you get less leisure. Jobs themselves aren’t good. Getting paid is good, but work is work. It’s something we don’t want to do in general. We work in order to get paid, and with that pay, in our standard economics models, we get to consume. We might get some other utility as well.

And the most striking way to see this is again going to science fiction and thinking about The Matrix. I don’t know if you remember this movie, but if you do, you’ll recognize that every human in this movie is a battery. They have a job, they have a job from the day they’re born to the day they die, but it’s not a good job. And so the goal isn’t jobs per se, the goal is something else. This is a dystopia where every human works from the day they’re born to the day they die. It’s not a utopia. What we want is the ability to be fulfilled and also have leisure.

There’s another challenge with this idea of machines taking all the jobs, which is related to some work by Baumol in the 1960s that’s since been labeled “Baumol’s Cost Disease.” And this is the idea that the parts of the economy that don’t grow quickly, that don’t have rapid productivity growth, end up becoming a more and more important part of the economy. As agriculture got more efficient, we saw more jobs in manufacturing. As manufacturing got more efficient, we saw more jobs in services and fewer in manufacturing as a percentage. As some services got more efficient, we’ve seen more and more jobs in healthcare and education, where productivity has not grown as quickly. And so as AI gets better at many things that we humans do, in the long run there’s reason to expect that new jobs will arise and the parts of the economy that don’t experience that productivity growth become a larger and larger fraction of what we all do.

Now, Betsey Stevenson has pointed out that that means, yes, there’s an employment question, which is: if we’re working less on net, can we find meaning? If we’re getting paid and we have more leisure, is there still meaning in our lives outside of our jobs? And I don’t think we economists have much to say about that, except that personally I’d like to hope so, but there’s nothing formal on that. But what we can talk about is the distribution of income. And even though there are reasons to be optimistic about the long run in jobs, that doesn’t mean that the income distribution will be fair. There are reasons to be pessimistic and reasons to be optimistic, but first we’ll talk about some reasons to be pessimistic.

First, inequality might increase because AI is embedded in capital, and that might increase the capital share relative to the labor share. And there’s some work that suggests that capital income is more concentrated than labor income. That’s not a universally settled point because of the way that pensions work, but to the extent that capital income is more concentrated than labor income, then a technology that increases the returns to capital may increase inequality. On the labor side, the last two technological changes, computing and the internet, which were both IT, like artificial intelligence, led to increased inequality. And in particular, what happened was the demand for skilled labor exceeded the growth in supply of skilled labor. This is documented fantastically in Goldin and Katz’s The Race Between Education and Technology and in David Autor’s work with various coauthors.

And so what you’ve got to recognize is that to the extent this is skill biased, as in demand for skills goes up but the supply of those skills can’t keep up or doesn’t keep up, then we should expect increasing inequality among labor. High-wage workers are going to do better and better and better, perhaps leaving low- and middle-wage workers behind. And that is exactly what we saw with computers and the internet. But it doesn’t have to be that way. And there are a number of economists who are arguing, from various points of view, that it may not be that way for a variety of reasons.

So first, Erik Brynjolfsson, in an article called “The Turing Trap,” argued that engineers have a choice. Engineers have a choice to build technologies that augment humans or to build technologies that replace humans. And if we’re augmenting humans, that should help at least the labor share relative to the capital share. And implicitly, there’s a sense that it might help the low skilled relative to the high skilled, depending on who we’re augmenting, which we’ll get to in a second.

Now, I should add that Daron Acemoglu and coauthors have a similar argument, more focused on policy, that policy should encourage augmentation and not tax automation. This goes back to an older idea by the journalist John Markoff celebrating the early computer scientists who focused on augmentation, relative to the AI folks who were focused on beating the “Turing Test.” Now, in some of our own work, my coauthors Ajay and Joshua and I argue that this dichotomy doesn’t make sense, because one worker’s automation is another’s augmentation.

So if you’re automating the skills that give the doctor high wages, for example around diagnosis, that could augment what nurses and pharmacists and other medical professionals can do. And on the flip side, what happened with computers and the internet, in fact, is that we augmented those of us who were good at abstract thinking and good at using computers and the internet. And that in turn hollowed out many aspects of the middle class in parts of the world and increased inequality.

Now, what are we seeing with AI? The bite in terms of where the technology is going to affect inequality is going to depend on who’s automated and who’s augmented. There’s a number of papers that have come out over the past couple of years using AI in various workplaces that suggest that within workplaces in general the technology appears to help low skilled workers relative to high skilled workers. Now, on net how this is going to play out is a big open question.

David Autor has a fantastic new project where he is trying to carefully measure expertise to try to understand the impact on wages and employment. Part of this is that a tool that automates high-skilled, expert work might lower wages in that job but also massively increase employment in that job, to the point where, depending on the margin you’re focused on, you’ll see that some people are much better off and some people are worse off.

An old example of that is what happened in the taxi industry with the arrival of navigation technology and digital dispatch. It used to be that being a taxi driver was a relatively skilled occupation and taxi drivers had high wages. With the arrival of ride-sharing apps, with Uber and Lyft, we have a lot more people employed that way. And those are in many ways people whom it had been difficult to employ in other parts of the economy, but the net wages in that industry went down, or rather the individual wages in that industry went down. So if you were a taxi driver before, you’re worse off; if you were someone who was capable of driving but couldn’t get a job in the taxi industry, you’re now better off.

So there’s this big picture question about whether we really want less automation or not, and it depends on how we think about these aspects of wages and total employment and which particular tasks the AI can do.

So to wrap up, there’s these big open questions about how we should regulate AI and what we should worry about, and economics perhaps unsurprisingly says the details of how we think about this technological change really matter. And some of these questions that might seem rhetorical, the answers are more nuanced and complicated than you might think.

And of course all of this depends on the regulatory environment. In a conference volume we have coming out next year on the political economy of AI, a lot of it is focused on endogenous regulation, in the sense that the regulations we end up with are in many ways determined through a lobbying process. And that can lead to increased or decreased inequality depending on various details. Thank you very much. Here is a handful of the articles and books that I drew on. It’s time for questions.

Huiyu Li:

Extremely insightful. I, in particular, appreciate the framing of prediction tasks and judgment tasks, or non-prediction tasks, in terms of how to think about the use of AI going forward. So I just want to take a moment to get your opinion on the adoption of AI right now. There’s AI and there’s GenAI, and depending on what kind of metrics you look at, my understanding is that some metrics tell us adoption is still just at the cusp, especially with GenAI, if we look at job postings, but then labor force surveys of workers’ usage of, say, ChatGPT seem to suggest there’s much more widespread adoption. So I want to get your view on where you think we are right now in terms of adoption.

Avi Goldfarb:

We’re still early, so we are for sure still early in the adoption curve. Part of that is that when we think about technology adoption at work, in business, we want to emphasize meaningful use. And so there’s a whole literature in the information systems world and scholarship that talks about not just whether somebody is using a technology, but whether they are using it in a meaningful way. What we’ve seen so far is, look, this technology is widely used at home and at work. Hundreds of millions of people are using the various AI tools, maybe on a day-to-day basis, but often less frequently than that.

But what are they using it for and is it having a meaningful impact on business? So far the evidence is probably not. Now, this is changing week to week, month to month; necessarily, when we do empirical work, we always have to look back a year or so. We’re seeing adoption, but we’re not seeing large-scale business transformation in most organizations. But I think it’s to come. I think there’s a lot of potential and excitement here.

Huiyu Li:

That is a good segue for my next question. You mentioned in the beginning that there’s hype, there’s lots of hype, and there may be real opportunities. In what areas do you think there are real opportunities for AI to transform businesses?

Avi Goldfarb:

Okay, so you can think about it industry by industry. So in general, when we think about technological change, we start with the abstraction about the technology, but in terms of the imminent business process change, it’s going to be specific to a particular context. There’s a trade-off here, which is that a lot of the industries that have already experienced rapid transformation are those that were already digitized and had already experienced rapid productivity growth since the 1970s. So the tech industry first and foremost, also other aspects of innovation. And financial services is already digitized and relatively ready for AI-enabled transformation.

Now, that’s been productive so far, but in my view, that’s not where the biggest upside is. The biggest upside is in industries like healthcare and education that have had extraordinarily slow productivity growth over the past few decades. And the problem there is that the reason they’ve had slow productivity growth over the last few decades is because innovation in those industries and changing processes in those industries is difficult. And sometimes it’s difficult for good reasons, messing up in healthcare is extraordinarily consequential, but it’s also difficult because of entrenched interests and lack of competition and other things.

So where does that make me land? Thirty years from now, we talk about the long run and the short run, I am quite confident that we will see AI-enabled transformation of many, many aspects of the economy. Two years from now, I’m confident we’ll see it in some places, but probably not in the places where the upside is biggest.

Huiyu Li:

That’s helpful, and I agree with you. You mentioned “Baumol’s Cost Disease.” We see in an economy like the US that’s very advanced, manufacturing has had productivity growth much faster than, say, services or healthcare. But that’s where the big chunk of the economy is, where productivity is not really growing, and we have an aging economy. So healthcare and maintaining the quality of life with aging seem of utmost importance.

So on healthcare, I want to get your insight a little bit more, because we just recently had a roundtable with senior executives from the healthcare industry here. They mentioned some opportunities, for example, in terms of taking logs during a session. They used to have a human taking the logs, but now you can have GenAI taking the logs, and they can train these humans to do other types of jobs that are perhaps more fulfilling. So it does sound very promising, but then there are also issues about data privacy and regulation. So I want to get your insight a little bit for the healthcare industry: are the bottlenecks on the technological side, or do you think it’s more on the regulation side? I guess this trade-off between risk and benefit.

Avi Goldfarb:

Yes. I think the biggest bottlenecks are on the, let’s call it the regulatory side, or the comfort-with-the-technology side. There are some technological risks too, the technology’s not perfect yet and maybe it won’t be, but I think the reason healthcare might be slower than other industries has more to do with regulation than anything else. And in particular, there are good reasons why we want our healthcare data to be kept private, for example. Again, there are good reasons why we don’t want to take risks with healthcare. And so if we think our healthcare system is fine as it is, we’re going to be hesitant to do much to transform it with the technology.

Now, I think about the AI scribe that you just talked about. We have AI tools that, while a medical practitioner is talking to a patient, can fill out the EMR, the electronic medical record, automatically, and then all the physician, or the medical practitioner, has to do at the end is look at it. That should be fantastic for efficiency, and it should also be fantastic for the medical practitioner’s relationship with the patient, because now they can look at the patient and not at a screen.

But there are real risks there. First, everything that happens is now in that record, and there’s some ambiguity about the privacy of that record. So that has to be made clear in a regulatory sense, and I think that’s being worked on. Second, now that you have this automated record, in principle the machine could make recommendations and suggest diagnoses. That will require another set of regulatory approvals that we’ve made great progress on. Actually, the FDA has made great progress on this. Ariel Stern has an excellent chapter in a book we edited on AI and healthcare that summarizes that progress, but it’s still not quite there.

And then there are other risks, like do you want every single word to be recorded, or is a summary more valuable? And by “you,” I might mean the patient, who might not want some of those details recorded for privacy reasons. Or it could be the hospital, for fear that in a litigious world there might be one word that went astray. And so there’s another regulatory barrier in terms of thinking through, once we have scribes that record everything, what is the burden for liability and how does that change? That last point applies not just to healthcare; it applies to financial services as well. For example, right now in many cases investment advisors take notes whenever they talk to a client and then summarize those notes. AI tools are already good enough to do that, to summarize a call, and even to take near-perfect notes on everything that was said during the call.

But under our current regulatory environment, it’s probably not optimal for financial institutions to use that tool, because the regulatory environment assumes implicitly that the notes are a summary and not word for word. And so if any word might be wrong, that could create a problem. So there’s a real liability aspect to it, and we’ve got to work through the regulation there.

Huiyu Li:

And when we talk to our business contacts here, we even have cases where they’re using GenAI to automate the generation of business contracts, because it’s working so well and lawyers are expensive, I guess. There’s a question of how much due diligence is enough for the business to have conducted, and whether they’re responsible for some of the errors, like a one-word error in the GenAI-generated text.

I want to switch topics a little bit to talk about the values that are encoded in the algorithms under the hood. I haven’t seen that being discussed a lot. Do you think that any discussion about which values are the correct ones will happen just naturally, as society somehow converges, or does there actually need to be more regulatory oversight, because these things are under the hood a lot of the time and people just don’t know?

Avi Goldfarb:

In my view, as long as we have, or continue to have, a robust competitive environment for AI, then different people will choose AIs with different values that match the needs they have. A business AI might put more weight on honesty, on truthfulness, and that might be better there. And for certain personal AIs, harmlessness, making you feel good about yourself, might be the kind of values you want. And so as long as we have a robust competitive environment, it’s not obvious to me why we would want regulation to step in and say what values an AI should have, because, at least for me, given my economics expertise, it is not at all obvious how we could choose what those values are universally. And so, again, the caution is as long as we continue to have this competitive environment; but if we do, then I don’t see a real reason to have a government role in embedding values in the AI.

Huiyu Li:

And one last question for me before I switch to the audience is about AI agents. I see that coming up a lot in discussion. Could you put AI agents into the framework of prediction versus non-prediction, or AI as prediction? How should we think about AI agents?

Avi Goldfarb:

So what the agents are often doing for you is filling in missing information. So for example, you may have a service that gives you an agent for planning your next vacation. What is planning your next vacation? There’s a whole bunch of missing information that needs to be filled in. It needs to figure out where you want to go, it needs to figure out the airline and the hotel, or the car and the driving trip from the hotel, it needs to buy tickets, it needs to map out routes. And fundamentally, all those things are filling in missing information.

Now, it’s not intuitive to think about that as prediction, but here’s why it matters that you recognize it’s prediction, which is first, it’s going to use data that it accesses through the internet, through what it understands about you and through other people’s choices to predict what you might want. And second, and more importantly, it’s not going to be perfect. This is not a deterministic thing. It might say, “I recommend you take a vacation in Florida,” but it’s not like that’s a 100% accurate recommendation. There’s uncertainty around it.

And as you interact with the agent, you need to give it parameters on how much, given the uncertainty, you’re willing to delegate and allow the decisions to be automated, and how many times you want to get back in the loop to make sure that the judgment embedded in the machine is the kind of judgment you want. So for an expensive vacation, you probably want to approve it, to say, “yeah, I want to go to Florida, not San Diego.” Whereas for things that are relatively low stakes, you may be more comfortable automating. And that depends on the intuition that we recognize these are prediction tools and there’s some uncertainty around them. That’s where judgment comes in.
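One way to picture that delegation logic is a minimal sketch of a pre-specified rule: the agent acts on its own only when the stakes-weighted expected cost of a wrong prediction stays under a budget the human has chosen. The function name, numbers, and threshold below are illustrative assumptions, not a description of any actual agent product.

```python
# A minimal sketch of the delegation intuition for AI agents: the agent's output is a
# prediction with uncertainty, and the human pre-specifies when it may act autonomously.
# The function name, numbers, and threshold are illustrative assumptions.

def should_automate(confidence: float, cost_if_wrong: float, budget: float) -> bool:
    """Let the agent act on its own only if the expected cost of a wrong
    prediction stays under a human-chosen budget; otherwise ask the human."""
    expected_cost_of_error = (1.0 - confidence) * cost_if_wrong
    return expected_cost_of_error <= budget

# Low-stakes choice (e.g., picking a restaurant): fine to automate.
print(should_automate(confidence=0.80, cost_if_wrong=50, budget=25))     # True  (expected cost 10)

# High-stakes choice (e.g., booking an expensive vacation): keep the human in the loop.
print(should_automate(confidence=0.80, cost_if_wrong=5000, budget=25))   # False (expected cost 1000)
```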

Huiyu Li:

Okay, let’s turn to our pre-submitted questions. Actually, in the audience here we have many interns. Right now we have a summer intern program; we have people who are still in universities, in undergraduate degrees, coming here to figure out what they want to do with their lives. We actually got a lot of questions from registration, about 200 questions in fact, and many of them ask what young people should do as they think about what skills to learn and how to prepare themselves for the job market. One question is: what skills should young people develop to stay competitive in the AI-driven economy?

Avi Goldfarb:

Okay. So I think the most important skill is an ability to learn and to learn new skills. That sounds like a cop-out, but I’m a little humble here and I’ll explain why I said that. When I started doing research and giving public talks around AI, people were asking me that same question, “what skills should I develop, or my kids develop?” And I’d say things like, this was say 2017 or 2018, “it seems like we’re about to have autonomous driving pretty soon. And image recognition tasks like radiology seem to be in trouble, but writing and coding seem to be inherently human.”

But if you look at the world today, the AIs actually got pretty good at writing and coding, and the autonomous vehicle, maybe that’s still imminent, but it’s not quite where we expected it to be, and there are still plenty of radiologists around. So it’s just a little bit of recognizing that the technology is changing fast and the specific skills needed will change with the technology. But the one thing I am confident about is that the skills you’re training for today are going to be different than the skills we’re going to need in the workforce 10, 15 years from now.

And so, “learn how to learn” is another way of saying that the 20th-century model, where your education is done at some point between ages 18 and 25 and that’s the skill set you’re going to use for the rest of your career, isn’t going to play out in the future. Instead, we’re going to continually have to learn new skills. And so putting yourself in situations, especially when you’re young, where you are a little uncomfortable and don’t know exactly what you’re supposed to do, and having to learn quickly to get up to speed in order to succeed, strikes me as very important and useful.

Huiyu Li:

Thank you. So regarding learning, another question we received is what are opportunities to use AI tools to enhance student learning and improve effectiveness of instruction?

Avi Goldfarb:

There are lots. Okay, so used properly, or used with the right motivation, maybe that’s the best way to put it, AI can massively enhance the efficiency of education, because you essentially now have a tutor that you can ask questions and get answers from, and learn what you did wrong on something so that you can do better. Now, there are real risks here, because for a lot of things, certainly through high school, for a good chunk of undergraduate education, and even in some parts of graduate education, the AI models are excellent. And so in principle, if you’re learning a new skill, you could just cheat on your homework and the AI will help.

But if you’re motivated to learn, you can get feedback on mistakes you’re making and how to improve. And some of that you can already do with off-the-shelf models from OpenAI or Anthropic or elsewhere. And then there are a number of startups and small companies arising to help you with your education. Some of them are AI tutors that are sold directly to students or their parents. Some of them are AI course support tools that are sold to professors and universities. But there’s been real innovation there, and it allows you to push the frontier of what you can do much further.

So for example, in my classes, for the MBA students I teach, some of them have good technical skills in calculus, et cetera, and some of them don’t. And so historically I was always a little hesitant to post technical readings. But now I am confident, at least for the technical readings I choose, that ChatGPT and Claude can explain the intuition to smart people very well. And so I’m now much more comfortable pushing them to do harder things, because to the extent that they don’t have the training, they can tool up on at least the things they need to know quickly.

Huiyu Li:

Okay, thank you. So switching gears, another question is about how organizations should adopt AI. Organizations are very complex, you have many different types of jobs, many different types of interactions and AI could come in different ways. So we received many questions of that flavor. How should organizations think about adoption?

Avi Goldfarb:

So this was the core theme of our second book, “Power and Prediction.” Perhaps the most important thing to recognize, or at least the most important insight from our work, is that often when you adopt technology in an organization, it’s a point solution: it’s relatively straightforward to look at your existing workflow, find some part of that workflow that the tool can help, like a prediction problem within an individual’s workflow, take out the human in that task, drop in the machine, and keep the workflow the same. And these point solutions are useful and they incrementally improve productivity as they diffuse.

So one layer of what’s happening, and a lot of what’s probably already been happening, are these point solutions: oh, we currently have a call center process and we’re going to put an AI into that call center process to make our call center workers more efficient. There is another kind of solution, which is what we call a system solution. When you look at the history of technology in the economy, massive productivity gains tend to happen by changing the workflow, by figuring out what the technology enables you to do that was impossible before. But those system solutions are hard and they take time.

So a lot of the organizations that are going to be at the frontier, that really figure out how to use AI well, are going to reorganize their processes in ways that we can think of as system solutions. The example from history is based on a Warren Devine Jr. paper in the Journal of Economic History from 1983, where he showed that electrification of factories didn’t improve productivity until people realized that what electricity offered wasn’t just cheap power; it was distributed power. And so you could put the machines in different places. That changed factory workflows from a lot of back and forth to the modular production system that you might think of as the quintessential 20th century factory.

And we’ve seen similar things in other industries. I’ve already talked about taxis: digital navigation along with digital dispatch enabled a new kind of personal transportation industry where just about anybody who could drive was as good as a professional driver, and that transformed the industry. In advertising we saw something similar happen between 1998 and 2005. Prior to the 90s, advertising was an industry that was sold on charm, like the TV show Mad Men, which was in many ways a caricature of the old advertising industry. But there was some truth there: the role of salesmanship and charm in getting people to buy advertising was fundamental.

But then we started to have good predictions about who a user might be and what they might want, and those, largely with machine learning tools, data more generally, led to the modern ad tech industry where most advertising today is not sold through these personal relationships and charm, but it’s sold through algorithms. That was a massively profitable, technologically driven, system transformation. And so with AI, we’ve got those two examples, but we’re likely to have more.

Huiyu Li:

How can firms, or actually existing organizations, really change their structure in such a dramatic way?

Avi Goldfarb:

From the historical record we see both. So often it is new firms or small firms that figure something out and become big firms, whether it’s Ford Motor Company or Google, or Netflix in the case of video, but sometimes old firms figure out how to reinvent themselves. Apple figured out how to reinvent itself in the early 2000s. It can go both ways, and there are lots of management books about how to do this; they’re a little bit outside my expertise. When we think about disruption, there are a lot more smart people outside your organization than in it, no matter how big your organization is and how smart your people are. And so there is often a really good chance that somebody outside will figure out a way to disrupt what you do in times of technological change. Rebecca Henderson and Clay Christensen have different versions of this. It’s summarized nicely in my colleague Joshua Gans’ book, “The Disruption Dilemma,” if you’re curious.

Huiyu Li:

One last question. How might policymakers be able to mitigate the potential disruptions to the labor market?

Avi Goldfarb:

So there’s lots of different proposals on the table. Right now it’s not obvious that we need new tools, but to the extent we’re worried about disruption, we need to make sure that the social safety net tools that we do have are robust. So what I mean by that is there is a real risk in the short term of labor market disruption, particularly in certain industries. So if suddenly we figure out how to automate the process of auditing, then thousands of people whose jobs are around auditing could be looking for new roles. If suddenly we figure out how to automate many aspects of computer programming, then many computer programmers will be looking for new roles.

And so we want to make sure when that happens, that the social safety net is sufficiently robust. Depending on your politics, you may think it is already too strong, you may think it’s not strong enough, but I don’t think there’s anything different here, just that the social safety net, to the extent that there is a risk here, will be stretched. And so you want to make sure that it can deliver what you think it can deliver. And that’s true in terms of income supports and healthcare and education, et cetera.

Huiyu Li:

Thank you very much, Avi, for a very insightful talk. I’ve also been inspired to watch “The Matrix” again, or “I, Robot” again. Alright, thank you very much. Hopefully we can have you at the San Francisco Fed in person at some point.

Avi Goldfarb:

That would be fantastic. I’d love to.

Huiyu Li:

Okay. Thank you very much.

Summary

Avi Goldfarb, the Rotman Chair in Artificial Intelligence and Healthcare, and Professor of Marketing, at the Rotman School of Management, University of Toronto, delivered a live presentation on the disruptive economics of AI on June 5, 2025.

Following his presentation, Professor Goldfarb answered live and pre-submitted questions with our host moderator, Huiyu Li, co-head of the EmergingTech Economic Research Network (EERN) and research advisor at the Federal Reserve Bank of San Francisco.

You can view the full recording on this page.
