James M. Lindsay - Senior Vice President, Director of Studies, and Maurice R. Greenberg Chair
Ester Fang - Associate Podcast Producer
Gabrielle Sierra - Editorial Director and Producer
Transcript
LINDSAY:
Welcome to The President's Inbox, a CFR podcast about the foreign policy challenges facing the United States. I'm Jim Lindsay, Director of Studies at the Council on Foreign Relations. This week's topic is governance of artificial intelligence.
With me to discuss the capacity of the United States government to lead in creating a new framework for regulating artificial intelligence is Kat Duffy. Kat is a senior fellow for digital and cyberspace policy here at the Council. She has more than two decades of experience operating at the nexus of emerging technology, democratic principles, corporate responsibility, and human rights. She recently co-wrote the piece, "Defending the Year of Democracy: What it Will Take to Protect 2024's Eighty-Plus Elections from Hostile Actors" for Foreign Affairs. Kat, thank you for coming on The President's Inbox.
DUFFY:
Happy to be here. Thanks for having me.
LINDSAY:
So Kat, artificial intelligence, otherwise known as AI, is the big story in technology. It sort of burst onto the scene a little more than a year ago, and it has promised great benefits for society, but a number of people in the tech community and outside it have warned about the potential malign consequences of artificial intelligence. So there's been a lot of talk about the need to create a governing structure, a regulatory structure, for AI. Obviously it's a technology that operates here at home, but also across borders, so there's a lot of talk about international governance. Can you give us a sense of where that debate is today?
DUFFY:
Certainly, happy to. I would say AI has burst not only onto the technology scene but, as you said, onto the global governance scene. Having worked in the tech policy space for twenty-five years at this point, I have to say I've never seen a technology capture the imagination and the panic of governments to quite the degree that AI has in the past year or two.
I think right now you're seeing a lot of people theorizing around a triad of AI governance, with China strongly regulating its AI systems, its models, and its testing, and Europe having just passed the draft version of the EU AI Act in December, which follows a long pattern now of Europe having led in rights-centric and risk-focused digital governance. That includes everything from the data privacy regulation, the GDPR, which the EU passed a long time ago at this point.
LINDSAY:
This is something people experience a lot when they go on the internet and all of a sudden they're asked if they want to accept the cookies or manage the cookies.
DUFFY:
All the cookies.
LINDSAY:
That's thanks to the European Union.
DUFFY:
Precisely. And the European Union also in the past couple of years passed new laws, the Digital Markets Act and the Digital Services Act, which are the leading laws in the world around platform governance at the moment.
LINDSAY:
I just got to ask you right then and there, what is platform governance?
DUFFY:
It's a great question. So platform governance is how people are thinking about the ways that digital platforms, or online platforms, are governed, and this generally refers to social media companies.
LINDSAY:
So we're talking X, Facebook, Instagram.
DUFFY:
Yes, but also things like Wikipedia, right? So there's a range of digital platforms that can be included, but essentially, platforms that operate on the internet and provide information is one way to think about this. And so Europe has really led the way in coming up with regulations around privacy, around what companies have to do in terms of content moderation, transparency reporting, and other actions they have to take to keep users in the EU safe.
Now, the European Union is such a large market that by extension this works to some degree as a sort of global governance model, because it changes the incentives for the companies in terms of what they invest in and what they report on.
And then, going back to the triad of governance, you have this third approach in the United States, where we've never even managed to pass a federal data privacy law. So the U.S. is extremely far behind the curve in terms of having laws that regulate things like social media platforms and a fair number of areas of emerging technology. I want to be careful here, because there are all sorts of areas of emerging technology, like biosynth, where we have existing protections in place through things like the FDA. So I want to be careful not to conflate too much.
LINDSAY:
Understood. There are some differences across technologies.
DUFFY:
And so in the U.S., overwhelmingly we haven't seen Congress regulating in this space. And in terms of AI, although most of the AI in the United States is a result of federal government investments over decades, this current batch of AI, and where we are now, has been overwhelmingly funded by the private sector. And there is this real concern that the development of AI has been so divorced from government that the government now has very little control over how that AI is being developed and rolled out into the world.
From a governance standpoint, what it has meant is that, in the United States, which is probably the global leader in the creation of AI, arguably the most transformative technology of this generation is overwhelmingly being governed by corporate governance models, not by government governance models. And corporate governance is designed to manage investor risk. It is not designed to manage or to govern societal risk, and in fact may be antithetical to it.
And so right now the United States exists in this sort of limbo. We have a new executive order that came out from the Biden administration that tries to put some constraints and guardrails around AI development. We have the CHIPS Act, which helps put some constraints around things like chip production. We're seeing some approaches through the executive branch to import and export controls on some of the components of these technologies. But we do not have the same type of federal or national governance structure for AI that we would see-
LINDSAY:
Certainly not when it comes to rights and privacy as you would have in Europe.
DUFFY:
Certainly not, certainly not. We also don't have what many would perceive as the barriers to innovation that you have in Europe because of the need to align with so many regulatory requirements, and so there's a healthy tension there.
LINDSAY:
Well, that brings to mind the old joke that the United States innovates, China imitates, and Europe regulates. I want to go back to what you just said about the U.S. approach, because there's a lot there to unpack. I guess where I'd like to begin is on this question of why it is not in society's interest to just let a thousand flowers bloom, to let the market sort this all out. I guess I'm channeling my libertarian friends who basically say, "Government should just get out of the way and great things would happen." What's wrong with that perspective?
DUFFY:
I think there are lessons that we can take from platforms here and the way that social media rolled out in the world. A company's job is to make money: to deliver shareholder value or, if you're talking about private equity, to return money to its investors. When you have a technology as transformative as AI that's going to roll out across a society, then to some degree it should, I think, at least be the government's job to look at where the market is almost guaranteed to fail, because there isn't going to be a strong market return for that investment, and to make sure that you're filling the gaps.
So we know that technologies, and AI in particular, will potentially scale bias, will potentially scale existing sexism, existing racism, areas where we have research gaps and data gaps, languages that aren't as well known or covered in digital spaces as a language like English. And so while it is important, I think, for the United States to protect space for innovation, we also have to acknowledge that the market leaves all sorts of people out in the cold. And if we want a technology that can serve society, then we need to think of society as a whole and how that technology can be helped to serve it.
There's a lot of room when we talk about governance. So much of the conversation right now is around carrots instead of sticks, but there's so much room for creative, interesting, and proactive government engagement here in addition to just regulation or executive orders.
LINDSAY:
And I would imagine there's also the risk with AI, the ability to create so-called deep fakes that could be potentially very destructive to a functioning democracy because people will see things that look to be real but are in fact fake.
DUFFY:
Yes. I think it's challenging because like any shift in expression, when people first saw movies, they weren't clear on if they were real or if they were fake, right? This isn't really a new question in terms of whether people can trust the information they're seeing. I think the larger question here that we're going to be grappling with in the short term is this issue of the liar's dividend, which is that if everything can be fake, nothing can be true. And so it's the generalized distrust that I think we will increasingly see in all types of media, in all types of information, and the ability to question that. That I suspect is going to be pretty new both for the U.S. and for other countries, and that I'm not sure we're societally prepared to grapple with.
We also in the United States specifically protect free expression as a fundamental bedrock of our nation in a way that other governments don't. Other governments have ministries of information and maintain much greater controls on their information environments. I think most Americans would view controls on expression as fundamentally anti-American. And so the question, through this electoral cycle, is going to be: if we see the same type of explicit, pornographic, AI-generated images of female politicians running for office, for example, that we just saw of Taylor Swift, is that truly free expression in our minds? Is that the expression that we should protect? By the same token, if we see really funny images that have been generated by AI that fall squarely within our understanding of parody, is that expression that we want to protect? Both of those are the same thing from a technical standpoint, but they come from different places and they mean different things in terms of what we value societally.
LINDSAY:
And somewhere along that line, it's pretty blurry. I know that one group's satire is another group's outrage.
DUFFY:
Absolutely. We've always said in the United States that I don't have to agree with my enemy, but I'll fight to the death for my enemy's ability and right to say what they want to say. I mean, the ACLU has defended the KKK.
LINDSAY:
The classic case, Skokie.
DUFFY:
Exactly. So I think that there are going to be real questions for us to grapple with in terms of how we think about the First Amendment and the degrees of it in this new digital age.
LINDSAY:
Let's dive into that, because one of the things you said that struck me is that Congress has been unable to legislate, or has chosen not to legislate, in this area. I think that may be partly because of the difficulty of doing so, partly because of the speed with which AI has developed, and partly because of the overall inability of Congress in recent years to come together in a bipartisan fashion to pass legislation. The consequence of that, of course, is that it has pushed most of the effort onto the executive branch. The president and the executive branch have all kinds of authorities. You are trained as a lawyer. As you know, lawyers can read into old statutes powers and authorities that perhaps the authors of that legislation never imagined or intended. But my sense is that as administrations have tried to operate in this space, they've come face-to-face with arguments that they're exceeding their constitutional authorities, particularly because of the issues involved with the First Amendment. Help me understand that.
DUFFY:
It's very true. What we have seen, it was very interesting. Recently, there was a highly publicized online safety hearing on the Hill, with the Senate Judiciary Committee grilling five top CEOs, from companies including Meta and Discord and TikTok. One of the things that kept striking me was the number of senators who said, over and over again, "We haven't been able to pass any laws because you keep blocking us." And that struck me as such a tremendous self-own in a congressional hearing.
And what was really remarkable to me about it was the number of senators who were attacking the CEOs for the fact that their profit model involves data harvesting. Meanwhile, Congress has failed for decades to pass the federal data privacy law that would have, if not eviscerated, significantly impacted that revenue model. And so there's that disconnect between "We don't like how you're making your money" and "We haven't done the work to change the requirements around how you are allowed to make money and what citizens are allowed to give up in order for you to make money."
LINDSAY:
Just to be clear though, when we talk about data harvesting, it's that when I go on and use my phone, and I visit places and like things or don't like things, social media companies, everywhere we go, are sort of harvesting that information, and it's traded on a market. And that can be used to target you as an individual with promotions, ideas, suggestions, and the like. Correct?
DUFFY:
Precisely. And this is a sort of ecosystem that has been termed surveillance capitalism, and it very much reflects the fact that people's data is now essentially a commodity in the United States.
LINDSAY:
Americans seem to be very comfortable with companies knowing a lot about them. I assume Amazon knows everything about my eating habits, but very uncomfortable if the government knows something about them.
DUFFY:
Absolutely. In a way that is, I think, somewhat baffling but also suggests that we've done a poor job of teaching Americans exactly what is happening with their data and those implications. I think that old adage, "if the product is free, you are the product," is a good one for people to remember.
LINDSAY:
So as we think about this, the administration, particularly the Biden administration, has tried to do some things. My sense is, and again, you are the lawyer, that complications have arisen because of some recent court cases.
DUFFY:
So we have a few cases in front of the Supreme Court right now that I'm certainly keeping my eye on, and that I think, depending on how they come out, could really fundamentally impact how the U.S. can leverage the fact that we are a leader in the development of AI. They could really impact our soft power as well, in terms of the draw that we offer to the world: this is the country where you want to come and build, this is the country where you can create amazing things.
The first is a case that is really looking at what we call the Chevron doctrine, which has been around for decades. It essentially says that when Congress passes a law, the executive branch has the authority to interpret that law, because Congress can never pass a law that covers every single eventuality. And so long as that interpretation is reasonable, the courts, the judicial branch, will give deference to the executive branch's interpretation of a statute at least--
LINDSAY:
And Chevron here is referring to a case that involved the oil company Chevron.
DUFFY:
Yes, from many, many years ago. And so this has become... Now it's called the Chevron doctrine. And so we have cases in front of the Supreme Court that have challenged this, and oral arguments occurred in January. And what's been really interesting is thinking about what happens if the Supreme Court were to come back and say that in fact the courts, and not the executive branch, are the best arbiters of a statute and of legislation. Well, if you think the legislative branch has moved slowly, think about the judicial branch. Even if, say, Congress did pass an AI act, imagine if every single question around that act had to go to a lower court for resolution. It would fundamentally stymie innovation, clarity, and businesses' understanding of what the law is. We would have disparate decisions across the judiciary.
So there's a real need, I think right now for the executive branch to be able to have at least clear interpretive authority around laws if we are going to move at the speed that we're going to need to in terms of adopting AI and governing AI.
LINDSAY:
This sounds like it could almost be the "lawyer self-employment act," as I think of my son who just took the bar, or my son who's completing his second year in law school. If Chevron is overturned, that would seem to create a lot of business for lawyers, which gets to your point about the transaction costs of getting anything done.
DUFFY:
It's true. Although I think all the lawyers will then be terrified that an LLM is going to replace them.
LINDSAY:
Understood. Everyone is at risk from artificial intelligence.
DUFFY:
So that's one area that the court is deciding on, and we expect that ruling to come down in June. The other is what we call the jawboning cases, essentially Murthy v. Missouri.
LINDSAY:
Tell me what jawboning means.
DUFFY:
Jawboning is essentially a government using its power to coerce a company into doing something, and it's a fuzzy line in terms of what that means.
LINDSAY:
My suggestion is you're jawboning.
DUFFY:
Yes. So I spent five years in the leadership of a multi-stakeholder initiative that looked specifically at confidential reports from the largest tech companies on how they dealt with various global government demands for content takedowns, for data... And I can tell you that the leading American companies are very clear on what is coercion. Because when you have governments raiding your local offices, arresting your local employees, seizing the data on your servers, seizing your computers, filing lawsuits against you in order to get you to give them information or take down a certain type of content on your platform, that feels pretty clearly like coercion. Right?
In the United States, there's a real question right now, because of these lawsuits and what's in front of the Supreme Court, in terms of where the line needs to be drawn between the federal government (I shouldn't say the federal government, but the executive branch) advising companies of information, the executive branch persuading companies, essentially using the bully pulpit, and the executive branch coercing companies into doing something. And this was specifically at issue with regard to misinformation and disinformation regarding COVID and regarding the past elections.
There are many, many, many different opinions on where the line should be, if a line was crossed, who might've crossed it, how it might've been crossed. My personal opinion is that the government actors are well aware that the companies are the only ones who get to make the decision, because the companies have their own First Amendment rights, and that the companies, having dealt with so many other governments around the world, are also deeply aware of their right to say no to the U.S. government. That's my personal take.
That said, moving into the digital age and moving into AI, it is going to be imperative that the U.S. government be able to work closely with our private sector in order to understand what's going on, to do stronger cybersecurity, to understand how to build out AI, to understand the technicalities. And right now there's a very unclear standard for what would be a First Amendment violation, essentially the government being coercive, versus what would be perhaps just a First Amendment concern, which goes more to persuasion. Where is that line?
And in clarifying that line, do we also get into a realm where anything a government actor says to anyone at a private company involving tech suddenly has to go through twelve layers of legal clearance? I don't think we're headed there, but our ability to have a free flow of information and a constructive dialogue with the private sector in this space is going to be key to our ability to understand AI and to move it forward. And so between that and the executive branch's potentially diminished future capacity to govern or interpret in this space, these are high-stakes cases for AI and for digital governance.
LINDSAY:
My understanding is the big case now in that area is Murthy v. Biden?
DUFFY:
Murthy v. Missouri.
LINDSAY:
I stand corrected. Murthy v. Missouri. Could you give me just a short description of what the case is about? And then I want to explore the consequences depending upon how the court rules.
DUFFY:
Yes. So the case was aimed at a perception that different executive branch agencies, ranging from the FBI to the Department of Homeland Security to the State Department over to the White House, were, during COVID, essentially telling social media companies what information they could and could not keep up. Part of this had to do with the CDC as well, trying to work with the companies to--
LINDSAY:
Centers for Disease Control.
DUFFY:
Sorry, the Centers for Disease Control, yes. Trying to debunk and clarify information about vaccines and about public health practices and COVID treatment. It also came into play in terms of the 2020 election and the information and disinformation circulating in that election. So for example, if you remember Sharpiegate, which was this concern in the federal election that Sharpies were being handed out at polling centers, and that if you filled in the dot with the Sharpie, your ballot wouldn't be read. This was a very big rumor, and there was a lot of information flowing around during the elections. And so Sharpiegate is a good example of the type of information that is at issue in this case.
LINDSAY:
Okay, so help me understand what happens here, because we have an example in which a government agency may go to a social media platform and suggest they take something down because it is misleading or destructive, but we have legal cases that may narrow that. So how would the United States be able to deal with misinformation and disinformation? I ask that in the context of the fact that we are in an election year, and there's likely to be a lot of misinformation and disinformation, some of it coming from overseas and designed to sow division in the country. Will the U.S. government be able to respond to the threat it faces?
DUFFY:
So I think there are two answers to that. There's the near term and then there's the long term. The near-term answer is that, in this case, a district court judge enjoined many executive branch agencies from being able to-
LINDSAY:
Does that mean barred? Prohibited?
DUFFY:
Barred, yes. Prohibited. Many-
LINDSAY:
I didn't go to law school, so take it easy on me.
DUFFY:
I only play a lawyer on podcasts. The district judge, in the first review of this case, basically came back with a decision that barred many U.S. executive branch agencies from having communications with social media platforms on a range of topics. There was also a whole section of that decision that said, "Here's where they can talk to the companies." The challenge is that those two sections were so broadly written that they frankly fight each other. And so it's abundantly unclear from that decision what is allowable and what is not, at least if you're a federal lawyer and you are really lawyering hard, right?
Now, it's gone through an appellate process. This is a very convoluted situation. Where we are now is that the Supreme Court this fall basically said, "Yes, we will review this case. Until we come back with a decision on this case," which will presumably be in June, "the injunction is stayed. Pretend the injunction doesn't exist."
LINDSAY:
You are no longer enjoined.
DUFFY:
"Carry on, you are no longer barred, you are no longer enjoined. This injunction doesn't exist." Here's the issue. From everything that I've heard, the executive branch agencies are still overwhelmingly operating as if that first injunction is in place. And so we have Meta on record, for example, saying that, "U.S. government stopped talking to us in July." I also know from colleagues who are inside of different agencies that basically their communications channels with social media companies to talk about some of these issues, and just to clarify, practices have basically shut down.
So we have a really significant chilling effect happening right now inside of the executive branch. I should add, part of the reason that chilling effect is effective is not only these cases, but also that the majority leader of the House Oversight Committee has been subpoenaing many people inside the executive branch. And so I think that has created an atmosphere of personal fear among executive branch employees as well.
LINDSAY:
So does that mean that we're now in a situation in which we're essentially relying on social media platforms themselves to police what appears on their platforms?
DUFFY:
Well, to be clear, we've always relied on social media platforms to police what appears on their platforms because they have First Amendment rights. They're companies and the government can't tell them what to say or what not to say and what to host and what not to host, with the exception of some very specific categories, things like child sexual abuse material. So we've always relied on the companies.
I think what we're looking at instead is a situation where we have an election year in which we know dozens of digital platforms will be vectors for bad information and potentially for foreign influence. And in this moment, we may actually be the only government in the world that thinks it can't go to those social media companies to talk about those things. Basically, every other government can write them a strongly worded email. And so this chilling effect is very concerning in a year when we have not only our domestic elections but, going back to the question of markets, close to eighty elections potentially happening around the globe.
Many of those elections are happening in countries that are of very little revenue or political interest to the companies. And so historically, part of our diplomatic power has been our ability to sort of raise a flag with the private sector to say, "Hey, we know you're not monitoring this situation in Cameroon terribly well, but ethnic violence is rising and you're not moderating the speech in a way that is reflective of your policies as we understand them. And we would strongly encourage you to pay more attention to what's going on there and to up your resources in responding to this crisis." And that is something that we, as a government, have been able to give our allies: our access and our ability to really try to get the companies to focus on something they wouldn't otherwise look at.
LINDSAY:
Just as a technical matter, Kat, you may not have knowledge of this, but how good are these social media platforms at actually taking stuff down when they decide to take it down? I get a sense sometimes that if stuff goes on the internet, it never goes away.
DUFFY:
Well, the companies can only control what's in the company's orbit, and so it's always challenging, because again, it's important not to say "social media companies" and think only of Meta, of Facebook and Instagram. There's a really vast number of platforms out there.
LINDSAY:
And you're including things like gaming platforms as well.
DUFFY:
Yes. But also Discord, Reddit, Wikipedia, Instagram, WhatsApp, Nextdoor, Telegram, and, if you're looking globally, VKontakte and WeChat. TikTok is another good example. When you're looking especially at the American companies, if there is content they really want to take down, they can work incredibly quickly to find it, to identify it, and to take it down. We have seen this happen, for example, with live-streaming of shootings. The platforms can literally pull that down within minutes, if not hours, across the platform. They have to focus, but they have the tools to do that.
LINDSAY:
But it can jump to other platforms where it'd be hard to chase.
DUFFY:
Exactly. All you have to do is record it, and so it will jump from platform to platform to platform. And so when you have a platform like X now, which has eviscerated its approach to content moderation, and where we just saw explicit deepfake images of Taylor Swift go viral and not be taken down for hours, despite the Swiftie army's attempt to get them taken down more quickly, it's sort of like the weakest link. Telegram, for example, is famous for not really policing content.
You also now have a wider range of far-right alternative social media sites that were founded basically on the idea that content moderation is censorship. And so you have spaces there, in particular for domestic extremism, to really propagate, and then it tends to go off of those platforms onto other platforms, and then even if it gets blocked, it just boomerangs back.
LINDSAY:
I want to go back to where we started on this question of the capacity of the United States to lead an effort toward a global governance structure for AI. Given everything we've just discussed, can the United States do that, or are these rules going to be set by the Europeans and the Chinese, even if America is home to the latest innovations?
DUFFY:
I think we can play an incredibly important role. I also think that we have to be more creative than thinking about AI regulation as the only way in which we govern AI. Because governance to me is all about creating incentives as well as creating constraints. We are, for so many scientists and engineers and entrepreneurs, an incentivizing environment. We are an enabling environment for innovation and experimentation. We also, I think, need to have two things in mind.
There is a false dichotomy right now that having any guardrails is somehow a constraint on innovation, and I fundamentally reject that premise. When you put boundaries on any creative process, you also just encourage creativity in different ways, and you can streamline investment, you can make it more efficient, and you can waste fewer resources and you can move faster. So there's all sorts of things that guardrails put in place that can speed innovation. So I encourage everyone to reject that particular dichotomy.
But beyond that, what we've learned from years of internet governance is that the United States has not done the job it should have in working for the expansion of connectivity and the expansion of digital governance and digital systems that are more inclusive of what many people call the global south, but which I call the global majority: lower-income countries. We have really seen China just eat our lunch, for example, in terms of connecting Africa. Africa is covered now in Huawei fiber and Huawei hardware because China's digital Belt and Road Initiative really saw what countries were looking for, and it gave it to them at an affordable price. We still have a third of the world not connected to the internet. Technically, close to 70 percent is connected, but 30 percent of the world still doesn't even have connectivity.
There's a lot that we can do to get out ahead of this in terms of being a help: listening to lower-income countries, really hearing what those leaders need, really trying to help them improve their capacity, and trying to improve their engineers' and scientists' access to things like standards bodies. Thinking more about connectivity and how we go the last mile. There are a number of ways that I think we can truly influence AI governance that don't just involve a treaty at the UN or the G7 Hiroshima Process, although those will also be important.
LINDSAY:
On that note, I'll close up The President's Inbox for this week. My guest has been Kat Duffy, senior fellow for digital and cyberspace policy here at the Council. Kat, thank you very much for joining me.
DUFFY:
Always a joy, Jim. Thanks so much.
LINDSAY:
Please subscribe to The President's Inbox on Apple Podcasts, YouTube, Spotify, wherever you listen, and leave us a review. We love the feedback. The publications mentioned in this episode and the transcript of our conversation are available on the podcast page from The President's Inbox on CFR.org. As always, opinions expressed on The President's Inbox are solely those of the host or our guests, not of CFR, which takes no institutional positions on matters of policy.
Today's episode was produced by Ester Fang, with Director of Podcasting Gabrielle Sierra. Special thanks go out to Michelle Kurrillo for her research assistance. This is Jim Lindsay. Thanks for listening.
Show Notes
Mentioned on the Episode
Kat Duffy and Katie Harbath, "Defending the Year of Democracy," Foreign Affairs