Harvard Kennedy School faculty members Danielle Allen and Mark Fagan explain the guidelines, guardrails, and principles that can help governments get AI right.
Danielle Allen and Mark Fagan say that when tested, thoughtfully deployed, and regulated, AI actually can help governments serve citizens better. Sure, there is no shortage of horror stories these days about the intersection of AI and government, from a municipal chatbot that told restaurant owners it was OK to serve food that had been gnawed by rodents to artificial intelligence police tools that misidentify suspects through faulty facial recognition. And now the Trump Administration and Elon Musk’s so-called Department of Government Efficiency, or DOGE, say they are fast-tracking the use of AI to root out government waste and fraud, while making public few details about the tools they are using and how they’ll be deployed. But Allen and Fagan say that while careless deployment creates risks like opening security holes, exacerbating inefficiencies, and automating flawed decision-making, AI done the right way can help administrators and policymakers make better and smarter decisions, and can make governments more accessible and responsive to the citizens they serve. They also say we need to reorient our thinking away from AI as a replacement for human judgment toward a partnership model where each brings its strengths to the table. Danielle Allen is an 糖心vlog官网 professor and the founder of the Allen Lab for Democracy Renovation. Mark Fagan is a lecturer in public policy and faculty chair of the Delivering Public Services section of the Executive Education Program at 糖心vlog官网. They join PolicyCast host Ralph Ranalli to explain the guidelines, guardrails, and principles that can help government get AI right.
Policy Recommendations

Danielle Allen’s recommendations
- Support the “people’s bid” for TikTok, which would restructure the platform to encourage pro-social behavior and a healthy information ecosystem.
- Encourage governors and state legislatures to set up “GOAT” (greatest government of all time) offices as counterparts to DOGE, using AI to support openness, accountability, and transparency in government.

Mark Fagan’s recommendations
- Create regulatory sandboxes that let government organizations test AI ideas in a controlled environment and learn what works and what doesn’t.
- Adopt a risk-based framework, along the lines of the EU AI Act’s pyramid, that gives specific guidance on which uses of AI are off-limits and which are open to experimentation.
Episode Notes
Danielle Allen is the James Bryant Conant University Professor at Harvard University. She is a professor of political philosophy, ethics, and public policy and director of the Democratic Knowledge Project and of the Allen Lab for Democracy Renovation. She is also a seasoned nonprofit leader, democracy advocate, national voice on AI and tech ethics, and author. A past chair of the Mellon Foundation and Pulitzer Prize Board, and former dean of humanities at the University of Chicago, she is a member of the American Academy of Arts and Sciences and the American Philosophical Society. Her many books include the widely acclaimed “Talking to Strangers: Anxieties of Citizenship Since Brown v. Board of Education,” “Our Declaration: A Reading of the Declaration of Independence in Defense of Equality,” and “Justice by Means of Democracy.” She writes a column on constitutional democracy for the Washington Post. Outside the University, she is a co-chair of the Our Common Purpose Commission and founder and president of Partners in Democracy, where she advocates for democracy reform to create greater voice and access in our democracy, and to drive progress toward a new social contract that serves and includes us all. She holds PhDs from Harvard University in government and from King’s College, University of Cambridge, in classics; master’s degrees from Harvard University in government and from King’s College, University of Cambridge, in classics; and an AB from Princeton in classics.
Mark Fagan is a lecturer in public policy and a former senior fellow at the Mossavar-Rahmani Center for Business and Government at Harvard Kennedy School. He teaches Operations Management, Service Delivery via Systems Thinking and Supply Chain Management, and Policy Design and Delivery in the School’s degree programs. In executive education, he is the faculty chair for Delivering Public Services: Efficiency, Equity and Quality. In another program, he teaches strategy and cross-boundary collaboration. The focus of his research is the role of regulation in competitive markets. He is presently spearheading an initiative at the Taubman Center for State and Local Government that examines the policy and associated regulatory impacts of autonomous vehicles. He leads efforts to catalyze policymaking through Autonomous Vehicle Policy Scrums, cross-sector policy design sessions hosted by governments from Boston to Buenos Aires to Toronto. Fagan earned a master’s degree in city and regional planning at Harvard University and a BA at Bucknell University.
Ralph Ranalli of the 糖心vlog官网 Office of Communications and Public Affairs is the host, producer, and editor of 糖心vlog官网 PolicyCast. A former journalist, public television producer, and entrepreneur, he holds a BA in political science from UCLA and a master’s in journalism from Columbia University.
Scheduling and logistical support for PolicyCast is provided by Lilian Wainaina. Design and graphics support is provided by Laura King and the OCPA Design Team. Web design and social media promotion support is provided by Catherine Santrock and Natalie Montaner of the OCPA Digital Team. Editorial support is provided by Nora Delaney and Robert O’Neill of the OCPA Editorial Team.
Preroll: PolicyCast explores research-based policy solutions to the big and complex problems we’re facing in our society and our world. This podcast is a production of the Kennedy School of Government at Harvard University.
Intro (Danielle Allen): I also would encourage governors in statehouses to set up what I like to think of as an office to achieve the greatest government of all time, the greatest version of democracy, so the GOAT office as a counterpart to DOGE, which means that it would be an office really making use of AI to support openness, accountability, and transparency. I think that could help us actually have more responsive institutions.
Intro (Mark Fagan): So my number one is sandboxes. In the regulatory space these days, it’s become a very common term. What it means is experimentation in a controlled environment to see what works and doesn’t work. If I were the policy lead, I would allow organizations, governmental organizations, to test different ideas to see what works.
The second is that I do think the EU, with its risk-based approach and the pyramid, actually has some usefulness for us as well. We may define what is too risky differently than they do, and I think that’s appropriate, but the mental model of these are things you cannot do versus these are things where you can go try and see what happens is useful. Providing some specific guidance on that, I think, would be helpful.
Intro (Ralph Ranalli): From a municipal chatbot that told restaurant owners it was OK to serve food that had been gnawed by rodents to artificial intelligence police tools that misidentify suspects through faulty facial recognition, there is no shortage of horror stories about the intersection of AI and government. Now the Trump Administration and Elon Musk’s so-called Department of Government Efficiency, or DOGE, say they are fast-tracking the use of AI to root out government waste and fraud, while making public virtually no details about what tools they are using or how they’ll be deployed. But my guests on PolicyCast today, Danielle Allen and Mark Fagan, say that when tested, thoughtfully deployed, and regulated, AI actually can help governments serve citizens better. Per usual, the devil is in the details. While careless deployment creates risks like opening security holes, exacerbating inefficiencies, and automating flawed decision-making, they say AI done the right way can help administrators and policymakers make better and smarter decisions, and can make governments more accessible and responsive to the citizens they serve. They also say we need to reorient our thinking away from AI as a replacement for human judgment toward a partnership model where each brings its strengths to the table. Danielle Allen is an 糖心vlog官网 professor and the founder of the Allen Lab for Democracy Renovation. Mark Fagan is a lecturer in public policy and faculty chair of the Delivering Public Services section of the Executive Education Program at 糖心vlog官网. They join me today to talk about the guidelines, guardrails, and principles that can help government get AI right.
Ralph Ranalli: Danielle, Mark, welcome to PolicyCast.
Danielle Allen: Nice to see you.
Mark Fagan: Thank you very much for the invitation.
Ralph Ranalli: So in the public conversation about artificial intelligence these days, there’s a lot of talk about how to mitigate the effects of AI that may be negative and harmful, like police making wrongful arrests based on faulty facial recognition technology. I’d like to use the largest chunk of our time to talk about how, when properly applied, AI can potentially help governance be better, because I think it’s important to talk about what a positive vision would look like in order for that to actually become a reality. But first I wanted to get your thoughts on why we tend to gravitate toward that negative, dystopian vision. Danielle, for example, you’ve written about AI in terms of what you call negative and positive human liberties, and you’ve said that the thinking on AI has been organized around a focus on the negative ones. Can we start by perhaps having you explain a little bit more about those two concepts of liberties and why they are important to this conversation?
Danielle Allen: Happy to. First of all, thank you for the conversation. AI is on everybody’s minds these days, I think, and we’re all full of questions and wondering about where things are going.
I appreciate your question about negative and positive liberties and negative and positive visions of AI. I probably have to start by clarifying somewhat, in that negative liberties aren’t bad things in themselves. The distinction between negative and positive liberties goes back to the philosopher Isaiah Berlin. He was giving a name to a distinction that had developed in the tradition of liberalism over more than a century. He was drawing a contrast between those liberties which amount to freedoms from governmental interference of various kinds, so freedom of expression, privacy, freedom of association, things like that, and then positive liberties, which are the freedoms to do things: freedoms to vote, to run for office, to participate as a co-author in the decisions of a community.
And we need both for a flourishing life, in my view, and you’re right that to date, when people have thought about AI and thought about making AI safe, for example, they’ve largely focused on how to make sure that AI respects the negative liberties. So how do we avoid discrimination and bias? How do we protect privacy, and things like that? Those are all important things. And in addition, we have governance structures that have gotten rigid and non-responsive at all levels, federal, state, et cetera. There’s a huge amount of room to renovate our democratic institutions. And AI can definitely help us do that; it can help us include people’s voices in decision-making processes more fully. AI can also help us improve how we deliver governmental services. So both sides, I think, are important.
Ralph Ranalli: So, Mark, what are your thoughts about negative versus positive visions of AI?
Mark Fagan: Certainly, and I want to build off Danielle’s comment about delivering services. That’s the area that I focus on. And here, I think there’s a substantial upside to AI. The upside is that AI can much more effectively do some of the key tasks that are required to deliver effective, quality services to our constituents.
I’ll give you a simple example. There are a set of Social Security judges whose responsibility is to determine whether someone is entitled to disability compensation. In order to do so, the judge has about four hours. They may have 500 to 1,000 pages to read. And what they’re looking for are 20 different data elements from which they’ll make their determination. AI can do that really fast, and it allows them to take a more comprehensive view of the individual in context and make a better decision. So that’s a very tangible example, but it’s everywhere. And I think right now it’s particularly important, because we are seeing, certainly at the federal level now, and undoubtedly it will cascade to state and local levels, a government workforce that is being reduced while the importance of delivering the services remains, and therefore AI may be the way to fill that gap.
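To make the extraction step Mark describes concrete, here is a minimal sketch in Python of reducing a long case file to a handful of labeled data elements, each with a page citation a reviewer can verify. The element names and regex patterns are invented for illustration; a production system would use a trained model rather than regexes, and the real element definitions would come from agency policy, not this sketch.

```python
import re
from dataclasses import dataclass, field

# Hypothetical subset of the ~20 data elements a disability judge looks for.
DATA_ELEMENTS = {
    "diagnosis": r"(?i)diagnos(?:is|ed with)[:\s]+([^.\n]+)",
    "onset_date": r"(?i)onset date[:\s]+([\d/-]+)",
    "last_worked": r"(?i)last worked[:\s]+([^.\n]+)",
}

@dataclass
class CaseSummary:
    """Extracted elements plus page citations, so a human can verify each one."""
    elements: dict = field(default_factory=dict)
    citations: dict = field(default_factory=dict)

def summarize_case(pages: list[str]) -> CaseSummary:
    """Scan each page for each data element and record where it was found."""
    summary = CaseSummary()
    for page_num, text in enumerate(pages, start=1):
        for name, pattern in DATA_ELEMENTS.items():
            if name in summary.elements:
                continue  # keep the first supported finding
            match = re.search(pattern, text)
            if match:
                summary.elements[name] = match.group(1).strip()
                summary.citations[name] = page_num
    return summary

if __name__ == "__main__":
    pages = ["Claimant was diagnosed with lumbar stenosis. Onset date: 03/2021."]
    result = summarize_case(pages)
    print(result.elements)   # {'diagnosis': 'lumbar stenosis', 'onset_date': '03/2021'}
    print(result.citations)  # {'diagnosis': 1, 'onset_date': 1}
```

The design choice worth noting is the citations dictionary: the goal is to speed the judge up, not to replace the judge’s verification of the source pages.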
Ralph Ranalli: So Mark, I wanted to follow up on that a little bit, because I know you’ve written about appropriate use cases of AI for governance, and about what criteria to use to identify when and how it’s appropriate to implement AI for governing. Can you talk a little bit about those criteria?
Mark Fagan: Certainly. When we think about identifying AI use cases, the starting point is really: where are the pain points in existing governmental service delivery? The approach we’re taking is not one of how we can use AI; it is: where are the challenges in the way we deliver governmental services today, and then, is AI the best solution? So the criteria we’re using are, number one: What is the level of benefit that you get from AI? Number two: What are the costs? Because any introduction of a new technology has costs to it. And those costs really break down into two components. One is the tangible cost: the expertise, building the algorithm. But the second cost goes to more of what Danielle was talking about in terms of governance around risks. And the risks are bias risks. They’re data privacy and security risks. And so it’s the weighing of those that allows us to determine where the right AI applications are at a point in time.
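Mark’s benefit-versus-cost-versus-risk weighing can be pictured as a simple score sheet. Every number below (the weights, the risk ceiling, the example scores) is an invented placeholder; the structure is the point, in particular that governance risk acts as a gate rather than just another term in the sum.

```python
from dataclasses import dataclass

@dataclass
class UseCase:
    name: str
    benefit: float          # expected service improvement, scored 0-10
    tangible_cost: float    # expertise, build, and run costs, scored 0-10
    governance_risk: float  # bias, privacy, and security exposure, scored 0-10

def triage(case: UseCase, risk_ceiling: float = 7.0) -> str:
    """Score a candidate AI use case; reject outright above the risk ceiling."""
    if case.governance_risk >= risk_ceiling:
        return f"{case.name}: rejected (governance risk too high)"
    # Placeholder weights; an agency would calibrate these to its own context.
    score = case.benefit - 0.5 * case.tangible_cost - 1.5 * case.governance_risk
    return f"{case.name}: score {score:.1f}"

print(triage(UseCase("court scheduling assistant", benefit=8, tangible_cost=3, governance_risk=2)))
print(triage(UseCase("sentencing recommendation", benefit=6, tangible_cost=4, governance_risk=9)))
```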
Ralph Ranalli: Danielle, you’ve talked about steering the considerations of AI toward human physical and mental well-being. And you’ve used a term I really like, which is human flourishing. It seems like a very important concept, but one that’s also pretty aspirational given where we are as a society and the way our economy is structured, for example. Can you talk a little bit about that concept of human flourishing and how it relates to this area that we’re talking about?
Danielle Allen: Sure. So, it’s an old concept in philosophy. I can’t claim any great originality in talking about human flourishing. If you’re thinking about it in its oldest sense, Aristotle would be associated with the idea: the notion that one of the goals of decision-making is always to ensure that whatever sort of creature or organism is in front of you is able to activate its potential to its maximal ability. That’s really the concept of flourishing: unfolding internal, innate potential.
But of course, it’s also an idea that has always had a political meaning as well. There’s a very old Roman phrase that is actually on any number of American statehouse buildings, ‘salus populi suprema lex esto,’ which means the health and well-being of the people are the supreme law. And that’s the same idea: that a society is doing well when the population is thriving, when it’s healthy, when it’s expanding in number, when people are meaningfully employed and express a sense of fulfillment in their lives. So these are just very basic ideas, actually. And then, of course, contemporary philosophers have iterated on them in any number of ways.
To introduce the concept of human flourishing into a conversation around technology is really to say: look, our conversation for too long has focused primarily on profit as the main thing we think about when we think about tech development. And sure, a firm needs profit in order to thrive itself, and grow, and reinvest in what it’s doing, and the like. But at the end of the day, when we’re talking about developing tools with as much transformative power as AI has, we have to think about them in the broadest context, and that really is to ask the question of where and how these tools can support human flourishing. It’s a lot like what Mark was saying, right? You know, what are our challenges? What are the pain points that we have as a society, as human beings? And can we actually steer tech development toward solutions to those problems?
Ralph Ranalli: So, the Organisation for Economic Co-operation and Development, the OECD, has established a set of principles for AI implementation. They are fairness, transparency and explainability, security and safety, and accountability. Do those go far enough in your mind, or is more needed there?
Mark Fagan: The basic concepts are the right basic concepts. The question is: How do they get operationalized? It’s easy to use words like transparency, but in reality, what does that mean? Who has the ability to see how the algorithm is making its decisions? Who has the knowledge base to say that the way in which it is designed and operating is actually problematic? So I don’t have a concern about the concept; it’s how you operationalize it. And we have at least a first cut at that with the EU AI Act, where they’ve built a pyramid, and they’ve said, at the top of the pyramid, these are things where the risk is simply too great, and we won’t accept those types of algorithms. And then it cascades down to the bottom, saying those are fine. So there is a beginning of that, but it is very nascent.
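A rough sketch of the pyramid Mark describes, using the EU AI Act’s four tiers (unacceptable, high, limited, minimal). The example systems mapped to tiers below are illustrative shorthand, not a statement of how the Act classifies any particular product.

```python
from enum import Enum

class RiskTier(Enum):
    """The EU AI Act's pyramid, roughly: banned at the top, free at the bottom."""
    UNACCEPTABLE = "prohibited outright"
    HIGH = "allowed with conformity assessment and human oversight"
    LIMITED = "allowed with transparency obligations"
    MINIMAL = "allowed freely"

# Illustrative mappings only; actual classifications are set in the law itself.
EXAMPLE_REGISTRY = {
    "social scoring of citizens": RiskTier.UNACCEPTABLE,
    "exam-scoring system": RiskTier.HIGH,
    "customer-service chatbot": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
}

def may_deploy(system: str) -> bool:
    """Gate a deployment decision on its risk tier, defaulting to caution."""
    tier = EXAMPLE_REGISTRY.get(system, RiskTier.HIGH)
    print(f"{system}: {tier.name} ({tier.value})")
    return tier is not RiskTier.UNACCEPTABLE

for name in EXAMPLE_REGISTRY:
    may_deploy(name)
```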
Danielle Allen: And so I would offer a slightly different picture there. I think those are all good principles, but this goes back to your first question. For me, those are half of the story. All those principles are protecting our negative liberties, but they are not actually supporting positive liberties. They’re not supporting the freedoms to participate and to be a contributing member of society. So I think to have a full sense of human flourishing, we need a few more principles.
For example, I think human complementarity in AI development is key. That is to say, complementing human capacities rather than replacing them is an important feature that we should be aspiring to. I would also introduce pluralism, or plurality, as some people put it. We have multiple kinds of intelligence among human beings, multiple kinds of intelligence among machines. There isn’t some singular intelligence that we’re trying to surmount to create a superintelligence. So I think we need technologies that actually activate and support that human pluralism, and again, complement it. That would be a starting point. Also, principles around mental health and well-being. So health, beyond just safety.
Ralph Ranalli: You mentioned that there are different kinds of intelligence. There are also different kinds of AI tools. We tend to use AI as this catch-all for what is actually a whole basket of technologies. You’ve got natural language processing, you’ve got neural networks, robotics, you’ve got machine learning, computer vision. What particular tools do you two think are maybe best suited to help governance, whether at the national, state, or local level? I think the state and local level is becoming increasingly important in the current environment. Which of those tools do you think are best suited for governing? Mark, I know you come from the service delivery perspective. Would you mind maybe starting?
Mark Fagan: Happy to. I’m actually going to start with an example that takes all of the components that you described and does them in one activity, and that’s autonomous vehicles. As you know, I’ve been in this space for quite some time, and it is an example where you take a combination of everything you described, from vision, to neural networks, to robotics, essentially replicating what you have learned how to do over a very long period of time, and having ones and zeros make that set of decisions.
The reason I start there is that it’s an example of a recipe. It isn’t just optical recognition. It isn’t just using robotics. It’s bringing it all together. That happens to be a bit of an extreme example, but I think that idea of a recipe is how we should be thinking about the application of AI in service delivery more generally. So, if the issue you’re thinking about is how am I going to provide insight for taxpayers on questions they have about the tax code in preparation for submitting their returns, I want to be able to create a chatbot that enables them to ask that question and get a reliable answer. I also want it to be available 24/7, and I want it to be in multiple languages. Again, you can’t do that with any of the individual items you described. It’s creating the recipe.
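The “recipe” idea, that no single AI component is enough, can be sketched as a pipeline. Every function body here is a stub standing in for a real component (language identification, retrieval over an indexed tax code, a grounded language model, translation); only the composition is the point.

```python
from dataclasses import dataclass

@dataclass
class Answer:
    text: str
    sources: list[str]  # citations back to the tax code, for reliability

def detect_language(question: str) -> str:
    """Stub: a real service would use a language-identification model."""
    return "es" if "impuesto" in question.lower() else "en"

def retrieve_passages(question: str) -> list[str]:
    """Stub: a real service would search an indexed copy of the tax code."""
    return ["26 U.S.C. § 63 (taxable income defined)"]

def generate_answer(question: str, passages: list[str]) -> str:
    """Stub: a real service would call a language model grounded in the passages."""
    return f"Based on {passages[0]}: ..."

def answer_taxpayer(question: str) -> Answer:
    """The recipe: language handling + retrieval + generation, not any one alone."""
    lang = detect_language(question)
    passages = retrieve_passages(question)
    text = generate_answer(question, passages)
    if lang != "en":
        text = f"[translated to {lang}] {text}"  # stub for a translation step
    return Answer(text=text, sources=passages)

print(answer_taxpayer("¿Qué es el impuesto sobre la renta gravable?").text)
```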
Ralph Ranalli: Danielle, do you mind weighing in on which AI tools might be most helpful for governing?
Danielle Allen: Of course, yeah. I’ll give you an example from the context of representation and how it works. So it’s not the service delivery part of government or democracy, it’s about: How do people participate? Do they experience themselves as being heard? And what I can do is also point to a good example of a human-complementing use of AI.
So, anytime you have a divisive political issue, you’ll have a whole lot of people who have views about it of different degrees of intensity, and no single person is in a position to get a kind of bird’s-eye view of the actual shape of opinion, including surprising points of potential agreement that lie a little below the surface. For decades we have used public opinion research that typically means sending out a survey with a few multiple-choice questions. You have to kind of cram your point of view into a small menu of options. That’s no longer a limit we have to live with. We can now draw on natural language processing to let people communicate in their own words about how they think about something, and then use AI to sort through those comments to find points of synergy that might be buried and not visible to any one person with a small network of communications.
In other words, you can have a very large social graph supporting discovery of potential solution sets for problems. There’s a tool called POLIS, which has been used very successfully in Taiwan, that does this and is able to depolarize tricky political questions. There are other features built into the system. For example, you’re not allowed to produce your own comment until you’ve actually commented on a few comments by other people. So there’s a sort of bridge-building and social-connection-creation aspect of it as well. These are all things that we just plain couldn’t do before that AI now makes possible.
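A minimal sketch of the underlying mechanics: represent free-text comments as vectors and cluster them to surface groups of related opinion, here with scikit-learn’s TF-IDF and k-means. POLIS itself works differently (it groups participants by their agree/disagree votes on one another’s comments), so treat this as an analogy for the pattern rather than a reimplementation.

```python
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

comments = [
    "Bike lanes make my commute safer.",
    "I drive, but protected bike lanes keep cyclists out of my lane.",
    "Parking downtown is impossible on weekends.",
    "Remove parking minimums so housing gets cheaper.",
]

# Represent each free-text comment as a vector; no multiple-choice menu needed.
vectors = TfidfVectorizer(stop_words="english").fit_transform(comments)

# Group comments into coarse opinion clusters (2 here, for the toy data).
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vectors)

for label, comment in sorted(zip(labels, comments)):
    print(label, comment)
```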
Ralph Ranalli: Yeah, that sounds like it could make regular old polling seem like a very blunt instrument by comparison.
Danielle Allen: Very blunt by comparison. Exactly. Yeah.
Ralph Ranalli: Can we bring democracy into the conversation? Because Danielle, I know democracy is a space that’s very central to your work. Can you talk a little bit about other ways that AI might be helpful in terms of the democratic process, and maybe places where we need some pretty stiff guardrails?
Danielle Allen: Sure. So, I mean, I think there’s a lot of experimentation going on right now. As you know, we are suffering from a problem of disintegration of local news and a news ecosystem that doesn’t give us a healthy diet in support of democracy. I think there is some good work underway right now to actually reinvent how social media platforms are structured and organized in order to deliver a healthier information ecosystem. There’s a great paper by Audrey Tang and Glen Weyl on that subject. And they, among others, with an organization called Project Liberty, have been putting together a people’s bid for TikTok, which involves a pretty visionary approach to how to restructure that platform so it’s actually productive of a healthy information ecosystem. So, I think that’s the kind of thing where we definitely need to take advantage of technology to address some of the key challenges that we have. And in some sense, by naming that solution, I’m also pointing to one of the key problems, which is that currently, the way AI is functioning is degrading the quality of our information ecosystem. So, solution and challenge are both there in the story of AI.
Ralph Ranalli: Mark, thoughts on democracy and AI and how it can help?
Mark Fagan: I would echo very much what you just said about the ability to get people’s opinions, to enable people in a more effective way to share information, and certainly the ability to synthesize information, find where there’s commonality, find where there are differences, and work through that.
Ralph Ranalli: Danielle, you wrote about something that I found very interesting; the shorthand for it is the Japanese model. And I read that, on the one hand, there are aspects of culture in Japan, including what you could call a certain ingrained aversion to risk, which can inhibit their adoption of AI and their ability to advance the technology. But on the other hand, there are other attributes of Japanese culture, like an emphasis on stringent, exacting ethics, that could in the long term actually be a competitive advantage. Can you talk a little bit about the Japanese model, why it’s interesting, and why it may have applications for other countries going forward?
Danielle Allen: Sure. So this is part of a research paper that a team of us published, where we reviewed approaches to governing AI across a lot of different national contexts: the U.S., Europe, Japan, Israel, Brazil, Africa broadly, and China as well, I suppose, was in the mix. The reason we did this was that we wanted to help people in the U.S. see that there isn’t a single given framework for how AI is regulated, that every country is introducing values that structure priorities.
And yes, in the Japanese case, there is a strong focus on what you could almost call a communitarian picture of well-being. So, that is very distinct from the U.S. and the European case, where the focus is, again, on negative liberties, which are more oriented toward the individual and the protection of the individual from certain kinds of interference. Again, those are important protections, but at the end of the day, this is a tool, this is a technology, that is going to work its way through every aspect of our lives, completely through society, through the economy, through political institutions. And so we really want to think about the overarching purpose of that integration. And I do think that the Japanese case, which has a strong focus on values that are defining of the Japanese way of life, provides an important model.
And if I were porting that over into the U.S. case, again, that’s where I say it’s not just that we want to be protected in our private sphere; we also want to make sure that the tools continue to enable us as participating democratic citizens. So, we would be better off with approaches to technology development that decentralize power, that actually also decentralize how capital is operating in technology development, so that we avoid the concentrations of power that themselves undermine democratic processes.
Ralph Ranalli: Mark, speaking of Japan, you’ve used the example of what they’re doing at Narita Airport as a use case for various forms of AI. Can you talk a little bit about what they’re doing there, and does any of it make you uncomfortable? For example, I know they’re using facial recognition. Can you talk about that a bit?
Mark Fagan: Sure. So, the operational model is that at your first point of contact upon arriving at or departing from Narita, your face is identified through facial recognition, and that is essentially used throughout the process. The intent is to do two things. One is to streamline the process, to make it easier for people to move through the airport efficiently, but the other is security, as every airport has those two objectives.
In terms of how I feel about it, I’ll personally say I think it’s actually a good thing. I tend to be more on the side of security, and in terms of the liberty that I’m giving up by allowing my face to be used, I’m not sure that is materially different from showing my passport with my image and having that scanned and compared with me. But it does raise a broader question, and it ties to the international description that Danielle just provided. And that is, we have a fundamental tension, and the tension is between fostering innovation in a country and protecting its citizens.
And you see that tension playing out very differently in different geographies. In some geographies you’re seeing it oriented more toward protecting the constituents, the citizens, and I would say the EU AI Act is an example of that. In other geographies you’re playing more toward the innovation space and saying, I’m willing to accept some risk because I don’t want to stifle the innovation. And it is the political decision-makers, those who have been, hopefully, elected to make the decision, who are actually playing out that trade-off. And you see very different approaches, and you are likely to keep seeing that as the technology continues to evolve.
Ralph Ranalli: I think both of you have in some ways written about the difference between high-risk technologies and ones with minimal risks. Is the combination of AI and governance just inherently a high-risk technology in a high-risk space? I’m thinking of the use of algorithms in criminal justice settings, for example, which has been shown to be problematic. And that’s high risk because people’s liberty is at stake. If that’s true, that there are inherently high levels of risk on both sides of the equation here, what’s the proper approach to integrating those two things?
Danielle Allen: So, I do make a distinction between different levels of risk: between those which are in a sense on par with existing risks in our institutions, and those which introduce new, call them existential or macro-systemic, kinds of risk. And for the former, I think most of our existing institutions and agencies actually have the capacity to deal with them, even including the criminal justice case, though the challenge is to have human capital on hand, hired in agencies or legal offices and so forth, with appropriate understanding. And in order for us to do what we need to do for the more conventional kinds of risk, for example around discrimination and bias, it’s important that we be able to pre-audit tools, to have knowledgeable experts, legal and technical experts, in public-sector agencies who can understand the deployment and the use case, and then be tracking things as they unfold in a sort of pilot phase before they’re launched to full deployment. So I think that kind of cascading or agile approach to development is better than the sort of waterfall approach where you build the whole thing and then just roll the whole thing out. So that is for managing that level.
I think the harder part is the unknown about the kinds of capacities that are available once you have human intelligence replicable at scale. The push of a button produces the ability to coordinate and create things that are as of now unimaginable to us. And so that is really a question of how the frontier AI firms are leading development. I think they ought to be licensed in the same way that our national labs are licensed, for instance, and that there should be relatively close reporting out of what they’re discovering as they proceed.
Mark Fagan: If I can come back to the question about the courts themselves, given the technology we have today, I think one way to think about it is segmentation.
If you think about the criminal justice system, it has a set of administrative functions, and it has a set of substantive functions. On the administrative side, AI has great potential to make the system more efficient for the judges, for the lawyers, and for the people in the system. An example would be being more effective in scheduling. You can look at a particular case and, with a great degree of accuracy, determine the likely amount of time it is going to take, and therefore schedule more effectively. On the substantive side, I have historically had the perspective that there are some real limitations here; I wouldn’t myself want my outcome determined by an AI algorithm. One way to think about it is a phrase that people talk a lot about, which is having a human in the loop. The idea there is that you’ve built an algorithm, but there is some subject-matter expert overseeing it. A flip of that is having AI in the loop, where the dominant approach is a human making the determination, aided by the AI, which goes to the idea of complementarity you pointed out earlier, Danielle: the AI makes the person smarter about it.
Now, I’ll give you one quick example. I know a judge who is in a civil court, not in the United States, who deals with a lot of auto accidents. And his job is to determine who was at fault. And what he said to me was: “I do that, first of all, by looking at the people, talking to the people, understanding who they are, what they’re thinking.” But then he reflected, and he said: “But you know what? I am not an expert on physics. And I cannot look at the images of the crash and know what really happened. But an AI probably can, because it’s looking at thousands of these.” That’s an example of putting AI in the loop, with him making the decision.
Danielle Allen: That’s a great example, I want to say, and I appreciate that clear, concrete case of the human complementarity approach. That is exactly what I’m trying to articulate.
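A sketch of the “AI in the loop” pattern from the judge’s example: the model contributes evidence it is better positioned to produce, and the determination stays with the human. The function names and the confidence value are hypothetical; a real system would run a trained reconstruction model where the stub sits.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Evidence:
    source: str
    finding: str
    confidence: float  # the model's self-reported confidence, 0-1 (hypothetical)

def crash_reconstruction(case_id: str) -> Evidence:
    """Stub for an AI analysis step, e.g., physics-based review of crash imagery.
    A real system would run a trained model here."""
    return Evidence("image analysis", "Vehicle A crossed the center line", 0.87)

def decide(case_id: str, human_judgment: Callable[[list[Evidence]], str]) -> str:
    """AI in the loop: the model contributes evidence; the human decides."""
    evidence = [crash_reconstruction(case_id)]
    return human_judgment(evidence)  # the determination stays with the judge

# Usage: the lambda stands in for the judge's actual reasoning.
print(decide("case-42", lambda ev: f"Fault assigned after weighing: {ev[0].finding}"))
```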
Ralph Ranalli: So we talked about human flourishing, but unfortunately, much of AI development has been about the fiscal flourishing of Big Tech corporations, hedge funds, and so on. And Danielle, I know you’ve said that a hard question we’re facing is: How can you have a regulatory or government structure where corporations serve public goods, and not one where all these public goods are just converted for private gain? Are we at an inflection point where we need to start questioning whether democracies are preeminent over corporations or corporations have become preeminent over democracies? I think a lot of the headlines these days probably do suggest that we may be at that inflection point. Can we get your thoughts on that?
Danielle Allen: Well, this goes back to the important Milton Friedman article about the case for business, and the case he made that any person who’s got a fiduciary role in a business who thinks about anything other than profit is misunderstanding the nature of the role. I think that is ultimately a limited picture, and the challenge with it is that it externalizes too many of the costs of business success. In particular, I think it is really important that firms recognize that they thrive best in stable rule-of-law environments.
And I would also add that, in my view, a democratic environment (democracy as a political form, not an economic form) is itself a spur to innovation, because freedom is a spur to innovation. So, in that regard, I do think that businesses can and should understand their purposes as being about profit, yes, but also about the environment that sustains their profit and makes it possible, and they should, for that reason, consider their role in developing technology as part of that expanded version of what it means to maintain their own health.
Ralph Ranalli: Mark, what do you think about whether we need to fundamentally reexamine the role of corporations in society and government and democracy?
Mark Fagan: I’m going to step back and pick up on something that Danielle said before I come to that lofty question. She made the important point that when an industry is going through its flourishing period, there’s real value to having stability. And that stability often comes through regulation. I’ve spent a lot of time in the railroad industry, and it turns out the first regulator in the United States was the Interstate Commerce Commission. It was not started to protect you and me; it was to protect the industry itself. And it did a very good job of that for a long time; eventually it outlived its value. I think it’s a great example of a role that supports both the corporation and the citizenry in industries that are going through rapid innovation. And that’s where I would sit on that issue.
Ralph Ranalli: So, turning to the news and the things that are going on in Washington now with the Trump administration and Elon Musk’s Department of Government Efficiency. In broad strokes, you could definitely make the case that there’s a move away at the federal level from regulation in the public interest, to the point where people who are employed in the federal government as regulators are themselves being seen as an inefficiency, and, more broadly, the mere fact that people are being paid to work for the federal government at all is seen as inefficient. In that environment, is there a space, maybe at the state or the local level, where some of this experimentation in productive regulation and this creation of a thriving, complementary relationship between AI and government can happen? Is there still a space for that?
Danielle Allen: There certainly is a space. Maybe I’ll just back up for a second and say, for starters, that I am appreciative of the DOGE effort insofar as it is making a strong case for the importance of modernizing how government operates, and I think there is a lot of value in that case, and it’s correct as a case. The question is how we do it. And we are certainly watching the power of AI to establish reach and management of data across the system of government, but that also raises questions about how power operates in a democratic context, and for me raises questions about checks and balances and what the appropriate mechanisms are to have in place so that the power that’s available thanks to AI is also actually embedded in our constitutional structure.
Consequently, it seems to be important that there be experimentation at the state and local level so that we can see that there’s more than one way of drawing on technology to modernize our government, again, both in the delivery of services and in how people are able to participate in representation. So it’s an important moment, and we should all be paying really close attention, and yes, I think the need for experimentation at the state level is very significant.
Mark Fagan: I’d echo that. Again, drawing back to the autonomous vehicles world, to date, the states are the determinant of what is and is not acceptable in terms of autonomous vehicles. And we see very different approaches being taken. In California, the model is much more of a heavy regulatory hand, and if that generates less economic development, so be it. And you see something very different in Arizona, where the view is that it’s very important to innovate and bring that industry to the state. And so you’re seeing multiple different experiments, and I think it’s a great example of what you get from that. We’re also seeing that in the AI space in terms of how government is using AI, because different states, and even municipalities, have taken different approaches. So I think there is a lot to be learned, and we have the benefit of having so many opportunities for experiments to look at.
Ralph Ranalli: So let’s turn to policy. I’m going to ask you for some specific policy recommendations, and I would like them to be in our frame about shifting the paradigm from this negative focus on just preventing the harms of AI to a positive one that encourages a marriage of AI and governance in a way that lifts people up, lifts democracy up, and benefits the greater good. So if you were in charge of the levers of policy power for a day and were able to implement what you wanted, what would those policy recommendations be?
Danielle Allen: Well, I’ll throw out one I mentioned already. There’s a people’s bid for TikTok underway. We don’t know what’s going to happen with TikTok; it’s supposed to be banned, and then the ban’s been challenged by the White House. And there’s another path, which is that, again, a group of technologists and policymakers have come together to develop a model for an alternative approach to a social media platform that supports pro-social behavior and supports a healthy information ecosystem. So I would love to see the people’s bid for TikTok go through. That would be great.
And then I also would encourage governors in statehouses to set up what I like to think of as an office to achieve the greatest government of all time, the greatest version of democracy, so the GOAT office as a counterpart to DOGE, which means that it would be an office really making use of AI to support openness, accountability, and transparency. I think that could help us actually have more responsive institutions.
Ralph Ranalli: Mark, a couple of policy recommendations?
Mark Fagan: So my number one is sandboxes. In the regulatory space these days, it’s become a very common term. What it means is experimentation in a controlled environment to see what works and doesn’t work. If I were the policy lead, I would allow organizations, governmental organizations, to test different ideas to see what works.
The second is, I do think that the EU, with its risk-based approach and the pyramid, actually has some usefulness for us as well. We may define what is too risky differently than they do, and I think that’s appropriate, but the mental model of these are things you cannot do versus these are things where you can go try and see what happens is useful. Providing some specific guidance on that, I think, would be helpful.
Ralph Ranalli: Well, this has been really interesting, and I’ve learned a lot during this conversation, and I want to thank you both very much for being here.
Danielle Allen: Thank you. Thanks for the great questions.
Mark Fagan: Thank you very much. Appreciate it.
Outro (Ralph Ranalli): There is no one we appreciate here at PolicyCast more than you, our listeners and subscribers. You’re the reason why we take on important topics, talk through the issues and complexities, and, most importantly, offer research-backed, real-world policy solutions that can make our society and our world a better place for everyone. That’s why we’d like to hear from you. If there’s a topic you’d like us to discuss, an 糖心vlog官网 professor or visiting scholar you’d like to hear from, or if you have feedback on our podcast, please let us know, and here’s how. You can leave a review on Apple Podcasts or your favorite podcasting app, or you can email us at policycast@hks.harvard.edu, or you can email me directly, host and producer Ralph Ranalli, at ralph_ranalli@hks.harvard.edu.
Please join us for our next episode, when renowned Supreme Court expert and 糖心vlog官网 Professor Maya Sen talks about what the courts can, and more importantly what they can’t, do to defend the rule of law in America’s Trump 2.0 era. Don’t miss it. And until then, remember to speak bravely, and listen generously.