
In this Wiener Conference Call, Dean Jeremy Weinstein discusses emerging technology and its intersections with democracy and higher education, including:

  • How technology may help solve our everyday problems, but could come at the cost of undermining our social goals
  • The consequences of rolling out AI to the world almost overnight
  • How technology can generate negative externalities without proper guardrails
  • Why, although algorithms are powerful and efficient, we should be concerned about fairness, discrimination, agency, and oversight
  • Building an ethical muscle and an ethic of technology among the next generation at Harvard Kennedy School

Wiener Conference Calls feature Harvard Kennedy School faculty members sharing their expertise and responding to callers’ questions. We are grateful to the Malcolm Hewitt Wiener Foundation for supporting these calls, and for Malcolm Wiener’s role in proposing and supporting this series as well as the Wiener Center for Social Policy at Harvard Kennedy School.

- [Narrator] Welcome to the Wiener conference call series featuring leading experts from Harvard Kennedy School who answer questions from alumni and friends on public policy and current events.

- Today we're very fortunate to be joined by Jeremy Weinstein, who is Dean and Don K. Price Professor of Public Policy at Harvard Kennedy School. I'm also extremely proud to note that Jeremy is an alumnus of the school. He received his PhD here in 2003. His academic expertise spans topics including migration, democracy and the rule of law, and political violence. Beyond his scholarly work, his experience includes senior roles in the US government. He served as both director for development and democracy on the National Security Council and as deputy to the US Ambassador to the United Nations during the Obama administration. Jeremy joined the Kennedy School in July from Stanford, where he led the Stanford Impact Labs and the Immigration Policy Lab. In recent years, Jeremy has been working on the intersection of technology and public policy, including co-authoring the book "System Error: Where Big Tech Went Wrong and How We Can Reboot." His thoughts on how to balance the benefits of new technologies with the challenges they pose will be the focus of today's call. I wanna note that Jeremy is speaking in his personal capacity as a scholar who's worked on these issues and not on behalf of Harvard University. We're very fortunate that he's here to share his expertise with the Kennedy School's alumni and friends. Jeremy, over to you.

- Ariadne, thanks so much for that kind introduction and I'm thrilled to join with all of you in this Wiener conference call, my inaugural participation in the Wiener conference call. I want to thank the Wiener family for their generous support of this program. I'm gonna share a few slides. Let me get them up on the screen, gimme a thumbs up if you can see them. Terrific, we're all set. Today I'm gonna share some thoughts and reflections drawing on the book that Ariadne described and really, you know, almost eight years of teaching and researching at Stanford in Silicon Valley, thinking about the intersection of technological change and democratic politics. And I think this is important to share in part because I'm stepping into a new role, five months in as dean at the Kennedy School, you know, a world class and leading institution in the study of public policy and government. And I think it's apparent to everyone on the call that thinking about the appropriate role of our political institutions in governing in this moment of technological change is an issue that naturally should be at the forefront of my mind as Dean. So I'll start with a bit of framing and take you through remarks for about 25 or 30 minutes and then open it up to questions. So here's the book, "System Error: Where Big Tech Went Wrong and How We Can Reboot". And if you read my bio, you might ask yourself the question, why is this scholar of foreign policy, international relations, and political and economic development in Africa writing a book about technology policy? And to answer that question, I really need to begin with my time in the US government, and in particular my role as a deputy to the US Ambassador to the United Nations. In that role, I sat on something called the deputies committee, which in the US government is the principal foreign policymaking body at the White House. It meets multiple times a day across a broad range of issues, and it's where the deputy cabinet secretaries come together to talk about the most important foreign policy issues of our time. It was in the context of my service, you know, as Deputy UN ambassador and on the deputies committee, that I really began to realize an extraordinary challenge that we confronted at the senior most levels of the US government, which was a gap between the technical understanding of, you know, the frontiers of new technologies among senior policy makers, and the challenges that we confronted in the policy landscape in a society that was being transformed by technological change in real time. Sometimes this played out in very specific debates, like debates around end-to-end encrypted software and how to balance the innovation potential of our economic engines in Silicon Valley with our concerns and needs around public safety. Sometimes the challenges related to cyber threats, and ultimately how as the US government we should be in a position to protect not only our critical infrastructure, but also to incentivize the private sector in the United States to take the set of steps that were required to protect themselves from the kinds of vulnerabilities that the technological age represents. When I came back to Stanford in 2016, the Obama administration had come to an end. I arrived back at Stanford and this was a campus that, while I had been gone, had been truly transformed by this second generation moment of technological change. Not the early computer moment, but the internet and social media and mobile applications moment.
Computer science was the largest major among men, among women, among domestic students, among international students. You know, there was so much enthusiasm for technology's potential, and the question for a public policy person or a social scientist was where and how do I contribute, drawing on the kind of insights and perspectives that I had as a policymaker. My first instinct was to say that the issues that I'd thought about at the policymaking table, the question of what are the values at stake for us as a democratic society, how do we referee things that are in competition with one another, how do we solve for a set of social goals, that these perspectives and skills were relevant to computer scientists, not just to social scientists. And so I went around campus, because I needed some partners, and I found partners in Rob Reich, a preeminent political philosopher, and Mehran Sahami, the most popular computer science professor at Stanford. And you can thank him because he's the one who invented spam filtering technology. So if you think it works at all to save you from some of the junk mail you don't wanna see in your inbox, that was Mehran Sahami's computer science dissertation. The three of us got together and began to think about, in a campus that was consumed with the possibilities of new technologies, what it would mean to help our students build the muscles and the muscle memory to bring an ethical lens to the decisions that they make about the technologies that they design and deploy into the world. Give them an understanding of how to think about measuring the social consequences of the technologies that they build, and give them a framework for thinking about the appropriate role of technology companies vis-a-vis political institutions that may solve for some of the broader social problems. We began to teach computer scientists in the classroom, regularly teaching classes of 250 to 300 computer science majors. In the evenings we taught professional technologists, giving them an opportunity to engage with these issues as well, outside of their professional roles and responsibilities. And when we were all locked down at home in the context of the pandemic, we took an opportunity to write this book to share some of our perspectives with a broader audience. What I hope to do today in my short set of remarks is share with you an arc of this book, beginning with where we find ourselves now in 2024. The book came out in 2021 and then in 2022 in paperback, to give you a sense of how I think about these issues in this present moment, to offer you a perspective on why I think we confront some of the challenges that we do in the present moment. What are the kind of historical drivers that put us in the position that we are now sort of navigating some of the real tensions between innovation and social harms of technology? And then to offer you a framework for thinking about the way forward. So let me begin with where we find ourselves now. And I really wanna start with a story. Stories are often illuminating and engaging. And so I want to tell you a story about Joshua Browder, who arrived at Stanford as an undergraduate. Like many inspired undergraduates, he felt an urgency to make an impact in the world, and the impact in the world that he wanted to make was by designing and launching a startup. Like many founders, he was motivated by a personal pain point, and his pain point was that he really disliked parking tickets.
Parking tickets, he felt, were a tremendous annoyance. As a high school student growing up in England, he must have gotten a lot of parking tickets. I don't know whether he was late to school on a regular basis, but he found himself accumulating unpaid bills from parking tickets. And he had an intuition that by using the tools of computer science and machine learning, he could help people efficiently get out of parking tickets. And what this meant was using the fact that now you can contest parking tickets online: you have to fill out a set of forms, you have to make a set of claims. And if he could fill out those forms in an automated way and learn over time what are the most effective counterclaims to make, you could actually get people out of parking tickets in a systematic way. He called this company Do Not Pay, and after his freshman year, he went out for a seed round of venture capital and raised a significant sum of money and ultimately dropped out of Stanford to launch the company Do Not Pay. Now why do I start with this story? I start with this story because on the one hand, parking tickets are annoying, so are speeding tickets. So are all sorts of other fines and constraints that we may face in society. And so helping people get out of parking tickets may in fact be a noble cause. On the other hand, we have parking tickets for a reason, right? And, in fact, we have parking tickets for many reasons. One reason that we have parking tickets is we often reserve spaces near buildings for people who are differently abled to enable them to access that physical space far more easily. And so we give people parking tickets if they park in a space without permission. Sometimes we have parking tickets because if you live in a place where you get snow and ice, you need to clear the street to enable people to progress and not to have the sewage system backed up by debris. So people need to park on one side of the street and then they need to switch to the other side of the street. I remember this from living in Washington DC. So you get a parking ticket if you don't move your car to enable street cleaning to happen. Sometimes we use parking tickets because we actually wanna reduce congestion in city centers. So you limit the number of spaces that are available to people for parking in order to reduce emissions, in order to reduce congestion, social goals that we might wanna solve for. And then interestingly, in the case of the UK, fines for parking tickets also are used to support the updating of road and other physical infrastructure. I mention all of those things because Josh Browder and the startup Do Not Pay were not solving for our social goals, right? It was solving for a personal pain point that Josh Browder and other people experience. And it reveals some of the challenges of technological change, right? The opportunity to solve for a problem that causes annoying fines for lots of different people may in fact undermine some of the social goals that our regulatory architecture, our rule of law, has set in place, as the parking ticket example reveals. Now, if that feels a little bit cutesy and micro for you, I think part of what you need to understand about this ambitious agenda that Joshua Browder has is that it goes beyond parking tickets. Do Not Pay is a preview of Joshua Browder's broader interest in transforming the way the legal profession works by replacing human lawyers with robot lawyers, with AI-driven lawyers.
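The remarks don't describe how Do Not Pay's software actually works, but the idea of "learning over time which counterclaims are most effective" can be sketched as a simple bandit-style loop. Everything below, the template names, the simulated success rates, is hypothetical and purely illustrative:

```python
import random

# Hypothetical appeal templates and their running outcome counts (names invented for illustration).
templates = {
    "signage_unclear": {"wins": 0, "tries": 0},
    "meter_malfunction": {"wins": 0, "tries": 0},
    "medical_emergency": {"wins": 0, "tries": 0},
}

def choose_template(epsilon: float = 0.1) -> str:
    """Epsilon-greedy choice: mostly pick the template with the best observed
    success rate, occasionally try a random one so the system keeps learning."""
    if random.random() < epsilon:
        return random.choice(list(templates))
    return max(templates, key=lambda t: templates[t]["wins"] / max(templates[t]["tries"], 1))

def record_outcome(template: str, won: bool) -> None:
    """Update the running statistics after a contested ticket is decided."""
    templates[template]["tries"] += 1
    templates[template]["wins"] += int(won)

# Simulated usage: each contested ticket picks a template, then records whether the appeal succeeded.
true_rates = {"signage_unclear": 0.3, "meter_malfunction": 0.5, "medical_emergency": 0.2}
for _ in range(1000):
    t = choose_template()
    record_outcome(t, won=random.random() < true_rates[t])

best = max(templates, key=lambda t: templates[t]["wins"] / max(templates[t]["tries"], 1))
print(best)  # typically "meter_malfunction", the template with the highest simulated win rate
```

The point of the sketch is only that such a system optimizes for appeal success, nothing in it encodes the social goals the parking rules exist to serve.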
Now my wife's a lawyer, and I very much appreciate the value of the legal profession, but I also understand from her own experiences that there are important legal functions that absolutely would benefit from the use of artificial intelligence tools. And we see lots of firms engaged in those practices now. In particular things like document review, which is what first year and second year associates spend an awful lot of time focused on. But I do think as we think about Josh Browder's broader ambition, we need to grapple with what's at stake when we remove human beings and human judgment from something as fundamental as the rule of law and the administration of justice in the United States. So from parking tickets to a transformation of the rule of law, thinking both about the benefits of new technologies, but also their costs. Now bringing us fast forward to the present moment, we know that we live in a moment of large language models. The expression of large language models in most of your lives is ChatGPT or Claude, or other large language models that have probably been a great deal of fun for you to play with to see what these powerful tools enable us to do by virtue of learning from the mass of historical data and knowledge to predict the next word, which is effectively what they do, and to generate letters and memos and summaries of literature and sonnets that you want to provide as a holiday card to someone. It's an absolutely extraordinary technology that's been developed, and it's a preview of both advances in artificial intelligence and what some call artificial general intelligence as the next technological frontier. Now this is a new technology that gets rolled out like so many others. First, it's rolled out in a kind of experimental way. Let's see how people interact with it. OpenAI, when they rolled out ChatGPT, didn't expect it to take off like wildfire. They really were positioning themselves to kind of learn about its use cases and some of the risks that it posed to society. What happened in fact was that it generated massive enthusiasm overnight. And in an effort to take advantage of that enthusiasm, OpenAI scaled up its compute capacity to deliver this product that people were so enthusiastic about to large numbers of people almost immediately. What happens when such a technology is rolled out to the world almost overnight? Well, I'm a parent of two teenage boys, and so one of the first things that happens is that Silicon Valley rushes to commercialize these technological advances and tries to solve for pain points that we know are important, especially for young people. Like, do I want to read Shakespeare tonight and write that term paper for tomorrow? Because if one can offer flawless grammar and effortless writing at a moment's notice, all of a sudden this really difficult thing that all of us had to struggle with in school, the ability to read and interpret, to express our thoughts, to communicate them in compelling ways, all of a sudden is something that can be solved for with a new technology. Overnight, this becomes a challenge for hundreds of thousands and millions of teachers who, without any advance warning, are now teaching in a classroom being transformed by technology, without free time, without additional resources, without that same innovation potential being focused on how to help people use this new technology in responsible ways. Instead, they have to figure out how to do it in the classroom in real time at that moment.
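A production large language model predicts the next token with a neural network trained over enormous corpora; as a purely illustrative stand-in for the "predict the next word" idea described above, here is a toy bigram counter in Python, with all data invented:

```python
from collections import Counter, defaultdict

# A toy corpus standing in for the "mass of historical data" a real model is trained on.
corpus = "the model predicts the next word and the next word follows the last word".split()

# Count which word follows which: a bigram table, the simplest possible next-word predictor.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word: str) -> str:
    """Return the continuation seen most often after `word` in the corpus."""
    if word not in following:
        return "<unknown>"
    return following[word].most_common(1)[0][0]

print(predict_next("the"))  # prints "next", the most common word after "the" in this toy corpus
```

Real models replace the frequency table with billions of learned parameters and condition on a long context rather than a single word, but the basic objective, guess the most likely continuation, is the same.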
Now we may see that ChatGPT and other large language models become the calculator of the future, but if they do, we need to do so in a responsible way that helps us adapt and evolve our educational infrastructure to take advantage of these new technologies and to realize their benefits. That's a micro example of a consequence of this new technology. At the more macro level, if you're spending any time in Silicon Valley, you've probably heard the expression p(doom). And if you remember back to your economics or statistics class, P in this case means probability. This is the probability of doom. So one set of consequences of large language models is the kind we see in our educational environment and in our classrooms. But at a much more macro level, those who are behind these technologies are interested in far more significant societal consequences, which go by the shorthand doom, right? This is the existential risk that these new technologies might pose to society, the kind of runaway potential, separate from human oversight and accountability, that these new technologies, because of their capability, may simply outpace the ability of human beings to control them. So it's then no surprise, given the power that's being unleashed, that someone like Sam Altman, the CEO of OpenAI, will show up at Congress, will show up in multiple capitals around the world and say, we need to guard against the potential risks of these new technologies. And this is the moment at which we find ourselves, where governments are being asked to step up in important ways, to think with industry about the appropriate regulatory guardrails. And where are those regulatory discussions happening? Well, the bottom right hand picture is Washington, DC. Partisan gridlock has really stood in the way for decades of meaningful regulation of the tech industry in Washington. While most governments around the world have already adopted some form of national privacy law to deal with the amount of personal information that large technology companies now have about us as individuals, the United States is one of the only major countries that doesn't have a national privacy law. So Washington DC simply hasn't been an important regulatory capital. The important regulatory capitals have really been Brussels in the lower left, Beijing in the upper right, and Sacramento, California in the upper left, right? These are the environments in which we see most of the proactive forms of thinking about regulatory oversight unfolding. What can we expect of this next administration, before I turn back to the historical origins of this moment? I think it's hard to say. On the one hand, the vice president, JD Vance, has been an enthusiast for raising and addressing concerns about the concentration of economic power in a small number of technology companies. In fact, he's been an enthusiastic fan of Lina Khan, one of President Biden's chief antitrust enforcers. On the other hand, there's tremendous enthusiasm for the Trump administration that comes from a set of Silicon Valley venture capitalists. One of those venture capitalists is Marc Andreessen. Marc Andreessen is the author of a manifesto, which I'll speak about in a moment, that communicates enthusiasm about the potential of unfettered innovation to advance our society and to improve the human condition. Where we end up in this conversation is very hard to say at the present moment.
The Trump administration appointed a new AI czar recently, but again, it's hard to know, between appointments at the White House, appointments at the key regulatory agencies and appointments at the Department of Justice, how it is that the Trump administration will pursue its technology policies going forward. So let me turn from the present moment really to the book, and the book offers a story with historical perspective about how we find ourselves in this present moment, benefiting in extraordinary ways from technological advances that have changed how we work, how we live, how we relate to one another, but also grappling with a set of social consequences that from many people's perspectives are so damaging that they demand a response and potentially government action. To think about this moment, you really need to begin in some sense with the founding optimism and disruptive potential of the computer science age. This is an excerpt of the Techno-Optimist Manifesto. It was written by the venture capitalist Marc Andreessen. He was a founder of Netscape earlier in his career, one of the real pioneers of the World Wide Web and the internet moment. And he felt a couple years ago the need to re-express in a public way what it is that the potential of technological change offers to all of us in society. You can see in his language, our civilization was built on technology. Our civilization is built on technology. We can advance to a far superior way of living and of being. It's time to raise the technology flag, it's time to be techno optimists. The view here is that scientific advance, paired with the power of venture capital in the market, is what enables these step change improvements in the human condition. And when we raise concerns about the potential social harms, when we think about constraints on this innovation potential, from this perspective, those stand in the way of the transformations of the human condition that technology makes possible. Of course, this is one view, it's one view in an active conversation about how to handle technological change. Another view, and a view that we articulate in the book, is that the technological changes that make possible these tremendous improvements in our lives are like so many other market activities: they solve for, you know, a specific problem. They generate products that we consume, but they may generate what economists call negative externalities, right? Ways in which the actions of market driven and profit motivated firms produce benefits, but also generate harms that often you need to solve for in important ways through some sort of government oversight or regulation. The classical example of an externality, of course, is pollution. A manufacturing process powers all of the things that we consume, but it may in its process generate byproducts, byproducts in water, byproducts in the air. And ultimately companies don't price those negative externalities into the price of what you consume. And as a result, you need to realign their incentives, and you can do so through taxes, you can do so through regulation and oversight, to make sure that we benefit from those products, but we also breathe clean air and drink clean water. That dynamic of market driven innovation and change, but also a set of negative externalities, is not unique to the manufacturing industry. It's not unique to any industry at all. And in fact, it's part and parcel of this moment of computer science driven technological change as well.
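The pollution example maps onto textbook externality arithmetic. A minimal sketch, with invented numbers, of why an unpriced externality misaligns a firm's incentives and how a corrective tax realigns them:

```python
# Invented numbers, purely to illustrate the incentive problem described above.
private_cost_per_unit = 10.0   # what the firm pays to produce one unit
external_harm_per_unit = 4.0   # pollution damage borne by society, not priced in
market_price = 12.0            # what consumers pay per unit

# Without intervention the firm profits, even though each unit destroys value overall.
firm_profit = market_price - private_cost_per_unit                                  # 2.0, so the firm keeps producing
net_social_value = market_price - (private_cost_per_unit + external_harm_per_unit)  # -2.0, so society loses

# A corrective (Pigouvian) tax equal to the external harm makes the firm face the full social cost.
tax_per_unit = external_harm_per_unit
firm_profit_with_tax = market_price - private_cost_per_unit - tax_per_unit          # -2.0, so the firm stops

print(firm_profit, net_social_value, firm_profit_with_tax)
```

Taxes are only one instrument; regulation and oversight, as the remarks note, are other ways of forcing the harm back into the decision.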
We have a set of technologists designing products, leading companies, and fueling these companies with venture capital, who wanna optimize technology's benefits, including for their own commercial returns. And what we would expect is for regulators, and regulator is just another word for our political institutions, our democratic institutions, to play a role in minimizing technology's negative externalities. But the problem of the current moment is that we really haven't built the muscle memory to think about the appropriate development and use of guardrails around new technologies, that we're in a moment where for more than two decades we've been optimizing for technology's benefits without, in a meaningful way, developing regulatory guardrails to address technology's harms. So why has that happened? And the book argues that that has happened really for three key reasons. The first reason is that technology companies and those who oversee them and finance them largely come from the field of engineering, and the field of engineering and computer science is built around a mindset that we call the optimization mindset. In its mathematical representations, in its algorithmic representations, in the machine learning models that are built, optimization means something very specific. That you're gonna choose one end goal that you're solving for, and you're gonna design the most clean and efficient strategy for producing that end goal. And with the compute power and the innovation that we have in our own technologies, we can solve for these end goals more efficiently than at any point in human history. But the optimization mindset introduces a set of challenges. On the one hand, you might be solving for something efficiently that might not be something very good for the world as a whole. We call that the problem of bad ends, bad goals or bad objectives. We can think about the development of new weapons systems as systems that may optimize for the mass scale of human destruction, right? Nuclear weapons probably fit in that category. It solved a near term problem in World War II, but ultimately we spent generations building guardrails around it. And we have to ask ourselves, was that the right technology to build in the world? Was that the right goal to optimize for? A second challenge is the problem of finding measurable proxies for good goals. Facebook's goal in its mission statement is to connect people. That's an extraordinarily valuable and worthy goal in society. The measurable proxy for which Facebook has at times optimized is time on the Facebook platform. And there's a distance between what you're optimizing for, that measurable proxy, and the meaningful human connection that Facebook aspires to in its mission statement. But perhaps the most consequential tension that comes up with the optimization mindset is the problem of multiple and conflicting valuable goals. So take a technology like Signal or WhatsApp, which many of you probably have on your phone. It solves in its design for privacy. Privacy is a hugely important value in society. But if we only solve for privacy, our ability to solve for other values that we care about, say protecting children from harm, protecting society from coordinated criminal or terrorist activity, these are values that we're trading off when such technologies get introduced into the world.
And that may be the right trade off to make, but it's not clear that that trade off should be made in the boardroom of a technology company versus in a broader conversation in our politics about what kind of society we wanna live in. So issue number one that gets us into the current moment is the problems of the optimization mindset. You take that mindset and then you pair it with a financing model. And that financing model is venture capital. And venture capital is an extraordinary financing model because it helped to overcome the challenges of legacy financial institutions by being able to funnel small amounts of capital to inspiring entrepreneurs and give them the flexibility to pursue sort of product market fit without having to worry so much about sort of generating revenue quickly. But the challenge of the business model for venture capital is that it depends in important ways on achieving market dominance as quickly as possible in a small number of companies that realize outsized returns for your whole portfolio. So you're gonna spread a lot of money widely to lots of different startup founders, and as soon as you find something that hits, you wanna lock in that market dominance, you wanna realize those network effects and those scale effects. You wanna make it difficult for anyone else to compete in that landscape because that's what enables the venture capital model to realize its outsize returns. You hear things like fake it until you make it. Blitzscaling is Reid Hoffman's favorite term. You wanna scale as quickly as possible. Now that's extraordinary in terms of solving for what it is that people want from these products. But if you're interested in solving for the social consequences of these products, this pace of change, which is accelerated not just by the technologies themselves, but also by a model that incentivizes scale so rapidly, puts government in a difficult position to identify and respond to these social consequences. So optimization mindset, the venture capital model that drives pressure to scale. And the third piece of the historical story is government, in particular, the government in Washington DC's explicit decision not to craft regulations to provide guardrails for this pathway to the information superhighway that was seen in the mid 1990s, but instead to craft what was called a regulatory oasis around new technologies. The idea, which began during President Clinton's administration and continued through Bush and Obama, was that we need to step away from putting any constraints on this moment of technological change. Whether on access to personal data, on market concentration, you know, on oversight and accountability for the consequences of technologies, on legal liability for user-generated content. We're not gonna put in place any constraints because we want the American economy to be able to pursue this technological frontier at the speed that will enable us to grow. And it may very much have been absolutely the right bet to make in 1996. It may have continued to be the right bet to make over many years of this period of technological change because of course the US economy has evolved, scientific progress has driven this extraordinary expansion of the US economy and outsized returns in terms of improvements in GDP and wealth. At the same time, it's part of what explains why we're in a moment now where tech's negative externalities are widespread and largely unaddressed, right?
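A moment ago the remarks drew a distinction between Facebook's stated goal (meaningful human connection) and its measurable proxy (time on the platform). Purely as an illustration, with invented post names and scores, a toy ranking loop shows how the proxy alone ends up defining the objective:

```python
# Hypothetical posts: a measurable proxy (predicted minutes of engagement) alongside the
# unmeasured value the mission statement actually names (quality of the connection).
posts = [
    {"id": "outrage_thread",    "pred_minutes": 9.0, "connection_value": 0.1},
    {"id": "old_friend_update", "pred_minutes": 2.0, "connection_value": 0.9},
    {"id": "viral_quiz",        "pred_minutes": 6.0, "connection_value": 0.2},
]

# The optimization mindset in one line: rank purely by the measurable proxy.
feed = sorted(posts, key=lambda p: p["pred_minutes"], reverse=True)

# Time on platform is maximized, but "connection_value" never enters the objective at all.
print([p["id"] for p in feed])  # prints ['outrage_thread', 'viral_quiz', 'old_friend_update']
```

Nothing in that sort key can express "meaningful human connection," which is exactly the gap between proxy and goal that the remarks describe.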
So we've benefited from technology in all of these extraordinary ways, but it is not unfamiliar to any of you that we are grappling with a media landscape in which it is hard to separate fact from fiction. In which the health of our information ecosystem, which is key to our role as civic agents in a society, and the function of our democracy, is challenged by the new landscape of content and the difficulty that we have in the absence of legacy media institutions of exercising or building any structure around what we know and understand. You think about filter bubbles and echo chambers, misinformation and disinformation, that is something that is a broader social consequence of the social media companies that have also empowered people to speak in ways that were never before possible. You hear about things like surveillance capitalism, the rise of a surveillance society, all of these free services on our telephones that are harvesting our personal data that enables us to get terrific recommendations about what we should buy for the holidays, for our partners or for our children because it has a sense of what we care about and what our partners care about. But on the other hand, all of that information that makes us personally identifiable and deeply exposed, that's in the hands of the private sector and in some countries in the hands of the public sector about which we have little knowledge and little oversight. A third externality, algorithmic bias and discrimination. So many decisions in our daily life are now made by virtue of algorithms, right? Algorithms are powerful and efficient and they may in important ways be able to make better decisions for us. In the healthcare system, in the personal lending system, in all sorts of different environments. On the other hand, we should justifiably be concerned about fairness, about discrimination, about due process, about agency and oversight. And so in our rush to embrace algorithmic decision making, what about these other values that are so important? And then finally, and perhaps one of the most talked about externalities relates to shocks to the labor market. You know, if the prior age of automation was one that largely affected people at the bottom end of the income distribution, and we think about globalization and the social consequences of globalization, we didn't do a great job of responding to those negative consequences that were highly geographically concentrated and concentrated at particular ends of the income distribution and the skills distribution. The AI age is gonna generate a parallel disruption, but it's gonna be a disruption that's far more distributed geographically and far more distributed across both blue collar and white collar jobs. That is a social externality that we are beginning to see and that will grow over time. And there's a question of who has the responsibility to address it. So where do we go from here? A few final thoughts on this and then I'll open it up to your questions. The big message of my teaching over so many years and of this book is to say that technology is not something that just happens to us. It's not like a wave that washes over us as we stand at the beach, that we're unable to shape its direction and we really need to simply accept it as it is. Because ultimately technology and technological change, it's something that happens in society, it's something that happens in political communities. 
It's something that can be shaped and influenced, and it can be shaped and influenced by people who build technologies. It can be shaped and influenced by people who govern technology companies in the boardroom. It can be shaped and influenced by people who invest in technology companies, but it also can be shaped and influenced if you have no role in technology itself but call yourself a user of technologies, or more importantly, a citizen of some political community in which technology companies operate. Now, sometimes you'll hear that the response, when you get concerned about technology's harmful effects, is to disconnect. It's to go off the grid, it's to delete Facebook, or to delete Uber. It's to simply step back from some of these platforms. All of you know how hard that is to do in practice. If you don't love Zoom, you're still gonna be invited to a Zoom meeting every single day, five times a day. So these technologies are ubiquitous in our lives. But I think the bigger point that I hope you take away from my remarks is that even if you separate yourself from an individual application or an individual device or piece of hardware, the consequences of these technologies in society are not something that you can avoid. The information ecosystem, the health of our democracy, is still shaped in important ways by Facebook, whether you have an account on Facebook or not. Which means you have a stake in basically our social response to this process of technological change. So where do we go from here? And the answer isn't simply individual action, right? It's gonna be a multifaceted strategy, but one that is more systemic. Think back to the creation of the automobile. When the automobile is rolled out, it changes our lives in extraordinary ways. It enables people to live separate from where they grew up, to live far away from where they work, to traverse long distances, almost effortlessly. It remakes the map, it remakes our economic sectors, it remakes our social life. But we also know that automobiles are dangerous. Automobiles are a weapon. Automobiles can be used irresponsibly. And so like any other technology, we build a set of guardrails around their use. We build lanes, we set speed limits, we have seat belt requirements in place. We think about lights at night, and we're in the newest moment of solving for safety with the automation of automobiles. So we think about some combination of what happens in the market and some combination of kind of regulatory guardrails that enable us to use these new technologies responsibly. How do we get there in the current moment? It's gonna depend on a whole set of things coming together. There's no one answer to this question. The first really begins with the individual, right? As I described in my teaching at Stanford, we need to be in the business of cultivating an ethic of responsibility in computer science, upheld by social sanctions, potentially legal sanctions. When we think about the maturation of different sectors, take the life sciences, you ultimately evolve these sectors in a way that you recognize the social consequences of people playing these roles. And so the other transformative technology of the moment is CRISPR, it's gene editing. But gene editing comes of age in a highly developed architecture of the life sciences, not only with internal ethical and scientific norms, but also regulatory, you know, oversight that comes and is rooted in the field of bioethics. We have nothing comparable in computer science.
We have something called the ACM Code of Ethics and Professional Conduct, high level, non-binding, really kind of an early stage effort to think about what it means to support that ethic of responsibility. Where is that ethic of responsibility really gonna be cultivated? Well, it's gonna be cultivated in part through higher education and in part through fields themselves. This is a picture of me teaching during the pandemic, this class on ethics, technology, and public policy to hundreds of students via Zoom. The work that we were doing in the classroom is about building that ethical muscle, building that understanding of how to measure social consequences, building that appreciation for the role of our politics in shaping the process of technological change. This was at Stanford where my focus was on technologists. At the Kennedy School, I have to think about training the next generation of public leaders. Many will serve in the public sector, some will serve in the private sector. They'll be critical agents, not only in using new technologies to solve public problems in exciting and important new ways, but also people who help to write the rules and shape the guardrails that are developed. So that's the first thing. Building that kind of ethical muscle and an ethic of technology among the next generation. Second is really the issue of self-regulation, right? Not just at the level of individuals who are building things, but really thinking industry-wide about how we build an architecture that enables those who are on the technological frontier, and will always be well ahead of where our democratic institutions are, to anticipate potential harms and to try and solve for them inside the structure of companies and industries. This is an example of Microsoft's commitments as part of President Biden's work to advance safety, security, and trustworthiness in artificial intelligence. This was an early example of that kind of industry-wide coordination around AI. And I expect we'll see more of this going forward. I think you'll continue to see a tremendous amount of action around issues of market concentration. Ultimately, the question being how do we balance the extraordinary scale effects that are possible from the concentration of data and compute power in a small number of companies, while also preserving the space for smaller tech companies, for startups, to challenge those behemoths, to respond to different preferences in the market, to advance the frontiers. This is gonna be an arena of contestation going forward. And perhaps most importantly, where I want to end and what I think I speak to now in my role stepping into a new school environment and with new responsibilities, is that we need to build a set of public institutions that are capable of governing new technologies. We simply don't have the technical know-how in Washington DC, in Sacramento, in Brussels, or in Beijing. We haven't invested in the adaptation of our regulatory mechanisms so that they can be smart and experimental and adaptive over time. And the pace of technological change demands something different of governments than they've been able to do over the past few decades. DSIT is an example of the British government's efforts to reimagine its regulatory architecture. I expect you'll see this unfold in so many other countries around the world. And at the same time that each country does it on its own, we're gonna need to be thinking about global collaboration.
We're gonna need to be thinking about what are the guardrails that we need to set in place that are not country by country or company by company, but really are guardrails that are global. You can see some of this in the conversations around commitments to human oversight of the use of nuclear weapons, a commitment made by the US and China the last time the two presidents were together. And I think you can expect over time the articulation of more coordinated guardrails that raise the floor in some sense on the oversight that exists for some aspects of this extraordinary moment of technological change. With that, I want to thank you for your time and your patience in listening to me share these perspectives rooted not only in my experience as a teacher and a researcher, but sort of being a part of this moment of extraordinary development in Silicon Valley. I'm thrilled to join this community and happy to take your questions.

- [Ariadne] Thank you so much, Jeremy. We're gonna open up the session, as Jeremy said, for questions. So please turn your cameras on, which many of you already did, such good people. To ask a question, we'd like you to please use the virtual hand raising feature of Zoom and I will call on you when it's your turn to unmute yourself and to ask your question. We'd like to ask that you keep your question brief. And finally, everyone on the call would really appreciate it if you could share your Kennedy School affiliation. I'll kick things off by asking a question that was pre-submitted by Manisha Begar, who is a 2023 graduate of our mid-career MPA program. And Jeremy, Manisha would like to know, is AI technology a boon or a bane to democracy?

- So thanks, Manisha, for the question. I don't know if you're with us, but I appreciate you submitting it in advance. And I think the answer, with each particular moment of technological change, each new technology that comes, is that there are gonna be extraordinary benefits that come from it, but there are also gonna be a set of social consequences that we need to address. So AI tools have extraordinary potential to enable us to solve public problems in new and creative ways. And one of the things that most excites me in this present moment is thinking about how to use advances in technology to help people we train at the Kennedy School and our graduates, but also those that we partner with, to take advantage of data, to take advantage of these tools to make better, smarter and more effective decisions about the allocation of public dollars. That I think is a boon to democracy because part of what we confront at the current moment is a lack of faith in our democratic institutions, a concern that those democratic institutions aren't really delivering what people expect of them. At the same time, for a whole variety of reasons that I've begun to share with you, there are challenges that this AI moment represents for democracy. Some of that relates to that problem of the information ecosystem that has been transformed by the social media landscape, but also will be transformed by the ability of AI to generate large amounts of content almost effortlessly and to flood the information environment with things that one can't distinguish from high quality information sources. AI also represents this kind of significant disruption to the labor market that could have massive political and social effects and undermine further the support for democratic institutions going forward. So boon and bane, as with everything else. And what that demands of us is not ignoring the boon or ignoring the bane, but recognizing these are things that we need to hold together at the same time and think both about how we use the market and how we use the state. By that I mean our political institutions, to solve for our social goals.

- [Ariadne] Thanks so much Nick Elich, I'll call on you next. Please introduce yourself.

- Hi, it's great to meet you. I was an MPA/ID and a Stanford MBA and I'm an exited startup founder. Really appreciate the comments. Thought it was all, you know, really spot on. You touched a lot on AI and I was just curious if you could touch on crypto as well, just in terms of having, you know, what seem like viable alternatives to fiat currency that are actually being supported by the federal government rather than cracked down on. Seems like a pretty unique moment in history.

- Yeah, thanks so much, Nick. We're obviously at a moment where there's tremendous interest and enthusiasm in digital currency. Lots of experiments that are unfolding, obviously values that are going through the roof. We have in the backdrop of our mind some of the challenges that we've confronted in this landscape. You know, in recent years, the collapse of FTX, the kind of vulnerability that has been exposed. I think the lens that I've brought to this, you know, and I don't have the same depth of expertise that I do in other areas, but the lens that I brought to this is to make sure that as we explore what these new technologies offer us, and that's being done, again, as you said, not just in the private sector, but also by federal reserve banks, right, by all sorts of, you know, people who are engaged in the financial architecture, that we remember what it is that we have tried to solve for with financial regulation, how we think about systemic risk, right, to the financial system. Because ultimately as we experiment with these new technologies, new currencies, things that erode the power of centralizing institutions, we may be realizing benefits from that. But sometimes the architectures that we've set in place are architectures that we set in place because we were concerned about systemic risk, because we were concerned about the misuse of financial institutions to pursue ends, you know, that ultimately were ends that our democratic institutions decided were not valid, like, you know, financing terrorist organizations or criminal organizations. And so we're gonna need to find some meeting of the minds as we pursue this kind of innovation frontier in a way that solves for those systemic risks as well. And that has to be an important part of the conversation between the crypto innovators, right? And those who occupy positions of authority in central banks, you know, ministries of finance and treasury. And I think that's really part of the productive conversation that is happening now around digital financial instruments and the like.

- Thank you, Gary. Gary Grim from the Carr Center's Advisory Board.

- Hi, I'm on the Carr Center Advisory Board and I also support the Shorenstein Center's Journalist's Resource. Can the ethical guardrails that you talked about in your presentation be implemented and perhaps coerced by policy when the real motivation for most industries is profit? I'm reminded of Musk's $56 billion compensation package, which he doesn't really need, but he didn't say, oh, I'll just turn that back and use that for ethical concerns rather than take it all myself. Can you comment on that please?

- Thanks, Gary. I think the market has been one of the most extraordinary generators of improvements in the human condition that we've seen in history, and the models that have developed over time figure out how we allocate appropriate responsibilities between our public institutions and the private sector to produce products, to generate technologies, to enable us to realize what it is that we aspire to consume, and ways in which we want to improve our health and our wellbeing. That has been enabled by a powerful and influential private sector, and financial return is what enables those capital investments, those R&D investments, that have positioned us for this extraordinary improvement in the human condition that all of us benefit from. What I hope you take from my comments is that I think the response to the challenges that new technologies pose isn't to stop the advance of technology. In fact, I think that would be a massive mistake. It's to recognize in an affirmative way that with each new moment, and this moment is AI, the last moment was social media, the moment before that was the internet, mobile phones and their addictive potential came somewhere in between, each of these new moments presents opportunities and risks, and the response is gonna be multifaceted. I want people inside companies, and I know engineers and computer scientists care about this too, to ask themselves questions, not just about how to design the best product, but what might be the consequences of that product for the society in which we live? Do I think the profit motive may then overrule making design choices that solve for our social goals? Sometimes it will, sometimes it won't. But those who are building technologies are gonna know far earlier than someone sitting in Washington DC or Brussels or Beijing what the potential challenges are that we might confront. Likewise, industry-wide self-regulation, right? Thinking both about the reputation of companies, the ability of industry to coordinate around the challenges that they see, to share information with one another. So Gary, absolutely, government will play a role. You used the word coercive. What I think of is just basically, you know, the market has a basket of guardrails around it that enable us to both benefit from what happens inside companies, but solve for social goals. That's ultimately what we will need in the technology industry as well. We're already living that. That's what privacy laws were about. We're in a moment now of restrictions on social media for people under the age of 18 in Australia, at the state level, in Beijing and other places. This is just an exercise of politics. It's a set of people saying, we love our telephones, we don't wanna lose our telephones, but we're concerned about our 16 year olds, right? We're concerned about what content they're being exposed to. Sometimes that will be addressed by companies, you know, building their own community standards and guidelines. Sometimes it will be addressed through politics, right? And politics is what produces regulatory guardrails. And I think part of the message of the book, and it may now feel like something that is more broadly debated and familiar, but when we wrote this in 2021, sort of making an affirmative case that this is a moment to actively think about the role of government in shaping technological change was not a very popular view. I think the externalities are sufficiently widespread now that the question is really which regulatory guardrails, which regulatory settings.
And I think that's a conversation that all of us need to be a part of.

- Thank you, I'm gonna call on Nico Hanky and I see Ellen Divik there as well, both MPA 1990.

- Thank you for introducing us, great to see you all. Our question is really, working in energy and healthcare with AI a lot, which industries do you find most inspiring in terms of having begun to develop helpful regulatory capacity? So, probably most things AI are best regulated on an industry level. And we are seeing, for example, things like FDA rules for digital apps or German rules for machine learning. We are seeing financial regulations, we are seeing progress in uncovering crime with crypto just recently. So I'm just wondering which industry sectors or verticals do you think are the most interesting ones we can learn from, having already had early successes in developing regulatory capacity and processes for this.

- Thanks for the question, both of you, and good to see you again. I mean, I think my answer would be we are at such an experimental moment, I just don't know yet, right? I just don't know whether the steps that are being taken are likely to lead to meaningful changes and address some of the social consequences of what's being built. You know, some of the earliest experiments in self-regulation, for example, were in the content regulation landscape, you know, the Facebook oversight board. I had many colleagues who were involved in both the design and implementation of the Facebook oversight board. The idea was we need something beyond community standards that are created by companies. We need to bring in expertise, we need to render that expertise transparent. We need to enable people to submit ideas and to challenge decisions that are made by the company. In some sense, it was a brilliant experiment. It was to say government is not gonna act, and in fact in the United States is constrained from acting by virtue of commitments to free speech. And that may be the right thing, that we need mechanisms that are outside of government, but maybe also outside of companies. But the Facebook oversight board has a tremendous challenge, which is that the volume of content moderation that happens is vastly larger than anything that a Supreme Court-like mechanism could actually carry out. And so I give you that example, or the example of different corporate governance structures around AI: OpenAI's corporate governance structure that basically fell apart, Anthropic's corporate governance structure. These are all things that are happening in the self-regulatory landscape before we even get to what government does and doesn't do. But they're examples of experiments, and lots of experiments are being run in different industries. One of the great things that places like the Kennedy School can do is be an environment in which all of these experiments are tabulated and understood and where the effects are measured. And I really hope that's something that we can see as a future strength of Harvard Kennedy School.

- [Nico] Hmm, thank you.

- [Ariadne] Thank you for that. I wanna invite anyone who has a question, we'd love to have you ask it before we let Jeremy go. Jerry, wanna unmute yourself, please? And then we'll have Wendy. Thank you so much.

- Sure, happy to be here. Thank you very much. I'm Jerry Mechling, I taught these courses at the Kennedy School from the 1980s to 2016, and was very interested to see the evolution from people applying what technology could do inside their frame of reference, which tended to be, take a current process and try to automate that. It was not at the level of should the entire institution change. And in fact, many people working in the public sector on these problems said, our biggest problem is to avoid projects that visibly fail. So we worked very much with the technology community that knows something, and very much tried to frame these projects as if what happens to those who see themselves as losers from this shift, or threatened by some of this shift, was gonna be somebody else's problem downstream. And it's pretty clear that we're downstream now. If you do take a look at the broad history from the Civil War time in the US, there were great technological innovations that improved productivity in society, but had a lot of downside risks, as you're mentioning. And we got to the Gilded Age and a point where the inequities in society became very difficult. And if you take a look at the great progress that was made through the 20th century following the Second World War, when I and a number of other people were very impressed by what Kennedy said at his inaugural, I was graduating from high school at that time. And to think that the important problems were environmental, global peace, the kind of things the public sector could do. So I'm sorry to have taken all the time setting this up, but it does seem to me that we're at a time of enormous potential. But the great threat, and one of the great benefits of a threat is people will pay attention who were not paying attention before, and the possibility of things, I'm quite excited and hoping that the school and the school's broader community, and, in this case, by the school, I sort of mean the entire Harvard University, not just the Kennedy School of Government. Because in many ways the negotiations and the education to reach beyond those in individual institutions or even in industries, that is, the focus of change has got to be broader than the individual institution, to industries, to jurisdictions, and to the fundamental problems of government, which tend to be, and I will shut up after this.

- Let's get Wendy's question in 'cause we're almost at time.

- Okay, more production.

- Thanks, Jerry, I could not agree more. And putting this in a broader historical context is imperative for all of us. This is not our first time at the rodeo in terms of significant and disruptive technological change. And it's important to learn lessons from what we've done right historically, and where maybe we've had some blind spots. This is different in terms of the technologies, but not different in terms of the potential social consequences. Final word to Wendy. Wendy can't unmute.

- [Ariadne] We're trying to do that, I'm sorry, Wendy. Well, let me get you to...

- Okay, there we are. Just quickly, because I know we're at time, I mean there's a whole different webinar possible to talk about the profits and taxes flowing from these huge, huge, huge industries. And I just wanna signal that from Canada, we've had our own face-offs with Facebook, which have not been pleasant, especially for those of us who are worried about media content. And that's just been a big difficulty. And I know so much of your talk is focused on the US experience, but around the world we've also had these issues, of course.

- Yeah, you know, I'll just double down on something I said earlier, you know, in a slide that I shared, which is that regulatory power is shifting, just to underscore the point that the challenges of the social consequences of technology are not just gonna be solved for in the United States or Brussels. Each political community is gonna make decisions about what it is willing to accept and what it isn't willing to accept. That represents a real challenge for technology companies over time, 'cause they in many ways would prefer the clarity of a regulatory structure that is broadly embraced. But in a world where Washington is constrained, by partisan gridlock in particular, I think what we can expect is lots of regulatory experiments, some in different countries, some at the state level, some at the local level, efforts to think about international coordination. And that's the kind of moment we're in for the next five to 10 years. A ton of regulatory experimentation. That's important because that is that awakening that Jerry described, a recognition that we're gonna need something from our politics and not just from our industry. It's also a great moment for a place like the Kennedy School because that's our business, right? To basically think about the role of our public institutions, how we learn from these regulatory experiments and how we position ourselves to realize these extraordinary benefits while addressing some of the harms that are so apparent.

- Thank you so much. Thank you everybody for joining this call, for asking such great questions, and a special thank you to Dean Weinstein for sharing his personal expertise. We have exciting calls planned for next semester. I look forward to seeing everyone back and I wish you a very happy and healthy New Year. Thank you so much, bye-bye now.