
ARTIFICIAL INTELLIGENCE seemed like a problem for tomorrow—until it wasn’t. The arrival of OpenAI’s ChatGPT, an artificial intelligence chatbot, in late 2022 captured the popular imagination. But while at first much discussion seemed to be about how students might use ChatGPT to do their homework for them, the more serious implications soon became apparent, and thoughts quickly turned to the idea that artificial intelligence was evolving at a pace few had anticipated. Of course, we’ve been living with algorithms and machine learning and big data for some time—from ads that appear on our web browsers specifically tailored to our interests, to virtual assistants like Siri and Alexa that suggest answers and solutions to our daily problems, to more-serious applications such as chatbots that help us navigate bureaucracies and algorithms that estimate flight risk (and suggest bail amounts) for people charged with criminal offenses. But suddenly AI appears more immediate, and perhaps less controllable, than before. At Harvard Kennedy School, AI within the sphere of public policy is being studied in myriad ways. Its perils and possibilities include the ethics of using AI and the treatment of sentient machines, the inbuilt prejudices of algorithms, how AI might vastly improve human decision-making, and the huge changes this new technology will bring to bear on the labor force and the economy. HKS experts—technologists and philosophers, economists and ethicists—tell us what they think about the AI revolution.

Here, HKS faculty members and other experts discuss how to harness, and how to rein in, this burgeoning technology:

Designer note: All the art for this article has been created by Adobe Firefly (Beta), a generative AI platform trained only on images owned by Adobe or in the public domain. The use of these images does not indicate a new approach to editorial illustration in HKS Magazine. All captions reflect the prompts used to create these images. The banner image above was generated with: “Illustration combining artificial intelligence and democracy, present day, vibrant colors.”

 

Can AI make the justice system fairer?

Sharad Goel


OVER THE PAST DECADE, my collaborators and I have worked to reform areas of the criminal justice system, designing algorithms and building tools that mitigate human biases and reduce incarceration. Our work demonstrates the power of technology to bring about more-equitable outcomes, but it also exposes the limits of technological solutions to complex policy problems.

When someone is arrested for a crime, police officers write a report detailing the circumstances of the incident. Prosecutors decide whether to charge the individual on the basis of that narrative. If they do bring charges, that triggers a labyrinthine legal process that often ends with a steep fine or incarceration. If they don’t, the individual is typically released without sanctions. The decision to bring charges is one of the most consequential in the criminal justice system, with life-altering impacts on arrested individuals, their families, and their communities.

In many states, prosecutors have nearly unlimited discretion when deciding whom to charge. That creates worry that implicit or explicit racial bias may taint the process. To guard against that possibility, we built a “blind charging” algorithm to automatically mask race-related information in police reports—not only explicit mentions of race but also implicit markers, such as hair and eye color, names, and locations. We tested the effectiveness of our tool by building a machine learning algorithm that tries to guess an individual’s race from the masked reports. We also asked a human expert to do the same thing. Our blind-charging algorithm stumped both the machine and the human, giving us confidence that we had successfully stripped racial cues from the narratives.
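The masking idea can be sketched in a few lines of code. The toy below is not the lab’s actual tool: the cue lists, placeholders, and example sentence are invented for illustration, and a production system would rely on trained language models rather than hand-written word lists.

```python
import re

# Toy illustration of blind-charging-style redaction (not the actual tool):
# replace explicit and implicit racial cues in a police narrative with
# neutral placeholders.

# Hypothetical cue lists, for demonstration only.
IMPLICIT_CUE_PATTERNS = {
    r"\b(?:brown|blue|green|hazel)\s+eyes\b": "[EYE COLOR]",
    r"\b(?:black|brown|blond|blonde|red)\s+hair\b": "[HAIR COLOR]",
}
EXPLICIT_RACE_TERMS = ["white", "black", "hispanic", "latino", "asian"]


def mask_narrative(text: str) -> str:
    """Strip race-related cues from a narrative, leaving placeholders."""
    masked = text
    # Mask implicit markers first so "black hair" becomes [HAIR COLOR],
    # not "[RACE] hair".
    for pattern, placeholder in IMPLICIT_CUE_PATTERNS.items():
        masked = re.sub(pattern, placeholder, masked, flags=re.IGNORECASE)
    for term in EXPLICIT_RACE_TERMS:
        masked = re.sub(rf"\b{term}\b", "[RACE]", masked, flags=re.IGNORECASE)
    # Names and locations would be handled by a named-entity tagger in
    # practice; this sketch assumes they are redacted separately.
    return masked


print(mask_narrative("Officers contacted a white male with brown eyes and black hair."))
# -> Officers contacted a [RACE] male with [EYE COLOR] and [HAIR COLOR].
```

The evaluation described above then amounts to training a classifier to predict race from the masked narratives and checking that it performs no better than chance.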

In some cases, prosecutors need to review physical evidence, such as video footage, before reaching a charging decision, making it impossible to conceal race. To accommodate such situations, we proposed a two-step case-review process. In the first step, prosecutors reach initial charging decisions on the basis of the blinded police narratives. In the second step, they see the complete, unredacted reports, including any physical evidence, and make a final charging decision. But if they change their minds after seeing the unredacted reports, they must explain why. That helps reduce racial bias while ensuring that prosecutors have all the information they need to make informed decisions. Our research led the California state legislature to require that by 2025, prosecutors across the state mask police narratives when making charging decisions—either using our blind-charging tool or doing so manually.

Someone who is ultimately charged with an offense must usually appear in court multiple times as the case proceeds through the legal system. Failing to appear at even one of those court hearings often results in the judge’s issuing an arrest warrant. But people rarely miss court because they’re actively trying to skirt the law. More often they have a justifiable reason, such as lack of childcare or transportation, or confusion about when and where to show up.

To increase appearance rates, we worked with the Santa Clara County Public Defender Office to build a tool that automatically texts reminders to clients with upcoming court dates. We measured the tool’s effectiveness by texting a random subset of clients and comparing the outcome with that for clients who didn’t receive reminders. We found that the court date reminders reduced arrest warrants by more than 20%. At a cost of less than a dollar per case, these reminders are a promising, cost-effective strategy for improving appearance rates and mitigating the consequences of missing court.
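The logic of that evaluation can be summarized in a short sketch. The code below is not the study’s code, and the numbers in the toy example are invented purely to show the arithmetic behind a randomized comparison of this kind.

```python
from dataclasses import dataclass
from typing import List

# Minimal sketch of a randomized evaluation: clients are randomly assigned
# to receive a text reminder, and we compare warrant rates across groups.

@dataclass
class Case:
    got_reminder: bool    # randomly assigned
    warrant_issued: bool  # outcome: a bench warrant for missing court


def warrant_rate(cases: List[Case], got_reminder: bool) -> float:
    group = [c for c in cases if c.got_reminder == got_reminder]
    return sum(c.warrant_issued for c in group) / len(group)


def relative_reduction(cases: List[Case]) -> float:
    control = warrant_rate(cases, got_reminder=False)
    treated = warrant_rate(cases, got_reminder=True)
    return (control - treated) / control  # e.g., 0.20 means a 20% reduction


# Toy example with made-up counts, purely to show the arithmetic:
toy = (
    [Case(got_reminder=True, warrant_issued=False)] * 80
    + [Case(got_reminder=True, warrant_issued=True)] * 20
    + [Case(got_reminder=False, warrant_issued=False)] * 75
    + [Case(got_reminder=False, warrant_issued=True)] * 25
)
print(f"Relative reduction in warrants: {relative_reduction(toy):.0%}")
```

Because assignment to the reminder group is random, a gap in warrant rates between the two groups can reasonably be attributed to the reminders themselves.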

Advances in computing are ushering in new opportunities to reform the criminal justice system. But we must recognize the limitations of technology to solve deeper policy problems. Our blind-charging algorithm can mitigate racial bias in prosecutorial decisions, but it can’t rectify unjust laws that disproportionately affect communities of color. Text-message reminders boost court appearance rates, but they don’t resolve the underlying social and financial obstacles many people face when trying to attend court. Tackling these larger, systemic problems will require collaboration among policymakers, activists, technologists, and others committed to fostering a more equitable future.

Sharad Goel, a professor of public policy, looks at public policy through the lens of computer science.

Image prompt: Woodblock, anatomically correct human heart, microchips, circuitry
 

Can AI be a better doctor?

Soroush Saghafian
 

LIKE MANY OTHER technological advances, the tools being developed around artificial intelligence, algorithms, and data science can be used in positive or negative ways, and it is natural to fear their potential misuse. However, the possibilities for solving societal problems are endless, and the potential impact is virtually limitless.

Policy decisions are naturally complex and extremely challenging. The AI and machine learning branches of analytics science are great tools because they allow us to move away from opinion-based solutions and instead adopt data-driven strategies. To harness them responsibly, though, we must use them in specific ways. For example, we need to ensure that they are not trained solely on data generated by human decision-makers, who are by nature biased toward their own views.

In the Public Impact Analytics Science Lab (PIAS-Lab), which I founded and direct, we are collaborating with a variety of organizations to solve problems that can have public impact. We take a problem-driven approach, meaning that we make use of the best analytics science methods to most effectively address each unique problem. These tools come from various branches of analytics science, including operations research, machine learning and big data, decision science, statistics, and artificial intelligence, among others.

We have been using these tools to help hospitals, start-ups, public agencies in the United States and beyond, and private firms solve problems that have public impact. The tools and related collaborations with these entities have enabled us to find the best ways to save lives, improve the quality of care delivered to patients, decrease health care expenditures, reduce existing inequalities, design superior policies, and make better use of technological advances such as mobile health, smart devices, and telemedicine.

The many challenges can be illustrated by our attempts to make a meaningful impact in one of the most complex sectors—health care. That sector involves a variety of stakeholders, especially in the United States, where health care is extremely decentralized yet highly regulated. Analytics-based solutions that can help in one part of this sector might cause harm in other parts, which makes finding globally optimal solutions extremely difficult. Obtaining effective analytics-based solutions also requires overcoming various challenges related to data collection and data use. Then there are further challenges in implementation. At PIAS-Lab, we can design advanced machine learning and AI algorithms that perform exceptionally well. But if they are not put into practice, or if the recommendations they provide are not followed, they will have no tangible impact.

In some of our recent experiments, for example, we found that the algorithms we had designed outperformed expert physicians at a leading U.S. hospital. But when we provided physicians with our algorithmic-based recommendations, they gave the advice little weight and ignored it when treating patients, although they knew the algorithm had most likely outperformed them. So we studied ways of removing this obstacle. We found that combining human intuition with the algorithmic recommendations—what is called a “centaur model” in analytics and decision-making—not only made it more likely that the physicians would give more weight to the algorithms’ advice, but also resulted in recommendations that were superior to both the best algorithms and the human experts.
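One way to picture a centaur is as a simple blend of the two judgments. The sketch below is only illustrative: it assumes both the clinician and the algorithm express their assessments as risk scores between 0 and 1, and that the blending weight would be tuned on past cases. The lab’s actual models are considerably richer.

```python
# Minimal sketch of a "centaur" combination of human and algorithmic
# judgment. Assumes both are expressed as risk scores in [0, 1]; the
# weight would be chosen by validating against past outcomes.

def centaur_score(physician_score: float,
                  algorithm_score: float,
                  algorithm_weight: float = 0.5) -> float:
    """Blend a physician's assessment with an algorithm's recommendation."""
    if not 0.0 <= algorithm_weight <= 1.0:
        raise ValueError("algorithm_weight must lie in [0, 1]")
    return (algorithm_weight * algorithm_score
            + (1.0 - algorithm_weight) * physician_score)


# Example: a clinician judges a patient lower-risk than the model does.
print(centaur_score(physician_score=0.3, algorithm_score=0.7, algorithm_weight=0.6))
# -> 0.54
```

Even a crude blend like this captures the spirit of the finding above: the algorithm’s advice enters every decision, but the human expert retains a voice, which makes that advice easier to accept.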

The potential for centaurs is endless, and we expect that most data-driven organizations will take advantage of them in the near future. For example, a department of human services could use algorithms to help predict which child-welfare cases were likely to lead to child fatalities and raise a red flag for those cases. Human experts could review those cases and share the results with frontline staffers, who could choose remedies designed to lower risk and improve outcomes. Other examples might include systems for spotting anomalies and preventing cyberattacks, improving design components in manufacturing systems, and helping police officers balance their workloads and better ensure public safety. Even the latest advancements in large language models (LLMs) such as ChatGPT are inherently centaur-based, because they benefit from human feedback in their training phase.

It is appropriate to offer this new technology with humility and caution. But it would be shortsighted not to explore its virtually unlimited potential for improving public decision-making.

Soroush Saghafian, associate professor of public policy, works on applying the science of data analytics to solving societal problems.

Image prompt: Robots; democracy; crowd; vibrant colors; nostalgic
 

Will AI hack our democracy?

Bruce Schneier
 

BACK IN 2021, I wrote an essay titled “The Coming AI Hackers,” about how AI would hack our political, economic, and social systems. That ended up being a theme of my latest book, A Hacker’s Mind, and is something I have continued to think and write about.

I believe that AI will hack public policy in a way unlike anything that’s come before. It will change the speed, scale, scope, and sophistication of hacking, which in turn will change so many things that we can’t even imagine how it will all shake out. At a minimum, everything about public policy—how it is crafted, how it is implemented, what effects it has on individuals—will change in ways we cannot foresee.

But let me back up. “Hack” is a techie term for a trick that subverts computer code. In my book, I generalize it to cover all sorts of rules. The tax code, for example, isn’t computer code, but it is nevertheless code of a sort. It’s a set of rules that determine how much tax you have to pay. Those rules have mistakes, or bugs, that lead to vulnerabilities that lead to exploitation. We call them loopholes and tax-avoidance strategies, but it’s the same idea.

A hack follows the rules but subverts their intent. In public policy, think of gerrymandering, filibusters, tricks to get around campaign finance rules, must-pass legislation, and everything gig-economy companies do to skirt labor laws. None of this is new, and finding these loopholes is a human creative endeavor.

AI has the potential to change that. Someday soon, it will be able to optimize lobbying strategy, finding hidden connections between legislators and constituents. It will create “micro-legislation”: tiny units of law that surreptitiously benefit one person or group without being obvious about it. It will find new tax loopholes, possibly utilizing complicated strategies involving multiple countries. It will be able to do all these things and more, faster than any human possibly could, and deploy them at a scale and scope that no human could match. The world isn’t ready for hundreds, or thousands, of new tax loopholes, or for new tricks to make money in the financial markets. At computer speed, scale, scope, and sophistication, such hacks could overwhelm our existing systems of governance.

But not all is bleak. The same AI that could exploit these loopholes could also close them. And the same reforms that make these systems fairer for humans will also make them less exploitable by hackers—whether human or AI. But we need governance systems to be more agile. This is a bigger issue than hacking by AI, of course; it’s about governing our fast-moving technological world. The real problem of AI is less what the technology can do and more who it is doing it for. The AI that will figure out new tax loopholes, or new hedge-fund strategies, isn’t in a university and working for the good of humanity. It’s in the basement of a multinational financial corporation and working for its clients. Right now, AI technology makes the powerful even more powerful. And that’s something public policy can address today.

Bruce Schneier, adjunct lecturer in public policy, is a security technologist.

Image prompt: Human rights; crowd; data collection; charts; graphs; global
 

Will AI change the way we think about human rights?

Mathias Risse

SOME 10 TO 15 YEARS AGO, many observers thought that China’s increasing wealth (and the accompanying rise of the middle class) would lead to more democratization and then to improvements in the human rights situation. Instead, the Chinese Communist Party succeeded in a large-scale effort to upgrade its governance system to new technological heights, building on a stupefying amount of data collection and the kind of data mining that ever more sophisticated AI algorithms make possible. And although during this same period the private sector in democratic countries engaged in the same activities, there those activities led to the creation of what Shoshana Zuboff calls “surveillance capitalism” rather than to large-scale efforts to use technology to upgrade democratic governance and adjust human rights to the new realities. Such an upgrade is vital. At the very least, it will have to include two things.

First, epistemic rights—rights that protect us as both knowers and knowns—must become more central to our human rights discourse. To be protected as knowers means entitlement to share in the wealth of information that modern societies generate, and that has increased enormously through the ever-growing penetration of our lives by digital technologies. To be protected as knowns means to have a reasonable level of control over our personal data. But it also means to have a voice in how the accumulated data are used to change our societies. To some extent, epistemic rights are already part of the existing set of human rights, and have been since the Universal Declaration of Human Rights was passed, in 1948. But back then the possibilities of knowing and of being known were much more limited than they have become through the waves of innovation over the past 15 years or so. So talk about epistemic rights should be much more integrated into our understanding of what human rights are all about. A solid protection of such rights in existing coding is also essential to secure protection of a distinctively human life in the event of a takeover by artificial intelligence (which many experts think is plausible well within this century).

Second, data collection should also be considered from the standpoint of social justice. Control over data will shape the future in much the same way control over machines shaped the industrial age and control over land shaped preindustrial societies. Data collection and data mining make societies legible in ways that allow for the prediction of both macro-trends and individual behavior, and ultimately permit those who control the data and the data-mining algorithms to shape behavior. The virtual realities that will become possible through Web 3.0 technologies will lead to ever more sophisticated possibilities for doing just that. Social justice requires that both data collection and data mining be subject to democratic control. What data reveal about human behavior in our society should be regulated to be generally beneficial rather than to enhance the wealth of small parts of the population or to allow for the manipulation of democratic mechanisms in their favor.

Much ingenuity will be required to ensure that democracies, with their accompanying human rights protection, remain appealing vis-à-vis authoritarian alternatives but also remain credible vis-à-vis private-sector and partial-political interests that seek to put technological innovation in the service of only a few.

Mathias Risse is the Berthold Beitz Professor in Human Rights, Global Affairs and Philosophy and director of the Carr Center for Human Rights Policy.

Image prompt: A lightbulb covered in circuitry and data
 

Will AI come for the cognitive class?

Lawrence Summers
 

SEVENTY YEARS AGO, the computer scientist and pioneer Alan Turing said that it would be a threshold for humanity when a machine could imitate a human being’s answers to questions so well that another human being couldn’t tell the difference. That process of testing a machine’s ability to “think” became known as the “Turing test.” We are somewhere in the territory Turing described right now. This is a profound moment for humanity. The printing press and electricity were huge changes because they were general-purpose technologies; AI and the tools it enables, such as ChatGPT, could be the most important general-purpose technology since the wheel or fire.

This points to a profound change in the way we are all going to work. Many of us will have a kind of caddy that augments our creativity, our capacity to bring knowledge to bear, and also our accuracy. When I went to graduate school, we estimated statistical models with five parameters. Now 175 billion parameters will go into one of these systems.

Thinking about the Industrial Revolution 200 years later, we see extraordinarily positive things but also the carnage and catastrophe of the first half of the 20th century.

I am seeing more and more now that ChatGPT is coming for the cognitive class. It will replace what doctors do—hearing symptoms and making diagnoses—before it changes what nurses do: helping patients get up and handle themselves in the hospital. It will change what traders do—going in and out of financial markets—before it changes what salespeople do: making relationships with potential clients. It will change what authors and editors do before it changes what people in bookstores do.

AI, and the tools it enables, such as ChatGPT, are going to alter our society enormously over time. I do believe that they have the potential to level a lot of playing fields. Some of the people who have been quickest to say that structural change is just something you have to live with and accept as part of modern society when it was happening to other people—many of whom wore uniforms to work—are now going to have it happen to them. It will be interesting to see how they respond.

Lawrence Summers, the Charles W. Eliot University Professor, is Weil Director of the Mossavar-Rahmani Center for Business and Government, former president of Harvard University, and former secretary of the U.S. Department of the Treasury.

Image prompt: Artificial intelligence; machine learning; computing 
 

What worries us about AI?

Sheila Jasanoff
 

ARTIFICIAL INTELLIGENCE IS THE NEW DARLING of the policy world. At Davos, it was the buzzy trend of the year, as Fareed Zakaria reported in the Washington Post in January 2023. It is a technology that seems poised to change everything. It will transform our work habits, communication patterns, consumption practices, environmental footprints, and transport systems. Built into a new generation of chatbots, AI will remake how journalists write, lawyers argue, and students respond to essay questions in exams. In the hands of rogues and terrorists, it may spread misinformation, sabotage critical infrastructure, and undermine democracy. All these expectations, both promising and perilous, call for governance, and that makes AI a critically important subject for public policy.

This past spring, the Future of Life Institute, an organization dedicated to steering transformative technologies, issued an open call for all AI labs to declare a moratorium of at least six months on the training of AI systems more powerful than GPT-4. Predictably, critics attacked the proposal as unworkable, unenforceable, and likely to hinder beneficial technology development in an intensely competitive international arena. Is a moratorium the right solution? And have critics grasped the right end of the stick in their arguments against it? My answer to both questions is no.

The rise of the digital economy over the past 30 years has shown that rapid access to information is not the only good that societies need or want. The shiny dream of Silicon Valley is tarnished today by stories of fraud and hype, rising inequality, alienation, and misinformation—in short, a reality that does not comport well with the visions of liberation fervently preached by early apostles of the digital age. So what is to be done?

Given the diversity of AI applications and their rapid development, it is clear that America’s usual approach to regulating technology, which the moratorium critics support, will fall short. Typically, U.S. entrepreneurs are relatively free to design and develop new technological systems unless they are shown to pose plausible threats to human health, safety, or well-being. Until the risks become palpable, self-regulation is the order of the day. Many believe that this laissez-faire approach leads to more-efficient outcomes, with less chance of nipping breakthrough technologies in the bud through premature, possibly unenforceable controls. But what works for relatively self-contained technologies, such as vaccines and self-driving cars, is less well suited to the hydra-headed monster that AI is shaping up to be.

Nor is a six-month moratorium the right answer. The pause is not the problem. What, after all, is a six-month delay in the grand march of technological development? The important issue is not whether a moratorium is appropriate but what should happen during such a pause, and here history offers less than satisfying lessons.

Moratoriums have been under discussion in American technology policy since the famous voluntary restraint adopted in 1974 by molecular biologists developing genetic-engineering techniques using recombinant DNA (rDNA). Widely hailed as a success, that moratorium gave scientists both moral and technical standing to assert that they, and they alone, had the authority to regulate their own research. Subsequent breakthroughs in many technological areas, such as genome editing with CRISPR-Cas9 technology, have elicited similar calls for pauses, but with the thought that responsible scientists would be the ones who built frameworks of self-regulation during such periods of restraint.

The troubled history of genetic engineering, especially as applied to bioengineered crops, suggests that scientists of the gene-editing era construed the regulatory challenge too narrowly. It turned out that the risks people cared about with rDNA research did not relate only to accidental releases. They also involved people’s visceral sense of what was normal, what they were prepared to eat, and what kinds of agriculture seemed natural.

AI offers an even more elusive regulatory target. What worries us about AI? Is it the “A” for “artificial,” because a machine that is learning to think and act like a human blurs a line around human agency that has been fundamental to centuries of ethical thought? Or is it the “I” for “intelligence,” because we do not know whether machinic intelligence will combine analytic speed and a voracious appetite for information with judgment or compassion? Who, after all, would have imagined that the lip-reading computer HAL in Arthur C. Clarke’s 2001: A Space Odyssey would outfox and kill its human controllers?

Signing a six-month moratorium may feel good because it’s taking a stand on an issue of emerging concern. But making a difference in how we deploy AI calls for a deeper, more prolonged engagement, one that arouses a society’s ethical and political intelligence. We need to bring AI back onto the agenda of deliberative democracy. That project will take more than six months, but it will be wholly worth it.

Sheila Jasanoff, the Pforzheimer Professor of Science and Technology Studies, studies the role of science and technology in policy, law, and politics.

Image prompt: Bird’s-eye view of a New York City block with greenery, microchip, circuit boards, circuitry
 

How can AI make cities better?

Beto Altamirano MC/MPA 2022

Alumni Perspective


THE WORLD’S CITIES are the beating heart of human civilization, bustling hubs of innovation, culture, and progress. From the towering skyscrapers of New York City to the ancient temples of Tokyo, cities have played a critical role in shaping our collective history and defining our cultural identity. However, as the global population continues to urbanize at an unprecedented rate—according to the United Nations, 55% of the world’s population currently lives in urban areas, and that figure is projected to reach 68% by 2050—cities are facing mounting challenges. Traffic congestion, waste management, air pollution, and sustainable development are just a few of the complex issues they must navigate in the 21st century. Fortunately, AI and other emerging technologies offer tremendous potential in our quest for a more sustainable future in which cities can be smarter, cleaner, and more equitable than ever before.

AI-powered solutions have significant advantages, for example, in traffic management. Traffic congestion leads to longer travel times, increased fuel consumption, and higher emissions. Flow Labs’ AI-powered traffic-management technology has proved successful in reducing traffic congestion by up to 24% in some areas. In Utah, Flow Labs has helped optimize traffic-light timing and predict traffic patterns, resulting in shorter travel times and decreased emissions.

Similarly, Rubicon Global’s AI-powered waste-management solutions have helped improve practices in multiple cities across the United States, optimizing collection routes to reduce emissions and waste. Its technology has lowered carbon emissions and has the potential to save U.S. cities $208 million over the next 10 years through reduced disposal costs, optimized fleets, and other efficiencies.

Moreover, AI can be a critical tool in improving air quality, which is essential for public health. According to the World Health Organization, air pollution is responsible for some 7 million premature deaths annually. Green City Watch is a German start-up that uses AI-powered solutions to monitor urban green spaces and promote sustainable urban development. Its technology analyzes satellite images and maps green spaces in urban areas, providing policymakers with real-time data on urban greenery and air quality. Green City Watch’s technology has been implemented in Berlin, Amsterdam, Madrid, and other cities across Europe.

Gjenge Makers, a Kenyan start-up, is transforming plastic waste into affordable, durable building materials using AI-powered solutions. Kenya is facing a major plastic-waste problem: The city of Nairobi alone generates approximately 2,400 tons of solid waste a day, 20% of which is plastic. Gjenge Makers’ technology promotes sustainable construction practices by using AI to sort and process plastic waste, converting it into strong building blocks for construction.

Despite the potential benefits of AI, implementing these solutions in cities presents challenges: high costs can be a barrier to entry, and concerns over privacy and data security can hinder adoption. I believe the benefits outweigh those obstacles, but overcoming them will require policymakers, start-ups, and other stakeholders to collaborate on innovative approaches to sustainable urban development. It is also crucial to prioritize equitable access to AI-powered solutions so that all communities benefit.

One effective way to do that is through public-private partnerships, which can leverage the strengths of both sectors to promote innovation and ensure fair and broad access to AI solutions. An excellent case in point is the partnership between the Atlanta city government and Rubicon Global to implement cutting-edge waste-management technology. By adjusting the city’s solid-waste service schedule, that collaboration improved waste collection, reduced emissions, and decreased the amount of recyclable material sent to landfills by a remarkable 83%.

The potential impact of AI-powered solutions on sustainable urban development is significant. The International Data Corporation predicts that global spending on digital transformation will reach $3.4 trillion in 2026, with AI being one of the key drivers of this growth. By leveraging the power of AI, we can build smarter, cleaner, and more equitable cities for all. To do so, we must collaborate across sectors to drive innovation and positive change.

Beto Altamirano MC/MPA 2022 is the CEO and co-founder of Irys, a company developing AI-driven tools to increase community engagement.

Headshots by Martha Stewart and courtesy of Beto Altamirano
