Concluding a two-year initiative, researchers at Harvard Kennedy School have proposed a new risk-based framework for understanding and managing the personal and societal harms that can result from powerful, unregulated digital platforms.
The Democracy and Internet Governance Initiative’s final report provides lawmakers and regulators with a framework for assessing and acting on the growing dangers posed by social media to mental and physical safety, privacy, financial security, and social well-being.
The scholars hope their approach will help policymakers break what they regard as a stalemate blocking effective governance of social media platforms in the United States and confront the litany of dangers flowing from them in recent years as big platforms have consolidated their influence.
To move forward, the researchers have developed a risk-centered matrix for weighing the potential negative outcomes from digital platforms including Meta, which owns Facebook and WhatsApp, and Alphabet, parent of Google and YouTube. “Centering risk in our efforts to govern social media can provide us with an actionable North Star,” the study declares.
The report notes that while Europe and other jurisdictions have made strides toward effective digital governance, the United States “has been stuck in the political web of free speech, national competitiveness, and pro-market narratives—to the convenience of many companies.”
The researchers say that a policy approach focused on identifying and acting on the sources and outcomes of those risks also could help break the partisan deadlock that now often pits free-speech advocates against those worried about disinformation and misinformation.
Rather than a one-size-fits-all answer, the strategy suggests a set of risk-specific approaches that target each threat individually, which the authors argue is essential in a tech world with rapidly evolving tools such as generative artificial intelligence.
But confronting those disparate risks will mean taking on the “deep information asymmetry problem” in which the social media giants disclose almost no user data, arguing it is proprietary information, the report says. As a result, policymakers and analysts “are operating almost blindly.” The authors call for major changes in information disclosure requirements from tech companies to enable risk-based regulation to work—just as the major drug companies need to disclose detailed drug test data to the Food and Drug Administration.
Indeed, the researchers cite the FDA as a potential model for a similar public agency to set standards for the social media platforms that would ensure they address the risks set out in the new framework. As with drug oversight, this approach would also draw in the companies themselves and “leverage market dynamics to encourage private sector actors and experts across civil society to lead the charge on disclosures and the development of standards.”
The seven risk categories delineated in the framework all have both individual and communal targets: for example, “social and reputational risks” can mean damaging a person’s reputation by leaking real or faked pornographic material, or it could mean broader harm to an entire community through misinformation about COVID origins.
The seven categories of risk with both individual and community-level harms are:
- Mental and physical health or safety, such as cyberbullying or radicalization to terrorism
- Financial, such as phishing scams or predatory loans and credit abuses
- Privacy, through personal disclosures and “doxxing”
- Social and reputational risks such as exposure to danger, harm, or loss
- Professional, such as algorithmic bias in hiring
- Sovereignty, such as Russian interference in the 2016 U.S. election
- Public goods, such as decline of robust local news coverage
The report is the culmination of an initiative based in two Kennedy School research centers: the Shorenstein Center on Media, Politics and Public Policy, and the Belfer Center for Science and International Affairs.
Nancy Gibbs, Lombard Director of the Shorenstein Center and the Edward R. Murrow Professor of the Press, Politics and Public Policy, joined with Professor Ash Carter, then-director of the Belfer Center, to form the Democracy and Internet Governance Initiative in 2021. Carter, former U.S. defense secretary and a longtime advocate for harnessing technology for public good, died in October 2022. The digital initiative drew on the ideas of more than 100 experts and stakeholders in working groups and interviews over two years and produced a series of detailed working papers.
In her introduction to the final report, Gibbs said she and Carter “both shared the perspective that digital platform governance is one of the great issues of our time. Today, our foreign and domestic enemies seek to weaken our democracy through the erosion of truth, the amplification of lies, and the weakening of the body politic … . It is long past time we act.”
The final report argues that the central conclusion—shifting the American approach to an individual risk-centered model for analysis and action—should help move the governance debate away from partisan divisions and toward a more dynamic approach that would engage government, corporate, and nonprofit actors in targeted, nonpartisan action.
“We aim to pull the conversation away from the mainstream political dialogue and towards something that can be implemented in a bipartisan manner—and through collaboration of business, government, and civil society,” the study says.
—
Banner illustration by Richard Mia; Inline illustration by Joey Guidone