

Featuring JOAN DONOVAN AND MATTHEW BAUM
42 minutes and 16 seconds

Some choices are easy. Some are hard. Some are momentous, which is how many people are describing today’s U.S. national election. Yet all of the choices we make have one thing in common: Our decisions are only as good as the information we have to base them on. And with the rise of disinformation, misinformation, media manipulation, and social media bubbles, we’re finding it increasingly hard to know what information to trust and to feel confident in the decisions we make.

Harvard Kennedy School Professor Matthew Baum and Joan Donovan, the research director for the Shorenstein Center on Media, Politics and Public Policy, have been studying disinformation and the people who create it since the 2016 election and the time when the term “fake news” first entered the political conversation. Baum and Donovan have been building a community of researchers and creating tools to help understand disinformation, where it comes from, and—hopefully—how to make it less of a threat in the future.

Baum, the Marvin Kalb Professor of Global Communications, held one of Harvard’s first major conferences on fake news and misinformation in early 2017 on the heels of the last presidential election, when accusations flew that Republican Donald Trump’s electoral college victory was aided by disinformation campaigns and foreign interference. Donovan came to the Harvard Kennedy School in 2019 and is now director of the Technology and Social Change Project at Shorenstein. They launched the nation’s first scholarly journal on fake news, the Harvard Kennedy School Misinformation Review, earlier this year.

Hosted by

Thoko Moyo

Produced by

Ralph Ranalli
Susan Hughes

This episode is available on Apple Podcasts, Spotify, and wherever you get your podcasts.

Intro (Matt Baum): Unfortunately, knowledge building takes time. This is an urgent problem and it demands quick solutions, but it's really hard to come up with quick solutions. It's like vaccines: we don't want to go out in the field at mass level with something if we don't really know it works, because we will invest a ton of research and maybe create the illusion of efficacy without actually helping the problem.

Intro (Joan Donovan): Ultimately, the point isn't necessarily to get millions and millions of people to see it. The point is to trick a couple of journalists into writing a story about it, or to trick a politician into retweeting it, or to trick a celebrity into covering it in some way. And then that's what triggers the larger coverage.

Intro (Thoko Moyo): Some choices are easy. Some are hard. Some are momentous, which is how many people are describing the U.S. presidential election taking place today, Nov. 3, 2020. Yet all of the choices we make have one thing in common: Our decisions are only as good as the information we have to base them on. And with the rise of disinformation, misinformation, media manipulation, and social media bubbles, we’re finding it increasingly hard to know what information to trust and to feel confident in the decisions we make.

Harvard Kennedy School Professor Matthew Baum and Joan Donovan, the research director for the Shorenstein Center on Media, Politics and Public Policy, have been studying disinformation and the people who create and benefit from it since the 2016 U.S. election, the time when the term “fake news” first entered the political conversation. Baum and Donovan have been building a community of researchers and creating tools to help understand disinformation, where it comes from, and—hopefully—how to make it less of a threat in the future.

They’re here with me to shed some light on what happens in some dark places on this day, Nov. 3rd, a day of big decisions in the U.S. Welcome to PolicyCast.

Thoko: Matt, let me start with you. We started looking at disinformation, media manipulation, just after the 2016 elections. What's changed since then?

Matt: I think the first thing that comes to mind, and maybe the most important thing, is that we've seen the development of a multi-disciplinary research community over that period. There was very little direct research in anything that you might call misinformation studies. Certainly, there was a long history of research in closely related areas like propaganda, but nobody was really doing academic research on fake news or misinformation. Since that time we've had an explosion. It's very multi-disciplinary—political scientists, psychologists, sociologists, economists, data scientists—across the board. This means that there's a lot of research being conducted, and it's not just academics. It's also in governments and among policy shops, think tanks, and non-profit organizations. It's become a major subfield of research and an unusual one, in that it really transcends the usual stovepipes of research. Lots of people with lots of different tools are interested in it.

Thoko: So, lots of people are researching it, lots of people are studying it, but we're still seeing a proliferation of disinformation and media manipulation. Particularly at this moment that we're speaking, just before the U.S. elections, it seems to be in the headlines. What has changed in what we understand about disinformation and how to address it since we started studying it in 2016?

Matt: Well, unfortunately, it's proven to be a really hard problem, and I don't think that we really have much of a toolbox yet. We have a few things that we would really like to see be effective and hope are effective: civic education, educating journalists and students to be better consumers of news and information and to recognize misinformation; fact checking, so that when false or problematic information appears it gets corrected; and platform interventions. Facebook and Twitter this year have, to varying degrees, started to seriously block election-related misinformation, really for the first time. We don't yet know how well it's worked.

There's a lot of uncertainty about how well these other interventions work. It's just that, unfortunately, knowledge building takes time. This is an urgent problem and it demands quick solutions, but it's really hard to come up with quick solutions. We tend to try things, and then... It's like vaccines: we don't want to go out in the field at mass level with something if we don't really know it works, because we will invest a ton of research and maybe create the illusion of efficacy without actually helping the problem, which might make the bad actors more successful. So, we have to be a little careful.

Thoko: Joan, actually, let me come to you. So, this is an urgent problem, but as Matt says, no quick solutions. What does it look like to actually study disinformation? What are the elements of it? What are the things that you're focused on?

Joan: Matt is right, in the sense that it really helps to be working across disciplines. There's no viewpoint from any singular discipline that's really going to get you what you want to see and a complete picture. So, the Technology and Social Change Project at Shorenstein is really oriented towards working across social science, particularly using the tools of investigative journalism alongside ideas from cultural anthropology about how we study communities in situ, and then also adding in data science. The data science part is hard because you really have to know where to look. There are just oceans and oceans of information, and it's not like fishing. You really have to be much more targeted in your approach. So, we built out the media manipulation casebook as both a teaching and learning tool, so that people can see exactly how we do it. I think there's a citation on nearly every single sentence of our case studies because we want people to be able to recreate cases as they emerge and change, and especially to understand the ways in which platform companies are intervening. If you would like, I could talk about a specific example.

Thoko: That would be great because if you can just maybe illustrate for us the elements of a disinformation campaign and some of the things that you would be studying in the work that you do, yeah, that would paint quite a nice picture for us.

Joan: Yeah. Particularly, when we think about media manipulation and disinformation campaigns, we have a whole different set of tactics that might be at play. One of the ways in which we'll watch a campaign develop is through the use of this technique called keyword squatting. For instance, we will... I almost said sit around, but we don't actually see each other anymore. But we'll generate a list of keywords that we think are high-value targets for misinformation campaigns, things like Biden 2020, Kamala Harris, Trump 2020. Right? And then we'll start to go around social media and look at, well, who is dominating those keywords? Is it the actual candidates, or is it supporters of that candidate? In some instances, what we find is grift. We'll find people that are trying to sell you merch or create fake donation links and whatnot, and they're using the hashtags of the campaign.
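
For readers who want to see what a keyword audit like this might look like in practice, here is a minimal sketch in Python. The `search_posts` function is a hypothetical placeholder for whatever platform search API or data export a researcher actually has access to; this is an illustration of the idea, not tooling from the casebook itself.

```python
from collections import Counter

# Keywords judged to be high-value targets (the examples Donovan gives above).
HIGH_VALUE_KEYWORDS = ["Biden 2020", "Kamala Harris", "Trump 2020"]

def top_accounts(posts, n=10):
    """Tally which accounts dominate the posts matching a keyword."""
    return Counter(post["author"] for post in posts).most_common(n)

def audit_keywords(search_posts):
    """For each high-value keyword, report who currently dominates it.

    search_posts(keyword) should return a list of {"author": ...} records;
    it is a stand-in for a real platform search API or data export.
    """
    return {kw: top_accounts(search_posts(kw)) for kw in HIGH_VALUE_KEYWORDS}

# Toy usage with fabricated posts:
fake_index = {
    "Biden 2020": [{"author": "merch_store_99"}, {"author": "merch_store_99"}],
    "Kamala Harris": [{"author": "official_account"}],
    "Trump 2020": [{"author": "donation_link_bot"}],
}
print(audit_keywords(lambda kw: fake_index.get(kw, [])))
```

An audit like this only surfaces leads: an account dominating a keyword might be a grifter, a coordinated campaign, or just an enthusiastic supporter, which is why the qualitative follow-up Donovan describes matters.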

Other times we'll find things much more insidious, like when we were looking at a campaign in 2016 that was trying to get people to believe that Hillary Clinton was in support of women going into the draft. There was the regular hashtag of Hillary Clinton, #imwithher. And then there was this other hashtag that was circulating with it called #draftmywife. That seemed strange, and then we started to dig into some of the background and we found a whole camp of folks on Reddit's The_Donald that were just sitting around making memes with these hashtags that were trying to falsely associate Clinton with this particular political position. So, we try to understand both the tactic, and then also try to find where these campaigns are being organized and planned. When we don't find any evidence of organization, that's when we suspect foreign interference. If something appears online and it's got a lot of velocity and volume, we then look and see, oh, are they paying for advertising? Are they maybe employing troll farms or botnets? But a lot of these things are organized in the open because to manipulate algorithms requires a large group of people.
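
To make the #draftmywife example concrete: one simple signal a researcher can compute is which hashtags suddenly co-occur with a campaign's official tag. The sketch below uses fabricated data; a high-volume pairing is a lead worth investigating, not proof of coordination.

```python
from collections import Counter

def cooccurring_hashtags(posts, anchor, n=10):
    """Count the hashtags that appear alongside an anchor hashtag.

    posts: iterable of sets of lowercase hashtags, one set per post.
    A sudden pairing (like #draftmywife riding on #imwithher) flags
    something to dig into, as Donovan describes above.
    """
    pairs = Counter()
    for tags in posts:
        if anchor in tags:
            pairs.update(tags - {anchor})
    return pairs.most_common(n)

# Toy usage with fabricated posts:
posts = [
    {"#imwithher", "#draftmywife"},
    {"#imwithher"},
    {"#imwithher", "#draftmywife"},
    {"#maga"},
]
print(cooccurring_hashtags(posts, "#imwithher"))
# -> [('#draftmywife', 2)]
```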

Thoko: Say a little bit more about that. How do you organize in public? I mean, how do you organize in an open way? How do these disinformation campaigns actually get organized?

Joan: Yeah. I cut my teeth in this field looking at social movements and how different groups like Occupy Wall Street got together and figured out how to have a global campaign against Wall Street. In the instances where we're looking for misinformation, a lot of what happens is, there'll be different communities of people who are... some of them are just anti-media. They want to see CNN mess up. They want to see MSNBC mess up. They have journalists that they'll target. So, sometimes we'll watch certain journalists that are often targeted by trolls, and then try to go back into those communities and look for evidence of that planning.

So, there'll be these small communities, or chat apps like the ones Matt looks at, where misinformation is circulating. Sometimes it's off the platform, and this is an important feature of what happens when a misinformation campaign moves from one platform to another. So, if it moves from an anonymous message board or Reddit onto Facebook or onto Twitter, it really does what P.M. Krafft and I have called platform filtering. It sheds the context of its origins, and it's presented anew for different audiences. But these things sometimes are more like games for folks to participate in. Other times, they're born of networked harassment campaigns, where they really want to see certain journalists or civil society leaders fall. So, they become very strategic about how they create misinformation and then where they plant it. Because, ultimately, the point isn't necessarily to get millions and millions of people to see it. The point is to trick a couple of journalists into writing a story about it, or to trick a politician into retweeting it, or to trick a celebrity into covering it in some way. And then that's what triggers the larger coverage, and it separates the source of the disinformation from the actual campaign and makes a new source out of the person either reporting on it or commenting on it.
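
As a rough illustration of how a researcher might watch for that platform-to-platform movement, the sketch below fingerprints near-identical text and records where and when it resurfaces. It assumes you have already collected posts from multiple platforms, and real tracing would need fuzzier matching (shingling or embeddings) than an exact hash, so treat this as a toy, not the authors' actual method.

```python
import hashlib
import re

def fingerprint(text):
    """Crude content fingerprint: lowercase, collapse whitespace, hash."""
    normalized = re.sub(r"\s+", " ", text.lower()).strip()
    return hashlib.sha256(normalized.encode("utf-8")).hexdigest()

def trace_migration(items):
    """Group posts that share a fingerprint, ordered by time, so you can
    see a claim move from, say, an anonymous board to Reddit to Twitter.

    items: iterable of (platform, text, timestamp) tuples.
    Returns only fingerprints that appear more than once.
    """
    trails = {}
    for platform, text, ts in sorted(items, key=lambda item: item[2]):
        trails.setdefault(fingerprint(text), []).append((platform, ts))
    return {h: trail for h, trail in trails.items() if len(trail) > 1}

# Toy usage with fabricated posts:
items = [
    ("anon-board", "candidate X did Y", 1),
    ("reddit", "Candidate X did Y", 2),
    ("twitter", "candidate  x did y", 3),
]
print(trace_migration(items))
```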

Thoko: What makes journalists so vulnerable to tricksters or to these disinformers? What is it about the way journalism works that makes journalists a target for someone who's wishing to plant something? What's the vulnerability there?

Joan: Well, as Matt was pointing out, in 2016 there weren't a lot of people who understood the ways in which different communities would utilize the affordances of online infrastructure like anonymity and like spam, essentially the ability to message a bunch of people at once, or advertising. Nobody was really looking for it. So, when journalists were getting tips, they didn't realize they were being targeted with tips. They would think, "Oh, well, a lot of people in my feed are talking about Hillary's health and this thing called Pizzagate. I better look into that," right? And you have the development of an internet beat at the same time, of journalists who are saddled at the desk. There's not a lot of shoe-leather reporting going on, and they are reporting on... Buzzfeed really made this kind of journalism popular, the man in the tweets, and that became a really fertile ground for hoaxing and scams.

Marketers were really savvy in the sense that they realized that you could place a story in an outlet that doesn't have a lot of standards, and larger outlets looking for novel stories would pick it up. That's where junk news has really thrived: in the fact that nobody else has the story. So, you'll start to see this cascade effect happen when you get a little bit of energy around a story, and then outlets would rush to cover it because nobody wanted to be left out, which created this entire echo effect. Only now are we seeing journalists and platform companies realize that they are still gatekeepers and their coverage matters. It legitimates these misinformation campaigns. So, they've started to pull back on that rush to coverage.

Thoko: And we'll talk a little bit more about the platforms, but let me just play back what I think I hear you saying, that a lot of the tactics that disinformers are using can be used for various reasons. So, it's not just the political. Sometimes it's just they're wanting to have some fun with journalists. Is that right, or is there always a specific political motive behind what disinformers are doing?

Joan: Usually, the tactics develop in a very joking way. They're usually pretty juvenile. When I went back and traced the history of the first online media manipulation campaigns that were fairly popular, what I found was actually leftist activists pretending to be the WTO, registering a domain name as if they were the WTO, or registering gwbush.com and pretending to be representatives of George Bush back in the early aughts. Those kinds of techniques are still out there. They're still available. These techniques are available to a bunch of different industries and communication strategists, but really, when it gets down to it, if you need to lie about who you are and lie about what your intention is in order to get your point across, then we need to start setting up some guardrails to prevent that from happening, especially at scale.

Thoko: Matt, let me bring you in here.

Matt: There are a couple of dimensions that are worth mentioning here. The first is that we have a preference for information that confirms what we already believe; psychologists call it selective perception. We've known about this for more than half a century now, but we've seen it come to fruition much more effectively because the producers of information have become much more sophisticated. They've caught up to advertising. Advertisers have been doing this for many, many decades. But it's icky to be told you're wrong and have to think about that, and decide whether or not this new information should override the belief that you came in with.

So, given the opportunity, we'd much rather expose ourselves to information that tells us we're right. Social media are uniquely ideal vehicles for being able to select into streams of information that will tell you that you're right. It's like the rat pressing the lever to get a food pellet: it's constant reinforcement, which lights up your brain in really happy ways. It's almost like an endorphin rush, if you really want to push it down to the biological level. So, part of it is definitely that we are wired to prefer certain information over other information, as well as to dismiss contrary information, even when we receive it. So, it's like we have multi-layered defenses against being told we're wrong. So, misinformation is designed to appeal to certain people by telling them what they already want to believe.

Thoko: So, how is that playing out in, for instance, the elections in the U.S. right now? How have you seen that play out?

Matt: I mean, I think by far the most salient example of this right now is around responding to COVID and the severity of the COVID crisis. It's two completely different universes when you look at what the Democratic candidate, Joe Biden, has to say and his supporters believe about how big a problem it is, how proactive we have to be in responding to it, and what policies we should pursue. And then you have the President at his rallies, and all the statistics in the world that might be saying, "COVID is spiking. Hospitalizations are spiking," seem not to penetrate at all.

Why is that? People become invested in a particular point of view, "I support President Trump. President Trump is telling me this. It would be really icky for me to have to reconcile my support for the President, who's telling me COVID is not a serious problem and it's about to go away, with this data showing me, in fact, it's getting worse.” So, instead, I dismiss it as false. A pox on all the media because the media are saying this, and I seek out whatever media outlets I can find, say, Fox News or The Gateway Pundit online or whatever, that will tell me I'm right. And, on my social media feeds, I'm likely to be surrounded by people who agree with me.

Thoko: So, is there a way of bridging that? I mean, is there a way to actually address that, or do you just accept that people will believe what they want to believe?

Matt: Well, we know what, in theory, works, which is, if you can find somebody who is a credible source willing to go against their partisan interests... If we're talking about politics, there are partisan interests. So, if you can find somebody who is in the Trump orbit who will say, "COVID is actually a serious problem," in that particular example, theoretically, that would be a lot more credible to people who are inclined to believe what the President says. In fact, there are lots of experiments that psychologists and political scientists have done on this. If you can just find credible communicators, especially bipartisan ones, if you can get representatives from both parties to say, "This is true and that's not true," that should work better.

The problem that we have, and why this is... I said at the outset, this is such a hard problem... is that, increasingly, people will use the message in order to judge the messenger, independent of type. So, if I'm a Republican senior cabinet official, a Trump administration cabinet official, for instance, and I say, "This is wrong. The Trump administration's bad. You should vote for Joe Biden," people who are supportive of the President will say, "Well, actually, you're part of the deep state," or, "You're really a closet Democrat." Or, if it's a fact check site that says, "This thing the President said is false," well, then that fact check site is biased against Republicans, or whatever it might be. So, they go to the source and they, in effect, disarm or debunk the source rather than the message itself. This is a function, I think, of the heightened polarization of the times we're in. So, something that used to work pretty well 10, 20 years ago doesn't work as well right now. So, again, in theory, yes, we know what works with human psychology. In practice, we haven't quite figured out how to do it very well. And there's one other thing: the people who are going to be most credible are generally the least willing to play that role.

Thoko: Right. To stick their necks out. 

Joan, let me come back to you and just explore a little more about what your work might be in this moment as you look at the U.S. elections. What are the sorts of things that you're studying or tracking, and what are you finding?

Joan: The things that we're really looking at have been related to political violence and calls for violence. One of the things that, I think... We can't talk about misinformation without talking about the contexts that make misinformation believable or possible, and, as Matt has pointed out, polarization on particular wedge issues is where we see misinformation targeted. So, we've been trying to understand the role of the rising tide of white vigilantism going on in the United States, the issues that have come to light around the Proud Boys and their organizing in the Pacific Northwest, and the ongoing uprisings related to Black Lives Matter. All of these things that are happening in society become political opportunities for disinformers to really try to either muddy the waters or to create confusion or to just lob false associations at their political opponents.

So, we've been trying to understand a lot about how that plays into the ways in which people understand candidates. Now, I don't know which way the election's going to go, but I certainly know that Joe Biden is not the president of Antifa, right, or the radical left. I know the radical left. They're not Joe Biden, right? So, these kinds of things that are political spin, they actually come from somewhere, right? We do have lots of social unrest out there related to what's going on and the presidency and this idea that some people believe that this is creeping fascism.

So, you see antifascist protestors in the streets in certain cities night after night. The kind of coverage that we get of those protests is just vastly different on the right and on the left in the ways in which this plays out. And then, centrist media, which we tend to forget about, is really trying to toe a line where some of the normal journalistic practices, like trying to go to the source or get both sides of the story, are actually being turned into vulnerabilities for their entire field, where people are losing trust in coverage of these really important issues because there's just so much misinformation out there.

Thoko: So, talk about that in the context of the big story that we've had in the U.S. over the past two weeks, which was the Hunter Biden laptop and emails. I mean, what was happening there, when you look at it from where you sit as someone who studies disinformation and misinformation?

Joan: Yeah. The sourcing of the laptop being dropped off in Delaware at a lonely repairman's shop that's just... If you can charge $85 for fixing a broken laptop, I want to know you. It's a broken laptop, right? So, the sourcing of it just stinks of tradecraft. It stinks of a drop. And many cyber-security professionals are waiting for an opportunity to forensically analyze the contents of this hard drive. The story, for instance, that Ben Smith put out for the New York Times on Sunday gives us another clue that the White House actually pitched the Wall Street Journal on this story, and the Wall Street Journal couldn't corroborate any of the evidence. So, they passed.

Thoko: It's unusual for the White House itself to pitch directly, I think.

Joan: Don't you think it is? Right? Because usually we're supposed to have these very transparent press conferences, where everybody gets access to the things that the White House thinks are of public interest, right? So, that kind of media spin, that kind of media manipulation, is tactical: how do you plant a story? How do you plant it in a place that has some legitimacy, like the Wall Street Journal, which, again, I would maybe put in the centrist category here?

The fact that they wouldn't cover it, but also the fact that they didn't talk about why they wouldn't cover it, right, not making a story about the pitch. It's very hard for editors to give up a story that alluring. But they knew what was at stake, which was that you could end up sharing a leak when you don't know exactly where it came from or whether any of it has been manipulated. The other thing that I'll say about the leak is, what we know about prior political leaks is that sometimes they're just straight leaks and they're just an enormous amount of information. Other times, forgeries are placed within leaked contents, and that's when we get into a situation where it takes time for journalists to really vet these things.

Matt: Actually, there's another thing that's really important about this story that hasn't come up yet, and that is the contrast with, and maybe the lesson learned by the mainstream media from, the Hillary Clinton email scandal in 2016, where in hindsight it seems as if that coverage could have tipped the balance in the election. The fact that the Wall Street Journal declined this story, which is, as Joan was saying, an incredibly alluring story, really hard to decline... I think this was alluded to in the New York Times: the Wall Street Journal rediscovering its status as a gatekeeper suggests that there have been some lessons learned by journalists since the last go-around. The way this story's been treated really is very different relative to the WikiLeaks and email brouhaha in 2016.

Thoko: So, Joan, if this story had hit your desk as a disinformation researcher and you didn't have the context that we all have right now, what is it about it that would have raised flags for you? How would you have tackled it? Where would you begin with something like this if it lands on your table? The New York Post has this alluring story, as you say. Where do you start? Or do you wait to see more information come up before you start tackling it? I'm just curious how a researcher would have approached this, and what your first reaction was when you saw it.

Joan: Earlier this summer, we started looking at coverage of Hunter Biden because we could tell that there was an appetite in the right-wing media ecosystem for stories about Hunter Biden and palace intrigue. Who is he? Why did he end up at Burisma in the first place? So, there had been attempts to conduct journalism, uncover things about him, for months and months and months. So, the first inkling that we knew Hunter Biden was going to be the lead story from the right was... I placed a hard bet with the team and said, "I think what's going to happen at the debate is that Trump is going to force the issue of Hunter Biden," because it had been stuck in the right-wing echo chamber and it wasn't coming out. And he did. He did. He really tried to force it as a conversation, and it didn't work.

What we see as researchers is that when they're trying to make a story happen time and time again and it doesn't, you start to see the intensification and adaptation of tactics. So, we pretty much expected more and different styles of attack, including a leak. But what was really suspicious about it is, you've got someone with millions of dollars. He can't afford Geek Squad at Best Buy to come to his house for the laptop that supposedly has evidence of crimes on it? I mean, it's really hard to believe. The other thing that I think is important that we look at is, who are the sources? Are these people who are known for these types of shenanigans, or are they reputable people?

And that's where journalists really come into the line of defense here, where that Wall Street Journal reporter, I'm sure, realized, "This could end my career." At the New York Post, one thing that's really suspicious when you look at a story like that is that the byline on it is a reporter who has never published at the New York Post before. So, those kinds of signals, which might seem minor to most people, for us, both train us on where to look and what is to come, and then also help us understand, well, why did the campaign happen this way and not through another channel?

Thoko: And this ties into the approach that you have when you look at the life cycle of a media manipulation campaign. You mentioned adaptation. That's a standard, if I can use the word, phase of these manipulation campaigns. They start somewhere, and at some point there'll be an adaptation based on response. Is that right?

Joan: Yeah. We look at five points of action, basically: where it's planned, or what the origins are, if we can find them; how it moves across the web; and then who it gets in front of. Stage three is so important because if nobody responds, the campaign dies and that's it. Then, if somebody does respond who's newsworthy, like a journalist or a celebrity or a politician, we look at mitigation or triage, right, like who tried to block it. The interesting thing about the New York Post story, of course, is that Twitter decided to break the links and actually block accounts that were trying to share it, whereas Facebook decided to throttle its velocity across their platform. And then we look at the adaptation phase, which is, what is the next step in trying to stick the landing on a disinformation campaign if it's not achieving what the disinformers were hoping for.
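
For readers who want the five phases Joan just walked through as a compact reference, here they are encoded as a small data structure. The stage labels are a paraphrase of what she describes in this episode, not official casebook terminology.

```python
from enum import Enum

class Stage(Enum):
    """Media manipulation life cycle, as described above (labels paraphrased)."""
    PLANNING = 1    # campaign planning and origins, if they can be found
    SEEDING = 2     # how the content moves across platforms and the web
    RESPONSE = 3    # whether journalists, celebrities, or politicians bite;
                    # if nobody newsworthy responds, the campaign dies here
    MITIGATION = 4  # triage: who tried to block or throttle it, and how
    ADAPTATION = 5  # the manipulators' next move when a campaign stalls
```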

Thoko: Let's talk about the platforms. We've seen in recent weeks measures or steps that platforms have taken to try and tackle disinformation. Are we really seeing a change, that they're stepping up, or do they just want us to think that they're taking action? And can we talk a little about what their role should be? I mean, what should the platforms be doing? Because there have been arguments that they are pushing responsibility onto the policy makers... sorry, or the journalists: that you figure this out. What should their role be when you look at where we are right now?

Matt: That's a really big, complicated question. There are two questions. What are they doing and what ought they to be doing? What they've been trying to do is, as Joan said, a mix of throttling and outright suppressing false stories that directly link to the election. I'm saying they, but it's been more Twitter than anybody else, followed to some extent by YouTube, and then, to a lesser extent, Facebook. It's had some effect, but as you probably know, Twitter relented on the Hunter Biden story, and some others, under political pressure. So, it's not clear how much efficacy these policies are actually having. 

I think you have to understand the bigger picture of what's going on here. Public attitudes toward the platforms have tanked. We, in our surveys—and I've been surveying 25,000 people a month since March about COVID—have included a battery asking about all sorts of institutions and individuals, public and private: how much do you trust them in general and on COVID? Social media platforms, in every survey, are at the bottom. Only maybe 30% of the public has any trust and confidence in social media. What does that mean? It means that the free ride social media has been getting from politicians, and in particular, Congress, is probably going to be ending before too much longer. We've seen hearings. There's mounting pressure to do something, top-down type interventions. The social media platforms really don't want to be regulated, so they're trying to do enough to show the public that they're actually being responsive, without going so far as to bite the hand that's feeding them. In this case, for instance, the Trump administration is in charge, so you don't want to go too far in alienating the Trump administration. Whatever you do, one side or the other is not going to like it. So, they're kind of in a bind. It's a bind of their own making, I would argue, because for quite a long time they have been incredibly resistant to doing much of anything except at the margins.

But I think what you're seeing is this dance between politicians responding to the public, which is what politicians do, and the public is saying, "We don't trust them. Something needs to be done." The platforms have been exempt from any real regulation because the content has been treated as user-generated and they're considered just utilities that provide a platform, not media companies, so they're not regulated. But I think there's growing impetus to change that, and we could very well be entering a period where a regulatory regime of some sort starts to emerge. That, I think, explains what they're doing.

Thoko: Joan, you talked a little earlier about the need to look at the structure and design of social media and the need for guardrails in the communication and information systems. Do platforms need to be the way they are? Does the internet need to look the way it is? I mean, what's your thinking around that as you think about these guardrails? Could you talk some more about that?

Joan: With platforms, what we've ended up with is products built on top of the internet that have done a really good job of consolidating information and siphoning value out of all the other markets that would utilize advertising revenue online. So, most news agencies, public health professionals, and civil society, everybody's got to be on these platforms if they want to have an audience and if they want to reach people at this point. That is a consequence of what Matt is pointing to, which is a regulatory environment that permits anything. I think about Section 230, which allows for release from liability, but it's also a policy that hopes platform companies will do enough moderation to make sure that some seriously dangerous content doesn't get out there, including pornography. The issue, though, when it comes to the design, is that these companies have really built around an advertising infrastructure that incentivizes the lurid, the outrageous, the novel, and that isn't a network built for knowledge.

We went back and looked at some of what public health experts and doctors were writing about the internet in the late '90s, and they knew very clearly that this was going to be really bad for medicine because people were going to have access to all kinds of bunk science and quackery. So, at this stage, I do talk a little bit about what the true costs are and who is paying them. When we deal with misinformation at scale, which is misinformation that's reaching millions of people, not just local rumors, the people who pay the price are journalists, first and foremost, who have to write thousands of debunk stories a year; public health professionals, who are trying to get over the cacophony of absolutely outrageous claims about what the pandemic is and isn't; and civil society, who have to wage a war for legitimacy because some of these misinformation campaigns impersonate them.

Matt: What Joan is describing is, in a lot of ways, analogous to the legacy media in their early days. Think about newspapers and radio in the 1910s and 1920s, the era of yellow journalism, before the profession emerged, before we had awards like the Pulitzer Prize and the Columbia School of Journalism that created a profession with professional standards, and before a regulatory regime arose that reined it in. Television stations were required to provide public affairs programming: knowledge building, as you put it. The internet doesn't have any of those things. Maybe it will evolve. So, it's different than where those legacy media are today, but in their early days, they had many of the same issues.

Thoko: So, Matt, what are some of the challenges that you see to policy responses to where we find ourselves?

Matt: Well, regulating social media platforms is extremely difficult because of a whole host of issues: the technology, the much greater complexity of identifying exactly where you would target such regulations, and the privacy and First Amendment implications of limiting who can say what. The fact that a lot of this is generated by individual users is another problem. So, there isn't this gatekeeper that you can actually target policies and regulations around. You can talk about regulating the platforms, but they're very much afraid of being sued, and if they go too far, they can potentially be liable for being the referee that decides what is okay and what is not okay. They don't want anything to do with that, and that's understandable, but doing nothing is probably not acceptable either.

So, it's a really difficult nut to crack. Of course, a lot of people are focusing their energies on this, a lot of researchers, lawyers, and people like Joan, thinking about how we can do this in a way that navigates this really complicated territory, because it's not the same as legacy media. It is a different animal. It's also global, so it's much harder to regulate what actors put on their platforms in Russia or in Europe or in Asia. The European Union has taken a very different tack on this. They say, "Actually, we can do that. We can reach into your country and tell you what you can do to the extent that it's crossing our border." The United States hasn't been willing to do that. Maybe that will change, but to date they haven't been able to.

Thoko: Joan, I'll let you have the last word.

Joan: I agree with Matt that there are echoes of the past where we can look to prior policy and try to get a sense of what's happening. I've turned to looking at the burden-of-disease literature and thinking about the development of policies on secondhand smoke. Smoking doesn't become illegal because individuals are being harmed; it's considered their individual choice. Secondhand smoke becomes illegal, or smoking in public does, because there are demonstrated health outcomes and burdens that are placed on insurance companies, employers, families. So, you actually get this entire science that comes into view to articulate what secondhand smoke is and what its effects are. So, I've been really trying to think through that model and how that legislation came into being, so that we can understand misinformation as a problem in public space, like on the internet, right, as something that affects people. It has, especially in the public health realm, very bad effects. We can see it very clearly in people's health behaviors. They get a piece of health misinformation and they may act on it immediately. We saw that at the beginning of the pandemic, where people were sharing these chain letters about just drinking a little bit of water every 15 minutes, et cetera. Those kinds of things change people's behaviors quickly, and the more dangerous those behaviors or suggestions become, the more troublesome it's going to be to get rid of them.

The last thing I'll say, and this is a completely different metaphor, is that the people who were crazy enough to build early versions of flying machines and airplanes didn't start by building an airport, right? So, we have to think about this industry as being really in its junior, junior phase, and the flight plan... Now we need to start worrying about where they're going to land the plane and what kinds of industries we are going to need to develop, that are paid labor, so that we can get towards a web we want rather than the kind of information milieu that we have. I think it's possible, and we start to see it with the fact-checking industry and other misinformation labs coming up, but this is just a Band-Aid on something that's going to take several years of research, but then also policy building alongside it. I really look forward to the future at the Harvard Kennedy School, where people who care about these problems and care about public service are going to learn about these issues, directly interact with them, and then hopefully carry them into their professions on their way out.

Thoko: That’s great, thank you for an enlightening conversation. I appreciate you both being here.

Thanks for listening. I hope you'll join us for our next episode. If you'd like more information about other recent episodes or to learn more about our podcast, please visit us at hks.harvard.edu/policycast.