With days still to go before the US presidential election, early voting has already topped half of all votes cast in the 2016 election, and every indication is that the electorate is energized. It makes sense, then, that in this heavily contested, highly polarized political environment (in the midst of a raging pandemic, no less), disinformation campaigns are likely to come hot and fast.
Major platforms like Facebook and Twitter have started to take aggressive action against disinformation networks and accounts associated with the QAnon conspiracy theory. But acting to block disinformation in an attempt to stop the spread can backfire and has recently left social media platforms open to accusations of censorship.
“The QAnon crackdown feels too late,” says Abby Ohlheiser, who writes about digital culture for MIT Technology Review and has been covering misinformation for years. “[It’s] as if the platforms were trying to stop a river from flooding by tossing out water in buckets.”
In this episode of Deep Tech, she joins Patrick Howell O’Neill, our cybersecurity reporter—who has also been writing our election integrity newsletter, the Outcome—and our editor-in-chief, Gideon Lichfield, to discuss how disinformation culture has been thriving online for years and what to expect on election day.
We also examine a proposal from Joan Donovan, the director of the Technology and Social Change Research Project at Harvard’s Kennedy School. She argues that it might be time to start regulating social media like another addictive and potentially harmful product: tobacco.
Check out more episodes of Deep Tech here.
Show notes and links:
- Election result delays mean “the system is working,” says cybersecurity chief October 12, 2020
- Twitter’s ban almost doubled attention for Biden misinformation October 16, 2020
- Facebook and Twitter’s no-win situation over Biden October 16, 2020
- Thank you for posting: Smoking’s lessons for regulating media October 5, 2020
- How the truth was murdered October 7, 2020
Full episode transcript:
Gideon Lichfield: Misinformation that spreads on social media is lethal. It literally kills people. It’s fueled riots in Myanmar, killings of falsely suspected criminals in India, and the deaths of people who’ve taken fake cures or ignored safety precautions for covid-19.
It’s also threatening the integrity of next week’s US presidential election. For months, President Donald Trump and his supporters have been spreading false claims that mail-in voting is vulnerable to mass-scale fraud, even though both independent researchers and government bodies say it’s historically extremely rare.
Facebook, Twitter, YouTube and other platforms have been belatedly scrambling to respond. They’ve taken down or flagged election and covid misinformation and removed accounts associated with the QAnon conspiracy movement. But is it all too little, too late?
With me to discuss that are two of MIT Technology Review’s reporters: Abby Ohlheiser, who writes about digital culture and has been covering the misinformation beat for years, and Patrick Howell O’Neill, who covers cybersecurity and has been writing our election integrity newsletter, the Outcome.
And because misinformation is becoming an increasingly urgent problem in the political race, we’ll also look at a proposal for regulating Big Tech that borrows something from the playbook of regulating Big Tobacco.
I’m Gideon Lichfield, editor-in-chief of MIT Technology Review, and this is Deep Tech.
Gideon Lichfield: Patrick, you’ve been writing our election newsletter, the Outcome, and obviously one of the big topics is misinformation in the run-up to the election itself. Facebook and Twitter have been belatedly scrambling, it seems, to tamp down rumors about voting fraud. But there’s a lot of worry that as the election itself happens, claims made by President Trump and others will incite violence and send people onto the streets. Do you have a sense that these platforms are ready for the kind of problems we’re going to see on election night?
Patrick Howell O’Neill: So I think that you’re right to say that it’s a problem that we all see coming, but it’s not clear how prepared we actually are. Obviously there’s the president refusing to say that he will commit to a peaceful transfer of power, which injects a lot of pretty reasonable fear about what it means if he loses. So I think there’s a lot to talk about there. First of all, there’s the fact that he’s been engaged in this long-term disinformation campaign, for lack of a better phrase, about the security of the election, and specifically about the security of mail-in voting, which could end up accounting for more than 50% of votes in this election.
And then there’s the fact that that plays into this larger conspiratorial narrative—a “meta-narrative,” as the experts are calling it—about the idea that this election will be rigged and stolen. And the idea is that this can all come to a head on election day, depending on how things go. That kind of meta-narrative might end up being too big for Facebook and Twitter and social media in general to deal with, right, because it’s not so much any one post or piece of content that is subject to moderation or fact-checking. It’s that everything plays into it. And frankly, so far they’ve been failing on that front in terms of really getting hold of that general narrative that the entire thing is rigged.
Gideon Lichfield: Right, so this is the problem of misinformation as we have it today, right? It varies so much; it’s so amorphous. Platforms like Facebook and Twitter have been investing a lot in AI, in techniques to identify false rumors and get them taken down before a human can see them. But it’s difficult to do that when the nature of the rumor is changing so much. Abby, this is something that you’ve written about, particularly as regards the kind of misinformation that is coming out of the QAnon movement.
Abby Ohlheiser: Yeah. I mean, one of the things that has happened as QAnon has grown—which is due in a great deal to the misinformation environment after the first weeks of the pandemic—is that as QAnon gained more and more attention, and as these platforms started to take down groups that were major circulators of misinformation, the tactics simply shifted in response to those actions.
So while it is certainly true that QAnon content has been reduced on these platforms in response to some of these more aggressive policies, it has also become harder to tell what exactly counts as QAnon content. Q, the mysterious figure at the center of QAnon, has even put up posts telling followers not to use QAnon hashtags anymore, because they make the content too easy to find. So this is a defining and repeated characteristic of misinformation, abuse, and harassment on the internet. And the people who are really, really good at making this work on these mainstream platforms have had years of practice in advance of this moment.
Gideon Lichfield: I was listening a few days ago to the Sway podcast on the New York Times, where Kara Swisher interviewed Alex Stamos. He used to be Facebook’s chief information security officer. And he talked there about the fact that in the early days of using AI to combat misinformation or propaganda, Facebook was dealing with fairly specific kinds of problems, like how to take down video of a shooting that was posted on Facebook Live before people saw it, or how to tamp down ISIS propaganda. And he made the point that when it’s something like ISIS, they have a very clear ideology and a very specific set of messages and an identity. When it’s white nationalism, they have fairly predictable, specific messages and memes. But when it’s something like QAnon, or even something like election misinformation, the claims and the kinds of people making them can be so varied that it’s really hard to train an algorithm to catch them.
Abby Ohlheiser: One of the things that also makes it so challenging right now is that the goal is not just to spread the most explicit version of these ideas on Facebook and Twitter. It’s to gain attention and a following as widely as possible. And so when QAnon tries to make a campaign reach millions of people, they’re not necessarily going to do that by talking about the core beliefs that drive QAnon, which is a very specific conspiracy theory with very specific beliefs.
They’re going to take some of the things that are part of what they believe and want others to believe, and they’re going to attach them to more mainstream ideas and causes, and then try to seed their content into those more mainstream hashtags and conversations. So, yeah, it’s partially that the ideologies and the content itself are so broad and changing, and it’s also that if the goal is simply attention, then it allows you to be a little bit more flexible about what exactly you want to get attention for, as long as it serves the broader goal.
Gideon Lichfield: Last week I interviewed the chief technology officers of both Facebook and Twitter at our EmTech conference and I asked them about what measures they have been taking against misinformation generally. Abby what’s your view broadly of what the platforms have been doing, particularly in the past year, to crack down?
Abby Ohlheiser: I came across a Pew study conducted in early 2020, just before the pandemic. They asked people about QAnon, and at that point about 23% of American adults said they knew either a little or a lot about it. But when they surveyed people again in early September, that number had doubled, and Republicans who had heard about QAnon were much more likely to say that they felt it was at least a little bit good for the country. Right?
So it’s not just that the platforms’ approach has changed. It’s also that the awareness and power of QAnon as an entity has changed. In June, based on interviews with experts, I wrote that I thought maybe it was already too late to stop QAnon. And I feel pretty good about writing that piece at this point. Since I wrote it, the platforms have taken much more aggressive actions to try to take down QAnon accounts. And I think it’s too soon to judge the extent to which that has or has not worked, because QAnon is still in the process of adapting to those changes. But it certainly seems like at this point they basically had a three-year head start to learn how to deal with the moderation practices and the changing policies of these companies. And that gives them a lot of tools to figure out how to get around these things, or to simply move somewhere else and figure out a new way into these more mainstream conversations. I mean, also, at this point the president has been repeatedly asked about QAnon and has repeatedly declined to condemn it in a way that would actually do anything to lessen its power.
You can’t really report out the answer to whether President Trump’s retweets of, at this point, hundreds of QAnon-associated Twitter accounts are something that he’s doing intentionally or not. Indications are that it’s not exactly intentional; he just kind of seems to like people who like him, which is a lot of what the QAnon movement is about. But the fact that he is doing it already gives the ideology such a platform that the enforcement actions of these companies are coming at a point when they’ve already kind of reached their goal.
Gideon Lichfield: Patrick, the platforms use a variety of methods to tamp down misinformation. Some of them are less visible, like when Facebook decides to circulate a post it thinks is problematic to fewer people. Some are more obvious, such as when the platforms flag something as misinformation or put some other context next to it to give an alternative source of information. We saw an example of that when Twitter blocked a link to an article in the New York Post about Hunter Biden, which was based on emails supposedly recovered from a laptop that Biden had supposedly left for repair. And there were a lot of questions about the provenance of this information, but blocking the story in that way kind of backfired on Twitter, didn’t it?
Patrick Howell O’Neill: That’s right. So, you know, the story was no longer the story, at least for the short term. The conversation turned a little bit away from Hunter Biden and the charges in the story, and it turned towards one of censorship and political bias. Frankly, it’s a scenario that has been expected and predicted for four years, since the WikiLeaks incidents in 2016. It’s been rehearsed and prepared for in Silicon Valley and among academics. And the particular problem is that it’s even more difficult than what happened with WikiLeaks, because it’s going through American journalistic outfits.
So what is the correct course of action for a tech company, a social media company, to take in controlling that conversation? And Twitter’s decision to block the link temporarily did turn away the conversation, temporarily, towards this Republican talking point of Silicon Valley censorship and political bias and tipping the scale of the election unfairly away from Republicans.
Abby Ohlheiser: And one of the interesting things about these right-wing claims about Silicon Valley censorship, or bias against conservative thought, is that they’ve been around for years. They’ve never really been proven true, but they keep resurfacing in conversations like this because they work.
And that certainly seems to be true with the New York Post Hunter Biden story. So I reached out to Zignal Labs, which tracks mentions of misinformation across online media: social media, print media, all that stuff. They looked at Twitter shares of the URL that Twitter blocked. Just before Twitter instituted the block, the link was being shared about 5,500 times in a 15-minute period. And immediately after the block went into effect, it jumped to about 10,000 shares every 15 minutes. So in that period of time, in which Twitter was trying to reduce the spread of something it was limiting under its hacked-materials policy, it actually caused shares of that link to essentially double. And as Patrick said, it then became this entirely new cycle about Twitter going to war with conservatives, Twitter showing that it’s truly pro-Biden, all that stuff. And then that conversation took off and kind of became its own thing, bringing this dubiously sourced article with it.
Gideon Lichfield: You almost feel sorry, maybe just a tiny bit, for the social media platforms, because it seems like they’re damned if they do and damned if they don’t. If they try to suppress any misinformation, they’re accused of bias. If they leave it up, they’re accused of being a platform for hate. Do you feel any sympathy for them?
Abby Ohlheiser: You asking me that? [LAUGHTER]
Gideon Lichfield: Yes. [LAUGHTER]
Abby Ohlheiser: The feature I wrote about how things got so bad was actually a story about listening and about memory, right? I wanted to answer the question of how things got so bad by looking back at all the voices these companies could have listened to, going back literally a decade, to make some meaningful changes that might have helped these platforms address this stuff, and at least disincentivize this from being something that clearly works so well on a daily basis.
And so I would be curious how some of the people I interviewed for those stories would feel about whether they would have sympathy for these companies. Because I think sometimes these companies get away with the perspective that they didn’t know how bad things were, or that they’re learning with the rest of us, or that they’re reacting to things as they find out about them. But if you talk to people who have been researching or experiencing harassment on these sites for years, they’ll tell you that they told the companies what was going on. So yeah, I have sympathy for any human who is in this space and trying to deal with it: journalists, people who work at these companies, researchers. But I don’t want that sympathy to occlude the fact that there were points of intervention, and there was knowledge these companies had years and years ago that they chose not to act on. And those choices, over and over again, are also a part of why we’re here now.
Gideon Lichfield: When I asked Mike Schroepfer, the Facebook CTO, last week at EmTech why they hadn’t acted sooner, given that people had been warning about QAnon for years, as Abby wrote, he said something that I thought was kind of telling. He said that they had gotten a lot of data on the harm that this kind of misinformation could cause. And so I said, “Is that it? Do you need to accumulate overwhelming data on something before you will act on it?” And his reply was essentially that they wanted to be very careful about making a judgment without consulting what they called experts.
Mike Schroepfer: A mistake I don’t wanna make is assume that I understand what other people need, what other people want or what’s happening. And so you know a way to avoid that is to rely on expertise where we have it. So, you know, for example, for dangerous organizations, we have many people with backgrounds in counter terrorism. We have many people with law enforcement backgrounds when you talk about voting interference. We have experts with backgrounds in voting rights. And so you listen to experts and you look at data.
Gideon Lichfield: One might say the experts they should have been consulting were the researchers, and particularly the women and people of color, who were being affected by all this misinformation years ago and were saying that it was harmful, but weren’t being listened to.
Abby Ohlheiser: Yeah. I mean, one of the things that kept coming up in my reporting… so, for instance, I interviewed Ellen Pao, the former CEO of Reddit, and one of the things she told me when I asked what these companies could have done better was: put leadership in place that looks more like, and has lived experiences more akin to, the people who actually use the site and who experience these issues firsthand. Because not everybody needs data to know that racism exists on Reddit, for instance. And I think that you’re right.
That comment is incredibly telling to me, because I think it points to one of the fundamental differences here: the people I interviewed for this piece talk about misinformation and harassment as things connected to much more systemic issues that were then brought online and incentivized to get worse, while others feel like they need it proven to them that any of this would happen.
Gideon Lichfield: So Patrick, what do you think we should be expecting on election day itself?
Patrick Howell O’Neill: I think that when you talk about what to expect on election day itself, there are a couple of different layers to that answer. So let’s start with the mechanics of the election itself, which so far have frankly been going pretty well. There’s nothing to suggest that anything is wrong with voting or counting up to this point. We’re 50-plus million votes in.
The thing to worry about is perception. So there’s a question of, in an election with a potential majority of mail-in votes, what’s going to happen in terms of the results. Mail-in votes, first of all, take longer to count, and typically start being counted later than in-person or early votes. That’s a function of several legal and other processes, but the end result is that we probably won’t know all the mail-in votes on election night itself. Now, how that plays into knowing the overall results of the election depends on whether it’s a landslide, whether it’s close, and what the swing states are.
A potentially bad scenario is that it’s very close, there are key results not yet reported, and in that vacuum of information, disinformation floods in, whether it’s from an actual candidate or from a foreign adversary. It’s that kind of vacuum that could sow discord, spark chaos, and lead to the actual nightmare scenarios, which could play out any number of ways, up to and including violence. But even just discord, or illegitimacy for whoever ultimately gets elected, is a negative outcome here.
Gideon Lichfield: One of the leading scholars of misinformation is Joan Donovan, the director of the Technology and Social Change Research Project at Harvard’s Kennedy School. Earlier this month, she testified about misinformation to Congress, and she wrote an essay for us arguing that it might be time to start regulating social media like… cigarettes. So Joan, in your piece you talk about Tim Kendall, who is the former head of monetization for Facebook. He gave testimony to Congress last month. And he drew this interesting parallel. Where he said social media was like the tobacco industry in that tobacco firms added things called bronchodilators, sugar and menthol, to cigarettes to make it easier for people to smoke more. And social media added things like status updates, photo tagging and likes, which encouraged people to use social media more. You made the point in your piece that there’s also a parallel with tobacco if we want to understand the harms that social media can do. Can you tell us about that?
Joan Donovan: Yeah, I think that one of the things that has prevented us from really taking on the challenge that misinformation poses is a lack of a theoretical framework that moves beyond, “but it’s just my free speech” or “it’s just my choice to share and to create misinformation.”
“It’s just my choice,” was a way in which we initially understood the sale of cigarettes and the ways in which people were using smoking in public places. No one around you had a choice, whether they were going to breathe in secondhand smoke or not. And over time, epidemiologists and others started to think more clearly about, well, what are the health risks of smoking? And who else is being harmed by these individual choices that are causing what we would call negative externalities or causing undue harm to people who do not have the choice not to smoke.
And so over time you saw arguments and regulations be put in place and be rolled out in different cities, around smoking in public, banning smoking on planes, banning smoking in movie theaters. And now it’s very much a universal that there is no smoking in public places and I think it’s important to think about the right framework for regulation of misinformation, because we’re not talking about the whole of the information ecosystem. We’re talking about the kinds of information that can potentially cause people to take on undue risk.
Gideon Lichfield: Like what, for example?
Joan Donovan: Medical misinformation in particular comes to mind. If you are barraged with messages that claim masks don’t work, that even wearing a mask increases your risk of getting coronavirus because you’re somehow breathing recycled air, these ideas cause people to change their behaviors very quickly.
Gideon Lichfield: In your piece, you also wrote about another example of misinformation doing real world damage which was around the wildfires on the West coast where there were rumors that Antifa activists had been lighting them. Can you tell us a bit more about what happened there?
Joan Donovan: Yeah. So when we study different misinformation events, we really try to get a sense of how these rumors scaled—that is, how they started to spread across the internet, who picked them up, who believed them, and then who was impacted by them. And the interesting thing about the rumor that antifascist protesters and Black Lives Matter were setting the fires wasn’t necessarily that people were going out and searching for Antifa in the woods.
It was really that folks had started to call law enforcement. They started to barrage local law enforcement with phone calls, which was making it hard for law enforcement to even do its work. And then there were instances where even the FBI had to tweet to say, hey, listen, stop spreading this rumor. So we have to think about what happens when rumors that are political in nature have these unintended effects of preventing law enforcement from being able to do their jobs in an already very dangerous situation.
Gideon Lichfield: So what could the platforms have done in the case of the wildfires?
Joan Donovan: My take on this is: if we’re in a situation where journalists are called in to debunk stories, academics and researchers are called in to look at and generate evidence of a misinformation campaign, and law enforcement and the FBI are saying this is a dangerous situation, then platform companies have to do more, proactively, to not let these kinds of rumors have that kind of negative public impact. And there are a few different ways they can go about this, but we now have several years of data that point us to some of the most noxious offenders.
The blog that really pushed this idea into action and amplified this rumor has done this several other times. And so I think platform companies have a duty of care that they need to exercise.
Gideon Lichfield: So in the case of this blog that was spreading these rumors, the platform companies should have banned posts from it or made them less visible? What exactly should they have done?
Joan Donovan: Both/and. Which is to say that if you are looking at a stretch of misinformation, and this blog or this set of accounts continues to be at the center of amplifying it, then you should take action by removing the accounts entirely. Or you can limit the spread of things that are circulating and are what is called in the business “over-performing.” That is, there’s a normal course in which pages on Facebook tend to operate, but when certain stories start to scale out of proportion and reach new audiences, that should trigger some kind of content review to ensure that it’s not harassing information, it’s not libelous information, it’s not misinformation. And so I think there are tools already available within platform companies that could be used to throttle the spread of misinformation in a more structured and transparent way.
Gideon Lichfield: Do you think it’s even possible for them to keep up? Because what counts as misinformation is obviously totally dependent on context. It keeps changing. And so they’re always, in some sense, a step behind.
Joan Donovan: Yeah. And I think this is where we need to shift our focus from thinking about social media platforms as, you know, a telephone or a radio, and really start to treat them as what they are, which is broadcast networks. We need to treat them in a way that imposes a public interest obligation, where they are required, in some measure, to produce and circulate information that is true and correct, and to act when they see those signals, which tend to be pretty strong, suggesting that something is going viral because it contains the seeds of a misinformation campaign.
Gideon Lichfield: On October 6th, Congress came out with its big report on antitrust and on the monopoly power that Amazon, Apple, Facebook, and Google each hold. Does that have any bearing, do you think, on this problem of misinformation?
Joan Donovan: I think when we go back and look at the last two decades, at least, of internet history, we have to understand what the promise of the internet was. If we think about the early internet, we weren’t forecasting the death of local news; it was quite the opposite. The idea was that anybody could become a news broadcaster, that the promise was many websites and many ways of communicating. That’s really where we were at the birth of America Online, a kind of internet that was much more decentralized. But platforms are built on top of the internet, and they have consolidated communication and information and turned it into a commodity. They’ve turned data into a commodity that is incredibly profitable. So the incentive to scale is driven by profit.
And what that means for misinformation, though, is that to deal with it, you have to cut into your profits; you have to open up the hood a bit on the kinds of recommendation and search algorithms that are driving people to stay on these platforms. And so I think that as antitrust is pursued as it relates to misinformation, we are going to learn a lot more about how misinformation is profitable and how it keeps people on these platforms, especially the kinds of misinformation that are conspiratorial, or that draw people deep into what we now call rabbit holes.
Gideon Lichfield: Some people think it’s not even possible for a platform like Facebook to really effectively fight misinformation. It depends so much on content being shared that if it were to really clamp down, it would undermine its own business model.
Joan Donovan: You know, these are billion-dollar companies. We’re not dealing with small markets at this stage. I think one of the things we can look to smaller apps or other kinds of platforms for is a duty of care to the communities that they serve. And the idea that a platform can be everything to everyone and also serve and protect its communities, I don’t think is true any longer. And what’s interesting about this moment isn’t necessarily that anonymous actors and folks on the margins are using these technologies in really nefarious ways. We’re talking about the weaponization of social media by foreign governments, by marketers and grifters, by political operatives, by people who have some amount of resources and are turning the openness of these platforms into vulnerabilities in society to further their own profit or their own political ends.
And so we have to understand that the problem lies not necessarily with the vision of what the technology is supposed to do, but with what happens when you give this technology to already powerful people, how they weaponize it and use it to essentially destroy the trust that has been built up with other industries, including and especially journalism.
Gideon Lichfield: Going back to the piece that you wrote, you drew the parallel between misinformation and secondhand smoke. In the case of tobacco, we dealt with that by imposing high taxes on it. You’re not suggesting we tax misinformation. So what do you think is the role for government in regulating social media companies?
Joan Donovan: Yeah. I think that there needs to be new policy. Of course, everybody is calling for a revisioning of section 230, which has to do with the roles and responsibilities of platforms and other internet companies.
It’s the rule that says, essentially, that you should get rid of things that are noxious, like child pornography, but that the company is not going to be held responsible for things that people do on its software. If someone sets up a gambling ring through email, you know, Gmail is not responsible for that. And I think there are ways in which 230 facilitated the growth of these platforms without anticipating where we have ended up, which is essentially that different kinds of elites have taken over and are using them in these dangerous ways.
When we have the policy conversation, we need to focus on the harms, and we haven’t done so yet. Most of the time the policy conversation gets wrapped up in legalese around free speech and whose speech matters, but we need to shift and understand the true costs of misinformation. That is: who are the ones that have to repair the damage caused?
Gideon Lichfield: That’s it for this episode of Deep Tech. This is a podcast just for subscribers of MIT Technology Review, to bring alive the issues our journalists are thinking and writing about.
Deep Tech is written and produced by Anthony Green and edited by Jennifer Strong and Michael Reilly. I’m Gideon Lichfield. Thanks for listening.