Sandra Peter, Kai Riemer and Jevin West
Misinformation, fake news and elections on The Future, This Week
This week: Jevin West joins us to discuss disinformation on social media in the run-up to the US election.
Sandra Peter (Sydney Business Insights) and Kai Riemer (Digital Disruption Research Group) meet once a week to put their own spin on news that is impacting the future of business in The Future, This Week.
The stories this week
05:46 – How social media platforms try to curb misinformation
Our guest this week
Associate Professor Jevin West, Director of the Center for an Informed Public, University of Washington
The Center for an Informed Public
The Election Integrity Partnership
Uncertainty and misinformation: what to expect on election night and days after
Other stories we bring up
Dressing up as hand sanitiser this Halloween
‘Scary creepy good’ Halloween movie: The Social Dilemma
Our previous review of The Social Dilemma on The Future, This Week
Tesla’s self-driving feature is ‘scary as hell’
Our previous discussion of autonomous vehicles and self-driving cars on The Future, This Week
ABC’s AR app shows how one town survives and adapts to intense bushfires and severe storms
Apophis ‘God of Chaos’ large asteroid heading towards Earth
NASA’s terrifying Halloween posters
Micro-targeting of ads has become scarily detailed
Our previous discussion with Jevin West on The Future, This Week
What global elections have taught Silicon Valley about misinformation
Researchers following fake news
President Trump is the single largest driver of the infodemic
Our previous discussion of Twitter banning political ads on The Future, This Week
Facebook widens ban on new political ads close to the election
MIT research on putting warning labels on fake news
Twitter blocks links to NY Post article
Twitter changes how retweets work to slow misinformation
Ocean Spray, TikTok, and accidental influencers
Teens and TikTok conspiracy theories
TikTok tries to stay ‘apolitical’ ahead of US election
Follow the show on Apple Podcasts, Spotify, Overcast, Google Podcasts, Pocket Casts or wherever you get your podcasts. You can follow Sydney Business Insights on Flipboard, LinkedIn, Twitter and WeChat to keep updated with our latest insights.
Our theme music was composed and played by Linsey Pollak.
Send us your news ideas to sbi@sydney.edu.au.
Thank you to Ross Bugden for “♩♫ Scary Horror Music ♪♬ – Haunted (Copyright and Royalty Free)”
Dr Sandra Peter is the Director of Sydney Executive Plus at the University of Sydney Business School. Her research and practice focus on engaging with the future in productive ways, and the impact of emerging technologies on business and society.
Kai Riemer is Professor of Information Technology and Organisation, and Director of Sydney Executive Plus at the University of Sydney Business School. Kai's research interest is in Disruptive Technologies, Enterprise Social Media, Virtual Work, Collaborative Technologies and the Philosophy of Technology.
Jevin West is an Associate Professor in the Information School at the University of Washington and Director of the Center for an Informed Public.
We believe in open and honest access to knowledge. We use a Creative Commons Attribution NoDerivatives licence for our articles and podcasts, so you can republish them for free, online or in print.
Transcript
Disclaimer We'd like to advise that the following program may contain real news, occasional philosophy and ideas that may offend some listeners.
Kai So Sandra, what is happening?
Sandra Well, it's almost Halloween isn't it?
Kai And in 2020 that should be truly scary, given what the year has given us even without Halloween so far.
Sandra Indeed. So the question is, what's left for Halloween?
Kai What is scary, what scares you this Halloween?
Sandra It scares me that the Halloween costumes have now turned to things like hand sanitiser. So you can go dressed up as a bottle of Dettol.
Kai Well, in a time where everyone is wearing masks already, people have to come up with something. So what else is scary?
Sandra Well, usually Halloween is the time for scary movies. And I had a look at the recommendations for scary movies this Halloween. And the Sydney Morning Herald suggests "scary, creepy good, The Social Dilemma".
Kai Oh well, we've been ahead of the times having talked about The Social Dilemma a few weeks ago. It is a scary movie, though, given how well it documents how social media draws us in, keeps us engaged and manipulates us, #SurveillanceCapitalism.
Sandra It is truly scary when the scariest movie of the year is a documentary.
Kai You're still doing the voice.
Sandra I am still doing the voice.
Kai Well, it is a scary year. So the documentary about 2020 will also be scary. I have one which is an update to last week's story. We mentioned last week that Tesla was releasing its full self-driving car feature to a select few customers. Apparently that has now happened. And it is very much a beta version. And Tesla, being a tech company, you can see what they're doing, because tech companies usually release software as betas. But this is different. So The Verge says "Tesla's "Full Self-Driving" beta is here, and it looks scary as hell". Because using untrained consumers to validate beta-level software on public roads is actually dangerous. So this is not your ordinary software. This is cars doing stunts in cities now. Previously, the autopilot you could use on a freeway, barricades left and right, no people on the road, or you could summon your car in the parking lot. This stuff happens on roads where there are people.
Sandra Scary, but we've done autonomous vehicles last week. There was a scary new app from the ABC, suggesting that we get prepared for extreme weather and see how a town survives intense bushfires and severe storms through AR, augmented reality, a roadmap to our future.
Kai Well, you've just done it on your phone, you put the town in the middle of the studio on the table. And it looked kind of scary. All I wanted to see was the fire burn hotter and the town go under. But apparently it shows you how the town survives and becomes more resilient. But here's one from NASA: an asteroid named Apophis, I kid you not, after the Egyptian god of chaos, which will make a pass of the Earth in 2029, has just been found to be speeding up, which means in 2068 it could hit the planet.
Sandra Scary how hard that is to pronounce. But what does that have to do with the future of business?
Kai Well, if it hits, no future for business.
Sandra So we'll return to the story in 2068.
Kai In Season 102 on The Future, This Week.
Sandra But NASA did release some terrifying Halloween posters, and they are an absolute scream. We'll put them in the shownotes, since this is a podcast.
Kai Another story that scared me this week, in a time of pandemic and so close to the US election: MIT Tech Review reminds us just how sophisticated micro-targeting of advertisements on Facebook has become. While we discussed Cambridge Analytica at the time, and their supposed ability to micro-target the electorate and manipulate users into voting a certain way, those features have become standard practice on Facebook in the meantime.
Sandra You're right, in the run-up to the elections and in the midst of a pandemic, we are constantly bombarded with misinformation, disinformation, sometimes downright lies, on all the social media platforms that we access, whether that be Facebook, TikTok, Reddit, Twitter or YouTube. So we thought that on this scary Halloween, it might be a good idea to have a look at what social media companies, and we ourselves, can do to counter this misinformation during the pandemic and during the elections that are happening now in the US, but are about to happen in many places around the world.
Kai Maybe this is a little bit too scary for the two of us. So maybe we should get help and dial in a friend.
Sandra I think we should call Jevin. Jevin West is a friend of the show. He's the Director of the Center for an Informed Public at the University of Washington. He studies misinformation in science and society, and also wrote "Calling Bullshit: The Art of Skepticism in a Data-Driven World", so I think he can help.
Kai Well, let's give him a call.
Sandra This is The Future, This Week, from Sydney Business Insights. I'm Sandra Peter.
Kai And I'm Kai Riemer. Every week we sit down to rethink and unlearn trends in technology and business.
Sandra We discuss the news of the week, question the obvious, explore the weird and the wonderful and things that change the world.
Kai Hello, Jevin. Good to hear you.
Jevin Good to hear you too, Kai and Sandra. It's ah, it's been too long.
Sandra Well, welcome back.
Kai Welcome back to the podcast.
Jevin It's good to be back. But I wish I was there in person.
Kai Well, that will be a while, I'm afraid. We're a big island at the end of the world, and we're socially distancing by not letting any tourists into the country.
Jevin That's right. More countries should be talking about that geographic distancing as much as the social distancing. You're lucky right now, if anyone's lucky in a pandemic.
Kai So we're here to talk about misinformation, disinformation, fake news, infodemics, and also the US election which is coming up so very soon.
Jevin Ah, too soon, in my opinion. We're staying up pretty much around the clock, trying to track these kinds of things. And it's like playing a game of whack-a-mole that never ends. It just keeps going and going and going. So it'll be nice to see us hopefully get through this November 3 day.
Sandra Given you've written a book on calling bullshit, and that you run the Center looking at misinformation and disinformation, we were hoping to have a conversation where not only do we clarify a bit what these terms mean, but also have a look at how they spread around the internet. But then what can be done about it? If we take a look towards the future, what can platforms do about it or other organisations? And what can we as citizens do? So maybe starting out with what exactly these terms mean, because for many of us disinformation and misinformation sound kind of like the same thing. But they're not quite, are they?
Jevin They're not quite. The big distinguisher, at least in my world, is in the intention. So when we talk about disinformation or disinformation campaigns, we're usually talking about purposeful, misleading, deceitful information. And that is what distinguishes it from misinformation, which can be the kind of misinformation that happens like the game of telephone: it starts with a rumour with no nefarious intent, gets garbled up, and pretty soon becomes just plain false. That kind of stuff happens a lot. In fact, I think it probably happens more, but it's the disinformation out there that concerns us, because it tends to come in collections of articles that are strategically placed. It's not just one individual news article or Facebook post that you can refute. It's sometimes even a collection of true things. And so it's those kinds of things that are hard to identify, but we have plenty of examples, so we're getting better and better at identifying them. But as we get better at identifying them, those that are pushing these kinds of campaigns get better at sort of sidestepping those interventions. So it's a game of cat and mouse for sure.
Sandra Can you give us some examples of some of these campaigns?
Jevin Yeah, so in the 2016 election, there was a lot of attention around disinformation coming from foreign actors, at least when it comes to the US election. And we see foreign interference, at least in the information space, across the world. Many countries are involved, it's not just one country. But in this particular election, at least in the US in the 2020 election, we've seen a lot of efforts coming from domestic sources. So these are campaigns that a lot of times involve a meta-narrative. So these narratives around mail dumping or mail harvesting or ballot harvesting, and then those narratives get fed with quote, unquote, evidence. And a lot of times these pieces of evidence are images where intent is misattributed, so even though the picture has someone that no one really knows, the writer will say, 'Oh, that's a Democrat or a Republican'. A lot of times these images don't even exist at that time period, they're from other elections or even other countries. So you have misframed information. Of course, things are exaggerated. So these claims are pushed out strategically: you have high-end influencers posting sort of at the same time, which is quite rare, and sort of providing that misleading kind of claim. But I mean, the disinformation campaigns from 2016 were campaigns of actual individuals, like in the, you know, Internet Research Agency in Russia, which had individuals pretending to be real Americans. Like, I think it was Jenna Abrams, who posed as an American girl with 70,000 followers. She was posting things that most American people would like, listen to, and then all of a sudden throw in, you know, a tweet about some racial issue. And that was happening while bots were pushing things, while ads were being bought, hyper-focused on individuals in that case, to just sow confusion and distrust. And so these campaigns typically are multifaceted, multi-platform, but they have these meta-narratives, this meta-narrative that's being pushed over and over of mail fraud and delegitimisation more broadly.
Kai So is it fair to say that things can start out as misinformation and then they're being appropriated into a disinformation campaign? I read that there was this photo of mail ballots being dumped, and apparently, it was empty envelopes from the 2018 election, but then it was reported as ballot dumping, and then became part of this disinformation campaign.
Jevin Exactly. I mean, you said it exactly right Kai. That's more common than not. Where pieces of misinformation, become fodder and ammunition for these disinformation efforts. The image itself is true. It's not even a doctored image, or a deep fake of any kind, a video, you know, showing something that didn't happen. It actually could be true, but then it's used as a way of pushing, again, that narrative. So misinformation can be sort of hijacked for efforts of purposely deceiving.
Sandra And I think it's important here to note that, although we're talking about the US elections, because this is happening, just as we're coming up to Halloween, and the elections in the US, this is something that happens around the world. So we've seen Brazil and India have huge misinformation and disinformation campaigns. We've seen elections in France and in Nigeria affected by this. And we've seen other topics as well being plagued by misinformation and disinformation, in particular news about the pandemic, about treatments, about COVID in general, but also about technology, around 5G, and so on.
Kai And so in our observation, we cannot really talk about misinformation, disinformation, without talking about, how does it actually spread? What role does social media play in getting those messages in front of lots of people?
Jevin I mean, social media is one of the prime movers of this. If you took away social media, that would put enough friction online to almost stop it. It would be the lockdown of misinformation, like you can lock down for the pandemic. So clearly, social media plays an incredibly important role, partly because these platforms are designed to spread this kind of stuff. They're designed for user engagement and for gluing our eyeballs to the platform, and what glues our eyeballs to the platform like a good old rumour or conspiracy theory? And so these social media companies have, you know, first of all, made a lot of money off of conspiracy theories and rumours and false information. And I kind of consider the social media platforms the biggest bullshitters of them all, if you define a bullshitter as a person or entity that doesn't care about truth or falseness. I mean, it doesn't have any sort of allegiance to either. It just wants to grab your attention. So yes, social media platforms are absolutely at the root of how these things spread. And they can spread so incredibly fast. There was a Facebook post, and it came from an individual, a chiropractor, that said, 'hey, look, 6% of the deaths from COVID are really the only ones caused by COVID, and the CDC is hiding this.' And they missed, of course, this issue of comorbidity: that if you die of pneumonia, and it's caused by COVID, you still died because of COVID. But anyway, you would think that that would have just stopped, but within a very short amount of time, it exploded, it got retweeted, picked up by QAnon influencers, and landed all the way on the Twitter account of the leader of the free world, here in the United States. So things can move so fast, and it's cheap to do it. It doesn't take a lot of sophistication. And so those who really do want to spread misinformation and propaganda, or just to simply make money, don't need a lot of investment to get things to spread, they just use this powerful tool of social media. So the platforms are playing a huge role in all this, no doubt, even though Zuckerberg, of course, was quoted shortly after the 2016 election saying, 'Ah, you know, I just can't see possibly how, you know, Facebook could have affected any sort of decision making or done any of the opinion hacking that people were talking about'. But they are. Now, to their credit, I guess, and I don't want to give them too much credit, because they could be doing a lot more, they are finally starting to do some things. And I think it was the pandemic that really led to them becoming more consistent, and writing policies that were much more specific, rather than really broad ones where you don't know whether you should take something down or not. So that's sort of led to recent policies and preparation for the election itself. In fact, you know, the US election, of course, is a big one. And they've been using these things from these other countries, where they've learned what works and what doesn't. So there are some positive things. And it's not like social media is going away anytime soon, even with possible antitrust legislation coming down the pipeline, as it has for Google in the United States. But we definitely need to be pulling them at least to the table, whether it's formal regulation or informal regulation.
Kai So Jevin, you already touched on content moderation. We want to make a distinction between content moderation, right, letting stuff onto the platform, and content distribution. You said these things spread incredibly fast. And of course, it's users retweeting or re-sharing on Facebook. But do we know to what extent the algorithms that the platforms use for content distribution play a role in spreading this type of content?
Jevin The algorithms themselves are black boxes, and I know that there are a lot of efforts to open up those black boxes, or to force them to be opened. But even if you could open those black boxes, I think it would be difficult even for the designers and engineers of those black boxes to understand how things are working, because those algorithms are trying to solve a fairly simple optimisation problem. That is: let's find the text, the phrases, the colours, the designs that are going to keep me glued to the platform. And it runs, you know, gazillions of A/B experiments all the time. And so once they figure out what keeps us attached to the platform, then they certainly have a pretty good idea how to spread those things to the next person, and to the next person. Again, the details are unclear, but the algorithms used aren't that different from the same algorithms used for open-source recommender systems, and, you know, engineers out there have a pretty good idea of how they work. So yes, they clearly are playing an important role. What's interesting is there have been some efforts to get back control of our news feeds. For example, on Twitter, it drives me nuts that you can't chronologically order or do other kinds of ordering of your feed, you have to sort of follow Twitter's recommended list, and even if they allow you to switch it, they switch it back 24 hours later. And it's because they want control of that particular feed. And so it's one of those things that, you know, when we talk about regulation, maybe there'll be some rules saying, if a user wants to keep it chronological, they can keep it chronological, for crying out loud. And then maybe those kinds of things would reduce somewhat of the spread. But I think there's a lot to still understand here. The problem is, researchers don't get the kind of access that, of course, the engineer does, or internal researchers. And I think that's got to change too. So when people say, well, what can we do, what can we force these social media companies to do? I know they have business models and business plans that they have to protect, their secret sauce has to be protected to some degree, but I think we should allow for a little more peering into what's going on, for researchers specifically, so they can understand what's going on better.
Kai And then of course, to study what type of content spreads the most, because it seems like outrageous content and misinformation and rumours are the type of stuff that keeps us engaged. But it's also content that, you know, comes from certain users, like President Trump, for example. There was just recently a story in The New York Times that reported on research by Cornell University which showed, after analysing 38 million English-language articles, that the main driver of the infodemic around coronavirus was actually Donald Trump himself, or him being mentioned in those articles, right?
Jevin Yeah, I mean, it's hard to deny that he's a superspreader. I mean, with the kind of influence and the number of followers that he has, it's not too hard to be a superspreader. But it makes it even easier to be a superspreader if you don't have as much of an allegiance to truth as maybe a scientist does, and I guess maybe even sometimes regular citizens. So he certainly has been spreading a lot of false information. I mean, he's retweeted content from QAnon followers, which to me, among many things, is kind of crossing the line of information integrity, and it's those kinds of things that can quickly get into people's feeds. And then the algorithms, of course, weight big influencers like that, so I think that's problematic. And there is content that does spread more easily. It is the kind of stuff that is emotion-evoking, the kind of stuff that grabs your attention because it sounds so crazy. It's man bites dog versus dog bites man; the dog bites man isn't as interesting, and so people will click on the former version. As humans, we're prone to that, and those algorithms and platforms absolutely know that. And so these are the kinds of things that we have to watch out for as information consumers. But even at the regulatory level, as these discussions increase, there should maybe be some discussions about this. I mean, certainly there are discussions like this when it comes to health care, that there are certain chemicals that we don't want in our food, in our water. And, you know, maybe we want the same kinds of things in our information environments. There is certain content we have to watch out for, and I do think that algorithms are great at spreading it, including those superspreaders.
Sandra You mentioned before that some of these companies are trying to do something, especially now as we're coming up to the elections in the US. So I thought it might be worthwhile having a look at a few of these companies, you know, the big ones, Facebook, Twitter, YouTube, TikTok, and what they're specifically doing, and maybe then thinking about what we can do to curb the spread of misinformation, disinformation. So Facebook, for instance, has repeatedly claimed that it's doing a lot more this time around by taking down accounts, or tightening policies on content moderation, or even having an ad database to increase transparency as to who's doing what on Facebook, and even having campaigns to encourage people to vote or taking down posts that encourage people not to vote. Do you think some of these changes have a meaningful impact on what's going on, in terms of misinformation, disinformation?
Jevin I mean, the delta might be quite small, but at least it's positive. It's hard to tell what the effect is, since they won't allow researchers to peer into the full ecosystem. It must be having some effect, although it is kind of interesting, because when Facebook has been reporting some of these stats, they've been saying, 'Hey, we took down 120,000 pieces of content, we took down several thousand accounts'. That's so tiny compared to their whole ecosystem. And certainly when it comes to even just their fake accounts: last year, when they had to be yanked into the congressional hearings, they admitted at the time that they had deleted, in a six-month period, over one and a half billion fake accounts, or somewhere in that number, in the billions. When they do that, but then also say, 'Well, we've taken down 120,000 pieces of content', it seems like, yeah, there needs to be even more. And they've hired thousands of fact-checkers, and that's good, because it certainly isn't going to be figured out through automated means. And they've done things like put banners up. We've looked at those banners, and we've done some surveys and papers and asked, 'Are these helpful?' If you search coronavirus, for example, you'll see a banner that says, 'hey, you should probably go to the CDC or the World Health Organisation before, you know, following some crazy post on our platform'. It doesn't say it that way, but that's kind of what they're saying. They're better than nothing. And I do like to see these things, and I will say, among the criticisms I've had for these platforms, especially Facebook, these things are the kinds of steps we want to make. And the policies that they're putting out are getting better. They really made a leap during the pandemic, when they had very specific kinds of content that violated their policies, and then they'd take it down. But there are times, of course, even when content violates their policies, that it still stays up. In the ad world that makes all their money, they have certain policies, but I have some colleagues that have done some work showing how easy it is to put up ads that violate those policies, by embedding the content in images or other things. And actually we've been tracking Google, for example, looking at when they are placing ads around the election. So they have a set of rules, for example, around ads that push false information about voter integrity, and we've identified such ads and we've even told Google about them. And they, to their credit, have taken some of these down. But it shows that even when you have these policies, it's a huge problem. The scale, of course, is astronomical, and so they can't track all these things. But it shows that even just the little things that we're seeing, and other researchers are seeing, they're still missing a lot of these things. So they're moving in the right direction. It's just not enough.
Sandra To be fair, we should note that Facebook made about $70 billion from advertising last year. So the incentive to remove all of those might not quite be there. And some of these policies are not exactly what they seem, right? Back in August they said they'd have a ban on new political advertising in the week leading up to the election. Well, many people have voted already. And it's also just a ban on new political advertising, not on political advertising altogether. So if you got it in before the last week, then, you know, lucky you.
Kai So Jevin, I also wanted to come back to some of those measures, like the labelling you mentioned, for example. There have been some voices saying it might actually be counterproductive to start labelling some content. Because if you can't label all of the questionable content, then with some content having a label on it, those that don't have a label might now be regarded as true or, you know, legit content.
Sandra So this was a study published back in March by MIT, which said that labelling of false news has a detrimental effect, the implied truth effect, where unmarked and unchecked, but demonstrably false, stories now appear legitimate because all the other things are labelled as untrue or false.
Kai So how do you go about this?
Jevin This is a big concern of ours, something we talk about, actually, quite often, when we talk about design interventions on these platforms. And it's hard, that's why, you know, Facebook's got a big challenge about how to do this. Because if you put a big red box around a piece of content, and you say, 'this could be misleading', or 'this is misleading', that can actually lead to more engagement. In fact, from a general standpoint, we have to be careful in the research world and the journalistic world about not amplifying things that haven't been amplified yet. It might be false news, it might be fake news, but it might be something people have never heard of. So let's make sure we don't put that out, and have this person now know, and be curious about who this individual is. We care about this both in the design world, and also when talking about it from a research perspective. We just wrote a post recently with some guidelines, possibly for journalists, on what to look out for when amplifying or not amplifying content around election misinformation. And so the same goes for these platforms and for these design changes. And like you said, if you do have these labels, and then something's not labelled, then the implied truth phenomenon could be something to be concerned about. So yeah, it's difficult. And also, you know, there are simpler things to do, because that's a hard task, by the way, I mean, there's just infinite content. But there might be things that are even easier. So look at repeat offenders, for example, those that are constantly putting things out but are high influencers; most of the social media platforms haven't really touched those. I mean, Facebook did remove a lot of QAnon accounts, but those QAnon followers have already shifted onto other platforms or into other groups. And so it's this information warfare that's going on, and these groups are adapting.
Kai So Jevin, you mentioned QAnon, and also your guidelines for journalists and what people can do. Sandra and I have actually been discussing this on our own podcast quite recently: whether or not we should mention things like QAnon and give them airtime in debunking them, or whether it's better to not discuss and unpack what this phenomenon is at all. So what's your advice for us?
Jevin Gosh, it's been changing. Because when I ran into the QAnon groups in about 2017, we were looking at some of these groups online, and they surfaced as a result of some of the Pizzagate conspiracy theories that were going on during the 2016 election, and various other related conspiracy theories. I would have never guessed that that group would have risen to the influence levels that they're having right now, not just in the United States, but across the world. It's this massive movement that's landing in the physical world. I think it's over 70 countries now that have had in-person protests or some kind of major event with QAnon sympathisers. We have a congressman and a congresswoman in the United States that are sympathisers, one from Georgia who will be moving to DC and representing, as a political leader. There are mayors of cities, and even one of the people running for our governor referenced QAnon in a recent debate. So the rise of QAnon tells me that we can't ignore these things. There are infinite rumours, so we have to decide when not to amplify them. So I don't have a good answer. Because at this point, I would say we probably should have done more with QAnon. But there are many groups that people have never heard of, and we don't want to turn people on to them, and we also don't want to give them legitimacy either. But, boy, this one's serious enough that I think it has to be addressed.
Kai This one, as you said, is so outrageously ludicrous that in the beginning it was very hard to imagine that this should ever become a thing. And it's still ludicrous. And it is quite a phenomenon that so many people actually seriously seem to be buying into this. But I want to take the conversation to another issue, which is that platforms, as we said, are doing something. They're trying to moderate content. They're trying to take down accounts. They're trying to take down content that peddles mis- or disinformation. But it can land them in deep trouble, as we've seen with Twitter recently. I refer, of course, to the takedown of a story that broke in the New York Post around supposed email evidence against the son of Joe Biden, which was initially taken down, so it couldn't be reposted on Twitter, and then Twitter changed its mind and allowed the content back. So there's this tension between content moderation and trying to do the right thing on the one hand, and accusations of censorship and curbing free speech on the other. What's your advice to platforms? How do you actually deal with that?
Jevin Well, I think this is where they need to spend more time on very specific policies. The more specific they are, the more they can defend them and the more they can be consistent. Now, what's interesting about the Biden takedown: that particular post did violate their policies at the time, and they received so much pushback, they did sort of bring it back. And I think, in retrospect, their policy wasn't prepared for that particular incident. And so then they're not looking consistent, they look like they're censoring, just by the perception of saying, 'Oh, whoops, yeah, we shouldn't have covered that'. It gives room for attack, for those political leaders who are making this claim and making this argument that the social media companies are censoring them. Because it's like being sued. If I was to sue you and say, you know, 'you said something bad about the Calling BS project or whatever', and then you end up taking something down from your podcast, even if it wasn't something bad, but you just didn't want to deal with me and my lawsuit, then I can say, 'Aha, see, look, you are guilty, and you are part of this censorship campaign'. So they're in a tough place right now. And this is where they should be begging for regulation, to be honest. They should just be saying, 'hey, you tell me what can and can't be taken down'. But again, we're dealing with information, we're dealing with politics, we're dealing with so many grey areas, subjective areas about what's true and what's not. It's going to take us some time to lay down some of these laws, if you want to call them that. And these can be informal laws of information. It took a long time, you know, centuries, to lay down the laws of government for the nation. I think we have that kind of work to do now when it comes to the information space.
Kai And it also becomes a bit of a murky issue when those who get to regulate those platforms and those who peddle disinformation are not necessarily disjoint groups, right? To put it mildly.
Jevin To put it mildly, Kai, for sure. And what's also interesting that's going on right now is this splintering, not just of groups and communities within these individual platforms, but platforms themselves are splintering. So many of your followers might know of these sort of alternative Twitters that are evolving, Parler being one, and Gab another. And it'll be interesting to see whether they really take off. They have millions of users at this point, and the content, of course, is much more loose, and you see a lot of conspiracy-theory kind of discussions there. But the splintering of social media platforms themselves could isolate groups even more so. And that's something we should be on the lookout for.
Sandra It's not just the splintering of platforms like Twitter into similar types of platforms, it's also the differences between platforms. We've talked a lot about Facebook, and we've talked a lot about Twitter. But there are also platforms like TikTok that not only have a very different type of content, which can't be moderated in the ways that Twitter and Facebook are, but also talk to a very different audience. A lot of people on TikTok are teenagers. There have been studies showing that up to 54% of American teenagers get their news from TikTok. So what happens when we have platforms where the content is quite different, where regulation that would be in place for, say, something like Facebook or Twitter just does not apply to the type of content that we see on TikTok?
Jevin It definitely doesn't. And it's crazy to me to think that there is such a thing as news on TikTok. TikTok is a completely bizarre phenomenon. You're right, it is very different. And the same kinds of laws won't work for it, just as they won't work for platforms like WhatsApp within the Facebook family of social media companies, because of the encryption issues. So they're all different beasts. And you can even look at the policies that have been put down by Instagram and by WhatsApp and Facebook, and even just all the social media companies that Facebook owns, and they all have varying policies about what they consider fake news and misinformation that's going to get taken down. So even within the same ownership, there's variance. Now, you know, look across platforms. And TikTok is a different beast. I don't fully understand it yet.
Kai You said earlier, maybe platforms should be forced to allow users to reorder their feed chronologically, right? That assumes that I follow people, I get their messages, the messages travel along the so-called 'social graph', and then the algorithm comes on top and rearranges stuff to drive engagement on the platform. TikTok has completely inverted that, right? It doesn't really matter who you follow, the algorithm will just give you what engages you. So it's very much about creating content that is engaging. It's very short, people watch, you know, four videos a minute. And a lot of the misinformation is packaged up in just content by everyday people, where, you know, someone raps to some music and spreads misinformation around voter fraud, mail-in ballots and things like that. So it's such a different phenomenon.
Sandra The other concerning thing is that with something like Facebook or Twitter, it's quite clear how political advertising, for instance, works on the platform and how that advertising is tied into the business models of a company like Facebook. Whereas on something like TikTok, whilst political ads are banned altogether, campaigners still pay individuals on the platform who are part of these Hype Houses to do 15-second engaging dance videos that talk about COVID or about elections, or any other topic, and these deals really happen outside of the platform, so regulation becomes even more difficult.
Jevin Yeah, isn't that interesting? You're right, you can ban political advertising. It sounds good, and it sounds like you can now be an arm's length away from a regulator, but you can get the same effect with the influencers. What's interesting about TikTok is that one of the more popular, more viral videos that have gone around over the last several months is from an individual from my hometown, a person whose truck broke down when he was, you know, late to work. And so he got on his longboard, and turned on his video, which he ended up pushing onto TikTok, and he was drinking an Ocean Spray drink. And this thing went so viral that Ocean Spray then used that video as probably their biggest ad campaign of the year. And so it's not even that there was an advertisement within that video, it's just these creative new ways of getting brands across TikTok. So that's why TikTok is still going to find lots of ways of making money. But what's interesting, in terms of those efforts, or those changes on platforms that I think are truly positive, are things like Twitter's most recent change to require some sort of context additions to retweets, where you can't just sort of see something and re-tweet it, you need to think more and share less. You sort of put a little friction in the flow of information, and in doing so slow it somewhat.
Kai That's really interesting Jevin, right, because while Twitter moves in that way, TikTok is going radically the other way, right?
Jevin Exactly.
Kai You just have this constant stream of videos, you don't have to do anything. And they're so fast and snappy. And once you've heard the same political message around mail-in ballots and voter fraud 6, 7, 10, 12 times, you start believing that there's something to it, because it comes across as so innocuous, with dance moves and as entertainment, basically.
Sandra And repetition does seem to equal validation on many of these platforms.
Kai So it'd be very hard for TikTok to introduce friction, given that the whole idea of the algorithm and that feed is built on pace, you know, less friction and things like that. So very interesting, again, to see that what might work on one platform is not suitable for another, which again makes regulation so much more difficult.
Jevin I totally agree. And you're right, it's just the opposite. And like you said, Sandra, it's just that repetition, it's like marketing on drugs, amplified even more. Yeah, TikTok is a different beast.
Sandra But, Jevin, because this is a scary Halloween story, I think we should leave people with some hope and something they can actually do, because it's quite easy to feel helpless in the face of so many problems. You run the Center for an Informed Public: what can the public do? What can we do as citizens to actually help mitigate some of these issues?
Jevin We are all responsible for slowing the spread of misinformation, and that requires calling it out sometimes, but also, you know, being careful not to spread things ourselves, especially the kinds of things that are emotion-evoking. We're doing a bunch of things in our Center right now, at least around major events like the pandemic and the election. We're in a collaborative project called the Election Integrity Partnership with Stanford University and a couple of other labs, where we're monitoring, in real time, misinformation around election integrity. It's a nonpartisan effort, because we look at things from the left and the right. And the public can do that too. They can actually, you know, be sending things in to some of these platforms, or you can send them to fact-checking organisations. Think of your role as an individual as being the inoculator of your social network. It's easier said than done, of course, but be a little bit more proactive when you see things. There are all sorts of things that we can do to improve our online discourse, and it kind of just needs to start with all of us. There's not one magical pill or one policy that's going to do it, or some new AI that's going to come along and solve this misinformation problem. Ultimately, it's going to come from us. But I will say, ending on an optimistic side, I do think that we will get through this. You know, it seems like forever that social media has been around, but it hasn't. When you look at it in terms of human history, a history that's seen major technology revolutions, these big leaps in information technology in particular, we've got through them. It took us some time, it took us a little bit of learning how to use these powerful tools, and I think we are learning. And I think that there's movement going on right now. There's activity in the academic world across disciplines thinking about this. There's a lot of philanthropic money being thrown at this. Governments across the world are starting to pay attention. Citizens, more generally, are paying attention. You know, journalists are being more careful about what they're doing. I attended a high school event, and there were 150 high school students presenting about these misinformation topics, and they were so passionate about it, and they were talking to their parents about it. And seeing those kinds of things inspires me. So I think we have some rough patches ahead. We've seen the effects of an infodemic around this pandemic and these elections. And we've got work to do, but I think we'll get there.
Kai It's really interesting that you're making the connection there between the infodemic and the pandemic. And what I hear you say, looking at the various measures that are being taken, is that the platforms are the ones that are almost in charge of, you know, the tension between lockdown and opening up, between content moderation and censorship and allowing free speech. We have entities like your Center doing almost the contact tracing in the infodemic, trying to trace where things are happening...
Jevin Yep
Kai To get rid of it. And then all of us need to wear, you know, our intellectual facemasks. We have to think twice before we share, think twice before we retweet something, and we have to be responsible and do our own thing and socially distance from the misinformation. So it's really interesting.
Sandra Hence this podcast.
Kai Absolutely!
Sandra And hence all the links that we'll add to the work that Jevin is doing, and all the studies that we mentioned in this podcast.
Jevin I think it ultimately comes down to all of us, all organisations, all institutions have a role in this.
Kai Again, much like with the pandemic, even in the infodemic we are all in this together.
Jevin Yes!
Kai Jevin, it's been a real pleasure to hear from you and your insights and about the great work that you're doing in trying to help curb the spread, so to speak.
Sandra Thanks again for talking to us today.
Kai Thank you, Jevin.
Sandra And I'm sure we'll be back talking about this because as you've said, we're in it for the long run.
Jevin Thanks for having me, you guys. It's always fun to talk with you.
Kai And thanks to all our listeners, go leave a comment, leave a rating.
Sandra Thanks for listening.
Kai Thanks for listening.
Megan Sandra Peter is the Director of Sydney Business Insights. Kai Riemer is Professor of Information Technology and Organisation here at The University of Sydney Business School.
Sandra With us every week is our sound editor Megan Wedge.
Kai And our theme music was played live on a set of garden hoses by Linsey Pollak.
Sandra You can subscribe to The Future, This Week, wherever you get your podcasts.
Kai If you have any weird and wonderful topics for us, send them to sbi@sydney.edu.au.
Kai So what else is scary?
Sandra We're gonna do this voice for the rest of the episode?
Kai No, we don't have to use this voice for the entire episode. But isn't it Halloween? Shouldn't we be speaking in a spooky voice?
Sandra It is scary how quickly the voice became annoying.