This week from San Francisco: a fake special with restaurants, reviews and monsters. Sandra Peter (Sydney Business Insights) and Kai Riemer (Digital Disruption Research Group) meet once a week to put their own spin on news that is impacting the future of business in The Future, This Week.

00:48 – Living with monsters in San Francisco

02:18 – How a fake restaurant went to number one on TripAdvisor

14:10 – AI has learned to write believable product reviews

23:36 – The fake information apocalypse

32:03 – The era of video deep fakes makes fake news look quaint

The stories this week

I made my shed the top rated restaurant on TripAdvisor

AI has learned to write totally believable product reviews

Aviv Ovadya is worried about an information apocalypse

The age of fake video begins now

Living with Monsters? conference website

The Shed at Dulwich

Believe nothing: the hoax of the Shed at Dulwich

You literally can’t get a table at the best restaurant in London

How to become TripAdvisor’s #1 fake restaurant, Oobah Butler on YouTube

How TripAdvisor moderates traveller reviews

TFTW 8 Sep 2017 – we discuss fake reviews

TFTW 16 Feb 2018 – we discuss fake news and the information apocalypse

TFTW 13 April 2018 – we discuss fake videos


You can subscribe to this podcast on iTunes, Spotify, Soundcloud, Stitcher, Libsyn, YouTube or wherever you get your podcasts. You can follow us online on Flipboard, Twitter, or sbi.sydney.edu.au.

Our theme music was composed and played by Linsey Pollak.

Send your news ideas to sbi@sydney.edu.au.

Dr Sandra Peter is the Director of Sydney Executive Plus at the University of Sydney Business School. Her research and practice focus on engaging with the future in productive ways, and the impact of emerging technologies on business and society.

Kai Riemer is Professor of Information Technology and Organisation, and Director of Sydney Executive Plus at the University of Sydney Business School. Kai's research interest is in Disruptive Technologies, Enterprise Social Media, Virtual Work, Collaborative Technologies and the Philosophy of Technology.

Disclaimer: We'd like to advise that the following program contains real news, occasional philosophy and ideas that may offend some listeners.

Intro: This is The Future, This Week on Sydney Business Insights. I'm Sandra Peter and I'm Kai Riemer. Every week we get together and look at the news of the week. We discuss technology, the future of business, the weird and the wonderful, and things that change the world. Okay let's start. Let's start.

Sandra: Today on The Future, This Week from San Francisco: a fake special with restaurants, reviews and monsters. I'm Sandra Peter, the director of Sydney Business Insights.

Kai: I'm Kai Riemer, professor at the Business School and leader of the Digital Disruption Research Group.

Sandra: So Kai, I should ask you what happened in the future this week. But first I should ask you: where are we this week?

Kai: Well the two of us, we are in San Francisco where we have attended a conference. It's the IFIP Working Group 8.2 Working Conference.

Sandra: That's a mouthful.

Kai: It's a mouthful. IFIP, for those interested, stands for the International Federation for Information Processing. But this is an academic conference. Scholars, colleagues of ours, who this year came together under the topic of "living with monsters". And we looked at...

Sandra: Algorithmic decision making, artificial intelligence, platforms, Google, Facebook and all sorts of monsters and monstrous technologies.

Kai: We've previously discussed a whole heap of these on the podcast. Monstrous effects, that is. And this podcast today will feature a rerun of some of the most important stories on this topic. At the conference we heard talks about the pitfalls of predictive policing, and of quantifying quality and performance in health care. We heard about algorithmic pollution, that is, the unintended consequences of algorithms on platforms such as Google and Facebook.

Sandra: And a shout out to our former colleague Oliveira.

Kai: Yeah. And finally a topic was how reality is created on these digital platforms. And that brings us to today's topic, which is fake information. Fake content, fake news.

Sandra: So our first story for this week actually should be titled The Future, Last Year because it's a story that appeared during our last break in 2017. But it made a reappearance at this conference, and we thought this would be the perfect opportunity to bring it up again. The story is from Vice, and it's titled "I made my Shed the top rated restaurant on TripAdvisor".

Kai: So Professor Susan Scott from the London School of Economics, one of our colleagues at the conference, used this example to illustrate how fragile the reality is that is created on platforms such as TripAdvisor. And Susan and Wanda Orlikowski have done some really interesting research on how algorithmic reality is created and performed on TripAdvisor, and how it shapes the reality of the stakeholders involved. Most notably the people who run restaurants.

Sandra: So the story is written by one Oobah Butler, currently a writer for Vice, who a while back, before he started working for Vice, was actually in the business of selling his skills as a reviewer on TripAdvisor - except with a twist.

Kai: So let's hear from Oobah himself.

Oobah Butler (file audio): My first writing job was writing fake reviews for restaurants. I would do that and they'd give me a tenner. Boom. Businesses' fortunes would genuinely be transformed. That made me see TripAdvisor as like a false reality that everyone takes completely seriously. Over the years I just thought the only bit of TripAdvisor that is unfakeable is a restaurant itself. Then one day I thought, oh, maybe it is actually fakeable.

Kai: So Oobah decided to set out and create a fake restaurant. He says he had a revelation: within the current climate of misinformation, and society's willingness to believe absolute bullshit, maybe a fake restaurant is possible. So he started off by getting himself a 10-pound burner phone, and he bought a domain and built a website for The Shed at Dulwich.

Sandra: He also tried to think of a concept that would be silly enough to attract food enthusiasts. And the concept was to name all the dishes after moods. So the website features moods they've served in the past. Things like...

Kai: Lust. Love. Comfort. Happy. Contemplation. Empathetic. And because he confessed that he cannot cook, he set out to make photos of delicious dishes composed of shaving foam, bleach tablets, sponges and paint, sprinkled with some herbs and made to look like, you know, foodie photography.

Sandra: So once he had his website together with all the pictures and the previous menu items, he submitted his restaurant, The Shed at Dulwich, to TripAdvisor.

Kai: And lo and behold, he fairly promptly got the confirmation from TripAdvisor that his listing had gone up on the website. So he started out as the worst restaurant in London, ranked eighteen thousand one hundred and forty-ninth.

Sandra: So he decided, well, he needed to do better than this, so he would need a lot of reviews. And these reviews had to be written, at least some of them, by real people on different computers with different IP addresses.

Kai: So as not to make it too obvious, and to stop TripAdvisor's anti-scammer technology from picking up that these were actually fake.

Sandra: So he contacted his family, his friends, acquaintances and asked them to start submitting reviews.

Kai: A few weeks in, the restaurant cracked the top ten thousand mark, and it started to climb.

Sandra: Then, as it was climbing, the unthinkable happened. The burner phone started to ring with people trying to get a table, trying to make bookings.

Kai: Now, obviously The Shed, not being a real restaurant, didn't have a concrete street address. He had listed just the street and said bookings by appointment only.

Sandra: Which made the place even more desirable. And as the reviews from friends and family and other people kept coming in, the restaurant kept climbing and climbing. And then, as the author puts it, seemingly overnight, they were rated number 1,456.

Kai: So Oobah now spent a good deal of his time answering his burner phone, finding all sorts of excuses for why the restaurant was fully booked and too busy on the particular night someone wanted a booking. Which, you know, made the restaurant even more desirable.

Sandra: Yep. So the lack of bookings, the lack of an address, the general air of exclusivity around the place, and the reviews that kept coming in meant that by the end of August they were number one hundred and fifty-six.

Kai: He started bumping into people in his street looking for The Shed.

Sandra: And asking him if he knew of this location. Then companies started using the estimated location of The Shed on Google Maps to send him free samples of various ingredients that he might want to use in The Shed.

Kai: People contacted him who wanted to work at The Shed.

Sandra: Even an Australian production company got in touch, saying it wanted to film a documentary that would be shown on flights to London.

Kai: Oobah made a final push and they arrived at number 30 for all of London on TripAdvisor. But he said at that point things got a little harder, and it was very hard to climb any higher in the rankings, no matter how many reviews they were throwing at it. But then one night he got an email from TripAdvisor titled 'Information request'.

Sandra: He thought, well, the game's up; this is as long as it lasted. But as he opened it, he realised it reported about 89,000 views in search results in the past day, with dozens and dozens of customers asking TripAdvisor for more information.

Kai: Why? Well, on the first of November 2017, six months after listing The Shed at Dulwich, it turned out it was number one for all of London.

Sandra: So just to recap: in one of the biggest and most famous cities in the world, on the biggest website that reviews such places, perhaps one of the most trusted such websites on the Internet, the number one restaurant was a restaurant that did not exist.

Kai: Which was arguably pretty elaborate, but nonetheless a prank.

Sandra: So Oobah decided to contact TripAdvisor to actually tell them what happened, and to ask how it was possible that he could make it to number one. TripAdvisor promptly responded, and the response is a little odd. TripAdvisor seemed pretty dismissive of what had just happened, saying that they go to great lengths to check for fake reviews, but that there is no incentive for anyone in the real world to create a fake restaurant; it's not a problem they experience with their community, so this is not a test of a real-world example.

Kai: Which is to say that no one will ever create a fake restaurant, so we have no need to check for that. Missing the whole point of what had just happened: that the number one restaurant on their platform was indeed a fake restaurant. And while Oobah says that, you know, fair enough, I can't imagine that this will happen very often, it tells us something about what happens on these platforms.

Sandra: The Shed at Dulwich actually reveals quite a bit about how individual users are able to manipulate such big platforms, such big organisations. Not by overwhelming them with a sheer quantity of reviews, for instance, but by very carefully playing the algorithms that help us navigate through the thousands and thousands of reviews that such companies host.

Kai: It shows how vulnerable platforms are: given the sheer scale at which they operate - the number of restaurants, for example, the number of reviews that people leave, and the number of views on the platform - even small manipulations can have great effects. And we want to point out that on the 1st of November, when the restaurant shot to number one, it was based on just one hundred and four reviews.

Sandra: Only one hundred and four reviews ensured that a non-existent restaurant stayed for almost a fortnight at the top of TripAdvisor's best restaurants for one of the biggest cities in the world.

Kai: And while fake restaurants, as TripAdvisor points out, might not be the biggest problem - fake reviews certainly are. Which, as Oobah's own history shows, is a business model. People pay for fake reviews; it is something people can make money from. So the gig economy and the digital economy, which in a sense is pretty needy for validation and customer reviews - everything is reviewed: products, restaurants, experiences - has created this underbelly of small-scale jobs, yet another form of gig work, of writing fake reviews on Amazon, TripAdvisor, Yelp and other platforms. And this can have material effects on the lives of those who own restaurants. If 104 reviews can catapult a restaurant to number one in London, we can easily imagine how smaller restaurants can be affected if only a handful of negative reviews pretty much destroy their score and let them drop in the rankings, such that their business is materially affected. Because, let's remember, scores move people. The ones that show up at the top of the rankings will be the ones that get more business.
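A quick back-of-the-envelope illustration of that last point. TripAdvisor's actual ranking algorithm is not public, so the Python sketch below assumes a naive average-of-ratings score; it simply shows how a handful of fake one-star reviews can drag down a small, well-rated listing.

```python
# Illustrative sketch only: TripAdvisor's real ranking is not public,
# so a naive average-of-ratings score stands in here.

def average_rating(ratings):
    """Mean star rating for a listing."""
    return sum(ratings) / len(ratings)

# A small restaurant with 40 genuine, mostly positive reviews.
genuine = [5] * 25 + [4] * 12 + [3] * 3
print(f"Before attack: {average_rating(genuine):.2f} stars")  # 4.55

# A malicious competitor pays for just five one-star reviews.
attacked = genuine + [1] * 5
print(f"After attack:  {average_rating(attacked):.2f} stars")  # 4.16

# A ~0.4-star drop on a score-sorted ranking is easily enough to fall
# behind nearby rivals, even though ~89% of the reviews are still genuine.
```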

Sandra: So Oobah Butler was paid 13 dollars to write reviews of places he had never eaten at. So it doesn't come with a high price tag.

Kai: So a malicious restaurant owner who wants to take down a competitor can do so for comparatively little money. And it is then on the other restaurant owner to go back to TripAdvisor, and argue, and prove, and make sure that those reviews are deleted. And all the while they are losing business - if they manage to get rid of those reviews at all.

Sandra: And this creates a real problem for platforms like TripAdvisor. An article in The Washington Post pointed out that TripAdvisor has been the subject of criticism over the last year, after an investigation by the Milwaukee Journal Sentinel showed that it had deleted reviews reporting dangerous activities, things like theft and rape, from some of its listings.

Kai: So it is then up to TripAdvisor, and of course its algorithms, to discern which reviews might be true and weed out the ones that aren't.

Sandra: Or in the words of TripAdvisor itself:

TripAdvisor (file audio): At TripAdvisor, we know that in order for a review to be truly useful, it has to be - well, true.

Kai: Really? In order to be useful they have to be true? I remember we did a story once.

Sandra: Yes, I think I remember this story.

Kai: This is a story we did more than a year ago, on the 8th of September 2017. And it talks about fake reviews.

Sandra: Let's hear that story again.

Audio sounds: <Music and audio sounds> The Future, This Week...

Kai: And it's called "AI has learned to write totally believable product reviews and the implications are staggering".

Sandra: "I loved this place. I went with my brother and we had the vegetarian pasta and was delicious. The beer was good and the service was amazing. I would definitely recommend this place to anyone looking for a great place to go for a good breakfast. A small spot with a great deal."

Kai: This is a review written from scratch by an algorithm in a study by researchers from the University of Chicago, in a research paper that will be presented at a conference in October (we'll put the link in the show notes). The researchers basically demonstrate that machine learning algorithms can create totally believable fake review texts from scratch. These texts cannot be detected by plagiarism software, because the algorithms do not just reassemble existing reviews; by being fed a large amount of Yelp restaurant reviews, they have learned the patterns that underlie successful reviews and can then write new reviews from scratch.
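The Chicago researchers trained a recurrent neural network on Yelp reviews; as a rough illustration of the general principle of character-level generation (not the study's actual model), here is a minimal Python sketch using a character-level Markov chain: learn which characters follow which contexts in real review text, then sample new text from scratch.

```python
import random
from collections import defaultdict

ORDER = 4  # characters of context; the tiny corpus below is a placeholder
corpus = (
    "I loved this place. The pasta was delicious and the service was "
    "amazing. Great spot for breakfast. The beer was good and the staff "
    "were friendly. I would definitely recommend this place to anyone."
)

# "Training": map each 4-character context to the characters observed after it.
model = defaultdict(list)
for i in range(len(corpus) - ORDER):
    model[corpus[i:i + ORDER]].append(corpus[i + ORDER])

# Generation: write a "review" character by character from a seed context.
context = corpus[:ORDER]
output = context
for _ in range(150):
    choices = model.get(context)
    if not choices:
        break
    output += random.choice(choices)
    context = output[-ORDER:]

print(output)
```

Because the text is sampled character by character rather than copied, nothing in the output need match an existing review verbatim, which is why plagiarism checks fail.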

Sandra: So what happened with these reviews is that individuals were asked to rate whether they thought they were real or not - and they passed the test: they were believed to be real. However, the really interesting thing is that the researchers claim this is effectively indistinguishable from the real deal, because the humans also rated these reviews as useful.

Kai: Which means that the deception intended by the researchers was entirely successful. So these algorithms can now write reviews that are completely indistinguishable from human written reviews.

Sandra: Yep, they are perceived to be as useful as the ones that humans write.

Kai: And fake reviews are already a problem. There's a whole industry, a grey area.

Sandra: Human generated fake reviews?

Kai: Yes, so there are people reporting (and we can put an article in the show notes) - a woman describing what it takes to write believable fake reviews, and to build a profile that is trustworthy and believable. So there is a whole industry of people getting paid to write these reviews for product owners, for restaurants, to pimp their ratings. And it happens on Amazon, on Yelp, Google, TripAdvisor.

Sandra: So are these algorithms actually coming for their jobs?

Kai: Are we seeing the automation of click farms and the fake review industry?

Sandra: It's a scary thought.

Kai: Yeah, it's a scary thought because so far with human written fake reviews, there's limitations as to how many reviews you will have. Once you can do this with an algorithm, you can really scale this up and turn this into a massive problem.

Sandra: Let's recognise that there are still problems to be solved. These reviews have to come from certain accounts that have to be believable as well. However, the really big issue has been solved, which is: are these perceived as real, and do users trust these reviews? Are they perceived as useful?

Kai: And there are those articles on the web, which you can look up, on how to spot fake online reviews. Many of them talk about, yes, what accounts the reviews come from - but this is a problem that can be solved with technical means as well. Most of them centre around how these reviews read: are they overly positive, what are the traits of a fake review? But the research has shown that algorithms can write reviews that are indistinguishable from real reviews to the naked eye. They can, however, still be uncovered by machine learning, because apparently the way in which the algorithm distributes certain characters across the text differs from how humans would normally do it.
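As a hedged sketch of that detection idea (the paper's actual defence is more sophisticated), one could compare a suspect review's character distribution against a baseline estimated from known-human text, for example with a Kullback-Leibler divergence; the corpora and threshold below are placeholders.

```python
import math
from collections import Counter

ALPHABET = "abcdefghijklmnopqrstuvwxyz ."

def char_distribution(text):
    """Smoothed character frequencies over a fixed alphabet."""
    counts = Counter(text.lower())
    total = sum(counts[c] for c in ALPHABET)
    # Laplace smoothing keeps every probability non-zero.
    return {c: (counts[c] + 1) / (total + len(ALPHABET)) for c in ALPHABET}

def kl_divergence(p, q):
    """How much distribution p diverges from baseline q."""
    return sum(p[c] * math.log(p[c] / q[c]) for c in ALPHABET)

human_corpus = "..."    # placeholder: large sample of known-human reviews
suspect_review = "..."  # placeholder: the review being checked

baseline = char_distribution(human_corpus)
suspect = char_distribution(suspect_review)

# A divergence well above what genuine reviews score suggests machine
# text; the threshold would be calibrated on labelled data.
print(f"KL divergence: {kl_divergence(suspect, baseline):.4f}")
```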

Sandra: And this will lead to an arms race, where you try to generate fake reviews and I try to figure out which reviews are fake. You get better at writing them, and I try to get better at catching them. Soon enough, early enough...

Kai: Sounds a bit like the giant robot duel that is about to happen. But seriously, are we now on the verge of engaging in a machine learning arms race between faking information on the Internet and detecting that same information? Which leads us to a bigger problem.

Sandra: Fake everything. Fake news. Fake tweets. Fake...

Kai: Journalism, propaganda, all kinds of fake shit.

Sandra: So while fake reviews might threaten the business models of things like Amazon, or Yelp, or TripAdvisor, or all of these sites, the idea of fake everything has some deeper implications. Think about fake tweets, right. An account like @DeepDrumpf, the bot that humorously impersonates Donald Trump, might be a fun thing to follow and an easy thing to spot - versus the pro-Trump chat bots that are indistinguishable from real people tweeting pro-Trump messages. So this might hijack things like elections, and we've seen that happen before. We also have algorithms that write articles. A lot of market updates in financial newspapers are basically written by algorithms.

Kai: And just today there's an article in The New York Times reporting it has surfaced that fake Russian Facebook accounts bought about 100,000 dollars' worth of political ads during the campaign.

Sandra: And this has happened so far with a lot of written language - things like tweets, reviews, articles.

Kai: And there are also applications, for example, in journalism, right. A lot of articles, like say a match report in sports, follow certain patterns and could easily be written by an algorithm instead of a journalist. Or take your spam email. Some of it is of abysmal quality. So...

Sandra: We would welcome some algorithms.

Kai: So I'm sure algorithms will find their way into spamming to create ever more believable spam emails, creating a real problem. But it's not confined to just text.

Sandra: Nope. Researchers at the University of Washington have recently released a fake video of President Obama, in which he very convincingly appears to be saying things that he has been made to say. He never said them in that setting, but we can realistically create that.

Kai: So he might have said them in a different context, but what they have done is overlay his mouth movements onto real video footage. And the artificial mouth blends in quite convincingly, so that it's really hard to discern whether this is actually real or fake. You can now also create a fake voice from just a few minutes of recorded audio, so we are on the verge...

Sandra: And we can also make them say things that users will perceive as useful. Things that inform their political views in a meaningful way, or that might inform the choices they make about a restaurant, or about a place to visit, in meaningful ways.

Kai: So we're now on the verge of having a president say things to camera that they have never said, which creates scary scenarios. Orson Welles-like scenarios of deceiving the general public about terrorism threats or alien invasions. So fake news, fake information on the Internet, takes the next leap: to fake, believable, artificial humans on video.

Sandra: So, the fact that these things not only exist and are believable, but are also perceived as useful, actually does two things. The first is that it tends to undermine the trust that we have in these things. Right. On the one hand, we might think, well, this could have been written by an algorithm, who knows whether this is real or not? And that is one way in which they can become quite dangerous. The second thing is their ability to actually hijack trust. And just to mention here that Rachel Botsman's got a book coming up - I was on a panel with her yesterday, and the book's coming out in October: "Who can you trust?". It looks at our perception of trust and how this has been transformed in things like banking or media or politics or even dating. So the second thing that can happen here is the hijacking of trust. We might not be aware that these things are not real. Let's say I am a bank and I have a chat bot that looks and sounds exactly like a real person trying to convince you to buy something. And I take advantage of everything that I know about you to give you exactly the right cues to make you buy something - that is hijacking the very notion of trust. So.

Kai: We're talking deception here, but we're also talking about the bigger problem that we're now on the verge of - and we talk a lot about post-truth society. With algorithms being able to pose as humans, not just in text conversations but soon in believable video or artificial avatar technology - and giving a shout out here to Digital Mike, who are presenting at Disrupt Sydney soon - the research into digital humans and digital faces really puts us on a trajectory where, at some point, what can you trust on the Internet? And does that put the whole idea - the utopian, original idea of the Internet as a democratic space with information at your fingertips and the democratisation of information - at peril? Because if you cannot believe anything on the Internet anymore, and it's harder and harder to know whether what you're looking at is actually real and produced by a real human, and not by an algorithm for malicious purposes, the very nature of the Internet might change dramatically.

Sandra: So we don't want to paint a doomsday scenario here. Most of these things have applications that can also make our lives much better.

Audio sounds: <Music and audio sounds>.

Sandra: And this is a topic that we took up again at the beginning of this year on the 16th of February in our episode on the information apocalypse.

Kai: So let's hear this story and continue with the thread on fake news, fake stuff on the Internet and how that changes the Internet itself and the way in which we consume information.

Audio sounds: <Music and audio sounds> The Future, This Week...

Kai: Here's our second story from BuzzFeed News titled, "He predicted the 2016 fake news crisis, now he's worried about an information apocalypse". So who is he Sandra?

Sandra: So he is Aviv Ovadya, and he's an M.I.T. graduate who's done a few things at various tech companies around the Valley, companies like Quora. But he has now moved on and is chief technologist for the University of Michigan's Center for Social Media Responsibility. And Aviv pretty much dropped everything in 2016, when he realised that the business models behind companies like Facebook and Twitter would enable the kind of onslaught of fake news that we've seen influence the American elections. He has been trying ever since to draw attention to what he perceives as the inherent problems of the web and the information ecosystem that has developed around social media platforms.

Kai: So the point that Aviv and this article are making is that on top of the kind of social media platforms that we just discussed - Facebook and the way in which it tends to spread all kinds of fake news and things like that - we are seeing a new wave of technological developments that make it easier to not just create, but automate, the creation of fake content. And not just text, but video and audio, at a large scale.

Sandra: And he thinks that if stories like the ones we've heard before about the conversations that plague Facebook scare us, what comes next should really scare the shit out of us.

Kai: So the stories collected in the article are truly worrying.

Sandra: So what he does is ask a very simple question: what happens when anyone can make it appear as if anything has happened, regardless of whether or not it did?

Kai: And he presents a collection of examples that show what is already possible, and then he imagines where things are going in a reasonably short amount of time. For example, there are now examples, which have been in the news, of fake pornographic videos where algorithms take the faces of celebrities, sourced from Instagram for instance, and put them on fairly explicit video material.

Sandra: There is also the research out of Stanford that we've discussed here before, which combines recorded video footage with real-time face tracking to manipulate that footage - and, for instance, puppeteer images of, let's say, Barack Obama in a very believable way, making him look like he said things or did things that he hasn't really done.

Kai: And we've discussed previously research that is being done here at the University of Sydney Business School into photorealistic, believable avatars, which is piloting the technology to basically create synthetic, believable humans that can be puppeteered - which can be copies of real people or synthetically created people. So the prospect is that we now enter an era where I can make anyone say anything on video. I can create synthetic humans. We would not even know whether we're conversing with a real or a fake human. We're entering an era where not even video or live video streams can be believed.

Sandra: So why does this spell the end of the Internet for Aviv?

Kai: So there are two steps to this. First of all, obviously, there are these scenarios in which fake content can create serious problems: mass panic, diplomatic incidents, US presidents declaring war on other countries.

Sandra: And currently, war-game-style disaster scenarios based on these technologies are being run to see how they would play out.

Kai: But the more worrying part is that the ability to create any type of fake content at the same time discredits what is real. So I can always turn around and say: this piece of information could be fabricated, or this video isn't real, because we have the technology to create these kinds of videos synthetically. Who gets to say, as a result, what is real and what is fake?

Sandra: Furthermore, these things will all start competing with real people and real voices for the same limited attention. Think about your representatives in government. They could be flooded with a range of messages from constituents that sound real and look real, asking them to behave in a certain way or vote for certain policies. These things will compete for real time and real attention from legislators, and will come at the expense of other voices.

Kai: And even in the absence of large-scale doomsday scenarios, it starts small. What happens when you can no longer distinguish between real messages from colleagues in your inbox and fake messages that are there as phishing attempts - as spam, essentially? Just this week there has been an article about Google rolling out a new service called Reply, which is essentially a bot that allows you to have messages from friends answered automatically.

Sandra: So say someone contacts me while I'm doing this podcast with Kai. This bot would see in my calendar that I'm scheduled to record this, and would send a message saying: I'm just recording this with Kai, please call back between these and these hours when I'm free, and so on. But you can imagine the complexity of these things growing, and the accuracy with which they will be able to conduct conversations on our behalf growing too.
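Google hasn't said how Reply works under the hood, so the logic Sandra describes can only be sketched hypothetically; every name below (busy_blocks, the message template) is invented for illustration.

```python
from datetime import datetime, time
from typing import Optional

# Hypothetical calendar entries for today: (start, end, description).
busy_blocks = [
    (time(10, 0), time(11, 0), "recording the podcast with Kai"),
]

def auto_reply(now: datetime) -> Optional[str]:
    """Draft a reply if the owner is busy right now, otherwise None."""
    for start, end, activity in busy_blocks:
        if start <= now.time() < end:
            return (f"I'm just {activity} at the moment. "
                    f"Please call back after {end.strftime('%H:%M')}.")
    return None  # owner is free; let them answer personally

# A message arriving mid-recording gets the automated excuse.
print(auto_reply(datetime(2018, 10, 26, 10, 30)))
```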

Kai: So the real issue is that when you can no longer discern what is true, what is real, what is fake, what is fabricated - in social conversations online, on the Internet, on news platforms, on social media - this will basically mean the end of the Internet as we know it, as a source of information and as a source of social interaction.

Sandra: So where does this leave us, in a world with a lot of fake content, where social platforms like Facebook act as accelerators for distributing this content and amplify the effects that it has? Whilst we are trying to find solutions to this - and we've discussed this in our previous story - in the interim we have very few alternatives left. One of them arguably could be turning to trusted institutions: certain news outlets that have been trusted institutions for over 100 years, or universities as sources of knowledge.

Kai: Yes, absolutely. I think we will see a move back to the future, if you wish, in the sense that for a long time we have decoupled content from its source. Social media - Facebook - trades in presenting all kinds of content decoupled from its originators. I think we will see a big move back to trusted brands, such as The New York Times, or consumer brands - Amazon, Apple - to institutions who have a proven interest in presenting real and trusted information and services. So the Internet as a global democratic space where every voice is equal is something I think we are about to see destroyed. Precisely because it's very hard to know what and who we can trust to tell us something about the world, and what is entirely fabricated to serve some ulterior motive and self-interest.

Sandra: Well, that is slightly depressing.

Audio sounds: <Music and audio sounds>.

Kai: Well, while that was slightly depressing, it also turns out that this is a timely story, as the conference this week has really shown: there's now a ton of research being done on how to solve this, and new regulations are being contemplated. But what is also significant about the story is that we've now arrived at a point where the faking of videos - the last holdout of truth on the Internet, what you can see with your own eyes - has become a topic.

Sandra: And here we want to remind you of a story we did back on the 13th of April of this year on fake videos.

Audio sounds: <Music and audio sounds>. The Future, This Week...

Sandra: Our last story comes from The Atlantic and it's titled "The era of fake video begins - the digital manipulation of video may make the current era of fake news seem quaint". The story highlights that whilst we have all these conversations in the media around fake news and its effects on our society, we are really just playing catch-up with things that have already happened. And in the background of these conversations, fake videos are already happening. The article makes reference to "deepfakes", something that we've touched on in this podcast before, where the faces of famous actresses from movies like Harry Potter are superimposed onto pornographic video footage. And this is important because the hackers developing this technology actually intend to democratise their work. The article reports on how automating this technology and making it freely available would allow anyone to transpose the disembodied heads of people - co-workers or politicians - onto clips with just a few simple clicks and no technical ability at all.

Kai: And so the democratisation of technology, as much as we might appreciate it, in this case could have dangerous consequences - when it comes to the phenomenon of revenge porn, for example. But the implications are much larger when in fact any kind of video can be manipulated. And we have previously highlighted examples, such as from the University of Washington, which has shown, with video footage of former president Barack Obama, that you can manipulate existing video footage and have people in those videos say different things. Stanford has shown that you can do real-time face capture to re-enact videos of celebrities, such as US presidents.

Sandra: And make them say things that they have never said, in a completely realistic way. And this is increasingly becoming a service available to everyone. We quite often tend to think of these things as technologies that only exist in an academic lab, or in a commercial lab with huge funding. But very soon such technology will be widely available.

Kai: Which brings us to the main point the article makes. The author - and I'm quoting from the article - makes the point that unedited video has acquired an outsized authority in our culture. The idea is that, for a long time, if you wanted to prove a point and you had video footage, you were in a much better position to rally people: once you have video footage, people are much more ready to believe a story and get behind a cause. And that's, the article says, because the public has developed a blinding, irrational cynicism toward reporting and other material that the media have handled and processed - an overreaction to a century of advertising, propaganda and hyperbolic TV news. So unedited video has been the last holdout for reporting reality. And this is now under threat and up for grabs.

Sandra: And let's remember that the opportunity to manipulate not only people's mental states or emotions but also behaviours comes not only from trying to convince them that certain videos are real - which is one way you could influence public opinion, or stock prices, or voting habits, or commercial behaviours, or political leanings. The opportunity also comes from the quantity of videos that could be produced that will compete for attention. For instance, in politics it is not only that you could manipulate what the politicians say, but you could also manipulate what constituents say - in which case a few thousand videos of constituents asking for something would compete with the real voices of people trying to have a conversation. So with fake videos, I think we're actually at the point where we need to start figuring out how we will be able to sort fakes from truth, and how we will keep a functioning Internet once these things are done at scale. Obviously, the first solution would be that individuals themselves develop the ability to tell which is which - a route we think we don't necessarily want to explore. There is virtually no chance that people will have either the time or the capacity to sort truth from fiction in this space. Consider the fact that we watch a video for 10 to 15 seconds when we are on our Facebook feed or our LinkedIn feed, and these things look completely real. So the idea that individuals would have this capacity is not even worth exploring. Another option would be technology companies actually stepping in.

Kai: And this is where the article makes a really interesting observation, in that the author says that Silicon Valley has never been in the business of preserving and representing reality. And the author draws a straight line from the mind-altering and reality-denying effects of drugs like LSD in the 1970s, where the early cyberpunk and technology movement in computer science has its roots, to new developments in virtual reality, where the next big step in redefining the Internet lies precisely in creating alternative realities that will captivate and entertain people - and where the whole technology is based on the idea that we are not representing a reality but creating entirely new and potentially completely weird experiences. Which, you know, is a strange reminder of the recent movie Ready Player One: a future in which people are plugged into these virtual worlds, which help them forget their real lives, but who are then again at the whim of technology companies that shape realities for their own purposes. And this is the warning that the article sounds: just as we are messing with the mediums that have allowed us to represent and actually know reality and what actually happened, we are moving to a space where we are deliberately creating new realities and virtual worlds, where the idea of what is actually real becomes completely obsolete.

Sandra: And yet there is one more way we could tackle this, because unlike other crises that have snuck up on us, this one we can foresee, and we are at its very beginning. We could reconsider, for instance - and the article makes a good suggestion in its final paragraph - the place of trusted validators in our society. And this place would not be a space for technology companies, but rather for institutions that have previously had cultural authority as trusted validators. So think about well-respected newspapers, or academic institutions such as universities, or new institutions that we would bring into this space, to which we would outsource the problem of validating content. We have had this issue before at a much smaller scale, and we have established cultural institutions that have fulfilled that role in society. So it may be a good point to also have those debates about how we want to handle this.

Audio sounds: <Music and audio sounds>.

Kai: And this is all we have time for today. We end on this note, having just been part of one of those debates and the research around platforms and the monstrous effects that these platforms can have on our daily lives.

Sandra: And from San Francisco, this is all we have time for today.

Kai: Thanks for listening.

Sandra: Thanks for listening.

Outro: This was The Future, This Week. Made possible by the Sydney Business Insights team and members of the Digital Disruption Research Group. And every week right here with us is our sound editor Megan Wedge, who makes us sound good and keeps us honest. Our theme music is composed and played live from a set of garden hoses by Linsey Pollak.

You can subscribe to this podcast on iTunes, Stitcher, Spotify, YouTube, SoundCloud or wherever you get your podcasts. You can follow us online on Flipboard, Twitter or sbi.sydney.edu.au. If you have any news that you want us to discuss, please send them to sbi@sydney.edu.au.
