This week: how bots use us to make money online and a recap on algorithms. Sandra Peter (Sydney Business Insights) and Kai Riemer (Digital Disruption Research Group) meet once a week to put their own spin on news that is impacting the future of business in The Future, This Week.

The stories this week

Something is wrong on the Internet

Facebook enlists Wikipedia (recap) 

The algorithm is innocent (recap)

Nudge nudge Nobel Prize, the internet of creepy things, and Facebook enlists Wikipedia

The algorithm is innocent, Australians in space, and the licence to watch.

YouTube bans disturbing child videos

Google reacts to child safety concerns on YouTube


You can subscribe to this podcast on iTunes, Spotify, SoundCloud, Stitcher, Libsyn or wherever you get your podcasts. You can follow us online on Flipboard, Twitter, or sbi.sydney.edu.au.

Our theme music was composed and played by Linsey Pollak.

Send us your news ideas to sbi@sydney.edu.au.

For more episodes of The Future, This Week see our playlists.

Dr Sandra Peter is the Director of Sydney Executive Plus at the University of Sydney Business School. Her research and practice focuses on engaging with the future in productive ways, and the impact of emerging technologies on business and society.

Kai Riemer is Professor of Information Technology and Organisation, and Director of Sydney Executive Plus at the University of Sydney Business School. Kai's research interest is in Disruptive Technologies, Enterprise Social Media, Virtual Work, Collaborative Technologies and the Philosophy of Technology.

Introduction: This is The Future, This Week on Sydney Business Insights. I'm Sandra Peter. And I'm Kai Riemer. Every week we get together and look at the news of the week. We discuss technology, the future of business, the weird and the wonderful, and things that change the world. Okay let's roll.

Kai: This is The Future, This Week. Sandra and I are on semester break but we pre-recorded this story before we left. Today we discuss how bots are using us to make money online and a recap on algorithms.

Sandra: I'm Sandra Peter the Director of Sydney Business Insights.

Kai: I'm Kai Riemer, professor at the Business School and leader of the Digital Disruption Research Group. So Sandra, what happened in the future this week?

Sandra: I actually have no idea, I've been on holiday. But there are a couple of stories we wanted to pre-record: stories that have appeared over the last few months that we didn't get a chance to discuss on the show, because we only have three stories every week, but that we thought were really important to do before the end of the year.

Kai: So this one is from Medium and it's titled "Something is wrong on the Internet".

Sandra: This story is really depressing, don't read it.

Kai: The story is written by writer and artist James Bridle, and it comes on the back of another article that appeared in The New York Times, which inspired James to dig a little deeper and really get into this phenomenon, which we'll try to unpack for you. It's a disturbing one, so much like James does not unpack every detail in his article, we're not going to unpack every detail in our discussion. But it concerns the way in which content is created, in particular on YouTube, and then presented for advertising dollars. And the particular phenomenon he looked at is content that is created for small children, basically, and presented on YouTube for kids.

Sandra: So we're not talking here about normal kids' videos, which are weird enough as it is, you know, unboxing toys or endlessly looking at surprise Kinder Eggs, although I do find that quite soothing.

Kai: Yeah, nursery rhymes and finger family songs - there are certain trends and crazes going on that then lead to hundreds of thousands of videos being produced. That is weird enough as it is, and the educational value of these videos for children is highly questionable, but the author is actually looking at something much darker that comes on the back of this.

Sandra: So he's found a number of videos that actually have these really strange names, things like "Surprise Play Doh eggs Peppa Pig Stamper Cars Pocoyo Mindcraft Smurfs Kinder Play Doh Sparle Brilho."

Kai: We want to stress this is the title of one video.

Sandra: Yes. There is also "Cars Screamin' Banshee Eats Lightning McQueen Disney Pixar".

Kai: Or "Disney Baby Pop Up Pals Easter Eggs SURPRISE.

Sandra: The author then went on and looked at these videos and figured out just how odd they are. They are a combination and mash-up of images, sounds and music, sometimes entirely animated, sometimes acted out by people or by people with things. Here's a short clip from one of them. [AUDIO]

Kai: So let's try to describe what is happening in this video. It's a mash-up of the finger family song, which appears in hundreds of videos. The visuals, however, are characters from Aladdin changing their heads, and then from the side there's this little girl from Despicable Me coming in, crying or laughing when the heads are not matching or matching the body.

Sandra: That also happens randomly.

Kai: Yeah, it's a completely random mash-up and it's quite clear that these videos are computer generated by bots. No human probably had a hand in creating them, and the author says there are not only hundreds of thousands of channels which publish these videos, but thousands of these videos which sometimes have millions of hits - a lot of which are probably, again, generated by bots to beef up the numbers and make sure that these videos are listed in the search results and in the playlists that are presented to children.

Sandra: And let's just be clear, these videos range from the just bizarre, like the example that we mentioned, to videos that are actually quite disturbing or violent or just completely inappropriate - things like Peppa Pig drinking bleach instead of naming vegetables.

Kai: Or Peppa Pig eating her father. So this is a phenomenon that mixes a number of different kinds of videos. There are satirical or outright trolling and abusive videos that show Peppa Pig in all kinds of different, inappropriate situations. Then there are the computer generated mash-ups that we've just discussed. But there's also a disturbing number of videos that actually involve humans. There's a phenomenon where parents are using their children to create videos of very questionable value, one of which is mentioned in the article, where children are actually shown in quite abusive situations, being subjected to pain or other uncomfortable situations - a channel by the name of Toy Freaks, which has actually since been delisted by YouTube and deleted on the back of this article.

Sandra: But again, there are hundreds of thousands of these channels. This is not one person. There are thousands of these videos.

Kai: But there are even weirder ones, where a group of human actors enact situations that quite clearly conform to a set of keywords generated by an algorithm to optimise the listing in search results and maximise advertising dollars. And so these humans are just acting out random situations with toys, finger puppets, crawling around the floor and dressing up, where what the humans are doing is basically just creating content that matches those keywords.
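
To make that mechanism concrete, here is a minimal, purely illustrative sketch of the kind of keyword-driven generation being described: a script that mashes trending search terms into a video title and a matching shot list. The keyword list and the shot-list format are invented for the example; they are not taken from the article or from any real channel.

```python
import random

# Hypothetical trending, child-friendly search terms (illustrative only).
TRENDING_KEYWORDS = [
    "Peppa Pig", "Surprise Eggs", "Play Doh", "Finger Family",
    "Lightning McQueen", "Frozen Elsa", "Spiderman", "Learn Colors",
]

def generate_video(n_keywords: int = 5) -> tuple[str, list[str]]:
    """Pick trending keywords, mash them into a title, and turn the title into a shot list."""
    keywords = random.sample(TRENDING_KEYWORDS, n_keywords)
    title = " ".join(keywords)                                   # keyword-stuffed title
    shot_list = [f"Act out {kw} for 10 seconds" for kw in keywords]
    return title, shot_list

if __name__ == "__main__":
    title, shots = generate_video()
    print(title)   # e.g. "Play Doh Peppa Pig Learn Colors Frozen Elsa Surprise Eggs"
    print(shots)   # the 'content' exists only to match the keywords
```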

Sandra: Let's try to unpack this and figure out what exactly is happening here and why. Because, as we said, this is quite disturbing. We gave the example of the children's content that is showcased in the article, and that was the topic of the New York Times article as well, and there's also an article in The Washington Post that focused on the same implications. But with very little effort you could basically make the same argument for videos that feature white nationalism or violent religious ideologies, or videos about climate change denial, conspiracy theories, or all sorts of fake news. So we think this is quite important to discuss.

Kai: So at the heart of this problem is a combination of monetising content and automation. So if we take a look at YouTube - YouTube at some point allowed content producers to monetise their content by having ads placed alongside or now inside those videos. And while that is a legitimate business model that allows people who create content for YouTube to make a living and advertisers to find an audience...

Sandra: And by itself that is a legitimate thing.

Kai: It's a legitimate business practice, right. The problem gets out of hand once you add bots, algorithms and automation to the game. We've talked about this previously and we called it colonising practices: practices that basically sit on top of a legitimate practice - which is content production and viewing, the whole experience of YouTube - where the advertising practice then basically takes over. And it all becomes about generating advertising dollars, and the content is then produced for that very purpose. That can be parents subjecting their children to all kinds of crazy and more and more extreme situations to make a living and generate income from their YouTube endeavours, but it's also people employing bots to automatically create content that is then automatically keyworded and listed, just to harvest advertising dollars.

Sandra: And as the article makes clear, some of these videos make very little money by themselves - they're videos that only have 4,000 or 5,000 views, so not huge numbers - but if you think about the number of these channels, in the aggregate these things actually make real money.

Kai: And because you can produce these videos at near zero cost, it doesn't really matter how many cents each video brings in. If you can create hundreds of thousands of these videos - it's a so-called long tail phenomenon - it all adds up to a worthwhile endeavour. And we also want to make clear that this is not a phenomenon specific to YouTube. We have seen this in other places like Amazon, for example, where merchandise products such as t-shirts, cups, mugs and smartphone cases are being offered that are quite clearly generated by just grabbing either text blocks off the internet or pictures from Google, which then leads to bizarre shit such as a smartphone case that shows an elderly man in diapers or...
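
As a rough, back-of-the-envelope illustration of that long tail arithmetic (every number below is an assumption for the sake of the example, not a figure from the article): even at a fraction of a cent per view, near-zero production costs mean the totals add up.

```python
# Illustrative long tail arithmetic; all numbers are assumptions.
videos = 100_000                 # bot-generated videos, near zero marginal cost each
avg_views_per_video = 4_000      # the "4,000 or 5,000 views" range mentioned above
revenue_per_1000_views = 0.50    # assumed ad revenue per thousand views, in dollars

total_revenue = videos * (avg_views_per_video / 1_000) * revenue_per_1000_views
print(f"${total_revenue:,.0f}")  # $200,000 - real money from content nobody designed
```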

Sandra: ...T-shirts that have "Keep Calm and Hit Her" on them. Products which the author says might not necessarily have ever been bought by anyone.

Sandra: Or even created by anyone. No one actually sat down and thought this would be a good idea to create.

Kai: No.

Sandra: But rather algorithms have generated these images and these t-shirts...

Kai: ...as offerings then listed on eBay or Amazon. So again this is a phenomenon that has zero cost in its creation, but might potentially lead to income on the back of something that happens without any human intervention.

Sandra: So it's quite tempting to dismiss these sorts of examples as trolling, or algorithms gone awry, or even parents not minding their children enough to keep them from watching these videos. But there is actually something much more disturbing happening here.

Kai: The problem is the systematic nature with which this happens. These are not isolated cases. This is happening a lot, and this content is infesting the lists and channels of legitimate content, so it's presented as if it were a legitimate BBC-produced Peppa Pig video, or content that at first glance is indistinguishable from legitimate children's content, for example. So it is very hard for parents to actually pick up on this in the first instance. And yes, you could argue why are children watching YouTube that much, but given that YouTube is actually running a platform called YouTube Kids, there's a reasonable expectation that this content is actually appropriate for children.

Sandra: So there is the systematic and the automated way in which this is happening. The deeper issue, however, is that the reason these problems are occurring in the first place is the revenue models that platforms like Google, Facebook and YouTube are built on, and the fact that there is very little that these companies are willing to do, and very little that they can actually do, to prevent this from happening again.

Kai: And the reaction of Google in regards to its YouTube platform is quite telling. They said: as soon as we were notified we took action, and in this case delisted this one YouTube channel. But a reaction like this - waiting for someone to flag inappropriate content that is then removed - vis-à-vis the extent to which this is happening, as mentioned in the article, suggests that the platform providers are largely accepting of this practice. They can't do anything about it because it's systemic and baked into their business model, and what they basically say is: we've created this monster and we all have to live with it, and if it creates collateral damage then we'll do something about those cases, but we basically cannot know everything, and things we don't know about we can't do anything about. So they're basically blaming the algorithm and pretending innocence, which is incidentally a story we have talked about previously and which we will recap today.

Sandra: It's worth noting though that companies like Google and Facebook are actually monetising this. This is their business model, so even if they were to accept some responsibility for what is going on, what we need is full responsibility. They are complicit in building this because this is their business model.

Kai: So the problem is in the very platform nature of these systems. First of all, what happens is a decoupling of content from its source. If you go to the ABC or BBC website and you watch Peppa Pig, then you can be relatively certain that what you're watching is legitimate content, it's age appropriate, it's well curated. What these platforms do, however, is decouple the content from its source: you're no longer watching Peppa Pig on the BBC, you're now watching Peppa Pig on YouTube, and because anyone can use the keywords "Peppa Pig" and there's a lot of pirated content and mashed up content, legitimate content is mixed in with fake news, with pirated material, with mashed up material, and that creates the problem in the first place. Add on top of this the algorithmic management, or automation, of how these things are presented in your Facebook stream, but also in 'play next' lists, in YouTube streams...

Sandra: And the same algorithms that allow these things to bubble up would also be what allows legitimate content creators, who might not have a lot of viewers and are trying to enter the platform, to bubble up to the surface - or these videos might even be crowding them out, since many of them, as we've seen, are very well optimised for keywords and for maximum exposure.

Kai: So the decoupling of content from its source, the automated keyword-based presentation of the content, plus the incentive to monetise content with advertising, then creates incentives to game the system and create content for the sole purpose of either spreading news for some ulterior political motive or, in this instance, just reaping advertising dollars with content that has zero value and zero cost in production and that is purely geared towards gaming the system and making money. Which then creates the phenomenon that we've been discussing. So there's no real solution to this, because it seems to reside in the very nature of how these platforms work, and hiring more people, as in the case of Facebook - and we discussed this before - or even enlisting Wikipedia to weed out fake news, are only patches that fix up some of the symptoms, but they don't go to the heart of how these platforms work.

Sandra: And awareness can only take us so far. We actually live lives where we do outsource some of our decision making to algorithms that give us recommended videos or recommended playlists or other things. So as long as we do that, awareness can only take us so far. The last issue that the article brings up that I thought was worth mentioning was the author saying: "what concerns me is that this is just one aspect of a kind of infrastructural violence being done to all of us all of the time and we're still struggling to find a way to even talk about it, to describe its mechanism, and its actions, and its effects". And so the story we've discussed today on The Future, This Week highlights just how difficult it is for people who are just users of these platforms, consumers, to unpack what is going on: the fact that these platforms can be skewed by profit motives, what the mechanisms behind them are, how generally inscrutable many of the services that we use today are.

Kai: And this is nowhere clearer than in the reactions by the companies themselves, the reactions of Facebook or Google, in actually pinpointing the problem and doing something about it. And while this is all we have time for on this story, we thought we'd rerun two stories that are immediately relevant here. The first one, where we discuss how Facebook is planning to enlist Wikipedia to solve its fake news problem.

Sandra: And the second one where we discuss whether platforms can blame the algorithm.

Kai: And pretend innocence.

Audio: The Future, This Week.

Sandra: Facebook taps Wikipedia to fix its fake news problem for them. And our last story comes from Mashable.

Kai: And it's really an update of a continuing story that we've covered on a number of occasions. It follows directly from last week's "the algorithm is innocent" story: the story that Google and Facebook are pointing to their algorithms as the culprit in the fake news issue around Facebook posts and the US election, inappropriate ads that appear next to YouTube videos, or Google inappropriately highlighting links to 4chan in its top stories in the aftermath of the Las Vegas shooting. So the idea is that the algorithms frequently come up with things that are, let's say, less than optimal.

Sandra: Facebook tried to solve this problem last week as well. The solution they came up with was to hire another thousand people to manually remove or tag some of this content. This comes on top of the seven and a half thousand people that they had previously hired in various rounds to fix similar issues around inappropriate content, including suicides and racist content and so on and so forth. This week we have a new solution.

Kai: So Facebook has noticed that there is one organisation that has successfully tackled the fake news, or fake content, problem and that is Wikipedia: a not-for-profit community of more than 130,000 moderators and millions of users who contribute to creating the online encyclopedia, which has a mechanism that works very well in weeding out problems and bias from its content base.

Sandra: So is this a case of 'if you can't fix it, outsource it'? And where does the burden now lie? What Facebook has done is say that they are going to attach a small 'i' button in the news feed next to the articles that you see, and you'll be able to click on that and it will take you to the Wikipedia description of the publisher. You can then follow this button and read up more on who the source is, what their background is, and so on. So in a world where we are all struggling for attention and spending less than two or three seconds on videos and articles, the burden is now on us to go to a Wikipedia page and look up the relevant information.

Kai: So Facebook in this way is not actually solving the problem, they're absolving themselves by putting the burden on the user, and actually on the Wikipedia community, and it raises two big questions. The first one is: will anyone ever click on those links? Will users understand this, and as you say, when we are browsing through Facebook, will we actually want to leave the Facebook stream and go someplace else to read up on the publisher? Is this even realistic? But also: what would happen if this catches on and the publisher information on Wikipedia becomes the source for weeding out the fake from the real?

Sandra: Let's not forget Wikipedia is not perfect. It's done really well at curbing fake news, but it also relies on volunteers, real-life people who have some time to spend on this but not all the time in the world. So what happens if we redirect the burden to this not-for-profit community?

Kai: So I could just create an article on Wikipedia for a fake publisher, make it sound really, really good, and then propagate my news on the Facebook stream, and when you click on it you have this great information on Wikipedia. Now it is up to the moderators to actually patrol and monitor all the things that might be popping up in their community, to prove that what happens on Facebook is legitimate. So I'd be pushing the burden onto a community that is not actually equipped for this and whose purpose is not to solve Facebook's fake news problem - which takes us back to the real problem, which is the business model that companies like Facebook are built on.

Sandra: As long as the business model relies on me clicking on more and more things the problems inherent in these algorithms are not easily solved or addressed.

Kai: So is this just a media stunt where Facebook can say: we have done something, there is this button now? Because if that business model is to work, Facebook has no incentive to make users click on these links and leave the Facebook stream, where they receive information and ads and where Facebook can actually monetise the user. So is this just something that they do to be seen to be fixing the problem, or will this actually work? That's the question here, which we have to leave open...

Audio: The Future, This Week

Kai: Our first story is from The Outline and it's called "The algorithm is innocent". The article makes the point that Google and Facebook, who have been in the news a lot recently for all kinds of different instances of presenting inappropriate content, are deflecting responsibility onto their algorithms. They basically say: the computer did it, it was none of our making, the computer malfunctioned, the algorithm presented inaccurate content. So Sandra, what is that about?

Sandra: On the Monday after the worst mass shooting in U.S. history, for instance, if you were to Google Geary Danley - the name that was mistakenly identified as the shooter who killed a lot of people in Las Vegas on Sunday night - Google would present quite a number of threads filled with bizarre conspiracy theories about the political views that this man had.

Kai: Stories sourced from the website 4chan, which is basically an unregulated discussion forum known for presenting all kinds of conspiracy theories and not necessarily real news. And the point was that Google presented these links in its top stories box, which sits right at the top of the Google search page.

Sandra: Google then went on to say that unfortunately we were briefly serving an inaccurate website in our search results for a small number of queries.

Kai: And we rectified this almost immediately once we learned about this mistake.

Sandra: In an e-mail sent to the author of The Outline article, Google also explained the algorithm's logic: the algorithm had weighed freshness too heavily over how authoritative the story was, and it had lowered its standards for its top stories because there just weren't enough relevant stories that it could go on...

Kai:...so the news was too new essentially for the algorithm to find other relevant things that it could present or so the story goes.

Sandra: So it was the algorithm's fault.

Kai: Absolutely.

Sandra: And really, this wasn't the first time we blamed the algorithm. Back in April, the article mentions, FaceApp had released a filter that would make people more attractive by giving them lighter skin and rounder eyes. And it was called an unfortunate side effect of the algorithm - not intended behaviour.

Kai: So it was an inbuilt bias that attractiveness was essentially associated with whiteness.

Sandra: And of course there are the big stories of the past couple of weeks where Facebook had allowed advertisers to target people who hated Jews, in what was again blamed on a faulty algorithm.

Kai: And we also have discussed this on the podcast previously, stories around YouTube presenting inappropriate ads on videos. And let's not forget the whole story around Facebook and the US election where Facebook is frequently being blamed for taking an active role in presenting biased news stories, fake news to potential voters that has played a role in the election outcome and also that...

Sandra: Facebook had said that this idea was crazy - that fake news on Facebook had influenced the outcome of the election. But they have recently come back and said that they are looking into foreign actors, Russian groups and other former Soviet states, as well as other organisations, to try to understand how their tools are being used, or taken advantage of, to obtain these results.

Kai: So Facebook, Google and others working with machine learning and algorithmic presentation of content are frequently blaming their algorithms for these problems. They're saying: it wasn't us, it was a faulty algorithm.

Sandra: So let's examine that idea of a faulty algorithm. So what would a truly faulty algorithm be?

Kai: In order to determine this, let's remember what we're talking about: traditional algorithms are definite sequences of steps that the computer runs through to achieve a result - to bring the software from one state to another. So it's a definite series of steps, which we would call an algorithm, and we can determine when it malfunctions because we don't end up in the state that we intended to be in. But machine learning works differently. Machine learning is a probabilistic pattern matching algorithm, which in this instance did exactly what it was supposed to do: present certain results that are relevant to the topic on some criteria - semantic nearness, or some keywords that it extracts - and so the 4chan article was relevant because it was talking about the same topic.
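
A minimal sketch of that distinction, with a toy relevance score standing in for whatever Google actually uses: the first function is a definite procedure whose output we can check, the second just ranks whatever scores highest against the query, so a fresh 4chan thread that mentions the right words can come out on top.

```python
def sort_ascending(numbers: list[int]) -> list[int]:
    """A traditional algorithm: a definite series of steps. It 'malfunctions' if
    the result is not sorted, which we can verify against the intended state."""
    return sorted(numbers)

def rank_stories(query: str, stories: list[str], top_k: int = 3) -> list[str]:
    """A pattern-matching ranker: score stories by crude keyword overlap with the
    query and return the 'most relevant' ones. There is no correct answer to
    check against - only a score."""
    def overlap(story: str) -> int:
        return len(set(query.lower().split()) & set(story.lower().split()))
    return sorted(stories, key=overlap, reverse=True)[:top_k]

print(rank_stories(
    "Geary Danley Las Vegas shooter",
    ["4chan thread: Geary Danley Las Vegas shooter politics",
     "Local weather update",
     "Police name Las Vegas suspect"],
    top_k=2,
))  # the 4chan thread ranks first because it matches the most keywords
```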

Sandra: These algorithms are not designed to either exclude faulty information or deliberate misinformation. Nor are they built to account for bias.

Kai: No, and in fact they don't actually understand what they're presenting, they just present stuff that is relevant to the audience as measured by: will someone click on it. So relevance is measured after the fact. I am presented with a set of links, and when I click on those links the algorithm will learn from this: next time, present something to Kai that is similar to what Kai just clicked on. And so over time the algorithm is improving what it presents to me to elicit more and more clicks, so that I like stuff, that I share stuff. In the case of Facebook it also presents me with potential friends, and if it presents the right people I might create more connections. So really, what the algorithm does is optimise engagement with the platform - links, shares, likes, clicking on ads - and therefore revenue for the company.
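
A toy sketch of that feedback loop (entirely illustrative, not Facebook's or Google's actual ranking code): every click nudges up the weight of similar items, so the system optimises engagement rather than accuracy or appropriateness.

```python
from collections import defaultdict

class EngagementRanker:
    """Rank items by learned click preference; the only feedback signal is a click."""

    def __init__(self) -> None:
        self.weights: dict[str, float] = defaultdict(float)  # topic -> learned preference

    def rank(self, items: list[tuple[str, str]]) -> list[tuple[str, str]]:
        """items are (topic, headline) pairs; whatever got clicked before ranks higher."""
        return sorted(items, key=lambda item: self.weights[item[0]], reverse=True)

    def record_click(self, topic: str) -> None:
        """Clicking reinforces the topic, so more of the same is shown next time."""
        self.weights[topic] += 1.0

ranker = EngagementRanker()
ranker.record_click("conspiracy")   # one click on a conspiracy story...
feed = [("news", "Reef study published"), ("conspiracy", "Reef crisis a hoax?")]
print(ranker.rank(feed))            # ...and the conspiracy item now ranks first
```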

Sandra: So first off, the algorithms are not per se faulty. They are doing what they're designed to do. We just happen not to agree with the results that they are presenting, but they are working pretty much as they were built to work.

Kai: Yes the problem is not so much that the results are inaccurate. It's more that they are inappropriate and the algorithm has no appreciation for what is appropriate or inappropriate because it doesn't understand our world, it doesn't live in our world, it doesn't know anything about culture, about norms, about what is right or wrong. In other words as someone said on television it doesn't give a damn.

Sandra: So the question is how do we fix this? How does Google go about fixing things? So first of all can you fix this?

Kai: So you can't fix the algorithm. The algorithm does exactly what it's supposed to do. It does pattern matching and it presents results that are relevant, but it's also essentially a black box. We discussed this before: you don't actually know how the weighting in the algorithm works and what will come out at the end. The only thing you know is that it will present something that is probably relevant to what you were looking for.

Sandra: So the reason it would be really hard to fix this is because you don't exactly know what type of information you should change and also the data that you model it on is biased to begin with. So how do you go about changing that?

Kai: And we're not talking about algorithms that were trained with a definite set of training data that you could change to eradicate or minimise bias. Those algorithms learn on the fly, they learn constantly from what people are clicking on. So people who are clicking on links associated with a political leaning will then be presented with more of the things they are potentially clicking on, which also leads to the echo chamber effect, where people are presented with things that just reaffirm their beliefs - and we talked about this previously. So the whole idea is not for those algorithms to be unbiased, it's precisely to exploit bias, to present things that are relevant to people, to have them click on more things.

Sandra: So Facebook's solution to this - and there is a good article in Business Insider looking at this, and as always we will link to all the articles in our show notes so you can explore all the resources that we had a look at - Facebook's answer is to throw bodies at the problem. On Monday, Facebook announced that it would hire another thousand people in the following months to monitor these ads - for instance the Russian ads linked to fake accounts that we saw during the US elections - and to remove the ads that don't meet its guidelines. If this sounds a little bit familiar, it's because Facebook's done this before. If we remember the sad incidents of livestreamed suicides and livestreamed murders that we've seen on Facebook, this is when Facebook said that it would hire about 3,000 new people to monitor some of the content, on top of the four and a half thousand people it already had. So we're now at over eight thousand people doing the monitoring - are these the jobs of the new economy?

Kai: Sadly, yes. So what we're talking about now is a system where a vast array of algorithms is in charge of discerning who gets to see what on Facebook, what search results are presented on Google, the kind of ads that are presented alongside YouTube videos. And those algorithms are not really very intelligent: they are very good at matching relevant content to search results and to people's known preferences, but they have no appreciation for appropriateness, for things that might be out of line, things that might be illegal, things that might be offensive. So you have to add this human layer of judgement - often low-paid, by-the-hour jobs - in charge of weeding out the most blatant, obvious mistakes that those algorithms are making.

Sandra: And intuitively this idea of hiring more and more people to throw at the problem seems a good solution, a reasonable, commonsense solution, but if you take a closer look - and the Business Insider article also takes a closer look at this - there are quite a few things that we would need to figure out, things like: who are these people that we're hiring? Are they contractors? Where are they from? Are they in the same places? Do they understand the context that they're supposed to regulate?

Kai: On what basis do they make their judgement?

Sandra: Exactly. Is there training? Are they taught to look for specific kinds of things?

Kai: Where does reasonable filtering end and inappropriate censorship start?

Sandra: How does this then inform how Facebook's algorithm and machine learning processes work? When does it start flagging things that it wasn't flagging up until now? Are any of these organisations then working with government authorities or with other people to figure out what are some of the standards? How do we develop the standards by which these will happen? So there are a whole bunch of questions that remain unanswered and yes this is a step forward but probably not an ultimate solution to the problem.

Kai: And the bigger question is: do we have a good understanding of what the problem is? Because eradicating so-called bias or diversity in search results is not the ultimate solution to every search that we do on the Internet.

Sandra: Absolutely not. So there are a couple of other really good articles by William Turton, who also wrote The Outline article. He gives a couple of really good examples. For instance, if you do a Google search for 'flat earth', it should give you a wide variety of stories: that the earth is not flat, but also that there are unfortunately still a lot of people out there who believe the earth is flat.

Kai: Yeah and you might want to actually look up the flat earth movement and what ideas the people are into.

Sandra: However, when the same author did a search for the Great Barrier Reef, the top stories presented by Google were some from the Sydney Morning Herald around the coral crisis and from Wired magazine talking about the crisis of the Great Barrier Reef, but another story was from Breitbart News saying that the coral reef is still not dying, that nothing is happening and that this is all a great conspiracy. So the idea of what is a point of view vs. what is probably complete nonsense...

Kai: Because it just goes against all the science that we have on the topic.

Sandra: Is it irresponsible for Google to attach some kind of implicit credibility to a story that is pushing these things around the coral reef?

Kai: Which goes back to the old problem that the algorithm does not really understand the intention that goes with searching for a particular topic and also that it cannot really distinguish between real news, fake news, between scientifically sound facts and just opinion or propaganda.

Sandra: So where does this leave us? First there is a huge problem associated with bias in algorithms and it has a number of consequences some of which we spoke about on Q&A that have to do with how we hire people or how we grant people parole. But there is this whole other range of consequences of bias in algorithms. Second is the language that we use to talk about this. We talk about faulty algorithms doing the wrong thing.

Kai: So we anthropomorphise these algorithms as if they had agency, as if they were actors that would make those decisions and therefore would make mistakes or apply the wrong judgement. And incidentally that allows us to absolve ourselves to just point to the algorithm as the actor who made the mistake.

Sandra: But it is our job, or indeed some of these companies' job, to get the thing right.

Kai: Yes, but here I want to interject. What does it mean for these companies to get things right? What are they trying to do? What are they optimising? If we look at what Facebook does, essentially they're in the business of connecting everyone, of creating engagement on the platform. They're not really in the business of providing balanced news. What they are optimising is clicks, ad revenue, connecting more people, because that leads to more clicks, shares and ad revenue. The problems of fake news or bias and imbalances are basically a sideshow for them - an unfortunate side effect of the thing that they're trying to do, of creating more connections and engagement. It is something that they have to deal with, but it's not their purpose to actually be a balanced news outlet. And neither is Google actually doing this. For them it's much the same: it's actually about advertising, and you drive advertising by exploiting people's world views and preferences and, yes, biases. The problems that we're discussing are emergent side effects that they have to deal with, and they do this by layering filters of people and other algorithms that try to weed out the most obvious problems.

Sandra: So are you saying that because it's not these companies' job, it absolves them of any responsibility?

Kai: Absolutely not. That's not at all what I'm saying. What I'm trying to say is that we need to understand what they're trying to do, to then realise how these problems come about, and maybe ask the question whether they are actually optimising the right thing - whether we actually want those platforms, which have become the Internet for some people who spend most of their online time on platforms like Facebook. Whether we need some form of at least awareness in the first instance, or regulations, or some standards that will provide incentives for these companies to actually deal with the problem, not as something that happens after the fact, but by removing the systemic issues that create the problem in the first place.

Sandra: So at the very least we need to talk about these issues, have a public conversation about them and be aware that they are happening.

Kai: And I'm sure we will have to revisit this issue because those problems are not going away.

Outro: This was The Future, This Week, made awesome by the Sydney Business Insights team and members of the Digital Disruption Research Group. And every week right here with us is our sound editor Megan Wedge, who makes us sound good and keeps us honest. Our theme music was composed and played live on a set of garden hoses by Linsey Pollak. You can subscribe to this podcast on iTunes, SoundCloud, Stitcher or wherever you get your podcasts. You can follow us online on Flipboard, Twitter or sbi.sydney.edu.au. If you have any news that you want us to discuss, please send it to sbi@sydney.edu.au.
