The Future, This Week 13 October 2017

This week: nudge nudge Nobel Prize, the internet of creepy things, and Facebook enlists Wikipedia. Sandra Peter (Sydney Business Insights) and Kai Riemer (Digital Disruption Research Group) meet once a week to put their own spin on news that is impacting the future of business in The Future, This Week.

The stories this week

Why Richard Thaler won the 2017 economics Nobel Prize

The eerie Google Clips camera

Facebook enlists Wikipedia

The problem with Nobel prizes and the myth of the lone genius

Thaler’s Save More Tomorrow paper

Thaler and Sunstein’s book Nudge

Wharton's Katherine Milkman discusses the awarding of the Nobel Prize to Richard Thaler

Big Think interview with Richard Thaler

Thaler wins Nobel Prize

Nudging, what works and why (not)

Johnson and Goldstein paper on default donation

Dan Ariely’s TED talk

Policymakers around the world embracing behavioural science

Low organ donation rates in Australia

NSW Government Behavioural Insights Unit

Mattel thinks again about AI babysitter 

Google’s new Earbuds instantly translate 40 languages

The Internet of Things is sending us back to the Middle Ages

The Future, This Week 06 October 2017

The Future, This Week 28 April 2017

The Future, This Week 18 August 2017

Linsey Pollak, learn how to make carrot instruments

Robot of the week

The Stormtrooper Robot

Join us and other University of Sydney researchers as we take education out of the classroom and into a bar

Raising the Bar Sydney
25 October 2017
20 Talks, 10 Bars, 1 Night


You can subscribe to this podcast on iTunes, Spotify, Soundcloud, Stitcher, Libsyn or wherever you get your podcasts. You can follow us online on Flipboard, Twitter, or sbi.sydney.edu.au.

Send us your news ideas to sbi@sydney.edu.au

For more episodes of The Future, This Week see our playlists

Dr Sandra Peter is the Director of Sydney Executive Plus at the University of Sydney Business School. Her research and practice focus on engaging with the future in productive ways, and the impact of emerging technologies on business and society.

Kai Riemer is Professor of Information Technology and Organisation, and Director of Sydney Executive Plus at the University of Sydney Business School. Kai's research interest is in Disruptive Technologies, Enterprise Social Media, Virtual Work, Collaborative Technologies and the Philosophy of Technology.

Intro: This is The Future, This Week, on Sydney Business Insights. I'm Sandra Peter and I'm Kai Riemer. Every week we get together and look at the news of the week. We discuss technology, the future of business, the weird and the wonderful, and the things that change the world. OK. Let's roll.

Sandra Intro: Nudge, nudge the Nobel Prize, the internet of creepy things and Facebook enlists Wikipedia.

Sandra: I'm Sandra Peter, I'm the Director of Sydney Business Insights.

Kai: I'm Kai Riemer, professor at the Business School and leader of the Digital Disruption Research Group.

Sandra: So Kai, what happened in the future this week?

Kai: Our first story is from The Conversation, "Why Richard Thaler won the 2017 economics Nobel Prize". So Richard Thaler is a behavioural economist. He won the 49th prize in Economic Sciences, which is commonly known as the Nobel Prize for Economics. Sandra, you're an economist by training. What did he win it for?

Sandra: So Richard Thaler won it for his contributions to behavioural economics, and he was one of the key proponents of the idea that as humans we do not act entirely rationally all of the time. He took on a lot of the assumptions that come out of economics and the finance literature, where we assume that people will act perfectly rationally in all circumstances and will try to maximise their benefits, maximise their expected profits. Let's remember behavioural economics was already recognised in 2002 with the Nobel Prize for the work of Daniel Kahneman, which we've covered on this podcast previously. Richard Thaler is another big name in the field, who brought us things like mental accounting, how bad we are at long-term planning, and ideas such as the nudge. But here is Katherine Milkman from Wharton explaining Richard's contributions to the field.

Kai: And this is from the Knowledge@Wharton podcast.

Audio: The standard economics makes assumptions about the rationality of all of us and essentially assumes that we all make decisions like perfect decision making machines, like Spock from Star Trek who can process information at the speed of light and crunch numbers, come up with exactly the right solution. In reality that's not the way humans make decisions. We often make mistakes and Richard Thaler's major contribution to economics was to introduce a series of predictable ways that people make errors and make it acceptable to begin modelling those kinds of deviations to make for a richer and more accurate description of human behaviour in the field of economics.

Sandra: So let's look at two instances of how Thaler's behavioural economics plays out. The first is this idea of mental accounting, around how bad we are at long-term planning. What Thaler says is that we can view each individual as really two minds, two people: one side that plans and looks at the long term, and the other that actually goes about daily life doing things. And these two people have very different mindsets.

Kai: Absolutely. And it kind of resonates with Kahneman's System One and System Two. System One is the one that gets on with things in the world, where we just go about our daily business, and System Two is the side of us that is reflective and thinks about the world. So take a typical New Year's resolution: we might decide that we could be fitter, so we take out a gym membership in January, and we have the good intentions and the plan to go to the gym every week.

Sandra: And that's the planner side. It makes perfect sense to me why this would improve my life and why this is a good idea. So I buy a one-year gym membership.

Kai: And then once the holidays are over we get back into our daily routines. We go back to the office and the do-er side takes over, and we just go about our business and there is really no time to go to the gym. This is essentially how gyms make their money, right: people keep paying their membership, but the do-er side never really gets into the habit of going.

Sandra: So every single week when I actually have to turn up at the gym, I say well, it makes sense to postpone it because this week I am very busy and maybe I'll have time next week, and I've got this membership anyway, so I can really just postpone it to a time that is more convenient. But I never end up going.

Kai: So on the surface that seems irrational. We're paying for the gym but we're never going, so that does not make sense, right? How is it then that people are not acting according to their beliefs?

Sandra: And this plays out not only with gym memberships but also with financial planning. People left to their own devices, for instance, are really not that great at saving for retirement. It plays out with the foods we eat, where we are really not optimising for long-term health. Most of us will eat fast food, we will eat sweets, we will eat the nuts at the dinner party before having the dinner.

Kai: Yes, absolutely. We know it's not good to fill up on snacks before dinner or eat too much fast food, but in the moment the do-er side takes over and those beliefs do not play any role in making those decisions. But this is not all that Thaler did. He actually offered a solution, which he calls the "nudge".

Sandra: And "nudge" theory which might be familiar to some of you from Thaler and Cass Sunstein's book that is available at all good bookstores, Amazon and most airports which looks at nudging, at this idea that very small things can actually be used to influence people's behaviours. So this idea of nudging that works at the individual level where the person might not be doing these things but we can use small stimuli to get them to make a specific choice.

Audio: "What's a nudge? A nudge is some small feature of the environment that attracts our attention and alters our behaviour."

Sandra: So that was Richard Thaler explaining what a nudge is.

Kai: So this idea of the nudge has been taken up in product design, in policy design, in the design of forms for enrolling in things, and the simplest form of nudge is what you present as the first choice or the default option.

Sandra: There is a famous example around organ donation. Behavioural economist Dan Ariely uses it quite often to make the point about how we behave irrationally, and there is a wonderful paper in the social sciences by Johnson and Goldstein looking at organ donations across Europe. Percentages of organ donations across Europe vary quite widely, but you basically have two large camps. You have a number of countries where the percentages of organ donations are very low, places like Denmark and the Netherlands and the United Kingdom, where somewhere between 4 and 17 to 20 percent of people donate their organs. On the other hand you have a large number of countries that seem to donate a lot, almost at a hundred percent, places like Austria or Belgium or France or Hungary. And people have been looking at these charts trying to figure out what it is. Is it culture? Is it how much we care in society, how cooperative we are, how much we rely on our families and friends? But the curious thing, and this has been pointed out in the paper and picked up widely since, is the fact that we have countries like Sweden that donate a lot (86 percent) and countries like Denmark where only about 4 percent donate, and you could argue that these two countries are culturally very similar and share a similar history, yet there is still a very wide difference, and there are a number of other comparisons you can make along the same lines. It turns out that the only thing that is different is the way the forms are set up. In a country like Sweden, the default option when you register for your driver's licence is "I want to donate my organs", and the action you have to take is to tick the box if you want to opt out. In a country like Denmark the form is the other way around: the default is that you do not want to donate your organs, and if you would like to then you have to make an effort and opt into the program. That form alone is enough to make the difference between 86 percent and 4 percent.
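To make the default effect concrete, here is a minimal sketch in Python (our illustration, not from the Johnson and Goldstein paper) in which everyone has the same preferences and only a small, assumed fraction of people ever flip the pre-filled box; the registration rate is then driven almost entirely by which way the form defaults.

    # A toy simulation of the default effect: only a small share of people
    # actively flip the pre-filled consent box; everyone else signs the form
    # as presented. The 10% effort_rate is an assumption for illustration.
    import random

    def simulate(default_is_donor: bool, n: int = 100_000,
                 effort_rate: float = 0.1, seed: int = 42) -> float:
        """Return the fraction of people who end up registered as donors."""
        rng = random.Random(seed)
        donors = 0
        for _ in range(n):
            flips_default = rng.random() < effort_rate
            # Flipping inverts whatever the form pre-selected (XOR).
            donors += (default_is_donor != flips_default)
        return donors / n

    print(f"opt-out form (donor by default): {simulate(True):.0%}")   # ~90%
    print(f"opt-in form (non-donor default): {simulate(False):.0%}")  # ~10%

With identical underlying preferences, the two forms land at roughly 90 and 10 percent, which is the shape of the Sweden/Denmark gap.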

Kai: And the reason for this is that the form catches us in a situation where we're not really thinking about this decision, we're just getting on with getting our driver's licence. And so the effort of actually reading the sentence and making an active decision is too much for people to opt into the process. In Australia, by the way, this has been richly discussed in the past, most recently a year ago, and the government has made it clear that there is no intention at this point to change from the opt-in to the opt-out model, even though organ donation rates in Australia are far too low. Some other things, like an online sign-up system, were proposed, but at this point there seems to be no intention to follow this research.

Sandra: So what we can see here is that theories like nudge theory, which Richard Thaler is one of the proponents of, show that with very small nudges, very small stimuli, we can actually make huge differences in the world. That is why, for instance, we have the Behavioural Insights Team (BIT), a company that spun out of the British government in 2014 and remains in part publicly owned. The Behavioural Insights Team has been using insights from Thaler and Sunstein's work to try to change what policymakers do, not through taxes or laws, but rather through subtle nudges in the way people behave. That approach has spread to Australia as well: since 2012 we have had a Behavioural Insights Unit that sits within the New South Wales Department of Premier and Cabinet.

Kai: And those nudges can be simple things in the built environment, can be how you design a form, or, online, how you design websites. Of course those insights from Richard Thaler's research can be used by governments for good, or to increase policy compliance, but they can also be used by businesses to increase the effectiveness of advertising, click-through rates on websites, and generally just sell more shit. So we want to congratulate Richard Thaler on his Nobel Prize, but we also want to point out that behavioural economics is not without controversy.

Sandra: Of course one of the controversies that always surrounds behavioural economics, or in this case the subset that is behavioural finance, is around bringing a psychological perspective into fields like economics. So we have people like the late Merton Miller, who won the Nobel Prize in Economics in 1990 and used to point out that this is almost totally irrelevant to finance, that the entire field is actually a distraction. One of his most quoted statements was "that we abstract from all these stories in building our models is not because the stories are uninteresting but because they may be too interesting and thereby distract us from the pervasive market forces that should be our principal concern."

Kai: So in other words they disturb the simplicity and functioning of the economic modelling.

Sandra: So there are still people in economics who assume the homo economicus, who makes perfectly rational decisions.

Kai: And this view of homo economicus as a self-interested rational economic actor is quite useful when you create those models, and we can learn from those models. But the point that Richard Thaler and colleagues have made is that if economics wants to have a say in how people in actual situations in the world go about their business, it has to take into account human nature, the psychology of human decision making, and how humans are embedded in social situations; it is not enough to model and abstract away all those supposed irrationalities.

Sandra: So the first controversy around such a Nobel Prize is: have we gone too far with this, bringing psychology into a field that was "pure"?

Kai: And this is actually something that we see quite often when new paradigms emerge in scientific fields and this is not a new idea. Thomas Kuhn in The Structure of Scientific Revolutions has pointed this out.

Sandra: But we could also bring up the point that maybe this is not going far enough.

Kai: And I would come down on that side. I'm not an economist like you, Sandra.

Sandra: You were going to bring that up, weren't you.

Kai: There's no way I wouldn't.

Sandra: I'm firmly on Danny Kahneman's side if we're staking claims here.

Kai: Yes. It's no secret that you fairly and squarely come down on the behavioural economics side of things. So to me as an outsider, the idea of the rational homo economicus has never made much sense, because it seems so far-fetched that people would behave like that, and there's so little evidence for it in the phenomena that we study, for example how people use technology, how they behave on Facebook, how they use online media. So my point is one of puzzlement that the discovery of human nature and its introduction into economics deserves that much attention, and also at the uphill battle that people like Kahneman and Richard Thaler have had to fight in effecting change in their field. The reason why I think it's not going far enough is that we are still holding onto the idea that rationality is the yardstick. Katherine Milkman talked about how Richard Thaler discovered the mistakes people make in making decisions, so we still seem to assume that there is one right, objective way of making decisions, usually against an optimal, self-interested, monetary yardstick. We've discussed this in the context of bias and algorithmic decision making: we seem to assume that someone can decide what an objective, unbiased decision is, and we then judge humans against that. So my point would be that we have to actually rethink this yardstick and realise that all the flaws and deficiencies we commonly attribute to humans, like biases and emotion that clouds our rationality, are actually inherently human traits that have served us well as a species, and that maybe that kind of abstracted economic decision making is what is alien to human decision making. So the more we go into developing algorithms and employing machines in our decision making, the more Richard Thaler's insights come in handy, but we have to go further than that. We have to realise that people are social beings, part of social groups, which has an influence on how we act, and that we are also part of systems, and that some problems need to be approached systemically rather than by just nudging individuals.

Sandra: And that quite often there is benefit in acting irrationally. We see this in nature as well if we look at bee swarms. Bees will follow the bee that has found some food, and they will keep going to the field where there are flowers and gather that food until they eventually exhaust it. But we also have bees with inbuilt systems that make them behave irrationally and not go to where the food is (which would be in their interest) but rather go exploring and find new fields, so that they can find another source of food. These bees that act irrationally actually enable the hive to survive.

Kai: Which illustrates my point perfectly. What we call rationality is a matter of perspective and unit of analysis. For the individual bee it is rational to just go and exploit that food source. But for the social collective, for the system as a whole, that would be irrational, because it might put the viability of the whole system in jeopardy. So where these ideas in economics might not go far enough is that they haven't broken with the focus on the individual as the unit of analysis, nudging the individual's behaviour rather than thinking in bigger systems and social collectives.
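The bees are, in effect, running the exploration-exploitation trade-off familiar from reinforcement learning. Here is a toy epsilon-greedy sketch (entirely our illustration, not from the episode's sources): an agent that only ever exploits the best-known food source collects nothing once that source runs dry, while one that "irrationally" explores some fraction of the time keeps the system fed.

    # Toy foraging model: food sources deplete, so pure exploitation stalls
    # while a little "irrational" exploration keeps finding fresh sources.
    import random

    def forage(epsilon: float, n_sources: int = 10, steps: int = 500,
               seed: int = 7) -> int:
        rng = random.Random(seed)
        stock = [100] * n_sources      # units of food at each source
        known_best = 0                 # the source the swarm currently favours
        collected = 0
        for _ in range(steps):
            if rng.random() < epsilon:            # scout bee: pick a source at random
                source = rng.randrange(n_sources)
            else:                                 # worker bee: exploit the known source
                source = known_best
            if stock[source] > 0:
                stock[source] -= 1
                collected += 1
                known_best = source               # remember the last source that paid off
        return collected

    print("pure exploitation (epsilon=0):", forage(0.0))  # stalls at 100 units
    print("10% exploration (epsilon=0.1):", forage(0.1))  # collects far more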

Sandra: Which brings us to our last controversy around the Nobel Prize, which is the Nobel Prize itself. We don't want to go into whether the Nobel Prize in Economics is or isn't a real Nobel Prize.

Kai: But you just went there.

Sandra: I just went there. So the Nobel Prize in Economics started with a donation from Sveriges Riksbank, Sweden's central bank, in 1968, the bank's 300-year anniversary, and it was awarded for the first time in 1969. But it is awarded by the Royal Swedish Academy of Sciences in Stockholm, according to the same principles as the Nobel Prizes that have been awarded since 1901. So that is not the controversy I want to focus on. There is no controversy there.

Kai: No, the controversy is rather that the Nobel Prize is awarded to individuals, and we all know that scientific fields are carried by much more than individuals, by collective contributions, by standing on the shoulders of giants. And maybe more so in the natural sciences than in the social sciences, it is often entire teams, entire inter-university research collectives, that bring forward those ideas, rather than the one person who then goes to Stockholm to receive the prize.

Sandra: So the Nobel Prize can go to at most three people, but in most of these fields the contributions have been made by hundreds of people, attending conferences, hearing other people talk, having academic conversations around these ideas, and the ideas develop through the contributions of all of them. We've actually done a podcast previously where we looked at this myth of the lone inventor, the lone scientist. Even with stories where we quite often identify one person, for instance the discovery of penicillin, we spoke about Fleming and how he made this lucky mistake.

Kai: Steve Jobs singlehandedly invented the iPhone.

Sandra: But these stories were never true. Fleming might have had a lucky accident and come across penicillin in 1928 and published a paper on it the following year, but it took ten years for a couple of chemists to come across Fleming's paper and actually develop penicillin in quantities where they could experiment on mice, and it took a world war and government intervention to enlist 21 companies in 1943, during the Second World War, to produce enough penicillin to make it a viable treatment. It's always groups of people who make these discoveries, but so far we've only found ways to honour one person. But again, congratulations to Richard Thaler on winning the Nobel Prize in Economics.

Kai: Our next story is from Alphr. And it's called "That doesn't even *seem* innocent. Elon Musk calls out Google's eerie and invasive Clips camera."

Sandra: So on the fourth of October, at the Google event, Google released its latest innovation, called Google Clips: a tiny, wireless, pocket-sized camera with a very, very powerful lens and a sensor that works as a shutter button.

Kai: And this device was made specifically to sit unobtrusively somewhere in your living room and autonomously take photos of your family, your pets, your children as they go about their merry business. There's a powerful chip in the camera that runs an AI algorithm which detects faces and movement, and will supposedly learn over time to take really good pictures of your children or your pets or your family while they are not aware of being filmed, so that you capture them in those precious natural moments. And Elon Musk thinks it's creepy.

Sandra: Like many of us, he's probably having flashbacks to The Circle, both the book and the movie with Tom Hanks, where we were surrounded by a number of these objects that intruded on our privacy in every single moment of our lives.

Kai: So the idea of constantly being filmed and photographed is indeed slightly unsettling. And Google is painfully aware of the disaster it had with the Google Glass spectacles and how negatively people reacted to being filmed by people wearing those glasses.

Sandra: Google also needs to make an effort to reassure people that it is not automatically getting all of these pictures.

Kai: Yes, so the idea is that this is not automatically connected to the cloud; you are in charge of actually downloading the pictures from the camera. But we can of course envision all the "benefits" that might stem from automatic sharing, or from turning this into a security device that notifies you on your phone while you're not there...

Sandra: Or just leaving one behind in a meeting room or in an office or in a public space.

Kai: So I think what people find creepy is mostly that this thing will detect faces and autonomously take those pictures. But we want to take this story a little further, because there's another product that Google introduced at the same event which hasn't actually received...

Sandra:..the attention that the Google Clips has.

Kai: Yes, the Google Pixel Buds, new earbuds that play in the same league as Apple's new AirPods: wireless, with a particular design.

Sandra: But these will translate spoken language in real time. So this thing will listen to what you're saying to me in German, go to the Internet, and translate it back to me in real time.

Kai: Yes, so far the coverage around those earbuds marvels at the ability to have your spoken word translated. The way this works is that the Pixel Buds are connected to Google Assistant, Google's version of Alexa or Siri, which is of course connected to the Google cloud. It will capture the spoken word, turn it into text, send it to the Google servers where it is translated into a different language, and then, via speech synthesis, turn it back into speech, which is spoken back at you. Now, the problem here is of course that everything you or the other person in the conversation says is turned into text form and transmitted to the Google servers. So Google has a complete transcript of all the conversations that happen via its earbuds' translation feature.
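As a rough sketch of the pipeline Kai just described: the function names below are hypothetical stand-ins, not Google's actual API, and the stubs merely simulate each stage so the data flow runs end to end. The point is that the spoken word leaves the device as plain text and is translated, and therefore visible, server-side.

    # Illustrative stubs only -- not Google's API. The shape of the pipeline
    # is the point: speech -> text -> cloud translation -> synthesised speech.

    def speech_to_text(audio: bytes, lang: str) -> str:
        return audio.decode()          # stub: pretend the audio is already transcribed

    def cloud_translate(text: str, src: str, dst: str) -> str:
        # Stub phrasebook standing in for the real cloud service. In the real
        # pipeline, this call is where the provider receives the transcript.
        phrasebook = {("de", "en"): {"Guten Morgen": "Good morning"}}
        return phrasebook[(src, dst)].get(text, text)

    def text_to_speech(text: str, lang: str) -> bytes:
        return text.encode()           # stub: a real system would synthesise audio

    def translate_conversation(audio: bytes, src: str, dst: str) -> bytes:
        text = speech_to_text(audio, lang=src)
        translated = cloud_translate(text, src, dst)  # the transcript leaves the device here
        return text_to_speech(translated, lang=dst)

    print(translate_conversation(b"Guten Morgen", "de", "en"))  # b'Good morning'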

Sandra: This is not to say that there aren't huge advantages to this. At Google's event the company showed accurate, nearly instantaneous translation from Swedish to English, and there are other companies working on this as well, companies like Baidu that are making tiny translators for travellers that double as wifi hotspots. So it's wonderful to have access to all the languages in the world.

Kai: And of course there might be limitations as to how nuanced and complex a conversation you can confidently hold by way of having it translated through this device. But this is not really the point we want to make; sure, the benefits of being able to talk to someone in a language you have no knowledge of are great. The point we're making is that for as long as these devices have to transfer every text to a centralised server, we're running into huge privacy and potentially security issues. As long as the translation cannot happen on the device itself and requires text transcription and transmission, we're making ourselves vulnerable to eavesdropping, we're running into security and privacy problems, and we have to trust the company that takes possession of all these transcripts to use them only for the sole purpose of translation.

Sandra: So as we add more and more devices and more and more capabilities to the Internet of Things, we reap all these benefits, but we also have to be mindful of the cost they come at.

Kai: Or indeed the risks that a world in which the things around us are all connected to the Internet brings with it. So I want to go to an article in The Conversation again, by Joshua Fairfield, a Professor of Law at Washington and Lee University, who's also written a book called "Owned: Property, Privacy and the New Digital Serfdom". He makes the point that if we think this all the way through, and we connect all those devices, fridges, cameras, intelligent assistants, drones, to the Internet, and we add software to all the devices that we own, we might actually end up with a system that resembles what he calls digital feudalism. He plays on the idea of ownership: the more software there is in physical devices, the more the manufacturers might argue that we do not really fully own those devices. We've previously discussed the John Deere example, where the tractor manufacturer argues that farmers shouldn't have the right to repair or fiddle with their own tractors, because they do not fully own the device: the software is only licensed and still owned by the manufacturer.

Sandra: We've covered this a couple of months ago. We will put the link to the podcast in the show notes.

Kai: So Joshua Fairfield makes two points. One is about privacy and security risks, and he uses an interesting example where a casino was recently hacked by people breaking into its Internet-connected fish tank and from there gaining access to its computers, able to download a whole slew of data from its systems. So the more connected devices we have around us, the more vulnerable we become to external attacks. But the more important point is one of control. The more these devices upload data about us, and the more these devices and their software are not fully owned by us, the more control over the world we live in, over our environments (and we're talking about the living room, the home environment; we've covered the Roomba vacuum robot previously) slips away from us: those devices are not actually fully under our own control. And he likens this to feudalism in the Middle Ages, where the king would own everything, there would be no individual ownership, and the peasants would just inhabit the environment, fully under the control and at the mercy of the king. He reckons that we are on track to a digital feudalism where a few corporations own the data and control the environments in which we live, in public spaces but increasingly also in private places.

Sandra: But recent developments actually show that there is some hope. We've seen pushback from customers even with big companies like Uber, which has now stopped tracking us after we get out of the car. It used to track us for five minutes, but the pushback from customers has led the company to change its mind and say no, we will stop doing this. And we've seen recent pushback with companies like Mattel and its very creepy Internet of Things babysitter.

Kai: A device called, and I have no idea why, Aristotle: a little Siri/Alexa kind of thing that sits on a table or a cupboard, comes with a camera, and was supposed to take care of looking after baby, transmitting video of baby to Mummy or Daddy's smartphone, but also engaging in soothing conversation with the baby, and incidentally also using the camera to see if the stockpile of nappies has depleted and reordering supplies. A digital baby butler sitting in the child's room.

Sandra: Singing lullabies to the baby, telling the baby bedtime stories, monitoring how they sleep.

Kai: For no apparent reason, people found this creepy. So in July Mattel replaced its Chief Technology Officer, and the new guy decided to abandon the whole project and canned the device.

Sandra: But that's not going to stop us nominating Mattel's Aristotle for our new award.

Kai: The Juicero Award, for the most unapologetic use of technology to solve a trivial user problem, resulting in hilarious outcomes. This is of course awarded in honour of the Juicero juicer, which, with lots of fanfare and about 120 million dollars in funding, had produced this ingenious device...

Sandra: That did a job a normal person could do just by squeezing the packet of fruit by hand, obtaining the same juice in less time.

Kai: And we have had previous fun on the podcast covering Juicero. We will link to the episode in the show notes. So this is our new Juicero Award.

Sandra: And this week's nominee is Aristotle. So finally, our last story for this week: Facebook taps Wikipedia to fix its fake news problem for them. This one comes from Mashable.

Kai: And it's really an update to a continuing story that we've covered on a number of occasions. It follows directly from last week's "the algorithm is innocent" story: Google and Facebook pointing to their algorithms as the culprit in the fake news issue around Facebook posts and the US election, in inappropriate ads that appear next to YouTube videos, or in Google inappropriately highlighting links to 4chan in its top stories in the aftermath of the Las Vegas shooting. So the idea that the algorithms frequently come up with things that are, let's say, less than optimal.

Sandra: Facebook tried to solve this problem last week as well. The solution they came up with was to hire another thousand people to manually remove or tag some of this content. This comes on top of the seven and a half thousand people they had previously hired in various rounds to fix similar issues around inappropriate content, including suicides and racist content and so on. This week we have a new solution.

Kai: So Facebook has noticed that there is one organisation that has successfully tackled the fake news, or fake content, problem, and that is Wikipedia: a not-for-profit community of more than 130,000 editors and millions of users who contribute to creating the online encyclopedia, which has a mechanism that works very well in weeding out problems and bias from its content base.

Sandra: So is this a case of "if you can't fix it, outsource it"? And where does the burden now lie? What Facebook has done is say that they are going to attach a small "i" (information) button next to the articles you see in the news feed; you'll be able to click on it and it will take you to the Wikipedia description of the publisher. You can now follow this button and read up more on who the source is, what their background is, and so on. So in a world where we are all struggling for attention, where we spend less than two or three seconds on videos and articles, the burden is now on us to go to a Wikipedia page and look up the relevant information.
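For a sense of what sits behind such a button: Wikipedia's public REST API already serves exactly this kind of publisher summary. A minimal sketch (ours, not Facebook's implementation) using the standard page-summary endpoint:

    # Fetch the lead summary of a publisher's Wikipedia article -- roughly
    # the text an "i" button could surface. Uses Wikipedia's public REST API.
    import requests

    def publisher_summary(title: str, lang: str = "en") -> str:
        url = f"https://{lang}.wikipedia.org/api/rest_v1/page/summary/{title}"
        resp = requests.get(url, headers={"User-Agent": "tftw-demo"}, timeout=10)
        resp.raise_for_status()
        return resp.json().get("extract", "No summary available.")

    print(publisher_summary("The_New_York_Times"))

Of course, as Kai points out in a moment, the summary is only as trustworthy as the community patrolling the article behind it.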

Kai: So Facebook in this way is not actually solving the problem; they're absolving themselves by putting the burden on the user, and actually on the Wikipedia community. And it raises two big questions. The first one is: will anyone ever click on those links? Will users understand this, and, as you say, when we are browsing through Facebook, will we actually want to leave the Facebook stream and go someplace else to read up on the publisher? Is this even realistic? But also, what happens if this catches on and the publisher information on Wikipedia becomes the source for weeding out the fake from the real?

Sandra: Let's not forget Wikipedia is not perfect. It's done really well at curbing fake news, but it relies on volunteers, real people who have some time to spend on this but not all the time in the world. So what happens if we redirect the burden to this not-for-profit community?

Kai: So I could just create a Wikipedia article for a fake publisher, make it sound really credible, and then propagate my news in the Facebook stream; when you click on it you'd see this great information on Wikipedia, and now it is up to the moderators to patrol and monitor all the things popping up in their community in order to prove that what happens on Facebook is legitimate. So we'd be pushing the burden onto a community that is not actually equipped for this, and whose purpose is not to solve Facebook's fake news problem. Which takes us back to the real problem, which is the business model that companies like Facebook are built on.

Sandra: As long as the business model relies on me clicking on more and more things the problems inherent in these algorithms are not easily solved or addressed.

Kai: So is this just a media stunt where Facebook can say, we have done something, there is this button now? Because if the business model is to work, Facebook has no incentive to make users click on these links and leave the Facebook stream, where they receive information and ads and where Facebook can actually monetise the user. So is this just something they do to be seen to be fixing the problem, or will it actually work? That's the question here, which we have to leave open...

Sandra: Because there is one more internet connected thing that we want to talk about today.

Audio: Robot of the week

Kai: So this is for all the Star Wars fans out there. A little Stormtrooper robot that uses augmented reality and facial recognition to do what, Sandra, exactly?

Sandra: Um, not much: patrol around your room. The good news is that the robot operates on a closed network, so it won't start taking commands from anyone online. There is no data or...

Kai: Or so we hope.

Sandra: Or so we hope. There is no personal information saved to the robot or the companion app and no data saved to any of these devices.

Kai: But the facial biometrics in the robot can create a database of up to three faces that it can keep track of, so you can program customised reactions to certain people, your partner, your children or your parents, and have the little Stormtrooper tell them off in a customisable fashion. So we think this is a great idea, and I can think of many uses around the office.
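As a toy sketch of that "three faces, customised reactions" behaviour (entirely our illustration, not the toy's actual software):

    # Toy model of the robot's face database: at most three enrolled faces,
    # each mapped to a customised reaction; unknown faces get a default line.
    MAX_FACES = 3
    reactions: dict[str, str] = {}    # face label -> customised reaction

    def enrol(face: str, line: str) -> None:
        if face not in reactions and len(reactions) >= MAX_FACES:
            raise ValueError("face database full: only three faces can be tracked")
        reactions[face] = line

    def on_face_detected(face: str) -> str:
        return reactions.get(face, "Halt! You are not on my list.")

    enrol("kai", "Move along, Professor.")
    enrol("sandra", "These are not the droids you are looking for.")
    print(on_face_detected("kai"))        # customised reaction
    print(on_face_detected("intruder"))   # default telling-off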

Sandra: And that's all we have time for today. We're off to watch the new trailer for The Last Jedi. This is definitely the Droid you were looking for.

Kai: Thanks for listening.

Sandra: Thanks for listening.

Outro: This was The Future, This Week, made awesome by the Sydney Business Insights team and members of the Digital Disruption Research Group. And every week, right here with us, our sound editor Megan Wedge, who makes us sound good and keeps us honest. Our theme music was composed and played live from a set of garden hoses by Linsey Pollak. You can subscribe to this podcast on iTunes, Soundcloud, Stitcher or wherever you get your podcasts. You can follow us online on Flipboard, Twitter or sbi.sydney.edu.au. If you have any news that you want us to discuss, please send it to sbi@sydney.edu.au.
