This week: two bots talk trash, modular design and drivers gaming Uber. Sandra Peter (Sydney Business Insights) and Kai Riemer (Digital Disruption Research Group) meet once a week to put their own spin on news that is impacting the future of business in The Future, This Week.

The stories this week

Two bots talk trash and the world goes crazy

The modular future of design

Uber drivers ‘secretly colluding’ to cause price surges

Facebook manages to shut down chatbot just before it could become evil 

No, Facebook did not panic

Fear of a Robot Planet

The Uber study by researchers Mareike Möhlmann and Ola Henfridsson, from Warwick Business School, and Lior Zalmanson (New York University)

How Uber ‘psychological tricks’ drive drivers 

The gamification of UBER

Why work gamification is a bad idea

Our robot of the week:

Eagle Prime

Watch Eagle Prime in action here


You can subscribe to this podcast on iTunes, Spotify, SoundCloud, Stitcher, Libsyn or wherever you get your podcasts. You can follow us online on Flipboard, Twitter, or sbi.sydney.edu.au.

Send us your news ideas to sbi@sydney.edu.au

For more episodes of The Future, This Week see our playlists

Dr Sandra Peter is the Director of Sydney Executive Plus and Associate Professor at the University of Sydney Business School. Her research and practice focuses on engaging with the future in productive ways, and the impact of emerging technologies on business and society.

Kai Riemer is Professor of Information Technology and Organisation, and Director of Sydney Executive Plus at the University of Sydney Business School. Kai's research interest is in Disruptive Technologies, Enterprise Social Media, Virtual Work, Collaborative Technologies and the Philosophy of Technology.

Introduction: This is The Future, This Week on Sydney Business Insights. I'm Sandra Peter. And I'm Kai Riemer. Every week we get together and look at the news of the week. We discuss technology, the future of business, the weird and the wonderful things that change the world. OK let's roll.

Kai: Today in The Future, This Week: two bots talk trash and the world goes crazy, modular design and drivers gaming Uber.

Sandra: I'm Sandra Peter. I'm the director of Sydney Business Insights.

Kai: I'm Kai Riemer, professor at the Business School and leader of the Digital Disruption Research Group. So, Sandra what happened in the future, this week?

Sandra: Well the world almost came to an end when two AI-driven chatbots built by Facebook started talking to each other in a language that was no longer English but something they had invented. Luckily humanity stopped them before they could take over the world.

Kai: Yes, that was all over the news, and we're discussing an article in VentureBeat titled "New AI languages should be the least of our concerns". This is a really funny story that has created a lot of buzz, and panic really, in the media. The bots were part of an experiment to have two algorithms, two chatbots that converse in English, engage in a fairly bounded negotiation about something. Facebook was trialling whether pattern-matching algorithms could match human negotiators in a conversation. And it turned out yes, they can, because negotiations follow certain patterns that these algorithms can learn, and then they can converse back and be convincing in those conversations. The next step was that they put two of those bots together and had them negotiate with each other. And that experiment went off the rails when they started talking gibberish. The news that was then headlined was that they had invented their own language, and from there imagination took over and got the better of us.

Sandra: Exactly. So, we had fantastic headlines like "Facebook manages to shut down chatbot just before it could become evil", and some dramatic news about how they had developed their own intelligence and how we had to stop them. But the reality wasn't quite that, was it?

Kai: No, basically the experiment had failed and they just started talking gibberish. And so they turned them off.

Sandra: You mean gibberish?

Kai: Gibberish, gibberish. I'm German you know.

Sandra: Covfefe?

Kai: Covfefe. Yeah. Maybe that's what they said.

Sandra: So, really two things to note here. First, this was nothing new, right? Algorithms have been inventing languages for quite a while, and even Facebook's algorithms have been inventing languages for quite a while. So, nothing new, and it was shut down.

Kai: But also, you know, let's keep in mind they did not invent a language, they just used random words, right? How can they invent a language when they have no understanding of what they're talking about? They're just pattern-matching algorithms, right?

Sandra: And that is the point of why it was shut down: it failed as an algorithm, it failed to make sense, so we shut it down, we alter it and try again.

Kai: Yet the headlines all over this were really entertaining: "Facebook engineers panic, pull plug on AI after bots develop their own language". "Facebook shuts down AI after it invents its own creepy language". "Did we just create Frankenstein? The dangers of deferring to artificial intelligence could be lethal."

Sandra: Yes, well you know, that's the real problem right there: we're actually making machines angrier and angrier. I'm really sure they hold grudges and they'll come back when we put them back online. But now seriously. Two things: a) is this really our big problem?

Kai: Well, the article makes the point that no, we have more pressing problems than falling into panic over something that is a really trivial case. And the article lists some of those problems, which we won't reiterate here because we've discussed them in recent podcasts: how algorithms can be biased, how they need oversight, how we need to be aware of how things unfold, how they will displace people in certain jobs when put into the right place. All of those kinds of things. But that's not the question we want to ask.

Sandra: We want to look at why we are so afraid of chatbots and robots in general. All of these articles show a genuine fear of what these technologies could become, and it echoes things we've discussed before, with Elon Musk staving off the AI apocalypse and the robots that are coming to kill us all.

Kai: Yeah. And so, there was a recent article in Politico Magazine which ties in with this. It's called "Fear of a Robot Planet", and it asks the more general, profound question: why do humans on the one hand engage in building machines in their own image, and then turn around and panic about what they have created? So where does this ambivalence come from, that on the one hand we have this urge to build those machines, and on the other hand we're so afraid of what they might do to us?

Sandra: So, this is not really something new. We've struggled with both how to build these machines and the fear of what they will do to us for ages. If we look back, the article mentions the 16th-century clockwork monks. These were automatons built in Southern Europe around the 1560s that would walk and pray, and they are actually still on display at the Smithsonian. Or the tea-serving robots of the Edo period in Japan. Again, there was excitement about what they built, but there was also fear, followed by suspicion: were these evil creatures going against the will of God, and so on. And again, by the mid-18th century, French scientists built incredible machines, automatons that are still around today. They were amazing things, you know, lifelike ducks that would work, and they were displayed in Paris for thousands of customers to look at. Again they sparked a craze where we wanted to build more and more of these. But whilst being a symbol of what we could achieve, they were also a symbol of what terrified us most.

Kai: Humanity has long been obsessed with robots and lifelike machines, with rebuilding life in machines, and the artificial intelligence and robots that we're discussing every week on the podcast are really just the latest incarnation. At every point people have been fascinated, and they have been afraid, of the coming age where those machines will displace humans and rise up above us. So, this is nothing new. And even today we're pretty sure that the apocalypse is not imminent.

Sandra: Our whole thinking about the role of these machines, both in our lives and in our society, goes back a bit earlier. So, we're thinking Enlightenment here.

Kai: Yes. Ever since René Descartes, the father of modern philosophy, introduced his rationalistic philosophy and brought mathematics into thinking, which laid the foundation for the scientific method, we have been enamoured with technology and have come to think of ourselves as these disembodied rational minds, subjects that contemplate and reason about the world. That has really been the starting point for our thinking about the world in terms of mechanics and physics, and it has been at the heart of building these machines. But not only that, or more importantly, it has given us a self-understanding as rational beings that are reflective and analytic first and foremost. So, we're building these machines in our own image because we want them to be rational and to solve problems in mathematical ways. And we see this in our discussions about how we think algorithms are less biased in making decisions, how we entrust them with decisions that we want to be unemotional, based in calculus and in all these characteristics that do not suffer from human deficiencies.

Sandra: Those deficiencies you're talking about are really only seen as deficiencies because, since the period we're describing, with people like Descartes, Leibniz and Boyle, the idea was that there are complex but consistent rules that we can figure out, and once we know them we will be able to perfect ourselves or perfect the machines that we have. History's first generation of robots taught us something about the physical abilities of humans, and we learned not to see those things as deficiencies but to try to understand what makes us the way we are. So we built machines that were faster than us, machines that were more powerful, stronger. That was the first generation of machines. But there are other human abilities that we're exploring as we move on. These might be the artistic abilities that we have, the creative abilities that we have, and the new generation of robots is again stirring those same fears, but around a different set of abilities, of things that actually make us human.

Kai: The cognitive abilities. Yes, so we've always learned more about ourselves than about machines in building those machines. And I think that's the point here as well: the engagement with artificial intelligence will not create an artificial human, but it will let us learn about what makes us human in the first place. So, the point that we want to make is that there is this picture of the human who should aspire to be rational, but at the same time we have this long list of all the biases that humans consistently show, which means humans are characterised as deficient beings that never live up to the standard that Descartes has set. Which means it's an unrealistic standard. Humans are not these rational beings. Humans are emotional. Humans have imagination. They have many general abilities. And sure, they are not as good at mathematical thinking as machines are, but they have other qualities. So the point we're making is that the picture of the human that we embody in machines, as rational decision makers, is an unrealistic one, and we do not in fact make decisions based on pure rationality. Emotions play a big part in decision making, and we know this now from modern neuroscience. So, building artificial intelligence algorithms, as we are doing at the moment, is really an opportunity to learn about what makes us human: the qualities that we have in making decisions, being creative, imagining new worlds and making judgments about the world, things that those algorithms cannot do.

Sandra: And that is precisely why it seems to be so much harder this time around. We're challenging the very things that make us human. We need to think about ethical behaviour in those robots. Do we teach these robots ethics? We figure out that actually no, we can't, we can't even think about how we would teach them. But we are also allowed to take a step back and think: well, maybe ethics isn't something that you teach a robot, it isn't a quantifiable body of knowledge that we can teach it. Maybe the question of ethics does not rest in the robots themselves but rather in the human-robot unit.

Kai: Exactly. So, what we're learning is that things like ethics cannot be expressed in a set of rules, nor can we define a set of situations that we could have those robots learn. What we're learning is that human judgment is of a kind that cannot easily be taught to a machine. So, what we need to figure out is how we can come to utilise those machines in productive ways without feeling challenged by them, and without engaging in the ever-repeating fear of machines taking over and rising above us whenever two chatbots in some lab go off rambling in some unintelligible language.

Sandra: So, we here at The Future, This Week are absolutely sure we are not making them angrier and angrier every time we turn them off.

Kai: Covfefe.

Sandra: Covfefe.

Kai: Now our second story comes from Wired magazine, yet again. It's titled "The Nintendo Switch is the future of gadget design". It features an author who is really very fond of his new Nintendo Switch, and talks about how this gadget lets him attach, detach and reattach the controllers to the little display that is the Nintendo Switch, play on the go, then connect the Switch to the television. So, he's really in favour of this modular, very flexible design, and then goes on to talk about how this is really the future of design for gadgets like smartphones and gaming consoles. And so, the whole article is about modularity.

Sandra: And on first sight this is a very appealing type of design, right? And by no means is Nintendo the first one to go at it. Google for a very long time ran a project called Project Ara. This was a modular phone, very pretty, extremely pretty, that almost became a real product at some point, and then it went away a lot quicker than it had arrived. Motorola had the Moto Z with a bunch of snappable accessories, and the same with LG's G5, another failed modular phone. Facebook is supposedly working on a similar project. So this seems to be a fantastic idea for the future, modular gadgets. What's the appeal?

Kai: The appeal is that as a customer I don't have to replace my gadget every time I want new functionality: I can attach and reattach things, I can upgrade parts, and so it would actually extend the lifespan, and I could also personalise it to my own needs. So that's the idea, the promise of modularisation. But we see a few problems.

Sandra: First off, there's something telling in the fact that most of these products have quietly died away. It seems appealing that I might be able to construct my own phone, or my own console, or anything else. And let's remember the sustainability argument: I would only upgrade my camera if I only needed a new camera, which is something many of us effectively do with our mobile phones when we buy a whole new phone just because we want the new camera. This is very appealing. But let's think about it: a) how many times have you built your own computer?

Kai: I did it a lot, but I did it back in the day when I was a student, right? I had a lot of time, I didn't have money, but I had time. I don't have time now, and what I like about my gadgets is that they are simple, they just work. I don't have to think about, you know, what camera do I want, do I want to upgrade my camera? I don't have to concern myself with these things; they should work out of the box. And I think simplicity, in a world that grows increasingly complex on us, is appealing, so that would fly in the face of modular design.

Sandra: The second thing flying in the face of modular design is the economics behind it. Someone would need, as in the case of Google's Project Ara, to build the chassis for all of these things to fit in. Now if we are arguing for fully modular design, that means that if I'm, let's say, a small company in China that wants to build a much cheaper camera, one that might not be as good but is extremely cheap, I could build a module for that. So what's the incentive for Google to build a chassis that I can use for five or ten years and just attach cheap third-party modules to? In the case of something like the App Store, yes, it makes perfect sense for Apple to build a platform on which others can put small modular bits, and to create a business model around that. But in the case of a physical product...

Kai: Well, it harks back to the difference between hardware and software, right? Software is easily done; it can be developed iteratively, in an agile manner, and it comes with much less capital expenditure. But any time we introduce modular design into our products, we are creating interfaces, and interfaces are error-prone. When I need to plug and play these modules together, I'm creating opportunities for failure, but I'm also locking myself into these interfaces: when I want to change the interface, I lose the compatibility with all the modules that are already on the market. And let's not forget that with all the various different modules, I'm creating supply chain complexity. The reason we can build advanced gadgets like the iPhone at scale is because the supply chain works seamlessly; Apple is very good at creating a supply chain that works, and so is Samsung, for example. The more modules we need to create, the more complexity we have in the supply chain, and we need inventory for all these modules because we don't know which ones customers will buy.

Sandra: So, the sustainability argument does not necessarily hold water. This is not going to be a more sustainable way to have gadgets. The business model is not there. Apple wants me to buy a new phone rather than just replace the camera in my old phone.

Kai: But also, let's not forget that most people on-sell their phones, right? It's not like I'm putting it in a drawer or throwing it away; I might put it on eBay. So phones are not necessarily out of use just because I'm buying a new one. And the fact that I might have all these modules and a more complex supply chain might introduce waste within the supply chain. So, the sustainability that I might gain at the customer end I might lose in the supply chain.

Sandra: So rather than having a physically modular design you could achieve the same modularity in different ways.

Kai: And if we look at the article, most of what the author is really fond of is that the device interacts with other devices, and we can achieve that at the software level by making devices such as phones, tablets and gaming consoles talk to each other, as we already do. And we can achieve this by going precisely the opposite way: by having a tightly controlled ecosystem like the one Apple is building, where they can introduce handover between devices such as the computer, iPad and Apple TV. They can make those devices talk to each other because they control the entire ecosystem, and they keep it simple and not modular. So I think software is the way to go, and we've seen this in the success of apps, app stores and APIs, and in the way most compatibility and interfacing has been achieved at that level.

Sandra: So, we are unlikely to build our own phones or our own gaming consoles.

Kai: Speaking of gaming, our last story is about Uber drivers gaming the Uber system.

Sandra: This has been very widely reported in the news over the last week, the gamification of Uber, including, not surprisingly, in the Herald Sun.

Kai: Yes, so we picked this article from the Herald Sun, titled "Uber drivers gamed the system for bigger fares by using WhatsApp", because it has such a hyperbolic take on the topic. It reports that an underground network of Uber drivers is using the messaging app WhatsApp to trigger surge pricing, inflicting higher fares on passengers. So, the article makes it sound like a conspiracy. It talks about one Uber driver as 'the godfather of Uber surge', who would study the network and then alert other Uber drivers to engage in these colluding behaviours. So they're really painting a dark picture; they make it sound like Uber drivers are defrauding passengers.

Sandra: Uber drivers seem to be gaming the system, tricking the algorithm. One of the examples given is that Uber drivers organise these mass switch-offs: they pick an area, organise on social media, and everybody turns their Uber app off at the same time. It seems like there are no Ubers in the area, the price surges, and then they all turn their Uber apps back on and take advantage of the higher prices. Another one is with Uber Pool, where they only take one passenger, and so on and so forth.

Kai: They're basically making use of features that, controversially really, Uber has introduced to charge a higher fare when there are not enough drivers. The rationale is: yes, we have this to encourage more drivers to go to this area or to start driving. But the net effect is that Uber charges more when there are fewer drivers, and so the drivers make use of that feature by pretending that there are not enough drivers and then coming back online to take advantage of the surge pricing. Equally, the pooling feature is not entirely uncontroversial.
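To make the switch-off trick concrete, here is a minimal sketch of a surge heuristic driven purely by the ratio of waiting riders to online drivers. The formula, the 3x cap and the numbers are illustrative assumptions only; Uber's actual pricing algorithm is proprietary and not public.

# Toy surge-pricing heuristic (illustrative assumptions only; not Uber's real algorithm).

def surge_multiplier(riders_waiting: int, drivers_online: int) -> float:
    """Scale the fare with the demand/supply ratio, capped at 3.0x."""
    if drivers_online == 0:
        return 3.0  # no visible supply at all: maximum surge
    ratio = riders_waiting / drivers_online
    return min(3.0, max(1.0, ratio))

# Normal conditions: 20 riders, 20 drivers online -> no surge.
print(surge_multiplier(20, 20))  # 1.0

# Coordinated switch-off: 15 of the 20 drivers log out of the app,
# so the system sees only 5 drivers for the same 20 riders.
print(surge_multiplier(20, 5))   # 3.0

# The drivers then log back in and collect fares at the surged rate.

On these assumed numbers, a coordinated switch-off by three-quarters of the drivers is enough to push the multiplier to its cap, which is exactly the effect the drivers in the article were after.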

Sandra: With Uber Pool, the option is that I, as a passenger, get the opportunity to ride along with other passengers. So you as an Uber driver have to pick me up, and then pick up another person, and another person. And to incentivize Uber drivers to do this, because economically it's not better for you as the driver, Uber says: on the first passenger I'll only take a 10 percent commission, as compared to the usual 30 percent.

Kai: Yes. But it still doesn't really add up for the drivers, because what could have been three separate fares is now only one fare with a slight benefit.

Sandra: The way drivers game this is that they accept the first passenger and then just log off.
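A quick back-of-the-envelope sketch shows the incentive. The flat $10 fare is a hypothetical figure; the 10 and 30 per cent commission rates are the ones mentioned in the discussion above.

# Hypothetical numbers to illustrate the Pool incentive discussed above.

fare = 10.0             # assumed flat fare per passenger
pool_first_cut = 0.10   # commission on the first Pool passenger (as above)
standard_cut = 0.30     # standard commission (as above)

# Regular trip: the driver keeps 70% of the fare.
regular_take = fare * (1 - standard_cut)        # $7.00

# Gamed Pool trip: accept the first passenger, then log off.
# The driver keeps 90% of the fare and skips the extra pickups.
gamed_take = fare * (1 - pool_first_cut)        # $9.00

# Full Pool trip with three passengers: more money per trip, but
# also the time and detours of two extra pickups and drop-offs.
full_pool_take = gamed_take + 2 * regular_take  # $23.00

print(regular_take, gamed_take, full_pool_take)

On these assumed figures, a driver who only ever carries the 'first' Pool passenger keeps 90 per cent of every fare without any detours, versus 70 per cent on regular trips, which is why accepting one passenger and logging off is the attractive move.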

Kai: This really makes it sound like the Uber drivers are disadvantaging the passengers. But let's look at the bigger picture here. This is based on an academic research paper by our colleagues from the University of Warwick, Mareike Möhlmann and Ola Henfridsson, who by the way will be here in Sydney in only a few weeks' time, and a colleague from New York University, Lior Zalmanson. They have actually gone out and interviewed Uber drivers, and they paint a very different picture. They say: 'we identify a series of mechanisms that drivers use to regain their autonomy when faced with the power asymmetry imposed by algorithmic management'.

Sandra: So, Kai, this is really academic speak for the drivers trying to regain some control, some autonomy, over their work.

Kai: Yeah.

Sandra: What happens here is that Uber uses these algorithms to oversee what their drivers do and to try to control their behaviour. Drivers are constantly tracked, their performance is evaluated and monitored, and they are nudged in certain directions. And drivers have virtually no access to the algorithms that drive their work.

Kai: No, it's opaque to them. But importantly, Uber keeps them apart as separate drivers and makes them compete. So they have started to collaborate, figuring out the rules of the game that Uber has invented and tries to keep opaque and hidden from them.

Sandra: If there is a perception that you are being treated unfairly, or not receiving all the information that you're due, you're going to plan and organise in such a way that you capture more value out of the system.

Kai: Exactly.

Sandra: We can have a closer look at what Uber does. There was a very interesting article in The New York Times at the beginning of this year looking at the psychological mechanisms and the ways in which Uber does this. And let's not forget, Uber is really the company we all seem to love to hate. Uber employs hundreds of social scientists and data scientists to try to optimise these things, and has experimented with a variety of techniques, many of them from gaming, many from psychology, looking at rewards to try to prod drivers into working a little bit longer or a little bit harder in certain areas. Sometimes these are very lucrative for the drivers themselves, so let's not forget that these algorithms allow drivers to extract more value out of the system as well. But sometimes they can make drivers work longer hours than they otherwise would, in places that are not as productive for the driver as they are for the organisation. And really, we're singling Uber out here, but of course even a company like Netflix will queue up your next video to get you to watch more, to binge watch, which is not necessarily the best thing for you but might be the best thing for Netflix.

Kai: So, what we're saying is that many of these platform companies are using data and gamification mechanisms to tweak the rules of the system, essentially creating a game from which, it is fair to say, they try to extract more value for themselves.

Sandra: And the gamification conversation is an interesting one indeed. You've written a blog post earlier on gamification and whether it works, or indeed doesn't work, in organisations, and we'll make sure to link to that in the show notes.

Kai: Yes, so the point is that once you start gamifying work and turning it into a game, we shouldn't be surprised when people start playing this game, trying to figure out the rules, and then gaming the game, or gaming the system. And the bigger point that we want to make is that it's not entirely fair to blame the players, who are playing someone else's game and do not have access to the rules, when they try to figure out those rules and try to extract more value for themselves than they would otherwise be able to. Because let's face it, the whole point of the game is to make these players compete for resources. And when someone else gets to create the rules and keep them opaque, we shouldn't blame the ones who are trying to figure out those rules to get by.

Sandra: What we want to note here is that this entire conversation is still very much in its infancy, and this article is really a welcome start. There is a huge possibility that comes with the sharing economy, with the gig economy, where companies like Uber may actually decide at some point what sorts of norms or rules they want to abide by, so that whilst they're nudging workers, they are doing so in a clever way, with transparent algorithms and transparent practices. And as we do this, we are continuing to do research. Let's not forget, gamification research is in its very infancy. We are looking at different ways to design these systems. Even in the case of Uber this has been quite limited, with badges or things that people compete for, but researchers are also trying to build in things like participation and inclusion.

Kai: Yes. As those systems become more prevalent in society, I think what we need to have is a discussion about transparency. Who sets the rules? Who gets to benefit? And then finding ways to actually balance the interests of all the stakeholders involved. After all, this has always been a hallmark of a functioning society, with employment relationships and unions and all the discussions that come with balancing fairness in those systems. The more we employ technology and data, the more we're black-boxing those effects, so the important discussion that this research has started is around how we can balance the interests of those involved. And now, to cheer us up...

Audio: Robot of the week.

Audio: "Radio check, this is Eagle Prime, command are you reading. Affirmative Eagle Prime. Command is reading. We have just uploaded the latest MK3 class firmware, so she's going to feel a little more frisky today. You are clear to power on."

Sandra: So that was Eagle Prime, the very first giant robot on The Future, This Week. Two years ago, a group of American roboticists at MegaBots issued this challenge to their Japanese counterparts.

Kai: 'Suidobashi! We have a giant robot. You have a giant robot. You know what needs to happen.' Team USA.

Sandra: Finally, later this month, we will actually see the two biggest robots in the world go metal on metal.

Kai: Face off against each other. Crossing swords or whatever they have at their disposal.

Sandra: I think they have guns, chainsaws, all sorts of equipment.

Kai: They look hideous.

Sandra: And this promises to be fantastic.

Kai: A bit like the Transformer movie, probably with a little less coordination and a little more scrap metal at the end.

Sandra: But there's definitely fun to come, with a 16-foot-tall, two-and-a-half-million-dollar bot taking on its counterpart.

Kai: So if you are asking the question 'why?', we are asking the same question. But we will keep you updated.

Sandra: And that's all we have time for this week.

Kai: Thanks for listening.

Sandra: See you next week.

Outro: This was The Future, This Week made possible by the Sydney Business Insights team and members of the Digital Disruption Research Group. And every week right here with us our sound editor, Megan Wedge who makes us sound good and keeps us honest. You can subscribe to this podcast on iTunes, SoundCloud, Stitcher or wherever you get your podcasts. You can follow us online, on Flipboard, Twitter or sbi.sydney.edu.au. If you have any news you want us to discuss please send them to sbi@sydney.edu.au.

 
