This week: a bumpy road ahead, missing out on AI glory, and long haul just got longer in other news. Sandra Peter (Sydney Business Insights) and Kai Riemer (Digital Disruption Research Group) meet once a week to put their own spin on news that is impacting the future of business in The Future, This Week.

The stories this week

Uber’s fatal crash shows the folly of how we test self-driving cars

The US, China, JEDI and the race for technological supremacy

Uber’s collision and Facebook’s disgrace

Supplier says Uber disabled Volvo

Toyota pays $1.3 billion for cover-up of safety issues that caused cars to accelerate even as drivers tried to slow them down

Kai-Fu Lee on how the list of countries that will benefit from the AI revolution could be exceedingly short

Full interview with Kai-Fu Lee

Qantas and the University of Sydney’s Charles Perkins Centre research partnership to transform air travel health and wellbeing

Future bites

The first Qantas non-stop flight between Australia and the UK has touched down in London

We need to talk about the data we give freely and why it’s useful

Alibaba’s car vending machine in China gives free test drives to people with good credit scores

WOOLF University, the uberversity on the blockchain


You can subscribe to this podcast on iTunes, Spotify, SoundCloud, Stitcher, Libsyn or wherever you get your podcasts. You can follow us online on Flipboard, Twitter or sbi.sydney.edu.au.

Our theme music was composed and played by Linsey Pollak.

Send us your news ideas to sbi@sydney.edu.au.

Dr Sandra Peter is the Director of Sydney Executive Plus at the University of Sydney Business School. Her research and practice focus on engaging with the future in productive ways, and the impact of emerging technologies on business and society.

Kai Riemer is Professor of Information Technology and Organisation, and Director of Sydney Executive Plus at the University of Sydney Business School. Kai's research interests are in Disruptive Technologies, Enterprise Social Media, Virtual Work, Collaborative Technologies and the Philosophy of Technology.

Disclaimer: We'd like to advise that the following program may contain real news, occasional philosophy and ideas that may offend some listeners.

Intro: This is The Future, This Week on Sydney Business Insights. I'm Sandra Peter and I'm Kai Riemer. Every week we get together and look at the news of the week. We discuss technology, the future of business, the weird and the wonderful and things that change the world. Okay let's start. Let's start.

Sandra: Today in The Future, This Week: a bumpy road ahead, missing out on AI glory, and long haul just got longer in other news. I'm Sandra Peter the Director of Sydney Business Insights.

Kai: I'm Kai Riemer professor at the Business School and leader of the Digital Disruption Research Group. So Sandra what happened in the future this week?

Sandra: We're continuing the saga of the fatal Uber crash that happened last week. Our first story is from Wired, titled "Uber's fatal crash shows the folly of how we test self-driving cars". The story tries to unpack how we think about testing self-driving cars, going through not only the technicalities of how we currently test them but also some of the logic behind understanding the difference between an autonomous vehicle and one that actually has a driver.

Kai: The article features Martyn Thomas, a professor of information technology at Gresham College, who discusses how Uber and their competitors test their self-driving cars in real-world settings and compares it with how we typically and scientifically test new technology.

He explains that in a scientific experiment you have a hypothesis and you try to prove it. At the moment, all these companies are doing is conducting a set of random experiments, with no structure around what exactly they need to achieve or why these experiments would deliver it.

Sandra: So let's unpack a little bit what you would actually be testing when you're testing autonomous vehicles.

Kai: So the main point of a test is usually to establish some form of causality. You have a certain expectation of what ought to happen when you put a technology to use, and then you compare that with what actually happens.
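To make that concrete, here is a minimal, purely illustrative sketch (in Python, with every name and threshold invented) of the conventional engineering test Professor Thomas has in mind: a fixed input, a hypothesised output, and a check that the two match.

```python
def decide_action(scene):
    """Stand-in for the driving controller under test (purely illustrative)."""
    if scene["object"] == "pedestrian" and scene["distance_m"] < 30.0:
        return "BRAKE"
    return "CONTINUE"

def test_brakes_for_pedestrian():
    # Hypothesis: given this exact scene, the controller must decide to brake.
    scene = {"object": "pedestrian", "distance_m": 20.0, "speed_mps": 1.4}
    assert decide_action(scene) == "BRAKE"

test_brakes_for_pedestrian()
print("Test passed - but only repeatable if decide_action is frozen between runs.")
```

The assertion is only meaningful if the system under test behaves identically every time it runs; a controller whose internals keep changing undermines exactly this setup.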

Sandra: So in the case of the Uber vehicle that was involved in this crash, the vehicle used a combination of material technologies: sensors such as video cameras, lidar and radar. These technologies are used not only to identify objects but to measure the distance to those objects and to predict where those objects will be in space and time.

Kai: But crucially, the data collected via the sensors is fed into a set of self-learning, that is machine learning, algorithms...

Sandra: ...that process the data and then make a decision about how the car will behave in a specific situation.

Kai: Exactly.
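As a rough illustration of the pipeline being described - sense, fuse, predict, decide - here is a heavily simplified sketch. All class names, numbers and thresholds are invented; a production system fuses far richer data through learned models rather than a hand-written rule.

```python
from dataclasses import dataclass

@dataclass
class TrackedObject:
    distance_m: float   # range to the object, e.g. from lidar
    closing_mps: float  # estimated speed toward the vehicle's path

def fuse_sensors(lidar_ranges, estimated_speeds):
    """Hypothetical fusion step: pair range readings with speed estimates."""
    return [TrackedObject(d, v) for d, v in zip(lidar_ranges, estimated_speeds)]

def decide(objects, own_speed_mps):
    """Toy decision rule: brake if any object's time-to-collision is under 2 s."""
    for obj in objects:
        closing = own_speed_mps + obj.closing_mps
        if closing > 0 and obj.distance_m / closing < 2.0:
            return "BRAKE"
    return "CONTINUE"

# A pedestrian 25 m ahead moving into the lane; car travelling at 15 m/s (~54 km/h).
objects = fuse_sensors([25.0], [1.5])
print(decide(objects, own_speed_mps=15.0))  # 25 / 16.5 ≈ 1.5 s to impact -> "BRAKE"
```

In the real architecture the hand-written rule in `decide` is replaced by machine-learned models, which is where the testing problem discussed next comes from.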

Sandra: So in the case of this specific autonomous vehicle, we know that it was using video and lidar at the time of the accident, and we know that the car failed to brake before it hit Elaine Herzberg. We don't know yet whether that was because of a failure in the sensors, a failure in the lidar technology, or because the algorithm processed the data incorrectly, or whether it was simply an accident that could not have been prevented.

Kai: While the investigation is ongoing, Professor Thomas points out that with these self-learning algorithms we're not actually in a position to create the conditions under which testing is normally done. Because these algorithms constantly learn, they change all the time: as you drive these cars around and put them through different situations, the configuration that is being tested is changing constantly. He also points out that the hardware, the software, and all of the technology being deployed is updated and changed all the time. So he questions the extent to which these companies are actually able to reliably test, and therefore improve, the technology in a way that would live up to engineering standards.

Sandra: And it's in that improvement that the problem really lies. The algorithms embedded in these machines learn all the time. If an accident doesn't occur, we actually have no way of telling whether the algorithm got better or worse at what it was learning. Maybe it learned to observe fewer things in its environment because it had no accidents and no opportunity to correct itself. Maybe there are situations it still can't anticipate, but those remain buried in the algorithm, and we have no way of telling whether the car has improved or not.

Kai: Yeah. So let's stick with this point for just a second. First, we want to point out again that machine learning algorithms are black boxes: we do not understand exactly how they come to produce certain outcomes, which are then the basis for making decisions and automating the behaviour of the car, and they are changing constantly as they are presented with new data. Now, one thing I want to point out is that when we talk about these algorithms learning, we seem to imply that they are therefore improving. But what they're really doing is changing: they're adjusting the internal weights in a network of statistical values, the so-called neurons, and therefore coming to ever-changing adjustments of the outcomes they produce. Technically, those changes could introduce new problems. Who's to say that as the network is learning, or adjusting and changing, an improvement in one aspect of the car's behaviour might not override, and therefore effectively forget, other behaviour? So it's never quite clear how a car will react in a certain situation, or whether you're actually on a pathway to truly improving its behaviour across all possible situations. And that's at the heart of the issue here.
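Kai's point that "learning" only means "changing" can be shown with a toy example. In this deliberately contrived sketch (a single sigmoid neuron, invented features and labels), further training on a second case degrades the model's previously correct answer on the first - the kind of regression he describes, often called catastrophic forgetting.

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=3)  # weights of a toy single-neuron brake/no-brake model

def predict(x, w):
    return 1 / (1 + np.exp(-x @ w))  # sigmoid: probability of "brake"

def sgd_step(x, y, w, lr=2.0):
    """One stochastic gradient step on a single example (logistic loss)."""
    return w - lr * (predict(x, w) - y) * x

# Two situations with overlapping features but opposite correct responses.
pedestrian = np.array([1.0, 1.0, 0.0]); y_ped = 1.0   # should brake
plastic_bag = np.array([1.0, 0.9, 1.0]); y_bag = 0.0  # should not brake

for _ in range(50):                    # first, learn to brake for pedestrians
    w = sgd_step(pedestrian, y_ped, w)
print("pedestrian, before:", predict(pedestrian, w))  # close to 1.0 - correct

for _ in range(50):                    # then train only on plastic bags
    w = sgd_step(plastic_bag, y_bag, w)
print("pedestrian, after:", predict(pedestrian, w))   # drops well below 1.0
```

The second print shows the braking response for pedestrians quietly eroding, even though every individual update "improved" the model on the example it saw.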

Sandra: This is the major problem with testing autonomous vehicles. It is further complicated by the task regulators now have: as we've just outlined, it is really difficult to figure out the right criteria for testing or licensing autonomous vehicles, but regulators also have to balance this against providing enough incentives for these companies to continue to develop and invest in improving their technologies.

If you restrict too much too early, there is a risk that companies will not invest the significant amounts of money needed to get this technology off the ground.

Kai: There was actually an article in The Atlantic this week, titled "Uber's collision and Facebook's disgrace", which makes that argument and relates it to the Facebook disaster with Cambridge Analytica and the discussion that has broken out around Facebook's business model, which we covered last week. The article points out that in the case of Facebook and comparable companies such as Google, regulators deliberately went out of their way not to regulate privacy: providing companies access to user data if the user consents, but also absolving them of the responsibility of policing what is posted on their platforms, granting them wide-ranging freedom to take a step back and argue that, as platforms, they cannot be held responsible for what users do. The article makes the point that this is at the heart of some of the problems we see today. Was it such a good idea to grant free rein to these companies, and are we at the cusp of making the same mistakes by not regulating self-driving cars, as is currently the case in Arizona? We're already seeing lawmakers stepping in there and preventing Uber from testing its cars at the moment.

Sandra: So we have outlined two difficulties with regulating autonomous vehicles so far. One is how you actually know what you're testing, and test for the right thing. The second is how you do this in a way that ensures the safety not only of people in the autonomous vehicles but of pedestrians, who might not have signed up for these experiments, whilst not stifling innovation and the opportunity to develop newer, safer technologies. So let's take the Uber case again: we have a combination of video, radar, lidar, sensors and algorithms, all of which came together and may or may not have resulted in this accident.

There are the car manufacturers involved, the programmers involved, the company that owned the car and the company that operated it - you've got Volvo, you've got Uber. Who takes responsibility for the accident in the end?

Kai: I want to bring in an article from the L.A. Times, also from this week, which nicely illustrates that point. A company called Aptiv, the parts supplier behind Volvo's driver assistance system built into the XC90 involved in this accident, has come out and said: we had nothing to do with this, because Uber disables our technology, which would have prevented this accident, in favour of its own systems. At the same time, the maker of the sensors in that car has come out and said: we're baffled, we cannot understand it, our sensors certainly would have picked that up. The lidar maker came out and said: our technology did not fail, so we're not responsible, it must be Uber's algorithm. And two other companies, Waymo and Mobileye, developers of similar self-driving technology, said that with their technology this accident wouldn't have happened; Mobileye actually went as far as demonstrating with the video images how its algorithm would have picked up the person as she appeared in the image. So there are a number of companies coming out trying to absolve themselves of any responsibility, and others jockeying for competitive position, arguing 'we would have done better'.

So the question then remains: who is to blame? Is it Uber and its algorithm? Is it the person behind the wheel, who wasn't paying attention but should have intervened? Is it solely Elaine Herzberg, who was in a place where she shouldn't have been? Without an investigation we will not be able to answer those questions. But the fact that, whenever an accident happens, we have to ask those questions and come to a resolution of what caused it - one that lets us all feel comfortable and get on with our lives - points to a lot of potentially unresolved questions that we have to address as these technologies become part of our daily lives. So let's take a look at some scenarios.

Sandra: Let's suppose, in a hypothetical scenario, that an autonomous vehicle is driving down the road and someone decides to cross, but the algorithm fails to predict the speed at which they are doing so. An accident happens and the person dies. We know it was the algorithm's failure to correctly predict the position of the person. Who gets blamed?

Kai: So first of all, the question is: who owns the car? If this is a service like Uber that picked someone up and drove them, as in a taxi-like service, then surely the person being chauffeured wouldn't be to blame, so blame would be apportioned to either the service company or the manufacturer of the car. But what about the situation where the car I own reacts that way and kills someone, while, since it's a fully autonomous car, I'm sitting in the back seat reading my newspaper? Even before we fully comprehend the legal implications, this has economic consequences. If I, as the owner, am held responsible, why would I ever want to buy a car like this, which might behave erratically, given that, as we said earlier, we can never fully reliably test machine-learning-based algorithms? But if the manufacturer is held responsible, because it was their algorithm that supposedly caused the accident, what are the economics then? Will they ever want to put a car on the market when they can never fully guarantee that the self-learning algorithms won't make the wrong call in a certain situation? Or can they just account for this by setting aside a pot of money for the inevitable deaths that will occur when those algorithms malfunction from time to time? And is the public going to be mollified by the argument that overall we are causing x percent fewer deaths in society, but in every single case, sorry, the algorithm just made a mistake and we're all going to have to live with that? Are people going to be comfortable with that?

Sandra: You might argue at this point that there are still many failures with the cars we have now, and that we would come to some agreement along similar lines, as we do when there is a failure in airbag technology, in brake systems, or in the onboard computers our current cars have. We've had examples in recent years, such as the Toyota case, where cars accelerated even as drivers tried to slow them down, many people died, and the company paid compensation. Luckily this is not something that happens very often, but it still happens that car manufacturers cover the cost of malfunctioning equipment.

Kai: Yeah, but I still think this is qualitatively different, because today it's the human driving, and most of what goes wrong when an accident happens is apportioned to the human element: the human made a mistake. Are the car manufacturers prepared to take control, and therefore responsibility, for the entire process of driving the car? And mind you, when cars are out in the wild there are other cars, pedestrians, animals, children. Accidents will invariably happen. Sure, we might bring down the number of accidents, but accidents will happen. The question is: economically, morally and legally, are the manufacturers prepared to bear most of the guilt for the accidents that are going to happen?

Sandra: Or is there a case that we will see new types of insurance, new types of organisations that become intermediaries between the car companies and the owners, or new types of car ownership? It might be a subscription service where you get updated terms and conditions every few months that you never read yet agree to. Too early to tell.

Kai: Yeah, but I want to highlight a different point. At the moment we seem to discuss autonomous vehicles by ascribing a certain agency to these cars, talking about them as if they had some form of intention, and therefore some form of responsibility, because we compare autonomous vehicles with human drivers. I think that's a fallacy: they are no more than automated driving tools and can therefore not be held responsible, in a court of law or morally, because these cars are not making any of those decisions. So we have to find someone else to blame: the owner, the manufacturer, or the maker of some part of the technology. And I think we haven't quite discussed this all the way through to figure out what the implications are when these technologies come in. One likely reaction might be to say that the world out there is too messy for these things to function reliably, so we have to do something about that - which would mean that no, as a pedestrian you cannot walk in the streets anymore, because there are too many of these things cruising around, it's too dangerous, and we want to prevent accidents that way. So the question is, systemically, what will have to change, not just legally, morally or from an insurance point of view, but also in the built environment. And I think we ought to have that discussion sooner rather than later, which is what the other article pointed out: if we wait until the proverbial shit hits the fan, it might be too late to do something about it in a more comprehensive way.

Sandra: And we could wrap up the conversation here...

Kai: And Megan would certainly thank us for it.

Sandra: But I think all of these conversations have actually missed the critical point. We have started all of them by talking about regulation and how we regulate these things.

Kai: Today on The Future, This Week Sandra finds what's actually the point again.

Sandra: Isn't that what this show is about: the point?

Kai: Yeah, come on, make the point.

Sandra: We made the case that we wanted autonomous vehicles because most accidents are caused by human beings, and having something on the roads that would reduce fatalities would be a good thing. Yes, there's probably not enough evidence at this point to know whether there would actually be fewer fatalities. But again, I think this misses the point: this is not a conversation we should have solely on safety criteria, on the premise that we want these things simply because they are safer than we are as drivers. We should be asking why we want autonomous vehicles in our lives in the first place. Convenience is one aspect. But beyond the economic aspects there are social and environmental aspects that we seem to be missing altogether in these discussions. Let's say we are in the centre of a city where we have lots of pedestrians, cyclists going around, children playing in the street, coffee shops on the street, people crossing left, right and centre. Would that social and community environment be made better if there were fewer accidents and fewer fatalities, but all pedestrians had to keep to specific paths, cyclists weren't allowed on the road, and the entire community was reorganised to accommodate these autonomous vehicles?

Kai: So the question is: at what cost do we actually want to introduce self-driving vehicles into our environments, and what problems are they actually solving? If it is just to have a safe commute from A to B, we've already solved this problem. It's called public transport, right? And we can have fully automated trains, for example, which are pretty good and do not cause many fatalities.

Sandra: If we are optimising for long-haul transport, for instance with dedicated lanes on freeways and highways outside the cities, then we can prevent the type of accident we've seen here simply by designing certain stretches of road for autonomous vehicles.

Kai: And mind you, this is only really interesting for the parts of the country that rail doesn't already reach, because freight trains are probably more efficient at hauling lots of cargo from one place to another.

Sandra: So rather than only having blanket conversations about how we test and regulate autonomous vehicles, maybe it is also important to have conversations about why and where we want them in the first place.

Kai: I also want to point out that many people make the argument for the car on the basis of individual freedom: the freedom to go wherever I want to go, not being restricted by where public transport can go. The question then is: can autonomous vehicles actually provide that same freedom?

Sandra: And what's the cost to the community for allowing that individual freedom?

Kai: We're going to leave it here. This will come back as a story for sure. We're going to our second story, which comes from Bloomberg, titled "There are worries European technology will be left behind".

Sandra: So this story is about the global fight for the future of technology, basically the global fight for tech supremacy, especially in artificial intelligence. The article outlines the two big players: the US, through companies like Google, Amazon and Facebook, and China, through Baidu, Alibaba and Tencent.

Kai: It is generally accepted that those are the two countries that will divide up the global AI cake, so to speak, because they are fundamentally in a better position than other jurisdictions such as Europe. First of all, they already have a head start in developing AI. They also have access to large amounts of data: China because of the sheer number of people living there and the way in which these technologies are supported and pushed out by the state, and the US through technology companies such as Google and Facebook, which already have a global user base providing data by using those services as a central part of the Internet infrastructure.

Sandra: So the article tries to outline the difficulty in actually responding to the status quo you've just described, and the focus is on one initiative by the European Union - and you have to give it to the European Union for having the best acronyms out there. This is the Joint European Disruption Initiative, JEDI for short. So they do have a JEDI council, which is using the force to respond to the threats to global tech supremacy. Well, not the force exactly: it's actually using the former CEO of Deutsche Telekom, one of the first female astronauts, and winners of the Fields Medal - the equivalent of a Nobel Prize, but in mathematics - who come together to enable a different type of approach to tech supremacy. This is a group that looks at funding fundamental research projects in a very agile manner.

So think about research projects where you would not need to put forward a business case, where you would not need to spend months getting funding through various committees in the European Union, but where you can explore moonshot, breakthrough-type initiatives. The outline of this initiative actually spells out the real difficulties associated with getting ahead in this tech supremacy battle. Most of the conversations we have around developments in AI are had from either a product or service perspective - looking at Alexa, say, or at autonomous vehicles - or at a company level: we've spoken quite often on the podcast about the frightful five in the US, or the BAT in China.

Kai: So what we want to do is take this conversation to the level of nation states, or jurisdictions. There was actually another article in Quartz this week in which Kai-Fu Lee, founder and CEO of the venture capital firm Sinovation Ventures and former president of Google China, said that at the current rate most if not all of the benefits from AI will flow to two countries only, the US and China, precisely because it is their companies that are going to provide the bulk of AI-based services to the world's population. China is pressing into India and Africa, where US companies haven't built a stronghold yet, and we all know that Facebook, Google and the like have a strong grip on markets in the West: Europe, Australia and so on. So the point is that even though we as citizens of other countries, in Europe and Australia for instance, enjoy the benefits of services like Google and Facebook, our data, what is done with that data, and the income and benefits derived from it will flow to the US, because that is where the companies originate.

Sandra: So far the response from places like the European Union has been to prioritise regulation, privacy and similar concerns above giving free rein to even their own organisations to harvest enough data to become competitors in the space.

Kai: Yeah, but is that necessarily a bad thing? I ask because in the US we can see the election rigging and the problems of social polarisation that are at least in part attributed to platforms such as Google, YouTube and Facebook. Isn't it also the responsibility of a state, or a jurisdiction like the European Union, to protect its citizens from the fallout of oversharing of data and invasions of privacy? And shouldn't it be acknowledged that the EU maybe did not provide the conditions for services like Facebook and Google to grow and flourish in the first place, but that it has taken a different approach to this whole problem and valued the individual privacy of its citizens more highly than is the case in the US and China, for example?

Sandra: Unfortunately, at this point it also comes down to an argument about competitiveness. Look at the large organisations that have appeared in places where there was less regulation, and the size to which they have grown: remember, companies like Facebook and Tencent have more than a billion users, and they continue to harvest the data of those users. But at the same time they have been able to develop a range of products and services because of that access. If we take China, for example, access to frictionless payment throughout a very large country, for a very large population, has been enabled through such tech companies.

It also raises questions about the ability of organisations, should they ever appear in a place like the European Union, to become competitive against services offered out of places like China. The size of the population in these places also comes to the aid of Chinese or American companies, simply by giving them access to sheer numbers of people.

Kai: So this is an ongoing process, but I think it pays to lift the conversation to that level, because we have three very different approaches. We have a corporate-first approach in the US that gives free rein to tech companies to harvest and work with individual data. We have very much a state-based approach in China, where the state is using modern artificial intelligence, such as face recognition, to drive a large-scale social engineering experiment. And then we have the EU, which foregrounds individual privacy and the rights of its citizens. It remains to be seen which of those three approaches will win out, especially when seen against, on the one hand, the problems associated with artificial intelligence, which we discussed last week, and on the other, the promises attributed to these technologies in solving many pertinent problems, from climate change to health and public safety. So, an interesting space to keep an eye on.

Sandra: So let's finish off with our quick fire round: Future Bites.

Kai: So what's one thing you learned this week?

Sandra: So my short story is about the first non-stop flight from Australia, from Perth, to the UK, to London.

Kai: So actually a pretty long story.

Sandra: Yes that is true, over 17 hours long.

Kai: My goodness.

Sandra: Qantas has just completed its first journey from Perth to London. It was 17 hours long, and it set a new standard for how we think about long-haul flights.

Kai: For those interested: on a Boeing 787-9.

Sandra: And the University of Sydney is actually deeply involved in figuring out how this changes the way passengers travel. Remember, you have to be in your seat for a whole 17 hours, and a lot of people have agreed to share their sleeping patterns, their activities and their nutrition on these flights, to help rethink how we do long-haul travel. I'm really looking forward to trying it out. What was one of yours?

Kai: Mine was from The Conversation; it's called "We need to talk about the data we give freely of ourselves online and why it's useful", by Robert Ackland, an associate professor at the Australian National University, and it comes on the back of the Cambridge Analytica Facebook story. He's saying: wait a minute, this whole story puts into disrepute the gathering of social media data via APIs, which can actually be quite useful when done right, with proper ethical processes in place - which wasn't quite the case in the Kogan Cambridge Analytica episode. And even though people have asked why anyone should be able to access the data of a person's friends, he says this is precisely the kind of data that is useful for researching social networks and social network effects. Done right, those public APIs can serve a greater good, because they allow academics to research important social phenomena, and we shouldn't throw out the baby with the bathwater. If we close down all of these ways for the public to access this data, then we're in a position where only the companies themselves can do research on their own data, and that doesn't provide for unbiased, transparent research that would benefit society more broadly.
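To give a flavour of the kind of research being defended here, a minimal sketch using networkx, the standard Python library for network analysis, on made-up friendship ties of the sort a properly governed API might return. The user IDs and edges are invented; the measures are staples of social network research.

```python
import networkx as nx

# Invented, anonymised friendship ties (the kind of data a social media API exposes).
friendships = [("u1", "u2"), ("u1", "u3"), ("u2", "u3"),
               ("u3", "u4"), ("u4", "u5"), ("u4", "u6")]
g = nx.Graph(friendships)

# Who brokers between otherwise disconnected groups? (betweenness centrality)
brokers = nx.betweenness_centrality(g)
print(max(brokers, key=brokers.get))  # -> "u4", the bridge between the two clusters

# How clustered is the network - do a user's friends know each other?
print(nx.average_clustering(g))
```

It is exactly this friends-of-friends structure that cannot be studied from isolated individual accounts, which is Ackland's argument for keeping well-governed research access open.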

Sandra: And we could definitely take this back to our conversation around AI supremacy and the speed at which benefits accrue to the country that gets there first, or faster - but we won't, because these are our short bites.

Kai: So what is something else that you learned?

Sandra: My second short story is an interesting development from China. Alibaba has opened a car vending machine that offers free test drives to people with good social credit. This is a deal between Alibaba and Ford, who signed a partnership to have a car vending machine that allows people to test drive Ford cars, and it is unstaffed. It simply works by scheduling a pick-up, snapping a selfie so the vending machine can recognise that it is you picking up the car, and then taking it for a test drive. But you need a respectable social credit score of 700 or above. The social credit score is something we have discussed previously on the podcast a couple of times, and we will put that in the show notes. It refers to the citizen ratings that China is now piloting, similar to a credit score, which look at how people behave and quantify good behaviour or misbehaviour into a score. What have you learned?

Kai: Well, my final one is what I can only hope is the April Fool's of the week. It's from The Australian, titled "Oxford don plans blockchain university". Woolf University, which will operate on the blockchain, is being founded by a group of Oxford academics. They call it the "uberversity", and they claim they can utilise a blockchain to bring together lecturers and their students and completely disrupt and reorganise the way a university works. And, as is customary for any good April Fool's joke, there is also an initial coin offering of a cryptocurrency called WOOLF - which, if you read carefully, does contain the word fool - to be issued in April 2018. Of the funds raised, they say, 35 percent will go to the university's core leadership, that is Woolf University, 25 percent will go to its academics, and 40 percent will go to various aspects of institutional development and promotion. Now, there's a website, there's a white paper, there's a document - a lot of work has gone into setting this up. We're not holding our breath. We currently think it might be an April Fool's joke; if it is real...

Sandra: We'll discuss it next week.

Kai: Okay then that's all we've got time for today.

Sandra: Thanks for listening.

Kai: Thanks for listening.

Outro: This was The Future, This Week made awesome by the Sydney Business Insights team and members of the Digital Disruption Research Group. And every week right here with us is our sound editor Megan Wedge, who makes us sound good and keeps us honest. Our theme music is composed and played live from a set of garden hoses by Linsey Pollak. You can subscribe to this podcast on iTunes, Stitcher, Spotify, SoundCloud or wherever you get your podcasts. You can follow us online on Flipboard, Twitter or sbi.sydney.edu.au. If you have any news that you want us to discuss, please send it to sbi@sydney.edu.au.

Easter egg: So Sandra what happened in The Future, This Week?

Sandra: Not much.

Kai: And that's all we have time for.

Sandra: Thanks for listening.

Kai: I think we're done. That's it! There's your Easter egg.

**End of transcript**
