This week: Cambridge Analytica is not the real story, backchannel gig chats, and a fading giant. Sandra Peter (Sydney Business Insights) and Kai Riemer (Digital Disruption Research Group) meet once a week to put their own spin on news that is impacting the future of business in The Future, This Week.

The stories this week

This is the smoking gun at the centre of the Facebook and Cambridge Analytica story

Gig platforms’ claims over worker chat groups

How GE got disrupted

A recap of the Cambridge Analytica saga in the Guardian

Facebook has finally been hit where it hurts the most 

Cambridge Analytica controversy and Australian political parties

Cambridge Analytica was not a data breach

Zuckerberg on Cambridge Analytica

People are increasingly having a bad user experience on Facebook

The Verge’s guide to using Facebook while giving it the minimum amount of personal data

Uber drivers colluding

GE to build the world’s largest offshore wind turbines off the coast of Victoria

Uber ‘likely’ not at fault in deadly self-driving car crash

Future bites

Uber suspends self-driving car tests after vehicle hits and kills woman

New Yorker applied machine learning to blocked bike lane problem


You can subscribe to this podcast on iTunes, Spotify, SoundCloud, Stitcher, Libsyn or wherever you get your podcasts. You can follow us online on Flipboard, Twitter, or sbi.sydney.edu.au.

Our theme music was composed and played by Linsey Pollak.

Send your news ideas to sbi@sydney.edu.au.

Dr Sandra Peter is the Director of Sydney Executive Plus at the University of Sydney Business School. Her research and practice focus on engaging with the future in productive ways, and the impact of emerging technologies on business and society.

Kai Riemer is Professor of Information Technology and Organisation, and Director of Sydney Executive Plus at the University of Sydney Business School. Kai's research interests are in Disruptive Technologies, Enterprise Social Media, Virtual Work, Collaborative Technologies and the Philosophy of Technology.

Disclaimer: We'd like to advise that the following program may contain real news, occasional philosophy and ideas that may offend some listeners.

Intro: This is The Future, This Week. On Sydney Business Insights. I'm Sandra Peter and I'm Kai Riemer. And every week we get together and look at the news of the week. We discuss technology, the future of business, the weird and the wonderful, and things that change the world. Okay let's start.

Kai: Let's start. Today in The Future, This Week: Cambridge Analytica is not the real story, backchannel gig chats, and a fading giant.

Sandra: I'm Sandra Peter, Director of Sydney Business Insights.

Kai: I'm Kai Riemer, professor at the Business School and leader of the Digital Disruption Research Group.

Sandra: So Kai, what happened in the future this week?

Kai: Obviously the biggest story this week in business and tech has been the story around Facebook and Cambridge Analytica. We could have picked a million stories, but the one we want to highlight is from Wired UK: an opinion piece by James Temperton titled "This is the smoking gun at the centre of the Facebook and Cambridge Analytica story". He says that the growing scandal around those two companies isn't actually the real story; to get what's really going on we need to understand Mark Zuckerberg's mission and Facebook's business model. Now, before we come to discuss this, I think we need to actually unpack what that story is.

Sandra: This time around, the story started with The Guardian, The Observer and The New York Times, who worked together with a whistleblower named Christopher Wylie to start to uncover Cambridge Analytica's role in the American elections. The whistleblower is a data scientist who helped set up Cambridge Analytica, the data company that is said to have helped Donald Trump win the American elections. Let's hear from him.

Audio: Cambridge Analytica was birthed out of a company called SCL Group which is a military contractor based in London. This data was used to create profiling algorithms that would allow us to explore mental vulnerabilities of people and then map out ways to inject information into different streams or channels of content online so that people started to see things all over the place that may or may not have been true. This is a company that really took fake news to the next level by powering it with algorithms. You have to understand that this is based on this, on an idea called informational dominance which is the idea that if you can capture every channel of information around a person and then inject content around them you can change their perception of what's actually happening. So the fundamental difference between what Cambridge Analytica has done and standard political messaging is that when I show you an ad for a candidate it says you know hi I'm so-and-so and I approve this message. It is apparent that they are seeing political messaging. It is apparent that they are trying to be convinced but what Cambridge Analytica does is works on creating a web of disinformation online so that people start going down the rabbit hole of clicking on blogs, websites etc. that make them think that certain things are happening that may not be.

Sandra: So this was Christopher Wylie, a former Cambridge Analytica employee, talking to NBC News.

Kai: And while this is quite a shocking story, to exploit the mental vulnerabilities of people in a perversion of democracy where the election is really only about manipulating people, brainwashing them into voting for your candidate, this is not the part of the story that we want to focus on this week. We want to focus on the role of Facebook in this story and what it tells us about Facebook more broadly.

Sandra: So let's have a look at how Cambridge Analytica managed to do what it did. We have to go back to 2014, when a researcher from Cambridge University, Dr Alexander Kogan, created an app that asked users on Facebook to take a personality test, and this was for academic purposes only. Almost 300,000 people agreed to have their data collected in this way. And let's remember: what Dr Kogan had created with this app, which really just asked users to fill out a little survey, was no different from what many other apps had previously done on Facebook.

Kai: What this app did was basically make use of something Facebook had introduced a few years earlier, called the Open Graph API, and the Wired article outlines quite nicely what the purpose of this API was. An API, for those of you who don't know, is an application programming interface: basically an interface that allows apps to access data from a platform such as Facebook and then make use of that data in their own app. So with that API these apps were able to access a user's data, and I'll read out the list just to make clear how extensive the access was: data covering the About Me section in a person's profile, all the actions, activities, birthday, check-ins, history, events, games activity, groups that someone is a member of, one's hometown, interests, likes, location, all the notes, online presence, photo and video tags, all the photos, the questions, relationship details, relationships, religion, politics, subscriptions, website and work history. You could argue that's almost everything a person leaves on Facebook. But the kicker is that it is not just the person's data but the data of all the friends in a person's network.
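[A minimal sketch, in Python, of roughly what a third-party app's calls against the v1.0-era Graph API might have looked like. The /me and /me/friends endpoints follow the documented pattern of the time, but the field names and token handling here are illustrative assumptions, not anyone's actual app code.]

```python
# Hypothetical sketch of v1.0-era Graph API requests (illustrative only).
import requests

BASE = "https://graph.facebook.com/v1.0"
TOKEN = "USER_ACCESS_TOKEN"  # placeholder: token granted when one user signed up

# Profile data of the single user who actually consented to the app...
me = requests.get(f"{BASE}/me", params={
    "fields": "id,name,birthday,hometown,likes,events,groups",
    "access_token": TOKEN,
}).json()

# ...and comparable data for each of their friends,
# none of whom ever saw the app's consent screen.
friends = requests.get(f"{BASE}/me/friends", params={
    "fields": "id,name,birthday,hometown,likes",
    "access_token": TOKEN,
}).json()

print(len(friends.get("data", [])), "friend profiles gathered from a single consent")
```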

Sandra: So thanks to the terms of service that Facebook had at the time, the app was able to collect not only, let's say, my data if I chose to fill out the survey, but also everything about Kai as well, who had not completed the survey and had never heard of this app. Yet through me, I had given this company access to everything about all of my friends in my network. And this gave Alexander Kogan, who later handed over this data to Cambridge Analytica, access through fewer than 300,000 people to the data of about 50 million people.

Kai: And Facebook, in an initial attempt to control the story, came out and its Vice President and Deputy General Counsel Paul Grewal wrote that the claim that this is a data breach is completely false, because the researcher who made the app obtained the data from users who chose to sign up to his app and everyone involved gave their consent. Now that's obviously obfuscating, because while the people who signed up might have given consent, and even that's debatable because you just, you know, click a message that is put in front of you, arguably not many people read the Terms and Conditions.

Sandra: Yeah. So we can argue: is it really informed consent if you're relying on terms of service, on something that people don't normally read and don't necessarily even understand?

Kai: But the problem here is that the friends of anyone participating in any of those apps certainly have not given their consent to have their data scraped off the platform.

Sandra: And Facebook's claim would be that somewhere in the terms of service, and in the settings you have set for your Facebook account, you had actually given consent for your friends to hand your data over to someone like Cambridge Analytica.

Kai: So to be clear, this was part of Facebook's business model at the time. The researcher, in scraping all this data of millions of people off the Facebook platform, did not do anything wrong by Facebook's rules. In fact, access to that data was part of the incentive for third-party app developers, part of the appeal of actually building apps for the Facebook platform, because let's not forget many of the games and apps available on Facebook at the time were free to use, so the incentive really was access to the user data.

Sandra: So before we get to the real story, let's look at how this has been covered in the news over the past week. First, the story has been around the power that such an organisation would have to win an election on behalf of one political candidate. And this has been a story not only about the Trump campaign: the European Parliament said that it would investigate whether similar data misuse has taken place in Europe.

There has also been a new investigation opened into the Kenyan elections, to see whether it was Cambridge Analytica that helped win them on behalf of Uhuru Kenyatta, and the Washington Post went as far as to call this data neocolonialism. So really the charge here has been that data obtained by an academic was then misused for purposes other than what these people had given consent for. There has obviously been a lot of debate as to whether Cambridge Analytica really did have the power it claims it had: a number of political strategists and observers have debated across various media whether it really contributed that much to the success of the various campaigns. There have been discussions in the Australian media about the firm's move to Australia, where it has been regarded with quite a bit of scepticism. It went as far as a Labor source quoted in The Sydney Morning Herald saying they are involved in some dark and dangerous shit, we are going nowhere near them. That is not to say that other similar companies, such as i360, haven't made inroads in Australia. So overall this has been about targeted messaging to influence people to do things they wouldn't otherwise do, and the ability to quickly amass, without permission, data pertaining to more than 50 million users. Which brings us to the news of this morning: Facebook's response to it, which is still not the big story.

Kai: So just this morning Australian time, Mark Zuckerberg, finally, after days of silence, came out and apologised and said: yes, we made mistakes; it is our responsibility to protect your data, and if we can't then we don't deserve to serve you. Now significantly, all of this is about what happened back in the day. He said that already in 2014 they had dramatically reduced data access, that they will conduct a full audit of any app with suspicious activity, and that they will ban any developers that do not agree to go through an audit.

Sandra: And they will launch a tool that will sit at the top of your news feed and show each and every person which apps have access to their data, and allow them to revoke those apps' permissions if they want to. And in theory that sounds like a pretty good response.

Kai: But is it? Because conveniently for Facebook, they can now make this all about what happens in its ecosystem. The problem, in what Zuckerberg says, is located exclusively with the app developers: they have failed Facebook, they did not adhere to the terms and conditions, they shared the data; we have already closed access to that API and we will monitor our app developers. So, not surprisingly, he positions Facebook now as the trusted gatekeeper who will rein in the wrongdoing of the app developer community.

Sandra: So he said that they have been working to understand exactly how it could happen that these people had access to so much information, and to ensure that this does not happen again.

Kai: So let's go back to the Wired article, and yes, it's true: Facebook in April 2014 announced that it would shut down the Open Graph API, and they finally did so a year later. So they closed down the developer community's access to that level of data, especially someone's friends network. But they did so not necessarily on moral or ethical grounds: they realised just how valuable this data was, and that sharing it freely with any third-party app developer would ultimately diminish Facebook's own value and the way in which it could exploit the data. And that is the main story we want to talk about.

Sandra: So really the main story here is about Facebook's extremely successful business model, which rests on its ability to micro-target people in its network and to offer that service, for a price, to outside organisations.

Kai: Yes, so let's not forget that since 2014, when they announced they would shut down the Open Graph API and close off external access to their data, they have ramped up not only their data collection efforts but also their data analytics efforts, which allow advertisers, so anyone prepared to spend money on the platform, to micro-target messaging to particular users and user groups. The Wired article actually mentions that, as a sort of consolation prize when they announced the Open Graph API would be shut down, they announced the Facebook Audience Network. And that's an interesting development introduced at the time, because it allows Facebook to collect user data outside of Facebook: it allowed advertisers to place Facebook-based ads on their own websites. It also comes with a feature, and you might have seen this on the web, where you can use your Facebook credentials to log in to other services, which is very convenient but again allows Facebook to collect data off platform.
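[The "log in with Facebook" feature Kai mentions is an OAuth-style redirect. Here is a minimal sketch of the kind of URL a third-party site constructs; the app ID and callback address are placeholders, and the scopes shown are just common examples.]

```python
# Hypothetical sketch of a "Log in with Facebook" redirect on a third-party site.
# Every such login round-trips through facebook.com, which is one way Facebook
# learns which sites a user visits off platform.
from urllib.parse import urlencode

params = {
    "client_id": "THIRD_PARTY_APP_ID",              # placeholder app ID
    "redirect_uri": "https://example.com/callback",  # placeholder callback
    "response_type": "code",
    "scope": "public_profile,email",
}
login_url = "https://www.facebook.com/dialog/oauth?" + urlencode(params)
print(login_url)  # the browser is sent here; Facebook authenticates the user
```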

Sandra: It also allowed for what Facebook pitched at the time as the power of Facebook ads off Facebook: organisations could target you and then follow you around the web outside of the Facebook platform.

Kai: Which of course increases the amount of data that Facebook has about its users, and therefore the power of its analytics and the targeting of messaging. And let's not forget Facebook was a major enabler not only of Cambridge Analytica in 2016, when they actually executed on the data they had gathered, but also of supposed Russian interference, the bots that posted fake news into people's news streams, and anyone else who engaged in what is now known as the election meddling saga.

Sandra: So really, whilst stories such as Cambridge Analytica might take off the market a number of organisations that have taken huge advantage of the possibilities afforded by what were loopholes in Facebook's privacy settings, they do not fundamentally make an impact on the root cause of all of this, which is the business model that a company like Facebook is predicated on.

Kai: I want to quote from the Wired article, where it says, referring to the Cambridge case, that Facebook's misuse of user data goes far beyond this; it is born out of a naivete that there is nothing inherently troubling about carving up and monetising not just our personal data but our social interactions and our personalities. For the publications and academics that have been closely following Facebook for years this isn't news; for everyone else it is a timely wake-up call. So the question this raises is: now that these stories and problems of Facebook, and let's not forget of YouTube and Google as well, are all over the news, do we actually have an opportunity to talk about the business models of these companies? Some change will have to happen if we're not to see many more of these instances where data is being used to manipulate people at large scale.

Sandra: So where does this leave us today? Well, we could first of all try to use Facebook while giving it the minimum amount of personal data we can, and there are a number of ways to do that, which we'll include in the show notes. It leaves us with a number of countries looking to regulate what Facebook can and cannot do, both with access to the data and with monetising the data it gets from its users. But to obtain any meaningful change we would have to fundamentally change Facebook's business model.

Kai: Yeah, and if every one of us was to go and change our privacy settings to minimum access, that business model would already start to crumble. Unrealistic as it is, if everyone changed them Facebook would have to look for different ways to monetise the platform, because the micro-targeting wouldn't work at that scale.

So what I want to highlight is what exactly this business model looks like, and I want to use a metaphor. We've heard a lot about "if the product is free, we are the product", but really, when I think of Facebook these days, I liken it to a giant click farm, and farming is, I think, the right metaphor here. Facebook started out as a nice family farm, all about a happy family, but as the farm grew, Facebook switched to a more industrial model of running it, extracting more value from its population and increasing its yield. It also switched what it fed the user population: from just the sharing of information between people, it switched to feeding them news. The news feed very much changed the face of Facebook, because it also gave an in to those, so to speak, food providers who weren't quite so genuine about the kind of food they provided. Fake news came in, and the whole narrative around Facebook changed in the years between 2014 and today, to the extent that, as some recent articles point out, Facebook has become quite a disturbing experience for its users: the mood is pretty bad, people are exposed to very polarising discussions, so it really has become quite a toxic environment for many people.

So the farm metaphor takes us to the point that, when you look at it from Facebook's point of view, it is farming for data. It is treating its user population as a way to extract value, and it can influence how much value and data it extracts by the way in which it feeds them different news items; that is how the platform has been optimised for quite a while. Now, if we wanted to change that, the model would have to change to one where value comes more directly from the users, by paying for the services they are being provided.

Sandra: But this is not as straightforward as it sounds, is it? Because a certain fee, say twenty dollars a month, might be negligible for one part of the population. But this has affected, for instance, the Kenyan elections, where those kinds of fees are probably not accessible to the vast majority of the population.

So allowing social platforms to be one thing for people who can afford it and something different for other parts of the population would be a potentially more dangerous thing altogether.

Kai: Oh absolutely. It would raise serious questions about equity, about being a platform for everyone and not just those who can afford it. But it also raises a more immediate issue for any user on Facebook: the moment I'm being asked to pay for the service, I can stand back, reflect on my experience and ask myself, is it really worth paying money for what I'm getting here? And that might be a crucial moment for many users, where they decide: no, really, that experience hasn't been that great lately. That might be the final straw that breaks the camel's back, where people say, I'm quitting, I'm not going to pay for this kind of experience. So if Facebook really wanted to switch to a user-paid model, they would first have to invest in the user experience and make it better, which would come at the expense of harvesting the advertising dollars...

Sandra: But you could argue that the sharemarket is already foreshadowing that there will have to be major changes in how Facebook runs its business. Facebook stock was down 7 per cent on Monday alone, the slide continued, and this has cost the company more than 45 billion dollars in the first couple of days of this week.

Kai: Absolutely, and I think that's significant because it shows that investors are quite aware of the fragility of a business model that is solely based on exploiting user data. When that narrative takes a hit, and it has taken some hits lately and is likely to take more, it is the sharemarket that will send a strong signal for Facebook to rethink the way in which it monetises its platform.

Sandra: So whilst not wanting to take away anything from the significance of the Cambridge Analytica revelations, we must not forget that there is a bigger story behind it, and it has to do with Facebook and its underlying business model.

Kai: And we also want to highlight that while all of this sounds gloomy, and we're looking back at the election and supposed data breaches and the exploitation of data, I think there's a big positive in this story, because this might finally be the opening where we can have a positive discussion about what we want social media to become, where it might sit in society, and all the good that can come from connecting people. So maybe this is the moment where the breakdown allows us collectively to step back and have that conversation about what social media's role in society will be. Because let's remember, a few years back the narrative around social media was a very different one: the Arab Spring, social media to empower people, to inform people. So maybe we are finally at an inflection point where we can recover some of that narrative and have a positive discussion about where social media goes from here.

Sandra: So here's hoping it's different this time around.

Kai: So our next story comes from The Conversation. It's written by Kimberlee Weatherall, who is a Professor and Associate Dean (Research) at the University of Sydney Law School, and it's about the gig economy. It's titled "Gig platforms' claims over worker chat groups: fraught territory indeed". So what's that about?

Sandra: The article talks about the food delivery platform Foodora, one of the many food delivery platforms operating in Sydney and across Australia, which right now is trying to claim intellectual property rights over a chat group that the people who deliver food for Foodora have been using to exchange information about the shifts they take, the work they do, their pay conditions, and their experiences of being part of this network.

Kai: So this comes off the back of earlier news this week in The Financial Review, which reports that a Foodora courier was fired for refusing to quit a workers' chat group. This person, Josh Klooger, was sacked after refusing to hand ownership of the chat group to the company. So apparently Foodora is now attempting to use the courts to break its way into that chat group and learn what people are talking about there.

Sandra: So the article goes a fair way into unpacking the types of intellectual property rights that Foodora might have over these chat conversations, discussing whether the messages exchanged over these platforms are or are not protected by copyright.

Kai: So what Foodora claims is that when the drivers talk about their work conditions, they are violating confidentiality clauses in the terms and conditions of their work contracts by disclosing and discussing those matters with other people.

Sandra: And, where tips are being shared, that the organisation really owns what employees create over the course of their employment.

Kai: And therein lies the catch, where Kimberlee takes us through the contradiction in the argument that Foodora makes. She points out, quite rightly in my view, that Foodora treats these drivers as employees, because when an employee creates some intellectual property in the course of their work, that IP obviously belongs to their employer; but Foodora claims at the same time that these drivers are independent contractors. And she also says that we could argue they are independent contractors, in which case what they create is their copyright, because any intellectual material that an independent contractor creates belongs to that contractor. But she also points out that the discussions they engage in in these chat groups are not part of their actual work: these discussions don't happen in the course of performing their normal work, which is delivering food. So the case that Foodora has is pretty weak.

Sandra: And as with most of the stories on The Future, This Week there is a deeper point we want to highlight here.

Kai: This, again, is not the real story.

Sandra: And Professor Kimberlee Weatherall touches upon this when she asks about the degree of control that a platform is trying to assert over its workers. And this is the more fundamental point; we've seen this before with Uber drivers organising over WhatsApp. Any attempt by workers to organise and try to regain some control over the type of work they do on gig platforms has been met with hostility from the organisations.

Kai: Which is quite understandable, because from the point of view of Foodora, they gain the most when they treat everyone as an independent contractor and make people compete with each other for rides and fares; they have no interest in people actually talking to each other and organising. But at the same time we see this as quite a positive move by these people: to regain some identity as a group, to actually socialise at work and exchange experiences with their co-workers.

Sandra: And potentially have an impact on the conditions of their work. Let's not forget this comes in an industry, in a setting, where there are no trade unions these people could be part of, but also at a time when, in Australia at least, we've seen a huge decline in trade union membership: in the 1980s, 45 per cent of workers were union members; by 2016 this was down to only 14.5 per cent. So maybe this is a response, maybe this is a way to think about disrupting what trade unions as institutions look like, and a positive step on behalf of the workers.

Kai: And of course it ties in with the discussion around conditions in the gig economy and the sharing economy more broadly, where we could make the argument that we have been so dazzled by the technology and disruption narrative, and by the way these services offer additional convenience and better service for their customers, that we forget that the conditions for workers in the gig economy really combine the worst parts of contract work and work in a large organisation. On the one hand, people are treated as contractors with no benefits and no insurance, where they have to take care of all the responsibilities around their role, but without the freedom that an independent contractor would have over deciding where they ride, setting prices and all the rest of it; and at the same time they are subjected to the kind of hierarchical control that you would get in a traditional work relationship.

So really we see this as a move by these workers to regain some control, and also to share experiences and have a sense of identity at work, which arguably is often more important to people than the actual pay they earn.

Sandra: And also, just as importantly, a good opportunity for us to start having these more nuanced conversations around how workers can find a sense of identity and how they can organise in the context of new forms of work.

Kai: Our third story comes from Inc.com and it concerns one of the oldest tech companies in the world, GE, General Electric. The article is titled "How GE became a square-peg business in a round-hole world". The author makes the argument that General Electric has fallen on hard times: in December they announced they were laying off 12,000 employees worldwide in its massive power business. And it reports on the fact that GE has been known as an extremely well-run company that seems to have done everything right, but that has missed the boat on where to actually put its attention and its money.

Sandra: So indeed, in its 128-year history GE has produced some of the most iconic, most innovative products and services. It gave us the first light bulb, the first commercial power station, the first commercial nuclear plant, the first jet engine out of the US, huge advancements in plastics and silicones, and one of the most famous CEOs ever, Jack Welch, who ran the company for over 20 years.

Kai: It gave us the CT scanner back in the 1970s. But, as the article argues, not too much in the way of really groundbreaking inventions since then.

Sandra: So what's happened here?

Kai: The article makes the point that GE, as one of the first companies to implement Six Sigma as an efficiency program, and very successfully, has been known as a company that works on continuous improvement, that is great at efficiency and productivity. And this is where the argument could go: you could say, okay, companies that focus on efficiency and on doing what they do extremely well are not very good at doing new things. But that's not right, because GE has always been a very innovative company, and to their credit they have done what is often said to insulate against disruption: they implemented a start-up-style lean program where they unshackled internal innovation from the onerous requirements of large-scale business plans, they embraced the idea of minimum viable products, and they did this quite successfully for a while. Yet still, the article claims, they have missed the boat.

Sandra: So on the surface it seems that they've done everything by the business school textbook. It seems they had done everything right.

Kai: And not just the old edition. The new edition as well, right?

Sandra: And yet their stock value has halved and for the first time in 110 years it seems like it might get kicked off the Dow index.

Kai: So GE is a really interesting case study, and of course we don't know where this case is going. The article says that GE should take a leaf out of the book of companies such as IBM, which has been known to be in trouble previously but has been able to reinvent itself; the article claims that IBM today, with artificial intelligence and quantum computing, has yet again reinvented itself.

Sandra: Although you might argue the bets on quantum computing have not yet materialised.

Kai: Nor has Watson lived up to expectations yet.

Sandra: But coming back to GE.

Kai: The question is: is this just an episode, and will GE bounce back from it? Or is this an example that shows that disruption is about more than engaging in the right practices? You can run lean start-up programs and innovation centres at large scale, but if you're not working in the right area, if you're not having the right ideas, that doesn't help you much. So what I want to stress is that disruptions are often changes in world view, where a conversation at large scale in an industry changes customer behaviour and changes what counts as the relevant product; and if a company is not part of that conversation and misses these shifts, then no execution of a start-up program will save it. The article highlights that as recently as 2015 GE invested a lot of money in buying the coal power turbine business of the French company Alstom: a large-scale investment that has locked GE into the coal and gas power generation business at a time when renewables have made remarkable inroads in many countries.

Sandra: So even though GE is now set to build the world's largest offshore wind turbines off the coast of Victoria, 260-metre-tall turbines set to dominate Melbourne's skyline, you might argue that it is too little too late. And this is not just about missing cues in the larger conversation; it is also about missing out on some very important megatrends. One of the major trends we often discuss here is that of evolving communities, of demographic change.

And interestingly, GE Capital, which was bailed out by the federal government in 2008, had started to offer long-term care insurance, and it turns out they, like many others, underestimated customers' longevity and medical needs. People are now living longer than they used to, and spending a lot more on aged care and on the medical problems associated with ageing. And even though GE hasn't offered any new policies since 2006, it is still experiencing losses in 2018.

Kai: So we want to leave this story here and say GE is a really interesting case study that is still unfolding. We're going to keep an eye on it and come back to it on The Future, This Week in the near future, because we believe we can learn something quite significant about how disruption works and how companies might or might not react to disruption as it progresses.

Sandra: So let's end the podcast with two very quick Future Bites on traffic.

Kai: So what is your story Sandra?

Sandra: So one that we obviously have to bring up is the Uber self-driving car that struck a woman in Arizona, which unfortunately ended in a fatality. The car was a Volvo and it did have a human operator on board. I want to keep an eye on how this conversation unfolds, because so far it has mainly been about absolving Uber of any fault. There was an article in The Verge and one in The Wall Street Journal looking at whether or not the technology and the algorithms performed as they were supposed to, or whether something else was at play.

Kai: And we want to highlight that despite some reports in the media, this is not one of those cases where we need to discuss the ethics around self-driving cars, because there was no decision involved about whether one or the other person had to be killed by the algorithm. This was an incident where someone stepped in front of the car, the car didn't brake in time and they were struck. But just yesterday an article in CityLab came out in which a Professor of Urban Planning at Arizona State University said: I take my bike past that intersection every day, and this is one of the least obstructed parts of the city, it's seven lanes, wide open, no trees, no shadows. So really, if a self-driving car hits a pedestrian in that situation, something must have gone wrong. He points out that absolving the algorithm and Uber of any fault on the basis of as yet unpublished, undisclosed video material looks a bit dubious, and he raises questions that will have to be discussed further, because he finds it implausible that neither the driver nor the algorithm would have seen the person coming.

Sandra: And it's really important to note here that we should not expect any technology to never have an accident. There can be circumstances, and that's why we call them accidents, where it just happens: we are unable to brake in time to prevent a collision. In this case, however, we just do not have enough data yet to make that pronouncement; nor is it the basis for having the conversation about the ethics of autonomous vehicles. So we'll keep an eye on how this unfolds. What was your short story?

Kai: Mine is from Engadget, and it reports on an ingenious way in which a concerned citizen has employed artificial intelligence, machine learning, for his own purposes. This guy, Alex Bell, who bikes around New York City, got fed up with how often bike lanes were blocked by delivery trucks or idling cars, and complaining to the council apparently didn't get him very far. So he took it upon himself to prove with evidence, with data, how often this was actually happening. What he did was use a machine learning algorithm: he trained it on around 2,000 images of different types of vehicles and bus lanes, and set the system to tell the difference between buses, which are allowed to idle at a bus stop in those lanes, and other vehicles, which basically aren't. He then applied his trained algorithm to 10 days of publicly available video from a traffic camera in Harlem, and the results showed that the bus lane on the block the camera covered was blocked 57 per cent of the time, while the two bike lanes were blocked 40 per cent of the time. Based on these numbers, Bell determined that approximately 850 vehicles had blocked the bike lane during those 10 days, and 1,000 had blocked the bus lane. He then extrapolated this to the 101 miles of bus lanes and more than 400 miles of bike lanes in the New York City area, and said he really hopes this data will raise awareness of the problem and get the council to act on it. And I think this is an ingenious way of a concerned citizen employing this new technology to raise awareness of a real problem.
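[For the technically curious: a simplified Python sketch of the frame-classification and counting step described above. The classifier itself is left as a placeholder, and the function and class names are our own illustration, not Bell's actual code.]

```python
# Illustrative sketch: estimate how often each lane is blocked across
# sampled traffic-camera frames, given a trained image classifier.
from dataclasses import dataclass

@dataclass
class FrameResult:
    bus_lane_blocked: bool   # a non-bus vehicle is stopped in the bus lane
    bike_lane_blocked: bool  # any vehicle is stopped in a bike lane

def classify_frame(frame) -> FrameResult:
    """Placeholder for the classifier trained on ~2,000 labelled images."""
    raise NotImplementedError

def blocked_rates(frames):
    """Return the fraction of frames in which each lane was blocked."""
    results = [classify_frame(f) for f in frames]
    n = len(results)
    bus_rate = sum(r.bus_lane_blocked for r in results) / n
    bike_rate = sum(r.bike_lane_blocked for r in results) / n
    return bus_rate, bike_rate

# Bell's reported figures for the block his camera covered: roughly 0.57
# for the bus lane and 0.40 for the bike lanes over 10 days of footage.
```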

Sandra: And a story of technology really empowering ordinary citizens to look at problems at scale.

Kai: So a real good news story.

Sandra: And a good time to end for this week.

Kai: End on a high.

Sandra: That's all we have time for this week.

Kai: Thanks for listening.

Sandra: Thanks for listening.

Outro: This was The Future, This Week made awesome by the Sydney Business Insights team and members of the Digital Disruption Research Group. And every week right here with us our sound editor Megan Wedge, who makes us sound good and keeps us honest. Our theme music is composed and played live from a set of garden hoses by Linsey Pollak. You can subscribe to this podcast on iTunes, Stitcher, Spotify, SoundCloud or wherever you get your podcasts. You can follow us online on Flipboard, Twitter or sbi.sydney.edu.au. If you have any news that you want us to discuss, please send it to sbi@sydney.edu.au.

*End of transcript*
