This week: update my car, your phone will see you now, and power to the hackers. Sandra Peter (Sydney Business Insights) and Kai Riemer (Digital Disruption Research Group) meet once a week to put their own spin on news that is impacting the future of business in The Future, This Week.

The stories this week

Tesla issues an over-the-air update to drivers to outrun Irma

iPhone face recognition

Hackers gain direct access to US power grid controls

Your car becomes more like an iPhone and receives software updates regularly

Why the car industry has a problem with over-the-air updates

Planned obsolescence from BBC

Apple gets rid of the iPhone home button (and fingerprint scanner)

iPhone X launch

Is the new iPhone face recognition a gimmick or the future of smartphones?

Privacy and security concerns of Apple’s FaceID

Face recognition is getting more powerful and controversial

Meet Mike in our TFTW podcast

Real time Mike

Michal Kosinski and Yilun Wang face recognition study from Stanford University

An algorithm deduces the sexuality of people

The company that is shaping China’s face recognition technology

You can now pay for fried chicken by just scanning your face

Germany’s election software is dangerously hackable 

Equifax's epic security breach

Alexa and Siri can be hacked with inaudible voice commands

Join us September 22 for DISRUPT.SYDNEY™ 2017

DISRUPT.SYDNEY™, in its 5th year, is Australia’s first and oldest disruption conference. Join our speakers and facilitators in imagining the future of business and society, the implications emerging technologies bring, and how organisations can cope with and be managed in such environments.

More information and registration 

You can subscribe to this podcast on iTunes, Spotify, SoundCloud, Stitcher, Libsyn or wherever you get your podcasts. You can follow us online on Flipboard, Twitter, or

Send us your news ideas to

For more episodes of The Future, This Week see our playlists

Dr Sandra Peter is the Director of Sydney Executive Plus at the University of Sydney Business School. Her research and practice focus on engaging with the future in productive ways, and the impact of emerging technologies on business and society.

Kai Riemer is Professor of Information Technology and Organisation, and Director of Sydney Executive Plus at the University of Sydney Business School. Kai's research interest is in Disruptive Technologies, Enterprise Social Media, Virtual Work, Collaborative Technologies and the Philosophy of Technology.

Introduction: This is The Future, This Week on Sydney Business Insights. I'm Sandra Peter and I'm Kai Riemer. Every week we get together and look at the news of the week. We discuss technology, the future of business, the weird and the wonderful things that change the world. OK let's roll. Today in The Future, This Week: update my car, your phone will see you now and power to the hackers.

Sandra: I'm Sandra Peter. I'm the Director of Sydney Business Insights.

Kai: I'm Kai Riemer, Professor at the Business School and leader of the Digital Disruption Research Group.

Sandra: So Kai, what happened in the future this week?

Kai: Our first story is from The Verge and concerns Tesla. Tesla extended the range of some Florida vehicles for some drivers to escape Hurricane Irma. So a guy called in to Tesla and said, I'd really like to outrun the storm but my Tesla won't take me there. He has one of these so-called Model S 60 or 60D cars, which only offer a range of about 200 miles on one charge, and he basically said that's not enough, I won't be able to get far enough. And Tesla was able to update the car over the air to unlock the battery's full 75 kilowatt hours, which gave the guy exactly the 40 miles he needed to outrun the storm.

Sandra: So an over the air update unlocked the full battery capacity of the Tesla car.

Kai: Which means that all those cars have the larger battery all along. And if you purchase the smaller model, all that happens is that Tesla artificially restricts the battery capacity, right?

Sandra: Through software to about 80 percent of the battery's capacity.

Kai: Yeah. Now what do we learn from this?

Sandra: Well first we learn that Tesla is also doing this. This is not new right?

Kai: No, other car manufacturers have long done this, and it's also common in other industries. Sometimes it's just cheaper to build the same product at scale and then introduce product variations by disabling features that are physically built into the product, or by using software to artificially restrict what the device, or the car in this instance, can do. Now, a family member of mine who drives a Volvo wanted to activate a feature, I think it was the seat heating, and so he had to drive into the dealership. It turned out that the feature was actually built into the car, so all the dealer did was basically flip a software switch. So we know that this happens with other cars. But imagine if that guy had had to drive into a Tesla dealership in the face of an incoming storm.

Sandra: Tesla actually doesn't have this dealership network, so it isn't bound by any rules that prevent it from doing these over-the-air updates, unlike traditional car manufacturers, which are bound by contractual arrangements with a network of dealerships that are highly beneficial to the dealerships and not the car companies.

Kai: So when something happens to your car, or you want to extend its features, you have to bring it into the dealership, because it's the dealership's right to take a cut of this service, right? This is why traditional car manufacturers find it hard to implement over-the-air updates: they run into trouble with their dealers.

Sandra: This is not to say that some traditional manufacturers aren't experimenting with this: General Motors, for instance, has announced plans to offer over-the-air updates sometime before 2020. They're actually working on new electric vehicle architectures and new entertainment systems that might offer this feature. But for now all of these companies are locked into a system that will not allow them to do this.

Kai: So what you might see as a weakness for Tesla, that they don't have the dealerships, in this instance becomes a strength. Think about the instances when you actually have to take your conventional car into a dealership: either for a service, when you have to exchange fluids like oil and filters, things that Tesla cars don't have, or when there is a recall. And in a lot of instances, what has to be fixed on your car is now in software. So Tesla is actually able to fix many things in your car, or update its features, a lot faster than you normally could, a lot more conveniently, and probably more cheaply.

Sandra: At this point we should take a moment to remember the story we did a couple of months ago about Jeeps getting a different kind of over-the-air update. A couple of years ago, a pair of hackers at the Black Hat security conference managed to hack a Jeep and figured out how to remotely take control of a Jeep Grand Cherokee, which resulted in 1.4 million SUVs being recalled to actual dealerships.

Kai: And there was also a funny story about a biker gang that was able to unlock and then steal Jeeps by stealing electronic keys. And we can imagine that with an over-the-air portal into the software of a car, you could hack in and remotely unlock a car, start the car, manipulate it, take it over in full drive, or just steal it. So one of the concerns here is that the over-the-air update function creates a security weakness in those cars that allows hackers a way in.

Sandra: This is not to say that we're already in the world of The Fate of the Furious, with car hacking stunts and Charlize Theron singlehandedly taking over hundreds of cars and using them for nefarious purposes. It takes quite a lot to hack one of these cars, or even to hack one of the things the car does, whether it's the steering or the speed or any of the sensors within the car. So we're not quite there; we only have a handful of people, mostly researchers, who try very hard to gain control of some part of the car. However, being able to manipulate even one of these things, across a certain number of cars, could lead to some interesting situations.

Kai: Yes. And we all know that the more features become available, and the more widespread certain devices become, the more interesting they become to hackers as a target. Hackers will not develop the skill of hacking into something that they can then apply only once or twice. But once you have thousands or millions of the same cars with the same potential security flaws out in the market, it becomes much more attractive to gain control of a car, lock the car down, and then have the owners or the manufacturer pay a ransom, for example, as we saw with PCs around the world only a few weeks ago. So imagine that happening with cars that all of a sudden stop in the streets and can only be unlocked by paying those hackers.

Sandra: But hackers are not the only ones who might have an interest in manipulating the software on these cars. Car manufacturers themselves might.

Kai: Now if we think about what happened in the diesel emissions scandal, and I hate to say it, with German companies, Volkswagen, Audi, and others, who have been caught manipulating the software in their cars to make sure the cars score well on emissions under test conditions, while under road conditions the car would revert back to a much dirtier setup that gives the car performance but increases emissions past the allowed standard. With over-the-air updates they would have been able to hide much better the fact that they manipulated the software built into the car. They could just update the car once it leaves the factory floor, and you could even...

Sandra: Go to great lengths to hide this when I take my car in for its check-up.

Kai: Yeah. So in Germany, for example, check-ups are only done by certain certified institutions that are legally allowed to do this, not every garage and their dog like here in Australia. And everyone knows where those places are. So you could program the GPS coordinates of these places into the car, and whenever you approach one of those centres...

Sandra: ...the car would perform up to standard, and when it leaves it would revert back to whatever makes more economic sense.

Kai: Yes. Technically all of this becomes possible, which raises the question of what is actually certified when a car is legally allowed onto the road, when in fact the manufacturer can later change, manipulate and alter the performance of the car, the features of the car, the behaviour of the car. We're talking autopilots here, self-driving features. So with all of these things potentially being added to a car's feature list after it has been certified as roadworthy by the authorities, that creates a whole slew of new issues from a legislative perspective. But it also means that life potentially becomes more complex, or more convenient, for consumers.

Sandra: Indeed. It seems Tesla is selling you the worst version of the car you will ever own, because after selling you the model they will continue to give you software updates that make your car better and better. And this actually brings us to the idea of planned obsolescence: the fact that for the longest time the auto industry has been selling you models with a certain lifespan, whether that's 10 years or 15 years, after which you will need to buy a new car. And the same thing happens with our mobile phones, when after a couple of years the battery just cannot hold a charge sufficient for the phone to work, so you end up buying a new one.

Kai: Yeah, but even if you can replace your battery, or your car can make it for 20 years, many consumers will want to have the latest version. With a car, though, you can actually upgrade and add to the features, adding self-driving capabilities or the autopilot, whatever you call it, later on, so maybe people will hang on to the same car for longer, because they get something new in that car every few months or years, and not buy a new one every time a new model is released. So release cycles become longer, and for Tesla, a company that builds on the very idea of sustainability, that might actually be a smart thing to do. The question is what that means for competition and for other car manufacturers, which might not be on that same business model.

Sandra: One of the points made about planned obsolescence is that fundamentally firms have just been reacting to consumer preferences: we want to change our phones every few years, so they give us a new phone every few years. So for the auto industry it's interesting to note, and this comes from a BBC article on the planned obsolescence of technology that we'll put in the shownotes, that the car industry has been this fast-moving, fashion-driven business where you had to get a new car every few years, but consumer preferences have actually been changing there as well. US Department of Transportation figures show that the average time a car spends on the road used to be about five years back in the 1960s and 1970s and is now over 11 years. So maybe consumer preferences are changing as well, and Tesla could just be following what we now consider the norm, beyond just wanting to be a sustainable business.

Kai: But planned obsolescence is nothing new right? It goes back all the way to the invention of the light bulb.

Sandra: Yes it does. If you think about the first electrical lighting systems installed in houses, you got not only the electrical system but also the light bulbs, which were maintained by the same company. So the idea was that the company that provided you with electricity also made sure you always had a working light bulb. There was actually no incentive to plan for obsolescence in those light bulbs.

Kai: And we're talking early 1900s here.

Sandra: Yep. The business model changed, however, when lighting companies had the light bulb moment and figured out that if they made light bulbs disposable, instead of paying people to replace them, they could move this cost onto the customers. This was the infamous Phoebus cartel of the 1920s, where representatives of the top light bulb companies, including German ones...

Kai:...Germans again.

Sandra: So you had Osram, you had General Electric, you had their British subsidiaries. They actually colluded to reduce the life of light bulbs to about 1000 hours, so you would have to replace them.

Kai: Again German engineering ingenuity at work here.

Sandra: Indeed. The details of this scheme emerged decades later, and there was a lot of journalistic work involved in revealing it.

Kai: But it gave us this concept of planned obsolescence, which we see in many consumer devices these days. Whether it's technical, breakdown actually built into the devices, or social, the fact that we always want to display our technological or financial prowess by sporting the latest gadgets released to the market, it is an important concept that drives the strategies of many businesses.

Sandra: But I want to raise another point made in the BBC article as well: it might be very simplistic to say that they don't make them like they used to and they're trying to cheat us and scam us, because one of the by-products of planned obsolescence, besides the fact that companies do make a lot of money out of it, is that the rapid turnover of these goods leads to a lot of job creation and job growth, because you employ more and more people manufacturing and selling these things. Think about smartphones, and the industries that have been created around them, including smartphone cases and all the gadgets that come with them.

Kai: Absolutely. And in the case of the automotive industry, it gave lots more consumers much better safety features, much more quickly, because they were rolled out that way across the industry.

Sandra: Exactly. That's the innovation argument. So this planned obsolescence also means that you have to innovate faster and faster to satisfy these fashion cycles.

Kai: So on one level you can argue it's wasteful because you are creating more landfill and you're burning through resources much more quickly. But on the other hand you might increase the quality and the safety of your products and therefore benefit the consumers.

Sandra: So you get smarter phones, better cameras, faster processors and so on.

Kai: So the bottom line here is that with over the air updates and doing a lot of that in software we might actually get a bit of both. We might reduce the resources we need to produce the physical car but we might be able to give more safety features and to innovate purely on the basis of software which is a good thing.

Sandra: But Kai there's one category that actually bucks this trend where a planned obsolescence doesn't quite play out. And that's the luxury goods market. Nobody buys a Rolex knowing it's going to last you a couple of years and then you'll need to buy a new one. Yet innovation is huge in the luxury goods market.

Kai: Well, speaking of innovation in the luxury goods market, Apple has announced a new iPhone, the iPhone 10, spelt iPhone X, which arguably is a luxury product because in the US it cracks the thousand dollar mark. It is by far the most expensive iPhone yet; it basically sits in the price category of laptops or iPads. And there's also a $70,000 pure gold version.

Sandra: I'll let you buy that one.

Kai: Not produced by Apple themselves obviously. But no seriously.

Sandra: Our second story for the week comes from the Sydney Morning Herald. And it's called "With Face ID, Apple edges closer to the fraught age of facial recognition tech".

Kai: So Face ID or facial recognition is the big new feature in this new iPhone. Apple does away with the fingerprint scanner that was part of the home button which has to go because the new iPhone sports a screen edge to edge. And let's hear from Apple themselves about the new feature.

Apple Audio: Your iPhone is locked until you look at it and it recognises you. We call this Face ID. To make Face ID possible it took some of the most advanced technology we have ever created. We call this the TrueDepth camera system. There's an infrared camera, a flood illuminator, a front camera and a dot projector. There's also the proximity sensor, the ambient light sensor, the speaker and microphone. Every time you glance at your iPhone 10 it detects your face with the flood illuminator, even in the dark. We built Apple's first ever neural engine. The neural engine is a state of the art, ultrafast processing system that uses the highest density computing ever. Face ID learns your face even if you change your hairstyle, you decide to put on glasses, you're wearing a hat, and it adapts to you as your face changes over time. The teams worked hard to make sure Face ID can't easily be spoofed by things like photographs. They've even gone and worked with professional mask makers and makeup artists in Hollywood to protect against attempts to beat Face ID. The chance that a random person in the population could look at your iPhone 10 and unlock it with their face is about one in a million.

Sandra: So, as gimmicky as this looks, I can just look at my phone and it will do all these calculations to make sure it's me. Why wasn't the fingerprint scan good enough? Why do we need all this fancy technology, besides the fact that it's really cool? Why do we actually need it? Why is it a good idea to go to all this trouble to put this into an iPhone?

Kai: Yeah, OK. So first of all, there were rumours that Apple didn't actually plan to get rid of the fingerprint scanner. They wanted to get rid of the home button, which doubles as the fingerprint scanner, and then integrate the fingerprint scanner into the screen of the phone. But apparently they couldn't get the technology to work at the reliability they require, and so they dropped the fingerprint scanner.

Sandra: So it might be coming back if they figure it out.

Kai: It might come back, especially if people don't quite warm to the whole Face ID thing. But the point is, Apple would not go to such lengths, building all these technologies into the iPhone, just to unlock your phone, which can be done in a much, much simpler way. No, this technology actually has a much bigger application, in augmented reality and virtual reality. And we already saw a glimpse of that on stage when Apple showed how you can animate emoji with your facial expressions. They call these Animoji: you can animate the little robot, or their little poo face, this, you know, shit face really, with your facial expression and make it smile or make it frown or make it sad.

Sandra: And make you do what you're doing, imitate you.

Kai: Exactly, in real time. And so this brings us back to Meet Mike, to Digital Mike, the photorealistic real-time avatar that Mike is going to present next week at DISRUPT.SYDNEY.

Sandra: And this is Mike Seymour doing this research at the University of Sydney Business School.

Kai: Exactly. There are two components to what Mike is doing: he has this incredibly detailed scan of his face, where a lot of expensive technology was used to produce the face render, and he can drive this face in real time by wearing a helmet with infrared stereo cameras that take the facial expressions off his face.

Sandra: So theoretically in the near future could this happen by me just using my iPhone to animate a virtual version of me that now Apple has scanned through my use of...

Kai: Absolutely. This is what we're talking about. What we're talking about is that Apple is rolling out to millions of users the technology that will be the basis with which you can drive a virtual representation of your face that might sit in an AR, in an augmented reality environment.

Sandra: Or if you chose to share your face with me I could actually drive your face.

Kai: Which makes this a really scary proposition. But imagine a more natural-looking virtual conversation situation where you have your virtual avatar and I have my virtual avatar. I'm sitting in my office, you are wherever you are, but I have a virtual version of Sandra sitting on my couch, which I can only view through my iPad screen by holding it up to the couch. And you could interact with me, and I could talk into my iPad, or iPhone for that matter, and it would take my facial expressions and drive my avatar, which is then virtually sitting in whatever location you're at.

Kai: So these are the kinds of slightly creepy, spooky new collaboration situations that we're talking about down the track.

Sandra: And it might be interesting to see to what extent we are going to play with this technology, to take it beyond what we can do in 2D through video chat, into a 3D space where these avatars could be realistic or augmented, and could make use of maybe just our voice to drive them rather than our facial expressions. The possibilities there are endless.

Kai: Yes, so we're talking about the advent of a new technological platform. And mind you, Apple is also releasing ARKit, the augmented reality kit, with the new iOS version that comes out at the same time. So they're creating a new playing field for companies to innovate on, and they're giving those companies a new technology, this TrueDepth sensing technology that is able to create a 3D model of a user's face, which down the track might not only be used to drive an avatar but also to create a 3D representation of your face once those technologies advance.

Sandra: So it's worth looking a little bit at the concerns and what might go bad in this space. What are the implications of actually having this technology, not in the hands of researchers at a university, but rather in the hands of millions of people who just have a phone.

Kai: Absolutely. So face recognition has always carried a dystopian undertone and not surprisingly there's a number of articles coming out this week on the back of the iPhone announcement which raise all kinds of different concerns that tie in with facial recognition.

Sandra: So to me there are two types of concerns with this kind of technology. Some have to do with the technology itself, and some are broader implications of how this technology enters our lives and how companies and businesses make use of it. So first, around the technology itself, there are the usual types of questions: where will this data be stored? Quartz has a nice article summing up some of these, which we'll make available in the shownotes. Or what are the legal implications of opening your phone with your face? Could you be forced to do this?

Kai: So on the first question, it's pretty clear, in Apple's case at least, that the data will stay on your phone in encrypted form, so it won't be transferred to the cloud, to Apple, or to any third party. It's kept in a secure sandbox, almost.

Sandra: So you won't have these massive data breaches where hundreds of thousands of records are hacked.

Kai: Likely not. You might be able to steal one person's face from their phone, if that. But the second one is a more important question, because if you think about the FBI cases around forcing people to unlock their phones, or gaining access to encrypted data, which was a big case in the US a while back, then it becomes pretty clear that the most secure way to protect your phone is a passcode, because no one can force you to tell them your passcode. A fingerprint is probably a bit easier to obtain, because you could just force someone's hand to touch the phone. But when we're talking face, we don't actually need touch. I could just hold your phone in front of your face and unlock it.

Sandra: Although you would have to wait for my eyes to be open: if I close my eyes, this doesn't work.

Kai: But it is something you could do in passing, almost unnoticed by the person, whereas a fingerprint is much harder to obtain. So that might actually create new issues around consent, say when the police unlock the phone of a person being apprehended, or when someone steals a phone and then just unlocks it in passing as they walk away from the person.

Sandra: So Alyssa Newcomb, one of the tech reporters at NBC, actually raises this legal question, which will have to be addressed: can you be forced to do this? But there are other issues with this technology as well, things like: what else will the sensors be used for? We have all these sensors in the iPhone now that are used for Face ID, but are they tracking your face in other applications? Could they be used, for instance, in Snapchat? Will Apple use this to track your face while you are making purchases or browsing your music?

Sandra: Or will they allow other app providers to do such a thing?

Kai: So the data that companies could obtain from tracking your facial movements through these sensors could be invaluable for any company that does advertising but it could also be used to spy on you in various ways.

Kai: So will the user be able to control when the camera comes on, or is this something that app providers can now do at any time they want to read your facial expression? Which brings us to the bigger issue: what can you actually read off a person's facial expression? And it turns out that the latest experimental research suggests: a lot.

Sandra: An article from The Guardian brings up a study by Michal Kosinski and Yilun Wang from Stanford University, who recently posted a paper, soon to be published in the Journal of Personality and Social Psychology, which shows that facial recognition AI could be more accurate than humans at detecting people's sexual orientation from pictures.

Kai: So the researchers used some 130,000 images to train an algorithm which was subsequently able to determine a person's sexual orientation, whether they were heterosexual or gay, with much greater accuracy than humans: 81 percent accuracy, whereas humans managed only 61 percent. And even...

Sandra: ...when you gave it more than one picture of a person's face, up to five pictures, it was 91 percent accurate at detecting this. But these were pictures from a dating site.

Kai: On which people presumably portray themselves in accordance with their sexual orientation. So this is not to suggest that any random picture could be used to determine someone's sexual orientation with that accuracy, and that creates a lot of problems. There is a real danger that we might build applications on the back of research such as this, which would suggest that we can read all kinds of traits from a person's face: sexual orientation, whether they are likely criminal offenders. We have seen studies that suggest you can read a person's IQ from their face. So if those technologies were rolled out, there's a real danger that they lead to harmful classifications, because we should not forget that there is a large proportion of false positives.

Sandra: And there's a large proportion of forced dichotomies: in this case, for instance, you were forced to be either straight or gay for the purposes of the study; there are no shades of grey. And let's not forget that we already have a lot of pictures stored on social networks, so we don't actually need things like the new iPhone's facial recognition. We have social network pictures, we have pictures in databases of drivers' licences, passport IDs, and so on and so forth. But this would make it widespread and available to third parties as well.

Kai: And it would normalise the use of your face as identification, and your face information could enter all kinds of different contexts, not just voluntary ones like the collaboration scenario we mentioned earlier, but contexts where corporations or other people might try to read your emotions or certain personality traits off your face. Our faces are one of our most unique features, and there's a real danger that this might be used and abused in ways we might not be able to control.

Sandra: And in ways we might not be able to change. If your phone gets hacked you can change your password; if your email account gets hacked you can change your password, or your email address, or your phone number. You really cannot change your face that easily.

Kai: No, you would have to go to great lengths to make these sort of changes.

Sandra: And once this transforms many aspects of our daily lives, not just the interaction with our phones, this becomes increasingly important. There's a good analysis in MIT's Technology Review of what is happening in China at the moment, where Alibaba is enabling employees in Shenzhen to enter the office building using their face instead of a swipe card. There is a train station in Beijing that matches passengers' tickets to their government-issued IDs by scanning their faces: if your face matches what they have on record, the system says your ticket is fine, you're good to go. Surveillance cameras that can spot people at various locations are already being used around the world, and companies like Baidu are providing face recognition technology to a large number of platforms. So face recognition will influence not just your personal life but your work life and every aspect of your life.

Kai: And let's not forget that you can pay with your face at KFC, as we discussed last week. But seriously, I think many people find the prospect of face recognition, and the fact that they can be identified from their face in public spaces, quite creepy. And as we roll this technology into more parts of our lives, use it on our phones on a daily basis, potentially use it to pay, and have our emotions or traits read off our faces by our phones, I think people might grow increasingly uncomfortable with what can be done here. These discussions will only become more prevalent as the technology becomes more widespread, so I believe this will come up more often on the podcast as we go forward.

Sandra: So from the very small to the very large. Our last story this week is on infrastructure.

Kai: Power to the hackers. This story is from Wired magazine and it outlines some serious and really unprecedented hacking of the US and other countries' power grids where hackers were not only able to gain entry to computer systems but gain what is called operational control - the ability to actually use the software switches that turn parts of the grid on and off.

Sandra: So in a security breach that the security firm Symantec revealed, unnamed hackers breached more than 20 power companies in North America and across Europe, and in a handful of these cases gained direct access to their control systems.

Kai: So Symantec was able to trace those instances back to a group it calls Dragonfly 2.0. There are more than 20 cases, including one company in Turkey but in particular US companies, and the prospects are scary: it shows that hackers are now able to gain access to the deepest level of a crucial infrastructure, with the ability to switch off whole parts of some US companies' grids, which opens the door to sabotage and terrorist attacks of a new kind.

Sandra: So we've already seen this week that the ability to update your car over the air opens the door to hackers gaining control of your car as well. In this case it just happens to be something much, much bigger: infrastructure.

Kai: Yes, so the story here is that the more we create the ability to access and control devices or infrastructure remotely through software, for the purpose of making things more efficient or more convenient, the more we open the door to hackers who could potentially gain control, because it turns out those entry points can never be fully protected.

Sandra: And just to show how widespread this problem is, affecting every type of business, government organisation and industry, this week alone (and Wired does a nice job of summing this up) we've had the hack of over 20 power companies, and we've had Equifax confessing to a breach that exposed the data of 143 million Americans, one of the worst data thefts ever.

Kai: We're talking credit ratings, social security numbers, credit card data, address data. A massive security breach.

Sandra: Google also patched a flaw in Android this week that would have allowed hackers to take control of devices. And we saw that ultrasonic voice commands can actually hack things like Siri and Amazon Echo, so you won't even hear the attack coming.

Kai: So the point being here that the microphones that detect your voice and read your commands can also be fed sound that the human ear can't hear but those microphones can pick up. So the hackers can potentially take over your phone or send commands to your Apple Watch that you can't hear.

Sandra: And let's not forget that this is not something you can really fix with a software update, because the attack exploits vulnerabilities in the hardware of those machines as well. So these problems are complex and widespread. And they don't only affect industries, companies and networks, they affect governments as well.

Kai: So there's an article which shows that the German Chaos Computer Club, a white hat hacker group that works to expose security flaws in all kinds of devices and networks, has found a serious security flaw in the voting software used in the upcoming German election.

Sandra: And the CCC showed that, if this were done at large scale, they could change every single submitted result in the German election.

Kai: And the point here is not that we can't fix this: now that this particular flaw is known, it can and will be fixed before the upcoming election. The point is that sometimes merely raising doubt about the potential for an infrastructure such as elections to be hacked could be used to call into question the validity of the election results themselves.

Sandra: So the election process works because we trust that it is fair and that it will reflect what we want, but undermining trust in the election process might actually see us going back to paper versions.

Kai: Which might not be such a bad idea, because paper in the end is the only medium on which you can actually do a reliable recount. If you cannot trust the collection of the votes in the machines, how are you going to trust a recount to revert to some ultimate truth about the election?

Sandra: Paper ballots will cost us a lot of money.

Kai: Well, but that's my point. There are some instances where the efficiency argument for going digital, or for creating remote access to a device, might not outweigh the potential risks: the potential for fraud and, in this instance, a lack of trust in the election process. Yes, voting is easier, collecting votes is easier, and obtaining the result is quicker. But we're weighing the few hours we gain in obtaining the election results against a four-year term in office, so the question is whether it's worth it. If we can't trust the result and people can raise doubt about the validity of the election, maybe a paper-based...

Sandra:...even if the software is secure. So this is not about necessarily the software being hacked, but about people's belief that the software could have been hacked.

Kai: In particular in a world where we have learned that just about any secret can be stolen from government agencies, power grids can be hacked into, and Social Security numbers can be stolen, a world where we cannot really trust the efficacy of computer security, the question remains whether some mission-critical processes in our public or government life might have to stay beyond the use of digital technology.

Sandra: So this bigger issue of trust I think is one that we will have to come back to as well. But that's all we have time for today.

Kai: Thanks for listening.

Sandra: Thanks for listening.

Outro: This was The Future, This Week made awesome by the Sydney Business Insights team and members of the Digital Disruption Research Group. And every week right here with us our sound editor Megan Wedge who makes us sound good and keeps us honest. You can subscribe to this podcast on iTunes, SoundCloud, Stitcher or wherever you get your podcasts. You can follow us online, on Flipboard, Twitter or If you have any news you want us to discuss please send them to
