Sandra Peter and Kai Riemer
The Future, This Week 10 November 2017
This week: AI can’t see my cat, predicting is creating, and it’s a Musk. Sandra Peter (Sydney Business Insights) and Kai Riemer (Digital Disruption Research Group) meet once a week to put their own spin on news that is impacting the future of business in The Future, This Week.
The stories this week
00:36 – The BBC on Japanese and MIT researchers fooling AI
09:10 – JC Penney and Macy's replace merchants with algorithms
17:57 – Elon Musk thinks we live in a computer simulation
Other stories we bring up
Fooling neural networks in the physical world with 3D adversarial objects
Synthesizing robust adversarial examples
MIT researchers report on how they fooled AI
Google’s AI thinks this turtle is a rifle
Construction worker’s automatic translation nightmare
99% Invisible Ep. 229: The Trend Forecast
Bank of America on living in a simulation
Nick Bostrom's simulation argument
A brief guide to embodied cognition
Maurice Merleau-Ponty and the importance of the body
Isaac Asimov 2016 Memorial Debate at the American Museum of Natural History
Does it matter if we live in a computer simulation
You can subscribe to this podcast on iTunes, Spotify, Soundcloud, Stitcher, Libsyn or wherever you get your podcasts. You can follow us online on Flipboard, Twitter, or sbi.sydney.edu.au.
Our theme music was composed and played by Linsey Pollak.
Send us your news ideas to sbi@sydney.edu.au.
For more episodes of The Future, This Week see our playlists.
Dr Sandra Peter is the Director of Sydney Executive Plus at the University of Sydney Business School. Her research and practice focuses on engaging with the future in productive ways, and the impact of emerging technologies on business and society.
Kai Riemer is Professor of Information Technology and Organisation, and Director of Sydney Executive Plus at the University of Sydney Business School. Kai's research interest is in Disruptive Technologies, Enterprise Social Media, Virtual Work, Collaborative Technologies and the Philosophy of Technology.
We believe in open and honest access to knowledge. We use a Creative Commons Attribution NoDerivatives licence for our articles and podcasts, so you can republish them for free, online or in print.
Transcript
Introduction: This is The Future, This Week on Sydney Business Insights. I'm Sandra Peter and I'm Kai Riemer. Every week we get together and look at the news of the week, we discuss technology, the future of business, the weird and the wonderful, and things that change the world. OK let's roll.
Sandra: AI can't see my cat, predicting is creating, and it's a Musk. I'm Sandra Peter, I'm the Director of Sydney Business Insights.
Kai: I'm Kai Riemer, professor at the Business School and leader of the Digital Disruption Research Group.
Sandra: So Kai, what happened in the future this week?
Kai: So this story appeared in many different places. We're discussing an article from the BBC titled 'AI image recognition fooled by single pixel change'. The article reports on research done in Japan and also at MIT into fooling popular image recognition systems based on the form of machine learning that we've discussed previously, so-called deep neural networks: the kind of networks that are trained with lots of pictures as training data, learn to recognise certain patterns in pictures such as cats and dogs and all those kinds of things, and are widely used by companies such as Google or Facebook. Now what these researchers did is they created so-called adversarial examples, pictures that are manipulated to fool the algorithm into seeing what isn't actually there.
Sandra: So what MIT's LabSix did for instance was manipulate an image of a turtle to fool the AI into recognising it as a rifle.
Kai: Or a cat, fooling it into recognising it as a bowl of guacamole.
Sandra: And they did this quite well. The algorithm's second guess would have been broccoli, its third guess mortar, then burrito or a ladle or a soup bowl, so it wasn't even close to thinking it was a cat.
Kai: A baseball was recognised as a cup of espresso.
Sandra: In each of these cases, and we'll include the links in the show notes, there's absolutely no way a human could have made that mistake. It's not even close. I can't even see the rifle on the table...
Kai: No there's a turtle, there's a cat quite clearly in the picture. No way anyone would...
Sandra: ...even think of recognising it as guacamole. So what does this tell us? It tells us a couple of things. The first, as the MIT researchers point out, is that this is a significantly larger problem in real-world systems than previously thought. These machine vision algorithms are being rolled out quite widely: we have them in CCTV cameras (we've spoken about that before), and the same thing would be rolled out in autonomous vehicles. So this could be a significantly larger problem than we previously thought.
Kai: So we've previously made the point that these algorithms are probabilistic, so they're never going to be 100 percent accurate, but in many instances research has shown that machine learning with deep neural networks can be as accurate as or even better than humans at finding patterns in certain pictures, such as cancer cells in MRI images, a widely used example. But what is new here is the extent to which these algorithms can be fooled with pictures that for a human quite clearly show certain things that the algorithm doesn't even come close to recognising.
Sandra: And that a human also would not recognise as having been manipulated.
Kai: No that's right.
Sandra: Neither the turtle nor the cat looks any different. I cannot tell what they did to them. The algorithm obviously can, thinking it's guacamole.
Kai: So it exposes not only a general weakness in these algorithms but also a way to attack and fool and maliciously exploit visual recognition systems.
Sandra: So first let's talk a little bit about that, because we've seen these sorts of mistakes happen with other types of algorithms. We previously discussed a vet and an engineer who failed the English language test required to stay in Australia, even though they were native English speakers with a variety of university degrees. We've also seen this quite recently in a predictive policing case: Israeli police arrested a construction worker who had posted a picture of himself with a bulldozer, with a caption reading 'good morning' in Arabic. Facebook's automatic translation service, which uses AI behind it, translated it as "attack them" in Hebrew or "hurt them" in English. The man was subsequently arrested.
Kai: So the point here is that, without any human checking or anything, the translation of the Arabic appeared on the man's Facebook stream, and then another algorithm used by the Israeli police, which polices Facebook for certain patterns in these posts, raised a red flag and automatically triggered the arrest of the person. It was only after several hours in interrogation that the mistake could be cleared up and the man released. So this points to the general fallibility of these algorithms. In these cases, both the English test and the Facebook case, there wasn't any malicious intent or any attempt at fooling the algorithm, but both cases show that these algorithms are still probabilistic and therefore fallible.
Sandra: This story we think also raises another very interesting insight into the nature of deep learning. Quite often the analogy is made that these algorithms learn like we do.
Kai: Yeah. So when these algorithms are introduced, we often read analogies with the human brain, so supposedly these algorithms learn just like children.
Sandra: The algorithm playing Go supposedly learned to play with simple moves and then more complicated moves and so on.
Kai: But even the Go example shows that something else is going on there, because the algorithm came up with moves that no one had ever seen before.
Sandra: And it discovered more complex moves before it discovered the simple ones.
Kai: That's right. So quite clearly these algorithms learn in quite a different way. And we can see now that the way in which these algorithms learn is entirely black-boxed, and the intelligence embedded in these algorithms is of a very different kind, because as we said no human would be fooled by these pictures, when for the algorithm guacamole and cats seem to be very close to each other, only a few pixels away. No human would class these together. So we can really see that the distinctions the algorithm makes are entirely statistical and probabilistic; they have nothing to do with any experience of the objects in our world or the way in which humans classify things.
Sandra: Exactly. So a child would never grow up learning what a cat is and putting them in the same category as the guacamole they're having or mortar or indeed a ladle or a burrito. Not of the same kind.
Kai: So while these algorithms generally work really well and can do amazing jobs for us, we mustn't forget that they are also fragile. They're fragile in the sense that they can be subjected to these attacks, and they're also not infallible. But now let's talk about the nature of these attacks. Does that mean that anyone can now come up with these adversarial examples and attack any of those algorithms?
Sandra: So first of all you would have to have pretty good knowledge of what the algorithm is like so that you would know exactly how to fool the algorithm.
Kai: So in these instances these are publicly available algorithms that you can play around with, but they are also widely used by companies on the Internet. You can take them and subject the algorithm to lots of inputs, see what outputs it comes up with, and then craft those examples that will fool the algorithm.
Sandra: Let's not forget all of these algorithms will improve over time. Both Google and Facebook have since come out and said they were looking at the exact same things, trying to build algorithms that are more robust to these sorts of attacks. This does not, however, preclude an arms race where algorithms are getting better and we're coming up with more and more innovative ways to fool them.
Kai: And also let's not forget that these kinds of algorithms are used in security-sensitive contexts such as antivirus software and spam filters. So there are certain incentives for people to actually engage in this arms race and try to fool the algorithm into letting through malicious code that disguises itself.
Sandra: There's quite a bit of research around the possibility, for instance, of using CCTV cameras to automatically detect weapons. There is an article we'll include in the show notes that looks at the detection of firearms and knives, and whilst in this case we had a turtle that posed as a rifle, you could equally have a rifle or a knife posing as a turtle. A much bigger problem.
Kai: And the MIT team have already said that they are trying to figure out how to tweak their technique so that it will work under more black-box conditions, with little information about how those systems work. So as the techniques for building these algorithms progress, the techniques for fooling them with adversarial examples will also improve.
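For listeners curious what "fooling" a classifier actually looks like in code, here is a minimal, hypothetical sketch of the classic fast gradient sign method against a pretrained image classifier. This is not the LabSix technique discussed above, which optimises adversarial objects to survive rotation, lighting and 3D printing; it is just an illustration of how a tiny, human-invisible perturbation can flip a model's prediction. The model choice, file name and epsilon value are placeholders.

import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

# A standard pretrained ImageNet classifier (any torchvision model would do).
model = models.resnet50(pretrained=True).eval()

# Basic preprocessing (ImageNet normalisation omitted to keep the sketch short).
preprocess = T.Compose([T.Resize(256), T.CenterCrop(224), T.ToTensor()])

# An ordinary photo, e.g. of a cat ("cat.jpg" is a placeholder path).
image = preprocess(Image.open("cat.jpg")).unsqueeze(0)
image.requires_grad_(True)

# The model's original prediction.
logits = model(image)
original_class = logits.argmax(dim=1)

# Gradient of the loss for that prediction with respect to the input pixels:
# keep only its sign and nudge every pixel by a tiny epsilon in the direction
# that increases the loss.
loss = torch.nn.functional.cross_entropy(logits, original_class)
loss.backward()
epsilon = 0.007  # per-pixel budget, small enough to be invisible to a human
adversarial = (image + epsilon * image.grad.sign()).clamp(0, 1)

# The perturbed image often receives a completely different label.
with torch.no_grad():
    new_class = model(adversarial).argmax(dim=1)
print("before:", original_class.item(), "after:", new_class.item())

A perturbation crafted this way assumes white-box access to the model's gradients; the black-box setting Kai mentions instead queries the model repeatedly and estimates the gradient from its outputs.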
Sandra: So I'm sure we'll see this again.
Kai: Our second story is from Forbes Magazine, titled 'JC Penney and Macy's Replace Human Merchants with Data Algorithms'. This is an interesting story about two large American retailers both replacing merchant staff, the people in charge of purchasing and making buying decisions, with data analytics solutions. J.C. Penney ousted its chief merchant and instead will empower, they say, the next level of buying decision makers with real-time, customer-data-based analytics solutions to predict the next fashion, and therefore make buying decisions on that basis. The article says they are taking a page or two out of the Amazon playbook, looking to monetise the mounds of data generated in recent years from shoppers' digital and physical footsteps. Now Sandra, what is going on here?
Sandra: So this looks like a story about efficiency. Macy's and J.C. Penney are figuring out what products they should feature in their shops. Since most product searches start on Amazon, and Amazon has been using data analytics and algorithms to do this for many, many years, they're framing it as an efficiency story: employing the same kind of analytics, which looks at the previous behaviour of shoppers - what they have purchased, how often, where they have done so - to tell them what to feature in their shops. And they're doing this across things like merchandising, planning and also the private brands these shops own. This raises some questions around whether this is art or science, and really, what do we think fashion retail is?
Kai: Yeah. So my understanding of fashion is that fundamentally you create fashion. Fashion trends are created. Now if you think about it, if more and more companies are employing these prediction techniques and they're just looking backwards on past data to predict the next trend, if everyone were to just predict trends who's going to create trends, how does that work?
Sandra: It used to work that many of these shops would actually set the high street fashion trends. Today however the world of fashion is quite different. The future is created by a handful of players - well, really largely just one player, WGSN, a trend forecasting company. And I came across this wonderful episode, the trend forecast episode of the 99% Invisible podcast. This is a fantastic podcast hosted by Roman Mars. It focuses on design and architecture, and it's a collaboration between San Francisco public radio and the American Institute of Architects. Really great episodes on design and architecture and all sorts of activities around the world. So this is about a company called WGSN, formerly Worth Global Style Network, founded in London in the late 90s by a guy called Marc Worth. And this company predicts fashion trends up to two years in advance, and predicts everything about them: the fabrics that will be used, whether it will be miniskirts or athleisure, what kind of colours will be produced, down to databases of tens of thousands of patterns. I think the number was something like 70,000 original design patterns that they then provide to all of these companies: all the companies on the high street, all the companies we're used to shopping the most recent fashion from. Those companies tap into the trend forecasting company and pay a lot of money to figure out what will be in style two years from now.
Kai: Do we know anything about their methodology?
Sandra: What we know is that they employ quite a few researchers who try to figure out lifestyle trends. They try to figure out, for instance, whether we'll be more into healthy lifestyles, or whether most of us will be working from home if there is a trend around flexible work. In that case we might be more likely to wear sweatpants, so sweatpants will become a thing and we should have better-looking sweatpants, and then they create thousands of patterns. They decide what colours will be in trend and so on. They might look at the music industry, they might look at the world of work, they might look at the tech gadgets that we have, and decide what will be the next trend. Here's Sarah Owen, a trend forecaster at WGSN, on 99% Invisible explaining it.
Audio Sarah Owen: To me it's connecting the dots. It's pattern recognition. It's taking those cues and pairing that with that data that will kind of inform the future or create it. That's our tagline.
Kai: So what we're saying is this company comes out with these trend forecasts, and then many players in that industry, fashion companies and retailers, receive that information, see what the future is like and probably act accordingly. So it's safe to say that the company is not so much predicting those trends as creating them.
Sandra: Pretty much, considering that everyone subscribes to this company to know what they should be having in stock, let's say what the jeans of 2019 should look like. And considering that it's not only designers and retailers that are doing this, it's also fabric companies, paint companies, interior decorating and design companies, it creates the whole ecosystem and builds the trends.
Kai: And we have an interesting article in Fortune Magazine, also from this week, about Whole Foods, the retailer recently acquired by Amazon, which brings out its Top 10 food trend predictions for 2018. And given that they're also the major retailer in the US for these kinds of foods, it is safe to say that what they sell as their predictions is more or less an announcement of what customers ought to shop for next year, because by their very power they have decided that these are going to be the trends. Which leaves us with one question: what is the role of these algorithms at J.C. Penney or Macy's in doing these Amazon-style predictions? Are these companies really in a position to act like Amazon?
Sandra: If they put themselves in that position then they would be essentially competing with Amazon. All that Amazon has to do is to optimise the quantity and the speed at which they sell the products that other companies produce.
Kai: Because the whole point of Amazon is that they sell everything and anything, whereas Macy's and J.C. Penney actually have to take care to select the range of products that they sell very carefully.
Sandra: And once these companies are out of the trend making industry it's just a matter of cost.
Kai: So is the announcement that we heard this week about employing these algorithms more about a general efficiency narrative to please shareholders than a serious argument about becoming a different kind of retailer?
Sandra: This is a good question. And whilst we don't know the answer in this case, it points to two things. One is the need for these companies to actually make the efficiency argument and employ the algorithms, and second, the increasing move of things like fashion, retail and food shopping towards science rather than art. A lot of these industries used to be about creating those trends, not optimising for things that are emerging naturally, or indeed, in the case of fashion, having one large company that sets the trends two years in advance.
Kai: So one thing we see here is that in a highly competitive market the pressure to get things right becomes such that it seems to make much more sense to employ an algorithm to accurately predict the future than to engage in the risky business of making judgements about what to create for an unknown future. So maybe it is this perceived uncertainty in the market that lets people reach for the supposed control and certainty of predictive algorithms. Systemically, if many of these producers and retailers engage in those practices, that will put more power into the hands of external players such as WGSN, whose predictions actually create, shape and guide those markets. So again we think this is an example of the effects of algorithmic management, and only one example of how algorithms shape industries, something we are going to keep an eye on as we move forward. Which brings us to our next story.
Sandra: It's a Musk.
Kai: We really think this should become a new segment on the podcast. Elon Musk has been at it again. We've been following Elon Musk and we're both great fans of his products: his cars, his batteries, the solar roof tiles and all the good things that Tesla is doing. But Elon Musk is also a great entertainer, and he really should have his own segment on the podcast. So this news story is from Inc.
Sandra: So in this week's "It's a Musk", Elon Musk thinks that we are all living in a computer simulation. So after his dire warnings that we're about to be annihilated by artificially intelligent robots who want to kill us.
Kai: That we therefore should either merge with the intelligence and become cyborgs or alternatively escape to Mars.
Sandra: Now he's completely convinced that we are almost certainly living in some version of the Matrix. We are all living in a computer simulation. Here's what he had to say.
Audio Elon Musk: Forty years ago we had Pong - like two rectangles and a dot. That was what games were. Now 40 years later we have photorealistic 3D simulations with millions of people playing simultaneously and it's getting better every year. And soon we'll have virtual reality, augmented reality. If you assume any rate of improvement at all then the games will become indistinguishable from reality, just indistinguishable. Even if that rate of advancement drops by a thousand from what it is right now then you just say well let's imagine it's ten thousand years in the future, which is nothing in the evolutionary scale. So given that we're clearly on a trajectory to have games that are indistinguishable from reality, and those games could be played on any set-top box or on a PC or whatever, and there would probably be billions of such computers or set-top boxes, it would seem to follow that the odds that we're in base reality is one in billions.
Kai: So his argument is essentially that if we look at the rate at which we have made progress in computer simulation technology, and we extrapolate and keep inventing and improving at that rate, it is entirely likely that at some point we're going to be able to simulate something that is indistinguishable from reality, that we're probably going to do it, that it's more than likely someone has done it before, and that we mere mortal creatures are actually part of someone else's simulation as we live our lives right now.
Sandra: This was in a report that was actually looking at the implications of virtual reality, and they quoted a number of philosophers, scientists and other thinkers to come up with a 20 to 50 percent chance that we are in a simulated reality. So first let's look at where this comes from.
Kai: Well it's not Elon Musk making this up. This is actually coming straight from the philosophy of Oxford professor Nick Bostrom, a Swedish guy, Director of the Future of Humanity Institute and a very well published philosopher who has engaged with essentially all the kinds of questions that Elon Musk has channelled. Bostrom writes about transhumanism, the idea that humans will become cyborgs and merge with technology, which goes straight into Elon Musk's idea about the neural lace. He has also done work on risk and the future, which has translated straight into Elon Musk's fear of AI and the machine uprising. And in 2003 he published the paper 'Are You Living in a Computer Simulation?', which is essentially what Elon Musk is talking about today.
Sandra: So that paper argued that at least one of three propositions is true. First, that the human race is likely to become extinct before reaching a posthuman stage - and this has informed some of Musk's strategies as well: we must go to Mars. Second, that any posthuman civilisation is unlikely to run a significant number of simulations of its evolutionary history, or variations thereof. And third, that we are almost certainly living in a computer simulation. This is known as the simulation argument, and it's a logical projection argument.
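For those who want to see the arithmetic behind that trilemma, here is a rough sketch of how Bostrom's 2003 paper expresses it; the notation is paraphrased rather than quoted, so treat it as an approximation.

% Sketch of the simulation argument (notation paraphrased, not quoted).
% f_P     : fraction of human-level civilisations that reach a posthuman stage
% \bar{N} : average number of ancestor-simulations such a civilisation runs
% \bar{H} : average number of people who lived before it reached that stage
\[
  f_{\mathrm{sim}}
  = \frac{f_P \, \bar{N} \, \bar{H}}{f_P \, \bar{N} \, \bar{H} + \bar{H}}
  = \frac{f_P \, \bar{N}}{f_P \, \bar{N} + 1}
\]

On this reading, f_sim, the fraction of human-like observers who are simulated, is close to 1 unless almost no civilisations reach a posthuman stage (f_P near zero) or those that do run almost no simulations (N-bar near zero): which is exactly the three-way choice Sandra describes.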
Kai: And the article in Inc. actually makes the point, on the basis of this reasoning, that the best of these options we can hope for is that we're already living in a simulation, because otherwise we might actually drive ourselves to our eventual extinction.
Sandra: So we better hope we're in one.
Kai: Yeah. So now what do we say about this argument?
Sandra: Does this make sense?
Kai: To me it's, let's say, nonsense; I could say bullshit. Now, first of all, I don't think this is likely, and I want to make the argument that it is based on some very strong assumptions about what the world is like. The article actually mentions René Descartes' distinction between body and mind, and the idea that the mind is the locus of all intelligence and that we can therefore eventually come to a brain-in-a-vat kind of state, where we will be able to separate ourselves from our bodies and upload ourselves into a computer-simulated environment. And indeed many of those fantasies are being tossed around in the literature at the moment, in the same narrative that gives us the singularity, the machine uprising and the Matrix. So again, science fiction has inspired a lot of this thinking, but at the same time the ideas of René Descartes have been thoroughly debunked in philosophy from many different angles.
Sandra: You're referring to the mind/body dualism here, right?
Kai: Yes, for example it discounts the body. A lot of our cognition and our experience, and also the way in which we think about the world, the way in which we do mathematics, is fundamentally grounded in the body. Think about how we do maths: calculations depend on a sense of direction, up and down, left and right. And a lot of our language, this is work by Lakoff, who has shown that a lot of our language is based on spatial metaphors. So the body is usually discounted in these kinds of arguments that reality is purely information-based and that we can therefore recreate reality in a computer system. It is also grounded in the assumption that the brain is essentially hardware that runs a kind of information-based software, which is our consciousness, our experience, and that we can therefore recreate it in a computer. But the base assumptions that this whole reasoning is founded upon discount not only work in philosophy on embodied perception, for example the work of Maurice Merleau-Ponty, the French philosopher; they also discount work in neuroscience, which shows quite conclusively that the brain isn't a kind of computer, it's an organism that grows, that is entwined with the body and the sensory system, and that actually draws on the environment as sensory input. So the idea of taking the human experience out of the embodied environment into a kind of computer simulation is not as straightforward as this philosophy might have us believe.
Sandra: Other people have also examined whether such simulations are indeed possible. This time let's turn to physicists. There are a couple of ways to think about it. First, a couple of physicists who recently published in Science Advances tried to consider this notion of creating massive-scale simulations using classical computer modelling, and they determined that this is in fact impossible, due to something known as the quantum Hall effect, which causes simulations to become exponentially more complex as the number of particles added to the system increases. Meaning that if we were to simulate the entire universe, it would be impossible to do with current computational methods. Another way to approach this has been from quantum theory itself, and quantum theory is increasingly being understood in terms of information and computation. Some physicists feel that at the most fundamental level nature might not be laws or pure mathematics but rather pure information. This has led influential physicists like John Wheeler to adopt the notion of "it from bit": the idea that bits of information, zeros and ones like we have in computers, are the building blocks of the world as we see it. So if you think of the whole universe as, let's say, a giant quantum machine, then the whole structure of the universe is nothing more than information, than bit operations. So in this case reality is just information: whether we're a simulation or not a simulation, it's all just information. We can't be anything other than information, so it doesn't really matter.
Kai: So that's the reasoning of Elon Musk, and this very taken-for-granted notion that reality is basically the information that we perceive, and that with technological progress we should therefore be able to simulate it. But there are also strong reasons from philosophy, physics and neuroscience that speak against this, or challenge some of its assumptions. The question that we have is a more pragmatic one: would it actually matter whether we lived in a simulation or not? Would it change anything for us?
Sandra: It's not really obvious, and this is counterintuitive, but it's not really obvious why it should matter, why anything would change. And for this I again want to turn to physicists, in this case Lisa Randall, who is a Harvard physicist. She made a nice argument at the Isaac Asimov Memorial Debate that happened at the American Museum of Natural History, and Scientific American reports on this whole debate, we'll include it in the show notes. For her, the argument was that nothing would change about the way we should see the world and the way we should investigate it. And because of that, this whole question of whether or not we live in a simulation, she's saying, is not as interesting a question as we make it out to be. She also had something to say about the fact that it assumes quite a few things. It assumes another reality that works pretty much the way ours does. It assumes that the things we do now, or the things we're interested in now, for instance playing computer games or super-human intelligence, are what every other advanced civilisation would be interested in, and that we are extrapolating from what we do now, playing computer games and so on, to what these super-intelligent beings would like to do as well. So for her, again, nothing would change about how we should see the world and how we should investigate it. So it's really not just a "so what" question as much as a question about how we should think about what we understand as reality.
Kai: I want to go back to the end of the article, because almost in passing the author makes an interesting point about religion. Maybe, just maybe, these ideas and philosophies, about there being a higher civilisation that runs a simulation with us in it, or about waiting for the impending singularity that will give us a higher machine intelligence which, depending on dystopian or utopian beliefs, will either kill us or solve all our problems, the mess we've created with climate change and everything, are really beliefs of a religious kind: some higher power that is at work here. So maybe this is a natural human reaction and an escape from reality, where technology has been elevated to almost a religious belief, and where the future and the uncertainty about the future are couched in ideas about technological progress and higher civilisations.
Sandra: What we would still need to show is that making such assumptions about what or who is out there and why, making such distinctions about what is real, makes a difference to what we might do or see or observe around us in our everyday lives.
Kai: I do have to give Elon Musk credit for bringing up these ideas, because I do think it's useful to talk about this and to think it through. Some of those things we might think are bullshit, but the conversation I think is useful, because the way in which technology is such a big part of people's lives these days deserves scrutiny from different angles.
Sandra: So simulation or not this doesn't change our notion of reality in a meaningful way.
Kai: No it doesn't but it is kind of important to talk about this and it's just another technology type version of the age old question of the meaning of life and whether there is something bigger behind all of this that we can't quite grasp right.
Sandra: On The Future, This Week: the meaning of life. That's all we have time for today.
Kai: That's all we have time for. Thanks for listening.
Sandra: Thanks for listening.
Outro: This was The Future, This Week. Made awesome by the Sydney Business Insights team and members of the Digital Disruption Research Group. And every week right here with us our sound editor Megan Wedge, who makes us sound good and keeps us honest. Our theme music was composed and played live from a set of garden hoses by Linsey Pollak. You can subscribe to this podcast on iTunes, Soundcloud, Stitcher or wherever you get your podcasts. You can follow us online on Flipboard, Twitter or sbi.sydney.edu.au. If you have any news that you want us to discuss, please send it to sbi@sydney.edu.au.