This week: medicine goes digital, AI for president and cameras killing keyboards. Sandra Peter (Sydney Business Insights) and Kai Riemer (Digital Disruption Research Group) meet once a week to put their own spin on news that is impacting the future of business in The Future, This Week.

The stories this week

Medicine is going digital and the FDA is racing to catch up

Hear me out: Let’s elect an AI as president

Your camera wants to kill the keyboard

Otsuka Pharmaceutical and Proteus Digital Health

Tim Cook is testing a glucose tracker for the Apple watch

Donald Trump, our A.I. president

Why A.I. is different to human intelligence

iCEOs in the Harvard Business Review

A picture is worth a thousand words


You can subscribe to this podcast on iTunes, Soundcloud, Stitcher, Libsyn or wherever you get your podcasts. You can follow us online on Flipboard, Twitter, or sbi.sydney.edu.au.

Send us your news ideas to sbi@sydney.edu.au.

For more episodes of The Future, This Week see our playlists.

Dr Sandra Peter is the Director of Sydney Executive Plus at the University of Sydney Business School. Her research and practice focus on engaging with the future in productive ways and on the impact of emerging technologies on business and society.

Kai Riemer is Professor of Information Technology and Organisation, and Director of Sydney Executive Plus at the University of Sydney Business School. Kai's research interest is in Disruptive Technologies, Enterprise Social Media, Virtual Work, Collaborative Technologies and the Philosophy of Technology.

Introduction: The Future, This Week. Sydney Business Insights. Do we introduce ourselves? I'm Sandra Peter, I'm Kai Riemer. Once a week we're going to get together and talk about the business news of the week. There's a whole lot I can talk about. OK let's do this.

Kai: Today in The Future, This Week: medicine goes digital, AI for president and cameras killing keyboards.

Sandra: I'm Sandra Peter. I'm the Director of Sydney Business Insights.

Kai: I'm Kai Riemer. I'm professor here at the business school. I'm also the leader of the Digital Disruption Research Group. So Sandra what happened in The Future, This Week?

Sandra: Medicine is going digital and regulation is trying to play catch up.

Kai: So the first article is from Wired magazine, and it talks about the FDA, the regulator in the USA, the Food and Drug Administration, trying to play catch up with the rise of digital medicine: medical devices, smartphone apps, algorithms capable of spotting suspicious moles and quantifying blood flow. The regulator finds itself in a position where Silicon Valley and the technology companies are innovating at a pace that far outstrips the cycle of approval and review the agency would normally apply to traditional medicine. So they have a problem.

Sandra: So one aspect is the problem of speed. You used to have technology used only for making things like pacemakers, and it would take manufacturers years to develop such a product, so a regulatory approval process that took many years was workable: regulators had time to catch up with the developments happening around the code and around software moving into more and more complex problems. That pace has become much harder to match, especially given the sheer amount of innovation happening. Where you might have had two or three companies doing this twenty years ago, you now have hundreds of companies developing apps at ever increasing speed, and the black box keeps getting bigger. So the FDA and other regulatory bodies are finding it quite difficult to keep up. Many of these are not high-risk products, but there are certainly developments in the high-risk category as well. For instance, Tim Cook is testing a glucose tracker for the Apple Watch: there was an interesting report from CNBC that Apple has already begun feasibility tests with a tracker that connects to the Apple Watch and monitors glucose levels.

Kai: And we mentioned previously that the Apple Watch is more and more becoming a medical device. It already measures steps and heart rate and...

Sandra: But this would be an actual, real medical development. A glucose tracker that does not go under the skin would be an amazing development: you could keep track of your blood sugar levels with a non-invasive device.

Kai: So the article says that the FDA used to be quite attuned to the time cycles at which manufacturers would develop their medicines and then put them forward for review. Not only are Silicon Valley and the tech companies developing things at a faster speed, they also create variants of their apps and devices at a much faster speed. Previously you might have a medicine, and an update might come out a couple of years later; but with the speed at which software iterates and devices come to market, the cycles have gotten much shorter. So the traditional approval processes just can't cope with what is happening now that digital technologies find themselves in the medical space. And they do need approval, because they measure certain health aspects, and people acting on those measurements might face serious consequences for their health.

Sandra: Take the glucose monitor we were talking about before. So any regulatory body now seems to have two big choices on its hands. One is to try to translate the way they do current regulation into this digital paradigm; that would mean hiring a whole bunch of people to inspect the code, to try to understand what is going on and to try to keep up.

Kai: And the article says that it's not easy to hire engineers and technicians away from Silicon Valley, because the FDA is hardly able to match the salaries and working conditions. So they find themselves in a competition with the tech companies for this talent, a competition they are not really able to win.

Sandra: They also lack the structure to accommodate the types of apps we're seeing now. Traditionally, places like the FDA are organized by specialty: you've got people who are experts in cardiology who look at pacemakers, people looking at MRI machines, and so on. But a lot of the algorithms and the AI being developed in this space cut across a number of these fields. So who do you hire, and where do you place them in the FDA's traditional structure?

Kai: You basically need specialists in sensor technology, or specialists in algorithms and AI: people who can judge how well these technologies work regardless of the application they end up in.

Sandra: Exactly. And just this morning we've seen an article come out saying that in the US the Food and Drug Administration has received a new drug application for an ingestible sensor that is part of a tablet, which they now have to review. So what happens then? Well, the FDA, or any other regulatory body in the medical space, has a choice between trying to keep up and fighting a losing battle, or rethinking from the ground up how they do regulation in this space. And that, unfortunately, comes with immense problems as well. The solution the FDA is considering right now is, rather than reviewing each line of code for all the devices and software applications that come in and assessing each of these items on its own merits, to flip the assessment process on its head. Think of airports, of what happens when you go through security. If you're a trusted traveller in the US, you can get on a TSA pre-approved list that says: this is someone we know, they've travelled a lot before, we know they never carry dangerous things. So they don't have to take their shoes off, they don't have to take the laptop out of the bag to go through screening. Well, that's what the FDA is considering doing.

Kai: So they're moving from actually testing each medical device, each app, each drug, to auditing and certifying the developers and putting them on a whitelist. If you are a trusted maker of medical devices or medical apps, your apps will be pre-approved, which leads to a whole lot of new things we have to think about.
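To make that flipped process concrete, here is a minimal sketch of the idea, assuming a toy whitelist; the developer names and rules below are our own illustration, not the FDA's actual pre-certification programme:

```python
# Illustrative sketch only: a toy model of "certify the developer, not the
# product", loosely analogous to TSA PreCheck. The names and rules are
# hypothetical, not the FDA's actual pre-certification programme.

PRECERTIFIED_DEVELOPERS = {"TrustedMedCo", "KnownDeviceMaker"}  # the whitelist

def review_submission(developer: str, product: str) -> str:
    if developer in PRECERTIFIED_DEVELOPERS:
        # Trusted developer: the product ships with light-touch oversight,
        # relying on post-market monitoring instead of upfront review.
        return f"{product}: pre-approved (post-market monitoring applies)"
    # Unknown developer: the traditional per-product review still applies.
    return f"{product}: queued for full premarket review"

print(review_submission("TrustedMedCo", "glucose-tracking app"))
print(review_submission("NewStartup", "mole-checking app"))
```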

Sandra: One of the dangers we have to think about is that, as you've mentioned before, some of these devices and software applications have real-life implications. You know, if there is a mole and the app tells you it's not cancer, you don't go to see the doctor who could have told you that actually it is...

Kai: ...You should have come eighteen months ago...

Sandra: Exactly.

Kai: Exactly.

Sandra: There are some real dangers here. On the other hand, there are some benefits: rather than taking 7, 10, 15 years to get to market, many of these applications would rush through this process, so lifesaving applications might actually reach patients, consumers, users much, much faster.

Kai: The dilemma is: do I fail to approve quickly something that is potentially very useful, or do I let potentially problematic things through the approval process because I certified a company I trusted? And who says that a company that produces a great device one day won't produce something that doesn't work the next day? So that's the problem we're talking about.

Sandra: There's also the question of incentives. On an earlier podcast we had a conversation about the Apple Watch and the lack of real medical applications on it. We talked about the fact that there was so much regulation, especially in the US with the FDA, that it wasn't in the interest of companies like Apple or Google to jump through all of those hoops. Now, should those hoops be removed? Last year alone, for instance, Google's venture capital arm, which manages about two and a half billion dollars, invested half of that in the healthcare space. Other companies would then have even more of an incentive to invest in healthcare.

Kai: All of this is complicated by the fact that many of the technologies we're talking about, especially machine learning and AI, are fundamentally un-auditable, and we've touched on this previously. Traditional algorithms work on a certain if-then logic: you can audit every line of code and determine what it does, so you can be certain that it does what it is supposed to be doing. With machine learning that's fundamentally different. The algorithm is trained on a set of data and learns, by way of the reorganization of its internal neurons, to detect certain patterns. It does that with a certain accuracy, and you can put it through certain tests, but you don't actually know how it will react in each single instance, especially not when you put it out into the field and it's no longer under laboratory test conditions. People take photographs of their skin under all kinds of light. So it's really very difficult to audit it, put an approval stamp on it and say it has a certain efficacy. You cannot actually put it through the same rigorous process as you would a more traditional drug. So the question for me is: will there be some form of signage or seal which says this app was tested, or this app is untested? Will there be a warning that comes with it? All of this has to be discussed going forward.
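The contrast is easy to see in code. Below is a rough sketch with made-up thresholds and stand-in random data: the rule-based check can be audited line by line, while the trained network's decision logic is nothing but arrays of learned weights.

```python
# Our illustration, not any real diagnostic tool: thresholds and data are
# stand-ins. Compare an auditable rule with an un-auditable trained model.
import numpy as np
from sklearn.neural_network import MLPClassifier

def rule_based_flag(diameter_mm: float, asymmetry: float) -> bool:
    # Every criterion is explicit: a reviewer can read and test each line.
    return diameter_mm > 6.0 or asymmetry > 0.7

# A learned model trained on stand-in data (placeholders for real features):
X = np.random.rand(200, 2)
y = (X[:, 0] + X[:, 1] > 1).astype(int)
model = MLPClassifier(hidden_layer_sizes=(50,), max_iter=2000,
                      random_state=0).fit(X, y)

# We can measure its accuracy on a test set...
print("accuracy:", model.score(X, y))
# ...but its "reasoning" is just matrices of weights, not inspectable rules:
print("weight shapes:", [w.shape for w in model.coefs_])
```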

Sandra: And the question then becomes: if some of these apps do get some sort of seal of approval, or indeed approval from the FDA, do they then automatically become part of the insurance system? Do they become part of Medicare?

Kai: And do insurers, Medicare and Medicaid then pay for something that is not actually tested? And what happens if the efficacy is questioned afterwards, and it turns out that the thing didn't work as it was supposed to: who is then liable? So there are a lot of questions down the track.

Sandra: And yet the alternative would not be much better. That would mean the development of a parallel market of untested apps. People would still make decisions based on them, especially if they're not part of a traditional insurance scheme. And there are quite a few other markets, developing markets or even bigger ones such as China, where people are not as concerned about regulating these apps as they are in places like the US.

Kai: So it's clear that the FDA has to do something about this. The question I have is whether adopting the same ethos as Silicon Valley is the right idea. Because, let's face it, Silicon Valley operates on a "move fast, break things, fail often" kind of ethos, and that might not actually be acceptable in the health industry, where failure can have serious consequences for people's health and lives. So this is interesting going forward. Not doing anything is not an option; they have to do something. But it is very hard for them to keep up with that speed.

Sandra: So, a conversation I'm sure we'll come back to in the future.

Kai: Yes. Moving on to our next story, which is also centered on the US: it concerns the president and it concerns AI. There was an article, again in Wired magazine, titled "Hear me out: let's elect an AI as president". The author makes the argument that many people haven't been all that happy with recent elections and presidents, and he goes on to speculate about what it would be like if we elected an AI as president. Wouldn't that be a great way to overcome many of the problems we have with politicians in the US, with the parties in Congress blocking each other, ideologies being at play, personal preferences and all the kind of human stuff that gets in the way of good, rational governing? The idea being that if we had an AI and we gave it a good goal, maximizing happiness for the many, wouldn't that be a great way of governing a country?

Sandra: Yes and the article even points out that this AI might learn that it is a good idea to tweet a little bit less.

Kai: Yes, and so we could have a Democrat AI and a Republican AI. They could have different goals: we could implement the parties' different platforms and programs into the algorithms, and we could then let people elect the algorithm they want as their president. And life could be so much better. Now, there are problems, right?

Sandra: Yes, indeed there are some complex issues that this AI would need to sort out. Some of them only seem straightforward, things like the Constitution: should it be interpreted literally, or should it be adapted as times evolve? Or how do you tackle issues such as poverty? Even within the University of Sydney Business School we've seen different approaches to how we think about poverty: whether it is the domain of government, of private organizations, of not-for-profit organizations, or of groups of stakeholders coming together. And then there are questions of consequences. Many of the decisions made by real presidents have had unintended consequences, and probably so would those of an AI, no matter how good we make it. So how would you tackle those?

Kai: The article quotes President Obama, who said in 2009: "By the time something reaches my desk, that means it's really hard." So the article could end there and just conclude that AI is really not fit for purpose here. But no, the article says that algorithms can now win at the game of Go, they have made all these advances in detecting cancer in MRI images; so look at all the advances we have made with AI. If we keep innovating along this path, surely we will be in a position where AI can make better judgments than humans about more complex, more difficult issues. But this is where the problem lies. I call this the extrapolation fallacy: just because we have mastered certain pattern matching exercises does not mean we have created anything that resembles human judgment.

Sandra: So just because computers win at chess or Go does not mean that we will manage to build Skynet or the Matrix or HAL.

Kai: No. Making judgments in situations that are novel, that people have not encountered previously, is quite different from learning from past data to do pattern matching. We should not forget how these algorithms work: each of them has to be trained on past, known data according to a known success criterion. Deciding in novel circumstances we haven't encountered before, the difficult situations we want any leader to face and make judgments in, thinking forward into the future about things that we as humanity haven't encountered before, is quite different from using an algorithm in a well understood area such as, you know, finding melanoma in pictures. They're fundamentally different kinds of cognition. So to suggest that an AI could actually replace a human in making judgments in difficult, novel situations is just preposterous. Or should we say: bullshit.

Sandra: In fact, there was another article this week in the New York Times, "Donald Trump, Our A.I. President", which actually tries to look at what this premise would look like in real life.

Kai: Yes, this article is written by Robert Burton, a neuroscientist who knows a lot about how the brain works and how these algorithms work, and he makes quite an interesting argument. He says we have struggled collectively, in the media, as a society, as people, to make sense of the way in which Donald Trump acts. We have psychological analyses; people have labelled him a narcissist, a megalomaniac, and all these kinds of psychoanalytic readings. And he says: look, the closest we can come to understanding the way he acts is to liken him to a current machine learning AI. So in fact he makes the argument that if we put an AI into the president's seat, it would act exactly like Donald Trump is acting.

Sandra: So the same reasons that make it difficult for us to understand why Trump is doing some of the things he's doing would play out in the case of a real AI. To understand his decision-making process, we would have to know the basic assumptions he makes, whether the values at play are positive or negative, and how they compete with each other the moment he tries to make a more complex decision.

Kai: Yes, the article says that when we strip away all moral, ethical and ideological considerations from his decisions and see them strictly in the light of machine learning, his behaviour makes perfect sense. It goes on to describe how these machine learning technologies work: all they really want to do is get the best outcome, to win a game essentially, so they train on past data to win the game. Take the election: you figure out who wants to hear what message, what gets you the most votes, you play to that, and you win the election. You're just perfecting how to play the game of winning the election. Now, an algorithm doesn't have a sense of temporality; it is not bound to any moral judgment, nor to any particular ramifications in the future. So the article makes the argument that it doesn't really matter what you say during the election. You're not beholden to it. All that counts now is to win on a different set of criteria.

Sandra: And then the criteria change, so the next game you have to play is judged by different criteria. Winning in the next iteration, once you become president, is on a different set.

Kai: So it's all about winning a set of games.
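A toy sketch of that "winning a set of games" logic, entirely our own illustration rather than anything from the article: the agent simply picks whichever move scored best on past data for the game currently being played, with no notion of consistency across games or of future consequences.

```python
# Hypothetical toy agent: it maximizes the payoff of the CURRENT game only,
# based on past data, with no memory of what it "said" in previous games.
import random
from collections import defaultdict

class GamePlayingAgent:
    def __init__(self):
        # observed historical payoff for each move, kept separately per game
        self.payoffs = defaultdict(dict)

    def observe(self, game: str, move: str, payoff: float) -> None:
        self.payoffs[game][move] = payoff

    def act(self, game: str, moves: list) -> str:
        known = self.payoffs[game]
        if not known:
            return random.choice(moves)  # no data yet: explore at random
        # Exploit: best past payoff in THIS game; earlier games are irrelevant.
        return max(known, key=known.get)

agent = GamePlayingAgent()
agent.observe("election", "promise A", 0.9)
agent.observe("election", "promise B", 0.4)
agent.observe("governing", "do the opposite of A", 0.8)  # new game, new criteria
print(agent.act("election", ["promise A", "promise B"]))   # -> promise A
print(agent.act("governing", ["keep promise A", "do the opposite of A"]))
```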

Sandra: So if you think about it it's a bit like making deals.

Kai: Yes, it fits his description in The Art of the Deal: everything is always a deal. A deal doesn't really have a temporality, a history or a future to it; it's all about winning the next game. And the parallels don't end there. The article also says that algorithms cannot explain themselves: famously, these AIs act like black boxes. So much like we might not understand why certain decisions come out of the White House, machine learning algorithms cannot explain why they make a certain decision. They just react to input data and come to an output. But they do not have a moral compass, an ideology or anything else guiding their decision making. So they are essentially not able to make the difficult decisions in situations we have not encountered before. In essence, they are fundamentally unsuited, and ironically this article makes the argument that if you put an AI into the presidency, you get something like the Trump presidency. Which goes to show that while AIs are an efficient solution for well understood problems, they are really not fit for purpose when it comes to the complex situations we as humans encounter, where human judgment is needed.

Sandra: And this sort of argument applies equally to something like the presidency and to CEOs of organizations. While we've covered the conversation about how you run a country, discussions in the Harvard Business Review, going back a couple of years, have also revolved around having an AI fill the role of CEO of an organization. There are a couple of examples there, such as the Institute for the Future in Palo Alto, which has an iCEO: basically a virtual management system that automates more complex work by dividing it into smaller tasks. That is a managerial task it can probably get fairly good at, but it doesn't make decisions.

Kai: No, and the argument is of course appealing, right? In the face of bad leadership and bad management, we want a better solution, so we reach for the rational, seemingly objective algorithm. This is what we do these days: technology is the solution to all problems. Unfortunately, pattern matching algorithms are not the answer to bad leadership. The answer to bad leadership is better leadership; it's not replacing humans with machines. And as the article says, the Trump presidency is the best example of the kind of incoherent decision making we might get if we actually were to implement something like this.

Sandra: So we're not yet at the point where we will have an AI CEO or possibly an AI president?

Kai: No, and the current path of AI based on pattern matching will not get us there. The kind of judgment that humans make is fundamentally of a different kind: it's a different form of intelligence than reacting to past, learned data. Forward-thinking, future-oriented decision making is cognition of a different kind. Even if it's hard to accept, algorithms are not the solution here.

Sandra: So far today, two topics without a clear solution or way forward. Let's try to keep it three in a row. Our last story for today: your camera wants to kill the keyboard. Last week at Google's developer conference, Google presented Google Lens, a new computer vision technology that turns your phone camera into a search engine. You point your camera at, say, a plant and it will tell you what kind of plant it is; you point your camera at a tree and it will tell you whether it would grow in your garden; you use the camera of your phone as the input device for a search, much as you use your keyboard now. And you wouldn't need to type to book a reservation at a restaurant; you would just take a picture of the restaurant and have a table at 8 o'clock.

Kai: We have a little clip here for you.

Audio: "And so today, we are announcing a new initiative called Google Lens. Google Lens is a set of vision-based computing capabilities that can understand what you're looking at, and help you take action based on that information. So how does it work? For example, if you run into something and you want to know what it is, say a flower, you can open Google Lens from your Assistant, point your phone at it, and we can tell you what flower it is. It's great for someone like me with allergies. Or if you've ever been at a friend's place and you've crawled under a desk just to get the username and password of a Wi-Fi router, you can point your phone at it and we can automatically do the hard work for you."
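Under the hood, the first step of something like this is ordinary image classification. Here is a minimal sketch of the "camera as search input" idea, assuming an off-the-shelf pretrained model and a hypothetical photo file; Google Lens's actual pipeline is proprietary and far more capable:

```python
# Sketch: classify a photo with a pretrained ImageNet model, then use the
# predicted label as an ordinary text search query. "flower.jpg" is a
# hypothetical input; requires torch, torchvision >= 0.13 and Pillow.
import torch
from torchvision import models
from PIL import Image

weights = models.ResNet50_Weights.DEFAULT
model = models.resnet50(weights=weights).eval()
preprocess = weights.transforms()  # the preprocessing the model was trained with

img = preprocess(Image.open("flower.jpg")).unsqueeze(0)  # add batch dimension
with torch.no_grad():
    probs = model(img).softmax(dim=1)

label = weights.meta["categories"][int(probs.argmax())]
print(f"Detected: {label}")  # e.g. "daisy" -> feed this into a text search
```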

Kai: Sandra are you as excited?

Sandra: Not really. While there are some really interesting applications, and I would love to just point my camera at a plant and know exactly what kind of plant it is, there are far fewer opportunities to interact with the world around me this way. So, as appealing as the idea is of walking all the way to the Opera House and taking a picture of it to find out what's on tonight and buy tickets...

Kai: ...It would be easier to just type "Opera House" into the search box. I mean, it's an interesting technology: you take a picture of something, an algorithm deduces what you want with this object, and then it finds information about it or gives you options for where you can buy it. But think about it. How often do you have the problem that there is an object in plain view and you either want information about it or want to purchase it? Mostly when I want to purchase something it's not here; that's why I want to purchase it, right? So do I first have to find a picture of the thing I want to purchase, to take a photo of it? This sounds awfully complicated. So I think we have an interesting solution here, but I don't seem to have that problem as much as the excitement in the clip suggests. Mostly I find it reasonably straightforward to just type into the search box what I need, and yes, sometimes it might be interesting to find some information about an object I come across, but that seems to me a very niche problem.

Sandra: One you might use in a museum or in many other places, but one that we think will come to complement something like the keyboard rather than replace it. And we know there are other candidates for replacing the keyboard: everyone wants us to talk to our machines rather than type things in, but that's a whole different conversation. Google, Facebook and Snapchat have all gotten very excited about this idea of augmented reality and augmented imagery, things they claim you simply cannot do with text. And while we all know that a picture is worth a thousand words, the camera might still not be the new keyboard.

Kai: No and if you think about what you commonly search for you know using Google or other search engines a lot of this will not be about physical objects that you can take a picture of it will be about ideas, things you find on the Internet, digital things.

Sandra: I think, though, there is a very interesting story embedded in some of the assumptions these companies make. For Facebook in particular, but also for Google, this is a new language: pictures allow you to do things you weren't able to do with text.

Kai: But that presents a problem, right? I mean, we're talking emojis here, we're talking taking pictures and sending them to people. So, pictures as a form of communication that is supposedly richer than language.

Sandra: Richer than language, but also controlled by an organization. At the same Google conference, Google also redesigned some of its emojis. The little blobs are gone and we're getting redesigned ones that look a bit more mainstream. But some of those emojis were things we used to express certain feelings, like what you feel when someone is late and you send them a sticker of an angry cat pointing at its watch. If the company changes those, it changes the way I express what I feel.

Kai: It takes away your language.

Sandra: It does. And it's something no company had the power to do before. If you only have text to express yourself, the text is the same everywhere and people cannot take it away from you. But if there is a cat that winks in a certain way, I want to be able to keep that. Not only can I not freely migrate it across platforms at this point (if you're on Viber you have a different set of stickers than on Google Hangouts, in iMessage or on WeChat), but now they have the power to take them away from me.

Kai: So you're saying we're putting our language in the hands of corporations and they are now in charge of what we can and can't say?

Sandra: To some extent yes.

Kai: But there's another reason why Google likes us to take photographs of our surroundings. Right?

Sandra: Exactly. The good news about Google collecting all of these pictures, and potentially also the really creepy news, is that every picture you take helps Google or Facebook or Snapchat learn a little bit more about you and a little bit more about your environment. So you're training the algorithms that recognize things, but you're also training them on your behaviour.

Kai: So this is true to how Google operates, right? With everything we do, we help them perfect their algorithmic knowledge of the world, which in turn enables them to create other services for us. So it goes both ways: they give us the opportunity to communicate with each other in pictures and emoticons, but at the same time we are training their algorithms to do more of these things down the track.

Sandra: Maybe one day they will get good enough to become president.

Kai: But that's all we have time for today.

Sandra: See you next week.

Kai: Thanks for listening.

Outro: This was The Future, This Week, brought to you by Sydney Business Insights and the Digital Disruption Research Group. You can subscribe to this podcast on Soundcloud, iTunes or wherever you get your podcasts. You can follow us online, on Twitter and on Flipboard. If you have any news you want us to discuss please send them to sbi@sydney.edu.au.
