A conversation about AI & Ethics - Digital Frontrunners 2019
[Applause] Welcome to the stage Carly Kind and Dr. Kristinn Thórisson. We're going to talk with some experts about some of the deeper ethical, moral and philosophical questions concerning this thing called artificial intelligence. This is Kris Thórisson, who's from Iceland. He has been focusing on AI for 30 years; he runs the Icelandic Institute for Intelligent Machines and is a professor of computer science at Reykjavik University, but he has also been an entrepreneur, so he's seen both the theoretical and the application sides. Among other things he's worked with Honda's ASIMO, which hasn't yet taken over the world, but he knows about building constraints into systems, and about the opportunities. Carly Kind is running a new institute in the UK called the Ada Lovelace Institute, which is working to make sure that artificial intelligence and data analytics work in the public interest and are on our side, looking at everything from biometrics to face recognition; she formerly worked in privacy law and data protection law. And Massimo Pellegrino here is from the corporate side: he's a partner at PwC in Italy who's in charge of new ventures, but also responsible for the ethical, legal and governance framework of their AI strategy; he's got a background in data and AI.

But I want to get a sense firstly of what is obsessing our panel, who are thinking about these questions, and I'm going to start with Carly. You're relatively new in this organization; you've decided not to take government money or corporate money, so you've got a research grant and a team of half a dozen. Day by day, what are the things top of mind?

There are two things I'm thinking about at the moment. One is the result of a book I read recently called The Technology Trap, which looks at the similarities between the first Industrial Revolution and the current digital revolution. One of the takeaways from that book is that in revolutions such as the one we're undergoing at the moment, the benefits are rarely felt by the generation that experiences them; in fact the benefits are often delayed to future generations, and as a result you see backlash. You see, in the current context, techlash: public unrest, public dissatisfaction. So how do we ensure that the benefits of this digital revolution accrue to those who are experiencing and undergoing it at the moment? That's one of the things I'm thinking about. The second is: what do changes in technology mean for societal values? David, you mentioned in your speech earlier: what are our society's values, and how have technologies challenged or reinforced those values? Some of the ones that have come up already today are things such as human agency, solidarity, community, equity and diversity. Do we need new social values, or do we need technologies to reinforce, redefine and further respect existing social values? Those are some of the things we're grappling with as we set up the Ada Lovelace Institute.

And when you talk about the backlash, are we talking specifically about people losing their jobs?

I think that's part of it. I think public distrust in technologies, public distrust in the media. For example, we've seen rapid advancement of communication technologies, and we've seen a number of things develop, like fake news and deepfakes, which you gave some great examples of; as a result we've seen declining public trust in mass media, and that filters out into all sorts of things, including public trust in democratic institutions.
These technologies should be exciting us; they should be making our lives better. But people have the sense that their lives aren't getting better, they're getting more uncertain: their jobs are on the line, they can't trust government, they can't trust the media. So how do we build the legitimacy of these technologies for the public and ensure that they actually trust them and are excited about technological developments?

It does seem there's a time lag, as you suggest, between the potential benefits of these technologies and the way that's received by the wider public. There are a lot of people doing jobs that are not much fun. It's not much fun driving a truck across the continent; autonomous driving will do that job, and those people will be retrained to do things possibly more satisfying. But there's nobody saying to these people in the interim: we're going to take care of you, we're going to give you that education.

I think that's part of the problem. But you say it's not very much fun driving a truck; it may be very integral to that person's identity that they're a truck driver. I suspect that truck driving as a domain has a very strong community, and people really associate with being a truck driver. So beyond what that person is going to do for work and for income, how are they going to have an identity? How are they going to redefine what their social purpose is in the world? If they're told "you don't need to drive a truck anymore, you can have any number of other more luxurious jobs", is that what they want? Do they feel like they are still contributing to society? I think that's even more fundamentally problematic than the retraining angle. You're not wrong that retraining these people for other jobs is part of the challenge, but so is working out how it impacts their sense of identity and social purpose. That, I think, is a more fundamental challenge, and it's difficult to imagine how governments or companies alone can grapple with it. It needs to be a much more integrated conversation, whereby the public and the public interest are first and foremost.

Why did you decide that the Ada Lovelace Institute should be independent of these big companies and of government?

You touched on it a little in your speech: this phenomenon of ethics washing. I think it's quite problematic that ethics has been co-opted by some in the private sector, and in government to be fair, to act as a kind of catch-all term for "we're doing good", without there being anything there to back it up: any process, any demonstrable evaluation of the impacts of technologies, and so on. Simply establishing an ethics council or an ethics code of conduct (and hopefully Kris can talk about his experiences in this regard) doesn't necessarily mean you're doing good as a company or as a government. So I think that term is being slightly co-opted. Linked to that, there is a lot of private sector funding going into ethics conversations and ethics bodies; Facebook, for example, has just given a huge grant to an institute in Germany to set up an ethics research institute. I think companies should be doing that, but I do think that having independent actors on the scene is equally important for a dynamic conversation. There needs to be a body that speaks only on behalf of the public interest.

Kris, do you trust these corporates to give money for independent research?

To some extent, but I think there always has to be oversight.
Of course, we should not let the wolves guard the henhouse, that's for sure. The idea of setting up mutual monitoring mechanisms in society is, I think, integral to a balanced society and to democracy.

So what does the Icelandic Institute for Intelligent Machines do?

IIIM is pretty much like Digital Hub Denmark, with a special focus on the cutting-edge technologies of automation. If you're familiar with DFKI in Germany, who celebrated their 30th anniversary last year, we're a little bit behind Germany on these things. But basically, governments are recognizing now that in order to deal with the rapidly increasing speed and deployment of new technologies, there has to be a more focused and controlled effort in how that's done. It's partly about economic payoff, actually. You have these grants from industry for basic research, and that typically results in papers and maybe a few demonstrations, but that work can sit on a shelf, and unless someone from industry happens upon a paper seven years from now and realizes how it could be used, it's going to disappear into the ether. So it's an effort to strategically link projects and outcomes from academia with the needs of industry, and to bridge that gap, mostly with prototypes and a honing down of the original blue-sky ideas.

What's keeping you awake at night? What is obsessing you?

Well, I got interested in AI when I was twelve. I didn't know the term at the time, though it had already been coined, but I figured that by the time I'm in my mid-20s we'll have robots doing everything, so I might be able to work a little bit on that cool idea, but probably it'll all be solved before I get the chance. And we're still not there, so I'm kind of excited. What keeps me awake is all the unforeseen issues that are cropping up, the negative side effects, because in my naive youth I assumed that of course we're going to have machines do all the stuff we don't want to do ourselves, and then we'd have a choice of what else to spend our time on. There are a lot of those side effects.

Which one specifically worries you most?

I just got back from China, and it's hard not to think about that. Watching the BBC in every hotel we stayed in, whenever the protests in Hong Kong came up, within 30 seconds, typically much less, the screen went blank. It's a very blatant reminder. And it's really weird, because they turn off the screen, so everyone sees they're turning it off when the BBC is reporting from Hong Kong. Surely the Chinese people can go somewhere else to find that information, but I realized that's not really the purpose. The purpose is to show: we can do whatever we want. They're using massive amounts of AI to enable this, but they have also employed, I think, more people than half the population of Iceland to sit there monitoring the internet and television newscasts. That's imminent, blatant abuse. There are of course other, more benign cases, but this is the one that scares me the most.

Massimo, you're thinking about how a commercial entity like PwC can both use artificial intelligence to help its clients and build the constraints into the system to do it in the right way. Give us some examples of how PwC is working with clients using AI, and then maybe we can talk about some of the things you wouldn't do.
Yes. We work with our clients in almost any area, all the way from manufacturing with digital twins, to chatbots and virtual assistants, to any kind of advanced analytics with machine learning and more sophisticated applications, especially in financial services, industrial products, retail and consumer goods; almost any industry. But we decided to develop a program specifically for AI ethics, because on the one hand we are excited about the opportunities of these applications (as you said, it is not much fun driving a truck, although there is a community of people who, from an identity viewpoint, enjoy being truck drivers), but there are things an AI application can do unintentionally. Basically, we have two categories of problems with regard to AI ethics. On the one hand there is the final goal of a specific application, like the case in China, where they want to control their people through AI applications. On the other hand, especially because of the use of machine learning algorithms, you can have AI applications that unintentionally cause, for instance, discrimination, which is one of the main examples. We are very much focused on the latter: we want to find a way to manage risks for companies that are developing AI applications but of course have no malevolent intention in terms of what they want to do with them.

For me there are two things that are really important. On the one hand, you need to define or identify the ethical principles you want to follow in this area. I think that debate is pretty mature, especially at the European level: the EU is very much aware of AI ethics, they appointed an expert group that recently issued a paper with their guidelines on AI ethics, and there are a number of initiatives at the EU level working in this direction. On the other hand, what keeps me awake at night is making the AI principles operational. How can you translate the principles you want to follow about fairness, interpretability and explainability, and human agency into something that is really pragmatic? Basically you have two possible ways. The first one is computational ethics: you can do it with other technologies, and it is a bit weird, because you want to control algorithms with other algorithms, but basically there are machine learning algorithms that can detect discrimination in other algorithms. The other way is through governance: you have to set up the right governance process to be certain that the whole lifecycle of AI development is monitored, right from AI strategy, what you want to do, all the way down to development and maintenance, and to how you buy AI applications from third parties. That last one is a very big problem, of course, because you cannot get access to the code and you have to figure out other mechanisms to control what they are doing. So it is a very complex and multidisciplinary endeavor that is, in my view, particularly important.
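To make that concrete, here is a minimal sketch of the "algorithms checking algorithms" idea: auditing a trained classifier's outputs for group-level discrimination. The toy loan-approval data, the metric choices and the 0.8 disparate-impact threshold (a common rule of thumb) are illustrative assumptions, not a description of PwC's actual tooling.

```python
import numpy as np

def demographic_parity_gap(y_pred, group):
    """Absolute difference in positive-outcome rates between groups 0 and 1."""
    return abs(y_pred[group == 0].mean() - y_pred[group == 1].mean())

def disparate_impact_ratio(y_pred, group):
    """Ratio of positive-outcome rates; below ~0.8 is a common red flag."""
    rate_a, rate_b = y_pred[group == 0].mean(), y_pred[group == 1].mean()
    return min(rate_a, rate_b) / max(rate_a, rate_b)

# Toy audit: loan approvals (1 = approved) for two demographic groups,
# simulated so that group 0 is approved more often than group 1.
rng = np.random.default_rng(0)
group = rng.integers(0, 2, size=1000)   # protected attribute, 0 or 1
y_pred = (rng.random(1000) < np.where(group == 0, 0.6, 0.4)).astype(int)

print(f"demographic parity gap: {demographic_parity_gap(y_pred, group):.2f}")
print(f"disparate impact ratio: {disparate_impact_ratio(y_pred, group):.2f}")
print("flagged for review:", disparate_impact_ratio(y_pred, group) < 0.8)
```

In practice the predictions would come from the model under audit rather than a simulation; the point is that the auditing layer needs only the model's inputs and outputs, not its internals.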
So we're talking about this thing called ethics as if it's something very well understood and everybody agrees what it is. But I think we need to drill down into what we won't do if we're following an ethical code. Give me a couple of examples, Massimo, of things that you won't do under PwC's ethical code.

The number of ethical codes out there is impressive; we went through 100-plus codes of conduct from universities, labs and companies like Google and Amazon. I think the number of principles has now crystallized: there are basically five or six main principles that have been accepted. A few of them: fairness, which is about discrimination; interpretability and explainability, which have to do with the right to the truth, basically the right for any citizen to know how an application is working; human agency is another one; and data privacy is probably the fifth. Around those principles, at least as far as companies are concerned, you can work out your AI ethics strategy and, as a consequence, the implementation plan, which I was referring to when I spoke about computational ethics.

You hear these companies talk about these things that they will and won't do. Is it enough just to declare the framework?

I think it's an excellent starting point. I worry that it focuses too much on what we want the technology to be, rather than what we want our society to be. I completely agree with Massimo that fairness, privacy, agency and so on are excellent structures that we should measure technologies against, but they don't necessarily answer the question of: how is this going to change how we live, and do we want that outcome? A good example might be facial recognition technology. It's quite a buzz technology at the moment; everybody's talking about it, and I think that's because it's a very evocative realization of some of these new technologies. If we can make facial recognition technology fair, so that it doesn't discriminate against people of color and women, which it does at the moment; if we can make sure it's built on datasets which respect data privacy; if we can make it accountable; if we can factor in some of those concerns around human agency; nevertheless, do we want societies in which facial recognition technology is everywhere? How is it going to change our relationship with the state? How is it going to change our relationship with each other? How does it normalize surveillance, and how does the normalization of surveillance hamper creativity, or radical ideas, or progressive movements? There's that broader question of not just what technology we want to see, but what society we want to see. I think it's an impossible question for companies to answer by themselves. The starting point is what we want the technology to look like, but that broader conversation has to be something we all have together.

So if you had in the room the heads of some of our biggest listed companies, what would you ask them to do?

I think I would say: let's not use society as a testbed for technologies when we're not yet sure how they're going to change society. Let's try to think through some of these issues and essentially move slower and fix things, rather than move fast and break things. Let's slow things down a bit before we actually roll out some of this stuff, so that we do understand the societal impact before we forge ahead. Rolling back is a really difficult thing to do. We've seen it around data protection regulation, where technology moved much faster than regulation could: all these companies went ahead and started all these practices, and now we have things like the GDPR trying to claw some of that back, and it's very difficult, actually.
So I suppose what I would urge them to do is to move more slowly. I realize that's kind of anathema to many companies, and in particular to the tech sector, which prizes innovation and speed over thoughtful, slow testing and iteration, but nevertheless, what's at stake is so vast that I think we need to take things a little more slowly.

Yeah, I completely agree. Of course everyone wants to be at the cutting edge, or even the bleeding edge; that goes for everything from universities to companies to governments, and they think artificial intelligence is the next thing. But we're actually in the age of artificial stupidity, in my opinion. The idea of having a machine you can't really trust oversee another machine you can't really trust is just a silly idea, frankly. I'm a technology developer, not an ethics lawyer, but I do like to think big about what kind of society we want to live in; I think democracy is a pretty good idea. I've been focusing a lot in the last decade on machines that can self-reflect, and I think that's the only way to really get to understanding. You used the word "understanding" a lot when you were talking about AI, and it used to be that people put "understanding" in quotation marks when they talked about it in the context of AI; you can look at the literature, I actually did a study of this. When it comes down to it, these machines don't really understand anything, and that's the problem.

One of the issues that keeps hitting the big tech companies is opposition by their staff to working for government, particularly military parts of the government. Yet if we look back at some of the great tech innovations of the last century, it's often been the military that invested in and created things, like GPS and radar, that we all rely on. Kris, is there a problem in itself in companies helping the military with their machine intelligence?

Historically, military funds have had an advantage over the rest of society because of the history of the funding, and the problem is that there's then a bias for researchers to think: if we want to get big funds, we have to go to the military. And that means basically building technologies that are specifically designed to track, intimidate and kill people. I think most people will have, at some level, a problem with that; certainly I have had, for a long time. And this diverts the attention of research and development away from the good applications, which I think are in fact vastly more numerous.

Carly, does the Lovelace Institute have a problem with military research?

To be honest, I think it's a distraction to focus solely on military applications of AI technology. I think it's very problematic, and I think it's good that tech workers are taking a stance on this, but I also think that technology developed in the private sector, even independently of government, can one day be acquired by government, can one day be acquired by the military. So whether or not a company works with the military directly doesn't necessarily stop that tech from later being acquired. And I think government development of tech can be problematic as well. I was just reading James Lovelock's new book, Novacene, and he was talking about working for NASA in the sixties, developing space technologies, and the realization that they were actually being used for the development of nuclear weapons as well.
As a NASA scientist, you have the realization that the tech you're developing for one purpose can later be used for another. So I don't feel there's a clear bright line around whether you're developing military tech or not. All of these technologies can be used across a large range of applications, so you have to try to foresee the consequences in all the different applications, and not necessarily just draw a line saying "we're not working with the military" and think that's sufficient.

There are a lot of people here running businesses, some startups, some bigger companies. Do you think they have a responsibility to publicly declare their ethical policy? Should they have a page on their website with their ethical code of conduct?

Yes, I think so, because it's really important to understand their attitude towards AI ethics. Again, I want to be very pragmatic, and I completely acknowledge what you said about the broader picture; I'm not advocating computational ethics as the only means of dealing with AI ethics. But the private sector especially has to do something, because artificial intelligence is a competitive asset they must use to be even more competitive in the future, and, as I said, there are unintentional consequences that they need to at least monitor. If they want to do that, they need to be aware of the possible risks they are running in terms of ethics. The GDPR, for instance, is helping these companies very much, because it forces them to pay attention to some aspects that are very important, for instance interpretability, the right to the truth that I mentioned earlier. So if they want to be ethically aware, they need to say: you, dear customer, or a government, or any stakeholder of the broader AI community, have the possibility to understand how my artificial intelligence applications work. That doesn't mean getting access to the code; there are a number of things that can be done from a technical perspective, for instance adversarial attacks that can be used to explain how an algorithm works while protecting the IP, the intellectual property.
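As an illustration of that kind of black-box probing, here is a hedged sketch: treat a third-party model as an opaque function you can query but not open, nudge one input feature at a time, and measure how much the output moves. The stand-in model, the feature names and the step size are all assumptions for the example, not any vendor's actual method.

```python
import numpy as np

def black_box_model(X):
    """Stand-in for a third-party API we can query but not inspect."""
    return 1 / (1 + np.exp(-(2.0 * X[:, 0] - 0.5 * X[:, 1] + 0.1 * X[:, 2])))

def feature_sensitivity(model, X, eps=0.1):
    """Mean absolute change in output when each feature is nudged by eps."""
    base = model(X)
    scores = []
    for j in range(X.shape[1]):
        X_pert = X.copy()
        X_pert[:, j] += eps          # perturb one feature, hold others fixed
        scores.append(np.abs(model(X_pert) - base).mean())
    return scores

# Probe with a sample of (hypothetical) customer records and feature names.
X = np.random.default_rng(1).normal(size=(500, 3))
for name, s in zip(["income", "age", "tenure"],
                   feature_sensitivity(black_box_model, X)):
    print(f"{name}: sensitivity {s:.3f}")
```

A report like this can show a customer which inputs dominate a decision without ever revealing the model's internals, which is the balance between explainability and IP protection being described here.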
On the other hand, again, especially with regard to bias, fairness and human agency, a contained code of conduct, by which I mean very focused and very specific, would in my view help a lot.

Yeah, I agree, I think it's important. And on my considerations of the military: in 2015, IIIM put out an ethics policy that was strictly focused on the military; in fact, I think we should extend it to cover, more broadly, any abuse or violation of human rights. That was in 2015, and I believe we were the first AI research lab to have an explicit policy. I urged the folks at DFKI to do the same, and they convinced me that they're pretty much following the same rules, they just don't say it explicitly; I'm hoping that they will, sooner rather than later, and more labs too. And there was, I think it was Clearpath Robotics in Canada, around the same time, I think also in 2015, the first robotics company to say explicitly that they would not work on weapons.

I'm going to look a little bit into the future, because this area is moving very quickly, and whatever we're talking about now is going to be irrelevant by the time we get off the stage. Where do you all see the next level of concern, the threats of some of these emerging technologies? What's going to follow the concern about face recognition, maybe in a five-year timeframe?

If I think slightly beyond that, there are two things I think about; these are slightly longer term. One is the environmental impact of AI technologies. There's a lot of focus at the moment on how AI is going to solve climate change, and not very much conversation about how it's going to contribute to climate change. There have been recent studies which show that training a single neural network can have a much larger carbon footprint than a single car does over the course of its life. It's very difficult to compare carbon outputs, because the AI ecosystem and supply chain are so large and varied, but if we think about a world in which machines are paramount and everywhere, how are we going to make sure that that's also a sustainable future? I think it's worth that conversation happening now.
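For scale, here is a back-of-envelope sketch of how such footprint estimates are typically built: the energy drawn by a training run, inflated by datacentre overhead (PUE) and multiplied by the grid's carbon intensity. Every number below is an illustrative assumption, not a figure from the studies mentioned above.

```python
def training_co2_kg(num_gpus, gpu_watts, hours, pue=1.5, kg_co2_per_kwh=0.4):
    """Rough CO2 estimate for a single training run.

    pue: datacentre power usage effectiveness (overhead multiplier).
    kg_co2_per_kwh: carbon intensity of the local electricity grid.
    """
    energy_kwh = num_gpus * gpu_watts / 1000 * hours * pue
    return energy_kwh * kg_co2_per_kwh

# e.g. a hypothetical run on 64 GPUs at 300 W each for two weeks:
print(f"{training_co2_kg(64, 300, 24 * 14):,.0f} kg CO2")   # ~3,871 kg
```

Even this crude arithmetic makes the point: repeat such runs across hyperparameter searches and across an industry, and the totals stop being negligible.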
The other thing I'm thinking about longer term is taxation and the distribution of the financial benefits of AI. If, as in the truck driver example you gave, we do get to a point where a large percentage of labor is replaced by machines, how do we distribute the financial benefits of that? Is it through taxation? Is it through a universal basic income? Are there other economic mechanisms we can look to? Those are the longer-term things about societal impact.

I guess there's a small number of companies acquiring most of the talent; there's a bidding war for the few thousand people who are specialists in the field, and salaries are going up. I guess there is a risk that if you're not in one of those big companies, you're going to be commercially disadvantaged.

Yeah, I think that's right. And I think that question around AI talent is a sustainability question in some respects as well, because, as you alluded to, big tech companies are acquiring most of the AI talent, and that's impeding those same people from going into potentially public-interest tech development, or into government AI expertise. So how sustainable the pipeline is, I suppose, is another question we need to think about longer term.

Kris, what are we going to be worrying about?

Well, Norbert Wiener wrote a remarkable book in 1950 called The Human Use of Human Beings, where he foresaw much of what's happening. The trend towards an Orwellian future, I think, is something that all democracies will need to look into seriously. And I would throw in a pet peeve of mine that is not very much talked about, which is the size of companies. There's a history of talking about the size of government and limiting its reach into people's lives, and I really do think we need to have that same conversation about companies.

Massimo, put on your futurist hat. What's going to be the next debate?

For me, what is really important to start discussing now is artificial general intelligence. I'm not saying that we'll get there, because of course it is impossible to say now, but there is a pretty significant part of the scientific community that is almost certain we'll get to that point, and this would represent a completely different game. They talk about existential risk; that is a completely different thing. It is not about AI ethics anymore; it is about a world with a different type of intelligence that is not necessarily similar to ours, and the possible impact in terms of social interactions, threats to democracy, but also in terms of our own sense of meaning, is unbelievably important. Starting to discuss that now can give us a couple of advantages. First of all, we can get a better view of the current risks of artificial intelligence, because I am certain that even if we don't reach that point, even if we don't get to artificial general intelligence, there could be an asymptotic curve towards it where the possible impacts are almost the same. So starting now is very important. And then, of course, if it happens, we will be prepared.

It's not that there isn't research money going into these discussions; Elon Musk and Jaan Tallinn are funding organizations like the Future of Life Institute and the Centre for the Study of Existential Risk. What's so bad about artificial general intelligence? Why should we be worried?

I have to disagree slightly, Massimo; I find some of that conversation, again, a diversion from the real risks we face now. It's really hard to know, because first of all the expertise on AGI is concentrated in so few minds that, depending on who you speak to, it's either happening in five years or it's never happening. My experience, listening to podcasts on this, is that one day I'm convinced it's happening in five years and we're all going to be afraid, so I find it very difficult to get to the bottom of the truth, and my default is to focus on the current, tangible risks of the next few years. In terms of why it would be so bad: I do agree with Massimo that it would be a fundamental shift in how we conceive of ourselves as humans and what it means to relate to other forms of intelligence. I'm not sure how much preparation we can do for that now.
In terms of tangible steps, I do think that diversifying the workforce and the expertise working on artificial general intelligence may prime it for a better outcome. That is, if we only have people who talk and look like Elon Musk and others in Silicon Valley thinking about and developing this tech, then we're sure to create an AGI that is more problematic than if we have a much more diverse workforce. If we ensure diversity of gender and race and background and lived experience, not people who live in a Silicon Valley bubble, then I think we're more likely to end up with an AGI that is more consistent with broader societal values.

Kris, you must have some thoughts on that.

I've devoted my professional career to thinking about AGI. It used to just be called AI: for the first two-thirds of my professional work, AI was AGI, and it took me a while to realize that what everyone was doing and calling AI wasn't actually AGI. There are a lot of silly ideas floating around about AGI that are based on our current AI, and they aren't going to be around when AGI comes around; and it is going to come. What is the risk? I think the best way to start that discussion, which I wholeheartedly support, is to think of an analogy to society, because we already have AGIs: the gold standard for AGI is the human mind. So in some respects it's going to have strong similarities to that, and there are going to be risks associated with AGI similar to the risks associated with people. We have systems like prisons and the judiciary; a lot of those mechanisms are not going to work for machines at that level, but a lot of the issues are going to be the same. Also, don't forget that machines will always need energy, so at that inflection point, just as you can track nuclear power and nuclear weapons efforts, there will be methods for tracking the potential risks associated with a state doing something with AGIs. There will have to be regulations and new rules specifically for that, but it will bear a resemblance to what we recognize, because we already know it from current society.

We have one final quick question. With a very curious, intelligent, well-connected group of people in the room, we can start quite a useful conversation in the breaks today. So I'd like each of you, from an ethical framework, from a philosophical framework, to suggest an idea to leave the room with: something that people should start thinking about now that relates to the power and the potential of these machines. We've talked about what it's going to mean for jobs, for equity in society, for the role of government. Is there a final thought that people here can reflect on today?

My challenge, I suppose in particular to startups, would be to think about how you could bring real people in to understand and learn about your technology, and to test some of the values that are inherent in that tech. I'm really interested in the potential for different forms of public engagement and public consultation. It's more relevant, I think, for government deployment of tech, but I do think it could potentially be used as a methodology for the private sector as well. Part of it is essentially focus-group testing.
But can you talk to the public about your technology? Can you understand how they feel about various elements of it and what it's going to do to their lives, and can you feed that back into your own ethics policies? That would be one challenge.

Briefly, Massimo?

I completely agree with you; it's probably the most important thing. They need to engage with their community of users and stakeholders, because if they develop their applications in a sort of ivory tower, they miss the opportunity to test what is really important for their users. And my final point is about the contextualization process they need to put in place: ethics is not rocket science; it is a cultural thing, and they need to understand what is relevant from an ethical viewpoint in the community where they live or the community where they do business. That is, for me, the most important thing.

You have 30 seconds, Kris, to send us away with some reflectiveness.

My final thought, and I've now used six of my seconds: we really need to think broadly, on all levels, about how we can make this technology benefit everyone, and not have it end up concentrated in the hands of a very few.

Thank you, team, and thank you to everybody here for joining us in this philosophical communion. Massimo, Kris, Carly, thank you. [Applause]