< Back to News
Artificial Intelligence: who is responsible?
June 4, 2018
Artificial intelligence has become a vital part of our daily lives and promises to play a more important role in our society. However, the question of responsibility remains, raising huge challenges. The Partech Shaker hosted Arnaud Touati, lawyer at ALTO Avocats, who talked about these key issues.
THE BASICS OF ARTIFICIAL INTELLIGENCE
Asimov's Laws of Robotics: the fundamentals of AI
- A robot may not injure a human being or, through inaction, allow a human being to come to harm; it must help if needed.
- A robot must obey the orders given to it by human beings, unless these orders conflict with the first law.
- A robot must protect its own existence, as long as doing so does not violate the first two laws.
These rules date from the 1950s but are still relevant today. Google, for example, has written Asimov's three laws into its source code: these laws must be respected by anyone programming algorithms.
Three difficulties related to the development of AI:
- Disqualification: some companies won't work on AI because they consider it too complex or unrealistic.
- The difficulty of reproducing the brain: AI development is inspired by how a human brain learns and develops. However, we are still unable to reproduce a human brain, because we understand only an infinitesimal part of it.
- The question of learning data: we draw heavily on the learning ability of the brain, but to improve AI, we need data. Europe is a major exporter of data, especially to the GAFAM. We struggle to build real AI champions because we lack data, while American and Chinese companies are collecting it all.
AI has to learn to understand its spoken, written and visual environment
- Spoken environment: Siri, for example, works well.
- Written environment: progress is considerable, as demonstrated by "Google Allo". Natural language is the basis of AI in written language: AI is capable of interpreting everyday words. All the research carried out by Google, Apple, Amazon, Microsoft and others aims precisely to develop this ability to understand things, and even to anticipate them.
- Visual environment: with Google Images, for example, some algorithms are so powerful that they can recognize you at a distance of 100 meters, a distance at which even your family wouldn't recognize you. Google's facial recognition algorithm is truly impressive.
A VAGUE LEGAL FRAMEWORK
The development of AI is governed by a legal framework that is vague, or even non-existent, in France. Some claim that a robot has a legal personality, with its own rights and obligations. I don't.
The responsibility of an algorithm is not the same as that of a humanoid. In terms of computing power, amazing progress has been made: the power of our cell phones, for example, exceeds that of the supercomputers used to send Armstrong to the moon. The sector is also consolidating, through an increasing number of acquisitions, and evolving rapidly as we become more aware of its immense potential. After chess, Go and poker, the general public now realizes that the power of algorithms can exceed that of humans. It is plain to see that AI will affect all of us.
I believe that we are at the beginning of a new era for AI, for several main reasons:
- The considerable increase in computing power, in computers, and particularly in mobile phones, since this is where data will be exported from;
- The progress of learning techniques, driven by "deep learning", "machine learning" and, more recently, "reinforcement learning";
- The phenomenal development of neuroscience enabling us to progress in these learning techniques. The more we understand how the brain works, the more we can improve AI.
How is AI being implemented today?
- conversational agents, such as Siri, Google Now and Cortana on Windows;
- use of AI in crime prevention (USA): increased availability of statistics that can be used to predict potential crimes;
- scientific sector: in some medical tasks, robots can do a far better job than doctors;
- autonomous cars: companies such as Uber and Tesla, ahead of their time on the subject, are driving important developments; in 5 to 10 years, semi-autonomous cars will be operating in France;
- financial sector, illustrated by the gradual disappearance of traders, replaced by algorithms.
The dangers of AI
Apple's Wozniak, Microsoft's Bill Gates... these famous figures from the IT industry are warning us about the dangers of AI. But the contradiction is that their companies are among those investing the most in it.
The future of AI
I recently met a company that is working on an AI system that photographs sensitive or at-risk areas, to anticipate a famine or poverty situation, for example. The idea is to anticipate a problem in certain geographical areas in order to intervene upstream. They have already made great progress on the project. This is a concrete example of a very positive use of AI.
I'm going to talk about another use of AI, which rather scared me. I met an IBM employee who is working on personal assistant software, such as Google Home or Amazon Echo. These assistants will map your outfits and are connected to your calendar and the weather. They know your tastes, those of your wife, your favorite restaurant, your workplace... In practice, they will do everything for you. This is not my dream, but my worst nightmare. We will lose our free will and our spontaneity. Today, our movements can be tracked through our mobile phones and our computers, but not yet in our own homes!
THE QUESTION OF RESPONSIBILITY
How does this work in terms of liability: should a legal personality be granted to a robot or not? I’m against this, but let’s discuss it.
Under French law, there are three types of liability that could potentially apply to algorithms or robots.
- extra-contractual civil liability (article 1240 of the Civil Code): "any act whatsoever of a person which causes damage to another"; this article is automatically ruled out, because we are not speaking about a person, but about an algorithm or a humanoid;
- liability for things (article 1242 of the Civil Code): "we are responsible not only for the damage caused by our own acts but also for the damage caused by things in our custody". This assumes you have control and direction over the thing. However, the very principle of robot autonomy implies that technically you have the direction, but not the control. If we start from the premise that these robots are partially autonomous, it means that you do not have full control. That is what happened in the Uber accident. It is difficult to claim human liability when everything is being done specifically to remove human action from this role and let the robot potentially act alone.
- liability for defective products: to me, this liability does not apply, because if an AI product causes damage, it is not necessarily because the product is defective; it could be because the event had not been programmed or imagined.
The notion of parental responsibility could be adapted, but does one really have authority over a robot, as over a child? The most reasonable solution could be shared liability between the human who owns and uses the robot and the designer/builder of the robot, who is not necessarily the algorithm designer. Where the robot manufacturer and the algorithm developer are separate entities, liability should be shared between them, to determine who is really liable: the person, the algorithm designer, or the robot manufacturer? It then takes competent forensic experts to determine the outcome, which is complex.
The question of the legal personality of robots is also fundamental: Will we allow a robot to have rights and obligations, to have its own legal personality? I think that this is unsuitable for many reasons.
If you give legal personality to a robot, it means that you give it rights and obligations. Technically, this means you give it some form of autonomy, so why should it not also claim some rights? Robots are not fully autonomous today, but their level of autonomy could progress. Some colleagues say that we must respect the dignity of the robot. Why? I personally believe that we should respect animal dignity before robot dignity.
The link between AI and employment: I believe that AI will profoundly affect employment, with humans losing their jobs to robots which would have rights that some humans don't have! I am fiercely opposed to granting robots legal personality.
Estonia is the only country really working on legal personality for robots. Research is underway in the European Union, but many people are opposed to it. Recently, at a large congress, 200 robotics experts, researchers and entrepreneurs declared themselves fiercely opposed to robots being granted legal personality.
UBER CASE ANALYSIS
The first fatal accident involving an autonomous car happened on March 18, 2018 in the United States: an Uber car hit a pedestrian and killed her. How would a similar case have been treated in France? Who is liable?
First possibility: the driver of the car is liable, since he should have kept some control over the vehicle. But if the car drives autonomously, the aim is not for the human to maintain control; the whole point of AI and the driverless car is precisely that the human relinquishes it. In this context, we could not, in France, move towards the liability of the driver.
Second possibility: Uber's liability. It was initially said that the sensors were the problem (it is the power of these sensors that makes it possible to avoid a collision). This was not the case: the pedestrian was classified as a "false positive", meaning that she was treated as an object the car did not need to avoid. In fact, the pedestrian was under the influence of drugs and crossed the road in the middle of the night, far from any pedestrian crossing.
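To make the "false positive" mechanism concrete, here is a minimal, purely hypothetical sketch; the names (`Detection`, `should_brake`) and the threshold value are assumptions for illustration, not Uber's actual system. It shows how a perception pipeline that dismisses low-confidence detections as noise can end up never braking for a real pedestrian:

```python
# Hypothetical sketch of false-positive filtering in obstacle detection.
# Not Uber's real system: names and threshold are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Detection:
    label: str         # what the classifier thinks the object is
    confidence: float  # classifier confidence, between 0.0 and 1.0

# Assumed cut-off: detections below it are treated as sensor noise.
FALSE_POSITIVE_THRESHOLD = 0.6

def should_brake(detection: Detection) -> bool:
    """Brake only for detections the system trusts as real obstacles."""
    return detection.confidence >= FALSE_POSITIVE_THRESHOLD

# A pedestrian detected with low confidence is dismissed as a
# "false positive", so the planner never triggers braking.
pedestrian = Detection(label="pedestrian", confidence=0.4)
print(should_brake(pedestrian))  # prints False
```

The legal difficulty described above follows directly from this design choice: the threshold is a trade-off set by the engineers, not a decision made by any human at the moment of the accident.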
In the end, everything was settled with a large check. It would have been interesting to see the case go before a US court. I wonder what would have happened in France. Presumably, there would have been a division of liability. In France, however, it is extremely rare for the victim's own conduct to limit or exempt the liability of the person who causes an accident. So Uber could have been deemed liable. At the same time, the car manufacturer is the designer of the algorithm. Could it have been considered that the driver should have regained control? We don't know what a French court would have decided.
Would you like to attend Partech Shaker's next events? Register on the Meetup group and never miss an event!