УДК: 394.1 ГРНТИ: 10.23.21 DOI: 10.32415/jscientia.2019.08.01
LEGAL CAPACITY OF ARTIFICIAL INTELLIGENCE
M. D. Shapsugova
The Institute of State and Law of The Russian Academy of Sciences 10 Znamenka St., 119019 Moscow, Russia
Shapsugova Marietta - shapsugova@gmail.com
Digitalization of the economy makes research in the field of artificial intelligence relevant. The introduction of robots into all spheres of human life gives rise to problems of responsibility for the actions of artificial intelligence, for example, in the event of an accident involving an unmanned taxi. No less relevant is the problem of intellectual property in the results of intellectual activity. Who should be considered the author of a work created by artificial intelligence: the robot itself or a human? These practical problems entail the need for a scientific and theoretical understanding of the personality of the robot. The article explores the basic approaches to understanding artificial intelligence and its types, and shows how the degree of autonomy of artificial intelligence can determine its legal personality.
Keywords: artificial intelligence, legal personality, robots, entrepreneurial law, legal capacity.
ПРАВОСУБЪЕКТНОСТЬ ИСКУССТВЕННОГО ИНТЕЛЛЕКТА М. Д. Шапсугова
Институт государства и права РАН Россия, 119019 г. Москва, ул. Знаменка, 10
Шапсугова Мариетта Дамировна - shapsugova@gmail.com
Цифровизация экономики делает актуальными исследования в области искусственного интеллекта. Внедрение роботов во все сферы человеческой жизни порождает проблемы ответственности за действия искусственного интеллекта, например, в случае аварии с участием беспилотного такси. Не менее актуальна проблема интеллектуальной собственности, создаваемой в результате интеллектуальной деятельности. Кого следует считать автором произведения, созданного искусственным интеллектом: самого робота, человека? Эти практические проблемы влекут за собой необходимость научного и теоретического понимания личности робота. В статье рассматриваются основные подходы к пониманию искусственного интеллекта, его виды, определяется как степень автономности искусственного интеллекта, может предопределять его правосубъектность.
Ключевые слова: искусственный интеллект, правосубъектность, роботы, предпринимательское право, правоспособность.
Introduction. Artificial intelligence is associated with the idea of a "singularity" of events, which implies the indistinguishability of the natural and the artificial. However, the movement of the "natural" toward the "artificial" is itself associated with reaching the "boundary" of the spread of the artificial. As civilization develops, the degree of mutual influence and interaction between natural and artificial reality increases. At the same time, although the artificial has today substantially supplanted the subjective world of a person and his subconscious, the spiritual and intellectual on the whole seek to enter into a relationship of harmony with everything artificial, including artificial intelligence [1].
The introduction of the concept of the "electronic person" into scientific circulation is primarily due to the specifics of a fundamentally new subject of law. This concept is intended to reflect its essence and legal specificity. One can rely here on the conceptual series "electronic person" - "artificial intelligence" ("electronic individual") - "robot". Moreover, artificial intelligence, whose carriers are robots that meet certain criteria, must be considered the basic component of the electronic person. Therefore, first of all, it is advisable to turn to such a category as "artificial intelligence" [2].
Presidential Decree No. 203 of May 9, 2017 approved the Strategy for the Development of the Information Society in the Russian Federation for 2017-2030 (hereinafter, the Strategy), which introduced the concepts of the digital economy and the digital ecosystem. To implement the Strategy, Government Order No. 1632-r of July 28, 2017 approved the program "Digital Economy of the Russian Federation."
Digitalization of the economy makes research on this subject relevant. Artificial intelligence has spread to almost all spheres of human life: production, trade, medicine, transport. Increasingly, the question of the legal personality of electronic persons is being raised.
Recently, the world was introduced to a citizen of Saudi Arabia, the robot Sophia. This event has prompted serious reflection on the robot as a subject of law and on the features of its legal status.
An electronic person is understood in scholarship as a carrier of artificial intelligence (a machine, robot, or program) that has a humanlike mind and the ability to make decisions that are conscious and not based on the algorithm laid down by the creator of such a machine (robot), and that is therefore endowed with certain rights and obligations [3, p. 359].
Discussion. In the United States, a bill on the future of artificial intelligence (FUTURE of Artificial Intelligence Act of 2017, 115th Congress) is under consideration [7].
In this bill, artificial intelligence is understood to mean:
(A) Any artificial systems that perform tasks in changing and unpredictable circumstances, without significant human control, or which can be trained based on their experience and improve their performance. Such systems may be developed in computer software, physical equipment, or in other contexts that are not yet considered. They can solve problems requiring human perception, cognition, planning, training, communication, or physical action. In general, the more humanlike the system in the context of its tasks, the more it can be said that it uses artificial intelligence.
(B) Systems that think like humans, such as cognitive architectures and neural networks.
(C) Systems that act like humans, such as systems that can pass the Turing test or another comparable test through natural language processing, knowledge representation, automatic reasoning, and learning.
(D) A set of techniques, including machine learning, that seek to approximate some cognitive task.
(E) Systems that act rationally, such as intelligent software agents and embodied robots that achieve goals through perception, planning, reasoning, learning, communication, decision making, and action.
The bill also introduces the concepts of general and narrow artificial intelligence. General artificial intelligence refers to a notional future artificial intelligence system that exhibits apparently intelligent behavior at least as advanced as a person's across the range of cognitive, emotional, and social behaviors. The term "narrow artificial intelligence" means an artificial intelligence system designed for specific applications, such as strategy games, language translation, self-driving vehicles, and image recognition.
We believe that the legal personality of an electronic person must correspond to the level of its autonomy.
A.A. Zhdanov identifies two types of artificial intelligence: autonomous and subordinate. Autonomous artificial intelligence is characterized by adaptivity, an emotional apparatus, freedom of decision-making, and subordination only to itself. In contrast, subordinate artificial intelligence solves intellectual tasks that were previously solved only by humans. Such systems automate certain intelligent functions and solve situational problems; they are subordinate to humans and their target functions [5].
The scientific literature discusses whether a smart robot should be vested with rights and raises ethical issues in the use of robots. Analogies are drawn with the slave-owning system, in which the slave was likewise not considered a person.
The concept of the legal personality of artificial intelligence has been substantiated in several ways: as the legal personality of an individual in a truncated form; as a fiction similar to that of a legal entity; and by analogy with the legal status of an animal (to which the Civil Code applies the rules on property).
It is also proposed to consider the robot as a particular type of property.
P.M. Morhat identifies the following approaches to resolving the question of the legal personality of an artificial intelligence unit [4, pp. 300-301]:
- the concept of the individual subject of law as applied to an electronic person (the legal personality of an artificial intelligence unit correlated with, or comparable to, the legal personality of a natural person);
- the concept of the collective subject of law as applied to an electronic person (the legal personality of an artificial intelligence unit correlated with the legal personality of a legal entity);
- the concept of a special limited legal personality of electronic persons in the context of agency relations.
We believe that phenomenology is the approach most applicable to substantiating the legal personality of artificial intelligence. Artificial intelligence is a new, independent phenomenon in legal science, so it makes no sense to justify its legal personality on the basis of earlier legal constructions. We regard the electronic person as a specific quasi-subject of law endowed with individual elements of legal personality. The scope of the legal personality of such a person depends on the degree of autonomy of the artificial intelligence.
Thus, the Resolution of the European Parliament of February 16, 2017 on Civil Law Rules on Robotics pays special attention to so-called "smart" robots, to which the document attributes the following features:
- autonomy achieved through sensors or by exchanging data with the environment (interconnectivity), and the ability to analyze those data;
- the ability to self-learn;
- physical support (a physical body);
- the adaptation of behavior and actions to the environment;
- the absence of biological life.
In addition, various ethical questions arise: for example, should counterparties know that they are interacting with a robot rather than with a person?
The issue of information disclosure thus appears in a new light: it has been proposed to oblige companies to disclose the number of robots they use and the fact of interaction with a robot.
The European Parliament Resolution of February 16, 2017, 2015/2103(INL) P8_TA-PROV(2017)0051, recommends that, when assessing the consequences of a legislative proposal, the EU Commission take into account and analyze all the possible effects of any legal decisions in this area, including:
a) creating a mandatory insurance system, where possible and necessary, for specific categories of robots. Just as in the case of motor vehicle liability insurance, manufacturers and owners of robots would have to insure against the risks of potential harm caused by robots;
b) ensuring the real use of the funds of a compensation fund, rather than merely a formal guarantee of the payment of compensation. Money from this fund would be used to pay compensation in cases where insurance does not cover the damage caused by the robot;
c) partially exempting the manufacturer, developer, owner, or user of the robot from liability, provided that they contribute to the compensation fund or jointly insure their liability to guarantee compensation for damage caused by the robot;
d) deciding whether to create a common compensation fund for all smart autonomous robots or, conversely, separate funds for each category of robots, and whether a one-time contribution should be made when a robot is put into civil circulation or contributions should be made throughout the robot's life;
e) ensuring that the relationship between a robot and the compensation fund can be traced by assigning each robot an individual registration number entered in a dedicated EU register. Using this number, anyone interacting with the robot would be able to obtain more information about the fund from which compensation would be paid, about cases of limitation of liability for damage to property, about the persons forming the fund and their functions, and about any other necessary details;
f) granting robots a special legal status in the future, so that at least the most advanced autonomous robots could be established as electronic persons liable for the damage they cause in cases where they make decisions autonomously or otherwise interact with third parties independently.
In the international dimension, it is noted that the private international law rules on road traffic accidents currently in force in the EU do not require immediate drastic changes in connection with the development of autonomous vehicles. At the same time, simplifying the existing dual system for determining the applicable law (based on Regulation (EC) No 864/2007 of the European Parliament and of the Council and the Hague Convention of May 4, 1971 on the Law Applicable to Traffic Accidents) would contribute to legal certainty and limit the options for choosing the most favorable jurisdiction.
In this connection, it would be expedient to introduce amendments to such international agreements as the Vienna Convention on Road Traffic of November 8, 1968 and the Hague Convention on the Law Applicable to Traffic Accidents. This would make unmanned driving possible. The European Commission, EU member states, and industry representatives need to implement the provisions of the Amsterdam Declaration as soon as possible.
It is also argued that a generally accepted definition of smart autonomous robots needs to be adopted in the EU, with subcategories defined where necessary, taking into account the following characteristics:
- the ability to become autonomous through sensors and/or by exchanging data with the environment (interconnectivity) and analyzing those data;
- the ability to learn from experience and through interaction;
- the presence of physical support for the robot;
- the ability to adapt its actions and behavior to environmental conditions.
Responsibility problem. The next important issue is liability. In the case of a smart robot endowed with autonomy, the question arises of liability for the damage it causes.
Under current law, electronic persons do not have legal personality and therefore cannot themselves be held liable; responsibility for their actions is borne by third parties (the owner, user, operator, or manufacturer).
A subject with legal capacity is distinguished by a will of its own. The higher the autonomy of a robot, the more volitional it is. If the robot has no will of its own (subordinate artificial intelligence), that will is supplied by a human.
In addition, the intellectual component of behavior, the ability to be aware of one's actions and their consequences, is very important.
The ability to bear responsibility is closely related to the ability to recognize guilt.
Although the issue of responsibility for the actions of a robot has not been resolved, owing to the absence of special legal regulation, many companies that use or intend to use artificial intelligence in their activities have opted for a liability insurance model.
Thus, Yandex, which is launching unmanned taxis, has chosen precisely such a compensation model.
Given the potential danger posed by the use of artificial intelligence, and in order to determine delictual capacity and control the use of artificial intelligence, electronic persons must be accounted for. In this regard, it is proposed to register smart robots and to maintain registers of robots and their owners.
According to P.M. Morhat, it is justified to distinguish the following basic models for determining responsibility for actions of an artificial intelligence unit that have entailed harmful consequences (including in the field of intellectual property law):
- the model of the instrument of the actual perpetrator, in which the artificial intelligence unit is presumed to be a fundamentally innocent agent, an instrument of the actual perpetrator of the offense;
- the model of natural probable consequences, within which it is presumed that the artificial intelligence unit performs actions that are a natural, logical, and probable consequence (derivative) of its production or programming, and the person who created and/or programmed the machine is presumed to have shown criminal negligence;
- the model of direct responsibility of the artificial intelligence unit for its own actions (or inaction);
- the model of quasi-substitutive responsibility (liability for the negligence of others) of the owner and/or operator of the artificial intelligence unit for the failure to properly interpret the intentions and actions of that unit and to prevent those actions [4].
We believe that the positive experience of the EU countries should be taken into account. The European Parliament resolution develops recommendations for civil legislation on liability for artificial intelligence. Whatever legal solution is envisaged for the civil liability of robots and AI, the regulatory acts developed in the future should in no case: first, limit the types and extent of damage that can be compensated (at least where the harm is not confined to property alone); or, second, limit the forms of compensation available to the injured party solely on the grounds that the damage was not caused by a human.
Future regulatory acts should be developed on the basis of an in-depth analysis by the Commission establishing which approach to liability should apply: the principle of strict liability or the risk management approach.
It is necessary to develop a system of compulsory insurance under which the manufacturer may be required to insure the autonomous robots it produces. In addition to the insurance system, a reserve compensation fund should be created, from which damage not covered by insurance would be compensated.
Any decision regarding the problem of responsibility for robotics and artificial intelligence should be made on the basis of global research and existing developments in robotics and neuroscience. At the same time, scientists and experts should be able to evaluate in advance all the risks and consequences of such a decision [3].
In the existing system of law, there is no place for the legal personality of artificial intelligence. To demonstrate this, British scientists and lawyers filed applications with the patent offices of three countries on behalf of an artificial intelligence [6].
Difficulties arise already at the first stage, since artificial intelligence cannot be indicated as the applicant or the inventor.
The main relevant concepts for resolving the question of who holds the rights to the results of intellectual activity produced with the actual or legally significant participation of an artificial intelligence unit, or produced completely autonomously by the artificial intelligence unit itself, treat the unit:
- as the full author of the results of intellectual activity it creates;
- as a co-author of a human in creating the results of intellectual activity;
- as an employee creating the results of intellectual activity, which are presumed and positioned as works made for hire;
- as a tool with which works are created;
- a further approach denies its authorship altogether [4, p. 33].
REFERENCES
1. Pushkarev AV. The philosophical foundations of artificial intelligence. Diss. Ufa, 2017. (in Russ.). [Пушкарев А.В. Философские основания искусственного интеллекта. Дисс. ... канд.филос.наук. Уфа, 2017].
2. Yastrebov OA. The Legal Capacity of Electronic Person: Theoretical and Methodological Approaches. Proceedings of the Institute of State and Law of the RAS. 2018;13(2):36-55. (in Russ.). [Ястребов О.А. Правосубъектность электронного лица: теоретико-методологические подходы // Труды Института государства и права РАН. 2018. Т. 13. № 2. С. 36-55].
3. Uzhov FV. Legal personality of Artificial Intelligence. Gaps in Russian legislation. 2017;(3):357-360. (in Russ.). [Ужов Ф.В. Искусственный интеллект как субъект права // Пробелы в российском законодательстве. 2017. № 3. С. 357-360].
4. Morhat PM. The legal personality of artificial intelligence in the field of intellectual property law: civil law problems. Diss. Moscow, 2018. (in Russ.). [Морхат П.М. Правосубъектность искусственного интеллекта в сфере права интеллектуальной собственности: гражданско-правовые проблемы. Дисс. ... докт.юрид.наук. М., 2018].
5. Autonomous Artificial Intelligence / PostNauka. URL: https://postnauka.ru/books/38231. (in Russ.). [Автономный искусственный интеллект / ПостНаука].
6. Kelion L. AI system 'should be recognised as inventor' / BBC News. URL: https://www.bbc.com/news/technology-49191645.
7. H.R.4625 - FUTURE of Artificial Intelligence Act of 2017 / The official website of the US Congress. URL: https://www.congress.gov/bill/115th-congress/house-bill/4625/text.
Received 16.07.2019