Background

Back in 2017, in its resolution on civil law rules governing highly advanced autonomous robots, the European Parliament called on the European Commission to consider a proposal on granting e-personality. The proposal drew mixed reactions; among the critics was a group of scientists who addressed the Commission in an open letter in April 2018 and condemned such a move, characterising the scenario as utopian and conflicting with the canons of logic.

In October 2020 the European Parliament issued three Resolutions on the ethical and legal aspects of Artificial Intelligence software systems (“AI”): Resolution 2020/2012(INL) on a Framework of Ethical Aspects of Artificial Intelligence, Robotics and related Technologies (the “AI Ethical Aspects Resolution”), Resolution 2020/2014(INL) on a Civil Liability Regime for Artificial Intelligence (the “Civil Liability Resolution”), and Resolution 2020/2015(INI) on Intellectual Property Rights for the development of Artificial Intelligence Technologies (the “IPR for AI Resolution”).

The Resolutions undoubtedly underline the contribution of Artificial Intelligence in domains such as health, transport and investment, among others. Nonetheless, “there are concerns that the current Union legal framework, including the consumer law and employment and social acquis, data protection legislation, product safety and market surveillance legislation, as well as antidiscrimination legislation may no longer be fit for purpose to effectively tackle the risks created by artificial intelligence, robotics and related technologies.” The Resolutions reflect a highly sceptical approach: any moral and legislative questions on AI ought to be shaped and examined within a regulatory environment that ensures legal certainty for businesses, narrows down legislative uncertainty and legal gaps, and safeguards people’s rights as stipulated in the Treaties and the Charter of Fundamental Rights. The Parliament provides specific legislative and law-shaping proposals in each Resolution accordingly.

The Parliament is currently occupied with other interrelated subjects, such as the use of AI in education, culture and the audiovisual industry. At the same time, the European Commission is expected to publish the first regulation on artificial intelligence.

The e-personality, as introduced in the EU Parliament proposals, raises the question of legal personality and whether it could legitimately be granted to AI. This is further interconnected with the question of intelligent agency, which has been discussed from different perspectives in various jurisdictions across the EU.

In Germany, for example, a number of scholars are concerned that intelligent agents can become quite unpredictable; the so-called ‘autonomy risk’ (Autonomierisiko). According to these scholars, the autonomy risk can potentially create ‘responsibility gaps’ (Verantwortungslücken), which in practice necessitates the establishment of legal frameworks specifically for intelligent agents; these scholars unanimously agree that legal personality should be awarded to AI.

Tort law in the German legal order relies on fault rather than strict liability. As a fundamental legal principle, this entails that the plaintiff will be compensated upon successfully proving that their legally protected interests (Rechtsgüter) were undermined by an unlawful act of the defendant, who acted intentionally or negligently. Here it is useful to emphasise a fundamental difference from English law: when a wrongful act takes place, the person in charge is not immediately liable for the act of their vicarious agent. The claimant bears the burden of proving that fault was also present on the part of the superior/supervisor independently (Sec. 831 para. 1 German Civil Code). Can we therefore imagine a tort case built upon the liability of an intelligent agent for its wrongful act? This would require, as a necessary first step, granting tort responsibility to intelligent agents, i.e. the ability to be a tortfeasor. Such a conception provides a safe line of defence for the alleged AI wrongdoer: it could easily be argued that its wrongful act was not predicted, not “foreseeable”, because of the autonomy risk involved.

The implications of rights and obligations on legal personality

Within a jurisdiction, it is not always the case that all persons are subject to the same rights and obligations.

Legal personality can be granted to a person with rights but no obligations; this was the case both when the concept was first conceived, in 1972, and when it was implemented at a constitutional level in Ecuador, where rights were awarded to Nature. It can be argued, however, that such a type of ‘personality’ constitutes nothing more than a legal contrivance that enables human individuals to act on behalf of a non-human rights holder, rather than requiring them to establish standing in their own capacity. In New Zealand, by contrast, trustees were established to act on behalf of the environmental features given personality. In any circumstance, this model seems inapposite to the reasons for considering personality for AI systems.

Could it be viable to grant AI a personality characterised by obligations only? It could answer some initial concerns, but it remains unclear how the legislative gaps in accountability would accordingly be addressed. For instance, civil liability can result in an award of damages in case of wrongdoing; however, damages can only be paid if the wrongdoer owns property.

Upon incorporation, the legal personality businesses possess enables them to become fully liable: they can sue and be sued, enter into contracts, incur debt, own property, and be convicted of crimes. However, when compared to the rights of natural persons, the threshold of constitutionally granted rights for legal persons is the subject of much debate. As a general rule, incorporated persons are not entitled to the same rights as natural persons. In international law, “States enjoy plenary personality and international organisations may have varying degrees of it.”

And what about the question of AI in managerial positions in a corporation? The appointment of a computer program named Vital as a board member by a Hong Kong venture capital firm in 2014 is one such example. This ‘nomination’ amounted to a mere observer status for the ‘board member’; similarly to the Saudi citizenship awarded to Sophia, it was a symbolic rather than a legal act. It is not impossible that an AI board member could be delegated a restricted set of obligations and rights; under no circumstances, however, could it be exonerated from managerial responsibilities. Many jurisdictions require that directorships be held by natural persons only, although in some legal systems another corporation can also serve on the board; this was possible under English law until 2015. In the USA, Shawn Bayern developed this argument, since there are legal gaps in US business entity law that allow the incorporation of limited liability companies without any human participation. This entails a paradoxical interpretation of that law, whereby a natural person creates a company, adds an AI system as a member, and then resigns; but it suggests the manner in which legal personality might be adapted in the future.

US law contains a provision in the ‘Law of War Manual of the United States Department of Defense’, where paragraph 6.5.9.3 (June 2015) states: “Law of War Obligations of Distinction and Proportionality Apply to Persons Rather than the Weapons Themselves.” It is emphasised that “the law of war does not require weapons to make legal determinations, even if the weapon (e.g., through computers, software, and sensors) may be characterized as capable of making factual determinations, such as whether to fire the weapon or to select and engage a target”. It can be argued that weaponry consisting of artificial intelligence cannot bear any responsibility as a legal agent; nonetheless, the aforementioned Manual should be narrowly regarded as the military’s interpretation of obligations under international law.

As the question of legal personality, or personhood as it is widely known in the USA, is an elusive and debated concept, can at least some general moral guidelines for AI be argued for?

The beneficial traits attributed to the use of AI technology cannot be overlooked; in 2016 the NSTC Committee on Technology issued a report, published by the Executive Office of the President, which proposed the consideration of legal and ethical guidelines for AI. More specifically, Strategy 3 of the Report underlines, regarding the ethical aspect of AI, that “within the limits of what is technologically feasible, therefore, researchers must strive to develop algorithms and architectures that are verifiably consistent with, or conform to, existing laws, social norms and ethics—clearly a very challenging task”. This should be interpreted quite cautiously, however; the report does not imply that legal personality can be awarded to AI, but neither does it exclude the possibility that these advanced creations can behave according to a primitive system of legal and moral guidelines embedded in their software.

Taking into consideration that Systems of Artificial Intelligence (SAI) can demonstrate a level of autonomy in decision making, we need to assess possible scenarios where individuals are affected by the autonomous decisions of SAI. We need to examine how the decisions of SAI, when such systems actively operate in society, can be kept from interfering with and overlapping the rights of natural persons, even if driven by good intentions; and how we can ensure that the damage caused by such systems is compensated.

Even if SAI are given a legal status as entities, there is no certainty that, in case of wrongdoing, they will be able to provide compensation; what might instead take place is a shift of responsibility for the damage caused by SAI onto a natural person. After all, according to one of the principles of compensation of damage, the damage is made good either by the offender personally or by a person who is responsible for the offender’s actions. Some may believe that the problem at hand could be solved by holding the developers, operators or producers of an SAI liable. Others may think that it would be sufficient to reprogram or shut down the system.

Shifting the accountability for the wrongful acts of SAI onto their operators, developers and programmers may not be a viable solution, because the operational system of SAI relies on the autonomy and independent decision making of the machine, combined with its programmed ‘nature’ of acquiring new learning experiences from its stimuli and interaction with the real world. It is further noted in Article 12 of the United Nations Convention on the Use of Electronic Communications in International Contracts (as discussed by the UNCITRAL Secretariat) that any person, natural or legal, should be held accountable and liable for any activity generated through messages by computers programmed for use by that person. This provision, however, is problematic; it does not pertain to messages of autonomous SAI, but rather to automatically generated content, whose errors, and consequently the natural or legal persons behind them, are easily traceable. With SAI, given the extent of autonomy in their decision-making process, we cannot connect their potentially wrongful acts to the liability of the people behind their creation, production, operation, and so on. Therefore, making the developers, operators or users of Systems of AI liable for the results of independent decisions of such systems would be more complicated than it might seem; their degree of liability would be undue and disproportionate.

Is it time to grant legal personality to AI?

The legal framework should reflect the perception of Systems of Artificial Intelligence as subjects with autonomous decision-making capacities, taking the legal implications into consideration. Particularly in the case of investments, an agent can benefit from the help of SAI, whose programming can maximise profit and offer better financial insight. The system will not only perform better analysis and present more reliable conclusions than a human; it could also serve as the main tool to prevent abuse on the part of the agent.

Systems of artificial intelligence possess traits that accord with the prerequisites for granting legal personality: they are autonomous in their decisions, they acquire advanced learning through interaction with their surroundings, and they can provide solution-oriented and strategic thinking. They are advanced systems of intelligent learning, capable of interacting with other subjects of the law; therefore, it is imperative that the protection of the rights of the latter is secured and reinforced through clarification of the personality status of AI. So far, at EU and national level, there is no legislative framework which grants AI any status other than that of an object of the law; this uncertainty extends to liability for damages caused by SAI during operation.

The European Parliament stressed that AI has the potential to ‘unleash a new industrial revolution, which is likely to leave no stratum of society untouched’, generating ‘legal and ethical implications and effects’. It considered it highly necessary to create ‘a specific legal status [...], so that at least the most sophisticated autonomous robots could be established as having the status of electronic persons responsible for making good any damage they may cause, and possibly applying electronic personality to cases where robots make autonomous decisions or otherwise interact with third parties independently.’

It was announced only a few days ago that, under the Portuguese presidency of the Council of the European Union, Portugal will seek to adopt a comprehensive EU legal framework on AI, prioritising the rights of users, transparency, and the protection of the fundamental EU values of privacy and human rights, with thorough consideration of the risks involved, and cooperating with the US government to this end. According to the economy minister, Pedro Siza Vieira, it is pivotal to shape the legal framework for AI, because “…it is now clear that artificial intelligence is the basis for enhanced productivity and has great potential for growth.” He emphasised that “the standards of society and individuals should be respected in the area of artificial intelligence and the algorithms involved.”

Maria Apostolidou

Maria is a trainee advocate, expected to become a Greek licensed barrister in spring ’21. She completed her LLB at the Aristotle University of Thessaloniki (with an Erasmus+ year at the Université Libre de Bruxelles), and went on to an LLM in International Commercial Law at the University of Strathclyde. She is interested in commercial law, with an emphasis on e-commerce, intellectual property, international business law, and EU legislative affairs.