Today it is commonplace for computer programs arising from the area of artificial intelligence research known as intelligent agents to function autonomously and competently;1 they work without human supervision, learn, and, while remaining 'just programmed entities', are capable of doing things their creators or users might not anticipate.
In short, leaving philosophical debates about the true meaning of 'autonomy' aside, they are worthy of being termed 'autonomous artificial agents'.a And on present trends, we, along with our current social and economic institutions, will increasingly interact with them. They will buy goods for us, possibly after carrying out negotiations with other artificial agents, process our applications for credit cards or visas, and even make decisions on our behalf (in smarter versions of governmental systems such as TIERS2 and in the ever-increasing array of systems supporting legal decision-making3). As we interact with these artificial agents in unsupervised settings with no human mediators, their increasingly sophisticated functionality and behavior create awkward questions. If it is a reasonable assumption that the degree of their autonomy will increase, how should we come to treat these entities?
Societal norms and the legal system constrain our interactions with other human beings (our fellow citizens or people of other nations), other legal persons (corporations and public bodies), or animal entities. There are, in parallel, rich philosophical discussions of the normative aspects of these interactions in social, political, and moral philosophy, and in epistemology and metaphysics. The law, taking its cues from these traditions, strives to provide structure to these interactions. It answers questions such as: What rights do our fellow citizens have? How do we judge them liable for their actions? When do we attribute knowledge to them? What sorts of responsibilities can (or should) be assigned to them? It is becoming increasingly clear these questions must be addressed with respect to artificial agents.4 So, what place within our legal system should these entities occupy so that we may do justice to the present system of socio-economic-legal arrangements, while continuing to safeguard our interests?
The Contracting Problem
Discussing rights and responsibilities for programs tends to trigger thoughts of civil rights for robots, or taking them to trial for having committed a crime or something else similarly fanciful. This is the stuff of good, bad, and simplistic science fiction. But the legal problems created by the increasing use of artificial agents today are many and varied. Consider one problem, present in e-commerce: If two programs negotiate a deal (that is, my shopping bot makes a purchase for me at a Web site), does that mean a legally binding contract is formed between their legal principals (the company and me)?
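To make the e-commerce scenario concrete, the exchange between the two programs can be sketched as a toy protocol. This is a minimal, hypothetical illustration (all class and method names are invented, not drawn from any real system): the shopping bot autonomously accepts an offer whose particular terms its human principal never reviews, which is exactly what makes the contractual questions below awkward.

```python
# Hypothetical sketch of agent-mediated contracting. The shopping bot
# "agrees" to terms on its principal's behalf without the principal
# ever seeing this particular exchange.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Offer:
    item: str
    price: float
    terms: str  # e.g., a refund policy the human principal never reads

class SellerBot:
    """The seller's artificial agent: quotes offers for items."""
    def quote(self, item: str) -> Offer:
        return Offer(item, price=19.99, terms="no refunds after 7 days")

class ShoppingBot:
    """The buyer's artificial agent: accepts any offer within budget."""
    def __init__(self, budget: float):
        self.budget = budget

    def negotiate(self, seller: SellerBot, item: str) -> Optional[Offer]:
        offer = seller.quote(item)
        # The acceptance decision is made autonomously; neither human
        # principal is aware of the terms of this particular deal.
        return offer if offer.price <= self.budget else None

deal = ShoppingBot(budget=25.00).negotiate(SellerBot(), "book")
```

If `deal` is not `None`, a purchase has been concluded on terms neither principal inspected, raising the question of whether, and how, a binding contract between the principals has been formed.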
A traditional statement of the requirements of a legally valid contract is that "there must be two or more separate and definite parties to the contract; those parties must be in agreement i.e., there must be a consensus ad idem; those parties must intend to create legal relations in the sense the promises of each side are to be enforceable simply because they are contractual promises; the promises of each party must be supported by consideration i.e., something valuable given in return for the promise."5
These requirements give rise to difficulties in accounting for contracts reached through artificial agents and have sparked a lively debate as to how the law should account for contracts that are concluded in this way. Most fundamentally, doctrinal difficulties stem from the requirement there be two parties involved in contracting: as artificial agents are not considered legal persons, they are not parties to the contract. Therefore, in a sale brought about by means of an artificial agent, only the buyer and seller can be the relevant parties to the contract. This entails difficulties in satisfying the requirement the two parties should be in agreement, since in many cases one party will be unaware of the terms of the particular contract entered into by its artificial agent. Furthermore, in relation to the requirement there should be an intention to form legal relations between the parties, if the agent's principal is not aware of the particular contract being concluded, how can the required intention be attributed?
Artificial Agents as Legal Agents
One possible solution, which would require us to grant some legal standing to the programs themselves,7 would be to treat programs as legal agents of their principals, empowered by law to engage in all those transactions covered by the scope of their authority. We would understand the program as having the authority to enter into contracts with customers, much as human agents do for a corporate principal. Some of its actions will be attributed to its corporate principal (for instance, the contracts it enters into), while those outside the scope of its authority will not. The 'knowledge' it acquires during transactions, such as customer information, can be attributed to the corporate principal, in the way that knowledge of human agents is. Lastly, the established theory of liability for principal-agent relationships can be applied to this situation. The details of this solution aside, the most important aspect here is that, unlike a car, a program is neither a thing nor a tool; rather, it is an entity with legal standing in our system.
In granting the status of a legal agent to a computer program, we are not so much granting rights to programs as protecting those that employ and interact with them. Understanding appropriately sophisticated programs as legal agents of their principals could be a crucial step to regulating their presence in our lives. It will enable us to draw upon a vast body of well-developed law that deals with the agent-principal relationship, and in a way that safeguards the rights of the principal user and all concerned third parties. Without this framework, neither third parties nor principals are adequately protected. Instead, we find ourselves in a situation where increasingly sophisticated entities determine the terms of transactions that affect others and place constraints on their actions, though with no well-defined legal standing of their own. Viewing a program as a legal agent of the employer could represent an economically efficient, doctrinally satisfying, and fair resolution that protects our interests, without in any way diminishing our sense of ourselves.
Rights and Legal Personhood for Artificial Agents
There are two ways to understand the granting of rights, such as legal agency, to artificial agents. Rights might be granted to artificial agents as a way of protecting the interests of others; and artificial agents might interact with, and impinge on, social, political, and legal institutions in such a way that the only coherent understanding of their social role emerges by modifying their status in our legal system: perhaps treating them as legal agents of their principals, or perhaps treating them as legal persons, as we do corporations or other human beings. And once they enjoy such elevation, they must conform to the standards expected of the other entities that enjoy standing in our legal system.
The question of legal personality suggests the candidate entity's presence in our networks of legal and social meanings has attained a level of significance that demands reclassification. An entity is a viable candidate for legal personality in this sense provided it fits within our networks of social, political, and economic relations in such a way that it can coherently be a subject of legal rulings. Thus, the real question is whether the scope and extent of artificial agent interactions have reached such a stage. Answers to this question will reveal what we take to be valuable and useful in our future society as well, for we will be engaged in determining what sorts of interactions artificial agents should be engaged in for us to be convinced that the question of legal personality has become a live issue.
While the idea of computer programs being legal persons might sound fanciful, it is worth noting the law has never considered humanity a necessary or sufficient condition for being a person. For example, in 19th century England, women were not full persons; and, in the modern era, the corporation has been granted legal personhood.c The decision to grant personhood to corporations is instructive because it shows that granting personhood is a pragmatic decision taken in order to best facilitate human commerce and interests. In so doing, we did not promote or elevate corporations; we attended to the interests of humans.
Artificial agents have a long way to go before we can countenance them as philosophical persons. But their roles in our society might grow to a point where the optimal strategy is to grant them some form of limited legal personhood. Until then, we should acknowledge their growing roles in our lives and make appropriate adjustments to our legal frameworks so that our interests are best addressed. Indeed, this area requires an international legal framework to address the ubiquity of artificial agents on the Internet, and their deployment across national borders.d I have merely scratched the surface of a huge, complex, multidisciplinary debate; in the years to come, we can only expect that more complexities and subtleties will arise.
1. The literature on agent technologies is vast and fast-growing; for introductions, see M.J. Wooldridge, Reasoning about Rational Agents, MIT Press, Cambridge, MA, 2000, and N.R. Jennings and M.J. Wooldridge, Eds., Agent Technology: Foundations, Applications and Markets, Springer-Verlag, 1998.
3. The Proceedings of the International Conferences on Artificial Intelligence and Law (http://www.iaail.org/past-icail-conferences/index.html), and the journal Artificial Intelligence and Law are rich sources of information on these systems.
4. A very good source of material on the legal issues generated by the increasing use of artificial agents may be found at the Law and Electronic Agents Workshops site: http://www.lea-online.net/pubs
6. There is a large literature in this area; some very good treatments of the contracting problem may be found in: T. Allen and R. Widdison, "Can computers make contracts?" Harvard Journal of Law and Technology 9 (1996), 25–52; A. Bellia Jr., "Contracting with electronic agents," Emory Law Journal 50, 4 (2001), 1063; I. Kerr, "Ensuring the success of contract formation in agent-mediated electronic commerce," Electronic Commerce Research 1 (2001), 183–202; E. Weitzenboeck, "Electronic agents and the formation of contracts," International Journal of Law and Information Technology 9, 3 (2001), 204–234. Various international trade agreements, such as those formulated by UNCITRAL, and national legislation, such as the UCITA, have not as yet resulted in clarity in these areas.
7. S. Chopra and L. White, "Artificial agents: Personhood in law and philosophy," in Proceedings of the European Conference on Artificial Intelligence, 2004, and S. Chopra and L. White, A Legal Theory for Autonomous Artificial Agents, University of Michigan Press, to be published.
a. Jim Cunningham has pointed out that a certain degree of autonomy is present in all programs; consider Web servers or email daemons, for instance. One might think of intelligent agents as a move toward one end of the spectrum of autonomy.
c. In his Max Weber Lecture, "Rights of Non-humans? Electronic Agents and Animals as New Actors in Politics and Law," Gunther Teubner notes that animals were often treated as legal actors including being brought to trial.
The Digital Library is published by the Association for Computing Machinery. Copyright © 2010 ACM, Inc.