
Can Automated Agents Proficiently Negotiate With Humans?

Exciting research in the design of automated negotiators is making great progress.

Negotiations surround our everyday life, usually without us even noticing them. They can be simple and ordinary, as in haggling over a price in the market or deciding on a meeting time; or they can be complex and extraordinary, perhaps involving international disputes and nuclear disarmament14, issues that affect the well-being of millions.

While the ability to negotiate successfully is critical for any social interaction, the act of negotiation is not an easy task. Even something that might be perceived as a “simple” case of single-issue bilateral bargaining over a price in the marketplace demonstrates the difficulties that arise during the negotiation process and the complexity of modeling its environment. Each of the two sides has his or her own preferences, which might or might not be known to the other party, and if some of these preferences conflict, reaching an agreement requires a certain degree of cooperation or concession.

Keeping all this in mind, negotiation is an attractive environment for automated agents. The many benefits of such agents include alleviating some of the efforts required of humans during negotiations and assisting individuals who are less qualified in the negotiation process, or in some situations, replacing human negotiators altogether. Another possibility is for people embarking on important negotiation tasks to use these agents as a training tool, prior to actually performing the task. Thus, success in developing an automated agent with negotiation capabilities has great advantages and implications. The design of automated agents that proficiently negotiate is a challenging task, as there are different environments and constraints that should be considered.

The negotiation environment defines the specific settings of the negotiation. Based on these settings, different considerations should then be taken into account. In this article, we focus on the question of whether an automated agent can proficiently negotiate with human negotiators. To this end we define a proficient automated negotiator as one that can achieve the best possible agreement for itself. This, of course, also depends on the preferences of the other party and thus adds complexity to the design of such an agent.


The Negotiation Environment

The designer of an automated agent must take into account the environment in which the agent will operate. The environment determines several parameters: the number of negotiators taking part in the negotiation, the time frame of the negotiation, and the issues on which the negotiation is being conducted. The number of parties participating in the negotiation process can be two (bilateral negotiations) or more (multilateral negotiations). For example, in a market there can be one seller but many buyers, all involved in negotiating over a certain item. On the other hand, if the item is common, there may also be many sellers taking part in the negotiation process.

The negotiation environment also consists of a set of objectives and issues to be resolved. Various types of issues can be involved, including discrete enumerated value sets, integer-value sets, and real-value sets. A negotiation consists of multi-attribute issues if the parties have to negotiate an agreement that involves several attributes for each issue. Negotiations that involve multi-attribute issues allow making complex decisions while taking into account multiple factors.18 The negotiation environment can consist of non-cooperative negotiators or cooperative negotiators. Generally speaking, cooperative agents try to maximize their combined joint utilities (see Zhang40) while non-cooperative agents try to maximize their own utilities regardless of the other sides’ utilities.
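To make the notion of multi-attribute issues concrete, the following minimal sketch (in Python) represents a small negotiation domain and scores offers with an additive utility function. The issue names, values, weights, and evaluations are illustrative assumptions and are not taken from any of the systems reviewed in this article.

```python
# Each issue maps to its set of admissible values: a discrete enumerated set,
# an integer-valued set, and a real-valued set, as described above.
ISSUES = {
    "price":    [100, 150, 200, 250],          # integer-valued issue
    "delivery": ["immediate", "one_month"],    # discrete enumerated issue
    "warranty": [0.5, 1.0, 2.0],               # real-valued issue (years)
}

def additive_utility(offer, weights, evaluations):
    """Score an offer as a weighted sum of per-issue evaluations in [0, 1]."""
    return sum(weights[issue] * evaluations[issue][value]
               for issue, value in offer.items())

# One party's (private) preferences: weights sum to 1, evaluations normalized.
buyer_weights = {"price": 0.6, "delivery": 0.3, "warranty": 0.1}
buyer_evaluations = {
    "price":    {100: 1.0, 150: 0.7, 200: 0.3, 250: 0.0},
    "delivery": {"immediate": 1.0, "one_month": 0.2},
    "warranty": {0.5: 0.1, 1.0: 0.5, 2.0: 1.0},
}

offer = {"price": 150, "delivery": "immediate", "warranty": 1.0}
print(additive_utility(offer, buyer_weights, buyer_evaluations))  # 0.77
```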

Finally, the negotiation protocol defines the formal interaction between the negotiators, whether the negotiation is done only once (one-shot) or repeatedly, and how the exchange of offers between the agents is conducted. A common exchange of offers model is the alternating offers model.32 In addition, the protocol states whether agreements are enforceable or not, and whether the negotiation has a finite or infinite horizon. The negotiation is said to have a finite horizon if the length of every possible history of the negotiation is finite. In this respect, time costs may also be assigned and they may increase or decrease the utility of the negotiator.
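The following toy sketch (a deliberately simplified assumption, not any reviewed system) shows how a finite-horizon alternating-offers protocol with time discounting can be driven: two stub agents exchange price proposals until one accepts or the deadline enforces a status-quo outcome.

```python
import random

class Buyer:
    """Toy buyer: bids somewhere below its budget and accepts any affordable price."""
    def __init__(self, budget):
        self.budget = budget
    def propose(self, t):
        return random.randint(100, self.budget)
    def accepts(self, price, t):
        return price <= self.budget

class Seller:
    """Toy seller: asks above its reserve price and accepts anything at or above it."""
    def __init__(self, reserve):
        self.reserve = reserve
    def propose(self, t):
        return random.randint(self.reserve, 250)
    def accepts(self, price, t):
        return price >= self.reserve

def run_alternating_offers(buyer, seller, deadline=10, discount=0.95):
    """Agents alternate proposals; the value of an agreement decays with time."""
    agents = (buyer, seller)
    for t in range(deadline):
        proposer, responder = agents[t % 2], agents[(t + 1) % 2]
        price = proposer.propose(t)
        if responder.accepts(price, t):
            return price, discount ** t       # agreement plus its time discount
    return None, 0.0                          # deadline reached: status-quo outcome

print(run_alternating_offers(Buyer(200), Seller(150)))
```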

Figure 1 depicts the different variations in the settings, along with the location of each system that is described in the section “Tackling the Challenges.” For example, point D in the cube represents bilateral negotiations with multi-attribute issues and repeated interactions, while point B represents multilateral negotiations with a single attribute for negotiation and a one-shot encounter.

The negotiation domain encompasses the negotiation objectives and issues and assigns different values to each. Thus, an agent may be tailored to a given domain (for example, the Diplomat agent22 described later is tailored to a specific domain of the Diplomacy game) or domain independent (for example, the QOAgent24 also described later).


The Information Model

The information model dictates what is known to each agent. It can be a model of complete information, in which each agent has complete knowledge of both the state of the world and the preferences of other agents; or it can be a model of incomplete information, in which agents may have only partial knowledge of either the states of the world or the preferences of other agents (for example, bargaining games with asymmetric information), or they may be ignorant of the preferences of the opponents and the states of the world.33 The incomplete information can be modeled in different ways with respect to the uncertainty regarding the preferences of the other party. One approach to modeling the information is to assume that there is a set of different agent types and the other party can be any one of these types.


Human-Agent Negotiations

The issue of automated negotiation is too broad to cover in a short review paper. To this end, we have decided to concentrate on adversarial bilateral bargaining in which the automated agent is matched with people. The challenges in this area could motivate readers to pursue this field (note that this sets the focus and leaves most auction settings outside the scope of this article, even though automated agents that bid in auctions competing with humans have been proposed and evaluated in the literature; for example, Grossklags and Schmidt11).

Automated Negotiator Agents. The problem of developing an automated agent for negotiations is not new for researchers in the fields of multiagent systems and game theory (for example, Kraus20 and Muthoo26). However, designing an automated agent that can successfully negotiate with a human counterpart is quite different from negotiating with another automated agent. Although an automated agent that played in the Diplomacy game with other human players was introduced by Kraus and Lehmann22 some 20 years ago, the difficulties of designing proficient automated negotiators have not been resolved.

In essence, most research makes assumptions that do not necessarily apply in genuine negotiations with humans, such as complete information or the rationality of the opponent negotiator. In this sense, both parties are assumed to be rational in their behavior (for example, the agents are modeled as expected-utility maximizers that cannot deviate from their prescribed behavior). Yet, when dealing with human counterparts, one must take into consideration the fact that humans do not necessarily maximize expected utility or behave rationally. In particular, results from the social sciences suggest that people do not follow equilibrium strategies.6,25 Moreover, when playing with humans, the theoretical equilibrium strategy is not necessarily the optimal strategy.38 In this respect, equilibrium-based automated agents that play with people must incorporate heuristics to allow for “unknown” deviations in the behavior of the other party. Moreover, when people are the ones who design agents, they do not always design them to follow equilibrium strategies.12 Nonetheless, some weaker assumptions are still made, mainly that although the other party will not necessarily maximize its expected utility, given two offers it will prefer the one with the higher utility value. Lastly, it has been shown that whether the opponent is oblivious or has full knowledge that its counterpart is a computer agent can change the overall result. For example, Grossklags and Schmidt11 showed that efficient market prices were achieved when human subjects knew that computer agents existed in a double auction market environment. Sanfey et al.34 matched humans with other humans and with computer agents in the Ultimatum Game and showed that people rejected unfair offers made by humans at significantly higher rates than those made by a computer agent.

Automated Agents Negotiating with People. Researchers have tried to take some of these issues into consideration when designing agents that are capable of proficiently negotiating with people. For example, dealing only with the bounded rationality of the opponent, several researchers have suggested new notions of equilibria (for example, the trembling hand equilibrium described in Rasmusen30). Approximately 10 years ago, Chavez and Maes5 presented Kasbah, a seminal negotiation model in a virtual marketplace in which the behavior of each agent was fully controlled by a human user. The main idea was to help users in the negotiation process between buyers and sellers by using automated negotiators. Chavez and Maes’s main innovation was not so much the sophisticated design of the automated negotiators but rather the creation of a multiagent negotiation environment. Kraus et al.21 describe an automated agent that negotiates proficiently with humans. Although they also deal with negotiation with humans, there is complete information in their settings. Other researchers have suggested a shift from quantitative decision theory to qualitative decision theory.36 In using such a model it is not necessary to assume that the opponent will follow the equilibrium strategy or try to be a utility maximizer. Another approach was to develop heuristics motivated by the behavior of people in negotiations.22 However, the fundamental question of whether it is possible to build automated agents for negotiations with humans in open environments has not been fully addressed by these researchers.

Another direction being pursued is the development of virtual humans to train people in interpersonal skills (for example, Kenny et al.19). Achieving this goal requires cognitive and emotional modeling, natural language processing, speech recognition, and knowledge representation, as well as the construction and implementation of the appropriate logic for the task at hand (for example, negotiation), in order to make the virtual human a good trainer. An example of the researchers’ prototype, in which trainees conduct real-time negotiations with a virtual human doctor and a village elder to move a clinic to another part of the town, out of harm’s way, is given in Figure 2.

Commercial companies and schools have also displayed interest in automated negotiation technologies. Many courses and seminars are offered to the public and to institutions. These courses often guarantee that upon completion you will “know many strategies on which to base the negotiation,” “discover the negotiation secrets and techniques,” “learn common rivals’ tactics and how to neutralize them,” and “be able to apply an efficient negotiation strategy.”1,27 Yet, in many of these offerings the automated agents are restricted to one domain and cannot be generalized. Some of the automated agents cannot be adapted to the user and are restricted to single-attribute negotiation with no time constraints. Nonetheless, human factors and results of laboratory and field experiments reviewed in esteemed publications9,29 provide guidelines for the design of automated negotiators. Yet it remains a great challenge to incorporate these guidelines in the inherent design of an agent so that it can proficiently negotiate with people.


The Main Challenges

The main difficulty in the development of automated negotiators is that, in order to negotiate proficiently with a human counterpart, they must be able to operate in settings that involve both opponents of bounded rationality and incomplete information. The difficulty also stems from the fact that humans are influenced by behavioral aspects and by social preferences that hold between players (such as inequity-aversion2 and reciprocity4). Thus, it is difficult to predict individual choices.

Tackling the issues of bounded rationality and incomplete information is a complex task. To achieve this, an automated agent requires two interdependent mechanisms. The first is a decision-making component that works via modeling human factors. This mechanism is in charge of generating offers and deciding whether to accept or reject offers made by the opponent. The challenge behind this mechanism does not lie in the computational complexity of making good decisions but rather in reasoning about the psychological and social factors that characterize human behavior. The second component is learning, which allows the agent to infer the opponent’s preferences and strategies based on the opponent’s actions.
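A minimal skeleton of these two interdependent mechanisms might look as follows; the class and method names are illustrative assumptions rather than the interface of any specific agent reviewed here.

```python
class OpponentModel:
    """Learning mechanism: maintains beliefs about the opponent."""
    def update(self, observed_action):
        """Refine beliefs about the opponent's preferences and strategy."""
        raise NotImplementedError
    def acceptance_probability(self, offer):
        """Estimated probability that the opponent accepts the offer."""
        raise NotImplementedError

class NegotiatorAgent:
    """Decision-making mechanism, informed by the opponent model."""
    def __init__(self, opponent_model):
        self.model = opponent_model
    def observe(self, opponent_action):
        self.model.update(opponent_action)     # learning component
    def respond(self, offer):
        """Decide whether to accept, reject, or counter the incoming offer."""
        raise NotImplementedError
    def generate_offer(self):
        """Propose an offer, taking the modeled human factors into account."""
        raise NotImplementedError
```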

Another inherent problem in the design of an automated agent is the generality of its behavior. While humans can negotiate in different settings and domains, when designing an automated agent a decision must be made as to whether the agent will be a general-purpose negotiator, that is, able to negotiate successfully in many settings and be domain-independent,24 or suitable only for one specific domain (for example, Ficici and Pfeffer,8 Kraus and Lehmann22). Perhaps the advantage of specificity is the ability to construct better strategies that could allow the agent to achieve better agreements, as compared to a more general-purpose negotiator. This is because specificity allows the designer to debug the agent’s strategy more carefully and against more test cases. By doing so, the designer can fine-tune the agent’s strategy and arrive at a more proficient automated negotiator. Agents that are domain independent, on the other hand, are more difficult to test against all possible cases and states.

The issue of trust also plays an important role in negotiations, especially when the other side’s behavior is unpredictable. Successful negotiations depend on the trust established between all parties, which can depend on cheap-talk during negotiations (that is, unverifiable information with regard to the other party’s private information7) and the introduction of unenforceable agreements. Based on these actions and this information, each party can update the other party’s reputation (for better or for worse) and thus build trust between the sides. Some of the systems we review below do allow cheap-talk and unenforceable agreements. Building trust can also depend on past and future interactions with the other party (for example, one-shot interaction or repeated interactions). Due to limited space, we do not cover the issue of trust in detail. Readers are encouraged to refer to Ross and LaCroix31 for a comprehensive review of this topic.

Another important issue is how automated agents can be evaluated and compared. Such an evaluation is important in order to select the most appropriate agent for the task at hand. Yet no single criterion has been defined. The answer to the question “what constitutes a good negotiator agent?” is multifaceted. For example, is a good agent an agent that:

  • Achieves a maximal payoff when matched with human negotiators? But will it also generate these payoffs when matched with other automated agents, which might be more accessible than human negotiators, and which also exist in open environments?
  • Generates a maximal combined payoff for both negotiators, that is, the agent is more concerned with maximizing the combined utilities than its own reward?
  • Allows most negotiations to end with an agreement, rather than one of the sides opting-out or terminating the negotiations with a status-quo outcome?
  • Is domain dependent and its technique suitable only for that domain or one that is domain independent and can be adapted to several domains? This might be an important factor if an agent is required to adapt to dynamic settings, for example.
  • Behaves in such a manner that would leave its counterpart speculating whether it is an automated negotiator or a human one?

In this article we do not define a best answer, nor do we claim that one necessarily exists. Yet researchers should take these and other measures into consideration when designing their agents. Perhaps agreed-upon criteria and benchmarks are in order to allow an adequate comparison between automated agents.

Here we review automated agents that incorporate the two mechanisms of decision making via modeling human factors and learning the opponent’s model. By doing so they try to tackle the aforementioned challenges in bilateral negotiations. While many automated negotiators’ designs have been suggested in the literature, we only review those that have actually been evaluated and tested with human counterparts. This is mainly due to the fact that in order to test the proficiency of an automated negotiator whose purpose is to negotiate with human negotiators, one must match it with humans. It is not sufficient to test it with other automated agents, even if they were supposed to have been designed by humans as bounded rational agents, due to many of the reasons previously mentioned.


Tackling the Challenges

Here we describe several automated agents that try to tackle the challenges and proficiently negotiate in open environments. All of these agents were evaluated with human counterparts. It is worth noting that most of these agents use structured (or semi-structured) language and do not implement any natural language processing methods (with the one exception of the Virtual Human agent). In addition, the agents vary with respect to their characteristics. For example, some are domain-dependent, while others are domain-independent and are more general in nature; some use the history of past interactions to model the opponent, while others only have access to current interaction data. Figure 3 depicts a general architecture for an automated agent design. We begin by describing the oldest agent of all of them—the Diplomat agent.


The Diplomat Agent

Over 20 years ago Kraus and Lehmann developed an agent called Diplomat22 that played the Diplomacy game (see Figure 4) with the goal to win. The game involves negotiations in multi-issue settings with incomplete information concerning the other agents’ goals, and misleading information can be exchanged between the different agents. The negotiation protocol extends the model of alternating offers and allows simultaneous negotiations between the parties, as well as multiple interactions with the opponent agents during each time period. The issue of trust also plays an important role, as commitments might be breached. In addition, as each game consists of several sessions, it can be viewed as repeated negotiation settings.

The main innovation of the Diplomat agent is probably the fact that it consists of five different modules that work together to achieve a common goal. Different personality traits are implemented in the different modules. These traits affect the behavior of the agent and can be changed during each run, allowing Diplomat to change its ‘personality’ from one game to another and to act nondeterministically. In addition, the agent has a limited learning capability that allows it to try to estimate the personality traits of its rivals (for example, their risk attitude). Based on this, Diplomat assesses whether or not the other players will keep their promises. In addition, Diplomat incorporates randomization in its decision-making component. This randomization, influenced by Diplomat‘s personality traits, determines whether some agreements will be breached or fulfilled.
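As a rough illustration of how personality-driven randomization might decide whether to honor an agreement, consider the sketch below. The trait name and the probability formula are assumptions made for exposition; they are not Kraus and Lehmann's actual rules.

```python
import random

def keeps_agreement(loyalty, gain_from_breach):
    """Higher loyalty lowers the chance of breaching; a larger gain raises it."""
    breach_probability = (1.0 - loyalty) * min(1.0, max(0.0, gain_from_breach))
    return random.random() >= breach_probability

# A "loyal" personality keeps its promises most of the time, even for large gains.
print(keeps_agreement(loyalty=0.9, gain_from_breach=0.8))
```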

The results reported by Kraus and Lehmann show that Diplomat played well in the games in which it participated, and most human players were not able to guess which of the players was played by the automated agent. Nonetheless, the main disadvantage of Diplomat is that it is a domain-dependent agent, that is, suitable only for the Diplomacy game. Since the game is quite complex and time consuming, not many experiments were carried out with human players to validate the results and reach a level of statistical significance. Yet, at the time, Diplomat did open a new and exciting line of research, some of which we review here.

We continue with a more recent agent also constrained to a specific domain and involving single-issue negotiations. However, it takes into account the history of past interactions to model the opponents.


The AutONA Agent

Byde et al.3 developed AutONA, an automated negotiation agent. Their problem domain involves multiple negotiations between buyers and sellers over the price and quantity of a given product. The negotiation protocol follows the alternating offers model. Each offer is directed at only one player on the other side of the market and is private information between that pair of buyer and seller. In each round, a player can make a new offer, accept an offer, or terminate negotiations. In addition, a time cost is used to provide incentives for timely negotiations. While the model can be viewed as one-shot negotiations, for each experiment AutONA was provided with data from previous experiments.

In order to model the opponent, AutONA attaches a belief function to each player that estimates the probability of a price for a given seller and a given quantity. This belief function is updated based on prices observed in prior negotiations. Several tactics and heuristics are implemented to form the strategy of the negotiator during the negotiation process (for example, for selecting the opponents with which it will negotiate and for determining the first offer it will propose). Byde et al. also allowed cheap-talk during negotiations, that is, the proposition of offers with no commitments. The results obtained from the experiments with human negotiators revealed that the human participants did not detect which negotiator was the software agent. In addition, Byde et al. found that AutONA was not sufficiently aggressive during negotiations and thus many negotiations remained incomplete. Their experiments showed that at first AutONA performed worse than the human players. A modified version, in which several configuration parameters of AutONA were fine-tuned, improved the results to be more in line with those of human negotiators, yet not better. They conclude that different environments would most likely require changing AutONA’s configuration.
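A belief function over prices of the kind described above could, in its simplest form, be estimated from observed frequencies, as in the sketch below. AutONA's actual belief functions are more elaborate; this frequency-count version, and the uninformative prior, are simplifying assumptions.

```python
from collections import defaultdict

class PriceBelief:
    """Belief about acceptable prices per (seller, quantity), from past negotiations."""
    def __init__(self):
        self.observed = defaultdict(list)       # (seller, quantity) -> prices seen

    def record(self, seller, quantity, price):
        self.observed[(seller, quantity)].append(price)

    def prob_at_most(self, seller, quantity, price):
        """Estimated probability that this seller settles at or below `price`."""
        prices = self.observed[(seller, quantity)]
        if not prices:
            return 0.5                           # uninformative prior
        return sum(p <= price for p in prices) / len(prices)

belief = PriceBelief()
for p in (105, 98, 110, 102):
    belief.record("seller_A", 10, p)
print(belief.prob_at_most("seller_A", 10, 104))  # 0.5
```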

We now proceed with agents that are applicable to a larger family of domains: the Cliff-Edge and Colored-Trails agents.


The Cliff-Edge Agent

Katz and Kraus16 proposed an innovative model for human learning and decision making. Their agent competes repeatedly in one-shot interactions, each time against a different human opponent (for example, in sealed-bid first-price auctions and the Ultimatum Game). Katz and Kraus utilized a learning algorithm that integrates virtual learning with reinforcement learning. That is, offers higher than an accepted offer are treated as successful (virtual) offers, even though they were not actually proposed; similarly, offers lower than a rejected offer are treated as having been (virtually) rejected. A threshold is also employed to allow for some deviations from this strict categorization. The results of previous interactions are stored in a database used for later interactions. The decision-making mechanism of Katz and Kraus’s Ultimatum Game agent follows a heuristic based on the qualitative theory of learning direction.35 Simply put, if an offer is rejected in a given interaction, then in the next interaction the proposer will make the opponent a higher offer; in contrast, if an offer is accepted, then in the following interaction the offer will be decreased. Katz and Kraus show that their algorithm performs better than other automated agents, and that its average payoff exceeds the average payoff obtained by humans.
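The sketch below illustrates the two ideas just described under simplifying assumptions: "virtual" reinforcement, in which an accepted offer also rewards all higher offers and a rejected offer also penalizes all lower ones, and the learning-direction heuristic, which raises the next offer after a rejection and lowers it after an acceptance. The value range, step sizes, and update rule are illustrative, not Katz and Kraus's actual algorithm.

```python
OFFERS = list(range(1, 11))      # possible shares offered to the responder

def virtual_update(values, offer, accepted, step=0.1):
    """Update estimated acceptance values for an offer and its 'virtual' neighbors."""
    for o in OFFERS:
        if accepted and o >= offer:
            values[o] += step * (1.0 - values[o])   # treated as virtually accepted
        elif not accepted and o <= offer:
            values[o] += step * (0.0 - values[o])   # treated as virtually rejected
    return values

def next_offer(last_offer, accepted, step=1):
    """Learning-direction heuristic: concede after a rejection, retreat after an acceptance."""
    proposal = last_offer - step if accepted else last_offer + step
    return max(min(proposal, max(OFFERS)), min(OFFERS))

values = {o: 0.5 for o in OFFERS}
values = virtual_update(values, offer=4, accepted=True)
print(next_offer(4, accepted=True))   # 3: offer less after an acceptance
```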

Later, Katz and Kraus17 improved the learning of their agent by allowing gender-sensitive learning. In this case, the information obtained from previous negotiations is stored in three databases, one is general and the other two are each associated with a specific gender. During the interaction, the agent’s algorithm tries to determine when to use each database. Katz and Kraus show their gender-sensitive agent yields higher payoffs than the generic approach, which lacks gender sensitivity.

However, Katz and Kraus’s agent was tested in a single-issue domain with repeated interactions that are used to improve the learning and decision-making mechanism. It is not clear whether their approach would be applicable to negotiation domains in which several rounds are made with the same opponent and multi-issue offers are made. In addition, the success of their gender-sensitive approach depends on the existence of different behavioral patterns of different gender groups.

The following agents are tailored to a rich environment of multi-issue negotiations. As in the agent proposed by Katz and Kraus, the history of past interactions is used to fine-tune the agents’ behavior and modeling.


The Colored-Trails Agents

Ficici and Pfeffer8 were concerned with understanding human reasoning and using this understanding to build their automated agents. They did so by collecting negotiation data and then constructing a proficient automated agent. Both the AutONA agent3 and the Colored-Trails agents collect historical data and use it to model the opponent: Byde et al. used the data to update the belief regarding the price for each player, while Ficici and Pfeffer used it to construct different models of how humans reason in the game.

The negotiation was conducted in the Colored Trails game environment,12 played on an n×m board of colored squares. Players are issued colored chips and are required to move from their initial square to a designated goal square. To move to an adjacent square, a player must turn in a chip of the same color as that square. Players must negotiate with each other to obtain chips needed to reach the goal square (see Figure 5). The learning mechanism of Ficici and Pfeffer involved constructing different possible models of the players and using gradient descent to learn the appropriate model.

Ficici and Pfeffer trained their agents with results obtained from human-human simulations and then incorporated their models in their automated agents that were later matched against human players. They show that this method allows them to generate more successful agents in terms of the expected number of accepted offers and the expected total benefit for the agent. They also illustrate how their agent contributes to the social good by providing high utility scores for the other players. Ficici and Pfeffer were also able to show that their agent performs similarly to human players.

In order for the Colored-Trails Agent to model the opponent, prior knowledge regarding the behavior of humans is needed. The learning mechanism requires sufficient human data for training and is currently limited to one domain only.

Gal et al.10 also examine automated agent design in the Colored Trails domain. They present a machine-learning approach for modeling human behavior in a two-player negotiation, in which one player proposes a trade to the other, who can accept or reject it. Their model tries to predict the reaction of the opponent to different offers, and using this prediction it determines the best strategy for the agent. The domain on which Gal et al. tested their agent can also be viewed as a Cliff-Edge environment, more complex than the Ultimatum Game on which Katz and Kraus evaluated their agent.16
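The decision rule behind such an approach can be sketched as follows: use a learned predictor of the opponent's acceptance probability to pick the proposal with the highest expected benefit. The predictor and utilities below are hypothetical stand-ins; Gal et al. learn theirs from human-human Colored Trails data.

```python
def choose_trade(candidate_trades, my_utility, accept_probability):
    """Return the trade maximizing P(accept) * utility-to-me (no-deal payoff taken as 0)."""
    return max(candidate_trades,
               key=lambda trade: accept_probability(trade) * my_utility(trade))

# Hypothetical stand-ins: the opponent is likelier to accept more generous trades.
trades = [{"give": 1, "get": 3}, {"give": 2, "get": 2}, {"give": 3, "get": 1}]
my_utility = lambda t: t["get"] - t["give"]
accept_probability = lambda t: min(1.0, 0.2 + 0.3 * t["give"])

print(choose_trade(trades, my_utility, accept_probability))  # {'give': 1, 'get': 3}
```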

Gal et al. show that the proposed model successfully learns the social preferences of the opponent and achieves better results than Nash equilibrium and Nash bargaining computer agents, as well as human players.

We now continue with agents that are domain-independent and thus have greater generality than the aforementioned agents.


The Guessing Heuristic Agent

Jonker et al.15 deal with bilateral multi-issue and multi-attribute negotiations that involve incomplete information. The negotiation follows the alternating offers protocol and is conducted once with each opponent. Jonker et al. designed a generic agent that uses a “guessing heuristic” in the buyer-seller domain.a This heuristic tries to predict the opponent’s preferences based on the history of its offers, under the assumption that the opponent’s utility function has a linear structure. Jonker et al. assert that this heuristic allows their agent to improve the outcome of the negotiations. For offer generation, the agent uses a concession mechanism to produce the next offer. In their experiments, the automated agent acts as a proxy for the human user, who is involved only at the beginning, when he or she inputs the preference parameters; the agent then generates the offers and counteroffers. When comparing negotiations involving only automated agents with negotiations involving only humans, the agents usually outperformed the humans (in the buyer’s role). In an additional experiment, Jonker et al. matched human negotiators against agent negotiators, with humans playing only the role of the buyer. When comparing the human-versus-agent negotiations with those between automated agents only, the humans attained somewhat better results than the agents (in the buyer’s role), based on average utilities. The authors attribute this to the fact that the humans forced the automated negotiators to make more concessions than they themselves did.
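One simple way to "guess" an opponent's issue weights from its offer history, assuming a linear additive utility as Jonker et al. do, is to weight more heavily the issues on which the opponent concedes least. The rule below is a simplification for illustration, not their exact heuristic.

```python
def guess_weights(offer_history):
    """offer_history: list of dicts mapping issue -> value offered by the opponent."""
    issues = offer_history[0].keys()
    changes = {issue: sum(1 for prev, curr in zip(offer_history, offer_history[1:])
                          if prev[issue] != curr[issue])
               for issue in issues}
    # Fewer concessions on an issue -> presumed more important to the opponent.
    stubbornness = {issue: 1.0 / (1 + changes[issue]) for issue in issues}
    total = sum(stubbornness.values())
    return {issue: s / total for issue, s in stubbornness.items()}

history = [
    {"price": 200, "delivery": "one_month"},
    {"price": 190, "delivery": "one_month"},
    {"price": 180, "delivery": "one_month"},
]
print(guess_weights(history))   # delivery never conceded -> higher guessed weight
```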




The next agent also deals with bilateral multi-issue negotiations that involve incomplete information. Nonetheless the negotiation protocol is richer than that of the Guessing Heuristic agent.


The QOAgent

The QOAgent24 is a domain-independent agent that can negotiate with people in environments of finite horizon bilateral negotiations with incomplete information. The negotiations consider a finite set of multi-attribute issues and time constraints. Costs are assigned to each negotiator, such that during the negotiation process, the negotiator might gain or lose utility over time. If no agreement is reached by a given deadline a status quo outcome is enforced. A negotiator can also opt-out of the negotiation if it decides that the negotiation is not proceeding in a favorable manner. Similar to the negotiation protocol in the Diplomat agent’s domain, the negotiation protocol in the QOAgent‘s domain extends the model of alternating offers such that each agent can perform up to M > 0 interactions with the opponent agent during each time period. In addition, queries and promises are allowed that add unenforceable agreements to the environment.

With respect to incomplete information, each negotiator keeps his preferences private, though the preferences might be inferred from the actions of each side (for example, offers made or responses to offers proposed). Incomplete information is expressed as uncertainty regarding the utility preferences of the opponent, and it is assumed there is a finite set of different negotiator types. These types are associated with different additive utility functions (for example, one type might have a long-term orientation regarding the final agreement, while the other type might have a more constrained orientation). Lastly, the negotiation is conducted once with each opponent.

As for incomplete information, the QOAgent tackles the problem by applying a simple Bayesian update mechanism, which, after each action (receiving an offer or receiving a response to an offer), tries to infer which utility function best suits the opponent. For the decision-making process, the QOAgent takes a more qualitative approach.36 While the QOAgent‘s model applies utility functions, it is based on a non-classical decision-making method rather than on maximizing expected utility: the QOAgent uses the maximin function and a qualitative valuation of offers. Using these methods the QOAgent generates offers and decides whether to accept or reject proposals it has received.
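A Bayesian update over a finite set of opponent types, in the spirit of the QOAgent's mechanism, can be sketched as follows. The two types, their action likelihoods, and the uniform prior are illustrative assumptions.

```python
def bayes_update(prior, likelihoods, observed_action):
    """prior: {type: P(type)}; likelihoods: {type: {action: P(action | type)}}."""
    unnormalized = {t: prior[t] * likelihoods[t].get(observed_action, 1e-6)
                    for t in prior}
    total = sum(unnormalized.values())
    return {t: p / total for t, p in unnormalized.items()}

prior = {"long_term": 0.5, "short_term": 0.5}
likelihoods = {
    "long_term":  {"generous_offer": 0.7, "tough_offer": 0.3},
    "short_term": {"generous_offer": 0.2, "tough_offer": 0.8},
}

belief = bayes_update(prior, likelihoods, "tough_offer")
print(belief)   # posterior mass shifts toward the short-term oriented type
```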

Lin et al.24 tested the QOAgent in several distinct domains and their results show that the QOAgent reaches more agreements and plays more effectively than its human counterparts, when the effectiveness is measured by the score of the individual utility. They also show that the sum of utilities is higher in negotiations when the QOAgent is involved, as compared to human-human negotiations. Thus, they assert, it is indeed possible to build an automated agent that can negotiate successfully with humans. However, it is also important to state that their agent has certain limitations. They assume there is a finite set of different agent types and thus their agent cannot generate a dynamic model (and perhaps a more accurate one) of the opponent. In addition, they have not shown whether their agent can also maintain high scores when matched with other automated agents, which is an important characteristic of open environment negotiations. Moreover, the QOAgent does not scale well when numerous offers are proposed, which can cause its performance to deteriorate.

Finally, we conclude with a description of a more complex type of agent that incorporates many features, far beyond the negotiation strategy itself.


The Virtual Human Agent

Kenny et al.19 describe work on virtual humans used for training interpersonal skills, such as negotiation, leadership, interviewing, and cultural training. Achieving this requires a large amount of research in many fields (such as knowledge representation, cognitive and emotional modeling, and natural language processing). Their intelligent agent is based on the Soar Cognitive Architecture, a symbolic reasoning system used to make decisions.

Traum et al.37 discuss the negotiation strategies of the virtual human agent in more detail. They describe a set of strategies implemented by the agent (for example, when to act aggressively if it seems that the current outcome will incur a negative utility, or when to find the appropriate issue on which to negotiate at the moment). The strategy chosen each time is influenced by several factors: the control the agent has over the negotiations, the estimated utility of an outcome and the estimated best achievable utility, the trust the agent places in the opponent, and the commitment of all agents to the given issues. The virtual agent also tries to model the opponent by reasoning about its mental state.
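A toy strategy selector driven by such factors might look like the sketch below. The thresholds and strategy names are assumptions made purely for illustration and do not reflect Traum et al.'s implementation.

```python
def choose_strategy(control, est_utility, best_utility, trust):
    """Pick a negotiation stance from situational factors (control and trust in [0, 1])."""
    if est_utility < 0 and control < 0.5:
        return "avoid"          # expecting a loss with little influence over the outcome
    if trust < 0.3:
        return "build_trust"    # address trust before negotiating substance
    if best_utility - est_utility > 0.4:
        return "press"          # large headroom between current and best outcome
    return "settle"             # current outcome is already close to the best

print(choose_strategy(control=0.7, est_utility=0.3, best_utility=0.9, trust=0.6))  # press
```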

Traum et al. tested their agents in several negotiation scenarios. One of these scenarios is a simulation in which soldiers practice and conduct bilateral engagements with virtual humans in situations where culture plays an important role. In this case, the different actions can be selected from a menu that includes appropriate questions based on the history of the simulation thus far. The second domain requires trainees to communicate with an embodied virtual human doctor to negotiate and convince him to move a clinic, located in the middle of a war zone, out of harm’s way (see Figure 2). The prototypes are continuously tested with cadets and civilians. Traum et al. are more concerned with the system as a whole, and thus they do not provide insights with respect to the proficiency of their automated negotiator. Regarding the environment, they state that the subjects enjoy using the system for negotiations and that it also allows them to learn from their mistakes.

Traum et al. also report some of the existing limitations of their system. Currently, the virtual agent cannot consider arbitrary offers made by a human negotiator. In addition, more strategies are required to better cover the environment’s rich settings. They also state that the negotiation problem could be addressed more in depth (following other researchers who have focused mainly on negotiation), rather than in breadth (as presently done in their system).


The Rule of Thumb for Designing Automated Agents

We should probably begin with the conclusion. Despite the title of this section, there may not be a good rule of thumb for designing automated agents that negotiate with human negotiators. The accompanying table summarizes the main contributions made by each of the reviewed agents. If we look into the design elements of all the agents mentioned in this article, we cannot find one specific feature that connects them or can account for their good negotiation skills. Nonetheless, we can note several features that recur across several agents. Agent designers might take these features into consideration when designing their automated agent, while also taking into account the settings and the environment in which their agent will operate.

The first feature is randomization, which was used in Diplomat, QOAgent, and also (though not explicitly) in the Cliff-Edge agents. The randomization factor allows these agents to be more resilient (or robust) to adversaries that try to manipulate them to gain better results on their part. In addition, it allows them to be more flexible, rather than strict, in accepting agreements and ending negotiations.

The second feature can be viewed as a concession strategy. Both the AutONA agent and the Guessing Heuristic agent implemented this strategy, which influenced the offer-generation mechanism of their agent. A concession strategy might also have a psychological effect on the opponent that would make it more comfortable for the opponent to accept agreements or to make concessions on his own as well.
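A time-dependent concession schedule of the kind used as an ingredient in offer generation can be written compactly; the curve shape and parameters below are illustrative assumptions.

```python
def concession_target(t, deadline, u_max=1.0, u_min=0.4, beta=1.0):
    """Utility demanded at time t, falling from u_max to u_min by the deadline.
    beta < 1 concedes late (tougher); beta > 1 concedes early (softer)."""
    fraction = min(1.0, t / deadline) ** beta
    return u_max - (u_max - u_min) * fraction

for t in (0, 5, 10):
    print(t, round(concession_target(t, deadline=10), 2))   # 1.0, 0.7, 0.4
```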

The last feature common in several agents is the use of a database. The database can be built on previous interactions with the same human opponent or for all opponents. The agent consults the database to better model the opponent, to learn about possible behaviors and actions and to adjust its behavior to the specific opponent. A database of the history can also be used to obtain information about the behavior of the opponents, if such information is not known, or cannot be characterized, in advance.

Lastly, though not exactly a feature, it is worth mentioning that none of the agents we reviewed implemented equilibrium strategies. This is an interesting observation and is most likely due to the fact that such strategies have been shown to perform poorly when implemented in automated negotiators matched with human negotiators, mainly because of the complex environment and the bounded rationality of people. In some cases,21 experiments have shown that when the automated agent follows its equilibrium strategy, the human negotiators who negotiate with it become frustrated, mainly because the automated agent repeatedly proposes the same offer, and the negotiation often ends with no agreement. This has been shown even in cases in which the complexity of finding the equilibrium is low and the players have full information.


Conclusion

In this article we presented the challenges and current state-of-the-art automated solutions for proficient negotiations with humans. Nonetheless we do not claim that all existing solutions have been summarized in this article. We briefly state the importance of automated negotiators and propose suggestions for future work in this field.

The importance of designing an automated negotiator that can negotiate efficiently with humans cannot be overstated, and we have shown that it is indeed possible to design such negotiators. By pursuing non-classical methods of decision making and a learning mechanism for modeling the opponent, it is possible to achieve greater flexibility and more effective outcomes. As we have shown, this can also be accomplished without constraining the model to a single domain.

Many automated negotiation agents are not intended to replace humans in negotiations, but rather to serve as efficient decision-support tools or as training tools for negotiations with people. Thus, such agents can be used to support training for real-life negotiations, such as e-commerce and electronic negotiations (e-negotiations), and they can also be used as the main tool in conventional lectures or online courses aimed at turning the trainee into a better negotiator.

To date, it seems that research in AI has neglected the issue of proficiently negotiating with people in favor of designing automated agents aimed at negotiating with rational agents or other automated agents.39 Others have focused on improving different heuristics and strategies and on the analysis of game-theoretic aspects (for example, Kraus20 and Muthoo26). These are nonetheless important aspects in which the AI community has certainly made an impact. Unfortunately, not much progress has been made with regard to automated negotiation with people, leaving many challenges unaddressed.


Suggestions for Future Research

The work is far from complete and the challenges remain exciting. To entice the reader, we list a few of these challenges here:

The first challenge is to enrich the negotiation language. Many researchers restrict themselves to the basic model of alternating offers, in which the language consists of offers and counteroffers alone. Rich and realistic negotiations, however, consist of other types of actions (for example, threats, comments, promises, and queries), as well as simultaneous actions (that is, each agent can perform up to M > 0 interactions with the other party in each time period). It is essential that these actions and behaviors be modeled in automated negotiators to allow better negotiations with human negotiators.
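A richer negotiation vocabulary can be represented as typed messages, as in the sketch below; the particular set of performatives is an illustrative assumption.

```python
from dataclasses import dataclass
from enum import Enum, auto

class Performative(Enum):
    OFFER = auto()
    COUNTER_OFFER = auto()
    QUERY = auto()        # ask about the other party's preferences
    PROMISE = auto()      # unenforceable commitment
    THREAT = auto()       # e.g., "I will opt out unless ..."
    COMMENT = auto()      # cheap talk

@dataclass
class Message:
    sender: str
    performative: Performative
    content: dict

msg = Message("agent", Performative.PROMISE,
              {"issue": "delivery", "value": "immediate"})
print(msg)
```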

Another challenge, also discussed previously, is the need for a general-purpose automated negotiator. With the vast number of applications and domains, automated agents cannot be restricted to one single domain and must be adaptable to different settings. The trade-off between the performance of a general-purpose automated negotiator and that of a domain-dependent negotiator should be considered, and methods for improving the efficacy of a general-purpose negotiator should be sought. Achieving this will also make it feasible to compare different automated agents when matched with people. Preliminary work on this facet is already under way by Hindriks et al.13 and Oshrat et al.;28 however, we believe the aspect of generality should be addressed further by researchers. In this respect, metrics should be designed to allow a comparison between agents. To achieve this, some of the questions described earlier regarding “what constitutes a good negotiator agent?” should be answered as well.

In addition, argumentation, though dealt with in the past, still poses a challenge for researchers in this field. For example, about 10 years ago Kraus et al.23 presented argumentation as an iterative process emerging from exchanges among agents to persuade each other and bring about a change in intentions. They developed a formal logic that serves as a basis for an axiomatization system for argumentation. In particular, Kraus et al. identified argumentation categories in human negotiations and demonstrated how the logic can be used to specify argument formulations and evaluations. Finally, they implemented an agent based on the logical model.

However, this agent was not matched with human negotiators. Moreover, several open research questions remain regarding how to integrate an argumentation model into automated negotiators. Since the argumentation module is based on logic and is thus time consuming, a more efficient approach should be used. In addition, the argumentation model relies on a very complex model of the opponent, which should be reconciled with the automated negotiator’s own model of the opponent. To facilitate the design, a mapping between the logical model and the utility-based model is required.

To conclude, in recent years the field of automated negotiators that can proficiently negotiate with human players has received much needed focus and the results are encouraging. We presented several of these automated negotiators and showed it is indeed possible to design such proficient agents. Nonetheless, there are still challenges that pose interesting research questions that must be pursued and exciting work is still very much in progress.


Acknowledgments

We thank David Sarne, Ya’akov (Kobi) Gal, and the anonymous referees for their helpful remarks and suggestions.


Figures

F1 Figure 1. Variations of the negotiation settings.

F2 Figure 2. Example of virtual humans’ negotiations.

F3 Figure 3. Architecture of a general agent’s design.

F4 Figure 4. The Diplomacy game.

F5 Figure 5. The Colored-Trail game screenshot.


Tables

UT1 Table. Main contributions of each agent.


    1. Bargaining negotiations course; https://www.irwaonline.org/eweb/dynamicpage.aspx?webcode=205 (2008).

    2. Bolton, G. A comparative model of bargaining: Theory and evidence. American Economic Review 81, 5 (1991), 1096–1136.

    3. Byde, A., Yearworth, M., Chen, Y.-K., and Bartolini, C. AutONA: A system for automated multiple 1-1 negotiation. In Proceedings of the 2003 IEEE International Conference on Electronic Commerce (2003), 59–67.

    4. Charness, G. and Rabin, M. Understanding social preferences with simple tests. The Quarterly Journal of Economics 117, 3 (2002), 817–869.

    5. Chavez, A. and Maes, P. Kasbah: An agent marketplace for buying and selling goods. In Proceedings of the first international Conference on the Practical Application of Intelligent Agents and Multi-Agent Technology (1996), 75–90.

    6. Erev, I. and Roth, A. Predicting how people play games: Reinforcement learning in experimental games with unique, mixed strategy equilibrium. American Economic Review 88, 4 (1998), 848–881.

    7. Farrell, J. and Rabin, M. Cheap talk. Journal of Economic Perspectives 10, 3 (1996), 103–118.

    8. Ficici, S. and Pfeffer, A. Modeling how humans reason about others with partial information. In Proceedings of the 7th International Conference on Autonomous Agents and Multiagent Systems (2008), 315–322.

    9. Fisher, R. and Ury, W. Getting to Yes: Negotiating Agreement without Giving In. Penguin Books, 1991.

    10. Gal, Y., Pfeffer, A., Marzo, F. and Grosz, B.J. Learning social preferences in games. In Proceedings of the National Conference on Artificial Intelligence (2004), 226–231.

    11. Grossklags, J. and Schmidt, C. Software agents and market (in) efficiency: a human trader experiment. IEEE Transactions on Systems, Man, and Cybernetics, Part C: Applications and Reviews 36, 1 (2006), 56–67.

    12. Grosz, B., Kraus, S., Talman, S. and Stossel, B. The influence of social dependencies on decision-making: Initial investigations with a new game. In Proceedings of 3rd International Joint Conference on Multiagent Systems (2004), 782–789.

    13. Hindriks, K., Jonker, C. and Tykhonov, D. Towards an open negotiation architecture for heterogeneous agents. In Proceedings for the 12th International Workshop on Cooperative Information Agents. LNAI, 5180 (2008), Springer, NY, 264–279.

    14. Hoppman, P.T. The Negotiation Process and the Resolution of International Conflicts. University of South Carolina Press, Columbia, SC, May 1996.

    15. Jonker, C.M., Robu, V., and Treur, J. An agent architecture for multi-attribute negotiation using incomplete preference information. Autonomous Agents and Multi-Agent Systems 15, 2 (2007), 221–252.

    16. Katz, R. and Kraus, S. Efficient agents for cliff-edge environments with a large set of decision options. In Proceedings of the 5th International Conference on Autonomous Agents and Multi-Agent Systems (2006), 697–704.

    17. Katz, R. and Kraus, S. Gender-sensitive automated negotiators. In Proceedings of the 22nd National Conference on Artificial Intelligence (2007), 821–826.

    18. Keeney, R. and Raiffa, H. Decisions with Multiple Objective: Preferences and Value Tradeoffs. John Wiley, NY, 1976.

    19. Kenny, P., Hartholt, A., Gratch, J., Swartout, W., Traum, D., Marsella, S. and Piepol, D. Building interactive virtual humans for training environments. In Proceedings of Interservice/Industry Training, Simulation and Education Conference (2007).

    20. Kraus, S. Strategic Negotiation in Multiagent Environments. MIT Press, Cambridge, MA, 2001.

    21. Kraus, S., Hoz-Weiss, P., Wilkenfeld, J., Andersen, D.R., and Pate, A. Resolving crises through automated bilateral negotiations. Artificial Intelligence 172, 1 (2008), 1–18.

    22. Kraus, S. and Lehmann, D. Designing and building a negotiating automated agent. Computational Intelligence 11, 1 (1995), 132–171.

    23. Kraus, S., Sycara, K., and Evenchik, A. Reaching agreements through argumentation: a logical model and implementation. Artificial Intelligence 104, 1–2 (1998), 1–68.

    24. Lin, R., Kraus, S., Wilkenfeld, J. and Barry, J. Negotiating with bounded rational agents in environments with incomplete information using an automated agent. Artificial Intelligence 172, 6–7 (2008), 823–851.

    25. McKelvey, R.D. and Palfrey, T.R. An experimental study of the centipede game. Econometrica 60, 4 (1992), 803–836.

    26. Muthoo, A. Bargaining Theory with Applications. Cambridge University Press, Cambridge, UK, 1999.

    27. Online negotiation course; http://www.negotiate.tv/ (2008).

    28. Oshrat, Y., Lin, R., and Kraus, S. Facing the challenge of human-agent negotiations via effective general opponent modeling. In Proceedings of the 8th International Conference on Autonomous Agents and Multiagent Systems (2009).

    29. Raiffa, H. The Art and Science of Negotiation. Harvard University Press, Cambridge, MA, 1982.

    30. Rasmusen, E. Games and Information: An Introduction to Game Theory. Blackwell Publishers, 2001.

    31. Ross, W. and LaCroix, J. Multiple meanings of trust in negotiation theory and research: A literature review and integrative model. International Journal of Conflict Management 7, 4 (1996), 314–360.

    32. Rubinstein, A. Perfect equilibrium in a bargaining model. Econometrica 50, 1 (1982), 97–109.

    33. Rubinstein, A. A bargaining model with incomplete information about preferences. Econometrica 53, 5 (1985), 1151–1172.

    34. Sanfey, A., Rilling, J., Aronson, J., Nystrom, L., and Cohen, J. The neural basis of economic decision-making in the ultimatum game. Science 300 (2003), 1755–1758.

    35. Selten, R. and Stoecker, R. End behavior in sequences of finite prisoner's dilemma supergames: A learning theory approach. Economic Behavior and Organization 7, 1 (1986), 47–70.

    36. Tennenholtz, M. On stable social laws and qualitative equilibrium for risk-averse agents. In Proceedings of the 5th International Conference on Principles of Knowledge Representation and Reasoning (1996), 553–561.

    37. Traum, D., Marsella, S., Gratch, J., Lee, J., and Hartholt, A. Multi-party, multi-issue, multi-strategy negotiation for multi-modal virtual agents. In Proceedings of the 8th International Conference on Intelligent Virtual Agents, 2008.

    38. Tversky, A. and Kahneman, D. The framing of decisions and the psychology of choice. Science 211 (1981), 453–458.

    39. Wellman, M.P., Greenwald, A., and Stone, P. Autonomous Bidding Agents: Strategies and Lessons from the Trading Agent Competition. MIT Press, Cambridge, MA, 2007.

    40. Zhang, X., Lesser, V., and Podorozhny, R. Multidimensional, multistep negotiation for task allocation in a cooperative system. Autonomous Agents and Multi-Agent Systems 10, 1 (2005), 5–40.

    a. Although Jonker et al. discuss and present results on one domain only, they state their model is generic and has also been applied in other domains.

    This research is based upon work supported in part by the U.S. Army Research Laboratory and the U.S. Army Research Office under grant number W911NF-08-1-0144 and under NSF grant 0705587.

    DOI: http://doi.acm.org/10.1145/1629175.1629199
