
Deception, Identity, and Security: The Game Theory of Sybil Attacks

Classical mathematical game theory helps to evolve the emerging logic of identity in the cyber world.

“When the world is destroyed, it will be destroyed not by its madmen but by the sanity of its experts and the superior ignorance of its bureaucrats.”
    — John le Carré


Key Insights

  • Cyber systems have reshaped the role of identity. The low cost to mint cyber identities facilitates greater identity fluidity. This simplicity provides a form of privacy via anonymity or pseudonymity by disguising identity, but also hazards the proliferation of deceptive, multiple, and stolen identities.
  • With growing connectivity, designing the verification/management algorithms for cyber identity has become complex and requires examining what motivates such deception.
  • Signaling games provide a formal mathematical way to analyze how identity and deception are coupled in cyber-social systems. The game theoretic framework can be extended to reason about dynamical system properties and behavior traces.

Decades before the advent of the Internet, Fernando António Nogueira Pessoa assumed a variety of identities with the ease that has become common in cyber-social platforms—those where cyber technologies play a part in human activity (for example, online banking, and social networks). Pessoa, a Portuguese poet, writer, literary critic, translator, publisher, and philosopher, wrote under his own name as well as 75 imaginary identities. He would write poetry or prose using one identity, then criticize that writing using another identity, then defend the original writing using yet another identity. Described by author Carmela Ciuraru as “the loving ringmaster, director, and traffic cop of his literary crew,” Pessoa is one of the foremost Portuguese poets and a contributor to the Western canon. The story of Pessoa illustrates a key insight that holds true for the cyber-social systems of today: Identity costs little in the way of minting, forming, and maintaining yet demands a high price for its timely and accurate attribution to physical agency.

Along with the low cost of minting and maintaining identities, the lack of constraints on using them is a primary factor that facilitates adversarial innovations relying on deception. With these factors in mind, we study the following problem: Is it possible to engineer a decentralized system that enforces honest use of identity via mutual challenges and costly consequences when challenges fail? A successful approach would remedy a currently deteriorating situation without requiring new infrastructure. For example, such a system should be able to reduce fake personas in social engineering attacks, malware that mimics the attributes of trusted software, and Sybil attacks that use fake identities to penetrate ad hoc networks.

Note that many cyber-physical facilities—those where a physical mechanism is controlled or monitored by computer algorithms and tied closely to the internet and its users (for example, autonomous cars, medical monitoring)—also aim to enable users to remain anonymous and carry out certain tasks with only a persistent but pseudonymous identity. This form of short-term identity (especially in the networks that are ad hoc, hastily formed, and short lived) can remain uncoupled from a user’s physical identity and allow them to maintain a strong form of privacy control. How can this dichotomy, namely trading off privacy for transparency in identity, be reconciled? The emerging logic underlying identity (what types of behaviors are expected, stable, possible) will also be central to avoiding many novel and hitherto unseen, unanticipated, and unanalyzed security problems.

Our approach is founded upon traditional mathematical game theory, but is also inspired by several mechanisms that have evolved in biology. Here, we analyze a game theoretic model that evolves cooperative social behavior, learning, and verification to express the strategy of costly signaling. We further suggest this could scale within cyber-social systems.

Road map. Our approach starts with mathematical game theory to analyze decisions concerning identity. Central to the dilemma are privacy and intent, and these notions are captured with information asymmetry (for example, an agent’s true identity vs. the agent’s purported identity) and utility (that is, the agent’s preference of identity use). We argue this scenario is best captured with a classical signaling game, a dynamic Bayesian two-player game, involving a Sender who (using a chosen identity) signals a Receiver to act appropriately. With the identity signaling game defined, the communication among agent identities is a repeated signaling game played among peers. Throughout communications, agents remain uncertain of both the strategies implemented by other identities and the true physical agent controlling those identities. We treat the population of agents as dynamic (that is, allowing agents to be removed from the population and be replaced by mutants who use modified strategies) and rational (allowing them to preferentially seek greater payoff). By specifying the procedures of direct and vicarious learning we construct a dynamical system familiar to evolutionary game theory. However, we control the parameters in this system associated with evolution rates. Using these building blocks we synthesize models in order to create population simulations and empirically evaluate Nash and weaker equilibria. We present experiments that focus on how ad hoc information flows within networks and examine mechanisms that further stabilize cooperative equilibria. Results are presented and conclusions drawn by outlining the design of cooperativity-enhancing technologies and how such mechanisms could operate in the open and among deceptive types.

Motivation. Novel ad hoc network-communication techniques (for example, networks formed hastily in the wake of a disaster or dynamically among a set of nearby vehicles) purposefully blur the boundaries between the cyber and the physical for the benefit of cohesion. Within these innovations, security concerns have centered on identity deception.26 Here, we motivate our game theoretic models via illustrative examples from wireless ad hoc networks (WANETs) and hastily formed networks (HFNs) for humanitarian assistance (see the sidebar “WANETS and Hastily Formed Networks”) under Sybil attacks. A Sybil attack involves forging identities in a peer-to-peer network to subvert its reputation system and is named after the character Sybil Dorsett, a person diagnosed with dissociative identity disorder. Within the framework of game theory, the Sybil attack is viewed in terms of how agents reason and deliberate under uncertainty and how they control deception in an information-asymmetric setting (see “Defining Deception” for a definition of game theoretic deception). Looking to the future, as the distinction between the cyber and the physical fades, attacks such as these will very likely pose existential threats to our rapidly growing cyber-physical infrastructure. Hence, there is urgency to the problem.


Conceptual Building Blocks

Here, we construct the signaling game theory of identity within cyber-social systems. The effects of repeated play and evolutionary dynamics provide the conditions under which the theory admits equilibria.

Agency, identity, and signaling. An agent is a decision maker informed by various faculties. In our setting, an agent’s utility models preferences related to the possible use of pseudonymous identity and to actions taken upon receiving information from other pseudonymous identities.

For example, in the WANET setting the network nodes act as identities, themselves a proxy to the root physical agent controlling them. Thus, a physical agent who constructs a deception via a screen manages a Sybil node: the node’s physical agent appears unknown, murky, or rooted elsewhere.

To create convincing fake identities, a root agent must maintain the act when challenged. One approach to designing costly signaling within cyber-social systems is to add risk to the decisions required to maintain fake identities. We use the term M-coin to represent assets held at risk when an agent’s identity is challenged. For example, the bio-inspired protocol detailed in Casey et al.6 and simplified in the sidebars “Costly Signaling” and “Ant Colonies” imposes costly signaling with a digital form of the ant’s Cuticular Hydrocarbon Chemicals (CHCs). Analogously, M-coins, encoded digitally but constrained like CHCs, aim to have similar effects on the utility and identity of nodes within a WANET.

The game. Traditional mathematical game theory23,35 models scenarios where outcomes depend on multiple agent preferences. Not all outcomes are alike; under various conditions some outcomes feature greater stability (that is, non-deviation)24,25 and are computable.16,17,27 Interesting game scenarios yield differing rewards to agents depending on outcome. Thus, agents evaluate scenarios insofar as common and private knowledge allows, and they act rationally (that is, to optimize utility) by selecting their own action in the context of how other agents act. As the case of Pessoa’s creative use of identities suggests, private knowledge is important in shaping outcomes.

To accommodate these types of scenarios, game theory has developed a branch of models known as incomplete/partial information games,22,30 of which the Lewis signaling game is one example.4,14,19,31,34 Signaling games have been studied in diverse contexts, including economics and biology,1,13,18,20,29,33,36 particularly for evaluating the stability of honest signaling when agents have a partially common interest and where the role of costly signaling and credible deterrence is widely recognized. Applications to cybersecurity are addressed in these references.6,7,8,9,10,11,12 The simplest such signaling game involving identity focuses on the possibility that during an encounter, a sender node S may use a strategic deception by claiming either a fabricated identity or making a malicious attempt to impersonate another’s identity. Within a WANET we will consider two natural types of nodes TC and TD to indicate respectively a cooperative node that employs no deceptions (preserving the desired systemwide properties of identity management), and a deceptive node that directly employs a deception. In either case, the node will communicate a signal to a receiver node R including a status of c to indicate it is cooperative with respect to system security, or a status of d to indicate anomalous behavior (such as compromised status). A receiver node R, given the signal of a sender node S but unaware of the sender node’s true type, must select an action to take.

One option for the receiver is to simply trust the sender node, denoted as t; alternatively, the receiver node may pose a challenge action, denoted as a, which creates an attempt to reveal the sender’s nature and leads to costly outcomes for deception. While any individual challenge may not reveal completely the nature of a sender, repeated challenges may eventually expose Sybil identities, as senders who are frequently challenged are under pressure to manage their resources for verifying their identity.

We sketch the outcomes of an encounter scenario graphically with an extensive-form game tree illustrated in Figure 2. Starting in the center, the sender S has type TC (cooperative) or TD (deceptive). Next, the sender selects a signal c (cooperative) or d (otherwise); the receiver selects an action t (trust) or a (challenge). We explore the outcomes and payoffs for identity as illustrated in the accompanying table.

Figure 1. Identity: Trust or verify.

Figure 2. Extensive form games.

Outcomes. Outcome o1 describes a sender S that is cooperative by nature and offers a nominal proof of identity to the receiver R. The receiver R then trusts S and acts upon the information provided, for example, relaying the communicated message.

Outcome o2 describes a scenario like o1, except the receiver R challenges S to provide a more rigorous proof of identity. In this case, given the cooperative nature of the sender, the challenge is unnecessary, netting cost burdens to maintaining a trusted network.

Outcome o3 describes a cooperative sender S not willing (or able) to offer a nominal proof of identity (for example, after being repeatedly but maliciously challenged by “suspicious” receivers to the point of insolvency).a The receiver R nonetheless trusts S, and in this case the exchange is altruistic, helping to recover a trustworthy node in distress.

For brevity, we describe only one more outcome here. Outcome o5 describes a sender S that is deceptive but offers a nominal proof of identity. The receiver R trusts S and acts upon the information; the receiver’s misguided trust of the deceptive identity is costly.

Signaling games involve asymmetric information constraints for the receiver; without the sender’s type, the receiver cannot distinguish outcome o1 from o6, nor o3 from o8. By selecting the challenge action, the receiver exchanges additional resource cost to partially distinguish among these outcomes. From the point of view of a trustworthy network, we summarize outcomes {o1, o3} as naturally supporting, while {o5, o7} are the most destructive; outcomes {o2, o4} add unnecessary cost, and {o6, o8}, although they add cost, are necessary and effective recourse given deceptive types.

The payoff structure of the table depends on four parameters. We let A be the reward extracted by the deceptive sender at the loss of the trusting receiver;b let B be the benefit enjoyed by both sender and receiver nodes acting cooperatively in message passing; let C be the cost of challenging a node for additional proof concerning its identity without knowing the sender’s type; and let D be the imputed cost to the sender for being deceptive (when identified by a receiver’s challenge).
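
To make this bookkeeping concrete, the following Python sketch encodes one plausible reading of the payoff table, keyed by (sender type, signal, receiver action). Only the relationships stated in the prose (mutual benefit B, zero-sum transfer A, challenge cost C, and deception penalty D) are taken from the text; the entries marked as assumed are illustrative and may differ from the published table.

```python
# Illustrative payoff bookkeeping for the one-shot identity signaling game.
# Outcomes o1..o8 are keyed by (sender_type, signal, receiver_action).

A, B, C, D = 4.0, 0.5, 0.5, 4.0  # parameter values used later in the experiments

# (sender payoff, receiver payoff)
PAYOFF = {
    ("Tc", "c", "t"): (B, B),      # o1: honest exchange, mutual benefit
    ("Tc", "c", "a"): (0.0, -C),   # o2: unnecessary challenge (consistent with the worked example below)
    ("Tc", "d", "t"): (B, B),      # o3: altruistic trust of a distressed node (assumed)
    ("Tc", "d", "a"): (0.0, -C),   # o4: unnecessary challenge (assumed)
    ("Td", "c", "t"): (A, -A),     # o5: trusted deception; zero-sum transfer to the sender
    ("Td", "c", "a"): (-D, -C),    # o6: challenged deception, sender penalized (assumed)
    ("Td", "d", "t"): (A, -A),     # o7: trusted deception
    ("Td", "d", "a"): (-D, -C),    # o8: challenged deception, sender penalized (assumed)
}

def payoff(sender_type: str, signal: str, action: str):
    """Return (sender utility, receiver utility) for one encounter."""
    return PAYOFF[(sender_type, signal, action)]

if __name__ == "__main__":
    # The receiver cannot condition on the sender's type, only on the signal.
    print(payoff("Td", "c", "t"))  # (4.0, -4.0): misguided trust is costly
```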

Repeated games and strategy. Repeated interactions occur as a sequence of plays between two identities. While in classical signaling games there is little need to distinguish between identity and agent, here we highlight identity fluidity: an identity or cyber asset can be usurped by another agent. Games are played between two identities, and identities are bound to physical agents (the resident decision control at the time of play). Agent types remain fixed by nature, but in subsequent plays the control of an identity can pass from one agent to another; the type then changes accordingly. Our model is intended to explore this type of perturbation so that cybersecurity issues such as Sybil attacks (where identities are stolen or fabricated) can be adequately expressed and tested for their ability to destabilize a desired equilibrium.

Figure. Outcome labels, payoff, transaction costs, and DFA codes for identity management signal game.

To accommodate this, we encode strategies for repeated games as deterministic finite automata (DFA) and allow the population to change over time (for example, through the invasion of mutants). The DFA strategy space offers a vast yet reachable space of dynamic strategic structures. This provides the means to explore the uses of identity in repeated signaling interactions.

The DFA state codes noted in the table determine the (type, signal) of a sender’s controlling agent, or the action as receiver. Each DFA encounter determines a sequence of outcomes as illustrated in the example that follows. Consider the strategy of Figure 3(c) as sender matched against the strategy of Figure 3(d) as receiver, with a transaction budget of two units. The sender starts in state s1, and the receiver starts in state s3; they play at the cost of one unit against the transaction budget. Note that the discount for deception will entail additional communication efforts. Next, the sender transitions to state s7 by following the s3-labeled transition, and the receiver loops back to state s3; they both play at the cost of a half unit since state s7 uses deception. Next, the sender transitions to state s1 while the receiver transitions to state s6 to exhaust the transaction budget and complete the game. The computed outcome sequence is o1, o7, o2, resulting in a sender aggregate utility of (A + B) and a receiver aggregate utility of (B – (A + C)).
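
A minimal sketch of these repeated-game mechanics follows, assuming a hypothetical DFA encoding in which each state emits either a (type, signal) pair (sender role) or an action (receiver role), with transitions driven by the opponent’s last move. It reuses the PAYOFF table from the previous sketch; the budget accounting (one unit per round, half a unit for deceptive states) mirrors the worked example but is not taken from the article’s implementation.

```python
from dataclasses import dataclass
from typing import Dict, Tuple

@dataclass
class DFA:
    """A finite-state strategy (hypothetical encoding).

    emit maps a state to the move it plays: a (type, signal) pair for a sender
    state, or a one-element tuple holding the action for a receiver state.
    delta maps (state, opponent's last move) to the next state.
    """
    start: str
    emit: Dict[str, Tuple[str, ...]]
    delta: Dict[Tuple[str, str], str]

def play(sender: DFA, receiver: DFA, budget: float = 2.0) -> Tuple[float, float]:
    """Play repeated signaling encounters until the transaction budget is exhausted."""
    s, r = sender.start, receiver.start
    u_s = u_r = 0.0
    while budget > 0:
        s_type, signal = sender.emit[s]
        (action,) = receiver.emit[r]
        ps, pr = PAYOFF[(s_type, signal, action)]   # table from the previous sketch
        u_s, u_r = u_s + ps, u_r + pr
        # Deceptive states consume less budget per round (the "discount for
        # deception" in the worked example); this accounting is assumed.
        budget -= 0.5 if s_type == "Td" else 1.0
        s = sender.delta[(s, action)]               # sender reacts to the observed action
        r = receiver.delta[(r, signal)]             # receiver reacts to the observed signal
    return u_s, u_r
```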

Figure 3. Evolutionary games and dynamics.

Evolutionary strategy. Evolutionary game theory models a dynamic population of agents capable of modifying their strategy and predicts population-level effects.2,3,5,19,32 Formally, evolutionary games are a dynamic system with stochastic variables. The agents in evolutionary games may (both individually and collectively) explore strategy structures directly (via mutation and peer-informed reselection), and they may exploit strategies where and when competitive advantages are found.

To implement this system, the time domain is divided into intervals called generations. The system is initialized by fixing a finite set of agents and assigning each agent a strategy determined with a seeding probability distribution. During a generation, pairs of agents will encounter one another to play repeated signaling games; the encounters are determined by an encounter distribution. At the completion of a generation, agents evaluate rewards obtained from their implemented strategies. This evaluation results in their performance measure. Next, performance measures are compared within a set of peer agents that cooperate to inform each agent’s reselection stage. During the reselection stage, agents determine a strategy to use in the next generation, as achieved by a boosting probability distribution that preferentially selects strategies based on performance. After reselection, some agents are mutated with a mutation probability distribution. This step completes the generation and establishes the strategies implemented during the next generation.
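
One generation of this loop can be sketched schematically as follows; run_generation, boost_reselect, and mutate are hypothetical names, and the mutation probability is illustrative.

```python
import random

def run_generation(agents, encounter_pairs, boost_reselect, mutate,
                   budget=2.0, mutation_prob=0.01):
    """One schematic generation of the evolutionary signaling game.

    agents:          dict agent_id -> DFA strategy
    encounter_pairs: (sender_id, receiver_id) pairs drawn from the encounter distribution
    boost_reselect:  reselection rule (for example, the split-boosting rule described later)
    mutate:          stochastic perturbation of a DFA strategy
    """
    performance = {i: 0.0 for i in agents}

    # 1. Encounters: pairs of agents play repeated signaling games.
    for s_id, r_id in encounter_pairs:
        u_s, u_r = play(agents[s_id], agents[r_id], budget)  # from the previous sketch
        performance[s_id] += u_s
        performance[r_id] += u_r

    # 2. Reselection: agents preferentially adopt well-performing peer strategies.
    agents = boost_reselect(agents, performance)

    # 3. Mutation: a few agents perturb their strategy before the next generation.
    for i in list(agents):
        if random.random() < mutation_prob:
            agents[i] = mutate(agents[i])

    return agents, performance
```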


The agents evolve discrete strategic forms (DFA); a strategic mutation network is graphed in Figure 3(e) to provide a sense of scale. The dynamic system thus evolves a population measure over strategies. Within the WANET, nodes freely mutate, forming deceptive strategies as often as they augment cooperative ones. Evolutionary games allow us to elucidate the stability and resilience of various strategies arising from mutations and a selection process ruled by non-cooperation and rationality.

We augment the basic structure of reselection by considering carefully how strategic information is shared. Upon noticing that deceptive and cooperative strategies differ fundamentally in their information asymmetric requirements, we introduce a technique referred to as split-boosting, which modulates the information flow components of the network.

Recreate by split-boosting. During the Recreate phase, agents select strategies preferentially by comparing performance measured only among a set of agents that share this pooled information.

Splitting the set of agents into components, we limit the boosting to include only strategies available within the component. Within a component (subset) S, let v_i be the performance measure for the strategy used by agent i ∈ S. Letting v_min = min_{i ∈ S} v_i and v_max = max_{i ∈ S} v_i, we can safely transfer the performance measures to the interval [0, 1] as the limit of the fractional transformation:

  ṽ_i = (v_i − v_min + ζ) / ((v_max − v_min) + ζ + η).

The term η simply prevents division by zero, and the term ζ is a statistical shrinkage term used as a model parameter that helps to distort the global information available to agents when they reselect a strategy.

We describe the probability that agent i ∈ S switches over to use the strategy that agent j ∈ S previously implemented as

  Pr[i → j] = ṽ_j / Σ_{k ∈ S} ṽ_k.
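
A minimal sketch of this reselection rule, under the reconstruction of the transformation given above (the parameter defaults are illustrative and the function name is hypothetical):

```python
def split_boost(component, performance, zeta=0.1, eta=1e-9):
    """Switching probabilities within one component S of agents (schematic).

    Performance measures are rescaled toward [0, 1] with the fractional
    transformation above; an agent in S then adopts a peer's strategy with
    probability proportional to that peer's rescaled performance.
    """
    v = {i: performance[i] for i in component}
    v_min, v_max = min(v.values()), max(v.values())
    tilde = {i: (v[i] - v_min + zeta) / ((v_max - v_min) + zeta + eta) for i in v}
    total = sum(tilde.values())
    return {j: tilde[j] / total for j in component}  # Pr[an agent in S switches to j's strategy]
```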


Results

Under the signaling game theoretic model, we evaluate equilibrium concepts and their stability under evolutionary dynamics including mutant Sybil identities. We further specify the WANET case and its parameters to perform computer simulations yielding empirical measures of its behavior. Here, we focus on how validated and shared security information can ballast the desired equilibrium of honest signaling.

Models and simulations. To demonstrate simulation scalability, we used a laptop (with a 2GHz Intel core i7 processor and 8GB of RAM) to measure a simulation history (with 800 nodes and 1,000 generations). In eight minutes of user time over 16M rounds of play, 160K strategic mutations were explored; 125K of those mutations were found to be unique DFA strategy structures, and 36K employed deceptive identities. It was possible to discover a stable equilibrium where all agents reveal their identity honestly and act with the common knowledge of others revealing their identities honestly. Since mutating into a Sybil behavior is detectable by others and credibly punishable, the equilibrium is stable. Note also that the nature of cyber-social systems makes these systems amenable to empirical evolutionary studies in that model checking or other formal approaches would require “an intelligent designer” who could specify various global properties of the system. However, we do not rule out a role for statistical model checking in this and other similar mechanism design studies.

Experiments and empirical analysis. Our experiments consider a simple setting to illustrate the intuition that costly signaling and verified information flows among cooperative types can stabilize behavior in WANETs. More generally simulations (as a computational technique) can evaluate a variety of mechanisms and how they influence system behaviors.

Our major control in experiments examines how differing information pooling for cooperative vs. deceptive types leads to differing qualitative behavior outcomes. We consider a reference system S0 and reengineer it with a device to express improved information pooling among cooperative types to create alternate system S1. The systems feature the same competitive pressures and are identical in every way except in their implementation of the reselection step. Game parameters are A, B, C, D = 4, 0.5, 0.5, 4.0, with 800 network nodes and 400 generations. In both systems, the same seeding distribution initializes the simulations from a state where no nodes employ (immediately) deceptive or Sybil identities. From these initial conditions, mutations allow nodes to quickly use deceptive strategies and test their efficacy.

In the first system S0, all agents select strategies using common and identical information pooling. Therefore, both cooperative and deceptive types are treated alike, specifically with the same awareness to and distortions of pooled information guiding strategic exploration.

In the second system S1, agents select strategy with boosting split by type. Strategic information, once verified as cooperative, is offered to all agents with an openly shared common database of clean strategies. This modification enhances information for cooperative types while conversely imposing isolating effects for deceptive types. Also, in our simulations, the deceptive types maintain rationality, so when a deceptive strategy is found to be performing poorly (less than the cooperative group average), the agents abandon the deceptive strategy as being nonproductive, thereby coming clean and reselecting strategies from the shared database as the best survival option.
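
A schematic reading of the S1 reselection step is sketched below; reselect_S1, is_deceptive, and clean_db are hypothetical names, and the rule reflects our reading of the description above rather than the authors’ implementation. It reuses split_boost from the earlier sketch.

```python
import random

def reselect_S1(agents, performance, is_deceptive, clean_db):
    """Schematic reselection for system S1 (split-boosting by type).

    Verified cooperative strategies are pooled into an openly shared database;
    a deceptive agent that underperforms the cooperative group average 'comes
    clean' and draws a replacement strategy from that database.
    """
    coop_ids = [i for i in agents if not is_deceptive(agents[i])]
    if not coop_ids:                                   # degenerate case: nothing to pool
        return agents

    coop_avg = sum(performance[i] for i in coop_ids) / len(coop_ids)
    clean_db.extend(agents[i] for i in coop_ids)       # share verified cooperative strategies

    # Boost only within the cooperative pool (split-boosting).
    probs = split_boost(coop_ids, performance)
    ids, weights = list(probs), list(probs.values())

    new_agents = dict(agents)
    for i in agents:
        if is_deceptive(agents[i]):
            if performance[i] < coop_avg:              # deception is not paying off
                new_agents[i] = random.choice(clean_db)  # come clean
        else:
            new_agents[i] = agents[random.choices(ids, weights=weights)[0]]
    return new_agents
```

When plugged into the generation loop sketched earlier, the extra arguments (is_deceptive, clean_db) could be bound with functools.partial so that reselect_S1 matches the boost_reselect signature.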

In Figure 4 we show typical simulated traces for systems S0 and S1 plotting the proportion of population employing deceptive strategies (a crude estimation of deception as defined in the sidebar “Defining Deception”). The differing properties for information flows affecting reselections offer strong controls to stabilize the dynamic equilibrium favorable to cooperators. In S1 the advantages of deception are short-lived, and cooperative behaviors are promoted even when agents remain free to explore for niche use of deception.

Figure 4. Results.


Conclusion and Future Work

Several insights and contributions emerge from our experiments. One key insight is that challenging an agent in such a way that deceptive agents either fail the challenge or face greater risk can deter deception. Another key insight is that many instances where agents use deceptive identities in cyber-social systems are repeated games. When a deceptive identity succeeds, it will be used numerous times as there is no reason to abandon it after one interaction. Moreover, it is precisely the repeated interactions that are needed to develop trust. Thus, formalizing these insights we devised a mathematical game to model strategic interactions, while recognizing a possibility of permissive and malleable identities. With the dilemma between privacy and intent clarified formally in signaling games, we computationally considered various strategies such as those based in behavior learning and costly signaling. Our computational simulations uncovered several interesting information flow properties that may be leveraged to deter deception, specifically by enhancing the flow of information regarding cooperative strategies while reinforcing the cooperative group’s identity. Interestingly, this result indicates an identity management system, typically thought to hinge on the precision of true positives and astronomical unlikeliness of false-positive recognition, may rather critically depend on how learned behavior and strategic information can be shared.

Our computational experiment offers new insights for achieving strong deterrence of identity deception within ad hoc networks such as WANETs; however, much is left as future work. Our larger practical goal is M-coin, a design strategy and system for cooperation-enhancing technologies. M-coin may be thought of as an abstract currency guiding an open recommender-verification system that incorporates new agent types (to verify identities, behavior histories, and cooperative strategies as well as the consistency of distrusted information); the new types promote efficiencies supporting cooperative coalitions. The main step forward, as demonstrated here, is recognizing the effects of pooled and verified strategic information, and its flow constraints (as well as its capabilities to operate in the open). Vetted strategic information assists cooperators to rapidly adapt to and out-compete deceptive strategies.

Still, many challenges remain outstanding. The possibility of an agent not compelled by utility presents a problem, as that agent may persist within the network indefinitely to form effective attacks. Future work may focus on how the expression of rationality could be fortified for identities/nodes. Critically, deceptively minded actors will need to prefer a base level of utility, and this remains an open challenge (although the solution could lie in the many possibilities suggested by biological systems). Additionally, technologies supporting the tedious aspects of information gathering and validation must be aligned to user incentives.

Properly constructed recommender-verifier architectures could be used in WANETS, HFNs, and other fluid-identity cyber-social and cyber-physical systems to reliably verify private but trustworthy identities and limit the damage of deceptive attack strategies. Starting with WANETs, we motivate an elegant solution using formalisms we originally developed for signaling games. Nonetheless, we are encouraged by analogous biological solutions derived naturally under Darwinian evolution.

Acknowledgments. We thank the anonymous reviewers for their insightful comments. This material is based upon work funded and supported by U.S. Department of Defense Contract No. FA8702-15-D-0002 with Carnegie Mellon University Software Engineering Institute and New York University and ARO grant A18-0613-00 (B.M.). This material has been approved for public release and unlimited distribution, ref DM17-0409.

References

    1. Argiento R., Pemantle, R., Skyrms, B. and Volkov, S. Learning to signal: Analysis of a micro-level reinforcement model. Stochastic Processes and their Applications 119, 2 (2009), 373–390.

    2. Axelrod, R. An evolutionary approach to norms. American Political Science Review 80, 4 (1986), 1095–1111.

    3. Axelrod, R. The Evolution of Cooperation. Basic books, 2006.

    4. Banks, J. and Sobel, J. Equilibrium selection in signaling games. Econometrica: J. Econometric Society, (1987), 647–661.

    5. Binmore, K. and Samuelson, L. Evolutionary stability in repeated games played by finite automata. J. Economic Theory 57, 2 (1992), 278–305.

    6. Casey, W., Memarmoshrefi, P., Kellner, A., Morales, J.A. and Mishra, B. Identity deception and game deterrence via signaling games. In Proceedings of the 9th EAI Intern. Conf. Bio-inspired Information and Communications Technologies, 73–82.

    7. Casey, W., Morales, J.A. and Mishra, B. Threats from inside: Dynamic utility (mis) alignments in an agent-based model. J. Wireless Mobile Networks, Ubiquitous Computing, and Dependable Applications 7 (2016), 97–117.

    8. Casey, W., Morales, J.A., Nguyen, T., Spring, J., Weaver R., Wright, E., Metcalf, L. and Mishra, B. Cyber security via signaling games: Toward a science of cyber security. In Proceedings of the Intern. Conf. Distributed Computing and Internet Technology, 34–42.

    9. Casey, W., Morales, J.A., Wright, E., Zhu, Q. and Mishra, B. Compliance signaling games: Toward modeling the deterrence of insider threats. Computational and Mathematical Organization Theory 22, 3 (2016), 318–349.

    10. Casey, W., Weaver, R., Morales, J.A., Wright, E. and Mishra, B. Epistatic signaling and minority games, the adversarial dynamics in social technological systems. Mobile Networks and Applications 21, 1 (2016), 161–174.

    11. Casey, W., Wright, E., Morales, J.A., Appel, M., Gennari, J. and Mishra, B. Agent-based trace learning in a recommendation verification system for cybersecurity. In Proceedings of the 9th IEEE Intern. Conf. on Malicious and Unwanted Software: The Americas, (2014), 135–143.

    12. Casey, W., Zhu, Q., Morales, J.A. and Mishra, B. Compliance control: Managed vulnerability surface in social-technological systems via signaling games. In Proceedings of the 7th ACM CCS Intern. Workshop on Managing Insider Security Threats, (2015), 53–62.

    13. Catteeuw, D., Manderick, B. et al. Evolution of honest signaling by social punishment. In Proceedings of the 2014 Annual Conf. Genetic and Evolutionary Computation, (2014), 153–160.

    14. Cho, I-K. and Sobel, J. Strategic stability and uniqueness in signaling games. J. Economic Theory 50, 2 (1990), 381–413.

    15. Chung, H. and Carroll, S.B. Wax, sex and the origin of species: Dual roles of insect cuticular hydrocarbons in adaptation and mating. BioEssays, (2015).

    16. Daskalakis, C., Goldberg, P.W. and Papadimitriou, C.H. The complexity of computing a Nash equilibrium. SIAM J. Computing 39, 1 (2009), 195–259.

    17. Fabrikant, A., Papadimitriou, C. and Talwar, K. The complexity of pure Nash equilibria. In Proceedings of the 36th Annual ACM Symposium on Theory of Computing, (2004), 604–612.

    18. Hamblin, S. and Hurd, P.L. When will evolution lead to deceptive signaling in the Sir Philip Sidney game? Theoretical Population Biology 75, 2 (2009), 176–182.

    19. Huttegger, S.M., Skyrms, B., Smead, R. and Zollman, K.J.S. Evolutionary dynamics of Lewis signaling games: Signaling systems vs. partial pooling. Synthese 172, 1 (2010), 177–191.

    20. Jee, J., Sundstrom, A., Massey, S.E. and Mishra, B. What can information-asymmetric games tell us about the context of Crick's 'frozen accident'? J. the Royal Society Interface 10, 88 (2013).

    21. King, D. The Haiti earthquake: Breaking new ground in the humanitarian information landscape. Humanitarian Exchange Magazine 48, (2010).

    22. Lewis, D. Convention: A Philosophical Study. John Wiley & Sons, 2008.

    23. Nash, J. Non-cooperative games. Annals of Mathematics, (1951), 286–295.

    24. Nash, J. et al. Equilibrium points in n-person games. In Proceedings of the National Academy of Sciences 36, 1 (1950), 48–49.

    25. Nash, J.F. Jr. The bargaining problem. Econometrica: J. Econometric Society, (1950), 155–162.

    26. Newsome, J., Shi, E., Song, D. and Perrig, A. The Sybil attack in sensor networks: Analysis & defenses. In Proceedings of the 3rd International Symposium on Information Processing in Sensor Networks, (2004), 259–268.

    27. Papadimitriou, C. Algorithms, Games, and the Internet. In Proceedings of the 33rd Annual ACM Symposium on Theory of Computing, (2001), 749–753.

    28. Sharma, K.R., Enzmann, B.L. et al. Cuticular Hydrocarbon pheromones for social behavior and their coding in the ant antenna. Cell Reports 12, 8 (2015), 1261–1271.

    29. Silk, J.B., Kaldor, E., and Boyd, R. Cheap talk when interests conflict. Animal Behavior 59, 2 (2000), 423–432.

    30. Skyrms, B. The Stag Hunt and the Evolution of Social Structure. Cambridge University Press, 2004.

    31. Skyrms, B. Signals: Evolution, Learning, and Information. Oxford University Press, 2010.

    32. Smith, J.M. Evolution and the Theory of Games. Cambridge University Press, 1982.

    33. Smith, J.M. Honest signaling: The Philip Sidney game. Animal Behaviour 42, 6 (1991), 1034–1035.

    34. Sobel, M.J. et al. Non-cooperative stochastic games. The Annals of Mathematical Statistics 42, 6 (1971), 1930–1935.

    35. von Neumann, J. and Morgenstern, O. Theory of Games and Economic Behavior. Princeton University Press, 2007.

    36. Zollman, K.J.S., Bergstrom, C.T., and Huttegger, S.M. Between cheap and costly signals: The evolution of partially honest communication. In Proceedings of the Royal Society of London B: Biological Sciences, (2012).

Footnotes

    a. Not dissimilar to traditional media being accused of producing "fake news."

    b. The zero-sum equity establishes the conflict and incentivizes a Sybil attack.
