As recommender systems proliferate, they present new vulnerabilities to gaming. The gaming I mean comprises those activities that adhere to the rules but thwart the intention; as we say in English, they follow the letter, but not the spirit, of the law. White text that draws search engines to bogus pages falls into this category, so that shopping online for an item with a specific description—make and model, fabrication, material—often turns up commercial offers for inferior items.
Of course, gaming a system is an offline human activity with a long history and new manifestations. Creative accounting can game the tax code, and comfort animals can game the disability support system. Here, we want to limit the definition to the digital version, allowing for fortuitous and surprising acceptances and rejections that lead to refinements of the system.
Modern definitions of "gaming the system":
WhatIs: Manipulation or exploitation of the rules designed to govern a given system in an attempt to gain an advantage over other users [WhatIs]
Wikipedia: Using the rules and procedures meant to protect a system to, instead, manipulate the system for a desired outcome [Wiki]
Wikipedia also discusses its self-reflective version: "...deliberately using Wikipedia policies and guidelines in bad faith to thwart the aims of Wikipedia. Gaming the system may represent an abuse of process, disruptive editing, or otherwise evading the spirit of community consensus. Editors typically game the system to make a point, to further an edit war, or to enforce a specific non-neutral point of view" [Wiki:Gaming]. Well, yes, that's exactly what we're talking about, expressed in Wikipedia's own context.
These definitions exclude naive achievement of results by unanticipated benign means—which we would like to preserve, at least for academic reasons. Let's identify the elements involved in the phenomenon of gaming the system, starting by setting aside some activities that might fit, but don't. This is not spam or phishing, which are computer-based frauds exploiting social protocols and therefore do not take place within a single system. Furthermore, this is not the gaming that happens in games. Games are plainly adversarial; players are invited to make visible moves in a full mutual understanding of the context. The word "gaming" hints at game theory, but that area is a mathematical study of sustained multi-party interchanges, ruled out along with games. Computer viruses profit from fooling the system, but we can ask whether they are playing by the rules. This is not crime, not black-hat hacking, where the perpetrator has malicious intent. This is, rather, opportunism, where the perpetrator may be just trying to get something done (with or without understanding of the system's objective).
If the system were cut-and-dried, there would be no reason for assessment. For example, buying a cocktail commonly depends on two criteria, a minimum age and possession of money. In such cases, no reviewers or judges are necessary, and no gaming takes place, only direct violations. A refinement might be the requirement of sobriety; now, the system requires human judgment. So say that we are running a club (or institution or charity or co-workspace or family), for which membership applications are assessed by a panel that strives to achieve constituency goals developed through consensus.
What are the elements of this endeavor in terms of a system and its gaming?
- There is some kind of a system R, comprising some kind of decisions or rules (our club membership criteria, which may not be public), which defines a set of successful membership applications M.
- The system designers intend some vaguely-defined result G (the kind of members we want).
- A prospective (but unwanted) member, or associate of such a person, the "transgressor" T.
- An object X (a membership application to be accepted) constructed by T that satisfies the rules R but thwarts the intention G.
Note that this is a modeling exercise. The axioms of the system are the rules R. The intention is the set of goal states G. But the system actually defines the instantiations, or models, M. The transgressor T grasps R sufficiently to construct X, where X ∈ M but X ∉ G. A common consequence of successful gaming is refinement of R (and thereby M) to exclude X and its ilk.
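The modeling exercise can be rendered as a minimal sketch in Python. Everything here is hypothetical—the rules in R, the application fields, and especially the "genuine_interest" flag, which stands in for the unobservable goal predicate G that the system cannot mechanically check:

```python
# R: the club's membership rules, as mechanically checkable predicates.
# (These rules and fields are invented for illustration.)
R = [
    lambda app: app["age"] >= 21,
    lambda app: app["references"] >= 2,
]

def in_M(app):
    """M: the set of applications the rules actually accept."""
    return all(rule(app) for rule in R)

def in_G(app):
    """G: the designers' intention -- members who participate in good
    faith. Not checkable by the system; stubbed here as a hidden flag."""
    return app["genuine_interest"]

# The transgressor T constructs a foil X that satisfies R but thwarts G.
X = {"age": 35, "references": 2, "genuine_interest": False}

assert in_M(X) and not in_G(X)  # X ∈ M but X ∉ G: the gaming succeeds
```

The point of the sketch is the gap itself: R can only test what applications declare, while G lives outside the system, so any X occupying the difference M \ G passes unchallenged.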
So here is a new definition: Gaming a system is discovery and exploitation, by an agent, of its unintended affordances to finesse its intended benefit. Intention matters. The intentions implemented by recommender systems must be in active consideration at tech companies, but we do not have the benefit of those discussions on proprietary products. Google constantly updates its detection of such transgressions of search, having moved from open cooperation to proprietary secrecy a few years ago [Sentance]. Web bots that generate approval actions are now outlawed by many services. We can bet that someone, many someones, well-paid ones, are already working on this kind of analysis, with more precision and in greater depth than you will find here.
And they may be considering these worrisome hypotheses:
- For any goal set G, the rules R will be inadequate: M ≠ G because no matter how much refinement we apply to R to limit M, we will still have (as above) ∃X such that X ∈ M but X ∉ G. It's always possible to construct a foil, X.
- For any goal set G, the rules R will be inadequate in the other way: No matter how much refinement we apply to R to expand M, we will still discover that ∃X such that X ∈ G but X ∉ M. Some good prospects will be rejected.
- If R is known, any prospect carries a rich enough contextual dataset to construct a model of G, no matter how constrained the system. That is, editing various facts, based on R with shrewd guesses about G, is generally possible, resulting in an acceptance of an X ∈ M such that X is technically in G... but where that "technically" causes the system designers to recoil in dismay.
Additional worrisome hypotheses can be drawn from these possibilities, some already apparent to us, and manifest in reality.
- No matter how hard we try, we can't block gaming; any programmed selection is subject to mischief.
- No matter how hard we try, we will end up discarding those desirable members that we hadn't thought of. (See voluminous commentary on current problems with recommender systems!)
- Conjoined with transparency of AI recommender systems, and non-discriminatory G, this means that X can be constructed by a program, using the exact criteria R as constraints, by either distorting some contextual feature or acquiring the credential. In other words, anybody can get into our club (without lying) by running a bot.
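That last hypothesis—constructing X by program, using the exact criteria R as constraints—can be sketched as a brute-force search over candidate edits to a rejected application. All rules, fields, and edit options here are hypothetical, and a real system would face a far larger search space:

```python
from itertools import product

# Hypothetical transparent rules R, one checkable predicate per field.
R = {
    "age": lambda v: v >= 21,
    "references": lambda v: v >= 2,
    "essay_words": lambda v: v >= 200,
}

def accepted(app):
    """True when the application satisfies every rule in R (i.e. is in M)."""
    return all(check(app[field]) for field, check in R.items())

def game(app, options):
    """Enumerate combinations of candidate edits and return the first
    edited application that R accepts -- the constructed foil X."""
    fields = list(options)
    for combo in product(*options.values()):
        candidate = dict(app, **dict(zip(fields, combo)))
        if accepted(candidate):
            return candidate
    return None

rejected = {"age": 30, "references": 0, "essay_words": 50}
# Edits a bot could make without lying outright: solicit references,
# pad the essay.
edits = {"references": [0, 2], "essay_words": [50, 250]}
X = game(rejected, edits)
assert X is not None and accepted(X)
```

Nothing in the search consults G; the bot needs only R as a constraint set, which is exactly why transparency of the rules, by itself, cuts both ways.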
Can these be established or denied? What else falls out of this definition? And what other considerations should be taken into account?
[Sentance] Rebecca Sentance. 2017. Should Google be more transparent with its updates? Search Engine Watch. Retrieved 27 July 2021.
[WhatIs] Ivy Wigmore. 2021. Article "gaming the system". WhatIs.com by TechTarget. Retrieved 30 July 2021.
[Wiki] Wikipedia contributors. 2021 (May 18). Gaming the system. In Wikipedia, The Free Encyclopedia. Retrieved 28 July 2021.
[Wiki:Gaming] Wikipedia contributors. 2021 (July 4). Wikipedia:Gaming the system. Retrieved 28 July 2021.
Robin K. Hill is a lecturer in the Department of Computer Science and an affiliate of both the Department of Philosophy and Religious Studies and the Wyoming Institute for Humanities Research at the University of Wyoming. She has been a member of ACM since 1978.