In November 2015, the computing world was abuzz with the news that László Babai had proved that the Graph-Isomorphism Problem, the long-standing open problem of checking whether two given graphs are isomorphic, can be solved in quasi-polynomial time. (While polynomial means *n* raised to a fixed degree, quasi-polynomial means *n* raised to a poly-logarithmic degree.) The mainstream media noticed this news; *Chicago Magazine* wrote "Computer Scientists and Mathematicians Are Stunned by This Chicago Professor's New Proof."
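To make the parenthetical definition concrete, here is a small numeric sketch; the specific exponents (a fixed degree k = 3 and a poly-logarithmic degree (log n)^1) are illustrative choices, not from the article:

```python
import math

def poly(n, k=3):
    """Polynomial growth: n raised to a fixed degree k."""
    return n ** k

def quasi_poly(n, c=1):
    """Quasi-polynomial growth: n raised to a poly-logarithmic degree (log n)^c."""
    return n ** (math.log2(n) ** c)

for n in (10, 100, 1000):
    print(f"n={n}: n^3 = {poly(n):.2e}, n^(log n) = {quasi_poly(n):.2e}")
```

For any fixed degree, the quasi-polynomial bound eventually dwarfs the polynomial one, yet it still grows far more slowly than a true exponential such as 2^n.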

If Babai's result holds under scrutiny, it is likely to become one of the most celebrated results in theoretical computer science of the past several decades. Graph isomorphism has long tantalized researchers as it was not known to be solvable in polynomial time, yet there are good reasons to believe it is not NP-complete. The new algorithm provides further evidence of that!

As excited as I was about this news, I was quite annoyed by the reports in the media that "A new algorithm efficiently solves the graph-isomorphism problem." Computational-complexity theory focuses on classifying computational problems according to their inherent difficulty. The theory relies on some fundamental abstractions, including that it is useful to classify algorithms according to their worst-case complexity, as it gives us a universal performance guarantee, and that polynomial functions display moderate growth. The complexity class PTIME, often abbreviated as P, consists of the problems that can be solved, in the worst case, in polynomial time. This class is now conventionally considered to be the class of efficiently solvable problems. But this identification of P with the class of efficiently solvable problems is just a mathematically convenient abstraction. In practice, polynomials with degree larger than four grow quite fast. It is unrealistic to consider algorithms with high-degree polynomial bounds as efficient. To consider quasi-polynomial-time algorithms as efficient is simply ignorant of the reality of algorithmic engineering.
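A back-of-the-envelope calculation illustrates why high-degree polynomial bounds are not efficient in practice; the rate of 10^9 operations per second is an assumed machine speed for illustration:

```python
# Rough running-time estimate for an n**5-step algorithm at an
# assumed rate of 10**9 operations per second.
ops_per_sec = 10 ** 9

for n in (100, 10_000):
    steps = n ** 5
    seconds = steps / ops_per_sec
    years = seconds / (3600 * 24 * 365)
    print(f"n={n}: {seconds:.0e} seconds (~{years:.0e} years)")
```

At n = 10,000, an n^5 algorithm already needs on the order of 10^11 seconds, i.e., thousands of years, under this assumption.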

The slide from considering polynomial-time algorithms as efficient to considering quasi-polynomial-time algorithms as efficient illustrates what I'd like to call the "moral hazard of complexity-theoretic assumptions." In economics, moral hazard occurs when one person takes more risks because someone else bears the burden of those risks. Here, I use the term to suggest that as one gets used to the "risk" of making complexity-theoretic assumptions, it becomes easier to make ever-stronger assumptions.

Another example of such a hazardous slide can be seen in a news article from August 2015 with the title "For 40 Years, Computer Scientists Looked for a Solution that Doesn't Exist." The story refers to a paper by Arturs Backurs and Piotr Indyk, "Edit Distance Cannot Be Computed in Strongly Subquadratic Time (unless SETH is false)." What is SETH? The conventional wisdom is that P is different from NP. This is now taken as a standard assumption, even though the belief in it is mostly built around the questionable identification of P with efficiently solvable. In some cases, however, the P ≠ NP assumption is not strong enough. The Strong Exponential-Time Hypothesis (SETH) is a stronger assumption, which asserts that Boolean Satisfiability (SAT) cannot be solved in strongly subexponential time (see the Backurs-Indyk paper for a precise definition).
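For context, the quadratic-time algorithm whose optimality the Backurs-Indyk paper addresses is the textbook dynamic-programming solution for edit distance; a minimal sketch:

```python
def edit_distance(a: str, b: str) -> int:
    """Classic Wagner-Fischer dynamic program: O(len(a) * len(b)) time."""
    m, n = len(a), len(b)
    prev = list(range(n + 1))  # distances from the empty prefix of a to prefixes of b
    for i in range(1, m + 1):
        curr = [i] + [0] * n
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            curr[j] = min(prev[j] + 1,         # deletion
                          curr[j - 1] + 1,     # insertion
                          prev[j - 1] + cost)  # substitution or match
        prev = curr
    return prev[n]
```

The Backurs-Indyk result says that, assuming SETH, this quadratic bound cannot be improved to a strongly subquadratic one.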

Proving that SETH implies that edit distance requires quadratic time is already a very nice result. But the title of the paper suggests one should not expect SETH to be false, which is the way the result was broadly interpreted. Yet SETH is a much stronger assumption than P ≠ NP. The evidence for this assumption is that so far no one has been able to come up with a strongly subexponential algorithm for SAT. But much of the progress in complexity theory over the past 40 years has come through the discovery of clever new algorithms, such as Khachiyan's ellipsoid algorithm for linear programming, Shor's quantum polynomial-time algorithm for factoring, and now Babai's quasi-polynomial-time algorithm for graph isomorphism. If I had to bet, I would bet on further progress in improved upper bounds rather than on progress in improved lower bounds.

The bottom line is that complexity-theoretic assumptions are mathematical abstractions. They need to be refined further to bring them into better alignment with real-world tractability. As I have written in the past, there is a growing gap between current theory and practice of complexity and algorithms. Bridging that gap is an important research challenge!


**Moshe Y. Vardi,** EDITOR-IN-CHIEF

The Digital Library is published by the Association for Computing Machinery. Copyright © 2016 ACM, Inc.

## Piotr Indyk

Given that the article mentions a paper of which I am a co-author ["Edit Distance Cannot Be Computed in Strongly Subquadratic Time (unless SETH is false)", STOC'15], I believe it is appropriate for me to respond. In short, I believe that much of the analysis in the article stems from a confusion between press coverage and the actual scientific inquiry. Indeed, it is unclear whether Dr. Vardi addresses what he believes to be a "media" phenomenon (how journalists describe scientific developments to a broad public) or a "scientific" phenomenon (how and why scientists make and use assumptions in their research and describe them in their papers). To avoid conflating these two issues, I will address them one by one, focusing on our paper as an example.

1) Media aspects: The bulk of the editorial is focused on some of the press coverage describing recent scientific developments in algorithms and complexity. In particular, Dr. Vardi mentions the title of a Boston Globe article covering our work ("For 40 Years, Computer Scientists Looked for a Solution that Doesn't Exist."). As I have already discussed elsewhere (https://liorpachter.wordpress.com/2015/08/14/in-biology-n-does-not-go-to-infinity/#comment-4792), I completely agree that the title and some other parts of the article leave a lot to be desired. Among other things, the conditional aspects of the result are discussed only at the end of the article, and are therefore easy to miss.

At the same time, I believe some perspective is needed. Inaccuracy or confusion in popular reporting of scientific results is an unfortunate but common and longstanding phenomenon (see, e.g., this account https://lpsdp.files.wordpress.com/2011/10/ellipsoid-stories.pdf of the press coverage of Khachiyan's famous linear programming algorithm in the 1970s). There are many reasons for this. Perhaps the chief one is the cultural gap between the press and the scientists: journalists emphasize accessibility and newsworthiness, while scientists emphasize precision. As a result, simplification in scientific reporting is a necessity, and the risk of oversimplification, inaccuracy, or incorrectness is high. Fortunately, more time and effort spent on both sides can lead to more thorough and nuanced articles (e.g., see https://www.quantamagazine.org/20150929-edit-distance-computational-complexity/ ). Given that the coverage of algorithms and complexity results in the popular press is growing, I believe that, in time, both scientists and journalists will gain valuable experience in this process.

2) Scientific aspects: Dr. Vardi also raises some scientific points. In particular:

a) Dr. Vardi is critical of the title of our paper: "Edit Distance Cannot Be Computed in Strongly Subquadratic Time (unless SETH is false)." I can only say that, given that we state the assumption explicitly in the title, in the abstract, in the introduction, and in the main body of the paper, I believe the title and the paper accurately represent its contribution.

b) Dr. Vardi is critical of the validity of SETH as a hardness assumption: this question is indeed a subject of robust discussion and investigation (see, e.g., the aforementioned Quanta article). My best guess, and that of most of the people I have discussed this with, is that the assumption is true. However, this is far from a universal opinion. Quantifying the level of support for this conjecture would be an interesting project, perhaps along the lines of similar efforts concerning the P vs. NP conjecture ( https://www.cs.umd.edu/~gasarch/papers/poll2012.pdf ). In any case, it is crucial to strengthen the framework by relying on weaker assumptions, or by replacing one-way reductions with equivalences; both are subjects of ongoing research. However, even the existing developments have already led to concrete benefits. For example, failed attempts to prove conditional hardness of certain problems have led to better algorithms for those tasks.

Finally, let me point out that one of the key motivations for this line of research is *precisely* the strengthening of the relationship between theory and practice in complexity and algorithms, a goal that Dr. Vardi refers to as an important challenge. Specifically, this research provides a framework for establishing evidence that certain computational questions cannot be solved within *concrete* (e.g., sub-quadratic) polynomial time bounds. In general, I believe that a careful examination of the developments in algorithms and complexity over the last decade would show that the gap between theory and practice is shrinking, not growing. But that is a topic for another discussion.

Note: this comment will be crossposted at http://blog.geomblog.org/ . Thanks to Suresh Venkatasubramanian for his hospitality.