Computing Profession Viewpoint

GOTO Rankings Considered Helpful

Seeking to improve rankings by using more objective data and meaningful metrics.

Rankings are a fact of life. Whether or not one likes them (a previous Communications editorial argued we should eschew rankings altogether [4]), they exist and are influential. Within academia, and in computer science in particular, rankings not only capture our attention but also widely influence people who have a limited understanding of computer science research, including prospective students, university administrators, and policymakers. In short, rankings matter.

Today, academic departments are mostly ranked by for-profit enterprises. The people doing the ranking are not computer scientists and typically have very little understanding of our field. For example, U.S. News and World Report, in ranking Ph.D. programs in sub-areas of computer science, inaccurately describes the characteristics of research in the area of “Programming Language” [sic] (see Figure 1).

Figure 1. U.S. News and World Report inaccurate research area description (https://www.usnews.com/best-graduate-schools/top-science-schools/computer-programming-rankings, May 2018).

This lack of understanding of the field makes it highly questionable that U.S. News and World Report has the necessary expertise to rank the quality of Ph.D. programs across computer science. In fact, rankers often use the wrong data. For example, we have repeatedly seen problems with rankers who consider only journal publications, leaving out conferences, which capture the most influential publications in most areas of computing. The consequence is rankings that are completely implausible. For example, while King Abdulaziz University may be a fine institution, it is unlikely that anyone familiar with computing-related departments would rank it number five in the world, as U.S. News and World Report does in its ranking of “Best Global Universities” (see Figure 2).

Figure 2. U.S. News and World Report implausible ranking (https://www.usnews.com/education/best-global-universities/computer-science, May 2018).

Another key limitation of a number of rankings, including those produced by U.S. News and World Report and Times Higher Education, is that they depend in whole or in part on reputation surveys. One problem with reputation is that it is a lagging indicator: when an institution improves, it can take years for its reputation to catch up, so reputation surveys are inherently “stale.” A more serious problem is that the opinions they collect are often subjective assessments with very little basis in objective data.

No one is sufficiently knowledgeable about all aspects of computer science and all departments to even make an informed guess about the broad range of work in an entire department. In fact, a “mid-rank” department is often the most difficult to assess by reputation because it may be particularly strong in some sub-areas but weaker in others; that is, the subjective rating of the department may vary greatly depending on the sub-area of the assessor.

To summarize, rankings matter and will not go away, regardless of their shortcomings. Commercial rankers today do a poor job of ranking computer science departments. Since we understand our community and what matters, we should take control of the ranking process.

At the very least, we as a community should insist on rankings derived from objective data, whether it be based on publications, citations, honors, funding, or other criteria. We should ensure rankings are well-founded, based on meaningful metrics, even if we have diverging perspectives on how best to fold the data into a scalar score or rank. We may still arrive at very different rankings, but we will have a defensible basis for comparisons.

Toward this end, the Computing Research Association (CRA) has stated that a “methodology [which] makes inferences from the wrong data without transparency” ought to be ignored [1]. It has also adopted the following statement about best practices:

“CRA believes that evaluation methodologies must be data-driven and meet at least the following criteria:

  • Good data: have been cleaned and curated
  • Open: data is available, regarding attributes measured, at least for verification
  • Transparent: process and methodologies are entirely transparent
  • Objective: based on measurable attributes”

We call rankings that meet these criteria GOTO Rankings. Today, there are at least two GOTO rankings: http://csrankings.org and http://csmetrics.org (both are linked from http://gotorankings.org). CSrankings is faculty-centric and based on publications at top venues, providing links to faculty home pages, Google Scholar profiles, DBLP pages, and overall publication profiles. It ranks departments by aggregating publications across the full-time tenure-track faculty at each institution. CSmetrics is institution-focused, without regard to department structure or the job designations of paper authors. It includes industrial labs and takes citations into account. It derives its rankings from the Microsoft Academic Graph [3], an open and frequently updated dataset.

These are not the only two reasonable ways to rank departments [5]. One may disagree with the rankings these sites produce, or with their choices of weighting schemes or venue inclusion. But one can clearly understand the basis for each and inspect all or most of the included data. These GOTO rankings are a far cry from the products of most commercial rankers.
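To make the contrast concrete, the following sketch shows the kind of fully inspectable computation a GOTO ranking permits. It is a toy example only: the venues, weights, and author-share scoring rule are illustrative assumptions, not the actual methodology of CSrankings or CSmetrics.

    # Toy GOTO-style ranking: the data, the weights, and the scoring rule
    # are all visible and reproducible. The venues, weights, and
    # author-share rule below are illustrative assumptions, not the
    # actual CSrankings or CSmetrics methodology.
    from collections import defaultdict

    # Good, open data: (institution, venue, authors at institution, total authors).
    PUBLICATIONS = [
        ("Univ A", "PLDI", 2, 3),
        ("Univ A", "SOSP", 1, 4),
        ("Univ B", "PLDI", 3, 3),
        ("Univ B", "SOSP", 1, 2),
    ]

    # Transparent methodology: per-venue weights are published up front.
    VENUE_WEIGHTS = {"PLDI": 1.0, "SOSP": 1.0}

    def rank(publications, weights):
        """Objective scoring: author-share-adjusted publication counts."""
        scores = defaultdict(float)
        for inst, venue, local_authors, total_authors in publications:
            # Credit each paper in proportion to the institution's
            # share of its author list.
            scores[inst] += weights[venue] * local_authors / total_authors
        return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

    for place, (inst, score) in enumerate(rank(PUBLICATIONS, VENUE_WEIGHTS), 1):
        print(f"{place}. {inst}: {score:.2f}")

Because every input and every step is visible, a reader who disagrees with the venue list or the weights can change them and rerun the computation. That reproducibility is precisely what the CRA criteria demand, and what reputation surveys cannot offer.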


Call to Action

We call on all CS departments and colleges to boycott reputation-based and non-transparent ranking schemes, including but not limited to U.S. News and World Report:

  • Do not fill out their surveys. Deprive these non-GOTO rankings of air, at least for computer science.
  • Do not promote or publicize the results of such ranking schemes in departmental outlets.
  • Discourage university administrators from using reputation-based and non-transparent rankings.
  • Encourage the use of GOTO Rankings such as CSrankings and CSmetrics as better alternatives.

References

    1. Davidson, S. CRA statement on U.S. News and World Report rankings of computer science universities. 2017; https://bit.ly/2W8hSOj

    2. Dijkstra, E. Go to statement considered harmful. Commun. ACM 11, 3 (Mar. 1968), 147–148.

    3. Sinha, A. et al. An overview of Microsoft Academic Service (MAS) and applications. In Proceedings of the 24th International Conference on World Wide Web, 2015, 243–246.

    4. Vardi, M.Y. Academic rankings considered harmful. Commun. ACM 59, 9 (Sept. 2016).

    5. Wang, K. The knowledge Web meets big scholars. In Proceedings of the 24th International Conference on World Wide Web, 2015, 577–578.

    * The title of this Viewpoint and the (re-ordered) bullets herein are in homage to Edsger Dijkstra's famous 1968 letter to Communications [2].
