
Crowdsourcing and the Question of Expertise

By David Roman

Communications of the ACM, December 2009, Vol. 52 No. 12, Page 12



There is an inherent weakness to crowdsourcing that should bother computer scientists and computer users alike: there is no clear difference between "the wisdom of the crowd" and "the mob that rules." What's missing is a measure of discernment.

The Internet is awash in information that demands selectivity, leading Newsweek, among others, to predict the rise of online experts and reliable information (http://www.newsweek.com/id/119091). The assessment seems overly optimistic. There are some efforts to rate expertise on the Internet (http://cacm.acm.org/news/42206), but most of us are left with coping strategies that limit where we go, what we see, and whom we trust. That is not the kind of open investigation that promotes learning or understanding.

Crowdsourcing doesn't really help sort through or synthesize information; in fact, it might do the opposite. Research shows that it favors popular opinion and therefore reinforces homogeneity (http://cacm.acm.org/news/42525). That's not hospitable to unconventional or idiosyncratic views.

There is an upside, for sure. Luis von Ahn's GWAP (http://www.gwap.com/gwap/about/) uses computer games "to solve problems for humans all over the world." And Galaxy Zoo tapped about 250,000 visitors to classify nearly one million galaxies (http://cacm.acm.org/magazines/2009/10/42492).

Now the downside: The limitations of crowdsourcing are becoming apparent, even to its defenders. Blogger Josh Berkus summarizes key weaknesses, saying the term is "evil" and carries too much baggage (http://it.toolbox.com/blogs/database-soup/never-say-crowdsourcing-34331). In the end he concludes that the problem is mainly about improper usage. But the issue is bigger than that. The problem with crowdsourcing is that there is no verity. In fact, "correctness [is]...anathema to crowdsourced systems" (http://cacm.acm.org/magazines/2009/7/32094). That's a small concern when rating movies, but researchers and scientists need something more.

Science needs higher standards. This was illustrated by Newsweek when it decried science education in the U.S. and showed how "wisdom of the masses" is an oxymoron. It described how John Holdren, director of the White House Office of Science and Technology Policy, trades candor for political timidity when discussing science policy (www.newsweek.com/id/216505). "He must sell his ideas to people who couldn't pass high-school algebra and who believe they know more than he does."

Crowdsourcing empowers followers. It risks weakening leaders.



DOI: http://doi.acm.org/10.1145/1610252.1610258

©2009 ACM  0001-0782/09/1200  $10.00

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, to republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee.

The Digital Library is published by the Association for Computing Machinery. Copyright © 2009 ACM, Inc.