Lord Kelvin has been quoted as saying, "If you cannot measure it, you cannot improve it." (But he also said, "There is nothing new to be discovered in physics now," so what did he know?) In contrast, quality guru W.E. Deming wrote, "the most important figures that one needs for management are unknown or unknowable." What can we measure in computing education, what can't we measure, and does it matter whether or not we can? I've been thinking about two areas in particular, enrollment and quality: what can we measure in each, and does it matter?
Enrollment: Network World recently declared computer science to be "the hottest major on campus." Enrollment has risen dramatically at the top CS departments. But has it really risen nationally? Internationally? I recently visited Swinburne University in Melbourne, Australia, for their Melbourne Computing Education Conventicle. CS enrollments are down in the state of Victoria, and applications for next year are down 10%.
Most of what we know about CS enrollment in the United States comes from the Computing Research Association's Taulbee Report, which gathers data only from PhD-granting research institutions. There have been efforts to gather data more widely in the United States (called Taurus, for "Taulbee for the Rest of Us"), but those have been small and not adequately funded. The US Department of Education tracks undergraduate enrollment in its IPEDS database, but only for first-time, full-time students. Part-time students, and adults returning for more education, are not counted. In reality, we don't know how "hot" CS is as a major. Nobody has the broad view.
Is that a problem? Many were concerned about a lack of enrollment in computer science. Some are now concerned about the rise in enrollment. We don't really know what the enrollment is, up or down, and maybe it doesn't really matter. We simply respond as mostly invisible market forces drive students in ebbs and flows. If enrollment is important to us (e.g., to the IT industry, or to those concerned about the economy), then we need to figure out a way to measure it.
Quality: I have argued in the past that we have only a few good instruments for measuring knowledge about computer science, and these aren't used often. We need these measures in order to figure out what works in computing education. I recently finished reading Richard DeMillo's new book, From Abelard to Apple. He talks about the challenges facing universities today, from issues of cost to issues of accessibility. The for-profit institutions threaten today's non-profit higher education institutions because they offer lower-cost and more flexible alternatives.
The argument is posed that the for-profits offer lower-quality programs, and that the non-profit colleges and universities offer better quality. Do they? How do we know? Rankings of colleges and universities are based on prestige and reputation, not on measures of learning outcomes. If a student wanted to choose an institution based on which one could provide the most learning opportunities, how would she find that institution?
If learning quality matters, then we should try to measure it. But it might not matter. DeMillo recently pointed out (in a response to a blog post) that landlines offer higher-quality phone calls, but cell phones won out because of the importance of flexibility and accessibility. The quality of cell phones is good enough. Does the added quality (if any, if measurable) of colleges and universities make the increased cost worthwhile? Or are all higher-education alternatives equally good enough, so that choice comes down to cost and accessibility? If quality matters, we should figure out how to measure it and demonstrate the value.