Yesterday I finished my third workshop in two weeks, and my fifth workshop of the summer. I taught three workshops on Media Computation, funded by the National Science Foundation's CCLI program, in Claremont, CA; Kansas City; and Atlanta. I taught two workshops, sponsored by NSF's BPC program, in Detroit and Atlanta, on ways of introducing computing that help broaden participation in computing: approaches that engage women and members of under-represented minorities, and that lead to higher pass rates.
I build several discussion periods into the workshops, so I've now heard roughly 100 computer science faculty this summer talk about what they're doing in introductory computing courses, what they're planning to do, and their reasoning for each. Most of the attendees tell me that they're taking the workshop to figure out how to engage students better, and most still report failure rates in the 30-50% range.
I am surprised to hear so much interest in Python these days. When I first started these workshops three years ago, Python was interesting, but Java ruled the land. Interestingly, many faculty said that they saw a lot of use for Media Computation Python in the first course for non-majors (most often called "CS0"), but not in "CS1" (the first course for majors).
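For readers unfamiliar with Media Computation, the flavor of a typical exercise is manipulating the pixels of a picture. Here is a minimal, hypothetical sketch in plain Python; the actual courseware supplies its own media library (functions like getPixels and setRed in JES), so a picture is modeled here as a nested list of (r, g, b) tuples just to keep the example self-contained:

```python
# Media Computation-style exercise (illustrative only): invert the colors
# of a picture.  A "picture" here is a plain nested list of (r, g, b)
# tuples, standing in for the courseware's own picture objects.

def negate(picture):
    """Return a new picture with every color channel inverted (255 - value)."""
    return [[(255 - r, 255 - g, 255 - b) for (r, g, b) in row]
            for row in picture]

# A tiny 2x2 "picture":
picture = [[(0, 0, 0),     (255, 255, 255)],
           [(10, 20, 30),  (100, 150, 200)]]

print(negate(picture))
# → [[(255, 255, 255), (0, 0, 0)], [(245, 235, 225), (155, 105, 55)]]
```

The appeal for students is that a loop over pixels produces a visible, personally meaningful result, rather than output to a console.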
I try to just be a facilitator in these discussions, so I was glad when, at one workshop, a participant challenged her fellow teachers. “Why do all of you think that Python is good for CS0, but not for CS1? Why do you have to use Java in CS1?” The answer about why Python was good for non-majors was consistent — Python was easier and made it easier to focus on the “concepts” rather than on “coding.” The answers about Java were also quite honest, though distressing: “There’s such a critical mass behind Java” and “Everyone else does it that way” and “We want students to be able to transfer their credit.” No one offered a pedagogical answer. No one could say why Java helped students learn anything in particular, or why it was the best choice for learning something they thought was important. Instead, the answers were that the herd had decided, and all the CS departments represented were going to follow the pack.
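The "easier to focus on concepts" claim is easy to illustrate with a hypothetical first-day exercise (my example, not one from the workshops). The Python version is a few lines of logic; the Java equivalent, shown in comments, requires a class, a main method, and explicit types before a student writes any of that logic:

```python
# A simple first-day exercise in Python: greet a list of names.
greetings = ["Hello, " + name + "!" for name in ["Ada", "Grace"]]
for g in greetings:
    print(g)
# → Hello, Ada!
# → Hello, Grace!

# The Java equivalent of the same exercise:
#
#   public class Greeter {
#       public static void main(String[] args) {
#           String[] names = {"Ada", "Grace"};
#           for (String name : names) {
#               System.out.println("Hello, " + name + "!");
#           }
#       }
#   }
```

Whether that boilerplate is a pedagogical cost or a useful early exposure to structure is exactly the question the teachers weren't answering.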
The point of my workshops is to get teachers to consider some alternative ways of teaching computing, like robotics or Media Computation. Most faculty talked about plans to use “part of” robotics or media computation, or "for a few labs," or “maybe reuse some of the Java classes,” and especially, “in addition to our traditional text.” I’m glad that they’re willing to use any of the material at all, but I was curious. Why such reluctance to change practice? Was what they're doing now so successful? Below are some of the paraphrased responses (I took notes, and these are my reconstructions).
- "We mostly teach C++, and Java will more easily lead to C++ than Python." Is there any evidence of that? "No, not really." Why are you emphasizing C++? "We've been doing that for years!"
- “I wouldn't use the IPRE robotics or Media Computation approaches. I've now gone to the effort of learning Python 3.0, and neither of those are there yet. I don't believe I should teach students less than the current version of things.”
- “We want students to learn the standard Java libraries, and Media Computation doesn’t touch on all of them.”
- “It’s important to learn good coding style and all the right programming habits, and these approaches are engaging but won't enforce the right discipline.”
I don't see the items on this list in any curricular standard from the ACM, IEEE, or ABET. I'm struck that all of these decision points are about programming. If we really believe that computer science is not about coding, then why should we make decisions about the introductory course for our majors based on the latest version of a language or on coverage of its libraries? Are language designers particularly good curriculum designers? Should we assume that Sun or the Python Software Foundation is especially inspired about what belongs in the first course, such that their decisions determine what we teach?
I tend to think that our undergraduate enrollments are not out of the woods; the UCLA HERI data, for example, point to several more years of fewer undergraduates in computing than industry needs. The goal of broadening participation demands that we re-think what we do. We need to think more about engagement. Why would we reserve the "easier language" that emphasizes "concepts" for the non-computing majors? Why not use some of that for our own majors, to engage them and keep them going?
If we really do want a different result, we have to be willing to change what we're doing. Of course, I want my participants to use the approaches I'm teaching. But more important than that, I would like the participants to accept the need for change if we want the situation to get better.
Computer Science is about coding. Coding isn't the be-all and end-all of Computer Science, but it is about coding. Without it, our theory isn't useful. In other words, programming is the language of computer science. The particular language doesn't matter, per se, as long as the result is a testable implementation of that theory.
On a personal note, when studying other people's theories, or using my own, my highest moment of reflection comes from implementing an idea.
As for the more practical side of the "which language to teach" question, I think you'd be hard-pressed to find sufficient opportunities in industry for Python developers. Isn't it prudent, then, to give students the right tools based on the current demands of the industry?