The 2014 International Computing Education Research (ICER) conference (sponsored by ACM SIGCSE) was held at the University of Glasgow August 11-13. The schedule is available online, and the papers are in the ACM Digital Library. ICER 2014 was probably my favorite ICER yet.
ICER started on the weekend before (Aug 9-10) for me, Sally Fincher (U. Kent-Canterbury), four discussants (Neil Brown, Will Doane, Lauri Malmi, and Yifat Kolikant), and 17 PhD students in computing education research — our largest doctoral consortium ever. Our discussants were from UK, US, Finland, and Israel respectively. Our students were similarly from all over the world (including several from Germany and Brazil). The DC is an important part of growing the ACM SIGCSE community — several of the ICER 2014 presenters were participants in the DC (sometimes many) years previously, including Andy Begel, Anna Eckerdal, Briana Morrison, Mike Hewner, and Brian Dorn. The DC participants presented posters on their work at the conference in a session that was really well attended.
Quintin Cutts chaired the conference (with co-chairs Beth Simon of UCSD and Brian Dorn of U. Nebraska-Omaha), and he put together a great program. Sunday night started with an organ recital emphasizing fugues and canons — good start for the discussion on recursion the next day. Monday night was a welcome to the city in the Glasgow City Chambers. Tuesday night was a Scottish ceilidh, which was great fun. ICER may be the only CS conference with a close enough to 50-50 gender balance to make traditional Scottish dancing work with its participants. Quintin brought in teachers and other leaders from the Computing At School Scotland effort (and I got to meet fellow Blog@CACM blogger, Judy Robertson.)
I can't possibly talk about all of the presentations, so I'm going to give you my highly-biased perspective on just a few of them.
Our keynote presenter, Professor David Nicol, talked to us about Unlocking learners' evaluative skills: a peer review perspective. He described how engaging students in peer review has benefits for the receiver of the review (e.g., more feedback sooner than the overworked teacher can provide) and for the sender (e.g., in encouraging reflection and comparison to one's own work). I totally believe in the value, but was disappointed that the talk didn't get to some of the challenges in computer science. We have a problem with peer review where over-confident, Slashdot-inspired students can be quite vicious to one another, leading to decreases in learning and self-efficacy (see Hundhausen's paper as an example). Professor Nicol did emphasize creating an environment of trust and respect for peer review to succeed, and we still need to figure out how to make that happen reliably in CS classes.
For me, a theme developed between several of the papers at the conference.
- Kathi Fisler took another crack at the recurring Rainfall problem by asking students learning functional programming to try it. She found that those who used higher-order functions and the standard recursive patterns emphasized in their curriculum had a better shot at solving it than students in prior efforts.
- Elynn Lee and Victoria Shan presented their paper on a game (Cargo-Bot) designed to teach recursion. They found that they had greater success teaching students how to write recursive programs than getting them to understand recursion.
- Finally, Colleen Lewis described the multiple ways she saw students tracing linear recursion correctly. She wasn't looking at misconceptions or problems with recursion. She was categorizing the different ways that students correctly understand the execution of recursive functions.
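As a concrete illustration (my example, not one from Colleen's paper), consider a linearly recursive function like summing the integers from 1 to n. Students can trace it correctly in more than one way — by fully expanding the pending calls, or by treating each recursive call as a black box that simply returns its answer:

```python
def sum_to(n):
    """Sum the integers from 1 to n, written linearly recursively."""
    if n == 0:
        return 0                  # base case
    return n + sum_to(n - 1)      # one pending addition per call

# One correct trace fully expands the pending work:
#   sum_to(3) = 3 + sum_to(2)
#             = 3 + (2 + sum_to(1))
#             = 3 + (2 + (1 + sum_to(0)))
#             = 3 + (2 + (1 + 0)) = 6
# Another correct trace treats each call as a black box:
#   sum_to(2) "just returns" 3, so sum_to(3) is 3 + 3 = 6.
```

Both traces reach the same answer; the interesting research question is how the different mental models students hold affect what they can do next.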
We are seeing the development of a learning progression around recursion. First, students learn a generic pattern or structure to writing recursive code — not an execution model. If they learn it well, they can use it in a bunch of situations. But to transfer that pattern to new situations, they have to learn execution models, which are harder to learn. Once they learn execution models, they're more likely to get challenges like the Rainfall problem right.
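For readers who haven't met it, the Rainfall problem asks students to average the non-negative numbers in a list of readings, stopping at a sentinel value (conventionally 99999). A minimal sketch in Python — my own illustration, not code from Fisler's paper — using the kind of higher-order, composable style her students employed:

```python
from itertools import takewhile

SENTINEL = 99999  # conventional end-of-input marker in the Rainfall problem

def rainfall(readings):
    """Average the non-negative readings that appear before the sentinel.

    Returns 0.0 when there are no valid readings -- the edge case
    students most often miss.
    """
    # Keep everything up to (but not including) the sentinel...
    before_sentinel = takewhile(lambda x: x != SENTINEL, readings)
    # ...then keep only the non-negative values.
    valid = [x for x in before_sentinel if x >= 0]
    return sum(valid) / len(valid) if valid else 0.0
```

For example, `rainfall([1, -2, 3, 99999, 7])` averages only 1 and 3, ignoring the negative reading and everything after the sentinel. The appeal of this style is that each filtering step matches one clause of the problem statement, rather than being tangled together in a single loop.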
The Chair's Award (best paper as determined by the conference chairs, using reviews as input) went to Leo Porter, Daniel Zingaro, and Raymond Lister for "Predicting student success using fine grain clicker data." The computing education research literature contains many attempts to predict success in computing courses based on some test or some variable (like math background or whether the student used a computer at home). These researchers used peer instruction "clicker" questions to identify those students needing additional help. The advantage here is that clicker questions are cheap and easy to implement. One of the interesting twists of the study (as the authors point out) is that peer instruction is just so useful and important for learning that the assessment may itself be an intervention to improve learning.
The "John Henry" Paper Award is selected by popular vote for the paper most "out there" and leading to new ideas and direction in computing education research. The winners at ICER 2014 were Josh Tenenberg and Yifat Ben-David Kolikant for their paper "Computer programs, dialogicality, and intentionality." Drawing on psycholinguistics and evolutionary psychology, Josh and Yifat describe challenges in understanding how we learn and use programming languages. They pointed out that human languages presume a common understanding between the sender of the message and the receiver. But no such common understanding exists between humans and computers, which leads to the "superbug" error (first described by Roy Pea in 1986) which is the source of so many problems for novice programmers. Programming languages are used for messages to a computer (that has no common understanding with the author) and also to a future human reader (who presumably does have common understanding with the sender). The presumption of common understanding might influence program readability, particularly if there is a mismatch between the author's expectation and what the reader really knows. Programming languages may be the only notation that humans have invented that has this dual role, of being part of two dialogs at once.
After the ICER 2014 conference, we had our first ever "Work in Progress" workshop, or Critical Research Review. Led by Colleen Lewis of Harvey Mudd College, it was five hours of intense (and really great fun) work helping to develop new projects in computing education. I hope that we see this continue in future ICER conferences.
Brian Dorn, chair of ICER 2015 in Omaha, has a high bar to meet after Glasgow! Looking forward to seeing what the Nebraskan equivalent of a ceilidh is.