Benchmarking a set of exam questions for introductory programming

Sheard, J., Simon, Dermoudy, J., D'Souza, D., Hu, M. and Parsons, D.

    This paper reports on the combination of two related but hitherto distinct themes in programming education research. The first is the recognition that students in programming courses tend to perform far more poorly than their teachers would like, and, further, more poorly than their teachers would expect without a careful analysis of their results. The second is the proposal of a number of different styles of examination question, sometimes coupled with analysis of student performance on those questions, typically at single institutions. This work combines these themes by including a common set of short questions in the final examinations of introductory programming courses at six institutions in Australia and New Zealand, and analysing student performance across all six institutions. The analysis yields a set of four simple questions that can be used to benchmark student performance in introductory programming courses at a wide range of institutions.
Cite as: Sheard, J., Simon, Dermoudy, J., D'Souza, D., Hu, M. and Parsons, D. (2014). Benchmarking a set of exam questions for introductory programming. In Proc. Sixteenth Australasian Computing Education Conference (ACE2014), Auckland, New Zealand. CRPIT, 148. Whalley, J. and D'Souza, D., Eds. ACS. 113-121.