Taking advantage of scale by analyzing frequent constructed-response, code tracing wrong answers

Publication, Conference
Stephens-Martinez, K; Ju, A; Parashar, K; Ongowarsito, R; Jain, N; Venkat, S; Fox, A
Published in: ICER 2017 - Proceedings of the 2017 ACM Conference on International Computing Education Research
August 14, 2017

Constructed-response, code-tracing questions ("What would Python print?") are good formative assessments. Unlike selected-response questions simply marked correct or incorrect, a constructed wrong answer can provide information on a student's particular difficulty. However, constructed-response questions are resource-intensive to grade manually, and machine grading yields only correct/incorrect information. We analyzed incorrect constructed responses from code-tracing questions in an introductory computer science course to investigate whether a small subsample of such responses could provide enough information to make inspecting the subsample worth the effort, and if so, how best to choose this subsample. In addition, we sought to understand what insights into student difficulties could be gained from such an analysis. We found that ≈5% of the most frequently given wrong answers cover ≈60% of the wrong constructed responses. Inspecting these wrong answers, we found similar misconceptions as those in prior work, additional difficulties not identified in prior work regarding language-specific constructs and data structures, and non-misconception "slips" that cause students to get questions wrong, such as syntax errors and sloppy reading or writing. Our methodology is much less time-consuming than full manual inspection, yet yields new and durable insight into student difficulties that can be used for several purposes, including expanding a concept inventory, creating summative assessments, and creating effective distractors for selected-response assessments.
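
The headline figure (≈5% of distinct wrong answers covering ≈60% of wrong responses) rests on a frequency analysis of constructed responses. The sketch below is illustrative only, not the authors' code: a hypothetical coverage_of_top_answers helper over made-up response data, showing the kind of frequency-and-coverage computation the abstract describes.

from collections import Counter

# Illustrative sketch (not the paper's implementation): given the wrong
# constructed responses to one question, rank distinct answers by frequency
# and compute how much of all wrong responses the top few percent cover.
def coverage_of_top_answers(wrong_responses, top_fraction=0.05):
    counts = Counter(wrong_responses)
    ranked = counts.most_common()                # distinct answers, most frequent first
    k = max(1, int(len(ranked) * top_fraction))  # the most frequent ~5% of distinct answers
    covered = sum(count for _, count in ranked[:k])
    return covered / len(wrong_responses)        # fraction of wrong responses they account for

# Hypothetical responses to a "What would Python print?" item
responses = ["[1, 2]", "[1, 2]", "[1,2 ]", "(1, 2)", "[1, 2]", "error", "[2, 1]", "[1, 2]"]
print(coverage_of_top_answers(responses))

Under this reading, inspecting only the answers kept by the top-fraction cut is what makes the manual analysis tractable at scale.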


Published In

ICER 2017 - Proceedings of the 2017 ACM Conference on International Computing Education Research

DOI

10.1145/3105726.3106188

Publication Date

August 14, 2017

Start / End Page

56 / 64

Citation

APA
Stephens-Martinez, K., Ju, A., Parashar, K., Ongowarsito, R., Jain, N., Venkat, S., & Fox, A. (2017). Taking advantage of scale by analyzing frequent constructed-response, code tracing wrong answers. In ICER 2017 - Proceedings of the 2017 ACM Conference on International Computing Education Research (pp. 56–64). https://doi.org/10.1145/3105726.3106188

Chicago
Stephens-Martinez, K., A. Ju, K. Parashar, R. Ongowarsito, N. Jain, S. Venkat, and A. Fox. “Taking advantage of scale by analyzing frequent constructed-response, code tracing wrong answers.” In ICER 2017 - Proceedings of the 2017 ACM Conference on International Computing Education Research, 56–64, 2017. https://doi.org/10.1145/3105726.3106188.

ICMJE
Stephens-Martinez K, Ju A, Parashar K, Ongowarsito R, Jain N, Venkat S, et al. Taking advantage of scale by analyzing frequent constructed-response, code tracing wrong answers. In: ICER 2017 - Proceedings of the 2017 ACM Conference on International Computing Education Research. 2017. p. 56–64.

MLA
Stephens-Martinez, K., et al. “Taking advantage of scale by analyzing frequent constructed-response, code tracing wrong answers.” ICER 2017 - Proceedings of the 2017 ACM Conference on International Computing Education Research, 2017, pp. 56–64. Scopus, doi:10.1145/3105726.3106188.

NLM
Stephens-Martinez K, Ju A, Parashar K, Ongowarsito R, Jain N, Venkat S, Fox A. Taking advantage of scale by analyzing frequent constructed-response, code tracing wrong answers. ICER 2017 - Proceedings of the 2017 ACM Conference on International Computing Education Research. 2017. p. 56–64.