Scoring Single-Response Multiple-Choice Items – Quite Simple?! A Scoping Review and Comparison of Different Scoring Methods (Preprint)

2022 | preprint. A publication with affiliation to the University of Göttingen.


Cite this publication

Scoring Single-Response Multiple-Choice Items – Quite Simple?! A Scoping Review and Comparison of Different Scoring Methods (Preprint)
Kanzow, A. F.; Schmidt, D. & Kanzow, P. (2022). DOI: https://doi.org/10.2196/preprints.44084

Documents & Media

Preprint, 1.76 MB, Adobe PDF

License

Published Version

Creative Commons Attribution 4.0 International (CC BY 4.0)

Details

Authors
Kanzow, Amelie Friederike; Schmidt, Dennis; Kanzow, Philipp
Abstract
Background: Single-choice items (eg, best-answer items, alternate-choice items, single true-false items) are one type of multiple-choice item and have been used in examinations for over 100 years. At the end of every examination, the examinees' responses have to be analyzed and scored to derive information about examinees' true knowledge.

Objective: The aim of this paper is to compile scoring methods for individual single-choice items described in the literature. In addition, the metric expected chance score and the relation between examinees' true knowledge and the expected scoring result (averaged percentage score) are analyzed, and implications for potential pass marks used in examinations to test examinees for a predefined level of true knowledge are derived.

Methods: Scoring methods for individual single-choice items were extracted from various databases (ERIC, PsycInfo, Embase via Ovid, MEDLINE via PubMed) in September 2020. Eligible sources reported on scoring methods for individual single-choice items in written examinations, including but not limited to medical education. For each identified scoring method, the metric expected chance score and the expected scoring results as a function of examinees' true knowledge were calculated using fictitious examinations with 100 single-choice items, separately for items with n = 2 answer options (eg, alternate-choice items, single true-false items) and for best-answer items with n = 5 answer options (eg, Type A items).

Results: A total of 21 different scoring methods were identified from the 258 included sources, with varying consideration of correctly marked, omitted, and incorrectly marked items. The resulting credit varied between -3 and +1 credit points per item. For items with n = 2 answer options, expected chance scores from random guessing ranged between -1 and +0.75 credit points; for items with n = 5 answer options, expected chance scores ranged between -2.2 and +0.84 credit points. All scoring methods showed a linear relation between examinees' true knowledge and the expected scoring results. Depending on the scoring method used, examination results differed considerably: expected scoring results from examinees with 50% true knowledge ranged between 0.0% (95% CI: 0% to 0%) and 87.5% (95% CI: 81.0% to 94.0%) for items with n = 2, and between -60.0% (95% CI: -60% to -60%) and 92.0% (95% CI: 86.7% to 97.3%) for items with n = 5.

Conclusions: In examinations with single-choice items, the scoring result is not always equivalent to examinees' true knowledge. When interpreting examination scores and setting pass marks, the number of answer options per item must usually be taken into account in addition to the scoring method used.
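To illustrate the two quantities the abstract refers to, the following minimal sketch computes an expected chance score per item and an expected examination result as a function of true knowledge. It assumes simple number-right scoring (+1 credit point for a correct mark, 0 otherwise) and an examinee who guesses at random on every unknown item and never omits; these credit values, the guessing assumption, and the function names are illustrative and not taken from the preprint, which compares 21 different scoring methods.

```python
def expected_chance_score(n_options, credit_correct=1.0, credit_incorrect=0.0):
    """Expected credit per item when one of n options is marked at random."""
    p_correct = 1.0 / n_options
    return p_correct * credit_correct + (1.0 - p_correct) * credit_incorrect


def expected_percentage_score(true_knowledge, n_options,
                              credit_correct=1.0, credit_incorrect=0.0):
    """Expected score as a fraction of the maximum for an examinee who knows
    a fraction `true_knowledge` of the items and guesses on the rest."""
    chance = expected_chance_score(n_options, credit_correct, credit_incorrect)
    expected_per_item = (true_knowledge * credit_correct
                         + (1.0 - true_knowledge) * chance)
    return expected_per_item / credit_correct


# Example: 50% true knowledge under number-right scoring.
for n in (2, 5):
    print(n, expected_percentage_score(0.5, n))  # 0.75 for n = 2, 0.60 for n = 5
```

Under these assumptions the expected result for 50% true knowledge is 75% with 2 answer options and 60% with 5, which shows why pass marks need to reflect both the number of answer options and the scoring method; scoring methods that penalize incorrect marks or reward omissions shift these values, as reflected in the ranges reported in the Results.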
Issue Date
2022
Extent
73
Language
English
