Using Differential Item Functioning to Test for Interrater Reliability in Constructed Response Items

DOI

10.1177/0013164419899731

Document Type

Journal Article

Publication Date

8-1-2020

Publication Title

Educational and Psychological Measurement

Volume

80

Issue

4

First Page

808

Last Page

820

ISSN

0013-1644

Keywords

classical test theory, constructed response items, differential item functioning, interrater reliability, rater severity

Abstract

The purpose of this study was to investigate a new approach to evaluating interrater reliability, one that can determine whether two raters differ in their ratings on a polytomous rating scale or constructed response item. Specifically, differential item functioning (DIF) analyses were used to assess interrater reliability and were compared with traditional interrater reliability measures. Three procedures that can serve as measures of interrater reliability were compared: (1) the intraclass correlation coefficient (ICC), (2) Cohen’s kappa statistic, and (3) the DIF statistic obtained from Poly-SIBTEST. The results indicated that DIF procedures are a promising alternative for assessing the interrater reliability of constructed response items and other polytomous item types, such as rating scales. Furthermore, using DIF to assess interrater reliability does not require a fully crossed design, and it can identify whether a rater is more severe, or more lenient, in scoring each individual polytomous item on a test or rating scale.
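For orientation only, the sketch below illustrates the two traditional interrater reliability measures named in the abstract, Cohen's kappa and the ICC, on hypothetical scores from two raters on a 0 to 4 constructed response rubric. The data, the quadratic kappa weighting, and the choice of ICC(3,1) are assumptions made for illustration; the DIF-based procedure described in the article relies on Poly-SIBTEST, which is separate software and is not reproduced here.

```python
# Minimal sketch (not the authors' code): hypothetical ratings from two raters
# on a 0-4 constructed response item, then Cohen's kappa and ICC(3,1).
import numpy as np
from sklearn.metrics import cohen_kappa_score

rng = np.random.default_rng(0)
rater_a = rng.integers(0, 5, size=50)                        # hypothetical scores, 0-4
rater_b = np.clip(rater_a + rng.integers(-1, 2, size=50), 0, 4)  # correlated second rater

# (2) Cohen's kappa: chance-corrected agreement between the two raters.
kappa = cohen_kappa_score(rater_a, rater_b, weights="quadratic")

# (1) ICC(3,1): consistency for two fixed raters, from a two-way ANOVA decomposition.
scores = np.column_stack([rater_a, rater_b]).astype(float)
n, k = scores.shape
grand = scores.mean()
ms_rows = k * ((scores.mean(axis=1) - grand) ** 2).sum() / (n - 1)   # between subjects
ms_cols = n * ((scores.mean(axis=0) - grand) ** 2).sum() / (k - 1)   # between raters
ss_total = ((scores - grand) ** 2).sum()
ms_err = (ss_total - ms_rows * (n - 1) - ms_cols * (k - 1)) / ((n - 1) * (k - 1))
icc_3_1 = (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err)

print(f"Cohen's kappa (quadratic weights): {kappa:.3f}")
print(f"ICC(3,1), consistency:             {icc_3_1:.3f}")
```

Note that both of these measures require every subject to be scored by both raters (a fully crossed design), which is one of the constraints the DIF-based approach in the article is intended to relax.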

Open Access

Green Final
