Alternative Methods to Curriculum-based Measurement for Written Expression: Implications for Reliability and Validity of the Scores

Open Access
- Author:
- Merrigan, Teresa
- Graduate Program:
- School Psychology
- Degree:
- Doctor of Philosophy
- Document Type:
- Dissertation
- Date of Defense:
- October 04, 2012
- Committee Members:
- James Clyde Diperna, Dissertation Advisor/Co-Advisor
Linda H. Mason, Committee Member
Beverly Vandiver, Committee Member
Anne Whitney, Committee Member
Shirley Andrea Woika, Committee Member
- Keywords:
- curriculum-based measures
written expression
- Abstract:
- The purpose of the current study was to evaluate the psychometric properties of alternative approaches to administering and scoring curriculum-based measurement for written expression. Specifically, three response durations (3, 5, and 7 minutes) and six score types (total words written, words spelled correctly, percent of words spelled correctly, correct word sequences, percent of correct word sequences, and correct minus incorrect word sequences [CIWS]) were considered. Participants included students in sixth, seventh, and eighth grades recruited from a rural middle school in Pennsylvania. Reliability (inter-rater and alternate-form) and validity (convergent and criterion-related) evidence was examined for scores derived from each combination of response duration and score type. In addition, scores were evaluated to determine whether they differed based on group membership (grade, sex, and free or reduced lunch status). Differences in predictive validity also were examined across groups. Results indicated that many scores demonstrated adequate evidence of reliability and validity. As expected, group differences were found relative to grade, sex, and free or reduced lunch status. No differences were observed in predictive validity based on group membership variables. Student ratings of self-efficacy were related to overall writing performance on the criterion measures; however, self-efficacy ratings increased after students completed the writing tasks. Overall, results indicated that the most complex scoring approach (CIWS) in conjunction with the briefest (3-minute) narrative sample yielded the best combination of prompt, response duration, and score.