Generalizability of Direct Behavior Ratings by Trained and Untrained Raters

Open Access
- Author:
- Leposa, Bradley T
- Graduate Program:
- School Psychology
- Degree:
- Doctor of Philosophy
- Document Type:
- Dissertation
- Date of Defense:
- December 12, 2016
- Committee Members:
- Dr. Peter Nelson, Dissertation Advisor/Co-Advisor
- Dr. Peter Nelson, Committee Chair/Co-Chair
- Dr. Schaefer, Committee Member
- Dr. Hazler, Committee Member
- Dr. Greenberg, Outside Member
- Keywords:
- Direct Behavior Ratings
- DBRS
- Generalizability Theory
- Reliability
- Multi-Tiered Systems of Support
- Abstract:
- Direct Behavior Ratings (DBRs) are an emerging tool for gathering data to inform decisions within both traditional and expanded roles for school psychologists. The present study comprises two components examining DBRs. The first uses Generalizability Theory (GT) to examine agreement between two raters using DBRs in two different classrooms, one with trained raters and the other with untrained raters. The second uses a focus group to solicit raters' views of DBRs and of a DBR training module, as well as the reasoning behind their ratings. The G studies provide evidence that rater load (i.e., the number of students rated at one time) may affect agreement between raters; however, training had no discernible effect on reliability. Results from the focus group indicate that raters found the DBRs mechanically easy to complete but had difficulty rating, and differentiating between, Respectful and Disruptive Behavior. For school psychologists in practice, these results suggest that characteristics of the sample of students being rated (i.e., the number of students) should be considered when interpreting DBR scores. For further development of DBR items, additional constructs suited to educators' needs, such as one targeting social skills, may be beneficial.
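
For readers unfamiliar with GT, the sketch below illustrates the kind of variance decomposition a G study performs to quantify inter-rater agreement. It is a minimal, hypothetical one-facet persons × raters example with made-up scores, not the dissertation's actual design or data (the study described above additionally involved factors such as classroom and rater training):

```python
import numpy as np

# Hypothetical DBR scores: 6 students (persons) rated by 2 raters.
# A fully crossed one-facet persons x raters (p x r) G-study design.
scores = np.array([
    [8, 7],
    [5, 6],
    [9, 9],
    [3, 5],
    [7, 6],
    [4, 4],
], dtype=float)

n_p, n_r = scores.shape
grand = scores.mean()

# ANOVA sums of squares for the crossed design (one observation per cell).
ss_p = n_r * ((scores.mean(axis=1) - grand) ** 2).sum()
ss_r = n_p * ((scores.mean(axis=0) - grand) ** 2).sum()
ss_pr = ((scores - grand) ** 2).sum() - ss_p - ss_r  # residual

ms_p = ss_p / (n_p - 1)
ms_r = ss_r / (n_r - 1)
ms_pr = ss_pr / ((n_p - 1) * (n_r - 1))

# Expected-mean-square solutions for the variance components.
var_pr = ms_pr                        # person x rater interaction + error
var_p = max((ms_p - ms_pr) / n_r, 0)  # universe-score (person) variance
var_r = max((ms_r - ms_pr) / n_p, 0)  # rater main effect (leniency/severity)

# D-study generalizability coefficients for n_r_prime raters.
n_r_prime = 2
g_rel = var_p / (var_p + var_pr / n_r_prime)           # relative decisions
phi = var_p / (var_p + (var_r + var_pr) / n_r_prime)   # absolute decisions
print(f"sigma2_p={var_p:.3f}, sigma2_r={var_r:.3f}, sigma2_pr,e={var_pr:.3f}")
print(f"E(rho^2)={g_rel:.3f}, Phi={phi:.3f}")
```

In this framing, disagreement between raters shows up as the rater and person-by-rater variance components; a finding that rater load affects agreement would appear as larger values for those components under heavier rating conditions.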