Reliability and Validity of the RNR Program Tool

Erin Crites

Advisor: Faye S. Taxman, PhD

Committee Members: Danielle Rudes, Devon Johnson, Christianne Esposito-Smythers

Merten Hall, #3300
December 03, 2015, 02:30 PM

Abstract:

The risk-need-responsivity (RNR) model has become an important foundational principle for many scholars and professionals in the corrections field. In addition, principles of effective interventions for justice-involved persons have also gained favor. Even when there is strong evidence on which types of interventions are most effective, practitioners often struggle to implement theoretically sound, evidence-based, and structured programs. Identifying the essential features of programs (target, content, dosage, and implementation fidelity) is key to achieving good outcomes. The RNR Program Tool is designed to help practitioners understand how well their programs meet these essential features and how likely they are to achieve their desired outcomes. It classifies programs based on target behaviors and assesses the quality of programs relative to the features expected to be present given the program's classification. This dissertation is built around a set of three papers, each addressing a different aspect of the development, reliability and validity testing, and relevance of the RNR Program Tool. The first paper describes the pilot testing and scoring of the RNR Program Tool using early pilot data. The results suggest the tool has sufficient internal reliability in its domains to measure program adherence to the risk principle, need principle, responsivity principle, dosage principle, sound implementation, additional characteristics or restrictions, and an overall measure of program quality. The second paper uses data from the Multi-site Adult Drug Court Evaluation (MADCE) to test the predictive capabilities of the RNR Program Tool's scoring areas. Although a clear predictive relationship between the RNR Program Tool scores and MADCE outcomes (re-arrest at 24 months) could not be modeled, because limitations in the data left many of the scoring domains with little variability, this exercise reinforced the need to make tools like the RNR Program Tool available to programs and to move toward a process for describing programs in greater detail. A third, and final, paper extends the discussion on the importance of collecting information on program features. Findings from this investigation suggest that programs targeting similar behaviors have similar features, and that scores on the RNR Program Tool's domains vary significantly between the program classification groups. Finally, the concluding chapter identifies the theoretical, policy and practice, and future research implications of this work.
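
Internal reliability of the kind the first paper reports is conventionally measured with Cronbach's alpha, computed over the items within each scoring domain. The sketch below is purely illustrative: the item matrix, the number of items, and the function are assumptions for demonstration, not the dissertation's actual instrument or data.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents x n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                          # number of items in the domain
    item_vars = items.var(axis=0, ddof=1)       # per-item sample variance
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical example: 5 programs rated on 4 items within one domain.
scores = np.array([
    [3, 4, 3, 4],
    [2, 2, 3, 2],
    [4, 4, 4, 5],
    [1, 2, 1, 2],
    [3, 3, 4, 3],
])
print(f"alpha = {cronbach_alpha(scores):.2f}")
```

A common rule of thumb treats alpha of roughly 0.70 or higher as adequate internal consistency, which is the kind of benchmark domain-level reliability results are typically judged against.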
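The second paper's predictive-validity test corresponds to modeling a binary outcome (re-arrest at 24 months) as a function of the tool's domain scores. The sketch below is an assumption about the general form of such a model and uses simulated stand-in data rather than MADCE records; the variable names are hypothetical.

```python
import numpy as np
import statsmodels.api as sm

# Simulated stand-ins for five domain scores (e.g., risk, need,
# responsivity, dosage, implementation) and a binary re-arrest flag.
rng = np.random.default_rng(0)
n = 200
X = rng.uniform(0, 1, size=(n, 5))             # stand-in domain scores on [0, 1]
logit = -0.5 - 1.2 * X[:, 0] - 0.8 * X[:, 1]   # toy relationship for the demo
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))  # simulated 24-month re-arrest

# Logistic regression of the outcome on the domain scores.
model = sm.Logit(y, sm.add_constant(X)).fit(disp=False)
print(model.summary())
```

This also illustrates the limitation the abstract describes: if a domain score is nearly constant across programs, its coefficient is effectively unidentifiable, so no clear predictive relationship can be estimated for that domain.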