Author
Lyn Richards

Pub Date: 11/2009
Pages: 256

Title: Youth Offender Program Evaluation: Ensuring Quality and Rigor: A Few Lessons Learned from a National Evaluation Study

Authors: Dan Kaczynski (Central Michigan University), Ed Miller (U.S. Army), Melissa A. Kelly (University of Illinois at Chicago)

Working With Data

Round I
In Round I the CIPP model (Stufflebeam & Shinkfield, 1985) played a major role in several aspects of the evaluation, including data analysis. Using this systems-flow approach allowed evaluators to identify the context, inputs, process, and product of each project and track the temporal flow of each project through its developmental stages from design to implementation. By integrating these components into the analysis process, the team was able to strengthen quality and rigor in the evaluation.

Round II
In analyzing the data collected in Round II, the evaluation team took into account the considerable variability among the projects. For example, some grantees were justice agencies, some were workforce agencies, and others were community-based organizations. Grantees were also states, counties, municipalities, or nongovernmental organizations. Some of the projects' target areas were counties; others were cities or neighborhoods within a city. Data analysis was a three-stage process based on the work of Rossi and Freeman (1993), which entailed (a) providing a full and accurate description of the actual project, (b) comparing the implementation of the demonstration across sites so that the evaluators could better understand the basis of differences they observed, and (c) asking whether the project, as implemented, conformed to its project design. An important part of the analysis included identifying lessons learned during the course of implementation, with special attention given to lessons that had broader implications for national policy. Equally important was the evaluation team's task of determining how closely the projects adhered to the Public Management Model that was developed as a result of the lessons learned during the Round I evaluation.

Round III
For the instrumental case studies conducted in Round III, each project was assigned a single evaluation specialist; for the intrinsic case studies, two-member field evaluation teams were assigned to each site. The basis of the team member assignments was whether the member's area of expertise was rooted in organizational/process research techniques or ethnographic research techniques. Each team member's responsibilities were defined by the research questions, in that one team member focused primarily on organizational and process events that occurred during the project's planning and implementation phases, and the other team member focused primarily on youths and families involved with the project. The team member who focused on processes and events paid special attention to relationships between the project and other agencies and community institutions, as well as plans to sustain the project after the end of grant funding. This evaluator also focused on finding answers to questions that primarily required description and explanation of organizational processes and the project's interactions with other agencies, institutions, and youths and their families. Hence, the evaluator sought answers to the evaluation questions involving what, how, and why interactions occurred. The other member of the field evaluation team sought to explain the dynamics and interactions of youth and their families with the projects, agencies, and institutions of the community. A key goal for this team member was to better understand and explain why and how beliefs and behaviors of youths and others within their community affected the operation of the demonstration project. Relying heavily upon the team member's expertise and experience as well as theoretical propositions, this evaluator used ethnographic tools to make sense out of what he or she observed.

Triangulation
During the entire evaluation, triangulation was especially useful in identifying and reconciling discrepancies and inconsistencies among the data. To strengthen investigator triangulation, the evaluation team used a two-stage review process to verify intercoder reliability. Team members submitted their coded NVivo project files to the research firm headquarters, where a staff member conducted a second coding pass. The NVivo project was then reviewed by research firm project administrators. To further enhance dependability and confirmability, an analysis oversight committee was included in the review process. The committee held quarterly meetings to review team feedback, data analysis procedures, and coding discrepancies, and to approve modifications to the emergent evaluation design.
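One common way to quantify the agreement produced by such a second coding pass is Cohen's kappa, which corrects raw percent agreement for agreement expected by chance. The section does not say which statistic the team used, so the following is only a minimal illustrative sketch: the code names and the sample labels ("barrier", "support") are invented for the example.

```python
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Cohen's kappa for two coders who labeled the same passages.

    coder_a and coder_b are equal-length lists of code labels,
    one label per passage.
    """
    assert len(coder_a) == len(coder_b) and coder_a
    n = len(coder_a)
    # Observed agreement: fraction of passages coded identically.
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    # Chance agreement: expected overlap given each coder's label frequencies.
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    expected = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical labels from two evaluators for five interview passages.
first_pass  = ["barrier", "barrier", "support", "support", "barrier"]
second_pass = ["barrier", "support", "support", "support", "barrier"]
print(round(cohens_kappa(first_pass, second_pass), 2))  # prints 0.62
```

Values near 1 indicate strong agreement; values near 0 indicate agreement no better than chance, which is the kind of discrepancy the quarterly oversight meetings would flag for discussion.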

Memorandums
Another aspect of the evaluation design that strengthened quality and rigor was the use of memorandums written at the end of each day's observations and interviews during site visits. These memorandums became a data source that enhanced analysis and interpretation of the data. When the researchers left the field, they began final coding of text data and, in consultation with the research firm office staff, organized the data into more precise conceptual categories to support their analysis. In addition, the evaluation team was briefed on administrative requirements, as well as site visit protocols and methods to enhance interrater reliability and uniformity of evaluation approaches.

Qualitative Data Analysis Software
Unlike in the first two rounds of the YODP, evaluators in Round III used NVivo qualitative data analysis software to aid in data collection, management, and analysis. The evaluation team's use of NVivo encouraged and supported the evaluators' immersion in complex sets of data, fine coding, and transparency of interpretation. Specifically, several of the evaluators concluded that NVivo illuminated the analysis process among members of the evaluation team engaged in data analysis and was particularly useful for strengthening intercoder reliability. For example, members of the evaluation teams were able to review each other's work to determine how each person was defining issues. Use of the software also allowed the evaluators to conduct reviews more efficiently and at a finer grain than manual analysis allowed. The evaluators used free nodes to preserve the integrity of inductive flow and the quality of the qualitative theoretical design while maintaining structured, controlled tree nodes. Rather than directly changing the code tree, evaluators could propose changes to the tree through a controlled process.
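The split described above between ad hoc free nodes and a centrally controlled tree can be pictured as a small data structure with a proposal queue. This is not NVivo's actual data model or API; it is a minimal sketch, with invented class and code names, of the workflow the paragraph describes: evaluators create free nodes freely, but changes to the shared tree pass through an administrator review.

```python
class CodingTree:
    """Toy model of a controlled coding scheme (illustrative, not NVivo)."""

    def __init__(self):
        self.tree = {}           # approved tree nodes: parent -> child codes
        self.free_nodes = set()  # ad hoc codes any evaluator may add
        self.proposals = []      # pending tree changes awaiting review

    def add_free_node(self, name):
        # Free nodes preserve inductive flow: no approval needed.
        self.free_nodes.add(name)

    def propose(self, parent, name):
        # Evaluators cannot edit the tree directly; they queue a proposal.
        self.proposals.append((parent, name))

    def review(self, approve):
        """Administrator pass: approved proposals move into the tree."""
        for parent, name in self.proposals:
            if approve(parent, name):
                self.tree.setdefault(parent, []).append(name)
                self.free_nodes.discard(name)  # promoted out of free nodes
        self.proposals.clear()

# Hypothetical usage: an evaluator codes a new theme, then proposes
# promoting it under an existing branch of the shared tree.
scheme = CodingTree()
scheme.add_free_node("peer mentoring")
scheme.propose("program supports", "peer mentoring")
scheme.review(approve=lambda parent, name: True)  # administrator approves all
print(scheme.tree)  # prints {'program supports': ['peer mentoring']}
```

Keeping the tree behind a review step is what lets a distributed team code inductively without the shared scheme drifting apart between quarterly oversight meetings.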
