Title: Youth Offender Program Evaluation: Ensuring Quality and Rigor: A Few Lessons Learned from a National Evaluation Study

Authors: Dan Kaczynski (Central Michigan University), Ed Miller (U.S. Army), Melissa A. Kelly (University of Illinois at Chicago)

Analysis Processes

Round I
In Round I, the evaluation team adopted a process evaluation approach to make "use of empirical data to assess the delivery of programs" (Scheirer, 1994, p. 40) and to examine the implementation of the projects. Considering environmental factors identified in the evaluation literature (Rossi & Freeman, 1989), the evaluation team assessed the extent to which grantees built strong linkages among existing organizations and developed integrated services. With such linkages, the projects were expected to deliver services to the target population effectively and efficiently. Information about the nature of the linkages established for each project was instrumental in providing DOL with accounts of the progress being made in service delivery.

The evaluation team also sought an expansive evaluator role that entailed the use of evaluation results to help a program improve (Patton, 1986). Unfortunately, the project sponsor did not allow the evaluation team to share its findings with either the projects or the technical assistance team that was tasked with helping the projects be implemented effectively. In some ways, it can be argued that this approach violated the tenets of formative evaluation, which is generally a process evaluation intended to help projects improve as they are implemented, as well as to provide information to those who commissioned the evaluation (Patton, 1986; Carter, 1994; Sonnichsen, 1992). As a result, the evaluation team focused on reporting the roles of key actors, the dimensions of the projects, and the relationships and activities within the projects, and on identifying lessons learned from the evaluation.

Round II
In analyzing the data collected in Round II, the evaluation team took into account the considerable variability among the projects. For example, some grantees were justice agencies, some were workforce agencies, and others were community-based organizations. Grantees also ranged from states, counties, and municipalities to nongovernmental organizations. Some of the projects' target areas were counties; others were cities or neighborhoods within a city. Data analysis was a three-stage process based on the work of Rossi and Freeman (1993), which entailed (a) providing a full and accurate description of the actual project, (b) comparing the implementation of the demonstration across sites so that the evaluators could better understand the basis of differences they observed, and (c) asking whether the project, as implemented, conformed to its design. An important part of the analysis was identifying lessons learned during implementation, with special attention given to lessons that had broader implications for national policy.

Round III
Round III marked the most complicated stage of the evaluation, due in part to the transition from the DOL demonstration branch to the DOL program office. Unlike the demonstration office, the program office was less interested in building knowledge and more interested in getting projects going. This shift in the sponsor's motives resulted in a shift in the focus of the evaluation. In addition, because the research firm won the evaluation contract but not the contract to provide technical assistance, there was an immediate tension between the two competing private firms. These issues were the most significant factors affecting data collection and analysis in Round III.

Due to resource constraints at the start of the Round III evaluation, the initial development of the codebook did not involve everyone on the evaluation team. All of the evaluators did, however, receive training on the use of the codebook. Although it implied a tremendous commitment of team members' time and organizational resources, the involvement of all team members in code development was considered a critical factor in the evaluation. Also related to data analysis, the evaluation underscored the importance of supporting the evolution of the codebook, which continued to change throughout the evaluation. As the researchers returned to the field for the second stage of site visits in Round III, the codebook became deductive as a result of the heavy analysis conducted during the initial stages of the round. This represented an important change: inductive inquiry is used in qualitative evaluation to promote open-ended inquiry and deeper understanding, whereas a deductive perspective aligns more closely with quantitative hypothesis testing. By adopting a predetermined, fixed code structure of meanings, the study design demonstrated a potential shift in both methodology and field practices. In a multiphase, multisite study, special care was needed to shift back to inductive analysis when returning to the community, and this shift had to be supported and coordinated with changes to the code structure.

Validity & Transferability
Due to the nature of the case studies examined in Round III, the evaluators collected tremendous amounts of site-specific data. The sponsor's expectation, however, was to generalize the findings across the sites. The evaluation team used Yin's (2003) framework for validity and reliability to help ensure that the data were appropriate, meaningful, and useful for making inferences involving analytical (rather than statistical) generalization. Within Yin's framework, the quality of the research design was represented by four concepts: construct validity, internal validity, external validity, and reliability. To strengthen construct validity, the researcher needed to establish correct operational measures for the concepts being studied. Specific strategies for addressing construct validity in case studies included using multiple data sources, establishing a chain of evidence, and having key informants review a draft of the case study. Internal validity, which applied only to explanatory or causal studies and not to descriptive or exploratory ones, involved establishing a causal relationship whereby certain conditions were shown to lead to other conditions. To strengthen internal validity, Yin recommended incorporating pattern matching, explanation building, rival explanations, and logic models into data analysis. External validity meant establishing the domain to which a study's findings could be generalized; strategies for addressing it in case studies included the application of theory (for single-case studies) and replication logic (for multiple-case studies). Reliability entailed demonstrating that the operations of a study could be repeated with the same or similar results, and Yin suggested using a case study protocol or database to strengthen it. An important adjunct to the quality-control process for case studies was the use of quantitative data, when possible, to confirm qualitative findings and interpretations. Also important was the use of triangulation techniques such as multiple observations by team members and multiple data sources (Patton, 2002).
