Author: Lyn Richards

Pub Date: 11/2009
Pages: 256

Title: Youth Offender Program Evaluation: Ensuring Quality and Rigor: A Few Lessons Learned from a National Evaluation Study

Authors: Dan Kaczynski (Central Michigan University), Ed Miller (U.S. Army), Melissa A. Kelly (University of Illinois at Chicago)

Setting up the project
Juvenile crime and gang activity are exploding, especially in our nation's urban areas. Reports of murder, rape, robbery, and aggravated assault committed by young people, who should be our best hope for the future, are broadcast nightly on the local television news. To add to our worries, detention facilities for youths are overflowing, and the cost of caring for and rehabilitating these kids is skyrocketing.

Is there anything that can be done to stem the increase in crimes committed by youthful offenders? And is there any way to keep youths who are at risk of becoming criminals or gang members out of trouble? Those were just two of the questions that the U.S. Department of Labor (DOL) asked in 1998 when it collaborated with the Justice Department to see whether anything could be done to dissuade kids from criminal activity.

The result was the multi-million-dollar, multiyear, multiphase Youth Offender Demonstration Project (YODP), launched in 1999. Over the course of four years, funds were awarded to 52 projects with the goal of helping youths secure long-term employment at wage levels that broke the cycle of delinquency and dependency on public support.

These 52 projects addressed a variety of needs and services, including employment, education, alcohol and drug abuse interventions, and antigang activities. The projects themselves varied as well: grantees included justice agencies, workforce agencies, states, counties, municipalities, and community-based organizations, and target areas ranged from whole counties and cities to individual neighborhoods.

An important component of the federal initiative, as is the case with all such government projects, was to ensure that the projects followed the rules and did their best to serve the youths in their care. To that end, a nationwide evaluation of the projects was conducted by an external evaluation firm. The evaluation team worked under challenging conditions imposed by the sponsor that ran counter to recommended evaluation practice, and the study's outcomes conflicted with the results the sponsor had hoped for. What follows are brief descriptions of the numerous challenges the evaluation team faced in balancing responsiveness to stakeholders with technical quality and rigor throughout the longitudinal evaluation.

This report of the evaluation is written by two evaluators who were directly involved in the design and implementation of the evaluation and by a third author who provided research support.

The data
Over the course of the evaluation, evaluators conducted extensive site visits in the communities where the projects were set up. In all, more than 20 evaluators participated in the effort, working sometimes alone and sometimes as members of a team. As this report shows, each of the three rounds of the evaluation served a distinctly different purpose and posed its own methodological and interpretive challenges.

The first round, for example, consisted of a process evaluation of 11 of the 14 project sites. The evaluators were tasked with tracking each site's implementation progress and goal attainment and reporting back to DOL. They also collected quantitative data, such as demographic information and outcomes, that helped indicate the effectiveness of the project. The evaluators were particularly interested in identifying significant changes in project plans, contextual changes, and unexpected consequences of implementation. These data were especially valuable for identifying where linkages failed and where program components were poorly implemented or not implemented at all.

During the second round of the demonstration, evaluators continued their observations but also fed information to the technical assistance team to help the projects improve as they were implemented. Round II was distinct in that grantees were required to follow a model designed from the experiences of the first round.

In Round III, the evaluation team followed Stake's (2000) case study methodology to examine a purposive sample of projects as either intrinsic or instrumental cases. For each evaluation question, the team identified a range of measures, indicators, outputs, and outcomes.
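
The article does not say how the team recorded this mapping from evaluation questions to evidence. The sketch below is only a hypothetical illustration, in Python, of how such an evaluation matrix could be kept consistent across a large team; the question text and every example entry are invented, not drawn from the YODP design.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class EvaluationQuestion:
    """One evaluation question and the evidence planned to answer it.

    Hypothetical structure for illustration only; not the YODP team's
    actual instrument.
    """
    question: str
    measures: List[str] = field(default_factory=list)    # what gets measured
    indicators: List[str] = field(default_factory=list)  # how change is judged
    outputs: List[str] = field(default_factory=list)     # direct products
    outcomes: List[str] = field(default_factory=list)    # longer-term effects

# Invented example entry, to show the shape of the matrix.
matrix = [
    EvaluationQuestion(
        question="Did the project connect participants to long-term employment?",
        measures=["job placements recorded per quarter"],
        indicators=["share of participants still employed six months after exit"],
        outputs=["placement and retention records"],
        outcomes=["sustained employment at wages above public-support levels"],
    ),
]

for q in matrix:
    print(q.question)
    print("  indicators:", "; ".join(q.indicators))
```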

Working with data
Data collection in all three rounds of the evaluation entailed numerous site visits by the evaluation teams. In Round I, evaluators made three 2-day visits to each of the 11 projects over an 18-month period and conducted a process evaluation. Unfortunately, they were not allowed to share evaluation results with the projects or the technical assistance team. To make sense of the effort and to track each project's development, they used Stufflebeam's CIPP model: context, inputs, process, and product.

The evaluation of Round II was somewhat different. It followed a formative approach in which evaluators shared evaluation reports with key YODP stakeholders to create a feedback loop and help them improve operations. All second-round projects also participated in multiple technical assistance sessions or events conducted by staff or consultants.

Data collection in Round III was done primarily by field evaluators during site visits lasting 8 to 10 days at each of the selected sites. At the end of each day onsite, the field evaluators prepared memos describing project operations and activities, the participants' perceptions of the local culture, and the evaluators' own impressions and interpretations of what they had observed. When the researchers left the field, they began final coding of the text data in consultation with the office staff. The evaluators at the office then reviewed the memos and provided feedback to the field team.
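
The article does not describe how the daily memos were structured. As one hedged illustration, the Python sketch below shows how a memo template organized around the CIPP headings might keep entries comparable across field evaluators; all field names and example values are hypothetical.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class DailyFieldMemo:
    """Daily site-visit memo organized around Stufflebeam's CIPP headings.

    Hypothetical template for illustration; the actual structure of the
    YODP memos is not described in the article.
    """
    site: str
    visit_date: date
    context: str         # local conditions and culture observed
    inputs: str          # staffing, funding, partner resources
    process: str         # how services were actually being delivered
    product: str         # evidence of results or early outcomes
    interpretation: str  # evaluator's own impressions, flagged as such

# Invented example memo showing how entries stay comparable across evaluators.
memo = DailyFieldMemo(
    site="Hypothetical Site A",
    visit_date=date(2003, 5, 14),
    context="High youth unemployment; active gangs in the target neighborhood.",
    inputs="Two case managers; workforce agency partner on site weekly.",
    process="Intake slowed by a referral backlog from the juvenile court.",
    product="Twelve participants placed in subsidized jobs this quarter.",
    interpretation="The court referral linkage appears to be the weak point.",
)
print(memo.site, memo.visit_date.isoformat())
```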

Analysis processes
An important part of the Round I analysis was identifying lessons learned during implementation, with special attention to lessons with implications for national policy. DOL also used the results to develop the Public Management Model for State and Local Workforce Agencies (PMM), which became the implementation model for subsequent projects. The evaluation team used the model in Round II to gauge the projects' progress toward their objectives and goals, focus attention on the organizational and systems dimensions of implementation, and facilitate comparison across projects. Analysis in Round II was a three-stage process that entailed fully describing each project, comparing implementations across sites, and examining the fidelity of implementation to design.

In Round III, triangulation was especially useful for identifying and reconciling discrepancies and inconsistencies in the data. The evaluation team used a two-stage review process to enhance intercoder reliability verification and strengthen investigator triangulation: team members submitted their coded NVivo project files to an evaluation staff member who conducted a second coding pass, and the NVivo project was then reviewed by project administrators. To further enhance dependability and confirmability, an analysis oversight committee was included in the review process. The committee held quarterly meetings to review team feedback, analysis procedures, and coding discrepancies, and to approve modifications to the emergent evaluation design. During Round III, the evaluation team learned valuable lessons about the importance of involving all team members in code development, supporting the evolution of the analytical codebook, documenting data collection procedures, and balancing analytical rigor with emerging data.
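
The article describes the two-stage review but not how agreement between the first and second coding passes was quantified. As a hypothetical illustration only, the Python sketch below computes simple percent agreement and Cohen's kappa for two coders' binary decisions about whether a single code applies to a set of passages; the code name and the decision values are invented.

```python
def percent_agreement(coder_a, coder_b):
    """Share of coding decisions on which two coders agree."""
    assert len(coder_a) == len(coder_b) and coder_a
    return sum(a == b for a, b in zip(coder_a, coder_b)) / len(coder_a)

def cohens_kappa(coder_a, coder_b):
    """Cohen's kappa for binary apply / do-not-apply coding decisions."""
    n = len(coder_a)
    p_observed = percent_agreement(coder_a, coder_b)
    # Chance agreement, estimated from each coder's marginal "apply" rate.
    p_a = sum(coder_a) / n
    p_b = sum(coder_b) / n
    p_expected = p_a * p_b + (1 - p_a) * (1 - p_b)
    return (p_observed - p_expected) / (1 - p_expected)

# Invented decisions: did a hypothetical code such as "linkage failure"
# apply to each of ten memo passages? 1 = applied, 0 = not applied.
field_coder  = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]
office_coder = [1, 0, 1, 0, 0, 0, 1, 0, 1, 1]

print(round(percent_agreement(field_coder, office_coder), 2))  # 0.9
print(round(cohens_kappa(field_coder, office_coder), 2))       # 0.8
```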

Reporting the project
In Round I, evaluators were not allowed to share information with the project staff or the technical assistance team because the sponsor believed it was necessary to maintain a neutral environment. In Round II, the integration of the PMM brought significant improvements in reporting and in DOL's data collection processes. Due to the complex nature of the Round III cases, a shift in sponsorship within DOL, and a change in the evaluation firm's contract status, the third round was the most complicated stage of the project. The evaluators collected tremendous amounts of site-specific data, but the sheer volume of work hindered the preparation of meaningful memos.

List of References