Title: Youth Offender Program Evaluation: Ensuring Quality and Rigor: A Few Lessons Learned from a National Evaluation Study

Authors: Dan Kaczynski (Central Michigan University), Ed Miller (U.S. Army), Melissa A. Kelly (University of Illinois at Chicago)

Setting Up the Project: Research Design Processes and Entry to the Field
Given the complex scope of this national evaluation, which followed 52 projects, the study design required considerable attention before entry into the field. An important aspect of the longitudinal evaluation study was to test various service delivery models intended to help youth offenders and other vulnerable youths prepare for jobs that paid a decent wage and offered career potential. As originally envisioned, the evaluation's primary goal was to assess the effectiveness of the Youth Offender Demonstration Project (YODP) at providing core reentry services, workforce development, and additional needed services to youth offenders, gang members, and youths at risk of gang involvement. This goal also included assessing the outcomes of the developmental and support activities implemented at each project.

At the outset of the projects, the U.S. Department of Labor (DOL) suspected that the effort would be difficult because of the varied needs of the youths and the range of programs and services established to address them. Those needs included assistance with homelessness, drug and alcohol addiction, poor educational attainment, and weak family relationships. Services provided to the youths included employment, education, training, alcohol and drug abuse interventions, individual and family counseling, antigang activities, and recreation. Each project was initiated in one of three rounds and evaluated accordingly. In Round I (1999 to 2001), DOL awarded $13 million to a cohort of 14 entities that included states, counties, cities, and nonprofit organizations. The awards were for 24 months: 6 months for planning and 18 months for operations. The second cohort, initiated in Round II (2001 to 2003), consisted of nine newly awarded entities that received a total of $8.2 million, along with one-year extensions for 10 of the Round I projects. The grant period for the new awards was 30 months (6 months for planning and 24 months for operations). In Round III (2002 to 2005), DOL awarded a total of $11.5 million to a cohort of 29 communities. As with the Round II projects, the grant period was 30 months.

Evaluation Purposes
As the demonstration progressed, the evaluation team coped with three distinctly different evaluation purposes. The Round I evaluation was a process evaluation of 11 of the 14 sites, which operated in large and small communities. Research and Evaluation Associates, Inc. (REA), located in Chapel Hill, NC, received a contract to provide a process evaluation and technical assistance to those 11 projects. Another firm received a contract to conduct an outcomes evaluation of programs run by three detention facilities (that evaluation is not discussed in this report).

During this phase, REA's evaluation was intended to track the implementation progress of the sites for internal use by DOL staff. The Round II evaluation was primarily a formative evaluation of the nine sites in that cohort, which operated in large and small communities and in a confinement facility. The evaluation team sought to empower the projects by helping them implement the demonstration according to a model prescribed by DOL; REA again provided both evaluation and technical assistance services to the projects. In Round III, the evaluation used case study methodology to examine a purposive sample of projects intensively and to conduct focused studies of unique program features. In addition, several projects from all three cohorts were selected for an outcome study.

Evaluation Sponsorship
The evaluation team also dealt with changes in perspective among the sponsor and various DOL stakeholders, along with shifts in project responsibility between internal DOL offices. Initially, the evaluation was managed by the Office of Policy Development, Evaluation and Research (OPDER), the office that handled demonstration duties. For Round III, stewardship of the evaluation passed to the Office of Youth Services (OYS), a primarily programmatic office. With these shifts in responsibility came dissimilar motives for conducting the evaluation: the program office (OYS) intended to evaluate program outcomes, whereas the demonstration office (OPDER) sought to use the evaluation for implementation and project improvement. OPDER was essentially interested in identifying promising practices and building knowledge. In contrast, OYS was less interested in knowledge building and focused on getting programs implemented and operating at a point where they could sustain themselves. The shifts among these driving forces resulted in significant changes in the evaluation design across the rounds of the evaluation.
