Title: Youth Offender Program Evaluation: Ensuring Quality and Rigor: A Few Lessons Learned from a National Evaluation Study

Authors: Dan Kaczynski (Central Michigan University), Ed Miller (U.S. Army), Melissa A. Kelly (University of Illinois at Chicago)

The data

Round I
In Round I the evaluation's major purpose was to study each project's implementation process and assess its effectiveness in building upon existing programs and systems to reach its goals. Three broad objectives guided the evaluation: (a) provide feedback on the extent to which program activities were being carried out as planned, (b) assess the extent to which program participants carried out their roles, and (c) provide a record of the program as actually implemented and how it may have differed from what was intended. Essentially, the evaluators were tasked with identifying promising practices within the projects. Evidence used to identify and evaluate these practices included both qualitative and quantitative data. The qualitative data provided contextual information that was particularly valuable for identifying where linkages between organizations failed and where program components were poorly implemented or not implemented at all. The quantitative data included demographic information and outcome data, such as the number of job placements, which provided measures of how well programs were achieving their objectives and goals.

During data collection the evaluation team members made three 2-day visits to each of the 11 projects over an 18-month period. The evaluators collected data about significant changes in project plans, contextual changes, and unexpected consequences resulting from the projects. Data were also collected to document barriers, challenges, and successes for each project. An important part of the analysis was identifying lessons learned during the course of implementation, with special attention given to lessons with broader implications for national policy. The findings from the evaluation were primarily used to inform the U.S. Department of Labor (DOL) of the progress the projects were making toward implementation. The evaluation team was not allowed to share information with either the projects or the technical assistance team. DOL also used the evaluation's results to develop the Public Management Model for State and Local Workforce Agencies (PMM), which became the implementation model required for subsequent projects.

Round II
The Round II evaluation was likewise framed by the need to assess the implementation of each project, and its main objectives were the same as in Round I. Data collection drew upon several methodologies, including the quantitative and qualitative approaches proposed by Wholey, Hatry, and Newcomer (1994) and the use of performance monitoring systems in government programs (Swiss, 1991). The evaluation team drew upon an array of data sources, including observations, unstructured interviews, systems analysis, document reviews, data file extraction, and information exchange with the technical assistance team. As in Round I, the evaluators collected data on significant changes in project plans, contextual changes, and unexpected consequences within the nine project sites. Data were also collected to document barriers, challenges, and successes that had surfaced within the projects.

One of the key factors that shaped data collection in Round II was DOL's development and adoption of the PMM. Based on Richard Nathan's (1988) work on systems change, the model led DOL to hypothesize that projects that provided an array of workforce and re-entry services tailored to the needs of youth, and that demonstrated good management practices (including data collection and analysis), would develop a continuous improvement loop. In addition to providing a good indicator of the projects' progress, the PMM guided data collection and analysis by focusing attention on the organizational and systems dimensions of each project's implementation.

Round III
The main intent of the evaluation in Round III was to determine what had been learned about how best to help youth offenders, and youth at risk of court involvement, break the cycle of crime and juvenile delinquency. Because of cost and time constraints, the evaluation team was unable to adopt a formative approach. Instead, the team used a case study approach to examine a purposive sample drawn from 29 projects. The sample consisted of eight Round III sites selected as intrinsic cases because the projects themselves were of particular interest, and six Round II and Round III sites selected as instrumental cases because of interest in a particular issue, such as how the projects used an employment bonus to retain youth, or in a unique component of each project. Data collection was conducted primarily during site visits lasting 8 to 10 days at each of the selected sites. While preparing the case studies, the evaluation team collected data to describe, explain, and explore the dynamics between youths and their families, the projects, and the community. Specifically, the studies considered how these dynamics affected the likelihood that youths would receive appropriate workforce development, re-entry, and supportive services; prepare for employment; and avoid further involvement with the justice system. For each evaluation question and subquestion, the evaluation team identified measures, indicators, outputs, outcomes, and dimensions that represented a range of quantitative and qualitative data. Across the case studies, the data sources and collection strategies included:

    1. Direct observations of project advisory board meetings (especially efforts to revise plans) and of program operations
    2. Unstructured and semi-structured one-on-one and group interviews with program managers, front-line staff, youth, parents, and community representatives during visits to project sites
    3. Systems analysis of the organizations and structures that supported or affected project development and implementation, such as community-based organizations, schools, courts, and employment and training programs
    4. Document reviews of artifacts such as project proposals, planning session documents, needs/strengths assessments, strategic and implementation plans, case files, self-assessments, and records of court involvement by youths
    5. Collection of data from listservs created for discussions among project participants, including staff, partners, and youth participants
    6. Review of management information system records, including abstractions of data from project records and standardized reports about the outcomes for members of the target population
    7. Review of program documentation, including individual project progress reports and other relevant research reports and findings
    8. Exchange of information with the technical assistance team and the DOL staff
    9. Telephone calls, email, and other correspondence with program staff at each site
Not all of the data collection strategies applied to every case study. The specific strategies used in each case study varied with the nature of the evaluation questions and with whether the case study was intrinsic or instrumental. For the intrinsic case studies, evaluators attempted to answer all of the evaluation questions; for the instrumental case studies, only a narrow range of evaluation questions was explored during fieldwork and subsequent follow-up. A fundamental requirement for the data collection strategies in both types of case studies was the need to establish credibility and rapport with the projects' staff and administrators, the youth and their families, and the communities or neighborhoods within which the projects operated.
