The author(s) shown below used Federal funds provided by the U.S.
Department of Justice and prepared the following final report:
Document Title: Training Evaluation Model: Evaluating and
Improving Criminal Justice Training
Author(s): Kelly Bradley, Edward Connors
Document No.: 244478
Date Received: December 2013
Award Number: 2003-DD-BX-K
This report has not been published by the U.S. Department of Justice.
To provide better customer service, NCJRS has made this Federally-
funded grant report available electronically.
Opinions or points of view expressed are those
of the author(s) and do not necessarily reflect
the official position or policies of the U.S.
Department of Justice.
Institute for Law and Justice
Alexandria, Virginia
Training Evaluation Model:
Evaluating and Improving Criminal
Justice Training
Final Report
September 2007
Submitted to
National Institute of Justice
Prepared by
Kelly Bradley
Edward Connors
Institute for Law and Justice
Table of Contents

Chapter 1: Project Background and Report Overview
- Project Summary
- Need for a Criminal Justice Training Evaluation Model
- Overview of the Report

Chapter 2: Planning for Evaluations
- Types of Evaluations
- Evaluation Planning Steps
  - Identify Program Goals, Objectives, and Evaluation Questions
  - Develop Conceptual Framework and Logic Model
  - Design Evaluation Methodology
  - Conduct the Evaluation
  - Analyze and Communicate Evaluation Results
- Evaluating Criminal Justice Training Programs
  - Opportunities for Control and Comparison Groups
  - Challenges in Evaluating Criminal Justice Training

Chapter 3: Factors That Contribute to Successful Practitioner Training Outcomes
- Training Objectives
- Adult Learning Concepts
- Instructional Methods
- Practical Training Matters
- Facilitation Skills
  - Communication Skills
  - Active Listening
  - Body Language
  - Sensitivity to Adult Students' Cultural Diversity

Chapter 4: Criminal Justice Training Evaluation Model
- Kirkpatrick's Training Evaluation Model
- Customizing and Expanding on Kirkpatrick's Evaluation Model for Criminal Justice Training
  - Conduct Needs Assessment
  - Design Training Plan
  - Develop and Test the Curriculum
  - Deliver the Curriculum
  - Evaluate the Training and Trainers and Revise

Chapter 5: Project Methodology
- Key Decision Processes for Site Selection
- Overview of Methods

Chapter 6: Cross-site Comparisons and Findings
- Summary of the Training Evaluation Model's Applications
  - Needs Assessment
  - Training Plan
  - Develop and Test Curriculum
  - Pilot Test
  - Trainer Selection
  - Training Course Evaluation
- Conclusions
- Recommendations and Lessons Learned: Tips for Evaluating and Improving Criminal Justice Training
- Costs of Training

Chapter 7: National White Collar Crime Center's Foundations of Intelligence Analysis Training
- The National White Collar Crime Center
  - History and Background
  - Center Services
- Review of the Intelligence Literature
  - Intelligence-led Policing
  - National Intelligence Plan
  - Core Standards
- Foundations of Intelligence Analysis Training
  - Program Overview
- Evaluation Methodology
  - Evaluation Questions
  - Data Collection Methods and Framework
  - Study Strengths and Weaknesses
- Evaluation Findings
  - Participant Reaction
  - Knowledge and Skills Gained
  - Behavior Changes
- Discussion
  - Strengths of the Course
  - Recommendations for Change
- APPENDIX 7-A: FIAT Development SME Participants
- APPENDIX 7-B: NW3C FIAT Course Training Evaluation Materials
- APPENDIX 7-C: Pre-Post FIAT Participant Self-assessment of Course Comfort Level
- APPENDIX 7-D: Matched Pairs T-test Results of Pre/Post FIAT Course Comfort Level

Chapter 8: Simon Wiesenthal Center's Tools for Tolerance National Institutes Against Hate Crimes and Terrorism Training
- Simon Wiesenthal Center
- Review of Hate Crimes and Terrorism Literature
  - Hate Crime Defined
  - Statistics
  - Terrorism
  - Training
- Teaching Tools for Tolerance
  - Tools for Tolerance National Institutes Against Hate Crimes and Terrorism
- Evaluation Methodology
  - Evaluation Questions
  - Data Collection Methods and Tools
  - Strengths and Weaknesses
- Evaluation Findings
  - Participant Reaction
  - Learning/Knowledge Gained
  - Attitude and Behavior Changes
  - Organizational Impact
- Discussion
- APPENDIX 8-A: SWC Training Evaluation Materials
- APPENDIX 8-B: Case Study of Monmouth County, New Jersey
- APPENDIX 8-C: Case Study of Madison, Wisconsin

Chapter 9: National Corrections and Law Enforcement Training and Technology Center's Advanced Leadership Training for Law Enforcement, Corrections, and Security Officers
- Introduction
  - Context for Evaluation
  - Overview of Literature Relevant to the Training
  - Research Questions
- Method
  - Trainings and Participants
  - Design, Instrumentation, and Data Collection
- Results
  - Level 1 Reaction Results
  - Level 2 Knowledge Results
  - Level 3 Behavior Change Results
  - Level 4 Organizational Impact Results
- Discussion
  - Main Findings
  - Strengths, Limitations, and Recommendations
- APPENDIX 9-A: NCLETTC Study Timeline, Milestone, and Workplan Chart
- APPENDIX 9-B: NCLETTC Training Evaluation Materials

Chapter 10: National Judicial College's Civil Mediation Training
- National Judicial College
- Literature Review
- Civil Mediation Training
  - Program Overview
- Evaluation Methodology
  - Evaluation Questions
  - Data Collection Methods and Framework
- Evaluation Findings
  - Participant Reaction
  - Knowledge and Skills Gained
  - Behavior Change
- Discussion
- APPENDIX 10-A: NJC Training Evaluation Materials

Appendices
- Appendix A: NW3C FIAT Course: Instructor Classroom Training Observation Assessment Instrument
- Appendix B: Training Evaluation Model Project Evaluability Questions for Site Selection Screening
- Appendix C: Synthesis Report on Evaluability Assessments of Training Programs
- Appendix D: NW3C FIAT Training Evaluation Plan
- Appendix E: Memorandum of Understanding Between Institute for Law and Justice and National White Collar Crime Center

References
Chapter 1
Project Background and Overview

The purpose of this project was to produce a training evaluation model that can guide evaluations of a wide range of criminal justice training programs. The study was conducted by the Institute for Law and Justice in partnership with Eastern Kentucky University. It was sponsored by the National Institute of Justice (NIJ) with funding from the Bureau of Justice Assistance (BJA). The project's overall goal was to help the Office of Justice Programs (OJP), U.S. Department of Justice, achieve more consistency and control over the hundreds of training programs for which it provides funding and, at the same time, to increase the capacity of other criminal justice programs—federal, state, and local—to conduct their own training evaluations.
Project Summary

This study had two major objectives: (1) develop a flexible model for evaluating criminal justice training programs, and (2) test the model by applying it in the field to four training programs. The four programs that were evaluated to test the model had received BJA discretionary grant funding for training (commonly known as "earmarks"). They were selected in part because they permitted a test of the model in diverse environments: the programs were different in terms of learning objectives, intended audiences, instructional methods, subject matter, and other factors. The four participating training programs were
- Foundations of Intelligence Analysis Training (FIAT) offered by the National White Collar Crime Center. This was a basic analytical intelligence training curriculum for law enforcement and regulatory personnel.
- Tools for Tolerance Institutes offered by the Simon Wiesenthal Center. The purpose of this training was to give participants new perspectives on hate crime and terrorist acts, help them form multi-agency collaborations, and foster the development of strategic action plans.
- Advanced Leadership for Law Enforcement and Corrections Professionals offered by the National Corrections and Law Enforcement Training and Technology Center. This course was focused on teaching values-based leadership skills to agency leaders who are responsible for first responders and correctional and security officers.
- Civil Mediation offered by the National Judicial College. This course taught participants the knowledge and skills needed to mediate civil cases.
Evaluations are essential for determining whether OJP-funded training efforts are effective. NIJ has a responsibility to collaborate with other OJP agencies to support such evaluations. It sought an evaluation model that was flexible and practical, yet rigorous enough to guide evaluation planning where experimental or quasi-experimental designs were feasible.
Most training programs do assess participants’ immediate reactions—they conduct what Kirkpatrick has called a “level one” evaluation—but far fewer programs or program sponsors are able to answer the more difficult evaluation questions: What specific knowledge, skills, or changes in attitude did participants gain as a result of the training? Were participants able to apply what they learned back on the job? Did their employers see positive changes in their organizations as a result of having invested in employee training? (Kirkpatrick 1998)
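These questions track Kirkpatrick's four levels, which the site chapters of this report also use (reaction, learning, behavior change, and organizational impact). A minimal sketch of the mapping, with the questions paraphrased from the paragraph above:

```python
# Kirkpatrick's four evaluation levels, paired with the kind of question
# each level answers (paraphrased from the discussion above).
kirkpatrick_levels = {
    1: ("Reaction", "How satisfied were participants with the training?"),
    2: ("Learning", "What knowledge, skills, or attitude changes resulted?"),
    3: ("Behavior", "Did participants apply what they learned on the job?"),
    4: ("Results", "Did employers see positive organizational changes?"),
}

for level, (name, question) in kirkpatrick_levels.items():
    print(f"Level {level} ({name}): {question}")
```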
NIJ recognized that a more consistent approach to training evaluation was needed both to assist OJP agencies and Congress in making wise funding decisions (avoid funding ineffective training programs) and to assist grantees in conducting meaningful evaluations that could help them improve their training and document effectiveness. Although the Kirkpatrick model (explained in Chapter 4) offered an excellent framework for the planning of training evaluations of differing levels of complexity, it had not been fully explored in the criminal justice context. It was important to determine how such a model should best be modified or expanded to improve criminal justice training and training evaluations.
Overview of the Report

The audiences for this report include sponsors of criminal justice training programs; researchers and evaluators; and training program directors and trainers who may or may not have strong backgrounds in evaluation methodology. The chapter-by-chapter guide that follows is intended to help readers turn to portions of the report that may be of special interest to them.
Chapter 2: Planning for Evaluations

This chapter's purpose is to help "level the playing field" for readers who are not evaluation professionals by providing information about evaluation theory and design. It gives training developers and sponsors some of the tools they need to work effectively with evaluators. This is important because collaboration in the early stages of planning for a training program produces the strongest possible evaluation design and helps ensure that the design can actually be
executed. The chapter can also serve as a guideline for evaluators as they consider how to discuss evaluation planning with their clients.
Chapter 3: Factors That Contribute to Successful Practitioner Training Outcomes

This chapter reviews the importance of applying adult learning concepts to criminal justice curriculum development; discusses important considerations for matching course content with instructional methods and media; and explains how the learning environment contributes to successful outcomes. The chapter is intended to (1) help training developers increase the likelihood that they will achieve their training objectives, and (2) aid both training professionals and evaluators in interpreting evaluation results.
Chapter 4: Criminal Justice Training Evaluation Model

After reviewing the key features of the Kirkpatrick training evaluation model, this chapter explains how the model was enhanced in this project and then presents the complete, revised model in a step-by-step discussion of each of its elements. It covers simple assessments of participant satisfaction; evaluations of knowledge, skills, and attitudes learned; more complex and demanding evaluations of behavioral and organizational changes that may be attributable to the training experience; and the often overlooked task of evaluating instructors objectively.
Chapter 5: Project Methodology

This chapter first provides a detailed discussion of the criteria that guided the researchers in conducting ten evaluability assessments and in selecting the four training programs that participated in the evaluation. In addition, it provides an overview of the methodologies employed in each of the four evaluations. More detailed discussions of methodology and related issues are found in the individual evaluation reports (Chapters 7 through 10).
Chapter 6: Cross-site Comparisons and Findings

This chapter summarizes the key features of the criminal justice training evaluation model that were tested; discusses similarities and differences among the four training programs that participated in the project's test of the model; and presents our findings with respect to the applicability of the model and what we learned about outcome evaluations in terms of learning, behavior change, and where feasible, organizational impact. The chapter also includes policy recommendations for improving training and training evaluation and provides lessons learned for
Chapter 2
Planning for Evaluations

Evaluating criminal justice training programs—like evaluating any program—involves systematically assessing whether a program operates the way it was intended and whether it has produced the intended outcomes. The best evaluations are planned concurrently with the program's implementation; however, most evaluations are done after the program has been operating for a while, at its conclusion, or at a future time after the program has ended. There are many different approaches to planning an evaluation of a training program, but the strongest evaluation is one that is planned during the curriculum development phase of the training, with the evaluation taking place concurrently with the training.
Stakeholders in the training evaluation process—for example, funding agencies, associations, training program directors, curriculum developers, trainers, and recipients of training—may not have the same understanding of evaluation methodology that professional evaluators do. This chapter provides these stakeholders with information they will need to communicate effectively with evaluators. The chapter first provides background information on various types of evaluations—their purposes and the questions one could expect to answer when choosing one type of evaluation over another. Next, it provides a detailed discussion of the steps taken in planning an evaluation. It concludes by pointing out some of the opportunities and challenges involved in conducting training evaluations. With this knowledge base in common, evaluators and program personnel can get more out of their joint planning efforts.
Types of Evaluations

Program evaluation is defined by the General Accounting Office (GAO)[1] as "…individual systematic studies conducted periodically or on an ad hoc basis to assess how well a program is working" (U.S. GAO 1998, p. 3). An evaluation should be purposive, analytic, and empirical (Maxfield 2001). That is, its purpose should be known, it should be based on logic, and the results should be based on experience and data.
[1] This office is now the Government Accountability Office.
Evaluation relies on social science research methods to examine whether a program is operating the way it was intended (known as process or implementation evaluation) and whether it has produced the intended program effects (referred to as outcome or impact evaluation). It provides an in-depth assessment of program need, performance, or benefit. Types of evaluations include:

- Needs assessment—answers questions about the conditions a program is intended to address and the need for the program
- Assessment of program theory—answers questions about program conceptualization and design
- Assessment of program process—answers questions about program activities, operation, implementation, and service delivery (process evaluation or implementation evaluation)
- Impact assessment—answers questions about program outcomes and impacts (impact evaluation or outcome evaluation)
- Efficiency assessment—answers questions about program cost and cost-effectiveness (sometimes referred to as return on investment (ROI) or cost-benefit analysis)

The most common program evaluations examine both the process of a project (how it is being implemented) and the impact of a project (the consequences of the project for its participants). It is possible to conduct a process evaluation of a project (how it was implemented) without measuring the project's impact. However, it is not possible to conduct an impact evaluation of a program without first completing a process evaluation, because to assess the impact of a project, we need to first systematically assess what is happening inside the project. For example, if the evaluation finds differing outcomes across project participants, a process evaluation will help indicate whether all participants actually received equivalent services, were served by the same staff, and attended the program regularly.
A process or formative evaluation assesses the fidelity and effectiveness of a program’s implementation by focusing on the activities and operations of the program (Rossi, Lipsey, & Freeman 2004). In essence, a process evaluation describes how a project was implemented, how it operates, and whether it is operating as stakeholders intended. Issues commonly investigated by a process evaluation include the following:
- What planning processes led to the application for program funding?
- Who was involved in the planning process? Were any key stakeholders omitted?
about improving programs, projects, and components rather than decisions about whether to terminate a program or project. Decisionmakers may start out with global questions (“Is the program worth continuing?”) but they often receive qualified results (“These are good effects, but...”) that lead them to ways to modify present practice.
Evaluation Planning Steps

Planning a program evaluation depends on the specific questions that the evaluation poses.[2] Before deciding on a plan, an evaluator needs to know the following:
- What the program stakeholders or funding providers seek from the evaluation
- How the results will be used
- Timing, resources, and budget

Before an evaluation can be designed, it is important to decide what type of evaluation is best suited to your goals. That is, what is the purpose of the evaluation? Equally important is determining how the evaluation results will be used. The types of evaluation discussed earlier are shown in Exhibit 2-1. As the exhibit suggests, choosing the most fitting type of evaluation involves being clear on the evaluation's purpose and the related questions that could reasonably be answered.
Exhibit 2-1: Evaluation Purpose, Questions, and Type[3]

Evaluation Purpose                             | Question to Be Asked                                                                              | Type of Evaluation
Assessment of needs and determination of goals | To what extent are program needs and standards being met? What must be done to meet those needs? | Needs Assessment
Design of program alternatives                 | What services could be used to produce the desired changes?                                       | Assessment of Program Theory
Review of program operation                    | Is the program operating as planned?                                                              | Process Evaluation
Assessment of program outcomes                 | Is the program having the desired effects?                                                        | Impact/Outcome Evaluation
Assessment of program efficiency               | Are program effects attained at a reasonable cost?                                                | Cost Benefit/Effectiveness Analysis
[2] An understandable evaluation guide and planning steps are presented on the BJA Center for Program Evaluation website at www.ojp.usdoj.gov/BJA/evaluation/
[3] Source: Adapted from Rossi, Lipsey, & Freeman (2004, p. 40).
The basic steps in planning an evaluation are discussed below and include identifying program goals and objectives, deciding upon evaluation questions, developing a conceptual framework and logic model, designing a methodology, conducting the evaluation, and communicating the results.
Identify Program Goals, Objectives, and Evaluation Questions
An evaluation begins with the identification of program goals, objectives, and specific evaluation questions. Key stakeholders need to agree on the short- and long-term goals of the program (e.g., "train police dispatchers to use new computer system"). While the overall goals may not be measurable in specific, quantitative terms, the clearest evaluation findings are based on specific objectives and quantitative language. Objectives are focused, operationalized measures of the goals (e.g., "50 percent increase in the number of police dispatchers using the new computer system by the end of the year").
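To make the distinction concrete, the short sketch below checks an objective of the kind just quoted. The baseline and year-end counts are invented for illustration; they do not come from this report.

```python
# Hypothetical check of the objective "50 percent increase in the number of
# police dispatchers using the new computer system by the end of the year."
baseline_users = 40   # dispatchers using the system at baseline (invented)
year_end_users = 65   # dispatchers using the system at year end (invented)

pct_increase = (year_end_users - baseline_users) / baseline_users * 100
objective_met = pct_increase >= 50.0

print(f"Increase: {pct_increase:.1f}% -> objective met: {objective_met}")
# Prints: Increase: 62.5% -> objective met: True
```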
Formulating effective evaluation questions is critical to the success of an evaluation. The key question(s) to be answered by the evaluation may relate to program process, outcomes, the links between processes and outcomes, or explanations of why the program reached its observed level of effectiveness. The best questions are those that matter to key decisionmakers and stakeholders, while allowing for results that are useful, interpretable, and complete (Rossi, Lipsey, & Freeman 2004).
Develop Conceptual Framework and Logic Model
A conceptual framework (also known as a statement of theory, theory of program logic, or theory of program action) lays out the connections between the program strategy and tactics and the desired outcomes (Roehl 2002). A logic model is the graphical depiction of the conceptual framework. Developing a conceptual framework and logic model greatly simplifies designing the evaluation because it helps to identify which evaluation questions can and should be answered and which may not be feasible to address.
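To illustrate the idea, the sketch below lays out a logic model for a hypothetical dispatcher-training program like the one used as an example above. Every entry is an invented placeholder, not an element of any program evaluated in this project.

```python
# A minimal logic model for a hypothetical dispatcher-training program.
# Each stage should connect logically to the next; evaluation questions
# can then be mapped onto specific links in the chain.
logic_model = {
    "inputs":     ["trainers", "curriculum", "new computer system", "funding"],
    "activities": ["classroom instruction", "hands-on system exercises"],
    "outputs":    ["dispatchers trained", "hours of instruction delivered"],
    "outcomes":   ["dispatchers use the new system on the job"],
    "impacts":    ["faster, more accurate call dispatching"],
}

for stage, elements in logic_model.items():
    print(f"{stage:>10}: {', '.join(elements)}")
```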
Care should be taken when identifying a program’s theory of program logic or action to avoid basing a program evaluation on faulty program logic flow (e.g., starting from weak or questionable premises, making too many leaps of faith in program expectations, or being too ambitious in what a program can accomplish using the means at hand). If the logic model is
Exhibit 2-2: Experimental Design

                  Pre-program    Training    Post-test
                  Measurement                Measurement
Treatment              O            X             O
Control                O                          O
                      T1           T2            T3

Participants are randomly assigned to the treatment and control groups.
O = Measurement; X = Intervention; T = Time period
With sufficient pre-planning, it is possible for training evaluations to use an experimental design. For instance, evaluators can take advantage of “wait-lists” for training and randomly assign half of the list to the training and half as a control group. The control group would not receive the training during the experimental period, but would receive it at the conclusion of the data collection phase of the evaluation. Consequently, the evaluators are able to conduct a rigorous evaluation, and all wait-listed participants are able to take the course as desired.
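A minimal sketch of that wait-list procedure follows; the applicant names are invented, and the list is simply shuffled and split in half.

```python
import random

# Hypothetical wait-list of training applicants (invented names).
wait_list = ["Alvarez", "Brown", "Chen", "Davis", "Evans",
             "Foster", "Garcia", "Hughes", "Ibrahim", "Jones"]

rng = random.Random(42)        # fixed seed so the assignment is reproducible
shuffled = wait_list[:]
rng.shuffle(shuffled)

half = len(shuffled) // 2
treatment = shuffled[:half]    # trained during the experimental period
control = shuffled[half:]      # trained after data collection ends

print("Treatment:", sorted(treatment))
print("Control:  ", sorted(control))
```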
Individuals are not the only entities that can be randomly assigned in a randomized controlled trial (RCT); workplaces, schools, or even entire communities can be randomly assigned. For example, in one evaluation of the Drug Abuse Resistance Education (D.A.R.E.) program, entire schools were paired by matching them on a number of factors. One school in each pair was randomly assigned to receive the program, while the other school served as the control (Rosenbaum & Hanson 1998).
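The matched-pair assignment described for the D.A.R.E. evaluation can be sketched the same way; the school pairs here are invented, and a coin flip decides which member of each pair receives the program.

```python
import random

# Hypothetical school pairs, matched on enrollment, demographics, and
# other factors (all invented for illustration).
pairs = [("School A", "School B"),
         ("School C", "School D"),
         ("School E", "School F")]

rng = random.Random(7)
assignments = []
for first, second in pairs:
    # Randomly pick one school in each pair to receive the program.
    if rng.random() < 0.5:
        assignments.append({"program": first, "control": second})
    else:
        assignments.append({"program": second, "control": first})

for a in assignments:
    print(f"program: {a['program']}   control: {a['control']}")
```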
However, it can often be difficult to randomly assign participants to a treatment or a control group. In these situations, the next best thing is to use a quasi-experimental design. In quasi-experiments, participants are not randomly assigned to a treatment or control group. Instead, the evaluator makes use of a real-life situation to form the groups, such as comparing
two police academy classes. In this example, the evaluator cannot randomly assign police recruits to the academy class but can assume that the classes are reasonably similar, so that a comparison is possible. As in the experimental design, one group receives the intervention and one group does not (Exhibit 2-3).
Exhibit 2-3: Quasi-experimental Design with Pre-post Non-equivalent Comparison Groups

                  Pre-program    Training    Post-test
                  Measurement                Measurement
Treatment              O            X             O
Comparison             O                          O
                      T1           T2            T3

Participants are not randomly assigned; the groups are formed from existing situations.
O = Measurement; X = Intervention; T = Time period
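One common way to analyze data from such a design is to compare the two groups' gain scores. The sketch below does this with an independent-samples t-test; the pre/post test scores for the two academy classes are invented, and scipy is assumed to be available.

```python
from scipy import stats

# Invented pre/post knowledge-test scores for two academy classes.
trained_pre  = [62, 58, 70, 65, 61, 67, 59, 64]
trained_post = [78, 74, 85, 80, 75, 82, 73, 79]
compare_pre  = [63, 60, 68, 66, 59, 65, 62, 61]
compare_post = [66, 61, 70, 69, 60, 68, 64, 63]

# Gain score: post-test minus pre-test for each recruit.
trained_gain = [post - pre for pre, post in zip(trained_pre, trained_post)]
compare_gain = [post - pre for pre, post in zip(compare_pre, compare_post)]

# Independent-samples t-test on the gains of the two (non-randomized) groups.
t, p = stats.ttest_ind(trained_gain, compare_gain)
print(f"t = {t:.2f}, p = {p:.4f}")
```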
Non-experimental designs include both reflexive designs and other types of data collection that typically rely upon qualitative data sources, such as case studies, interviews, and focus groups (see Exhibits 2-4 and 2-5). Reflexive designs involve comparing the targets with themselves (also known as pre-post or before-and-after designs). While these designs are the most frequently used in evaluation, they are the least rigorous.
Exhibit 2-4: Non-experimental One-group Pre-post Design

                  Pre-program    Training    Post-test
                  Measurement                Measurement
Treatment              O            X             O
                      T1           T2            T3
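Data from a one-group pre-post design are typically analyzed with a matched-pairs t-test, the same technique reported for the FIAT course in Appendix 7-D. A minimal sketch, with invented self-assessment scores and scipy assumed available:

```python
from scipy import stats

# Invented pre- and post-course self-assessment scores (1-5 scale)
# for ten participants in a one-group pre-post design.
pre  = [2, 3, 2, 4, 3, 2, 3, 2, 4, 3]
post = [4, 4, 3, 5, 4, 3, 4, 4, 5, 4]

# Matched-pairs (paired-samples) t-test: each participant's post-course
# score is compared with his or her own pre-course score.
t, p = stats.ttest_rel(pre, post)
print(f"t = {t:.2f}, p = {p:.4f}")
```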