
This Document Contains Chapters 11 to 12

Chapter 11: Training Evaluation

Essential Outcome

After completing the lesson on this chapter, if nothing else, students should comprehend the challenge and complexity of training evaluation and, at the same time, appreciate its importance within the context of the instructional systems design (ISD) framework.

Chapter Learning Outcomes

After reading this chapter, you should be able to:

• define “training evaluation” and the main reasons for conducting evaluations
• discuss the barriers to evaluation and the factors that affect whether it is conducted
• describe the different types of evaluations
• discuss the models of training evaluation and their interrelationships
• describe the main variables to measure in a training evaluation and how they are measured
• discuss the types of designs for training evaluation, their requirements, their limits, and when they should be used

Key Concepts: HOW DOES THIS CHAPTER CONNECT TO THE WORLD OF TRAINING AND DEVELOPMENT?

1. The purpose of training evaluation is to assess the value and worthiness of the training program. This helps decision-makers determine whether the program needs improvement, whether it achieves the expected results or solves the problem it was intended to address, and ultimately whether it was worth the investment.

2. In spite of its importance, conducting training evaluation can be difficult, as there are many barriers and obstacles to consider and overcome, both pragmatic (cost, time, difficulty in collecting data) and political (lack of management support, trainers themselves feeling threatened by the potential results of the evaluation).

3. Training evaluations come in many types (as described on pages 329–330) and can be classified by the focus of the data collected (trainee perceptions, psychological forces, or the work environment) or by the purpose of the evaluation.
This may be formative (providing data about various aspects of the training program, such as the training materials or the trainer’s delivery) or summative (providing data about the worthiness or effectiveness of the training program itself).

4. There are numerous models of training evaluation. The most commonly used are the Kirkpatrick 4-Level Hierarchical Model and the more recently introduced COMA Model (Cognitive, Organizational, Motivational, and Attitudinal) and Decision-Based Evaluation (DBE) Model. Each model provides a useful framework for training evaluation and has its own strengths and limitations.

5. The variables that a training evaluation can attempt to assess are listed in Table 11.1 on pages 344–345 and include trainee reactions, learning, behaviour, motivation, and self-efficacy, as well as perceived and/or anticipated support for transfer. Other variables include organizational perceptions (linked to learning culture) and organizational results.

6. Numerous data collection designs exist, which can be categorized as non-experimental (no comparisons made), experimental (comparisons made to a randomly selected control group), or quasi-experimental (comparisons made to a non-randomly selected control group). Graphic illustrations of the various designs are depicted in Figure 11.3 on page 360.

Student Motivation: WHY SHOULD STUDENTS CARE?

While many students will be interested in how training evaluation helps to assess the return on investment (ROI) of training, expect that some in your class may not find the topic of this chapter particularly enthralling. Interestingly, this response to training evaluation reflects somewhat similar attitudes in the training field itself, something the authors allude to in the chapter, and which partially explains the rather low investment in actually conducting thorough training evaluation (see Figure 11.1 on page 335).
Notwithstanding, it is worthwhile to reinforce the importance of training evaluation to your students, as supported by the examples provided. While improvements in training evaluation methods are important to illustrate (e.g., the COMA and DBE models), at the level of this course, where most students are happy to gain a broader understanding and appreciation of the field of training and development, a focus on the easier-to-understand Kirkpatrick model (hence its lasting popularity in the field) is likely most appropriate.

Barriers to Learning: WHAT ARE SOME COMMON STUDENT MISCONCEPTIONS AND STUMBLING BLOCKS?

Students typically have difficulty distinguishing between summative and formative evaluation, so these concepts may need to be reinforced with the aid of a concrete example or two. Likewise, students often struggle with the terminology used in explaining expectancy theory (see page 353), especially the terms “valence,” “instrumentalities,” and “expectancies.” The example provided by the authors (employee absenteeism) is helpful in clarifying this formula. Finally, the last section of this chapter, which deals with data collection designs in training evaluation, may be challenging for many students, especially those who have not taken a research course before or who have little interest in data collection designs. The visuals in Figure 11.3 on page 360 are helpful, along with the descriptions and explanations that follow.

Engagement Strategies: WHAT CAN I DO IN CLASS?

1. For In-Class Exercise 1, on pages 363–364: Use student groups of 3–5 participants for this exercise. Have students create their list on a sheet of flipchart paper or on an overhead transparency so they can share their ideas with the class.

2. For In-Class Exercise 2, on page 364: Form new teams, and assign this exercise for them to discuss. Facilitate a class discussion to share their insights and conclusions.
As an alternative, the evaluation form used by your college or university could form the basis for this exercise. You may find this exercise consumes too much time if done completely in class; if so, consider assigning the work as a take-home assignment to be debriefed during the subsequent class.

3. For In-Class Exercise 3, on page 364: This exercise can be used as a basis for discussion with a learning partner or group. Alternatively, it can be given as an individual homework assignment.

Suggestions for Large Class Exercises

a) Ask students to bring in examples of blank evaluation forms from training in their organization (where possible) or provide the class with several blank training evaluation forms. Divide the class into groups of four and give each group a copy of an evaluation form. Ask students to determine what factors are being evaluated. Are they able to identify which method of evaluation is being used? Are there any key pieces of information missing? Have students share their findings with another group. Debrief for the class.

b) Conferences involve participation from many different groups, including those who provide hosting services and those who provide learning through sessions, workshops, or presentations. Divide the class into groups of four. Have each group take on the role of a party involved with the delivery of a conference. Have the students identify what data each of the groups would want to collect from conference participants. Identify which data are more closely related to the learning taking place at the conference. Discuss and debrief.

Suggestions for Technology-Enhanced Classrooms

a) Provide a demonstration of how to create an online survey (evaluation form) using a program such as Survey Monkey. Build the survey live in the classroom with input from students.
You may wish to tie this to group projects or presentations taking place within the course where an evaluation is required, and create the evaluation form to be used for those projects or presentations. Discuss the various options that programs like Survey Monkey provide and their benefit to a training manager.

b) Many retail stores and services use online surveys for customer feedback, with the link to the survey often provided on the customer’s receipt. Ask students to bring in receipts that ask for participation in an online survey (or provide these to students). Before accessing each survey, ask students to identify what they think it is meant to measure. Select a few of the surveys and take them as a class. Compare what each survey actually measures to what students expected it to measure. Critique the wording of the survey. How formal or informal is it? Discuss and debrief.

Suggestions for Internet Classes

a) A variety of online quiz and survey tools exist that have made creating and tabulating surveys much easier and faster. Have the students select a program (such as Survey Monkey) and create an evaluation form for a training course they have attended (or for the course they are in). Discuss how the use of online evaluation forms can make evaluation more effective, as well as any drawbacks to using them.

Assessment Tools

You may wish to make use of the Test Bank or PowerPoint slides, or, at the end of a class, ask a student to summarize the key points from the lesson.

Reflections on Teaching: HOW CAN I ASSESS MY OWN “PERFORMANCE”?

Good teaching requires the practice of ongoing self-assessment and reflection. At the completion of this lesson, you may find it helpful to reflect on the following and consider whether you want or need to make any adjustments for subsequent lessons.

1. What worked in this lesson? What didn’t?
2. Were students engaged?
Were they focused or did they go off on tangents?
3. Did I take steps to adequately assess student learning?
4. Did my assessments suggest that they understood the key concepts?
5. What (if anything) should I do differently next time?
6. How can I gather student feedback?
7. How can I use this feedback for continuous improvement of my teaching?

Additional Resources

Chapter Summary

This chapter reviewed the main purposes for evaluating training programs as well as the barriers that prevent training evaluation. Models of training evaluation were presented, contrasted, and critiqued. Although Kirkpatrick’s evaluation model is the most common and frequently used, the COMA model may be more appropriate, at least for formative evaluations. Also described were the variables required for an evaluation and some of the methods and techniques used to measure them. Whereas many of these are measured through questionnaires administered to trainees, their supervisors, or others, objective data can also be used; the advantages and disadvantages of each were discussed. The main types of data collection designs were also described, along with the limits to the interpretation they permit. The choice of data collection design, as with most aspects of training, was seen as a trade-off among costs, practicalities, and the information needs of management.

Web Links

• A comprehensive article on training evaluation (with many additional links) from BusinessBalls.com: http://www.businessballs.com/trainingprogramevaluation.htm

Case Incident: The Social Agency

Answers

1. What type of evaluation is appropriate for assessing the effectiveness of the program in the Social Agency case, and what data sources should be used?

Answer: Students should refer to the section on types of training evaluation (pages 338–339) and consider the focus of the data collected and the purpose of the evaluation.
Since the goal is to evaluate the effectiveness of the program, students should understand the difference between formative and summative evaluations, and be able to conclude that what is being asked for is a summative evaluation, which would involve collecting data from various sources including the trainees, their supervisors, and possibly other stakeholders who could comment on the implementation of the new service delivery system.

The appropriate type of evaluation for assessing the effectiveness of the program is a summative evaluation. This type of evaluation focuses on the overall effectiveness of the training program. Data should be collected from various sources, including:

• Trainees: To gather their feedback on the training.
• Supervisors: To provide insights on the application of new skills and changes in performance.
• Other Stakeholders: Who can comment on the implementation and impact of the new service delivery system.

2. According to Table 11.1 on page 344, what variables are relevant for evaluating the effectiveness of the training program, and when should measurements be taken?

Answer: To answer this question, students should reference Table 11.1 on page 344. They could make a case that each of the variables listed is relevant to the goal; depending upon the approach taken, the measurements should take place either immediately after the training is completed or after a sufficient lag time to allow for implementation before attempting to measure impact.

Variables relevant for evaluating the effectiveness of the training program include those related to the training’s impact on learning, behaviour, and results. Measurements should be taken:

• Immediately After Training: To assess immediate reactions and learning.
• After a Sufficient Lag Time: To evaluate the impact and implementation of the new service delivery system over time.

The choice of when to measure depends on the specific aspect of effectiveness being evaluated.

3.
What does “effectiveness” mean in the context of training evaluation, and how does it relate to training objectives and evaluation models?

Answer: In answering this question, students should refer back to their previous answers and pause to consider what “effectiveness” really means. This is an excellent time to reinforce the connection between evaluation and training objectives, as discussed in Chapter 5. Students should be able to conclude that “effectiveness” goes beyond the immediate reaction of trainees to the training and moves deeper into the levels of learning, behaviour, and results, as described in the exploration of the various models such as Kirkpatrick, COMA, and DBE.

In the context of training evaluation, “effectiveness” refers to more than just the immediate reactions of trainees to the training. It involves:

• Levels of Learning: Assessing whether trainees have acquired the intended knowledge and skills.
• Behaviour Change: Evaluating whether trainees are applying the new skills and knowledge in their work.
• Results: Measuring the impact of the training on organizational outcomes and performance.

Case Study: The Alcohol/Drug Abuse Prevention Program (ADAPP)

Answers

1. How should students approach the assignment or exercise related to evaluating the Alcohol/Drug Abuse Prevention Program (ADAPP), and what are the components of the question they need to address?

Answer: These questions are best delivered as either a homework assignment or an in-class team-based exercise. The three components students need to address are:

1. Selection and Justification of an Evaluation Model: Students should select one of the evaluation models discussed in the chapter and explain and justify their choice.
They may choose the Kirkpatrick model or another model that provides deeper insight into specific issues of evaluation (formative and summative).

2. Use of Variables from Table 11.1: Students should refer to Table 11.1 (page 344) to identify and justify the relevant variables for evaluating the program’s effectiveness. They should recommend appropriate mechanisms for data collection, such as questionnaires, observations, or organizational records.

3. Evaluation Design: Students should explain the benefits and challenges of using pre-post and time series designs versus a post-only approach. They should make a case for why a particular design is suitable given the high stakes of effectively implementing the ADAPP policy.

2. Which evaluation model should students consider for evaluating the ADAPP, and what justification should they provide for their choice?

Answer: Students can select any of the three models described in this chapter but should be required to explain and justify their selection. Some might feel more at ease with the Kirkpatrick model; however, others may see the value in the other two models in that they provide the opportunity to delve more deeply into specific issues or concerns as aspects of the evaluation (formative and summative). For instance:

• Kirkpatrick Model: This model is popular for its four levels of evaluation: Reaction, Learning, Behaviour, and Results. It is suitable for those who prefer a structured approach to evaluating immediate reactions and long-term outcomes.
• Alternative Models: The other models may delve more deeply into specific aspects of the evaluation, providing insight into formative and summative issues.

Students should explain and justify why their chosen model provides the best framework for assessing the ADAPP.
The justification should include how the selected model aligns with the goals of the program and the type of evaluation needed (formative or summative).

3. What variables from Table 11.1 should be considered for evaluating the ADAPP, and what mechanisms should be used for data collection?

Answer: Students should use Table 11.1 (page 344) as a resource when answering this question. They may make a valid case for any of the variables listed, but should be able to justify their recommendations along with the mechanisms (e.g., questionnaires, observations, organizational records). Variables that should be considered include:

• Trainee Reactions: To gauge initial impressions of the training.
• Learning Outcomes: To measure the knowledge and skills gained.
• Behaviour Change: To assess whether trainees are applying what they learned.
• Program Results: To evaluate the overall impact on organizational outcomes.

Mechanisms for data collection might include:

• Questionnaires: To gather feedback from trainees and stakeholders.
• Observations: To monitor changes in behaviour and implementation.
• Organizational Records: To track changes in relevant metrics or performance indicators.

Students should justify their choices of variables and mechanisms based on how well they address the goals of the evaluation.

4. What are the benefits and challenges of using pre-post and time series designs versus a post-only design for evaluating the ADAPP?

Answer: Students should be able to explain the benefits of pre-post and time series designs, but also appreciate the difficulties (including time and costs) of choosing these designs over the more common post-only approach.
As it is somewhat reasonable to conclude that the training is the primary factor influencing post-training behaviour and results, it would not be unreasonable for students to conclude that a post-only evaluation would suffice; however, as the stakes are high in terms of the desired outcome (effective implementation of the ADAPP policy), a strong case can also be made that at least a pre-post design should be used.

Benefits of Pre-Post Design:
• Baseline Comparison: Allows pre-training and post-training data to be compared, to assess changes attributable to the training.
• More Reliable Results: Provides a clearer indication of the training’s impact.

Benefits of Time Series Design:
• Trend Analysis: Captures changes over time, which helps in understanding the long-term impact and trends.

Challenges:
• Time and Costs: Pre-post and time series designs require more time and resources to collect and analyze data than a post-only design.
• Complexity: These designs are more complex to implement and analyze.

Post-Only Design:
• Simplicity: Easier and less costly to implement.
• Sufficiency: May be sufficient if the training is the primary factor influencing post-training behaviour and results.

Given the high stakes of effectively implementing the ADAPP policy, a strong case can be made for at least using a pre-post design to ensure the evaluation is thorough and reliable.

Flashback Answers

1. What are the conditions of transfer for the training directed at supervisors, and what issues might arise if there is little transfer of training?

Answer: Conditions of transfer:
• As the training is directed at supervisors, they should for the most part possess positive trainee characteristics such as adequate cognitive ability and self-efficacy.
Additionally, as the supervisors have been informed that they will be held directly responsible for enforcing the policy and may face sanctions (including immediate termination) if they don’t, trainee motivation ought to be quite high. Should there be little transfer of training, the problem may be with the training design, and changes might be required.

Conditions of transfer for the training directed at supervisors include:
• Positive Trainee Characteristics: Supervisors should possess adequate cognitive ability and self-efficacy, which are crucial for applying new skills.
• High Motivation: Supervisors are highly motivated, given their direct responsibility for enforcing the policy and the possibility of sanctions (including immediate termination) if they fail to do so.

If there is little transfer of training, the problem may lie in the training design. In such cases, changes to the training design might be required to improve the effectiveness of the training and ensure that the skills are applied on the job.

2. Why might e-learning be a suitable alternative for training in this scenario, and what features could be included in the e-learning solution?

Answer: E-learning alternative:
• As the company operates continent-wide, e-learning makes sense because of the geographical spread of the trainees and the travel costs involved with face-to-face training. The training focuses largely on declarative knowledge (see Table 3.2 on page 77), which lends itself to an e-learning solution. Additionally, the e-learning solution could include features such as video vignettes showing how and how not to explain and enforce the policy, and how to spot signs of impairment. Aspects of active learning and active practice (such as online quizzes requiring a correct response before allowing the trainee to advance to the next learning point) could also be incorporated in the e-learning solution.
Relapse prevention strategies, such as booster sessions, could also be incorporated using e-learning, to further enhance transfer to the job.

E-learning is a suitable alternative due to the company’s continent-wide operations and the associated travel costs of face-to-face training. E-learning is well suited to delivering declarative knowledge, which is a major component of the training (see Table 3.2 on page 77). Features that could be included in the e-learning solution are:
• Video Vignettes: Demonstrating how to explain and enforce the policy, and how to spot signs of impairment.
• Active Learning Elements: Such as online quizzes that require correct responses before advancing to the next learning point.
• Active Practice: Incorporating practice activities to reinforce learning.
• Relapse Prevention Strategies: Including booster sessions to enhance transfer and address any challenges in applying the skills on the job.

3. How can trainee motivation be enhanced to facilitate transfer, and what role can upper management and the trainer (or e-coach) play in this process?

Answer: Trainee motivation and transfer:
• One element of motivation that could facilitate and enhance transfer is building up the trainees’ self-efficacy so that they feel confident in their ability to enforce the policy on the job. This can be accomplished by providing opportunities to observe and practice the proper execution of the skills required to enforce the policy. Trainees can also be assisted in developing their ability to self-regulate their behaviour related to successfully performing this challenging supervisory task. The trainees need to understand and appreciate the valence of the outcomes of the training, so that they can see that the expected result of the training (both for the company and for their own careers) is worth the effort expended. This could be as straightforward as a discussion during training of the consequences of not enforcing the policy vs.
the benefits of doing so.
• On the job, upper management can help maintain and enhance motivation by reinforcing and encouraging the application of the new skills and by providing timely and appropriate recognition and feedback to the supervisors when they properly enforce the policy. The trainer (in the case of e-learning, perhaps an “e-coach”) can provide post-training follow-up. Relapse-prevention strategies can also be incorporated into the training, with the opportunity for booster sessions post-training if a refresher proves necessary or if on-the-job conditions change to such a degree that the skills acquired during training are proving to be inadequate or ineffective.

To enhance trainee motivation and facilitate transfer:
• Build Self-Efficacy: Provide opportunities for trainees to observe and practice the skills required to enforce the policy, and help them develop self-regulation strategies for this challenging task.
• Valence of Outcomes: Ensure trainees understand the benefits of enforcing the policy versus the consequences of failing to do so. This can be communicated through discussions during training about the impact on the company and their careers.

Role of Upper Management:
• Reinforcement and Encouragement: Maintain motivation by reinforcing and encouraging the application of new skills.
• Recognition and Feedback: Provide timely and appropriate recognition and feedback when supervisors properly enforce the policy.

Role of the Trainer (or E-Coach):
• Post-Training Follow-Up: Offer follow-up support to address any issues and maintain motivation.
• Relapse Prevention: Incorporate strategies into the training and provide booster sessions if necessary to address changes in job conditions or challenges in applying the skills.

Flash Forward Question

As a way of introducing Chapter 12, have students flip ahead to pages 374–375 to learn more about the general concepts of costing and the categories mentioned in the question.
Running Case Study: Dirty Pools

Suggested Answers to Case Questions

1. What type of evaluation is the Health Department seeking for the Dirty Pools case, and which evaluation model should students use to focus their evaluation? Identify the relevant variables to consider.

Answer: Students should be able to conclude that what is being asked for by the Health Department is a summative evaluation, and should be able to select from the three models described (Kirkpatrick, COMA, or DBE) and identify which variables they would want to focus their evaluation on.

The Health Department is seeking a summative evaluation for the Dirty Pools case. Students should select from the following evaluation models:
• Kirkpatrick Model: Focuses on reaction, learning, behaviour, and results.
• COMA Model: Focuses on cognitive, organizational, motivational, and attitudinal variables.
• Decision-Based Evaluation (DBE) Model: Ties the targets and measures of the evaluation to the decisions the evaluation is intended to support.

Students should identify variables related to the effectiveness of the training, such as:
• Results: The impact of the training on pool cleanliness and compliance with health standards.
• Behaviour: Changes in the practices and behaviours of the staff responsible for pool maintenance.

2. What type of evaluation might the Health Department require if they are focusing on the initial implementation phase, and which evaluation model would be most appropriate?

Answer: Students should be able to conclude that in this case the Health Department is looking for a formative evaluation, and might suggest that this could be the focus of any of the three models, but would more than likely point to the reaction level of Kirkpatrick’s model as an appropriate approach. A formative evaluation aims to assess the training process and identify areas for improvement during the training itself.
The reaction level of the Kirkpatrick model would be the most appropriate approach for a formative evaluation. This level assesses trainees’ immediate reactions to the training, providing feedback on the training’s content, delivery, and relevance, which is valuable for making adjustments before final implementation.

3. What data collection design could be suggested for evaluating the Dirty Pools training program, considering the difficulties of experimental and quasi-experimental designs?

Answer: Students should review the various training evaluation data collection designs illustrated in Figure 11.3 on page 360. Appreciating the difficulties of using experimental and quasi-experimental designs, they may suggest the Internal Referencing Strategy (IRS) design as a compromise alternative. The IRS design compares pre-post gains on content covered by the training against pre-post changes on related content that was not covered, which can be a practical and feasible option for assessing the Dirty Pools training program without the complexity of experimental designs.

Chapter 12: The Costs and Benefits of Training Programs

Essential Outcome

After completing the lesson on this chapter, if nothing else, students should appreciate the importance of calculating the costs and benefits of training, as a way of justifying the investment in training and as part of the training evaluation process.
Chapter Learning Outcomes

After reading this chapter, you should be able to:

• explain why training and human resource professionals should calculate the costs and benefits of training programs in their organization
• calculate the various costs of training programs
• compare and contrast cost-effectiveness evaluation and cost–benefit evaluation
• conduct a net benefit analysis, benefit–cost ratio, return on investment, and utility analysis
• explain what “credibility” means when estimating the benefits of training programs

Key Concepts: HOW DOES THIS CHAPTER CONNECT TO THE WORLD OF TRAINING AND DEVELOPMENT?

1. Costing is the process used to identify all of the expenditures of training, including direct costs, indirect costs, developmental costs, overhead costs, and the costs of compensating trainees while they are participating in the training.

2. The benefits of training can be calculated through a cost-effectiveness evaluation, which compares the monetary cost of the training to its monetary benefit, or a cost–benefit evaluation, which compares the monetary costs to benefits expressed in non-monetary terms.

3. Return on investment (ROI), as described on pages 378–379, compares the cost of the training program to the benefits derived from it, and is calculated by dividing the net benefit by the cost of the training program.

4. A utility analysis can be used to forecast the financial benefits of a training program using factors such as effect size (the difference in job performance between the trained and untrained groups), the standard deviation of job performance (based on the range of performance across the untrained group and the monetary value of the differences in that range), the number of employees trained, and the length of time that the benefits of the training are deemed to have an effect.
A break-even analysis is used to help determine the point at which the cost of the training is offset by the benefits realized from the training. 5. Whatever the approach used to calculate the costs and benefits of training, assumptions and judgments are involved, and these must be seen as credible in the eyes of decision-makers if the calculations are to be accepted and used. Student Motivation: WHY SHOULD STUDENTS CARE? Students who are interested in working in the field of training and development will want to be taken seriously by their peers and by upper management, and so will be interested in learning that cost and benefit calculations of training are an important aspect of that. Even those students who do not particularly intend to focus on training and development will likely be interested in this topic as it relates to strategic human resources management (SHRM). All will likely at least be familiar with the meaning of the term “ROI” and have a basic interest in learning how the ROI of training is calculated. Barriers to Learning: WHAT ARE SOME COMMON STUDENT MISCONCEPTIONS AND STUMBLING BLOCKS? Many students may have little or no background in topics such as business accounting. Indeed, some students may have been attracted to the field of human resources (or specifically training and development) because they are more inclined toward the “human” aspects and less toward the “resources” aspects, at least in the cost accounting sense of the term. This inclination (or predisposition) might pose a certain attitudinal or interest barrier for these students. In particular, the material dealing with calculating utility analysis and break-even analysis may prove challenging to some students. With this in mind, encouraging these students to look beyond the formulas and the math toward the rationale behind cost and benefit analysis, and emphasizing the credibility aspect, may be important. 
The tangible example of the wood panel plant training (Table 12.1 on page 376 and Table 12.2 on page 380) is very helpful in this regard. Even so, it may be worthwhile to point out that the example (and its cost figures) is over 25 years old and therefore in need of updating to bring it in line with current costs. Engagement Strategies: WHAT CAN I DO IN CLASS? 1. For In-Class Exercise 1, on pages 386-387: Organize the class into seven teams by having them count off 1 to 7. Assign each team the corresponding training program (a–g) and have them complete the exercise and present their findings to the class. Debrief. 2. For In-Class Exercise 2, on page 387: Use this exercise for the role play as suggested in the Lecture Outline below (see F. The Credibility of Estimates). Use the follow-up questions for the debriefing. 3. For In-Class Exercise 4, on page 387: Use this exercise for completing the costing worksheet on an overhead or white board with the class as suggested in the Lecture Outline below (see B. Costing Training Programs). Use the follow-up questions for the debriefing. Suggestions for Large Class Exercises A key element of this chapter is calculating costs. While a few examples are provided in the textbook, it is important to actually work through these costing factors as a class to provide direct experience in “doing the math.” a) Create a sample cost sheet for a training program. Have students calculate the total costs and benefits. Create an overhead/slide of these figures and the equations for Net Benefit, BCR, and ROI. Guide students through the calculation steps on the overhead, slide, or document viewer. Work at a slow pace so that all students have an opportunity to “do the math” and work out the answers by themselves before moving on. b) Repeat this with another set of numbers. 
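For instructors who want a quick way to generate and check answer keys for a sample cost sheet like the one described in (a), the three equations can be sketched as follows. The cost and benefit figures are hypothetical stand-ins, not values from the textbook.

```python
# Net Benefit, Benefit-Cost Ratio (BCR), and ROI for a sample cost sheet.
# The figures are hypothetical stand-ins for a class example.

training_cost = 25_000     # total cost of the sample training program
training_benefit = 75_000  # total benefit attributed to the program

net_benefit = training_benefit - training_cost   # benefit minus cost
bcr = training_benefit / training_cost           # benefit per dollar spent
roi = net_benefit / training_cost * 100          # percentage return

print(net_benefit, bcr, roi)  # 50000 3.0 200.0
```

Swapping in a second set of numbers, as step (b) suggests, takes only a change to the two input variables.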
Suggestions for Technology-Enhanced Classrooms a) Show images of “training facilities” of various organizations to illustrate the variety of facilities and the variety of potential overhead/fixed costs. Consider including the Learning Centre of BMO located in Toronto, other corporate “universities,” and small facilities such as organizations that may just use a boardroom. b) Show students an online flyer for residential training (such as Queen’s University Leadership program, which is $8,900 and includes a one-week residential component). Have students identify what costs are being included in the fee. See web link section for link to the PDF of the Queen’s brochure. Suggestions for Internet Classes a) Have students identify the difference in costs for delivering their Internet course compared to an in-class course. Have them identify the category of these costs. b) Have students conduct an online search to locate three different public training programs and assess the costs of these. What costs would the training provider absorb? What costs would the customer absorb? Assessment Tools You may wish to make use of the Test Bank, PowerPoint slides, or at the end of a class, ask a student to summarize the key points from the lesson. Reflections on Teaching: HOW CAN I ASSESS MY OWN “PERFORMANCE”? Good teaching requires the practice of ongoing self-assessment and reflection. At the completion of this lesson, you may find it helpful to reflect on the following and consider whether or not you want or need to make any adjustments for subsequent lessons. 1. What worked in this lesson? What didn’t? 2. Were students engaged? Were they focused or did they go off on tangents? 3. Did I take steps to adequately assess student learning? 4. Did my assessments suggest that they understood the key concepts? 5. What (if anything) should I do differently next time? 6. How can I gather student feedback? 7. How can I use this feedback for continuous improvement of my teaching? 
Additional Resources Chapter Summary This chapter described the methods and approaches for calculating the costs and benefits of training programs. Cost and benefit information is important not only for budgeting purposes and for comparing the costs of training programs, but also for training evaluation. We described five cost categories for costing training programs and the differences between cost-effectiveness evaluation and cost–benefit evaluation. Descriptions and examples were then provided for the calculation of the net benefit, benefit–cost ratio (BCR), and return on investment (ROI) of training programs. Utility analysis was described as an alternative approach for calculating the financial benefits of training programs. The chapter concluded with a discussion on the importance of credibility when estimating the financial benefits of training and development programs. Web Links • Canadian Management Centre (CMC) Sales training course: http://www.cmctraining.org/area-of-interest/sales-marketing • Queen’s University Leadership Program Brochure (PDF): http://business.queensu.ca/ConversionDocs/Execdev/Leadership.pdf Suggestions for End-of-Chapter Exercises For In-Class Exercise 3, on page 387: As some students will be much more comfortable with finance and accounting than others, have students do the first part in small groups; be prepared to provide plenty of guidance and support. Use the follow-up questions to debrief. Case Incident 1: The Harmony Remote Answers 1. What are the differences between the approaches for evaluating training in terms of monetary and non-monetary benefits, and how should students determine if the training is worth the investment? Answer: To answer this question, students should appreciate the differences between the two approaches and review the definitions on page 378. 
Students should appreciate the following differences between the two approaches: • Monetary benefits: calculate all of the training costs (direct, indirect, developmental, overhead, and trainee compensation) and subtract the total costs from the financial benefit gained as a result of the training. This approach focuses on quantifying the financial return on the investment. • Non-monetary benefits: identify benefits such as increased customer satisfaction or improved employee morale, and determine whether these benefits cover or exceed the costs of the training, even though they are harder to quantify. To determine if the training is worth the investment, students should: 1. Calculate all costs associated with the training. 2. Assess the financial and non-monetary benefits. 3. Evaluate whether the benefits outweigh the costs. 2. How should students calculate the Benefit-Cost Ratio (BCR) and Return on Investment (ROI) for the training program, and what information is required for these calculations? Answer: Students will have identified how to calculate the net benefit in their previous answer. They should then be able to state that the BCR is calculated by dividing the benefit by the cost, and that ROI is calculated by dividing the net benefit by the cost of the training program. 
In each case, they should understand that these calculations require accurate financial information covering the complete costs of training. Students should use the following formulas: • Benefit-Cost Ratio (BCR): divide the total benefit by the total cost of the training program. This provides a ratio indicating the financial return for every dollar spent. • Return on Investment (ROI): divide the net benefit (total benefit minus total cost) by the total cost of the training program. This provides a percentage representing the return relative to the investment. 3. What additional numerical information is needed to perform a utility analysis of the training program, and where can students find guidance on how to perform this analysis? Answer: Students should carefully review the section on utility analysis on pages 382-383. They should be able to identify that the calculation requires additional numerical information: • Effect size: the magnitude of the impact of the training on job performance. • Standard deviation of job performance (in monetary terms): a measure of the variability in job performance. • Number of employees trained: the total number of employees who received the training. Case Incident 2: Renswartz Realty Company Answers 1. How do you calculate the utility for each proposal, and what are the results? 
Answer: Utility Analysis: Proposal #1: ΔU = (2)(200)(0.35)($15,000) − 200($1,500) = $2,100,000 − $300,000 = $1,800,000. Proposal #2: ΔU = (1)(200)(0.25)($15,000) − 200($450) = $750,000 − $90,000 = $660,000. 2. What is the break-even point for each proposal, and how is it calculated? Answer: Break-even Analysis (the effect size at which the benefits just cover the costs): Proposal #1: dₜ = $300,000/$6,000,000 = 0.05. Proposal #2: dₜ = $90,000/$3,000,000 = 0.03. 3. How do you calculate the Benefit-Cost Ratio (BCR) and Return on Investment (ROI) for each proposal, and what are the results? Answer: Proposal #1: BCR: $1,800,000/$300,000 = 6; ROI: ($1,800,000 − $300,000)/$300,000 = 5. Proposal #2: BCR: $660,000/$90,000 = 7.33; ROI: ($660,000 − $90,000)/$90,000 = 6.33. 4. What are the key differences between the two proposals in terms of cost, effect size, and duration of the training effect? Answer: The first proposal is more costly in terms of delivery ($1,500 per person versus $450), but it has a larger effect size (0.35 versus 0.25) and a longer-lasting training effect (2 years versus 1 year). 5. Which proposal should the company accept, and why? Answer: The company should accept Proposal #1. Although it costs more, and Proposal #2 produces somewhat better BCR and ROI ratios, Proposal #1 yields a far greater net utility ($1,800,000 versus $660,000). 
6. What should be considered regarding the accuracy of the calculations, and what potential issues might affect the validity of the results? Answer: While the calculations are numerical, they rely on estimates and assumptions, so the results may be less accurate than they appear if those estimates and assumptions are invalid. For example: • Assumptions about training duration: no one can be sure with a high degree of certainty that the training effect will last one year versus two; these assumptions must be agreed upon in advance of doing the calculations. • Estimation accuracy: if the estimates for training costs, effect sizes, or other variables are incorrect, the calculations will not accurately reflect the true value of the training. Case Study: Datain Answers 1. What are the total costs of the training program, and how are these costs categorized? Answer: In answering this question, students should be sure to use the costing categories listed in Table 12.1 on page 376. • Direct costs include: the trainer’s cost ($1,600), lunch for the trainees ($250), and refreshments ($50). • Indirect costs include: administrative costs ($240). • Developmental costs include: needs analysis ($2,000), the program development fee ($5,000), and evaluation costs ($2,000). • Overhead costs include: a portion of heat, electricity, etc. ($100). 
• Trainee compensation costs include: trainee wages ($3,000). Total program costs = $1,900 + $240 + $9,000 + $100 + $3,000 = $14,240. 2. How should benefit, net benefit, benefit-cost ratio, and ROI be calculated, and what factors should be considered in recommending a training program? Answer: Students should calculate benefit, net benefit, benefit-cost ratio, and ROI in a similar fashion to Case Incident 2. Their recommendation should be based on their calculations, with a particular focus on net benefit and ROI. These are, however, difficult to calculate without knowing the standard deviation of performance. One could conclude that although the claimed reduction in errors and mistakes as a result of the training is relatively similar in both proposals, the anticipated reduction in turnover is much greater in the training consultant’s proposal (a 90% reduction versus a 60% reduction for the e-learning program). Since the cost of turnover is a major cost consideration (estimated to be $5,000 per employee), that alone suggests the consultant’s proposal stands to deliver a greater overall benefit and should therefore be the recommended option. 
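The Datain cost tally can be double-checked with a short script. The category figures come from the case answer above; only the variable names are invented here for illustration.

```python
# Summing the five Datain cost categories from Table 12.1.
# Figures are taken from the case answer; names are illustrative only.
costs = {
    "direct": 1_600 + 250 + 50,              # trainer, lunch, refreshments
    "indirect": 240,                          # administrative costs
    "developmental": 2_000 + 5_000 + 2_000,   # needs analysis, development, evaluation
    "overhead": 100,                          # share of heat, electricity, etc.
    "trainee_compensation": 3_000,            # trainee wages
}
total_program_costs = sum(costs.values())
print(total_program_costs)  # 14240
```

Laying the categories out as a dictionary mirrors the costing worksheet structure, so changing one line item immediately updates the total.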
3. What other factors should the company consider when deciding which training program to choose? Answer: Other factors include: • Trainee preferences: ensure the training method aligns with what the trainees prefer and find effective. • Company culture: the training program should fit well with the company’s culture and operational practices. • Vendor track record: evaluate the reputation and effectiveness of each training vendor based on referrals and recommendations. 4. What additional information is needed to conduct a utility analysis, and how can this information be obtained? 
Answer: To conduct a utility analysis, the owners would need to determine: • Length of the training effect: how long the benefits of the training will last. • Effect size: the magnitude of the impact of the training on job performance. • Standard deviation of job performance: the variability in job performance, expressed in monetary terms. Assistance in determining these numbers could be solicited from the training vendors through detailed consultations and data on their training impact metrics and expected outcomes. Flashback Answers 1. Is training the best solution to address the issues of data errors and employee retention, and what other factors should be considered? Answer: • Using Mager and Pipe’s flowchart (Figure 4.2 on page 124), it is helpful to consider factors other than lack of skill or lack of knowledge (the only performance problems potentially solved by training) that might be responsible for the problems of data errors and employee retention. • Addressing the question “Can we apply fast fixes?” might reveal problems related to a lack of resources that employees need to work effectively, a lack of clear performance expectations, or a lack of feedback on the importance of error-free data entry and analysis. • Considering the question “Are consequences appropriate?” might lead to a discovery that poor performance is actually rewarded (with more hours of work, for example), or good performance punished (fewer hours of work for hourly paid employees), or simply that there are no consequences associated with performance, good or bad. 
Other clues, taken from the right-hand side of the flowchart, could involve the removal of obstacles (for instance, noise or other distractions), simplification of the task, or deficiencies in the hiring process that put the wrong people in the job in the first place. 2. What could be the potential training objectives for addressing the data entry and analysis issues? Answer: Possible training objectives (following the example on page 147): • After training, the data input and analysis personnel (who) will be able to input data (what) with an accuracy rate of 95 percent (performance/criterion) per job (where and when). • Following training, the data input and analysis personnel (who) will be able to analyze the data entered (what) at an efficiency rate of 600 data sets per hour (performance/criterion), using the data they previously entered (condition). • Note: While better training might result in increased job satisfaction (and therefore increased employee retention), this secondary desired outcome does not lend itself to a stand-alone training objective. 
3. What are the risks of not using a Request for Proposals (RFP) process, and what should an RFP include? Answer: • Not going through the RFP process might result in the selection of a less-than-optimal training solution (compared with other possible options, which may remain unknown), over-paying for the training product, or selecting a training solution inappropriate for the problem to be solved. • Refer to Trainer’s Notebook 5.1 on pages 150-151. The RFP should be based on a clear vision of what is sought from the training, describe the proper scope of the project, include a vendor pre-qualification checklist, and include a vendor scorecard to aid in assessing the various training proposals received. 
4. What are some possible evaluation methodologies for assessing the effectiveness of the training program? Answer: Possible evaluation methodologies include: • Kirkpatrick: Trainee reaction could be measured through a post-training questionnaire; learning through pre- and post-test assessment of data entry efficiency and accuracy; behaviour through self-reports or observation by supervisors; and results by comparing error rates after training to the pre-training rates. • COMA: A post-training questionnaire could be administered to determine what the trainees learned from the training (C), whether they feel the work environment will support the application of their new skills and knowledge (O), whether they feel motivated to use their new skills on the job (M), and whether they feel capable of doing so (A). • DBE: A pre-test/post-test questionnaire could measure gains in knowledge and skill that could lead to greater effectiveness (a reduction in errors), which could then be assessed to measure the degree of pay-off for the company in terms of cost savings, client satisfaction, and employee satisfaction (which in turn relates to the second goal of improving employee retention). 
5. What evaluation design would provide the most complete and useful assessment of the training program, and why? Answer: Refer to Figure 11.3 on page 360. Design “E” (pre-post design with a control/comparison group) offers the most complete and useful evaluation for this training program. It allows the examination of results to determine the effect of the training on employee change (by comparing the trained group to the untrained group) and also answers the question of whether the training was responsible for the change (in this case, the desired reduction in errors combined with increased employee retention). Flash Forward Question Have students look ahead to Table 13.2 on page 397 to consider this question. From the list, students might identify #3, #4, and #5 as particularly suitable types of training programs. Running Case: Dirty Pools Suggested Answers to Case Questions 1. How can the City of Toronto calculate the overhead costs associated with providing the training, and what facilities might they use? 
Answer: This is a difficult question for students to answer, but they may be able to infer that, since the City of Toronto is the organization both requiring and providing the training, it would have adequate training facilities, likely within some of its community recreation centres, which often have training or multi-purpose rooms already rented out to other community groups. To estimate the overhead costs, the City could: • Identify facilities: use community recreation centres or similar facilities that the City already rents out to other groups. • Calculate overhead costs: determine the percentage of time these facilities will be used for this particular training initiative and apply that percentage to the total overhead costs (e.g., heat, electricity) associated with the facilities. 2. What are some potential benefits of the training program beyond the direct safety improvements? Answer: Benefits of the training program include: • Major outcomes: safer pools and the resulting reduction in health and safety infractions. • Intangible benefits: improved publicity and public relations. • Spin-off benefits: increased job satisfaction, which could lead to better employee retention. 3. What factors need to be considered when calculating the benefits of the training program? 
Answer: Factors to consider include: • Duration of the training effect: estimate how long the training will remain effective. • Financial and non-financial benefits: assess both the financial and non-financial benefits of the training relative to its costs. • The City’s expectations: consider the City’s expectations regarding the value of the training outcomes, which would be relatively high. 4. How can non-financial benefits be used in ROI calculations, and what additional resources might assist in this process? Answer: It should be fairly easy for students to see that the benefits of the training program should easily outweigh the costs, especially when one considers that the closures resulting from the safety infractions have significant financial as well as non-financial implications for the City. • Benefits vs. costs: non-financial benefits, such as improved public relations and reduced health infractions, contribute to the overall value even if they are not directly quantified in financial terms. • Converting benefits to monetary values: have students review The Trainer’s Notebook 12.1, Converting Benefits to Monetary Values, on page 381, for methods of assigning monetary values to non-financial benefits and integrating them into ROI calculations. Instructor Manual for Managing Performance through Training and Development Alan M. Saks, Robert R. Haccoun 9780176570293, 9780176798079
