CHAPTER 7 – CHAPTER EXERCISES

Chapter Exercise 7.1 Performance Appraisal Feedback: A Role-Play Exercise
Sharon L. Wagner, Richard G. Moffett III, and Catherine M. Westberry
IM Notes prepared by Joyce E. A. Russell

Objective. This exercise requires the student to utilize the chapter information to review a performance appraisal form and to make corrections or improvements to it.

Description. This exercise requires about one hour of out-of-class preparation time to review Exhibits 7.1.1, 7.1.2, and 7.1.3. The student is also required to provide a one-page critique of the performance appraisal form. A role-play that should last about 40 minutes is described below.

In class, the professor sets the students up in teams of three, and each person is given one role: feedback giver, feedback recipient, or observer. Give students about 10 minutes to review their roles in class. The feedback giver, Chris Williams, should carefully review Exhibits 7.1.1 and 7.1.2 to be better able to provide feedback to the subordinate, Jesse Anderson. The feedback recipient, Jesse, should review Exhibit 7.1.1 to become more familiar with his/her qualifications and performance so he/she can play the role effectively. The observer should review Exhibits 7.1.2 and 7.1.3 so that he/she can accurately observe the feedback session.

After students have read and prepared their roles, they will role-play the exercise for about 15 minutes. This involves a one-on-one session between "Chris" and "Jesse" while the observer takes notes. After the role-play is completed, the observer will provide his/her feedback to "Chris". This should take about 5-10 minutes. Remind observers that they should provide both positive and constructive feedback to "Chris". The person who played the role of "Jesse" can also give "Chris" feedback on how the feedback was delivered.
Following the role-play, the professor can lead a class discussion on the behaviors that the observers or others saw that were effective when "Chris" was giving "Jesse" feedback, and can then summarize the discussion of effective feedback-giver behaviors.

FEEDBACK TO BE GIVEN: The feedback giver should be sure to address the positive aspects of Jesse's performance, such as interpersonal skills and quantity of work (acceptable). The unsatisfactory areas should also be addressed, including Jesse's job knowledge, work quality, and reliability.

Exercise 7.1 Additional Information for Class Discussion

EFFECTIVE BEHAVIORS OF FEEDBACK GIVERS

Behaviors that should come up in the class discussion of effective feedback-giver behaviors include the following:
1. Develop an atmosphere of trust by establishing rapport.
2. Provide specific, descriptive feedback, not general, evaluative feedback.
3. Minimize interruptions (phone calls, visitors) during the feedback session.
4. Provide both positive and constructive feedback.
5. Focus feedback on behaviors, not personality characteristics.
6. Encourage two-way feedback (allow the subordinate to discuss his/her views as well).
7. Encourage the subordinate to ask questions or offer comments.
8. Allow the subordinate to respond fully without cutting him/her off.
9. Respond in full to the subordinate's questions.
10. Use nonverbal cues (smiles, nods, eye contact) when providing feedback.
11. Keep the discussion focused, moving from topic to topic.
12. Use reflecting (mirroring) and probing responses to learn more about the subordinate's views.

VALUE OF PERFORMANCE APPRAISAL FEEDBACK SESSIONS

The professor could discuss with the class the value of performance feedback sessions, which includes the following:
1. Allows managers to communicate what is expected of employees and clarifies any misunderstandings about requirements for the job.
2.
Enables managers to assess each subordinate's contribution to organizational goals.
3. Allows managers to take corrective action when employee performance is unsatisfactory or to use rewards when performance is exceptional.
4. Sustains or enhances employee motivation and the desire for continuous improvement.
5. Assists employees in career planning and development.
6. Satisfies employees' interpersonal needs (i.e., the need to be informed by the supervisor).
7. Facilitates mutual problem solving.
8. Offers opportunities to make needed changes or to maintain behaviors.

Chapter Exercise 7.2 The Heartland Greeting Cards Consulting Problem
Esther J. Long
IM Notes prepared by Joyce E. A. Russell

Objective. This exercise builds on Chapter 6 (Personnel Selection) and Chapter 7 (Performance Management and Appraisal) to help students enhance their skills in refining a company's selection and performance appraisal systems.

Description. In the first part of the exercise, students conduct an individual analysis by reading the background information provided in Exhibits 7.2.1 and 7.2.2. Following this, students should complete Forms 7.2.1 and 7.2.2. This individual part should take about one hour of out-of-class preparation.

In the second part of the exercise, students work in class in small groups (about 4-6 per team). They should discuss their completed Forms 7.2.1 and 7.2.2 and reach consensus on the appropriate selection tools and performance appraisal system to be used. This discussion should take about 20-30 minutes. The professor can then open the discussion to the entire class by soliciting responses to Forms 7.2.1 and 7.2.2. This discussion may take about 20 minutes.

Responses to Form 7.2.1 Design of a Selection Instrument

Identify at least one selection method that could be used to assess whether a candidate possesses each of the job specifications listed below.
Refer back to Chapter 6 for guidance and to the material presented in Exhibits 7.2.1 and 7.2.2.

Job specifications (minimum qualifications)

1. Mathematical ability to carry out calculations involving addition, subtraction, multiplication, and division of three digits or more, including fractions and decimals. Note: Mathematical computations must be carried out when checking in merchandise or completing inventory. Calculators may be used.
Assessment method: A test of mathematical ability could be used, or a work sample could be given.

2. A 12th-grade reading level in English. Note: The service manual is written at the 12th-grade reading level.
Assessment method: Requiring a high school diploma or GED might help ensure that the person can read at the 12th-grade level. A reading comprehension test could also be given.

3. Ability to attend to details. Note: To minimize the in-store inventory, the company blueprint for each display must be followed precisely. Each set of cards has a pocket where it is supposed to be displayed (the numerical codes on the back of the cards must match the codes on the pocket labels).
Assessment method: A work sample can be used in which individuals are timed or tested on appropriately displaying the inventory. In addition, a clerical ability test, or relevant portions of one, could be used to assess attention to detail (e.g., the Minnesota Clerical Test).

4. Ability to carry out company procedures while adapting to situational needs. Note: A person must have the ability to make snap decisions on the spot.
Assessment method: A work sample could be used to assess judgment or analysis and decision making. A timed simulation such as one used in an assessment center (e.g., an in-basket exercise) could also be used.

5. Ability to resolve customer (i.e., store manager) complaints while maintaining goodwill.
Assessment method: A work sample, such as a role-play or simulation exercise like those used in assessment centers, could be developed and used. The role-play would have the applicant play the part of the merchandiser who has to deal with customer complaints while working (the customer would be another person playing the store manager who is complaining). Another possibility would be to use the Customer Service Orientation Index (see Chapter 6).

6. Basic body mobility (e.g., ability to bend, reach, lift) and ability to stand for up to three hours at a time. Note: A person with a disability (e.g., using a wheelchair) could be accommodated.
Assessment method: A work sample resembling important activities on the job should be used, and the person should be required to demonstrate competence in using the necessary body mobility. Accommodations in the work sample should be made for those with disabilities.

7. Ability to work alone with no supervision for weeks at a time.
Assessment method: Personality tests such as the CPI or the Sixteen PF Questionnaire could be used to assess the degree to which an individual would be comfortable working alone. Also, previous work history could be assessed via a situational interview to determine whether individuals have ever worked without supervision in past jobs.

8. Must provide own transportation to all stores.
Assessment method: In an interview, applicants could be asked whether they could find their own means of transportation to job sites. Note that they do not need to own vehicles; they just need to be able to get to the job sites (by bus, cab, or rides from others). It is important that the question be posed in a way that is not discriminatory against those who do not own vehicles.

Responses to Form 7.2.2 Design of a Performance-Appraisal System

1. Who should be responsible for evaluating the greeting card merchandiser's performance?
Store managers would be one good source of performance appraisal data (using rating forms), since they are in a position to observe the merchandiser's performance. The merchandiser's immediate boss could also serve as an evaluator if he/she met with the merchandisers or visited their stores more frequently. However, a multi-rater system is recommended.

2. What rating format(s) will allow you to incorporate the job performance criteria identified in the job analysis directly into the rating form? Explain your answer. Prepare a sample rating form. Write one rating item. Be sure to include the complete rating scale.

A rating form should be used, and raters could include both the supervisor and the store manager (for relevant dimensions). Dimensions to be assessed include:
Inventory management - keeping inventories at the appropriate levels (see Exhibit 7.2.2).
Preparation of displays - displays are designed according to the company's policies and contracts as well as the store's procedures (see Exhibit 7.2.2).
Customer relations - handling customer complaints in a timely, courteous manner.
Among the formats that could be used are behaviorally anchored rating scales (BARS), behavioral observation scales (BOS), or PDA (to assess frequency of responses).

3. What techniques do you recommend to ensure that the greeting card merchandiser is provided with accurate and timely feedback concerning his or her performance? Explain your answer.

The merchandiser should receive formal feedback at least twice a year from his/her immediate boss. The boss may provide feedback based on ratings by the customers and by the boss himself/herself. The boss should be trained in providing more frequent, informal feedback to merchandisers, especially since merchandisers feel that they do not receive enough feedback.

4. What other components of the PA system will help make it more legally defensible? Explain your answer.
Professors should review the material in the chapter on legally defensible performance appraisal systems, found on page 146. Some issues that should definitely be addressed for this case are given below:
1. The raters should be trained.
2. The boss or feedback giver should be trained.
3. More objective, and less subjective, criteria should be used to evaluate performance.
4. If ratings are to be used, multiple raters should be used.
5. Employees should be informed about the important dimensions at the start of the review period.
6. An appeal system should be put into place so that employees can appeal ratings.
7. Employees should see the rating form and sign off that they have received the feedback.
8. The same system should be applied to all employees in the particular job.
9. Written documentation should be provided on the rating form to substantiate ratings.
10. Raters should be encouraged to keep diaries of the employees' performance.

Chapter Exercise 7.3 Price Waterhouse v. Hopkins

Objective. The purposes of Exercise 7.3 are twofold: (1) to acquaint students with an important U.S. Supreme Court case related to performance appraisal; and (2) to allow students to consider the implications of this case as they relate to appraisal system development, implementation, and administration.

Description. Students will assume the role of an outside consultant engaged by Price Waterhouse to design a performance assessment system that is valid and legally defensible. The students' efforts should therefore be directed toward developing expertise about the specifics of the Hopkins case (see Exhibit 7.3.1), performance appraisal issues (Chapter 7), and the relevant EEO issues (Chapter 3). As consultants to Price Waterhouse HR senior management, students will be challenged to give constructive criticism about the firm's present staffing (especially promotion) systems.
The background case information includes the specifics of a lawsuit filed against Price Waterhouse by a female manager (Hopkins) who was rejected as a partner candidate. The basis for the original litigation was a violation of Title VII of the Civil Rights Act of 1964. Students should be aware that the Civil Rights Act of 1991 amended Title VII of the Civil Rights Act of 1964, and that the 1991 Act overruled part of the Supreme Court's 1989 decision in this case.

Under the 1989 Price Waterhouse ruling, the Supreme Court allowed the defendant an opportunity to present a "mixed motive" argument: that, setting aside the discriminatory "motivating factor," the company would have made the same personnel decision. The company was thus given an opportunity to prove, by a preponderance of the evidence, that it relied on valid reasons in making its decision. Section 107 of CRA 1991 established that if the plaintiff demonstrates that race, color, sex, religion, or national origin constituted a "motivating factor" in an unfavorable personnel decision, that personnel practice is unlawful under CRA 1991. Under CRA 1991, once the "motivating factor" is established, the plaintiff will prevail. The issue of whether an employer would have made the same decision in the absence of any discriminatory motive becomes relevant only in the remedial phase of the litigation (e.g., determination of damages, back pay, reinstatement, and promotion). If the defendant can establish that the same decision would have been made, the courts have the discretion to grant declaratory relief, attorney's fees, and other costs.

Students will also be expected to make specific recommendations to HR management for a tenable and legally defensible system for selecting partners. Student consultants will be expected to present and defend their positions both orally and in a written report. After reading Chapter 7, the out-of-class Individual Analysis can be completed in about one hour.
The Group Analysis can be completed in one class session.

PROCEDURE FOR CHAPTER EXERCISE 7.3

For Step 1, students should be instructed to read Chapter 7 before attempting the exercise. Then they should read the background for the case (Exhibit 7.3.1 of the text) and review the relevant issues in Chapters 7 and 3.

For Step 2, the written report should be no more than four pages. The format of the written report should facilitate a rapid review of the allegation, the critique, and the recommendations. For example, the introduction could address the issue of whether Hopkins suffered illegal discrimination under Title VII (see Chapter 3). Next, the critique of the present Price Waterhouse system should relate to the appraisal system design section in Chapter 7. Students should be encouraged to evaluate the existing Price Waterhouse appraisal system in the context of five (5) issues:
1. Measurement content
2. Measurement process
3. Defining the rater
4. Defining the ratee
5. Administrative characteristics
These are issues that should involve partners, managers, employees, and HR professionals. Finally, specific recommendations for change (if necessary) should reflect the seven (7) steps cited in Chapter 7 for developing appraisal systems:
1. Start with a job analysis
2. Specify performance dimensions and develop performance anchors
3. Scale the anchors
4. Develop a rating form or program
5. Develop a scoring procedure
6. Develop an appeal process
7. Develop rater and ratee training programs and manuals
The assessment questions should be included with the written report and may be developed as a table or summary sheet.

For Step 3, groups of no more than six (6) students should develop short presentations. Students should use their responses from Form 7.3.1 for discussion purposes. Allow at least 15 minutes so that all group members can review the other students' forms; we recommend that all group members review all the forms to promote discussion.
Groups should be given an additional 15 minutes to develop their presentations to the class. One spokesperson should deliver the group's report to save time. The instructor may select the spokespersons. The instructor, serving as discussion moderator, should summarize the class discussion and raise issues (if necessary) not covered by the group responses. Table 7.3.1 presents recommended answers to the assessment questions.

Table 7.3.1 Answers for Form 7.3.1

1. What legal statute applies to this case?

Ann Hopkins was a senior manager in an office of Price Waterhouse when she was proposed for partnership in 1982. She was neither offered nor denied admission to the partnership; instead, her candidacy was held for reconsideration the following year. When the partners in her office later refused to repropose her for partnership, she sued Price Waterhouse under Title VII of the Civil Rights Act of 1964, 78 Stat. 253, as amended, 42 U.S.C. 2000e et seq., charging that the firm had discriminated against her on the basis of sex in its decisions regarding partnership. Judge Gesell in the Federal District Court for the District of Columbia ruled in her favor on the question of liability, 618 F. Supp. 1109 (1985), and the Court of Appeals for the District of Columbia Circuit affirmed, 263 U.S. App. D.C. 321, 825 F.2d 458 (1987). The Supreme Court granted certiorari to resolve a conflict among the Courts of Appeals concerning the respective burdens of proof of a defendant and plaintiff in a suit under Title VII when it has been shown that an employment decision resulted from a mixture of legitimate and illegitimate motives, 485 U.S. 933 (1988). Hopkins was ultimately awarded a partnership and prevailed in the case.

2. What additional data or information would be helpful in order for you to take a definitive position on Hopkins?

More detail on partnerships and eligibility as related to gender is critical.
Also, the performance requirements of the partner's job would be helpful in determining selection guidelines. In addition, the performance records of all the candidates should be reviewed. Third, the weight placed on each of the criteria on which candidates are evaluated would help in determining the importance of the interpersonal skills rating. Finally, the weight given to each of the raters would help in determining the relative value placed on those who had direct contact with Hopkins and her work products. However, even without the additional requested data, the Hopkins case indicates a prima facie case of disparate treatment. It could also be a disparate impact case if the women establish prima facie evidence showing that the percentage of women who obtain partnership is disproportionately low compared to that of men.

3. What steps would you take at Price Waterhouse to prevent a similar legal problem in the future?

PW should follow the recommendations outlined in Chapters 6 and 7 for conducting fair and legally sound performance appraisals. Namely, PW should develop a legally defensible appraisal system for promotions based on the following:
1. Clearly defined and universally applicable appraisal procedures
Hopkins should have been evaluated using the same procedures as if she were a man, i.e., no stereotyping
2. Appraisal system content defined by criteria and job activities
Ratee traits (e.g., "ladylikeness") should be excluded; they may contribute to sexual stereotyping
Objective, verifiable performance data should be used (e.g., the $25 million contract Hopkins secured)
Performance dimensions should be weighted by their relative importance, e.g.:
0.35 Value of contracts
0.25 Client relations
0.10 Interpersonal skills
0.30 etc.
3. Documentation of appraisal results
A quantitative approach is recommended
The local office partners' joint statement about Hopkins' performance should correspond with all quantitative ratings
4.
Involvement of all qualified raters in the process
Local office partners
Outside clients
Peers
Subordinates
5. Development of rater and ratee training manuals and programs
Clarification of the duties and expectations of all participants
The degree to which the partners (committee) will review ratings
A full description of the appeal/adjudication process for rejected candidates
6. Integration of a computerized performance appraisal system with corporate HR information systems
Analysis of appraisal rating distributions to reveal rating errors among partnership candidates
Link appraisal data to succession planning programs
Integrate appraisal data with employee and client surveys for more comprehensive information about candidates
7. Establishment of a formal appeal process for rejected promotion candidates
Candidates should be able to review and dispute any part of a PA
Local office partners (and possibly clients) should be able to dispute PA ratings of candidates
8. Methods to evaluate the system's effectiveness in terms of:
User reactions - perceptions of fairness by raters and candidates, safeguards against biases (e.g., sex, race, or age discrimination), and the level of useful information gained
Inferential validity - the extent to which a candidate's outstanding ratings correspond to outstanding performance
Discriminating power - the extent to which the "best" partnership candidates can be distinguished from all other performers
Disparate impact - focuses on significant differences in the appraisal scores of candidates belonging to protected groups versus others, and provides for appraisal system review and remedial steps to be taken, if necessary
Another possibility is to use an assessment center for selecting partners (Chapter 6).

4. Is gender stereotyping illegal? If so, does Hopkins prevail in this case?

Gender stereotyping is illegal if such perceptions lead to discriminatory employment decisions regarding groups protected under Title VII (e.g., women).
Title VII does allow employment practices that may be correlated with sex in cases of:
Seniority systems
Veterans' preference rights
National security reasons
Job qualifications based on test scores, backgrounds, or experience
Bona fide occupational qualifications (BFOQs)

5. If gender stereotyping is an acceptable legal theory of discrimination, does the theory apply to discrimination against gay people under Title VII? Give an example of what you regard as illegal discrimination against a gay person using this theory.

Title VII does not include sexual orientation as a protected class, although numerous states and municipalities do provide such protection.

6. What specific steps would you take to improve the validity and legal defensibility of the partner selection process?

Validity and legal defensibility are overlapping yet separate concepts. To improve validity, selection assessments should predict job performance. This requires assessing the performance appraisal process; it would be impossible to improve the validity of the selection system without valid criteria to predict. To improve defensibility, assessments should also be shown to be job related. As with validity, this is done by correlating assessments with job performance, as well as by comparing assessments to work analysis data. The assessments must also be scored to reflect minimum qualifications, not just scores that reflect high levels of performance.

Chapter Exercise 7.4 Performance Appraisal at Darby Gas & Light
Joyce E. A. Russell
IM Notes prepared by Joyce E. A. Russell and John Bernardin

Objective. The purpose of this exercise is to give students practice in critiquing a performance appraisal form and in offering suggestions for improving it. In addition, by reviewing survey data taken from employees regarding their organization's performance appraisal system, students can more effectively revise the firm's current performance appraisal system.

Description.
This exercise requires about one hour of out-of-class preparation to review the background information (Exhibits 7.4.1, 7.4.2, and 7.4.3) and to complete Form 7.4.1. Students will complete this preparation individually. In class, the professor will hold a discussion (about 30 minutes) in which students can critique the form and system used by Darby and can offer their suggestions for a new form and system.

Responses to Form 7.4.1

1. After reviewing Exhibit 7.4.2, list what you regard as the major problems with the Darby appraisal system. Make specific recommendations about changing the system and cover what you regard as all aspects of the system.

The appraisal form is just terrible and should be dropped. Among the problems:
1. Use of dimensions that are not defined as performance outcomes (e.g., job knowledge, decision making).
2. Use of anchors that are not defined in terms of behaviors (e.g., low, average, high).
3. No requirement to substantiate ratings with comments. Raters are told they should provide comments, not that they must provide comments.
4. Dimensions assessed may be viewed as subjective and difficult to defend legally (e.g., appearance, leadership).
5. Apparently no involvement of "customers" in the derivation of performance anchors or criteria.

Darby should begin by revising the form based on a thorough job analysis of the various jobs. The form content should then be linked to the job analyses, and both the dimensions and the anchors should be defined in behavioral terms. Employees and managers should be used as subject matter experts and interviewed to solicit their views on the form (to make sure it is comprehensive and covers all critical parts of the job). You'll note from Figure 7-1 that one of the recommendations nested under "strive for as much precision in defining and measuring performance" is to measure content using ratings of relative frequency.
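The "ratings of relative frequency" recommendation can be made concrete with a small scoring sketch. This is only an illustration: the dimension names and the 1-to-5 frequency anchors below are assumptions for demonstration, not content from Darby's actual form.

```python
# Sketch of scoring a summated, relative-frequency (BOS-style) rating form.
# The dimension names and the 1-5 frequency anchors are illustrative
# assumptions, not Darby's actual form content.

# For each dimension, the rater estimates how frequently the employee
# exhibited each anchor behavior: 1 = almost never ... 5 = almost always.
ratings = {
    "job_knowledge": [4, 5, 4],
    "decision_making": [3, 4, 3, 4],
    "reliability": [5, 5, 4],
}

def dimension_scores(ratings):
    """Average the frequency ratings within each dimension."""
    return {dim: sum(vals) / len(vals) for dim, vals in ratings.items()}

def overall_score(ratings):
    """Unweighted mean of the dimension averages."""
    scores = dimension_scores(ratings)
    return sum(scores.values()) / len(scores)

print(dimension_scores(ratings))
print(round(overall_score(ratings), 2))
```

Because each score is an average of frequency estimates for specific anchor behaviors, the result is easier to substantiate and to feed back to the employee than a single holistic judgment.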
There are many options for rating behavior or performance outcomes. The chapter points out that performance appraisal should focus on the record of outcomes and on ratings of relative frequency. Summated scales or PDA would be good formats. Darby should consider 360-degree appraisal and, if it does, should involve each "customer" constituency in the development of the criteria. Most experts contend that the best way to control for deliberate bias on the part of an individual rater is to use more than one rater. In general, mean data compiled from ratings across all (or a sample) of qualified raters will result in less bias and more validity for the performance appraisal system. A "qualified" rater can be defined as any internal or external customer who is the recipient of the performer's products or services.

2. What revisions to the form would you suggest? What particular methods (formats) discussed in Chapter 7 do you recommend?

1. Use of more precise definitions of performance levels (e.g., behavioral anchors using summated scales (BOS) or PDA; CARS and BARS could be useful as well).
2. Definitions provided for the scale dimensions.
3. A place on the form for the supervisor and the employee to indicate that they have participated in the review session.
4. A requirement in the instructions that raters provide comments to substantiate ratings.

3. Suppose the firm wants to use the form for employee feedback (i.e., to provide feedback to employees on their strengths and weaknesses). Do you think the instrument will be useful for this purpose? Why or why not? What, if any, revisions would you suggest so that the form can be used for employee development?

The form will probably not be very useful in providing feedback to employees. Employees may know what their ratings are, but not what the ratings refer to or how to change their behavior. To correct this, the ratings should use behavioral anchors and definitions of dimensions.
Also, specific, descriptive behavioral comments should be provided for all dimensions. In addition, a section dealing with career planning could be included that summarizes the employee's strengths and areas to improve, with specific goals listed for a specified time period. In the feedback session, the rater and the ratee should mutually agree on these goals. A section could also be included that asks employees to indicate the type of assistance that could help them improve their performance or reach their goals. See the discussion in the book about the BARS approach to better feedback.

4. Suppose Darby has used this form both to promote people and to make merit pay adjustments. Suppose also that Darby has been informed that six African-Americans have claimed discrimination based on promotion and pay policies. What (if any) advice can you give the company? What specific data should Darby evaluate in the context of these claims?

The company should first analyze the promotion and pay decisions using the 80% rule as the criterion for defining "disparate impact." Prima facie evidence of discrimination in the form of an 80% rule violation is an excellent predictor of the outcome of court decisions against employers. If there is an 80% rule violation, careful scrutiny of the entire appraisal system is necessary. You should also advise Darby to investigate possible "disparate treatment" evidence as related to these six individuals. From page 224: "Prima facie evidence of discrimination in the form of an 80 percent rule violation is an excellent predictor of the outcome of court decisions against employers. Organizations should audit their appraisal data to test for possible adverse impact effects long before they get sued. They might even avoid getting sued. Adverse impact statistics have also been used successfully in 'disparate treatment' cases to support an individual's claim of race or gender discrimination.
Plaintiffs have used such data to augment claims of 'disparate treatment' discrimination, indicating a 'pattern or practice' of discrimination, and to buttress a motion for 'class certification' resulting from the 'extreme subjectivity' of the appraisal."

Evaluating the appraisal process against the recommendations in Figure 7-3 (page 176) would be a good strategy, along with a careful analysis of possible "disparate treatment" violations with respect to the six individuals. If there is evidence of adverse impact against African-Americans under the 80% rule, a careful evaluation of the appraisal and decision-making process should be done. Of course, the performance data of the individual claimants should also be analyzed, and an adverse impact analysis should be conducted.

5. Based on the survey data and what you know about performance appraisal, what areas are most important for a rater-training program? Any particular rating errors or biases in need of attention based on the survey data?

Attribution training related to the actor/observer bias is needed (see the survey results on understanding of performance constraints). Provide training especially on the fundamental attribution error (page 234), since the survey indicated supervisors' lack of understanding of subordinates' work-related problems. Also consider frame-of-reference training for raters (page 237), which has been shown to increase rating accuracy: "This training consists of creating a common frame of reference among raters in the observation process. Raters are familiarized with the rating scales and are given opportunities to practice making ratings. Following this, they are given feedback on their practice ratings. They are also given descriptions of critical incidents of performance that illustrate outstanding, average, and unsatisfactory levels of performance on each dimension."
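The 80% (four-fifths) rule recommended under question 4 above reduces to a simple ratio of selection rates. The sketch below illustrates the computation; all of the promotion counts are hypothetical assumptions for demonstration, not data from the Darby case.

```python
# Sketch of the 80% (four-fifths) rule test for adverse impact.
# All promotion counts below are hypothetical, for illustration only.

def selection_rate(selected, applicants):
    """Fraction of the group that received the favorable decision."""
    return selected / applicants

def adverse_impact(protected_rate, comparison_rate, threshold=0.8):
    """Return the impact ratio and whether it falls below the 80% threshold."""
    ratio = protected_rate / comparison_rate
    return ratio, ratio < threshold

# Hypothetical data: 3 of 20 African-American employees promoted,
# 15 of 60 white employees promoted.
protected = selection_rate(3, 20)     # 0.15
comparison = selection_rate(15, 60)   # 0.25

ratio, violation = adverse_impact(protected, comparison)
print(round(ratio, 2), violation)     # prints: 0.6 True
```

A ratio below 0.80 is only prima facie evidence; as noted above, it triggers careful scrutiny of the appraisal and decision-making process rather than proving discrimination by itself.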
One study (page 233) showed promise for "self-efficacy training" for raters. Raters who were trained in giving negative feedback produced less lenient ratings than a control group. This training involved observing a successful rater, a simulated appraisal session with a "problem" employee, feedback on performance, and then coaching on how to conduct an appraisal discussion.

6. Self-ratings with this system are, on average, almost two points higher than supervisory appraisals. Should supervisors review self-appraisals before they evaluate performance? Explain your answer based on possible rating errors.

Supervisors should review the self-appraisals before the feedback session, not before they evaluate performance, lest anchoring (page 237) or one of the other rating biases affect the evaluation. Subordinates should review the supervisor's evaluation prior to the feedback session. The differences in the appraisals should then be discussed at the feedback session. It may be that the situational constraints the subordinate perceived were not clearly understood by the supervisor. Or it may be that what the employee thought were situational constraints could have been avoided by taking a different action.

Page 237: "Supervisors, for example, can be inappropriately affected by the level of subordinates' initial self-ratings, particularly if the supervisor has not anchored future judgments with his/her own prior judgments. Supervisors should make assessments before they review (and consider) self-ratings and be wary of their own preconceived notions also. The origin of this problem again seems to be the holistic consideration of a person's performance on each rating factor rather than attending to the specific behaviors or outcomes that were exhibited.
If rating scales are used that don't call for an overall judgment but rather elicit estimates of the frequencies with which the behaviors or outcomes anchoring each level occurred, we might overcome (or reduce) the problem of anchoring."

7. Steve just read a book by Jack Welch (former CEO of GE), and Jack likes "forced distribution." What should you tell Steve about this rating method?

Page 233: "Leniency is probably the primary reason companies have turned to forced distribution systems such as the GE A, B, C system, where managers have to put a certain percentage of subordinates into the 'C' category. Jack Welch and many others argue that differentiation of employees by performance and making certain the most important positions in the organization (the 'A' positions) are occupied by 'A' players is a key toward competitive advantage. However, recent research with forced distribution is not favorable."

Page 227: "Companies using forced distribution found improved variability among ratees, a primary purpose of the approach, but a lower overall evaluation of the appraisal system compared to other approaches. Supervisors and managers are often offended that no matter how effective they are as managers they must comply with the required forced distribution."

8. Should the managers be formally evaluated? If so, describe the system you recommend.

Darby should consider the use of multiple raters in the form of a 360-degree feedback system. Not only are 360-degree appraisals a characteristic of "High Performance Work Systems," but they also tend to be more accurate, are perceived as fairer, have fewer biases, and are less likely to be targets for lawsuits. In addition to 360-degree appraisals, Darby should consider benchmarking its positions against its toughest competitor in order to improve quality for its customers and to tailor the performance appraisals to customer satisfaction. MBO is also an effective approach for managerial appraisal.

9.
A performance appraisal guru said to use ratings of "relative frequency." What does this mean? Give an example.

See pages 232-233. Research says that when you do ratings, the focus should be on how frequently (e.g., always, 100% of the time, never) the ratee achieved a certain level of performance in the context of all the times the ratee had the opportunity to achieve at this level. Ratings of frequency are superior to ratings of "intensity" (e.g., strongly agree/disagree) or satisfactoriness (e.g., how satisfied are you with your instructor). PDA and (sometimes) summated scales use ratings of "relative frequency."

Pages 232-233: "Research on rating formats shows that ratings of relative frequency result in higher levels of reliability in ratings (across raters rating the same person) and that the people who receive feedback on their performance actually prefer frequency ratings to other feedback options. The PDA system is most compatible with this approach to appraisal, although BOS also calls for frequency ratings."

Chapter Exercise 7.5
The Development of a Performance Appraisal System for Instructors
Jeffrey S. Kane

Objectives. After completing this exercise, students should have a better understanding of the multidimensionality of performance. They will also know how to construct measurable performance dimensions and understand the steps involved in the development of an appraisal system.

Description. The purpose of this exercise is to give the student a feel for a more sophisticated approach to performance measurement. This exercise was written by Dr. Jeffrey Kane, the developer of "Performance Distribution Assessment" (PDA), an approach discussed on pages 231 and 232. Figure 7-4 (page 226) defines the six primary criteria on which the value of performance may be assessed. The instructor may choose to skip parts of the exercise using information provided below.
The text discusses the superiority of using performance rating scales that call for ratings of relative frequency. PDA is one such rating system.

Part A: The Multidimensionality of Performance Assessment

You could decide to develop your own dimensions of performance instead of using the eight functional activities presented in the case and defined below. Part A calls for the student to derive importance weights for these eight activities. You could elect to skip Part A and use the average weights presented below.

ACTIVITIES/JOB FUNCTIONS

LECTURE ORGANIZATION: lectures follow a clear and logical outline compatible with other course materials (e.g., written outlines, notes, PowerPoint slides)
ORAL EXPLANATION: communicates and lectures in a clear, understandable manner
PROVIDING EXAMPLES: provides real-life examples to illustrate points
CONDUCTING EXERCISES: uses experiential exercises to illustrate points
USING AUDIOVISUAL MEDIA: uses a variety of audiovisual aids (e.g., online material, PowerPoint slides, interactive games, videos/movies)
GRADING: evaluates classroom performance and work outcomes
COURSE-RELATED ADVISING AND FEEDBACK: provides advice and feedback regarding student classroom performance
CLASSROOM INTERACTION: provides opportunity for interaction with students both in and out of the classroom

IMPORTANCE RATINGS/WEIGHTS

The mean "importance" ratings of these activities, as judged by students using the definitions above, are as follows:

LECTURE ORGANIZATION: 16%
ORAL EXPLANATION: 20%
PROVIDING EXAMPLES: 7%
CONDUCTING EXERCISES: 6%
USING AUDIOVISUAL MEDIA: 5%
GRADING: 26%
COURSE-RELATED ADVISING AND FEEDBACK: 10%
CLASSROOM INTERACTION: 10%

Part B: Determining Performance Criteria

Remind students to review Figure 7-4 (page 226) for definitions of the criteria (e.g., quality, quantity, timeliness, interpersonal impact).
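The importance weights listed in Part A can be thought of as normalized mean ratings. A minimal sketch of the normalization, assuming hypothetical raw mean ratings chosen so the resulting percentages match the weights reported above:

```python
# Hypothetical raw mean "importance" ratings (NOT the actual survey data);
# normalizing them to percentages reproduces the weights listed above.
raw_means = {
    "LECTURE ORGANIZATION": 3.2,
    "ORAL EXPLANATION": 4.0,
    "PROVIDING EXAMPLES": 1.4,
    "CONDUCTING EXERCISES": 1.2,
    "USING AUDIOVISUAL MEDIA": 1.0,
    "GRADING": 5.2,
    "COURSE-RELATED ADVISING AND FEEDBACK": 2.0,
    "CLASSROOM INTERACTION": 2.0,
}

total = sum(raw_means.values())
# Each weight is the activity's share of the total, expressed as a percentage.
weights = {activity: round(100 * mean / total) for activity, mean in raw_means.items()}

print(weights["GRADING"])        # 26
print(sum(weights.values()))     # 100
```

Whatever rating scale students use in Part A, the weights should sum to 100% so they can later be combined with the criterion weights in Parts B and C.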
Exhibit 7.5.1 presents a matrix listing only these particular criteria, but point out that "Need for Supervision" and "Cost-Effectiveness" could apply to other jobs and work functions or activities. Presented below are the rounded, modal student judgments of the relative weights of the criteria for each work function. (These are the WEIGHT #1 percentages described in Section C for Exhibit 7.5.1 on page 565, expressed as proportions; values are listed under the criteria in left-to-right order.)

                                   Quality   Quantity   Timeliness   Interpersonal Impact
LECTURE ORGANIZATION (16)           1.00
ORAL EXPLANATION (20)               1.00
PROVIDING EXAMPLES (7)               .50       .25         .25
CONDUCTING EXERCISES (6)             .60       .30         .10
USING AUDIOVISUAL MEDIA (5)         1.00
GRADING (26)                         .90       .10
COURSE-RELATED ADVISING AND
  FEEDBACK (10)                      .45       .20         .15          .20
CLASSROOM INTERACTION (10)           .50       .20         .30

These percentages serve as the basis for specifying the "relevant" criteria for each work function and for deriving the full weighting scheme.

WEIGHT #2: Multiply the importance weight (e.g., 16 for Lecture Organization, 20 for Oral Explanation, etc.) by the criterion weights for each function.

                                   Quality   Quantity   Timeliness   Interpersonal Impact
LECTURE ORGANIZATION (16)           16
ORAL EXPLANATION (20)               20
PROVIDING EXAMPLES (7)               3.5       1.75        1.75
CONDUCTING EXERCISES (6)             3.6       1.80         .60
USING AUDIOVISUAL MEDIA (5)          5
GRADING (26)                        23.4       2.6
COURSE-RELATED ADVISING AND
  FEEDBACK (10)                      4.5       2.0         1.5          2.0
CLASSROOM INTERACTION (10)           5.0       2.0         3.0

(These are the WEIGHT #2 percentages described in Section C for Exhibit 7.5.1 on page 565.)

Part D: Creation of PLDs (Performance Level Descriptors)

See Exhibit 7.5.2 for examples. Write Bernardi@fau.edu for more examples.

Parts E and F: Performance as a Distribution

Instructors should go through the examples used in class and/or the examples used in the exercise. The exercise can be discussed using Figure 7-1. For the function "Oral Explanation" (note also Prescription #2 in Figure 7-1), the extent to which students understand their instructors' lectures reflects the "quality" of performance.
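Stepping back to the weighting scheme above, the WEIGHT #2 step can be sketched as follows. This is a minimal illustration for two of the functions; the criterion labels assume the left-to-right order of the weight tables:

```python
# Sketch of the WEIGHT #2 calculation: each function's importance weight
# multiplied by its criterion weights. Values are the modal student
# judgments reported above; criterion labels follow the tables' column order.
importance = {"PROVIDING EXAMPLES": 7, "CONDUCTING EXERCISES": 6}
criterion_weights = {
    "PROVIDING EXAMPLES":  {"Quality": 0.50, "Quantity": 0.25, "Timeliness": 0.25},
    "CONDUCTING EXERCISES": {"Quality": 0.60, "Quantity": 0.30, "Timeliness": 0.10},
}

# WEIGHT #2 = importance weight x criterion weight, per criterion.
weight2 = {
    func: {crit: round(importance[func] * w, 2) for crit, w in crits.items()}
    for func, crits in criterion_weights.items()
}

print(weight2["PROVIDING EXAMPLES"])    # {'Quality': 3.5, 'Quantity': 1.75, 'Timeliness': 1.75}
print(weight2["CONDUCTING EXERCISES"])  # {'Quality': 3.6, 'Quantity': 1.8, 'Timeliness': 0.6}
```

Because each function's criterion weights sum to 1.0, the WEIGHT #2 values for a function sum back to its importance weight, and the full set of WEIGHT #2 values sums to 100.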
One way of assessing performance on a dimension such as the Quality of Oral Explanation would be to assign a 1 to 5 rating on the basis of the scores an instructor's students achieved on some test of the knowledge they were supposed to acquire from the instructor's lectures. However, for a number of reasons, scores on such a test may not be a practical source of data. Furthermore, such data generally do not allow for comparisons of instructors for decision-making purposes (e.g., tests may not be of equal difficulty across subjects).

Another way of getting at the "quality" of instruction is to have students define levels of performance and then rate the extent to which the instructor meets or exceeds these levels. Research suggests that such ratings should focus on the relative frequency (e.g., 90% of the time) with which the ratee achieved each level of performance out of all the ratee's opportunities to achieve at this level. Ratings of frequency allow greater precision than ratings of "intensity" (e.g., strongly agree/disagree) or satisfactoriness (e.g., how satisfied are you with your instructor).

So the "5" level on this scale might be: "I had a clear, unambiguous understanding of what s/he was trying to teach; no one needed to ask questions to clarify the material presented." Raters would rate the quality of performance on the "Oral Explanation" function by reporting how often the instructor hit this level of performance out of all the times s/he gave lectures. So, the instructor could get a rating of up to 100% at this level. Obviously, the rating here could also be 0%! That is why we also need to define other levels along the performance continuum, as in Exhibit 7.5.2. Raters then rate how frequently the instructor achieved each level of performance (summing to 100% of the time). Research on rating formats supports the use of ratings of relative frequency. The PDA system is most compatible with this approach to appraisal.
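A minimal sketch of what such a relative-frequency rating looks like for one dimension. The level values and frequencies below are hypothetical, and the expected-level summary shown is only one simple way to aggregate the distribution, not the full PDA scoring procedure:

```python
# Hypothetical relative-frequency ratings for one dimension (e.g., Quality
# of Oral Explanation): the rater reports how often the instructor performed
# at each defined level (PLD), and the frequencies must sum to 100%.
levels = [5, 4, 3, 2, 1]                      # PLD levels, 5 = best
frequencies = [0.40, 0.30, 0.20, 0.10, 0.00]  # proportion of lectures at each level

# The distribution must account for all of the ratee's opportunities.
assert abs(sum(frequencies) - 1.0) < 1e-9, "frequencies must sum to 100%"

# One simple summary: the expected performance level on this dimension.
score = sum(lvl * f for lvl, f in zip(levels, frequencies))
print(round(score, 2))  # 4.0
```

Note that the distribution itself carries more information than any single summary number: two instructors with the same expected level can differ sharply in how often they hit the "5" level or fall to the "1" level, which is exactly the variability PDA is designed to capture.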
KEY REFERENCES ON PERFORMANCE DISTRIBUTION ASSESSMENT (PDA)

Kane, J. S. (1984). Performance distribution assessment: A new breed of appraisal methodology. In H. J. Bernardin & R. W. Beatty (Eds.), Performance appraisal: Assessing human behavior at work (pp. 325-341). Belmont, CA: Kent Publishing Co.

Kane, J. S. (1986). Performance distribution assessment. In R. Berk (Ed.), Performance assessment: Methods and applications (Ch. 9, pp. 237-273). Baltimore, MD: Johns Hopkins University Press.

Kane, J. S. (1988). Minimizing the impact of judgmental fallibility on real world decision-making, with some illustrative applications in human resource management. In R. L. Cardy, S. M. Puffer, & J. M. Newman (Eds.), Advances in information processing in organizations (Vol. 3, pp. 25-37). Greenwich, CT: JAI Press.

Kane, J. S. (1996). The conceptualization and representation of total performance effectiveness. Human Resource Management Review, 6(2), 123-145.

Kane, J. S. (1997). Assessment of the situational and individual components of performance. Human Performance, 10(3), 193-226.

Kane, J. S. (2000). Accuracy and its determinants in distributional assessment. Human Performance, 13(1), 47-85.

Kane, J. S., & Woehr, D. J. (2006). Performance measurement reconsidered: An examination of frequency estimation as a basis for assessment. In D. J. Woehr, W. Bennett, & C. Vance (Eds.), Performance measurement: Current perspectives and future challenges. Hillsdale, NJ: Lawrence Erlbaum Associates.

Solution Manual for Human Resource Management, John H. Bernardin & Joyce E. A. Russell (ISBN 9780078029165, 9780071326186)
