
Helping Managers Agree Upon How to Score Employee Performance

by paulfalconehr.com on November 29, 2013

Unfortunately, there’s no magic bullet that distinguishes a performance review score of 5 from a 4, or a 3 from a 2, for example. What’s important, however, is that leaders within each division and/or department, and across divisions and departments, discuss what those scores might look like in a sort of performance “calibration” meeting. That’s where they can discuss key employees and where those employees should fall on the performance-rating spectrum. To do so, however, they’ll need a tool to help them talk through these very subjective considerations.

Imagine this: A division president in a large Fortune 500 company tells you, the human resources practitioner, that on a scale of 1 to 5 (5 being outstanding, 3 meeting expectations, and 1 being a failure), he expects all of his direct reports to be 5s. “If they’re not 5s, they should be fired” goes the president’s logic. Down the hall, the senior vice president of finance tells you that she believes the vast majority of her team is meeting expectations and performing well and that she intends to award “overall scores” of 3 to the majority of her staffers to reflect that. Then she hesitates and says, “Then again, my peers in the three other divisions will probably award more 5 scores than anything else, so maybe I’ll need to award 5s as well; otherwise, my group will receive lower merit increases relative to the other divisions.”

Is it okay if a manager expects everyone on the team to be a 5 (i.e., outstanding, stellar, and able to leap tall buildings in a single bound)? Does it bother you that the SVP of Finance doesn’t feel comfortable awarding what she feels is the right score for the majority of her team—3 / “meets expectations”—because her peers in other divisions will inflate scores for their own teams? How do you get everyone on the same page in terms of distinguishing appropriately between scores and assigning grades that truly reflect the level of performance in that group?

Rater Definition Consistency Tool

The following rater definition consistency tool can be used as a point of reference for all involved. The purpose of the tool is to help open the lines of communication and get all leaders “speaking the same language” about what success looks like relative to individual contributions and performance levels over the past year. The tool itself can be broken down as follows:

5—Distinguished Performance (≤ 5%)

Role model status. Potential successor to immediate supervisor/highly promotable now. Performed above and beyond under exceptional circumstances during the review period. Generally recognized #1 (Top 5%) ranking among peer group.

4—Superior Performance (30%)

Overall excellent performer and easy to work with – smart, dedicated, ambitious, and cooperative, but may not yet be ready for promotion because there’s still a lot to learn in the current role. May not have been exposed to exceptional circumstances or opportunities that would warrant a higher designation. However, definitely an exceptional contributor who exceeds people’s expectations in many ways and is a long-term “keeper” who just needs more time in the current role to grow, develop, and gain additional exposure.

3—Fully Successful Performance (50%)

  • (3a) Consistently performs well and is reliable, courteous, and dedicated. Always tries hard and looks for ways of acquiring new skills but doesn’t necessarily perform with distinction. Works to live rather than lives to work. May not stand out as a rarity among peers but consistently contributes to the department’s efforts and is a valuable member of the team.
  • (3b) Meets expectations overall but may be challenged in particular performance areas. May perform well because of tenure in the role and familiarity with the workload but does not appear ambitious about learning new things or expanding beyond their comfort zone. While performance may be acceptable, conduct may at times be problematic.

2—Partially Successful Performance (10%)

Fails to meet minimum performance or conduct expectations in specific areas of responsibility. Is not able to demonstrate consistent improvement. May appear to be burned out or lack motivation, and fails to go the extra mile for others. Lacks requisite technical skills or knowledge relating to particular aspects of role. May perform well but conduct is so problematic that the entire year’s performance review score may be invalidated. A partial merit increase or bonus may be awarded.

1—Unsuccessful Performance (≤ 5%)

Fails to meet minimum performance or conduct expectations for the role in general. The individual’s position is in immediate jeopardy. The performance review may be accompanied by corrective action documentation stating that failure to demonstrate immediate and sustained improvement will result in dismissal. No merit increase or bonus should be awarded.

The percentages next to each scoring category reflect what you’d normally expect to see if your company’s scoring results fell under a typical “bell curve” configuration. Also, notice that the “3” category (meets expectations / fully successful performance) has two subsets: one for those who really try hard but don’t necessarily perform with distinction, and another for those who perform satisfactorily but don’t necessarily give it their best effort. Distinguishing between a 3(a) and a 3(b) can be particularly helpful when engaging in discussions regarding individual contribution levels.
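To make the bell-curve comparison concrete, here’s a minimal Python sketch, not drawn from the book, that tallies a team’s proposed overall scores against the target percentages above. The TARGETS mapping (which treats the “≤ 5%” categories as roughly 5%), the 10-percentage-point tolerance for flagging a score, and the calibration_report name are all illustrative assumptions.

```python
from collections import Counter

# Target share of each overall score under the tool's typical bell curve.
# The "less than or equal to 5%" categories (5 and 1) are approximated as 5%.
TARGETS = {5: 0.05, 4: 0.30, 3: 0.50, 2: 0.10, 1: 0.05}

def calibration_report(scores):
    """Print each score's actual share of the team next to its bell-curve target."""
    counts = Counter(scores)
    total = len(scores)
    for score in sorted(TARGETS, reverse=True):
        actual = counts.get(score, 0) / total
        target = TARGETS[score]
        # The 10-point tolerance is an arbitrary, illustrative threshold.
        flag = "  <- discuss in calibration" if abs(actual - target) > 0.10 else ""
        print(f"Score {score}: {actual:5.1%} of team (target ~{target:.0%}){flag}")

# Example: a team of ten where the manager proposes mostly 5s.
calibration_report([5, 5, 5, 5, 5, 4, 4, 3, 3, 3])
```

On a small team, even one or two ratings will swing these shares dramatically, which is exactly why the percentages work best as a conversation starter for the calibration meeting rather than a hard quota.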

How these general parameters fit your organization and what they look like at any given time may well differ from year to year. What makes the most sense is to blow these descriptors up in a PowerPoint presentation or draft them on butcher-block paper and openly discuss as a management team which employees clearly fall into which categories. Start with the highest generally recognized performers and see if you can gain agreement as to why the 5s are 5s. Your discussion can then proceed to the 4s and 3s, although you may not want to address the 2s and 1s in an open forum like that. The point is to get the conversation going. From senior leaders to front-line supervisors, conversations like these need to happen to raise awareness of your organization’s expectations and to provide leaders with benchmarks and guideposts to align their assessments.

What’s the difference between a 4 and a 5? Is it simply a matter of someone who’s able to promote into the boss’s role now as opposed to two years from now? Is the difference attributable to exceptional circumstances that allowed the individual to assume responsibilities well beyond their job description (which may be out of their control)? Likewise, what’s the difference between a 3(b) and a 2? Is it acceptable to have someone on the team who’s technically capable due to long tenure in the role but who demonstrates little ambition or interest in anything outside of their immediate area? What about occasional inappropriate conduct: how “occasional” does it have to be to fail someone for the entire review year? Should we award partial merit increases to anyone who receives an overall score of 2, or should we take that money and return it to the pool to reward the higher performers?

As you can surmise, there aren’t necessarily right or wrong answers to these questions, and much of this is subject to debate. But it’s healthy debate, and it’s wise to have at least one discussion like this before anyone sits down to start writing performance reviews. Otherwise, you’ll end up with an entire management team working in silos, “self-interpreting” what the organization wants to see in proposed overall performance review scores without any guidelines or structure. These group meetings set the tone for the upcoming performance review discussions and documentation strategies.

In fact, as the general level of performance increases across your company, raising the bar and setting higher expectations should become the norm and should change the interpretation of these definitions over time. For example, what looked like superior performance last year may only qualify as fully successful performance this year. And if that’s the case, congratulations: you’re using this tool and your organization’s performance management system correctly to drive productivity across the enterprise.

So go ahead: Kick off a conversation that’s long overdue, and remember that total agreement isn’t necessary. Discussing performance interpretations openly, however, is. And that’s how successful organizational calibration sessions fine-tune performance over time in a performance-driven company.

Special Note:

Excerpts from this blog are from Paul Falcone and Winston Tan’s The Performance Appraisal Tool Kit: Redesigning Your Performance Review Template to Drive Individual and Organizational Change (AMACOM Books, 2013). More information about the book is available from AMACOM.