Rater Selection in 360 Degree Feedback: Who and How Many

If you’ve ever administered a 360 degree feedback program, or been a 360 feedback participant yourself, then you’ve encountered the topic of rater selection, which inevitably leads to two questions: who, and how many? If you’ve been unsure how to answer those questions, don’t worry, you’re not alone.

In our 50 years of providing validated, customized 360 solutions, this remains one of the most common questions we receive, from both participants and administrators. And as with most topics related to 360s, there is a right way and a wrong way to do it.

In this blog post, we will provide our guidance on effective rater selection. But first, why does this matter?

The foundation of any successful 360 assessment lies in the careful selection of raters. Raters are key to the success of a 360-feedback project because they see the leaders in action day-to-day, which results in the most useful and reliable feedback. By choosing individuals who can provide accurate, honest, and insightful feedback, managers and leaders can gain valuable insights into which strengths they should leverage, and which skills they should further develop.

Inviting raters who aren’t able to give accurate, honest, and insightful feedback not only wastes the raters’ time, it can also compromise the feedback itself. The results can be diluted, misinterpreted, or in some cases flat-out wrong. And if this is happening across your program, it can undermine the overall success of your 360 initiative.

Here are four tips to help ensure you’re following best practices:

Tip One: Invite feedback from various roles and multiple levels within the organization.

Raters should include the participant’s manager/supervisor, peers, direct reports, and even external stakeholders (e.g., customers, suppliers, and/or board members). To obtain a comprehensive understanding of an individual's performance, it's essential that feedback is gathered from people with different perspectives on their work.

Tip Two: Avoid selecting raters based on the type of feedback you expect to receive.

The objective of a 360 assessment is to acquire neutral and credible feedback that can be used to drive improvements. Avoid picking raters with personal connections to the employee, or those involved in active conflicts, as they may be reluctant to share honest opinions. Inviting only raters who will give you positive feedback will not help you improve or grow.

Tip Three: Invite raters who have consistent exposure to your work behaviors.

Select individuals who have had consistent professional interactions with the participant for at least three months. This ensures they have enough background knowledge of the participant’s job, and enough experience working with them, to be familiar with how they work. If you invite a peer you worked with on a one-off project three years ago, they won’t be able to provide adequate feedback on your current work behaviors. You’ll be wasting their time and yours.

Tip Four: Communicate expectations clearly.

Once the raters have been selected, it's crucial to communicate the organization’s expectations explicitly. This will empower raters with a clear understanding of their role in providing constructive criticism that focuses on improving performance rather than just identifying weaknesses. Encourage raters to offer specific examples related to their observed behaviors and emphasize the importance of confidentiality throughout the process. Make sure the raters know the importance of their feedback being accurate, why the organization is using the 360-feedback process, and how the results will be used.




Bonus Tip: How Many is Too Many? Consider a random sample!

We recently had a participant who had nearly 100 direct reports (wow!). We suggested they use the tips above: select the people with the most exposure to their behaviors who have worked with them on a consistent basis for at least the last three months. If the group was still too large (e.g., over 25), we suggested selecting a random sample of an acceptable number of raters, so long as everyone in the sample met the criteria above.
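For readers who manage rater lists in a spreadsheet or script, the filter-then-sample approach described above can be sketched in a few lines. The function name, thresholds, and data shapes here are illustrative assumptions, not part of any particular 360 platform:

```python
import random

def select_raters(candidates, months_worked, min_months=3, max_raters=25, seed=None):
    """Filter candidates by consistent exposure, then randomly sample
    if the eligible pool is still too large.

    candidates: list of rater names (illustrative).
    months_worked: dict mapping name -> months of consistent interaction.
    """
    # Tip Three: keep only raters with enough recent, consistent exposure.
    eligible = [c for c in candidates if months_worked.get(c, 0) >= min_months]

    # If the eligible group is already a manageable size, invite everyone.
    if len(eligible) <= max_raters:
        return eligible

    # Bonus tip: otherwise, take a random sample of an acceptable size.
    rng = random.Random(seed)  # fixed seed keeps the draw reproducible
    return rng.sample(eligible, max_raters)
```

For example, with 100 direct reports of whom 99 meet the three-month criterion, this returns a random 25-person sample drawn only from the eligible group.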

While rater selection may not be the most exciting topic to explore, it is important, and doing it wrong has real consequences for both the accuracy and the usability of 360 degree feedback. Fortunately, it’s relatively easy to get right. By following the recommendations in this post, you can help set your managers up for success in their 360 feedback journey.