Performance reviews shape promotions, pay and development – so employees must trust the process. This template gives you concrete performance review survey questions for employees that show how fair, clear and useful your reviews feel today. You get ready-made items, thresholds and actions so HR and leaders can improve fast, not just collect data.
Survey questions
Unless stated otherwise, use a 1–5 scale: 1 = Strongly disagree, 5 = Strongly agree. For “How often…?” items, use 1–5 from “Never” to “Always”.
Clarity & expectations
- Q1. Before my review, I understood the rating scale used (e.g. a 1–5 scale or labels such as “below expectations” / “exceeds expectations”).
- Q2. I understand the criteria used to evaluate my performance.
- Q3. I know what “good performance” looks like in my role.
- Q4. My goals or “Zielvereinbarungen” for the period were clear to me.
- Q5. I knew in advance how my review could affect pay, bonus or promotions.
- Q6. The self-evaluation questions used in our performance reviews are easy to understand and answer.
Preparation & evidence
- Q7. I had enough time to prepare for my performance review and self-evaluation.
- Q8. I had access to the information I needed (goals, KPIs, feedback, projects) to prepare.
- Q9. During the review, my manager referred to concrete examples of my work.
- Q10. My manager used objective data (e.g. metrics, customer feedback) to support their assessment.
- Q11. My own examples and evidence were listened to and taken seriously.
- Q12. How often do performance reviews at our company include specific examples of your contributions?
Quality of the conversation
- Q13. My manager created enough time and space for a proper performance conversation.
- Q14. I felt listened to during my performance review.
- Q15. I received clear feedback on my strengths.
- Q16. I received clear feedback on where I can improve.
- Q17. We agreed on concrete next steps or actions after the review.
- Q18. Overall, the review conversation was respectful and constructive.
Fairness & bias
- Q19. I feel my performance review rating was fair.
- Q20. My review focused on my results and behavior, not on personal preferences or politics.
- Q21. My manager evaluates people in our team consistently using the same standards.
- Q22. I did not experience favoritism in the way my performance was assessed.
- Q23. I did not experience stereotypes or bias related to gender, age, origin or other characteristics.
- Q24. I trust that calibration rounds (“Beurteilungsrunden”) between managers help keep reviews fair.
Impact on development & career
- Q25. My performance review helped me understand my strengths better.
- Q26. My performance review helped me understand what I need to learn or improve.
- Q27. After the review, I have a clear development or training plan.
- Q28. The review covered my medium-term career goals and possible career paths.
- Q29. Since my last review, I have seen real follow-up on agreed development actions.
- Q30. How often do performance reviews lead to visible development opportunities for you (projects, training, mentoring)?
Link to compensation & promotions
- Q31. The link between performance ratings and salary increases is transparent to me.
- Q32. The link between performance ratings and bonus payments is transparent to me.
- Q33. The link between performance ratings and promotion decisions is transparent to me.
- Q34. I understand why I did or did not receive a pay increase after my last review.
- Q35. I understand why I did or did not receive a promotion after my last review.
- Q36. I believe performance review results have more impact on rewards than internal politics.
Psychological safety & overall experience
- Q37. I felt safe to speak openly during my performance review.
- Q38. I felt safe to disagree with my manager’s assessment if I saw things differently.
- Q39. I could give honest upward feedback about our review process or my manager’s approach.
- Q40. I know where to go if I strongly disagree with my performance evaluation.
- Q41. Overall, performance reviews in our company help me perform better in my role.
- Q42. Overall, I trust our company’s performance review process.
Overall rating question (0–10 scale)
- Q43. How likely are you to recommend our performance review process to a colleague as fair and useful? (0 = Not at all likely, 10 = Extremely likely)
Open-ended questions
- O1. What would make performance reviews more useful for you personally?
- O2. What should your manager do differently in your next performance review?
- O3. What should our company change in the overall performance review process?
- O4. If you feel your review was unfair, what happened?
Decision & action guide
| Area / Questions | Trigger threshold | Required action | Owner | Timeline |
|---|---|---|---|---|
| Clarity & expectations (Q1–Q6) | Score <3.5 | Review rating scales, goal-setting templates and manager briefing; run 1–2 clarity workshops. | HR + Business leaders | Within 30 days after survey results |
| Preparation & evidence (Q7–Q12) | Score <3.5 or >30% choose “Never/Rarely” in Q12 | Introduce standard review agenda with evidence checklist; communicate minimum 2-week prep time. | HR designs; managers apply | New process live for next review cycle |
| Quality of conversation (Q13–Q18) | Score <3.5 or any item <3.0 | Offer mandatory manager training on feedback, active listening and difficult talks. | HR L&D | Training scheduled within 45 days; completion ≥90% before next cycle |
| Fairness & bias (Q19–Q24) | Score <3.5 overall or gap ≥0.5 between groups | Strengthen calibration, run bias-awareness sessions, review rating distributions and appeals process. | HR + Department heads | Bias actions defined within 30 days, implemented before next cycle |
| Development & career impact (Q25–Q30) | Score <3.5 or >40% choose “Never/Rarely” in Q30 | Standardize Individual Development Plans; link reviews with learning and internal mobility options. | HR + Managers | IDP template rolled out within 60 days |
| Comp & promotions link (Q31–Q36) | Score <3.2 | Publish simple explanation of pay / bonus / promotion process; add talking points for managers. | Comp & Benefits + HR | Transparency pack released within 45 days |
| Psychological safety & trust (Q37–Q42) | Score <3.5 or >15% “Strongly disagree” in any item | Run focus groups, offer escalation channels, review manager behavior; consider manager-specific surveys. | HR BP + Local leaders | Focus groups within 21 days; action plan within 45 days |
| Overall NPS-style rating (Q43) | Average <7.0 or detractors >25% | Identify top 3 drivers from data, set 2–3 company-wide improvement priorities, communicate plan. | HR + Executive team | Company plan shared within 30 days |
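If you track these triggers in a script or spreadsheet, the rules above are easy to codify. The following is a minimal Python sketch of two of them (the generic dimension threshold and the Q43 rule); function names and data shapes are illustrative, and since the template does not define “detractors”, the sketch assumes the common NPS convention of scores 0–6.

```python
# Minimal sketch of two trigger rules from the table above.
# Assumption: "detractors" = scores 0-6 (the common NPS convention);
# the template itself does not define the term.

def dimension_triggered(scores: list[int], threshold: float = 3.5) -> bool:
    """True if a 1-5 dimension average falls below its action threshold."""
    return sum(scores) / len(scores) < threshold

def q43_triggered(scores: list[int]) -> bool:
    """True if the 0-10 overall rating (Q43) needs a company-wide plan."""
    average = sum(scores) / len(scores)
    detractor_share = sum(1 for s in scores if s <= 6) / len(scores)
    return average < 7.0 or detractor_share > 0.25

# Example: 3 of 8 respondents score 6 or lower, so the detractor share
# is 37.5% and the rule fires even though the average (7.4) is above 7.0.
print(q43_triggered([8, 9, 6, 7, 5, 10, 6, 8]))  # True
```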
Key takeaways
- Use this survey right after reviews to capture fresh employee experiences.
- Group questions by theme to spot where your system fails people.
- Turn thresholds into clear owner-plus-deadline improvement actions.
- Combine scores with calibration data for a full fairness picture.
- Repeat yearly to track trust, usefulness and psychological safety trends.
Definition & scope
This survey measures how employees experience your performance review process: clarity, fairness, usefulness, impact on pay and psychological safety. It targets all employees who took part in a recent performance appraisal (“Leistungsbeurteilung”) or review meeting (“Mitarbeitergespräch”), not managers rating others. Results support decisions on process design, manager training, calibration rules, communication about rewards and the cadence of reviews.
Scoring & thresholds
Use a 1–5 scale for agreement questions: 1 = Strongly disagree, 2 = Disagree, 3 = Neither, 4 = Agree, 5 = Strongly agree. Treat items with “How often…?” as 1 = Never to 5 = Always. For most questions, average ≥4.0 means strong, 3.0–3.9 means “needs improvement”, and <3.0 is critical.
Start by calculating averages per item and per dimension (Q1–Q6, Q7–Q12, etc.). Then compare teams, locations and levels. Use clear rules to trigger actions: for example, if any fairness item (Q19–Q24) drops below 3.0 in a team, schedule a deep dive and extra calibration support there.
- HR calculates item and dimension scores within 7 days after survey closing.
- HR flags red areas: any question with average <3.0 or >20% “Strongly disagree” (see the sketch after this list).
- Business leaders review their area’s results with HR within 14 days.
- Each leader selects max. 3 focus areas and defines actions with deadlines.
- Progress on actions is checked in quarterly performance or talent reviews.
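For teams that prefer automation over spreadsheets, here is a minimal Python sketch of the scoring and flagging steps above. It assumes responses are stored as a mapping from question ID to a list of 1–5 answers; the dimension ranges and the red-flag rules come from this section, while all names and the data shape are illustrative.

```python
# Minimal sketch of item/dimension scoring and red-flagging, assuming
# responses arrive as {question_id: [list of 1-5 answers]}. Only two
# dimensions are spelled out; the rest follow the same pattern.

DIMENSIONS = {
    "Clarity & expectations": [f"Q{i}" for i in range(1, 7)],   # Q1-Q6
    "Preparation & evidence": [f"Q{i}" for i in range(7, 13)],  # Q7-Q12
}

def average(values) -> float:
    values = list(values)
    return sum(values) / len(values)

def red_flags(responses: dict[str, list[int]]) -> list[str]:
    """Items with average < 3.0 or > 20% answering 1 (Strongly disagree)."""
    return [
        qid for qid, answers in responses.items()
        if average(answers) < 3.0
        or sum(1 for a in answers if a == 1) / len(answers) > 0.20
    ]

def dimension_scores(responses: dict[str, list[int]]) -> dict[str, float]:
    """Average of item averages for each dimension."""
    return {
        name: average(average(responses[q]) for q in qids if q in responses)
        for name, qids in DIMENSIONS.items()
    }
```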
Follow-up & responsibilities
Good performance review survey questions for employees only help if you follow through. Everyone needs a clear role: HR designs and analyses; managers act within their teams; executives own company-wide changes. Response time signals how seriously you take employee voice.
Set simple service levels: first acknowledgement of results within 7 days; team-level discussion within 30 days; structural changes (policies, tools) planned within 60 days. Document actions so you can show the Works Council or leadership what changed.
- HR owns survey design, data quality and central reporting within 7 days.
- People Business Partners run result workshops with managers within 21 days.
- Managers discuss their team’s scores and next steps in a team meeting within 30 days.
- Executive team agrees 2–3 company-wide priorities per cycle within 45 days.
- HR publishes a short “You said, we did” update to employees within 60 days.
Fairness & bias checks
Fairness in performance appraisal (“Leistungsbeurteilung”) is about both process and outcomes. Use this survey to compare experiences across gender, age, location, contract type, working model (remote vs. office) and level – but always respect anonymity thresholds (e.g. min. 5 responses per group).
Look for systematic gaps: for example, women rating fairness questions 0.4 points lower than men, or remote workers feeling less heard in reviews. Combine this with rating distributions and calibration outcomes. Tools like the calibration meeting template help you standardize bias checks in manager conversations.
- HR slices results by key groups (gender, location, level, remote/office) where n≥5.
- Any gap ≥0.4 on fairness or safety triggers a bias review with the relevant leader (see the sketch after this list).
- HR compares survey gaps with rating distributions and promotion decisions each cycle.
- Where patterns repeat twice, HR proposes concrete policy or training changes.
- Works Council / Betriebsrat is briefed on patterns and planned mitigations annually.
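To run these gap checks consistently, a short script helps. The sketch below assumes each response arrives as a (group label, fairness score) pair; the n≥5 anonymity floor and the 0.4 trigger come from this section, while the function name and data shape are illustrative.

```python
# Minimal sketch of the group-gap check. Groups below the anonymity
# floor (n >= 5, per the text) are suppressed before comparison.

from collections import defaultdict

def largest_group_gap(responses: list[tuple[str, float]],
                      min_n: int = 5) -> float | None:
    """Largest average-score gap between reportable groups, or None."""
    by_group: dict[str, list[float]] = defaultdict(list)
    for group, score in responses:
        by_group[group].append(score)
    averages = [
        sum(scores) / len(scores)
        for scores in by_group.values()
        if len(scores) >= min_n  # anonymity floor
    ]
    if len(averages) < 2:
        return None  # not enough reportable groups to compare
    return max(averages) - min(averages)

# Example: group B rates fairness 0.48 lower on average than group A,
# which crosses the 0.4 trigger for a bias review.
data = [("A", 4.2), ("A", 4.0), ("A", 3.8), ("A", 4.1), ("A", 4.3),
        ("B", 3.5), ("B", 3.6), ("B", 3.7), ("B", 3.4), ("B", 3.8)]
gap = largest_group_gap(data)
if gap is not None and gap >= 0.4:
    print(f"Gap of {gap:.2f}: schedule a bias review")
```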
Short survey blueprints
You do not always need the full set of performance review survey questions for employees. Use these ready-made cuts for different purposes while still relying on the same core question bank (Q1–Q43).
(a) Post-review cycle survey (10–15 questions)
Goal: fast pulse after a company-wide review cycle, to track trust and usefulness without survey fatigue. Timing: send 2–5 days after each employee’s review meeting and keep it open for 10–14 days. Anonymity: full anonymity; only report cuts with ≥5 responses.
- Suggested items: Q1, Q3, Q7, Q9, Q13, Q14, Q17, Q19, Q25, Q27, Q31, Q37, Q41, Q42, Q43 + O1.
- Owner: HR runs survey; managers communicate purpose locally.
- Frequency: every major review cycle (usually 1–2 times per year).
(b) Pilot of a new review process (13–15 questions)
Goal: test a new form, cadence or rating model with a pilot group before you roll it out. Timing: run once, directly after the first pilot cycle. Anonymity: if the pilot group is small, use a minimum reporting size of 7 and avoid narrow cuts.
- Suggested items: Q1, Q2, Q4, Q6, Q8, Q10, Q15, Q18, Q24, Q28, Q30, Q41, Q43 + O2, O3.
- Owner: Project team designs; HR analyses and presents recommendations.
- Use results to tweak forms, guidance and training before global rollout.
(c) Manager-specific review survey (teams where reviews went badly) (12–15 questions)
Goal: dig deeper where complaints or red flags appear, without exposing individuals. Timing: within 1–3 weeks after issues surface. Anonymity: never run for groups smaller than 5; clearly state that only aggregated data is shared with the manager.
- Suggested items: Q7, Q9, Q13, Q14, Q15, Q16, Q18, Q19, Q20, Q21, Q22, Q23, Q37, Q38, Q39 + O2, O4.
- Owner: HR BP initiates; results are discussed in a coaching session with the manager.
- If low scores repeat next cycle, escalate to the manager’s leader and HR Director.
(d) Ultralight pulse (5–7 questions)
Goal: quick check between big cycles, especially during changes (new rating scale, new tool). Timing: 1–2 months after the change. Anonymity: same thresholds.
- Suggested items: Q1, Q3, Q13, Q19, Q25, Q37, Q41 + O1.
- Owner: HR; share results in one slide with next steps.
Examples / use cases
Use case 1: Low clarity, decent fairness
A scale-up scored 3.1 on Clarity & Expectations (Q1–Q6) but 3.9 on Fairness & Bias (Q19–Q24). People trusted managers’ intentions, but did not understand criteria or how reviews linked to pay. HR simplified rating definitions, added examples per role and published a one-page overview of the pay process. In the next cycle, clarity rose to 3.9 and questions about “how my salary is set” dropped in open comments.
Use case 2: Good conversations, weak development follow-through
An engineering team rated conversation quality high (4.2 on Q13–Q18), but only 2.8 on “development impact” (Q25–Q30). Reviews felt nice but did not change anything. HR introduced a standard Individual Development Plan linked to learning paths from their talent development guide, and managers had to log 3 concrete actions per person. Within two cycles, development scores rose above 3.8 and internal moves increased.
Use case 3: Psychological safety problems in one region
The DACH region showed an average of 2.9 on Q37–Q39 (psychological safety) while other regions were above 3.8. Open comments mentioned fear of speaking up and a “no-appeal culture”. HR ran anonymous focus groups, then trained local leaders using materials from their engagement and survey playbooks. They also formalized an appeal path for reviews. A year later, psychological safety rose to 3.6 and escalated conflicts fell sharply.
- Always pair numeric scores with open comments and concrete stories.
- Translate findings into 1–3 specific changes, not dozens of small tweaks.
- Re-measure after one review cycle to see if actions worked.
Implementation & updates
Start small: pilot the full survey with one function or location first, then adjust and roll out company-wide. In DACH, involve the Betriebsrat / Works Council and your Datenschutz / GDPR officer early so anonymity rules and retention periods are clear. A talent platform like Sprad Growth can help automate survey sends, reminders and follow-up tasks.
For segmentation, collect only a few attributes (team, level, location, remote/office). Avoid combinations that could identify someone (e.g. “only one Senior in this small site”). Set standard anonymity thresholds: no cuts for groups under 5 people; no open-text excerpts for groups under 10.
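These reporting limits are easy to enforce centrally in your survey tool or reporting script. A minimal sketch, assuming only the two thresholds named above (5 for score cuts, 10 for open-text excerpts):

```python
# Minimal sketch of the anonymity thresholds above: no score cuts for
# groups under 5 people, no open-text excerpts for groups under 10.

def reportable(group_size: int, open_text: bool = False) -> bool:
    """Whether a result cut may be shown for a group of this size."""
    return group_size >= (10 if open_text else 5)

print(reportable(6))                  # True: score cut may be shown
print(reportable(6, open_text=True))  # False: suppress open-text excerpts
```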
- Phase 1 – Pilot: Choose 1–2 departments, run the survey after the next review cycle.
- Phase 2 – Rollout: Adapt wording, translate if needed, agree timelines with the Works Council.
- Phase 3 – Manager training: Teach managers how to explain results and adjust their approach.
- Phase 4 – Annual refresh: Once per year, review questions, thresholds and blueprints with HR and leaders.
- Track KPIs: survey participation (target ≥70%), dimension scores, rating disputes, promotion/pay complaints.
Connect survey insights with other performance processes. For example, if fairness scores are low while calibration meetings are chaotic, use your performance calibration templates to tighten that process. Likewise, if development impact scores lag, link reviews more clearly to Individual Development Plans and internal mobility options from your performance management guide.
Conclusion
Performance review survey questions for employees give you something your standard review forms cannot: a mirror of how the process feels from the employee side. You uncover whether people understand expectations, trust ratings and see real development and reward follow-through. That means earlier detection of problems, better quality conversations and sharper priorities for fixing your system.
To move from idea to action, pick a first pilot area, plug this question set into your survey tool and agree on owners and thresholds. After the pilot, adjust wording, involve the Works Council where needed and define a simple annual rhythm: review cycle → survey → actions → calibration of the process itself. Over a few cycles, you will see higher trust, fewer disputes and reviews that genuinely support performance and growth instead of just ticking a compliance box.
FAQ
How often should we run this survey?
For most organizations, once per major review cycle (usually annually or twice a year) is enough. Run it shortly after each person’s review conversation, so memories are fresh. Some companies add a short 5–7 item pulse when they change the process. More than 3 surveys per year about reviews usually create fatigue and lower response quality.
What should we do if scores are very low in one team?
First, protect anonymity when you share results. Then discuss the data with the manager and HR BP, including open comments. Run a manager-specific survey (see blueprint c) if you need more detail. Agree on a coaching plan, extra calibration support or, in serious cases, leadership intervention. Re-measure in the next cycle to check whether employee experience improved.
How do we handle very critical or emotional comments?
Do not ignore them. Cluster comments by theme and look for patterns rather than reacting to single messages. If comments point to misconduct, discrimination or legal risk, escalate immediately through your normal HR or compliance channels. For general frustration, share anonymized themes with leaders, admit what is not working and explain planned changes. Research summarized by Harvard Business Review shows that closing the feedback loop strongly boosts trust.
How can we involve managers without making them defensive?
Frame the survey as a tool to help managers run better “Mitarbeitergespräche”, not as a performance rating of them. Share results first in a safe setting with context and benchmarks, then co-create actions. Highlight where employees already rate them well. Offer training, peer learning and coaching. Only in repeated problem cases should results feed into manager evaluations or talent decisions.
How often should we update the question set?
Review the survey annually. Keep the core dimensions (clarity, preparation, conversation quality, fairness, development, rewards, psychological safety) stable so you can track trends. You can swap out 2–3 items based on new priorities or feedback. After big process changes (new rating scale, new tool), add targeted questions for one or two cycles, then remove them once the change is embedded.