These succession planning survey questions help you see what your succession process looks like from a leader’s seat: where readiness criteria are clear, where risk stays hidden, and where “fair on paper” turns into politics in practice. If you run this after talent reviews or a calibration session, you get early warning signals and concrete fixes before the next vacancy forces a rushed decision.
Succession planning survey questions (manager-only question bank)
This survey is for managers and leaders only (people managers, functional leaders, country leaders). It measures process quality—not individual successor names.
2.1 Closed questions (5-point Likert scale)
Answer scale (unless you label otherwise): 1 = Strongly disagree, 2 = Disagree, 3 = Neutral, 4 = Agree, 5 = Strongly agree.
- Clarity & criteria
- Q1. I understand what “ready now” means in our succession process.
- Q2. I understand what “ready in 1–2 years” means in our succession process.
- Q3. Success profiles for critical roles are specific enough to assess readiness.
- Q4. We distinguish performance, potential, and role readiness consistently.
- Q5. Risk-of-loss categories (e.g., low/medium/high) are defined and usable.
- Q6. I can explain to another leader why someone is rated at a given readiness level.
- Process & governance
- Q7. Talent reviews follow a clear agenda and decision flow.
- Q8. The right stakeholders are in the room for succession decisions.
- Q9. We have clear rules for who can propose, challenge, and approve ratings.
- Q10. We review succession coverage often enough for our business pace.
- Q11. Outcomes are documented with rationale, not just a final label.
- Q12. Follow-up actions from the last cycle were completed on time.
- Tools & data
- Q13. The tool or template for succession planning is easy to use.
- Q14. I can access current performance evidence without chasing spreadsheets.
- Q15. Skill and experience data is current enough to support readiness calls.
- Q16. The 9-box (or equivalent) inputs are consistent across teams.
- Q17. Calibration inputs (peer feedback, project evidence) are available when needed.
- Q18. Reporting makes gaps visible (coverage, bench strength, risk-of-loss).
- Fairness & bias
- Q19. I believe readiness ratings are fair across functions and locations.
- Q20. I believe readiness ratings are fair across gender and age groups.
- Q21. People can challenge ratings without negative consequences.
- Q22. “Visibility” (who is known) matters less than evidence (what they did).
- Q23. We spot and correct common biases (recency, halo, similarity) in calibration.
- Q24. The process supports equal access to development opportunities.
- Talent pools & moves
- Q25. Succession plans lead to concrete development plans for successors.
- Q26. We use stretch assignments to test readiness before promotions.
- Q27. Internal moves happen fast enough once a readiness gap is identified.
- Q28. We have a realistic path for “nearly ready” successors.
- Q29. Critical roles have at least 1 viable successor option.
- Q30. We actively manage risk-of-loss for critical roles and successors.
- Communication & psychologische Sicherheit
- Q31. I know what I can and cannot communicate to potential successors.
- Q32. The process avoids harmful labeling (e.g., “low potential”) in daily leadership.
- Q33. Leaders feel safe to raise concerns about succession outcomes.
- Q34. Successors receive enough feedback to understand development expectations.
- Q35. Non-selected employees are handled respectfully and consistently.
- Q36. The process improves career conversations in 1:1s, not just annual reviews.
- Support from HR & leadership
- Q37. HR facilitation during talent reviews keeps discussions evidence-based.
- Q38. HR provides templates and guidance that reduce ambiguity.
- Q39. Leaders receive enough training to assess readiness consistently.
- Q40. Senior leadership reinforces the rules (not exceptions for favorites).
- Q41. I receive timely support when a critical role becomes at risk.
- Q42. The process time investment is reasonable for the value it creates.
- Overall impact & confidence
- Q43. I trust the readiness ratings we assign.
- Q44. I trust the final outcomes (moves, development actions) of the process.
- Q45. The process reduces last-minute emergency hiring for key roles.
- Q46. The process improves retention of high potentials in my area.
- Q47. The process improves internal mobility across the organisation.
- Q48. Overall, our succession planning process works well in practice.
2.2 0–10 rating questions (overall / NPS-style)
- R1 (0–10). How confident are you in our readiness ratings for critical roles?
- R2 (0–10). How fair do you perceive succession outcomes (moves, visibility, access) overall?
- R3 (0–10). How likely are you to recommend our succession process to another leader?
2.3 Open-ended questions (manager-only)
- O1. Where does our current succession process help you most as a leader?
- O2. Where does it create noise, politics, or extra work without better decisions?
- O3. Which readiness definition is most confusing in practice, and why?
- O4. What evidence do you wish you had during calibration but rarely do?
- O5. Which step in the process slows decisions down the most?
- O6. Where do you see unfairness between functions, locations, or demographics?
- O7. When did you last challenge a rating? What made it easy or hard?
- O8. What would improve psychological safety in talent reviews?
- O9. Which critical role worries you most from a coverage perspective (no names needed)?
- O10. What is the biggest blocker to making more internal moves or stretch assignments?
- O11. What should HR start doing to make succession planning easier for leaders?
- O12. If you could change 1 rule in the process for next cycle, what would it be?
2.4 Thresholds and recommended actions
| Question(s) / area | Score / threshold | Recommended action | Responsible (Owner) | Goal / deadline |
|---|---|---|---|---|
| Clarity & criteria (Q1–Q6) | Average <3.0 or ≥25% “Disagree/Strongly disagree” | Rewrite readiness definitions + 1-page examples per level; run 45-min leader briefing. | Succession Program Owner + HRBP | Draft within 14 days; briefing within 30 days |
| Process & governance (Q7–Q12) | Average <3.2 or Q12 <3.0 | Add decision log + action tracker; assign approver per role family; publish cadence. | HR Director + Business Unit Lead | Governance set within 21 days; next cycle uses tracker |
| Tools & data (Q13–Q18) | Q13 <3.0 or Q18 <3.0 | Standardise evidence packet; fix data sources; reduce duplicate spreadsheets. | People Analytics Lead + HRIS Owner | Minimum dataset in 45 days; reporting in 60 days |
| Fairness & bias (Q19–Q24, R2) | R2 <7.0 or any subgroup gap ≥0.5 points | Run bias review in calibration; add facilitator script; audit outcomes by group. | DEI Lead + Calibration Facilitator | Bias checks in next session; audit report within 30 days |
| Talent pools & moves (Q25–Q30) | Q25 <3.2 or Q27 <3.0 | Require 1 development action per successor; launch stretch assignment list. | Functional Leaders + L&D | Actions assigned within 14 days; first moves within 90 days |
| Communication & safety (Q31–Q36) | Average <3.2 or Q33 <3.0 | Create communication guardrails; train leaders on “what to say” scripts. | HRBP + Legal/Compliance | Guardrails in 30 days; training within 60 days |
| HR & leadership support (Q37–Q42) | Average <3.2 | Upgrade facilitation; publish templates; offer 2 office-hour slots per month. | Head of HR + Talent Lead | New support model within 45 days |
| Overall confidence (Q43–Q48, R1/R3) | R1 <7.0 or Q48 <3.5 | Run a retrospective workshop; pick 3 fixes; communicate “what changes next cycle”. | CHRO + Business Unit Lead | Retro within 21 days; changes announced within 30 days |
Key takeaways
- Measure leader experience, not just who is “ready now”.
- Use thresholds to trigger actions within 14–30 days.
- Separate fairness signals from individual performance debates.
- Force follow-through: every successor needs 1 concrete development action.
- Track subgroup gaps to spot hidden bias and inconsistent standards.
Definition & scope
This manager-only survey measures how well your succession planning and 9-box process works in practice: readiness clarity, risk visibility, governance, fairness, tools, communication, and support. It’s designed for leaders who take part in talent reviews and calibration sessions. Use the results to improve criteria, facilitation, development actions, and internal moves, without turning the survey into a rating of individual managers.
When to run these succession planning survey questions (timing & audience)
Run the survey right after a talent review or calibration session, while details are fresh. If you wait 4–6 weeks, leaders answer based on mood, not process steps. Keep it manager-only and clarify that HR will report results in aggregates, not as individual “manager scores”.
If you use structured artefacts like role maps or readiness criteria, align the send to that workflow. Many teams pair the survey with a quick review of their succession planning templates and readiness criteria so feedback connects to real inputs, not opinions.
- HR Ops sends the survey to eligible leaders within 72 h after the talent review; close after 7 days.
- People Analytics checks anonymity threshold (e.g., n≥5 per slice) within 2 days.
- Succession Program Owner shares top 5 insights (not raw comments) within 14 days.
- Business Unit Lead confirms 3 actions with owners and deadlines within 21 days.
How to interpret results: readiness clarity, risk visibility, and governance
Start with clarity and governance before you debate “who is ready”. If Q1–Q6 are low, leaders will assign readiness labels inconsistently, even with a 9-box. If Q7–Q12 are low, you may have good discussions but weak decisions and follow-through.
Use simple cut lines so debates end quickly. For example: an average <3.0 in any dimension is a red flag; 3.0–3.9 means “needs tightening”; ≥4.0 means “works well”. For risk visibility, treat Q18 <3.0 as urgent because leaders cannot see coverage gaps. The sketch after the list below shows one way to automate this routing.
- If Q1–Q6 average <3.0, then pause readiness scoring and rewrite definitions within 14 days.
- If Q12 <3.0, then enforce an action tracker and review completion within 30 days.
- If Q18 <3.0, then define a minimum dataset (roles, successors, readiness, risk-of-loss) within 45 days.
- If R1 <7.0, then add evidence standards and calibrator scripts before the next cycle.
How to use succession planning survey questions to improve the 9-box and calibration
The fastest wins usually sit in calibration hygiene: evidence quality, speaking order, and bias guardrails. If Q16–Q17 are low, leaders probably use different inputs, so ratings drift by team. If Q21 or Q33 are low, people do not challenge ratings, and bias stays uncorrected.
Make calibration more repeatable with a lightweight playbook. A structured approach like the one in the talent calibration guide helps you standardise pre-work, timeboxes, and decision logs so meetings stop being “who talks loudest”.
- Calibration Facilitator enforces a fixed speaking order and evidence-first discussion in the next session.
- HRBP introduces a 1-page “evidence packet” rule (3 bullets max) within 30 days.
- DEI Lead adds a 10-minute bias checkpoint when Q19–Q24 average <3.5, starting next cycle.
- People Analytics audits rating variance by team; flag gaps ≥0.5 points within 21 days.
Turning results into moves: development plans, stretch assignments, internal mobility
Succession planning fails when it produces lists, not movement. If Q25–Q30 are low, leaders may agree on successors but do not create the time, roles, or assignments that build readiness. Treat Q27 <3.0 as a process bottleneck: internal moves are too slow or blocked.
Connect succession outputs to development tracking you already run. Teams that link actions to performance and growth routines—like performance management check-ins—see faster follow-through because actions live where managers work.
- Functional Leaders assign 1 stretch assignment per “ready in 1–2 years” successor within 30 days.
- L&D builds 3 role-specific development pathways for critical roles within 60 days.
- HRBP schedules a 15-minute monthly review of open development actions; start within 30 days.
- Business Unit Lead removes 1 systemic blocker to internal moves (policy, headcount, approvals) within 90 days.
| Signal from the survey | What it often means | Fix | Owner | Deadline |
|---|---|---|---|---|
| Q25 low (plans exist, but weak actions) | Development plans are generic or not funded with time | Require 1 “on-the-job” action + 1 mentor per successor | L&D Lead | Within 45 days |
| Q27 low (moves are slow) | Approval chains or headcount rules block mobility | Create a fast-track mobility lane for critical roles | COO + HR Director | Within 90 days |
| Q29 low (no successor coverage) | Critical roles are unclear or too narrow | Reconfirm critical role list; add “near-critical” pipeline roles | Succession Program Owner | Within 30 days |
DACH/GDPR + Betriebsrat basics (not legal advice)
In DACH contexts, trust often depends on involving the Betriebsrat (works council) early and on clear data protection governance. Keep this survey focused on process experience, minimise free text that could identify individuals, and report results only in aggregates. Set a retention window (for example, 12 months) and document who can access raw comments.
If you support the workflow with a platform, keep it practical: a talent platform like Sprad Growth can help automate survey sends, reminders, and follow-up tasks while maintaining role-based access and audit trails.
- HR + DPO define purpose, data fields, and retention (e.g., 12 months) before launch; within 14 days.
- HR aligns reporting cuts with Betriebsrat (site/function/company) and n≥5 threshold; within 21 days.
- People Analytics removes or masks comments that contain names before sharing summaries; within 7 days post-close (a simple masking sketch follows this list).
- CHRO confirms the survey is not used for individual manager performance evaluation; communicate before sending.
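For the masking step, a minimal sketch could match comments against an exported roster of names; the roster entries and comment below are invented. Exact-match patterns miss nicknames, initials, and role references, so keep a human review pass in the loop.

```python
import re

# Minimal sketch: mask known names in free-text comments before sharing
# summaries. Assumes you can export a roster of employee names; the names
# and comment here are invented. Manual review is still required, since
# regex matching misses nicknames, initials, and indirect references.
roster = ["Anna Example", "Max Mustermann"]  # hypothetical roster entries
pattern = re.compile("|".join(re.escape(n) for n in roster), re.IGNORECASE)

comment = "Anna Example was overlooked again despite strong delivery."
masked = pattern.sub("[name removed]", comment)
print(masked)  # "[name removed] was overlooked again despite strong delivery."
```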
Survey blueprints (pick the right length)
Use the full version after major talent reviews, then pulse the process in smaller cycles. If you’re redesigning succession planning, run a baseline first so you can prove improvement later.
| Blueprint | When to use | Items (recommended) | Include | Owner + timing |
|---|---|---|---|---|
| A) Full post-talent-review manager survey | After annual/biannual talent reviews | 22–26 Likert + 2 ratings + 4 open | Q1–Q6, Q7–Q12, Q16–Q18, Q19–Q23, Q25–Q27, Q31–Q33, Q37–Q38, Q43–Q48; R1–R2; O1, O2, O10, O11 | People Analytics; send within 72 h, close after 7 days |
| B) Light pulse after each cycle | After each quarterly or mid-year checkpoint | 12–15 Likert + 1 rating + 2 open | Q3, Q6, Q8, Q11, Q12, Q16, Q18, Q21, Q25, Q27, Q33, Q48; R1; O2, O12 | Succession Program Owner; send within 5 days |
| C) Targeted survey (critical functions/countries) | When you see attrition or coverage risk hotspots | 12–15 Likert + 2 ratings + 3 open | Q1–Q2, Q5, Q10, Q18, Q19–Q22, Q29–Q30, Q41, Q45; R1–R2; O4, O6, O9 | HR Director (Region/Function); run within 30 days |
| D) One-time baseline before redesign | Before new success profiles, tools, or governance | 18–22 Likert + 3 ratings + 6 open | Q1–Q18 (select), Q19–Q24 (select), Q43–Q48 (select); R1–R3; O1–O6 | CHRO + Works Council alignment; run 6–8 weeks pre-redesign |
Scoring & thresholds
Use the 1–5 scale for Likert items (1 = Strongly disagree, 5 = Strongly agree). Calculate averages per dimension: Clarity (Q1–Q6), Governance (Q7–Q12), Tools/Data (Q13–Q18), Fairness (Q19–Q24), Moves (Q25–Q30), Communication/Safety (Q31–Q36), Support (Q37–Q42), Impact (Q43–Q48).
Thresholds that work in practice: average <3.0 = critical; 3.0–3.9 = needs improvement; ≥4.0 = strong. Convert scores into decisions by tying each critical dimension to a fixed intervention (rewrite criteria, facilitation training, evidence packets, mobility actions) with an owner and a deadline.
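A minimal scoring sketch, assuming one response row per leader with columns Q1–Q48; the demo data is invented and the column naming is an assumption, not a fixed export format.

```python
import pandas as pd

# Minimal sketch: average Likert responses (1-5) per dimension and classify
# with the thresholds above. Assumes one row per respondent with columns
# Q1..Q48; the demo responses below are invented.
dimensions = {
    "Clarity": [f"Q{i}" for i in range(1, 7)],
    "Governance": [f"Q{i}" for i in range(7, 13)],
    "Tools/Data": [f"Q{i}" for i in range(13, 19)],
    "Fairness": [f"Q{i}" for i in range(19, 25)],
    "Moves": [f"Q{i}" for i in range(25, 31)],
    "Communication/Safety": [f"Q{i}" for i in range(31, 37)],
    "Support": [f"Q{i}" for i in range(37, 43)],
    "Impact": [f"Q{i}" for i in range(43, 49)],
}

def classify(avg: float) -> str:
    if avg < 3.0:
        return "critical"
    return "needs improvement" if avg < 4.0 else "strong"

responses = pd.DataFrame(  # demo data: 3 respondents
    [[3, 2, 4, 3, 2, 3] * 8, [4, 4, 5, 4, 4, 4] * 8, [2, 3, 3, 2, 3, 2] * 8],
    columns=[f"Q{i}" for i in range(1, 49)],
)

for name, items in dimensions.items():
    avg = responses[items].to_numpy().mean()
    print(f"{name}: {avg:.2f} -> {classify(avg)}")
```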
Follow-up & responsibilities
Route signals so leaders don’t feel “surveyed and forgotten”. HR owns process fixes; business leaders own moves and development capacity. Keep reaction times short so momentum holds: ≤24 h for severe comments about discrimination or psychological safety, ≤7 days for a first findings summary, and ≤21 days for an approved action plan.
- People Analytics publishes an aggregated dashboard within 14 days of close.
- Succession Program Owner drafts an action plan with owners within 21 days of close.
- Business Unit Lead approves top 3 actions and removes blockers within 30 days of close.
- HRBP checks action completion monthly; escalate overdue actions after 45 days.
Fairness & bias checks
Break results down by relevant groups where anonymity holds: location, function, leadership level, remote vs. office. Look for gaps ≥0.5 points in Q19–Q24 or R2, then ask “which step causes this?” rather than blaming individuals. Use open comments to identify patterns, but never to hunt for a person. A scripted version of the gap check follows the list below.
Typical patterns and responses: (1) One function reports lower fairness (Q19) → calibrate cross-function evidence standards. (2) One location reports lower safety (Q33) → strengthen facilitation and challenge norms in that site. (3) Leaders report low comfort challenging ratings (Q21) → add scripted challenge moments and neutral facilitation.
- People Analytics runs subgroup comparisons within 14 days; suppress cuts with n<5.
- Calibration Facilitator adds a “challenge round” when Q21 <3.5; start next session.
- DEI Lead reviews outcomes (moves, pools) for disparate impact within 30 days.
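A minimal sketch of the subgroup check, applying the n≥5 suppression and the ≥0.5-point gap rule. Location names, scores, and column names are invented; `fairness_avg` is assumed to be each respondent’s Q19–Q24 average, computed as in the scoring sketch above.

```python
import pandas as pd

# Minimal sketch: subgroup comparison for the fairness dimension, with
# n >= 5 suppression and the >= 0.5-point gap rule from this section.
# All data below is invented for illustration.
df = pd.DataFrame({
    "location": ["Berlin"] * 6 + ["Vienna"] * 6 + ["Zurich"] * 3,
    "fairness_avg": [3.8, 3.6, 4.0, 3.7, 3.9, 3.5,
                     2.9, 3.1, 3.0, 2.8, 3.2, 3.0,
                     3.6, 3.4, 3.8],
})

MIN_N, GAP_THRESHOLD = 5, 0.5
stats = df.groupby("location")["fairness_avg"].agg(["mean", "count"])
suppressed = stats[stats["count"] < MIN_N]   # too small to report safely
visible = stats[stats["count"] >= MIN_N]
print("Suppressed (n < 5):", list(suppressed.index))

gap = visible["mean"].max() - visible["mean"].min()
if gap >= GAP_THRESHOLD:
    lowest = visible["mean"].idxmin()
    print(f"Gap {gap:.2f} >= {GAP_THRESHOLD}: review the process step behind {lowest}")
```

Note the check flags a location, not a person; the follow-up question stays “which step causes this?”.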
Examples / use cases
Use case 1: Low readiness clarity
After the annual talent review, Clarity (Q1–Q6) averages 2.8 and R1 is 6.2. HR pauses expanding the successor list and rewrites readiness definitions with examples per level. In the next cycle, leaders use the same anchors and confidence rises because debates shift from labels to evidence.
Use case 2: Fairness concerns across locations
Fairness (Q19–Q24) averages 3.1, but one country is 0.7 points lower on Q21 and Q33. The decision: introduce a neutral facilitator and standard speaking order for that location’s calibration. Leaders report higher willingness to challenge ratings, and open comments show less “politics talk” in follow-up surveys.
Use case 3: Plans without moves
Moves (Q25–Q30) averages 2.9, with Q27 at 2.6. The business decides to create a fast-track internal mobility lane for critical roles and requires 1 stretch assignment per successor. Within 90 days, leaders report faster internal moves and clearer development ownership.
Implementation & updates
Roll this out in steps so you can learn fast without overwhelming leaders. Start with a pilot in 1 business unit, then scale once your thresholds, reporting cuts, and follow-up rhythm work. Train facilitators and leaders on how results will be used: to improve governance and tools, not to judge individuals.
- Pilot: People Analytics runs Blueprint B in 1 unit within 30 days.
- Rollout: Succession Program Owner scales to all units after 1 cycle; within 6 months.
- Training: HR delivers a 60-minute calibration refresher for leaders; within 60 days.
- Review: HR updates 10–20% of items annually based on comments and score stability.
Track a small KPI set so updates stay evidence-based: participation rate, average dimension scores, subgroup gaps, action completion rate, internal fill rate for critical roles, and time-to-move for identified successors.
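Two of those KPIs lend themselves to a quick scripted check. A minimal sketch, assuming a hypothetical log of critical-role vacancies; the column names are an assumption, not a real schema.

```python
import pandas as pd

# Minimal sketch: internal fill rate and time-to-move for critical roles,
# computed from a hypothetical vacancy log. Column names and values are
# invented for illustration.
fills = pd.DataFrame({
    "role": ["Head of Sales", "Plant Manager", "Finance Lead", "IT Lead"],
    "filled_internally": [True, False, True, True],
    "days_to_move": [45, None, 80, 60],  # None = external hire, no internal move
})

internal_fill_rate = fills["filled_internally"].mean()    # share of roles filled internally
avg_time_to_move = fills["days_to_move"].dropna().mean()  # internal moves only
print(f"Internal fill rate: {internal_fill_rate:.0%}")
print(f"Average time-to-move: {avg_time_to_move:.0f} days")
```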
Conclusion
Succession planning often looks tidy in a slide deck, but leaders experience the real process: unclear readiness definitions, hidden risk, and uneven standards across teams. These succession planning survey questions give you a simple way to measure that reality and catch issues early, before the next vacancy forces a rushed promotion or external hire.
If you want to start tomorrow, pick 1 pilot area, load Blueprint B into your survey tool, and align owners for follow-up before you hit send. After the pilot, keep the same thresholds, publish a short aggregated summary within 14 days, and lock 3 improvements for the next talent review. You’ll get better conversations, clearer development priorities, and fewer surprises in critical roles.
FAQ
How often should we run a manager succession survey?
Run the full survey after each major talent review (often annual or biannual). Use a short pulse after interim checkpoints, especially if your org changes quickly or you had leadership churn. Keep the cadence consistent so leaders can compare trends across cycles. If you change success profiles or tools, run a baseline first, then repeat after the next cycle to confirm improvement.
What should we do if scores are very low (e.g., average <3.0)?
Treat it as a process failure signal, not as “leaders resisting HR”. Pick the lowest dimension and fix the system step causing it. Example: low clarity (Q1–Q6) means you rewrite readiness anchors; low follow-through (Q12) means you add an action tracker and enforce deadlines. Publish what will change within 30 days, then pulse-check progress.
How do we handle critical open-text comments?
First, route safety and discrimination-related comments within ≤24 h to the right channel (HR, compliance, employee relations) without starting a witch hunt. Second, anonymise summaries before sharing broadly. Third, translate comments into process changes: unclear rules, weak facilitation, or missing data. Close the loop with leaders by stating what you will change next cycle.
How do we involve the Betriebsrat and still get honest feedback?
Involve the Betriebsrat early with a clear purpose statement: you are measuring process experience, not evaluating individual managers. Define anonymity thresholds (for example, n≥5 per slice), aggregation levels, access rights, and retention (for example, 12 months). Document the approach and align it with GDPR principles, especially data minimisation and purpose limitation.
How do we keep the question bank current over time?
Review items once per year, and only change what you must. Keep core trend items stable (overall confidence, fairness, follow-through) so you can compare cycles. Replace questions that show little variance or that leaders consistently misunderstand. Use open-ended responses to spot missing topics, then pilot new items in one unit before adding them company-wide. Keep a version log so changes remain transparent.