If internal moves slow down, managers are often the bottleneck you can’t see in your dashboards. These internal mobility survey questions for managers help you spot where leaders struggle (process, risk, fairness, backfills) so you can fix workflows early and have better conversations with employees before frustration builds.
Internal mobility survey questions for managers (question bank)
Use a 5-point Likert scale (1 = Strongly disagree, 5 = Strongly agree) unless stated otherwise. Keep items in the same order so you can trend results across cycles.
2.1 Closed questions (Likert scale)
- Q1. I understand our internal mobility (interne Mobilität) strategy and why it matters.
- Q2. I know what I’m expected to do when an employee wants to move internally.
- Q3. I know where to find current internal mobility policies, timelines, and templates.
- Q4. Senior leadership supports internal moves, even when it creates short-term team gaps.
- Q5. My goals or incentives do not discourage me from developing talent for other teams.
- Q6. I feel accountable for enabling internal opportunities, not just retaining headcount.
- Q7. The process for posting roles internally is clear and easy to follow.
- Q8. The process for reviewing internal candidates is consistent across departments.
- Q9. Approval steps for internal transfers are clear (who decides what, and when).
- Q10. Internal move timelines are predictable enough to plan delivery and backfills.
- Q11. I can escalate complex mobility cases and get decisions quickly.
- Q12. The internal mobility workflow creates less admin effort than external hiring.
- Q13. I can see which internal roles, gigs, or projects are open and relevant to my team.
- Q14. I have access to useful skills data (not just job titles) for internal candidates.
- Q15. Performance and potential information is available in a way I can use responsibly.
- Q16. I trust the accuracy of employee profiles (skills, interests, readiness) in our systems.
- Q17. Matching or search results help me identify internal talent I would otherwise miss.
- Q18. I have enough context to assess internal candidates fairly (scope, level, expectations).
- Q19. When someone leaves my team internally, I can manage delivery impact realistically.
- Q20. Backfill support (budget, hiring support, interim coverage) is adequate for internal moves.
- Q21. Internal moves are coordinated to avoid sudden productivity drops in critical teams.
- Q22. Transition plans (handover, documentation, overlap) are defined and followed.
- Q23. Internal mobility improves retention in my area more than it increases churn risk.
- Q24. The business accepts that internal mobility is part of building capability, not a failure.
- Q25. Internal candidates are considered fairly, regardless of team, location, or working model.
- Q26. I believe selection decisions are based on role requirements and evidence.
- Q27. I see consistent standards for who gets access to stretch roles, gigs, and promotions.
- Q28. I feel safe challenging a mobility decision if I suspect unfairness or bias.
- Q29. I understand what data is used in mobility decisions and what is intentionally excluded.
- Q30. I trust that the process protects privacy and avoids unnecessary “people labeling.”
- Q31. HR provides clear guidance when a move is sensitive (performance issues, conflicts).
- Q32. HR helps me navigate compensation or level changes in internal moves.
- Q33. I receive practical support to backfill roles after internal transfers.
- Q34. HR supports me in coaching employees toward realistic internal options.
- Q35. I get useful training on internal mobility conversations and decision standards.
- Q36. When I raise mobility risks, HR and leadership respond in a timely way.
- Q37. I feel comfortable discussing internal opportunities with my team members early.
- Q38. I do not fear negative consequences if I support a strong performer moving out.
- Q39. I can have honest conversations about readiness without damaging trust.
- Q40. I can coordinate well with other managers when an internal move affects both teams.
- Q41. I feel psychologically safe (psychologische Sicherheit) raising issues in mobility decisions.
- Q42. Internal mobility conversations in my area are respectful and structured, not political.
- Q43. Internal mobility helps us fill roles faster than relying mainly on external hiring.
- Q44. Internal moves lead to stronger performance outcomes after a reasonable ramp-up period.
- Q45. I would support increasing internal mobility targets in my function.
- Q46. The current approach to internal mobility fits our DACH/EU compliance expectations.
- Q47. I have the tools and data I need to support more internal moves next quarter.
- Q48. Overall, internal mobility is working well in my area.
2.2 Overall / 0–10 rating questions
- R1 (0–10). How confident are you in handling an internal transfer end-to-end (timelines, decision, transition)?
- R2 (0–10). How fair do you believe internal mobility outcomes are in practice (not just in policy)?
- R3 (0–10). How much does internal mobility improve retention and capability in your area?
2.3 Open-ended questions (open text)
- O1. Describe one internal move you supported that worked well. What made it work?
- O2. What makes you hesitate to approve an internal transfer, even when it’s good for the employee?
- O3. Where does the internal mobility process slow down most (step, stakeholder, or tool)?
- O4. What information would help you assess internal candidates faster and more fairly?
- O5. What would “good backfill support” look like in your function?
- O6. Which policy or rule creates the most friction for internal moves today?
- O7. Where do you see unfairness risks (teams, locations, demographics, remote vs. office)?
- O8. What would help you have earlier mobility conversations with employees?
- O9. What training would make you more confident handling internal moves?
- O10. What should HR start doing to make internal mobility easier for managers?
- O11. What should HR stop doing because it creates delays or confusion?
- O12. What should HR continue doing because it supports good internal decisions?
Thresholds & recommended actions
| Question area | Score / threshold | Recommended action | Responsible (Owner) | Target / deadline |
|---|---|---|---|---|
| Awareness & ownership (Q1–Q6) | Area average <3.5 OR R1 <7 | HR Mobility Lead runs 45-minute manager briefing + clarifies responsibilities in 1-page SOP. | HR Mobility Lead + Functional Director | SOP published in ≤14 days; briefing completed in ≤30 days |
| Process & workflows (Q7–Q12) | Area average <3.2 | Run a 2-week workflow audit; remove 1 approval step; define an escalation path. | People Ops + Process Owner | New workflow live in ≤45 days |
| Talent visibility & data (Q13–Q18) | Q16 average <3.0 OR Q14 average <3.2 | Improve profile data: minimum skill fields, quarterly refresh, manager validation checklist. | HRIS/People Analytics Lead | Data standard defined in ≤21 days; first refresh in ≤60 days |
| Impact on team & backfill (Q19–Q24) | Q20 average <3.0 | Create a backfill playbook: interim coverage, budget rules, and approved vendor/ATS steps. | Finance Partner + TA Lead | Playbook agreed in ≤30 days; used in next move |
| Fairness & bias (Q25–Q30, R2) | R2 <7 OR any group gap ≥0.5 points | Run a fairness review: calibrate criteria, document decisions, add a challenge channel. | HRBP + DEI/Compliance | Review completed in ≤30 days; changes in ≤60 days |
| HR & leadership support (Q31–Q36) | Q31 average <3.2 OR Q36 average <3.0 | Set HR service levels (response + decision); publish templates for sensitive moves. | Head of People | SLAs published in ≤14 days; template pack in ≤30 days |
| Communication & psychological safety (Q37–Q42) | Q38 average <3.0 OR Q41 average <3.2 | Manager training: scripts for early conversations + “talent exporter” recognition mechanism. | L&D Lead + Leadership Team | Training delivered in ≤60 days; recognition rule set in ≤45 days |
| Overall value & readiness (Q43–Q48, R3) | Q48 average <3.5 OR R3 <7 | Re-align targets: define mobility goals by function; communicate trade-offs and support. | CHRO + Business Leaders | Targets set in ≤60 days; reviewed quarterly |
Key takeaways
- Measure manager friction, not just employee sentiment, to unlock internal moves.
- Use thresholds to trigger owners, actions, and deadlines automatically.
- Separate workflow problems from fairness problems; they need different fixes.
- Backfill support determines manager buy-in more than messaging does.
- Trend results by function and region, with strict anonymity rules in DACH.
Definition & scope
This survey measures how managers experience internal mobility: clarity of process, access to talent data, business impact, perceived fairness, and support from HR and leadership. It’s designed for people managers who have approved, blocked, or negotiated internal moves in the last 12 months. Results support decisions on workflow changes, manager training, HR service levels, and governance.
When to run internal mobility survey questions for managers
Run this when managers have fresh memories and real stakes. The best moments are right after a talent review, after a mobility campaign, or after a pilot of an internal marketplace. If you already run an employee survey, pair it with the manager version so you can compare perceptions and find mismatches; the employee-focused internal mobility survey is a useful complement.
Don’t over-survey. A yearly deep dive tracks culture change. A short pulse catches workflow regressions after you change tools or policies. For DACH organisations, align timing with Betriebsrat expectations and avoid coinciding with sensitive cycles like compensation changes, where feedback may be interpreted as performance-related.
| Blueprint | Best timing | Recommended items | Who answers | Primary decision supported |
|---|---|---|---|---|
| (a) Post-cycle survey | Within ≤10 days after talent reviews or mobility campaigns | 20–24 items (mix Q7–Q12, Q19–Q24, Q31–Q36, plus 3–4 open) | Managers involved in decisions | Fix workflow bottlenecks and HR support gaps fast |
| (b) Annual manager mobility survey | Same month each year | 18–22 items (balanced across all 8 areas) | All people managers | Track culture, fairness, and capability trends |
| (c) Short pilot pulse | After 6–8 weeks of pilot usage | 10–12 items (selected from Q7–Q12, Q13–Q18, Q47) | Pilot-group managers | Decide go/no-go and what to change before rollout |
| (d) Targeted low-mobility survey | When a function/site has low internal fill rate | 12–15 items (focus Q19–Q24, Q25–Q30, Q37–Q42) | Managers in the hotspot | Address local blockers (backfill, fairness, leadership signals) |
- HR Mobility Lead selects blueprint and final items within ≤5 business days.
- People Analytics sets reporting cuts and anonymity threshold before launch (n ≥7 recommended).
- Functional Directors pre-commit owners for actions within ≤7 days after results.
- HR sends results summary and action triggers to owners within ≤10 days after close.
Turning survey results into workflow fixes (process & tools)
Most manager frustration lives in Q7–Q12: unclear steps, too many approvals, and slow escalations. If Q9 or Q10 drops, managers often “solve” it by blocking moves informally. That’s why internal mobility survey questions for managers should be tied to workflow metrics like decision lead time and backfill lead time, not only sentiment.
Keep the fix process simple: if scores are low, map the current flow, remove one friction point, and set service levels. A talent platform like Sprad Growth can help automate survey sends, reminders and follow-up tasks, but the core win still comes from fewer steps and clearer ownership.
- If Q7–Q12 average <3.2, run a 60-minute workflow mapping session with 5–7 managers.
- Then identify the top 2 delays (approval, data, backfill, or HR response).
- Then remove or timebox 1 step and publish a 1-page “how moves work” SOP.
- Then set SLAs and an escalation path for exceptions.
- Re-measure with a pulse after ≤8 weeks.
- People Ops maps the workflow and proposes 2 changes within ≤14 days.
- HR Mobility Lead publishes SOP and escalation contacts within ≤21 days.
- Functional Directors enforce the new SLAs and remove “side agreements” within ≤30 days.
- People Analytics tracks decision lead time monthly and flags breaches within ≤5 days.
Manager capability: conversations, psychological safety, and incentives
Even with a clean process, internal mobility stalls when managers avoid early conversations. Q37–Q42 tells you if people fear losing talent, struggle with readiness talks, or expect punishment for being a “talent exporter.” When Q38 is low, you’re not facing a communication problem; you’re facing an incentive and culture problem.
Treat this like manager enablement, not compliance. Give managers scripts for career conversations, clarify what “supporting mobility” looks like, and visibly reward the behaviour. If you already run structured 1:1s, embed mobility check-ins into that rhythm; the 1:1 meeting resources can help standardise prompts without turning chats into interrogation.
- If Q37 or Q38 <3.2, run a manager listening session focused on fears and trade-offs.
- Then align leadership: what happens to managers who develop talent for other teams?
- Then train scripts for “interest,” “readiness,” and “timing” conversations.
- Then publish transition expectations (handover, overlap, documentation) to reduce anxiety.
- L&D Lead delivers a 60-minute training with scripts and role-play within ≤60 days.
- Leadership Team defines a “talent exporter” recognition rule within ≤45 days.
- HRBP embeds mobility prompts into quarterly check-ins within ≤30 days.
- Managers document one mobility conversation per direct report every half-year (≤180 days).
Data and talent visibility: skills, potential, and readiness signals
If Q14–Q17 are weak, managers are deciding in the dark. They fall back to personal networks, job titles, and who is visible. That increases bias risk and slows staffing. Fixing this is less about buying another tool and more about agreeing what “good data” means: a few validated skills, recent evidence, and clear role expectations.
Start with a shared skill language and keep it lightweight. The skill management guide is a good reference for building profiles that stay current without turning into spreadsheet theatre. Then connect this to talent reviews and calibration so managers trust what they see; otherwise Q16 stays low because the data feels “HR-owned” and outdated.
- If Q16 <3.0, define a minimum profile standard (3–5 skills + evidence + interests).
- Then run a quarterly refresh: employee updates, manager validates, HR audits samples.
- Then align “readiness” definitions with talent reviews and calibration.
- Re-test Q14–Q17 in a short pulse after ≤12 weeks.
- People Analytics defines profile fields and evidence rules within ≤21 days.
- Managers validate profiles for their teams within ≤45 days.
- HR Mobility Lead links profiles to internal roles/gigs within ≤60 days.
- Functional Leaders run a calibration session using agreed evidence standards within ≤90 days.
Governance in DACH/EU: Betriebsrat, GDPR, and trust
In DACH, manager feedback becomes unusable when people fear it will be tied to performance evaluation or used for “manager ranking.” Name this risk explicitly. Position the survey as process improvement, not an appraisal. Keep results aggregated, set a strict anonymity threshold (n ≥7 or higher where needed), and minimise any free-text that could identify individuals or sensitive cases.
Works council (Betriebsrat) alignment isn’t paperwork; it’s how you protect participation. If you already run broader employee survey (Mitarbeiterbefragung) programs, reuse your governance patterns, reporting thresholds, and retention rules; the employee survey governance checklist is a practical baseline for aligning expectations before you launch manager-specific internal mobility survey questions.
- People Team agrees purpose, data fields, and retention period with Betriebsrat within ≤30 days pre-launch.
- People Analytics enforces anonymity thresholds (n ≥7) and suppresses small groups at reporting time.
- HR routes critical free-text (harassment, discrimination claims) to a defined case process within ≤24 h.
- HR deletes raw exports and limits access rights to named roles within ≤14 days after reporting.
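For teams that script their own reporting, the suppression rule above can be sketched in a few lines. Everything here (group names, scores, the `group_averages` helper) is illustrative and not tied to any specific survey tool:

```python
from collections import defaultdict

# Hypothetical response records: (group_label, score on a 1-5 scale).
responses = [
    ("Sales", 4), ("Sales", 3), ("Sales", 5), ("Sales", 4),
    ("Sales", 2), ("Sales", 4), ("Sales", 5),
    ("Legal", 5), ("Legal", 4),  # only 2 respondents: must be suppressed
]

MIN_GROUP_SIZE = 7  # anonymity threshold agreed with the Betriebsrat

def group_averages(records, min_n=MIN_GROUP_SIZE):
    """Per-group averages; groups below the threshold are suppressed."""
    by_group = defaultdict(list)
    for group, score in records:
        by_group[group].append(score)
    report = {}
    for group, scores in by_group.items():
        if len(scores) < min_n:
            # Suppressed: too few respondents to protect identities.
            report[group] = None
        else:
            report[group] = round(sum(scores) / len(scores), 2)
    return report

print(group_averages(responses))
```

The point of suppressing (rather than merging small groups silently) is that readers see explicitly that a cut exists but cannot be reported, which keeps the anonymity rule visible and auditable.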
Scoring & thresholds
Use a 1–5 agreement scale (1 = Strongly disagree, 5 = Strongly agree). Score each dimension as the mean of its items (e.g., Process & workflows = Q7–Q12). Also track “favourable” rate: % of responses that are 4 or 5. Combine both so you see intensity and distribution.
Use these thresholds to make decisions: Score <3.0 = critical (fix immediately); 3.0–3.9 = needs improvement (plan changes); ≥4.0 = strong (standardise and scale). Tie actions to owners: if Process is critical, it’s a People Ops problem; if Psychological Safety is critical, it’s a leadership and incentives problem; if Fairness gaps appear, it’s a governance and calibration problem.
| Score band | What it usually means | Decision trigger | Typical intervention |
|---|---|---|---|
| <3.0 | Managers experience daily friction or risk; informal workarounds are likely. | Immediate action required | Workflow change, SLA, escalation path, policy clarification within ≤45 days |
| 3.0–3.9 | Process exists but is inconsistent; outcomes depend on who you know. | Prioritise in next quarter | Training, templates, data refresh, targeted pilots within ≤90 days |
| ≥4.0 | Capability and trust are present; focus on scaling and keeping data current. | Codify and replicate | Roll out best practice, peer learning, automation, quarterly monitoring |
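If you compute scores outside your survey tool, the scoring and banding rules above can be sketched like this. The item responses and helper names are made up for illustration:

```python
def dimension_score(item_scores):
    """Mean score across all responses to a dimension's items (1-5 Likert)."""
    flat = [s for item in item_scores for s in item]
    return sum(flat) / len(flat)

def favourable_rate(item_scores):
    """Share of responses rated 4 or 5."""
    flat = [s for item in item_scores for s in item]
    return sum(1 for s in flat if s >= 4) / len(flat)

def band(score):
    """Map a dimension mean to the decision bands in the table above."""
    if score < 3.0:
        return "critical"
    if score < 4.0:
        return "needs improvement"
    return "strong"

# Example: Process & workflows (Q7-Q12), one list of responses per item.
process = [
    [3, 4, 2, 3], [4, 4, 3, 2], [2, 3, 3, 4],
    [3, 2, 4, 3], [4, 3, 3, 3], [2, 3, 4, 4],
]
score = dimension_score(process)
print(band(score), round(favourable_rate(process), 2))
```

Tracking the favourable rate next to the mean matters because a 3.2 average can hide a polarised split (many 5s and many 1s) that the mean alone would mask.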
Follow-up & responsibilities
Speed matters more than perfection. Managers lose trust when surveys disappear for months. Set response times upfront and treat them like operational SLAs. Your follow-up should always be “Owner + action + deadline,” even when the action is simply “diagnose further.”
| Signal | Who owns it | First response SLA | What “response” means |
|---|---|---|---|
| Critical scores (any area average <3.0) | HR Mobility Lead + Functional Director | ≤7 days | Assigned owner, agreed fix approach, timeline communicated to managers |
| Fairness concern (R2 <7 or group gap ≥0.5) | HRBP + Compliance/DEI | ≤10 days | Fairness review plan, data cuts, and decision criteria review scheduled |
| Severe free-text allegation (discrimination/harassment) | ER/Compliance | ≤24 h | Case triage opened, confidentiality rules applied, next steps documented |
| Low backfill support (Q20 <3.0) | Finance Partner + TA Lead | ≤14 days | Backfill rules clarified, interim coverage options agreed, blockers removed |
- People Analytics publishes results dashboards within ≤10 days after survey close.
- HR Mobility Lead runs a 30-minute results readout for leaders within ≤14 days.
- Owners publish a short action plan (max 3 actions) within ≤21 days.
- HR reports completion rate of actions monthly; overdue items get escalated within ≤7 days.
Fairness & bias checks
Fairness is where manager and employee views often diverge. Managers may rate fairness high because they follow policy, while employees see hidden barriers (visibility, networks, “permission culture”). Use group comparisons carefully: you want to find patterns, not single out teams. In DACH, keep group sizes large enough to protect identities and agree the reporting approach with the Betriebsrat.
Start with a few high-signal cuts: function, location, job level, tenure band, and remote vs. office. Then look for gaps ≥0.5 points on Q25–Q30 or R2. Typical patterns: one site shows lower fairness because roles aren’t posted there; one function blocks transfers due to delivery pressure; remote managers report lower comfort challenging decisions (Q28) because escalation feels political.
| Check | How to run it | Threshold | What to do next |
|---|---|---|---|
| Outcome fairness perception | Compare R2 by function and location (n ≥7 per group). | Gap ≥0.5 | HRBP reviews criteria and approvals; leaders clarify standards within ≤30 days. |
| Challenge safety | Compare Q28 and Q41 by level (manager layer). | Any group average <3.2 | L&D runs escalation and “how to challenge” training within ≤60 days. |
| Process transparency | Compare Q9 and Q29 across regions. | Any region average <3.2 | People Ops simplifies steps and publishes decision criteria within ≤45 days. |
| Data trust | Compare Q16 by function (where profiles differ). | Any function average <3.0 | People Analytics runs a targeted data refresh and manager validation within ≤60 days. |
- People Analytics runs group cuts and flags gaps ≥0.5 within ≤10 days post-close.
- HRBP validates interpretation with 2–3 manager interviews within ≤14 days.
- Functional Director agrees one fairness improvement action within ≤30 days.
- HR audits whether the gap narrows in the next annual cycle.
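The gap check can be scripted alongside the anonymity rule. Group names, scores, and the `flag_gap` helper below are hypothetical:

```python
MIN_N = 7          # anonymity threshold: groups below this are excluded
GAP_THRESHOLD = 0.5  # flag when the widest group gap reaches 0.5 points

# Hypothetical R2 (0-10 fairness rating) responses by location.
r2_by_group = {
    "Berlin": [8, 7, 9, 8, 7, 8, 8, 9],
    "Munich": [7, 6, 7, 8, 6, 7, 7],
    "Vienna": [8, 9],  # n < 7: excluded from the comparison
}

def flag_gap(groups, min_n=MIN_N, threshold=GAP_THRESHOLD):
    """Average reportable groups and flag if max-min gap hits the threshold."""
    reportable = {
        g: sum(s) / len(s) for g, s in groups.items() if len(s) >= min_n
    }
    if len(reportable) < 2:
        return reportable, False  # nothing to compare
    gap = max(reportable.values()) - min(reportable.values())
    return reportable, gap >= threshold

averages, flagged = flag_gap(r2_by_group)
print(averages, "gap flagged:", flagged)
```

A flag is a trigger for the HRBP review described above, not a verdict: the group comparison tells you where to look, and the follow-up interviews tell you why the gap exists.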
Examples / use cases
Use case 1: Low process clarity blocks moves. Scores in Q7–Q12 come back low, and comments mention “too many approvals.” Decision: remove one approval gate, timebox the remaining approvals, and publish an escalation path. Action: People Ops rewrites the workflow and HR trains managers on the new SOP. Change you look for: fewer “informal blocks,” faster decisions, and more predictable transitions.
Use case 2: Managers fear being punished for exporting talent. Q38 is low and open text shows managers worry about losing top performers before deadlines. Decision: leadership defines that supporting mobility is expected, then sets an explicit backfill promise for critical moves. Action: functional leaders review workload planning and HR adds scripts for early conversations. Change you look for: earlier mobility discussions and fewer last-minute “surprise” transfers.
Use case 3: Fairness concerns emerge between locations. R2 differs strongly by site, and Q25/Q27 comments mention “roles are decided centrally.” Decision: require roles above a certain level to be posted internally for a minimum window, with documented exceptions. Action: HRBPs run a fairness review and align decision criteria using calibration practices; the talent calibration guide is a useful reference for making standards explicit. Change you look for: more consistent consideration of internal candidates across sites and fewer escalations.
Implementation & updates
Implement in three phases so you learn without creating survey fatigue. Start with a pilot in one function, then expand once you can show fast follow-through. Train leaders on how to read results without blaming individuals. Review the question set once per year so it stays aligned with your mobility model and tools.
If you run an internal talent marketplace, align manager questions with your marketplace workflow and adoption goals. For broader context on what good “marketplace” looks like, the talent marketplace guide is a practical overview; if you’re still comparing systems and governance needs, you can also use an internal talent marketplace software comparison to structure requirements without mixing it into the survey itself.
- Pilot: run blueprint (c) with 1 function for 6–8 weeks; keep to 10–12 items.
- Rollout: expand to 3–5 functions; add fairness and backfill items (Q19–Q30).
- Enablement: train managers and HRBPs on actions, owners, and deadlines.
- Annual review: drop low-signal items, add items for new policies or tools.
- Participation rate (target ≥70% annual; ≥55% pulse) owned by HR Mobility Lead, reviewed within ≤7 days.
- Average score by dimension, owned by People Analytics, reported within ≤10 days post-close.
- Action completion rate (target ≥80% within 90 days), owned by Head of People, reviewed monthly.
- Median decision lead time for internal moves, owned by People Ops, reviewed monthly.
- Internal fill rate for key roles (context KPI), owned by TA/Workforce Planning, reviewed quarterly.
Internal mobility becomes real when managers can support moves without sacrificing delivery or fairness. Internal mobility survey questions for managers give you early warning signals: unclear workflows, weak backfill support, low trust in data, and fear-driven behaviour that blocks talent flow. When you tie results to thresholds, owners, and deadlines, you improve conversation quality and remove hidden friction quickly.
Next, pick one pilot group, load the question set into your survey tool, and pre-assign owners for the top 3 action types (workflow, backfill, fairness). Agree anonymity thresholds with your Betriebsrat and publish a short note on how results will be used. After the first cycle, keep what works, cut noise, and trend the same core items year over year.
FAQ
How often should you run a manager internal mobility survey?
Run an annual survey for trend tracking and culture signals, then add short pulses after major changes (policy, tooling, marketplace rollout). A good cadence is 1 annual deep dive plus 1–2 pulses tied to real events (talent review, campaign, pilot). Avoid quarterly “always-on” surveys unless you have strong follow-through capacity and stable anonymity.
What should you do when scores are very low?
Treat low scores as an operational issue first, not a manager attitude issue. If any dimension average is <3.0, assign an owner within ≤7 days and pick one fix you can ship within ≤45 days (remove a step, define SLAs, publish escalation). Then re-measure with a short pulse after ≤8 weeks to confirm the change helped.
How do you handle critical comments without turning it into performance evaluation?
Route comments to themes, not individuals. Suppress or redact identifying details, and only share aggregated insights. Make it explicit that the survey improves process, not performance ratings. If comments include allegations (discrimination, harassment), route them to your formal case process within ≤24 h. Otherwise, focus on fixable systems: workflow, data, backfill, and decision standards.
How do you involve the Betriebsrat and stay GDPR-aligned?
Involve the Betriebsrat before you launch: clarify purpose, data minimisation, anonymity thresholds (often n ≥7), who sees what, and retention periods. Keep access role-based and delete raw exports after reporting. For a plain-language GDPR overview, the European Commission’s GDPR explanation can help align stakeholders on fundamentals without legal deep-dives.
How do you update the question bank over time without breaking trends?
Keep a stable core of 12–16 items (one or two per dimension) so trends remain comparable. Rotate the remaining items based on your current risks: new workflow, new marketplace feature, backfill policy changes, or fairness concerns. Document every change in a simple version log. If you add new items, run them for at least 2 cycles before deciding to drop them.