Psychological Safety Survey Questions Template: Trust, Voice, and Risk-Taking at Work

By Jürgen Ulbrich

Psychological safety describes whether people feel safe to speak up, admit mistakes and take risks at work. This survey template gives you ready-to-use psychological safety survey questions so you can spot trust issues early, improve error culture and make leadership conversations more honest. You can run it with whole departments or as a quick pulse in single teams.

Psychological safety is about the work context (“psychologische Sicherheit”, “Fehlerkultur”), not private mental health. You measure whether collaboration, leadership and decisions feel safe enough for people to use their voice. The results help you prioritise actions alongside your broader employee engagement and retention work.

Survey questions

Use a 5-point Likert scale for all closed items unless stated otherwise (1 = Strongly disagree, 5 = Strongly agree). Frequency items use 1 = Never, 5 = Very often. Tags like [Annual], [Pulse] or [Annual & pulse] show where each item fits best.

2.1 Closed questions (Likert scale)

  • Q1. In my team, I can raise concerns without worrying about negative consequences. [Annual & pulse]
  • Q2. I feel comfortable admitting when I made a mistake. [Annual & pulse]
  • Q3. I can say “I don’t know” without feeling judged. [Pulse]
  • Q4. People at my level are listened to when we challenge a decision. [Annual]
  • Q5. When I speak up, I feel my perspective is taken seriously. [Annual & pulse]
  • Q6. How often do you hold back ideas because you fear negative reactions? (reverse-scored) [Pulse]
  • Q7. I know clear channels for raising concerns (e.g. escalation paths, whistleblowing). [Annual]
  • Q8. In meetings, people are given time to finish their thoughts without being interrupted. [Pulse]
  • Q9. Different opinions (including minority views) are welcomed in my team. [Annual]
  • Q10. I feel respected regardless of my background, role, or tenure. [Annual]
  • Q11. Jokes or comments that target certain groups are called out and stopped. [Annual]
  • Q12. Remote or hybrid colleagues are included as fully as on-site colleagues. [Annual]
  • Q13. How often do you see the same few voices dominating discussions? (reverse-scored) [Pulse]
  • Q14. My team actively seeks input from quieter or underrepresented colleagues. [Annual & pulse]
  • Q15. When mistakes happen, we focus on learning, not blaming individuals. [Annual & pulse]
  • Q16. We regularly review incidents or errors to understand root causes. [Annual]
  • Q17. Lessons learned are documented and shared so others can avoid similar issues. [Annual]
  • Q18. I feel safe flagging risks or near-misses before they cause real damage. [Annual & pulse]
  • Q19. How often are people criticised in public for honest mistakes? (reverse-scored) [Pulse]
  • Q20. My manager uses errors as teachable moments, not reasons to shame people. [Annual]
  • Q21. I can give constructive feedback to colleagues without harming our relationship. [Annual & pulse]
  • Q22. My manager invites honest feedback on their own behaviour. [Annual]
  • Q23. Difficult topics (e.g. tension, conflict) are addressed early in my team. [Annual & pulse]
  • Q24. When there is conflict, we have clear and fair ways to resolve it. [Annual]
  • Q25. How often do you avoid feedback conversations because they feel unsafe? (reverse-scored) [Pulse]
  • Q26. Feedback here focuses on behaviour and impact, not on attacking people. [Annual]
  • Q27. My manager reacts calmly when hearing bad news or criticism. [Annual & pulse]
  • Q28. Leaders in my area admit their own mistakes openly. [Annual]
  • Q29. Leaders ask for input from all levels before final decisions. [Annual]
  • Q30. I trust leaders to protect people who speak up, even if views are unpopular. [Annual]
  • Q31. How often do you see people adapt what they say to please senior leaders? (reverse-scored) [Pulse]
  • Q32. Power distance (hierarchy) does not stop honest discussion in my team. [Annual]
  • Q33. I can ask colleagues for help without feeling I’m a burden. [Annual & pulse]
  • Q34. When someone struggles, the team steps in to support them. [Annual]
  • Q35. My teammates back me up in front of other teams or customers. [Annual]
  • Q36. How often do you feel isolated with your workload or challenges? (reverse-scored) [Pulse]
  • Q37. People keep their promises and follow through on commitments. [Annual]
  • Q38. I trust my colleagues to be honest about risks and mistakes. [Annual & pulse]
  • Q39. I feel safe trying new ideas, even if they might not work. [Annual & pulse]
  • Q40. We can challenge unrealistic deadlines or demands without fearing pushback. [Annual]
  • Q41. I feel comfortable challenging decisions that may be wrong or risky. [Annual]
  • Q42. Experiments and pilots are encouraged, even when outcomes are uncertain. [Annual]
  • Q43. How often do you stay silent when you disagree with a decision? (reverse-scored) [Pulse]
  • Q44. Overall, I feel psychologically safe (“psychologisch sicher”) in this team. [Annual & pulse]

2.2 Overall psychological safety “NPS-style” question

  • Q45. How likely are you to recommend this team as a psychologically safe place to work to a colleague? (0 = Not at all likely, 10 = Extremely likely) [Annual & pulse]

2.3 Open-ended questions

  • O1. In which situations do you hesitate most to speak up in this team?
  • O2. What would make it easier for you to raise concerns or share bad news?
  • O3. What is one concrete change that would improve our “Fehlerkultur” (how we handle mistakes)?
  • O4. What is one practice we should definitely continue because it supports psychological safety?

Decision & action table for psychological safety survey questions

Use this table to turn scores on your psychological safety survey questions into clear actions with owners and deadlines.

Dimension / question range | Trigger threshold | Required action | Owner | Timeline
Speaking up & voice (Q1–Q7) | Avg score <3.0 or >30% negative on any item | Run a team workshop to map “speak-up blockers”; manager agrees on 2 new rituals (e.g. rotating chair, question round) | Team lead with HR support | Workshop within 14 days; rituals live in 30 days
Respect & inclusion (Q8–Q14) | Avg <3.0 or gap ≥0.5 vs company average | Review meeting norms; provide inclusive meeting training; set rules on interruptions and remote participation | Department head + DEI/HR | Plan in 21 days; first training in 45 days
Mistakes & learning (Q15–Q20) | Avg <3.0 or >25% negative on Q15 or Q19 | Introduce “blameless postmortems” and a monthly learning review; have one leader share a mistake story publicly | Area leader + HRBP | First session within 30 days
Feedback & conflict (Q21–Q26) | Avg <3.2 for the set or Q25 high (>3.5) | Train managers on feedback scripts; every team sets clear feedback norms and an escalation path | People team + line managers | Training within 30 days; norms agreed in 45 days
Leadership & power distance (Q27–Q32) | Avg <3.0 or ≥40% negative on Q27 or Q31 | Individual coaching for managers; run skip-level listening sessions; clarify protection for dissenting voices | HRBP + senior leader | Coaching plan in 21 days; sessions within 60 days
Team trust & support (Q33–Q38) | Avg <3.0 or Q36 >3.0 | Adjust workload; introduce peer-support pairs; review cross-team behaviour in incident reviews | Team lead + ops manager | Quick fixes in 14 days; structural changes in 60 days
Risk-taking & overall safety (Q39–Q44, Q45) | Avg <3.0 or Q45 ≤6 (0–10) | Align expectations: define “safe-to-fail” experiments; review how failed experiments are handled in performance talks | Head of function + HR | Guidelines published in 30 days; review impact after 90 days
Critical comments in O1–O4 | Any mention of harassment, discrimination or health/safety | Trigger a confidential review; follow the internal investigation protocol; inform the works council if required | HR + Legal | Initial assessment ≤24 h; investigation steps agreed in 7 days

Key takeaways

  • Use scores by dimension (not single items) to spot weak spots.
  • Agree concrete rituals per team, not just generic “improvements”.
  • Owners and deadlines turn feedback into measurable change.
  • Share positive outliers to spread strong safety practices.
  • Repeat pulses to see if actions raise psychological safety scores.

Definition & scope

This survey measures “psychological safety” as Amy Edmondson defines it: a shared belief that the team is safe for interpersonal risk-taking. It targets all employees in knowledge-work, hybrid or remote setups and focuses strictly on work behaviour, not clinical diagnoses. Results guide leadership development, team coaching, changes to “Fehlerkultur” and follow-up surveys or 360° feedback.

Survey blueprints built from the psychological safety question bank

You rarely need all 45 closed psychological safety survey questions at once. These blueprints give you shorter, targeted surveys for different situations.

Blueprint | Purpose | Length | Question ranges | Cadence
A. Team Psychological Safety Pulse | Quick check of climate in one team or squad | 13 closed + 2 open | Q1–Q6, Q8, Q15, Q21, Q33, Q39, Q44, Q45, O1–O2 | Quarterly or after major incidents
B. Leadership & Psychological Safety Survey | Deep dive on manager behaviour and power distance | 16 closed + 3 open | Q1, Q4–Q5, Q22–Q24, Q27–Q32, Q40–Q41, Q44–Q45, O2–O4 | Annually, or as part of manager 360°
C. Post‑Incident / Post‑Change Learning Pulse | After outages, reorganisations or big strategy shifts | 12 closed + 2 open | Q2, Q6, Q15–Q20, Q27, Q39, Q42, Q45, O1, O3 | Within 2–3 weeks after the event
D. Cross‑Functional Project Team Pulse | Short survey for temporary project or task forces | 12 closed + 2 open | Q1, Q3, Q8–Q10, Q21, Q33–Q35, Q39, Q44–Q45, O2, O4 | Mid‑project and at project end

For company-wide culture surveys, combine blueprint B with a selection from the other dimensions and run it alongside a broader engagement or experience survey. You can reuse the scale and metadata from your existing employee engagement survey questions to keep reporting consistent.

Scoring & thresholds

All closed psychological safety survey questions use the same 5-point scale, which keeps reporting simple and avoids confusion for employees; only the NPS-style item Q45 uses a 0–10 scale.

  • Scale: 1 = Strongly disagree, 2 = Disagree, 3 = Neither, 4 = Agree, 5 = Strongly agree.
  • Frequency items: 1 = Never, 2 = Rarely, 3 = Sometimes, 4 = Often, 5 = Very often (reverse-score “unsafe” behaviours; see the sketch after this list).
  • Low scores: Avg <3.0 on any dimension = critical. Trigger immediate follow-up.
  • Medium scores: 3.0–3.9 = improvement zone. Plan concrete actions over the next 3–6 months.
  • High scores: ≥4.0 = strength. Celebrate and share practices with other teams.
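
If you analyse results in a spreadsheet export or a small script rather than inside a survey tool, these bands are easy to automate. Below is a minimal Python sketch, assuming each response arrives as a {question_id: score} dict; the helper names and data shape are illustrative, not part of the template:

    # Reverse-scored frequency items in this question bank
    REVERSE_ITEMS = {"Q6", "Q13", "Q19", "Q25", "Q31", "Q36", "Q43"}

    def normalise(item: str, score: int) -> int:
        """Flip reverse-scored items so that 5 always means 'safer'."""
        return 6 - score if item in REVERSE_ITEMS else score

    def dimension_average(responses: list[dict], items: list[str]) -> float:
        """Mean normalised score for one dimension across all respondents."""
        scores = [normalise(q, r[q]) for r in responses for q in items if q in r]
        return sum(scores) / len(scores)

    def band(avg: float) -> str:
        """Map a dimension average onto the thresholds listed above."""
        if avg < 3.0:
            return "critical: trigger immediate follow-up"
        if avg < 4.0:
            return "improvement zone: plan actions over 3-6 months"
        return "strength: celebrate and share practices"

For example, band(dimension_average(responses, [f"Q{i}" for i in range(1, 8)])) scores the “speaking up & voice” dimension (Q1–Q7).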

For Q45 (0–10 scale), treat 0–6 as detractors, 7–8 as passives, 9–10 as promoters. A team-level average below 7.0 signals that many people would not actively recommend the team as safe, so the manager and HR should review the detailed scores and comments.
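
In code, that classification takes only a few lines (a sketch; the function names are illustrative):

    def classify_q45(score: int) -> str:
        """NPS-style buckets for the 0-10 recommendation item Q45."""
        if score <= 6:
            return "detractor"
        return "passive" if score <= 8 else "promoter"

    def q45_needs_review(scores: list[int]) -> bool:
        """Flag the team when the average recommendation is below 7.0."""
        return sum(scores) / len(scores) < 7.0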

Turn scoring into decisions with simple if–then rules: “If avg speaking-up score <3.0, then run a facilitated workshop and coaching”; “If leadership items are ≥4.2 but inclusion is <3.2, then prioritise DEI and meeting norms rather than generic manager training.” Keep thresholds stable across cycles so trends are meaningful.
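
Encoding those if–then rules as data rather than prose keeps them auditable and identical across cycles. A sketch under the same assumptions as above (the thresholds come from this article; the dimension keys and rule structure are illustrative choices):

    # Each rule pairs a condition over dimension averages with an action
    RULES = [
        (lambda d: d["speaking_up"] < 3.0,
         "Run a facilitated workshop and coaching"),
        (lambda d: d["leadership"] >= 4.2 and d["inclusion"] < 3.2,
         "Prioritise DEI and meeting norms over generic manager training"),
    ]

    def triggered_actions(dim_averages: dict) -> list[str]:
        """Return every recommended action whose condition fires."""
        return [action for cond, action in RULES if cond(dim_averages)]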

Follow-up & responsibilities

No psychological safety survey questions will help if nobody owns the follow-up. Decide roles and timelines before you launch.

  • HR / People team: designs the survey, guards anonymity rules, segments results, and tracks action plans across teams (report within 7 days after closing).
  • Direct managers: debrief results with their team, agree 1–3 changes, and document owners and dates (first meeting within 14 days).
  • Senior leaders: review hotspots and positive outliers, remove structural blockers (e.g. workload, processes) and allocate budget or time (quarterly review).
  • Works council (“Betriebsrat”) and Legal: review survey concept, legal basis and data retention rules before launch; handle critical cases from comments as needed.
  • Employees: give honest input, help design new rituals and hold managers accountable for agreed changes.

Set clear service levels. Example: serious allegations in comments are reviewed by HR/Legal within ≤24 h; any team with a dimension avg <3.0 gets support to create a written action plan within 30 days. You can manage these tasks in a simple tracker or through a talent platform like Sprad Growth that automates reminders and ownership.

Fairness & bias checks

Psychological safety survey questions often surface inequalities between groups. You need a fair way to look at those differences without exposing individuals.

  • Set anonymity thresholds: never show breakdowns for groups with <5 responses (or <7 for very sensitive topics). Combine small groups where needed.
  • Segment results by team, location, level, tenure, contract type and mode (remote, hybrid, office). Flag any subgroup scoring ≥0.5 points below the overall average on a dimension (see the sketch after this list).
  • Look for patterns: e.g. juniors score 0.8 lower on “speaking up” than seniors, or women rate leadership safety 0.6 lower than men. Treat those as systemic, not individual, issues.
  • React fairly: run listening sessions with affected groups, ensure impacted managers receive coaching, and update policies (e.g. harassment procedures, whistleblowing).
  • Document bias checks: keep a short note of which cuts you ran and what you concluded. This protects anonymity and shows seriousness towards the works council.
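
If you script this segmentation step, the suppression and gap rules above fit in one small helper. A sketch, assuming you already have per-group averages and group sizes for a dimension (names and data shape are illustrative):

    MIN_GROUP = 5   # never report breakdowns for groups below this size
    GAP_FLAG = 0.5  # flag subgroups this far below the overall average

    def report_segments(averages: dict, sizes: dict, overall: float) -> dict:
        """Suppress small groups; flag large negative gaps on a dimension."""
        report = {}
        for group, avg in averages.items():
            if sizes[group] < MIN_GROUP:
                report[group] = "suppressed (fewer than 5 responses)"
            elif overall - avg >= GAP_FLAG:
                report[group] = f"flagged: {avg:.1f} vs {overall:.1f} overall"
            else:
                report[group] = f"{avg:.1f}"
        return report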

Never use psychological safety scores to “hunt” individual managers or teams. Focus on patterns, not names. Pair this survey with tools that reduce bias in performance decisions, such as behaviourally anchored rubrics or guidance from resources on performance review biases.

Examples / use cases

Use case 1: Low speaking-up scores in a product squad

A digital product squad in a hybrid setting scores 2.6 on Q1–Q6, with many comments like “decisions are made before the meeting.” Leadership scores are fine. HR and the engineering manager run a retro focused on decision-making, add a rotating facilitator role, and require written rationales for major decisions.

Three months later, a pulse shows speaking-up scores up to 3.7. Incident reports rise slightly, but outages drop because risks get raised earlier.

Use case 2: Remote workers feel excluded

An international team discovers that remote employees score 0.7 lower on Q8–Q14 and Q33–Q36 than office colleagues. Comments mention cameras off, side chats in the room, and decisions made “after the call.” The area lead introduces clearer meeting rules: everyone on separate laptops, round‑robin check‑ins, and written summaries in the project channel.

Next quarter, the gap shrinks to 0.2 points. Remote staff report feeling more involved and start volunteering ideas for process improvements.

Use case 3: Punitive error culture after a failed rollout

Following a failed release, a post‑incident pulse shows Q15 and Q19 at 2.3, with comments describing public blame and panic. The COO apologises openly, introduces “blameless postmortems” and forbids naming individuals in incident decks. Managers get a short training on how to respond to errors.

Within two cycles, error-reporting volume doubles, while severe incidents fall. Psychological safety scores in the “mistakes & learning” dimension rise above 3.8.

Implementation & updates

Roll out your psychological safety survey questions in small, safe steps. In DACH especially, coordinate with the works council and data protection early.

  • Pilot. Start with 1–2 volunteer teams. Test questions, timing, anonymity rules and the debrief format (pilot within 30 days).
  • Legal & GDPR. Agree legal basis (usually legitimate interest), data minimisation, access rights and deletion schedule with HR/Legal and the works council before launch.
  • Rollout. Extend to more teams or the full company once the pilot works. Use survey software or a people platform to automate invites, reminders and follow-up tasks.
  • Manager training. Offer a 60–90 minute session on how to read scores, hold psychological-safety conversations and avoid defensiveness (within 2 weeks after company-wide launch).
  • Regular review. Revisit question wordings, thresholds and processes at least once per year. Track metrics like participation rate, average scores per dimension, number of actions completed, and time from survey close to team debrief.

In DACH, share a short data-protection info sheet: what you collect (and what not), how long you store results (e.g. 24 months), which tools you use, and that comments are never reported in a way that reveals individuals. Resources on general employee survey templates and GDPR/works council checklists can help you align your psychological safety survey with existing standards.

Conclusion

Done well, psychological safety survey questions give you three big advantages. They reveal hidden risks before they turn into incidents or resignations. They raise the quality of conversations between managers and teams by putting sensitive topics into neutral, shared language. And they help you prioritise where to invest time and training to strengthen trust, inclusion and “Fehlerkultur”.

Start small: choose one blueprint, agree on anonymity rules with the works council, and launch a pilot. Load the items into your survey tool or HR platform, set clear owners and dates for follow-up, and plan the first team debriefs before the survey even opens. Use the first round less as a verdict and more as a learning exercise on how your organisation talks about psychological safety.

From there, make it a rhythm. Repeat short pulses, track trends, and connect actions to other processes like development plans, performance reviews and leadership training. Over time, you’ll see clearer patterns, fewer surprises and a culture where people can raise problems early, experiment more confidently and support each other under pressure.

FAQ

  1. How often should we run psychological safety surveys?
    For most organisations, once per year plus targeted team pulses works well. Use the annual survey for a deep dive across all psychological safety dimensions. Then run small pulses (10–12 items) in specific teams after big changes, incidents or leadership transitions. Avoid surveying the same group more than quarterly to limit fatigue and keep response quality high.
  2. What should we do if a team’s scores are very low?
    Treat averages below 3.0 on several dimensions as urgent. First, meet the team in a safe format (anonymous questions, external facilitator if needed) to understand examples behind the scores. Then agree 1–3 concrete changes with owners and deadlines. Support managers with coaching, and check in again using a short pulse after 8–12 weeks to see if things improved.
  3. How do we protect anonymity, especially in small teams?
    Set minimum group sizes for reporting (e.g. ≥5 responses) and never show results for smaller subgroups. Combine locations, roles or genders where needed. For open comments, remove names or identifying details before sharing. In DACH, align this approach with the works council and document it. Communicate clearly that individual responses are never shown to managers or used in performance decisions.
  4. Should psychological safety scores influence performance reviews?
    Do not tie individual compensation directly to psychological safety survey scores. You can use patterns to decide which managers need extra support or training, or to recognise teams that model strong “psychologische Sicherheit”. Keep the survey primarily as a development tool. This reduces pressure on employees and managers and aligns with best practices described by researchers like Amy Edmondson.
  5. How do we keep the question set up to date?
    Review the items annually with HR, a few managers and employee representatives. Ask: “Which questions gave us actionable insights?” and “What felt unclear or repetitive?” Add or adjust items to reflect new realities, such as remote-work norms or rising workload issues. Keep a stable core (e.g. Q1, Q15, Q39, Q44, Q45) so you can track long-term trends.

Jürgen Ulbrich

CEO & Co-Founder of Sprad

Jürgen Ulbrich has more than a decade of experience in developing and leading high-performing teams and companies. As an expert in employee referral programs as well as feedback and performance processes, Jürgen has helped over 100 organizations optimize their talent acquisition and development strategies.
