Psychological Safety Survey Questions for Teams: Spot Risks Before Performance Drops

By Jürgen Ulbrich

Psychological safety sounds abstract, but it shows up in very concrete moments: do people speak up, admit mistakes, ask for help, or stay silent? This set of psychological safety survey questions for teams gives you early warning signals before performance, innovation or retention start to slide. You get a reusable question bank plus clear thresholds and actions so results turn into real changes, not another ignored survey.

Survey questions: the psychological safety question bank

All closed questions use a 1–5 scale (1 = Strongly disagree, 5 = Strongly agree). The bracket at the end shows where to use each item: (Pulse + annual) or (Annual only).

  • In our team, I feel comfortable asking questions or saying “I don’t know” when needed. (Pulse + annual)
  • I can raise work-related concerns or report potential problems without fear of negative consequences. (Pulse + annual)
  • When I speak up about an idea or issue, my perspective is taken seriously by others. (Pulse + annual)
  • I know the channels (e.g. escalation path) to report serious issues or near-misses. (Annual only)
  • It’s safe here to try a new approach, even if it might fail. (Pulse + annual)
  • When mistakes happen, our team focuses on learning and solutions rather than blaming individuals. (Pulse + annual)
  • I feel I can admit a mistake without hurting my reputation. (Annual only)
  • Bad news (e.g. project setbacks) tends to be shared early rather than hidden. (Annual only)
  • I can give and receive honest feedback without harming working relationships. (Pulse + annual)
  • Disagreements or conflicts are addressed openly and resolved fairly in my team. (Pulse + annual)
  • I can challenge decisions respectfully, even if I have less seniority. (Annual only)
  • We regularly reflect on what we can improve as a team (e.g. retros, lessons learned). (Annual only)
  • Diverse opinions are welcomed in this team, even if they challenge the majority view. (Pulse + annual)
  • People here treat each other with respect regardless of role, background or identity. (Pulse + annual)
  • Offensive jokes or derogatory comments against any group are discouraged and addressed. (Annual only)
  • Remote, part-time or shift-based colleagues are included equally in discussions and decisions. (Annual only)
  • My manager reacts calmly and supportively when I bring up problems or complaints. (Pulse + annual)
  • My manager openly admits their own mistakes and encourages us to learn from them. (Annual only)
  • My manager explicitly communicates that retaliation for speaking up will not be tolerated. (Annual only)
  • Team members regularly help each other when someone is overloaded or needs advice. (Pulse + annual)
  • Information and knowledge are shared openly among our team. (Pulse + annual)
  • I feel comfortable asking my colleagues for help when I’m stuck. (Pulse + annual)
  • Overall, I feel psychologically safe in this team. (Pulse + annual)
  • I would speak up quickly if I noticed a serious risk, near miss or compliance concern. (Pulse + annual)

Overall team safety rating (0–10)

  • How likely are you to recommend this team as a place where people speak up and learn from mistakes? (0 = Not at all likely, 10 = Extremely likely)

Open questions

  • In which situations do you most hesitate to speak up in this team?
  • What would make it easier for you to raise concerns or share bad news earlier?
  • What is one specific change that could improve our “Fehlerkultur” (how we handle mistakes and failures)?
  • What is one thing our team currently does that really helps you speak openly and safely?

Decision & action table

  • Speaking up & questions (Q1–Q4). Trigger: average <3.0 or ≥30% rate 1–2. Action: run a team session on speak-up barriers; agree 2 new rituals (e.g. round-robin check-in, anonymous “risk inbox”). Owner: Team lead + HRBP. Timeline: session ≤14 days; rituals live ≤30 days.
  • Mistakes & learning (Q5–Q8). Trigger: average <3.0 or ≥25% rate 1–2. Action: introduce blameless post-mortems; leaders share one “mistake & learning” story in team meetings each month. Owner: Team lead + Department head. Timeline: first post-mortem ≤21 days; monthly practice ongoing.
  • Feedback & conflict (Q9–Q12). Trigger: average <3.5 or ≥20% rate 1–2. Action: define feedback rules; provide short training on constructive feedback and conflict resolution with real team cases. Owner: People team + Team lead. Timeline: training ≤30 days; team rules documented ≤45 days.
  • Inclusion & respect (Q13–Q16). Trigger: average <3.0 or subgroup gap ≥0.5. Action: review meeting formats; set rules for airtime and respectful language; check remote/part-time inclusion in key decisions. Owner: Diversity/HR + Area manager. Timeline: rules agreed ≤30 days; first review of meetings ≤60 days.
  • Manager behaviour (Q17–Q19). Trigger: average <3.0 or any item ≥40% rate 1–2. Action: start 1:1 coaching for the manager; run skip-level listening sessions; restate the “no retaliation” policy to the team. Owner: HRBP + Manager’s manager. Timeline: coaching plan ≤21 days; skip-levels ≤45 days.
  • Collaboration & support (Q20–Q22). Trigger: average <3.0 or ≥25% rate 1–2. Action: check workload distribution; set up peer support pairs; schedule regular knowledge-sharing slots in team meetings. Owner: Team lead + Department head. Timeline: plan ≤14 days; first knowledge session ≤30 days.
  • Overall safety & risk signals (Q23–Q24 + 0–10 item). Trigger: average <3.0 or 0–10 rating ≤6. Action: clarify risk-reporting routes; agree 2–3 “safe-to-fail” experiments; ask a senior leader to reinforce psychological safety. Owner: Function lead + HR. Timeline: guidelines ≤30 days; senior message ≤45 days.

Key takeaways

  • Use this survey as an early-warning system for team climate and hidden risks.
  • Cluster questions by theme to see exactly where psychological safety breaks down.
  • Link clear score thresholds to actions, not vague “we should improve” statements.
  • Protect anonymity, define owners and deadlines, and track every follow-up step.
  • Repeat short pulses to test whether workshops and experiments improve psychological safety.

Definition & scope

This survey measures psychological safety (psychologische Sicherheit) in teams: can people speak up, admit mistakes, ask for help and disagree without fear of blame or exclusion? It targets intact teams at any level, from shop floor to leadership. Results feed into coaching, team workshops, “Fehlerkultur” and feedback culture, and complement a broader Employee Engagement & Retention strategy or general employee experience surveys.

Core dimensions & typical actions

1. Speaking up & questions

This covers comfort with asking questions, saying “I don’t know”, and raising concerns (Q1–Q4). Scores below 3.0 usually mean people only speak up when it feels 100% safe or when problems are already big.

  • Team lead opens each meeting with a quick “questions first” round; start within ≤14 days.
  • HRBP coaches managers to thank people for bad news instead of reacting defensively; begin within ≤30 days.
  • Publish a simple escalation path slide in the team workspace; owner: Team lead; update yearly.
  • Use 1:1s to ask “What is one thing you didn’t say in the last meeting?”; start next 1:1 cycle.

2. Mistakes & learning

This dimension looks at Fehlerkultur: how the team reacts when things go wrong (Q5–Q8). Low scores often show blame, shame or hiding errors, which kills innovation and safety reporting.

  • After incidents, run 30–45 minute blameless reviews focusing on process, not people; owner: Team lead; from next incident onward.
  • The manager shares one of their own mistakes and the learning from it in a monthly meeting; owner: Manager; first story ≤30 days.
  • HR provides a short guide for “no-blame” language; deliver to all managers ≤45 days.
  • Set a target to log ≥2 near misses per quarter per team; review with HSE/Compliance quarterly.

3. Feedback & conflict

Here you see whether feedback and disagreements are handled constructively (Q9–Q12). When scores dip, conflicts move into side channels, and honest feedback disappears.

  • Run a 60-minute workshop to co-create feedback rules (“no surprises”, “criticize in private”); owner: HR; session ≤45 days.
  • Introduce a simple “start/stop/continue” round monthly; owner: Team lead; first round ≤30 days.
  • Managers practice asking for feedback first in 1:1s; track in performance check-ins each quarter.
  • If conflicts escalate, bring in a neutral facilitator for one session; owner: HRBP; arrange within ≤14 days.

4. Inclusion & respect

This lens covers respect, belonging and equal voice for remote, part-time or minority groups (Q13–Q16). Gaps between subgroups signal fairness issues and future attrition risk.

  • Check results by location, contract type and language; owner: HR Analytics; deliver cut ≤10 days post-survey.
  • Rotate who speaks first in meetings; include remote participants early; owner: Team lead; start next meeting.
  • Refresh team rules on jokes and comments; zero tolerance for discriminatory remarks; owner: Manager; confirm ≤30 days.
  • Set up buddies for new or underrepresented colleagues; owner: HR + Team lead; complete matches ≤60 days.

5. Manager behaviour

Manager reactions to mistakes, criticism and bad news (Q17–Q19) strongly predict psychological safety. Low scores here often drag down all other areas.

  • Offer targeted coaching on listening and de-escalation; owner: HRBP; start within ≤21 days of low result.
  • Run skip-level feedback sessions twice a year; owner: Manager’s manager; first session ≤60 days.
  • Have managers restate “no retaliation” and give examples in team meetings; next meeting after survey.
  • Include psychological safety in manager performance goals; owner: HR + Leadership; update goals by next cycle.

6. Collaboration & support

This dimension reflects help-seeking, workload sharing and open information flow (Q20–Q22). Weak scores can signal burnout risk and silo behaviour.

  • Map workload and adjust priorities where individuals are overloaded; owner: Team lead; review ≤30 days.
  • Introduce weekly 15-minute “help needed / help offered” stand-ups; owner: Scrum Master or lead; start next sprint.
  • Create shared documentation for key processes; owner: Senior IC; first version ≤45 days.
  • Celebrate visible acts of support in meetings; owner: Manager; ongoing from now.

7. Overall safety & risk signals

These items aggregate how safe the team feels and willingness to flag risks (Q23–Q24 + 0–10 item). Very low ratings can hint at compliance, safety or whistleblowing issues.

  • If avg <3.0, treat it as a risk indicator and review with HR and Compliance; meeting ≤7 days.
  • Clarify anonymous channels for serious issues (e.g. hotline, trusted ombudsperson); owner: Legal/Compliance; communication ≤30 days.
  • Include psychological safety in site or function risk registers; owner: HSE/Compliance; update next cycle.
  • Track trends across surveys; owner: HR Analytics; provide quarterly dashboard to leadership.

Blueprints: survey formats

Use the full question bank flexibly. These blueprints help you pick the right format for your context without reinventing the wheel every time.

  • Quarterly psychological safety pulse. Purpose: quick health check on key drivers and trends in active teams. Questions: Q1–Q3, Q5–Q6, Q9–Q10, Q13–Q14, Q17, Q20–Q21, Q23 + 0–10 item. Length & frequency: 13 closed + 1 rating; every 3–4 months.
  • Annual in-depth team safety survey. Purpose: full picture across all dimensions for strategy, culture and training plans. Questions: all Q1–Q24 + 0–10 item + 4 open questions. Length & frequency: 24 closed + 1 rating + 4 open; 1× per year.
  • Post-incident / post-conflict follow-up. Purpose: check the impact of a recent incident, reorganisation or conflict on safety. Questions: Q2, Q5–Q8, Q9–Q10, Q17, Q23–Q24 + 2–3 open questions tailored to the event. Length & frequency: 8–10 closed + 2–3 open; run within ≤30 days of the incident.
  • Agile / retrospective micro-pulse. Purpose: integrate psychological safety into sprint retrospectives. Questions: Q1, Q5, Q9, Q12, Q20, Q23 (short 1–5 rating each). Length & frequency: 5–6 items; every sprint or monthly.

Scoring & thresholds

The closed items use a 1–5 scale (1 = Strongly disagree, 5 = Strongly agree). As a rule of thumb, an average score below 3.0 signals a critical gap, 3.0–3.9 shows “needs improvement”, and ≥4.0 is healthy. Watch both averages and the percentage of low ratings (1–2) on each question.

  • If any theme avg <3.0: Team lead runs an action-focused discussion and defines 2–3 steps (Owner: Team lead; session ≤14 days).
  • If ≥30% rate 1–2 on a single item: HRBP investigates context with the manager and, if needed, the Betriebsrat (works council) (Owner: HRBP; initial review ≤7 days).
  • If theme avg 3.0–3.9: Team sets 1–2 experiments and re-checks via pulse next quarter (Owner: Team lead; plan ≤30 days).
  • If theme avg ≥4.0 across ≥2 surveys: Recognise and document as good practice (Owner: HR; share examples within ≤60 days).
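As a sketch, the averaging and threshold rules above can be expressed in a few lines of Python. This is an illustration only, assuming responses arrive as plain lists of 1–5 ratings per theme (the data layout and example scores are hypothetical); for brevity it applies the ≥30% low-rating trigger at theme level, whereas the single-item check in the list above runs per question.

```python
# Sketch of the scoring rules: 1-5 ratings per theme, classified by
# average score and by the share of low (1-2) answers.

def classify_theme(ratings):
    """Return (average, share of 1-2 ratings, status) for one theme."""
    avg = sum(ratings) / len(ratings)
    low_share = sum(1 for r in ratings if r <= 2) / len(ratings)
    if avg < 3.0 or low_share >= 0.30:
        status = "critical"           # action-focused session within 14 days
    elif avg < 4.0:
        status = "needs improvement"  # 1-2 experiments, re-check next pulse
    else:
        status = "healthy"            # document as good practice
    return round(avg, 2), round(low_share, 2), status

themes = {
    "Speaking up & questions": [2, 3, 2, 4, 3, 2],  # hypothetical ratings
    "Mistakes & learning":     [4, 4, 5, 3, 4, 4],
}
for name, ratings in themes.items():
    print(name, classify_theme(ratings))
```

A script like this only flags where to look; it never replaces reading the open comments with the team.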

Follow-up & responsibilities

The survey only builds trust when people see movement. Clarify upfront who owns which step, including GDPR (DSGVO) and Betriebsrat topics. A talent platform like Sprad Growth or a similar system can help automate survey sends, reminders and follow-up tasks.

  • HR/People Ops: design survey, set anonymity rules, complete data protection check and consult the Betriebsrat ≥30 days before launch.
  • HR/IT: ensure survey data is stored in the EU, access-limited and deleted or anonymised after a defined period (e.g. 12–24 months).
  • Team leads: receive summary within 5 working days after survey close and schedule a team results meeting within the next 5 working days.
  • Each team: agrees on 2–3 actions with clear owners and due dates, documents them in a shared place (Owner: Team lead; log actions ≤21 days).
  • HRBPs/Senior leaders: review cross-team patterns quarterly and address structural blockers (KPIs, policies, tools) within ≤60 days.
  • HRBP or Ombudsperson: if comments flag acute risk (e.g. harassment, safety threats), start a confidential process within ≤24 hours.

Fairness & bias checks

Psychological safety is rarely evenly distributed. Office vs. frontline, remote vs. on-site or minority vs. majority groups often score differently. Analyse data by subgroup while keeping anonymity (show subgroup data only if there are at least 5 responses).

  • Segment results by team, location, contract type, working pattern (remote/office/shift) and tenure; Owner: HR Analytics; deliver segmented reports ≤10 days.
  • Flag any subgroup scoring ≥0.5 below company or site average on a dimension; follow up with focus groups or interviews (Owner: HRBP + local leader; plan ≤30 days).
  • Typical pattern: Remote staff feel less included in decisions; response: adjust meeting formats and update collaboration tools (Owner: Team lead; changes ≤45 days).
  • Typical pattern: New hires and minority groups report higher fear of speaking up; response: buddy systems, inclusive leadership training, clear anti-retaliation reminders (Owner: HR + Managers; rollout ≤60 days).
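The subgroup checks above can likewise be sketched in Python. The group names, scores and the 3.9 reference average are made-up examples; the two constants encode the rules from this section (suppress cuts below 5 responses, flag gaps of ≥0.5).

```python
# Sketch of the fairness checks: hide small cuts for anonymity,
# flag subgroups scoring well below the reference average.

MIN_GROUP_SIZE = 5  # anonymity threshold: never show smaller cuts
GAP_FLAG = 0.5      # flag subgroups >= 0.5 below the reference average

def subgroup_report(scores_by_group, reference_avg):
    """Map each subgroup to ('hidden'|'flagged'|'ok', average or None)."""
    report = {}
    for group, scores in scores_by_group.items():
        if len(scores) < MIN_GROUP_SIZE:
            report[group] = ("hidden", None)  # too small to show safely
            continue
        avg = round(sum(scores) / len(scores), 2)
        status = "flagged" if reference_avg - avg >= GAP_FLAG else "ok"
        report[group] = (status, avg)
    return report

inclusion_scores = {                       # hypothetical example data
    "on-site":     [4, 4, 5, 4, 3, 4, 4],
    "remote":      [3, 3, 2, 3, 4, 3],
    "night shift": [2, 3, 4],              # only 3 responses: suppressed
}
print(subgroup_report(inclusion_scores, reference_avg=3.9))
```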

Examples / use cases

Example 1: Product team hesitates to challenge decisions. A software product team scored 2.7 on Q1–Q3 (speaking up) while the company average was 3.9. Comments mentioned fear of contradicting the senior product manager. HR organised a two-hour facilitated workshop, followed by manager coaching. The team agreed on new rules (“junior voices first”, no interruptions, explicit thanks for dissent). Three months later, the speak-up score rose to 3.9, and the backlog contained more improvement ideas from all levels.

Example 2: Blame culture on the shop floor. One manufacturing shift had an average of 2.5 on mistakes and learning (Q5–Q8) and several comments about “public shaming” when errors happened. The site manager stopped public blame, introduced short, no-blame incident reviews after each near miss and asked supervisors to share one weekly learning story. In the next survey, the score rose to 3.6, near-miss reporting doubled, and quality incidents dropped.

Example 3: Inclusion gap in a shared service centre. Most scores were strong, but international employees scored 0.8 lower on inclusion (Q13–Q16) than local staff. HR ran two confidential focus groups, added an inclusive language workshop, adjusted shift patterns for time zones and introduced cross-cultural mentoring pairs. In the following cycle, the gap shrank to 0.3 and retention of international staff improved.

Implementation & updates

Start with a small pilot and grow from there. In DACH organisations, involve Datenschutz (data protection) and the Betriebsrat early, agree on the legal basis (usually Art. 6(1)(f) GDPR, “legitimate interest”) and define data minimisation and retention rules. Connect this survey with your broader people processes, for example performance reviews or engagement surveys, rather than running it in isolation.

  • Pilot the full annual survey with 2–3 diverse teams (office, hybrid, frontline); Owner: HR; design and run within ≤60 days.
  • Document anonymity thresholds (e.g. min. 5 responses per cut) and retention periods in a short data concept; Owner: HR + Legal; approve ≤30 days before launch.
  • Train managers on interpreting scores and running team discussions; Owner: HRBP; offer 60–90 minute sessions within ≤4 weeks of pilot.
  • After the pilot, refine questions, thresholds and communication based on feedback; Owner: HR; complete adjustments ≤30 days post-pilot.
  • Track key metrics such as participation rate (target ≥70%), average safety scores, action completion rate and changes in incident/near-miss reporting; Owner: HR Analytics; update quarterly.

For a deeper link with development and performance, align actions from this survey with your broader Performance Management approach. For example, include psychological safety goals in manager objectives or team OKRs and review them in regular check-ins.

Conclusion

Psychological safety is often invisible until something goes wrong: a project derails, an incident stays unreported, or a valued colleague leaves. This survey template turns those vague worries into clear psychological safety survey questions for teams, concrete scores and practical actions. You see earlier where people stay silent, where mistakes are punished and where inclusion is breaking down.

Used well, the survey also improves the quality of conversations. Instead of arguing about feelings, teams can look at data, read a few comments and then design small experiments: a new check-in round, different meeting rules, a change in how managers respond to bad news. Over time, that builds trust and a more honest feedback culture.

Next steps can be simple: pick one pilot team, set up the annual in-depth survey in your existing employee survey tool or HR platform, and agree on anonymity rules with Datenschutz and Betriebsrat. Schedule a results workshop about three weeks after launch and leave that meeting with 2–3 documented experiments, owners and deadlines. Once you see what works, roll out a quarterly pulse and link the insights to your engagement, performance and retention work.

FAQ

  • How often should we run this psychological safety survey?
    Most organisations combine one in-depth survey per year with 2–3 shorter pulses. The annual survey gives the full picture; the pulses show trends and whether actions help. Google’s re:Work research on high-performing teams (Project Aristotle) identified psychological safety as the top factor for team success, which supports treating this as an ongoing practice rather than a one-off.
  • What should we do if a team gets very low scores?
    Treat low scores as a signal, not a verdict. First, HRBP and manager review the data and comments confidentially. Then they hold a team session to understand examples and co-create a few changes. If comments indicate retaliation, bullying or safety risks, escalate through HR, Compliance or the ombudsperson within ≤24 hours. Document all steps and check progress with a pulse survey after 3–6 months.
  • How do we handle critical or emotional comments?
    Look for patterns instead of reacting to single sentences. Group comments by themes (e.g. “fear of speaking up in meetings”) and share them in anonymised form. If comments describe specific incidents that may breach policy or law, escalate via HR or Compliance. In team sessions, focus on “What can we change going forward?” rather than “Who wrote this?”. That keeps discussions constructive.
  • How do we involve managers and employees without creating fear?
    Explain the purpose clearly: improving teamwork, Fehlerkultur and wellbeing, not hunting for “bad managers”. Share how anonymity works and who sees what. Ask managers to share their own scores and learning points first, then invite the team. Co-create 2–3 small experiments per quarter, like check-in questions or no-blame retros. When people see follow-up, participation and honesty increase.
  • How should we update the question set over time?
    Keep the core questions stable for at least 2–3 years so you can track trends. Review the full set annually with HR, a few managers, the Betriebsrat and, if possible, some employees. Retire questions that are consistently very high and no longer helpful, and add 1–2 items for new priorities (e.g. hybrid work). For broader changes, align with your overall employee survey framework, for example the one described in this employee survey template guide.

Jürgen Ulbrich

CEO & Co-Founder of Sprad

Jürgen Ulbrich has more than a decade of experience in developing and leading high-performing teams and companies. As an expert in employee referral programs as well as feedback and performance processes, Jürgen has helped over 100 organizations optimize their talent acquisition and development strategies.
