Career Framework Survey Questions: How Employees and Managers Experience Career Paths

By Jürgen Ulbrich

These career framework survey questions help you see whether the framework that looks clean on slides is also clear, trusted, and usable in real career conversations. You’ll get early warnings (confusion, fairness concerns, blocked mobility) and a simple way to turn scores into decisions you can execute with owners and deadlines.

Survey questions

2.1 Closed-ended questions (5-point Likert scale)

Scale: 1 = Strongly disagree, 2 = Disagree, 3 = Neither agree nor disagree, 4 = Agree, 5 = Strongly agree.

Employees (E1–E48)

Awareness & understanding (E1–E6)

  • E1 [Employee] I know our career framework (Karriererahmen/Laufbahnmodell) exists.
  • E2 [Employee] I know where to find the career framework content.
  • E3 [Employee] I understand what the framework is used for in decisions.
  • E4 [Employee] I understand how my role maps to the framework.
  • E5 [Employee] I can explain the levels in my job family to a colleague.
  • E6 [Employee] I know who to ask when the framework is unclear.

Clarity of levels & roles (E7–E12)

  • E7 [Employee] Level expectations for my role are written in observable terms.
  • E8 [Employee] Promotion criteria in my job family are specific and measurable.
  • E9 [Employee] The difference between adjacent levels is easy to understand.
  • E10 [Employee] Examples of “good performance” by level help me self-assess.
  • E11 [Employee] I know what “scope” and “impact” mean in our level definitions.
  • E12 [Employee] I know what evidence is expected for a level change.

Fairness & access (E13–E18)

  • E13 [Employee] Level assignments feel consistent across teams doing similar work.
  • E14 [Employee] Promotions feel fair across locations and working models (remote/office).
  • E15 [Employee] Promotions feel fair across demographics and personal backgrounds.
  • E16 [Employee] I understand how calibration works for promotions or re-leveling.
  • E17 [Employee] I trust decisions are based on evidence, not visibility or politics.
  • E18 [Employee] I know how to raise concerns about leveling or promotion fairness.

Skills & development link (E19–E24)

  • E19 [Employee] I see which skills matter most for my next level.
  • E20 [Employee] I can link my development plan to level expectations.
  • E21 [Employee] Learning resources align with the skills in the framework.
  • E22 [Employee] Feedback I receive references the framework, not only personal opinion.
  • E23 [Employee] I know how to build evidence for a promotion case.
  • E24 [Employee] I can track progress on skills relevant to my career path.

Internal mobility & opportunities (E25–E30)

  • E25 [Employee] The framework helps me find realistic lateral move options.
  • E26 [Employee] I can see which internal roles are “nearby” for my skills.
  • E27 [Employee] I understand how level equivalence works across functions.
  • E28 [Employee] I feel encouraged to apply for internal roles or projects.
  • E29 [Employee] Internal job posts describe levels and expectations consistently.
  • E30 [Employee] I know the steps and timeline for an internal move.

Manager support in conversations (E31–E36)

  • E31 [Employee] My manager discusses the framework in 1:1s.
  • E32 [Employee] My manager helps translate level expectations into concrete goals.
  • E33 [Employee] My manager gives clear examples of what to improve for progression.
  • E34 [Employee] My manager supports internal moves, even if it means I leave.
  • E35 [Employee] My manager sets aside time for development planning regularly.
  • E36 [Employee] My manager uses the framework consistently across the team.

Psychological safety & trust (E37–E42)

  • E37 [Employee] I feel safe asking questions about my level or role mapping.
  • E38 [Employee] I can express interest in a lateral move without career damage.
  • E39 [Employee] I do not fear being labeled negatively by the framework.
  • E40 [Employee] I trust the framework is used for growth, not punishment.
  • E41 [Employee] I trust level discussions are confidential and handled respectfully.
  • E42 [Employee] I believe feedback is welcomed when the framework fails in practice.

Overall impact & transparency (E43–E48)

  • E43 [Employee] The framework increases transparency about career paths here.
  • E44 [Employee] The framework improves the quality of career conversations.
  • E45 [Employee] The framework helps me make realistic career decisions.
  • E46 [Employee] The framework increases my motivation to develop skills.
  • E47 [Employee] The framework makes promotions feel more predictable.
  • E48 [Employee] Overall, the framework improves my intent to stay.

Managers (M1–M42)

Clarity & training (M1–M6)

  • M1 [Manager] I understand the level definitions in the job families I lead.
  • M2 [Manager] I received practical training on how to use the framework.
  • M3 [Manager] I know how to map roles to levels in ambiguous cases.
  • M4 [Manager] I can explain promotion criteria in a clear, consistent way.
  • M5 [Manager] I know what documentation is needed for promotion decisions.
  • M6 [Manager] I know where to find up-to-date framework guidance.

Use in reviews & 1:1s (M7–M12)

  • M7 [Manager] I use the framework in regular 1:1 development conversations.
  • M8 [Manager] I reference level expectations when giving performance feedback.
  • M9 [Manager] I use the framework to set measurable growth goals.
  • M10 [Manager] My team’s development plans connect to the framework.
  • M11 [Manager] The framework reduces disagreement about expectations.
  • M12 [Manager] I can coach “what good looks like” at the next level.

Calibration & promotion decisions (M13–M18)

  • M13 [Manager] Calibration meetings use consistent criteria aligned to the framework.
  • M14 [Manager] I feel confident challenging inconsistent leveling decisions.
  • M15 [Manager] Promotion decisions are backed by evidence, not personal preference.
  • M16 [Manager] I understand how lateral moves are evaluated versus promotions.
  • M17 [Manager] I can clearly explain a “not yet” promotion decision.
  • M18 [Manager] The framework reduces escalations and disputes about promotions.

Skills & IDPs (M19–M24)

  • M19 [Manager] The framework connects to a usable skills model for my function.
  • M20 [Manager] I can identify the top 3 skills gaps blocking progression.
  • M21 [Manager] Learning options map clearly to skills and level expectations.
  • M22 [Manager] I can track development progress with observable evidence.
  • M23 [Manager] IDPs focus on skills and outcomes, not only training attendance.
  • M24 [Manager] Skills data helps me staff projects and stretch assignments fairly.

Fairness & bias (M25–M30)

  • M25 [Manager] The framework reduces bias compared to purely subjective assessment.
  • M26 [Manager] I watch for “visibility bias” when discussing promotions.
  • M27 [Manager] I consider impact relative to opportunity and context.
  • M28 [Manager] I feel safe raising fairness concerns in calibration meetings.
  • M29 [Manager] The process works equally well for remote and onsite employees.
  • M30 [Manager] I can explain how we prevent favoritism in leveling decisions.

HR support & tooling (M31–M36)

  • M31 [Manager] HR guidance on leveling and promotions is timely and practical.
  • M32 [Manager] Templates for promotion cases are clear and easy to use.
  • M33 [Manager] Tools embed the framework into reviews and development planning.
  • M34 [Manager] I can access level definitions without switching many systems.
  • M35 [Manager] HR helps resolve cross-team leveling inconsistencies quickly.
  • M36 [Manager] I know the escalation path for disputes about leveling decisions.

Overall impact & improvement (M37–M42)

  • M37 [Manager] The framework improves career conversations across my team.
  • M38 [Manager] The framework helps retain talent by making growth clearer.
  • M39 [Manager] The framework helps internal mobility by clarifying equivalence.
  • M40 [Manager] The framework reduces time spent debating unclear expectations.
  • M41 [Manager] The framework is usable in real situations, not just on paper.
  • M42 [Manager] I have a clear list of improvements that would raise adoption.

2.2 Overall / NPS-like rating questions (0–10 scale)

  • ER1 [Employee] How clear are the levels and promotion criteria in your job family? (0–10)
  • ER2 [Employee] How fair do leveling and promotion decisions feel overall? (0–10)
  • ER3 [Employee] How much does the framework increase your motivation to grow here? (0–10)
  • MR1 [Manager] How confident are you applying the framework to real cases? (0–10)
  • MR2 [Manager] How fair do calibration and promotion decisions feel overall? (0–10)
  • MR3 [Manager] How useful is the framework for internal mobility decisions? (0–10)

2.3 Open-ended questions (open text)

  • O1 [Shared] Describe one situation where the framework helped you make a decision.
  • O2 [Shared] Describe one situation where the framework created confusion or friction.
  • OE1 [Employee] What is the clearest part of the framework for your role today?
  • OE2 [Employee] What is the most confusing level expectation, and why?
  • OE3 [Employee] What would make promotions feel more fair and predictable?
  • OE4 [Employee] What would help you use the framework in your own development planning?
  • OE5 [Employee] What makes internal moves feel risky in your area?
  • OE6 [Employee] What should your manager start/stop/continue doing with the framework?
  • OM1 [Manager] Where do you struggle most applying level definitions in real cases?
  • OM2 [Manager] Which part of calibration creates the most disagreement, and why?
  • OM3 [Manager] What HR enablement (training, templates, tooling) would help most?
  • OM4 [Manager] If you could change one rule in promotions, what would it be?

Decision & action table (how to turn results into next steps)

| Question(s) or area | Score / threshold | Recommended action | Responsible (Owner) | Goal / deadline |
|---|---|---|---|---|
| Awareness & understanding (E1–E6) | Average <3.4 or ≥20% answering “1–2” | Publish a 1-page “how to use the framework” guide; add an FAQ to the intranet. | HR (Career Framework Owner) | Draft in 14 days, publish in 30 days |
| Clarity of levels (E7–E12, ER1) | Average <3.2 or ER1 <6/10 | Rewrite level descriptors into observable behaviors; add 3 examples per level. | Functional Leader + HR | Workshop in 21 days, updated rubric in 60 days |
| Fairness & access (E13–E18, ER2, M25–M30) | Average <3.3 or group gaps ≥0.5 points | Run a calibration audit; standardise the evidence checklist; add an appeal path. | HRBP + Dept Head | Audit in 30 days, new rules live next cycle |
| Manager support (E31–E36, M7–M12) | E31 or M7 average <3.0 | Train managers with scripts + 1:1 agenda prompts tied to levels. | L&D + People Managers | Training in 45 days, adoption check in 90 days |
| Skills & development link (E19–E24, M19–M24) | Average <3.2 | Map top skills per level; align learning content; update IDP template fields. | L&D Lead | Mapping in 60 days, content alignment in 90 days |
| Internal mobility (E25–E30, M16, M39, MR3) | Average <3.1 or MR3 <6/10 | Standardise internal job levels; publish “equivalence rules” across functions. | Talent Mobility Lead | Rules in 45 days, job post update in 75 days |
| Psychological safety & trust (E37–E42) | Average <3.0 or ≥15% “1” on E38 | Stop labeling language; clarify confidentiality; run listening sessions by area. | HR + Site/Org Leadership | Response within 7 days, sessions in 30 days |

Key takeaways

  • Measure usability, not existence: clarity, fairness, mobility, and manager habits.
  • Use thresholds to trigger actions with owners and deadlines.
  • Segment results to detect inequity across locations, levels, and working models.
  • Fix wording, evidence standards, and calibration before adding more levels.
  • Close the loop within 30 days to keep trust in the survey (Mitarbeiterbefragung).

Definition & scope

This survey measures how employees and managers experience your career framework in daily work: findability, level clarity, fairness, skills linkage, internal mobility, and psychological safety. Use it for organisations that have launched (or are launching) a framework and need adoption signals to guide rewrites, manager enablement, calibration rules, and tooling choices. For background, see the career framework guide.

How to run career framework survey questions without creating noise

Keep it simple: one employee version, one manager version, and a shared open-text block. Run it when people have fresh experiences: after a promotion cycle, a re-leveling, or a mobility push. If you already run broader engagement surveys, treat this as a focused module, not another big annual questionnaire.

Process (fast and repeatable): define scope, send, analyse, decide, communicate, follow up. A talent platform like Sprad Growth can help automate survey sends, reminders, and follow-up tasks without changing your content.

  • HR defines audience rules (employees vs managers) and a timeline in 7 days.
  • Comms owner drafts a 150-word invite + confidentiality note in 7 days.
  • HR launches survey for 10–14 days; send 2 reminders, max.
  • HR and leaders review results within 10 days after close.
  • Owners publish actions and deadlines within 30 days after close.

Survey blueprints: picking the right set of career framework survey questions

You don’t need all items every time. Use the question bank as a library, then pick a blueprint that fits the moment. If you want a mobility-specific comparator, align a few items with your internal mobility survey so you can track whether clarity translates into real moves.

| Blueprint | Who | Recommended items | Timing | What you decide |
|---|---|---|---|---|
| (a) Baseline before/after launch (22–26 items) | Employees + managers | E1–E6, E7–E12, E13–E18, E31–E36, ER1–ER2; M1–M6, M7–M12, M13–M18, MR1–MR2; O1–O2 | T-14 days before launch, then +90 days | Content fixes, manager training needs, calibration gaps |
| (b) Annual health check (18–22 items) | Employees only | E1–E6, E7–E12, E13–E18, E19–E24, E25–E30, ER1–ER3; OE1–OE4 | 1× per year, same month | Adoption trend, fairness trend, mobility barriers, L&D alignment |
| (c) Short pulse after update/re-leveling (10–12 items) | Employees + managers | E2–E5, E7–E9, E16–E17, ER1–ER2; M3–M5, M13–M15, MR1; O2 | 2–6 weeks after change | Whether the change landed; what still confuses people |
| (d) Function-specific ladder revision (12–15 items) | Target function only | E7–E12 + E19–E24 + ER1; M1–M6 + M19–M22 + MR1; OE2, OM1 | During draft review + after rollout | Rewrite role ladder language and skill expectations by level |
  • HR selects a blueprint and final items in 5 days.
  • Functional SMEs validate wording (no jargon) in 10 days.
  • HR runs a 30-minute manager briefing before launch in 14 days.
  • HR publishes a “what happens next” timeline on day 1 of launch.
  • Leaders commit to 3 actions max per area within 30 days after close.

Analysing career framework survey questions: what to look for beyond averages

Start with patterns by dimension, then look for “breaks” between groups. Averages hide polarisation: a 3.4 can mean “fine” or “half the team is confused.” Track both the mean and the share of low answers (“1–2”). If you run skills initiatives, connect this to your skill management work so you can tell whether skill language is becoming actionable. A short analysis sketch follows the quick-read sequence below.

Quick read sequence:

  1. Check clarity first (E7–E12, ER1). If clarity is low, fix content before process.
  2. Check fairness next (E13–E18, ER2, M25–M30). If fairness is low, tighten evidence.
  3. Check manager use (E31–E36, M7–M12). If use is low, train and prompt managers.
  4. Check mobility (E25–E30, MR3). If mobility is low, fix equivalence and postings.
  5. Read open text last to explain the “why,” not to replace the numbers.
  • People Analytics builds a dimension dashboard in 10 days after close.
  • HRBP reviews top 3 drivers of low trust (E37–E42) within 14 days.
  • Functional leader reviews role ladder clarity issues (E7–E12) within 14 days.
  • Mobility lead reviews friction in internal moves (E25–E30) within 21 days.
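
To make the “mean plus low-share” reading repeatable, here is a minimal analysis sketch in Python. It assumes responses sit in a pandas DataFrame with one column per item (values 1–5); the DIMENSIONS map and all names are illustrative, not a prescribed schema.

```python
import pandas as pd

# Illustrative mapping of items to dimensions (codes follow the question bank above).
DIMENSIONS = {
    "clarity": [f"E{i}" for i in range(7, 13)],           # E7–E12
    "fairness": [f"E{i}" for i in range(13, 19)],         # E13–E18
    "manager_support": [f"E{i}" for i in range(31, 37)],  # E31–E36
    "mobility": [f"E{i}" for i in range(25, 31)],         # E25–E30
}

def dimension_summary(responses: pd.DataFrame) -> pd.DataFrame:
    """Report the mean and the share of low answers ("1-2") per dimension."""
    rows = []
    for name, items in DIMENSIONS.items():
        answers = responses[items].stack()  # all answers for this dimension, one long series
        rows.append({
            "dimension": name,
            "mean": round(answers.mean(), 2),
            "share_low": round((answers <= 2).mean(), 2),  # share of 1s and 2s
            "n_answers": int(answers.count()),
        })
    return pd.DataFrame(rows)
```

Reading both columns side by side is what exposes polarisation: a dimension with mean 3.4 and share_low 0.30 needs different treatment than one with mean 3.4 and share_low 0.05.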

Turning survey results into fixes people will feel

Low scores rarely mean “people don’t care about careers.” More often, the framework is hard to apply, or decisions feel inconsistent. Fix the system in this order: language, evidence, decision forums, then tools. If you want a practical bridge into manager routines, link the framework to your 1:1 meeting habits and templates so it shows up in real conversations.

If–then rules that work well (a short sketch encoding them follows the lists below):

  • If E7–E12 average <3.2, then rewrite descriptors and add examples within 60 days.
  • If E13–E18 average <3.3, then standardise evidence and run calibration audits next cycle.
  • If E31–E36 average <3.0, then introduce manager scripts and prompts within 45 days.
  • If E25–E30 average <3.1, then standardise internal job levels within 75 days.
  • If E38 average <3.0, then address psychological safety and confidentiality within 7 days.
  • HR drafts a “level evidence checklist” and ships it in 30 days.
  • Functional leaders add 3 level examples per role in 60 days.
  • L&D updates IDP fields to reference skills and levels in 45 days.
  • Department heads run one calibration retro in 21 days after the cycle.
  • Managers add one framework question to every monthly 1:1 within 14 days.
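
If you re-run these rules every cycle, encoding them keeps the thresholds honest. Below is a small sketch of the if–then rules above; the Rule fields, labels, and abbreviated wording are assumptions for illustration.

```python
from dataclasses import dataclass

@dataclass
class Rule:
    dimension: str    # which item block the rule watches
    threshold: float  # trigger when the average falls below this value
    action: str
    deadline: str

# Mirrors the if-then rules listed above.
RULES = [
    Rule("clarity (E7-E12)", 3.2, "Rewrite descriptors, add examples", "60 days"),
    Rule("fairness (E13-E18)", 3.3, "Standardise evidence, run calibration audits", "next cycle"),
    Rule("manager support (E31-E36)", 3.0, "Introduce manager scripts and prompts", "45 days"),
    Rule("mobility (E25-E30)", 3.1, "Standardise internal job levels", "75 days"),
    Rule("psychological safety (E38)", 3.0, "Address safety and confidentiality", "7 days"),
]

def triggered(averages: dict[str, float]) -> list[Rule]:
    """Return every rule whose dimension average is below its threshold."""
    # Missing dimensions default to 5.0 so they never trigger by accident.
    return [r for r in RULES if averages.get(r.dimension, 5.0) < r.threshold]
```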

DACH / GDPR & Betriebsrat notes (practical, non-legal)

If you run this in DACH, treat it as a Mitarbeiterbefragung that touches trust, fairness, and perceived career impact. That means you want strong anonymity, clear aggregation rules, and early alignment with the Betriebsrat. Avoid designing the survey so it feels like a hidden performance rating system.

  • HR sets an anonymity threshold (for example, report only groups with n ≥ 8) before launch.
  • HR confirms results are reported in aggregate, not per person, within 7 days.
  • HR defines retention (for example, delete raw text after 180 days) before launch.
  • HR and Betriebsrat align on purpose and reporting cuts within 30 days pre-launch.
  • HR ensures no direct link to individual ratings or compensation decisions before launch.
| Risk | What it looks like | Simple mitigation | Owner |
|---|---|---|---|
| Re-identification | Small teams, rare roles, very specific comments | Aggregate to larger units; suppress open text under n < 8 | People Analytics |
| Function creep | Using survey answers to justify individual performance actions | Separate survey from performance files; document purpose and access | HR + Legal |
| Trust collapse | Survey runs, nothing changes, silence afterwards | Publish actions within 30 days; show progress at 90 days | Executive sponsor |

Scoring & thresholds

Use a 1–5 agreement scale for most items. Treat results as signals for action, not as a scorecard to rank teams. For decision-making, translate numbers into three zones and attach a response rule (a small sketch of the mapping follows the list below).

  • Low (critical): average <3.0 or ≥25% responses “1” on a key item.
  • Medium (needs work): average 3.0–3.9 or ≥20% responses “1–2”.
  • High (strong): average ≥4.0 and low-response share ≤10% “1–2”.
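
Applied by hand, these zones drift between analysts; a single classification function keeps them consistent. This minimal sketch mirrors the three rules above; note that the 25% rule refers to “1” answers on a key item, while the 10% rule refers to the “1–2” share.

```python
def zone(mean: float, share_1: float, share_low: float) -> str:
    """Classify one dimension.

    mean: average on the 1-5 scale
    share_1: share of "1" answers on the key item
    share_low: share of "1-2" answers across the dimension
    """
    if mean < 3.0 or share_1 >= 0.25:
        return "low (critical)"
    if mean >= 4.0 and share_low <= 0.10:
        return "high (strong)"
    return "medium (needs work)"

# Example: a 3.4 average with 22% low answers lands in "medium (needs work)".
print(zone(mean=3.4, share_1=0.08, share_low=0.22))
```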

Convert scores into actions by dimension. Example: low clarity (E7–E12) triggers content rewrites; low fairness (E13–E18) triggers evidence and calibration changes; low manager use (E31–E36) triggers enablement and manager routines. If you run connected processes, align actions with your performance management cadence so changes show up in reviews and 1:1s.

  • People Analytics publishes a “zone map” per dimension within 10 days after close.
  • HR sets 1–2 actions per red dimension, owner named, within 21 days.
  • Leaders confirm deadlines and resources within 30 days after close.

Follow-up & responsibilities

Fast follow-up protects trust. Decide up front who owns which signals, and commit to response times. Use the same pattern every cycle: HR triages, leaders decide, managers execute, HR tracks.

| Signal | Owner | Response time | Action standard |
|---|---|---|---|
| Very low trust/safety (E37–E42 average <3.0) | HRBP + Dept Head | Initial response ≤7 days | Listening sessions + published confidentiality clarifications |
| Fairness gaps (group differences ≥0.5) | People Analytics + HRBP | Plan in ≤14 days | Calibration audit + evidence checklist refresh |
| Low manager usage (E31–E36, M7–M12 <3.0) | L&D + Managers | Enablement plan ≤21 days | Training + scripts + 1:1 prompts |
| Low mobility clarity (E25–E30 <3.1) | Talent Mobility Lead | Plan ≤21 days | Standardised job levels + internal move steps |
  • HR publishes a follow-up plan with owners within 30 days after close.
  • Managers discuss relevant results with teams within 21 days after close.
  • HR tracks completion rate of actions monthly; first check at 60 days.
  • Exec sponsor reviews progress at 90 days and removes blockers.

Fairness & bias checks

Don’t stop at company-wide averages. Slice results by relevant groups: location, department, level, tenure band, remote vs office, and employment type where appropriate. Use consistent rules: only report groups above your anonymity threshold, and focus on meaningful gaps (for example, ≥0.5 points on 1–5 items, or ≥1.0 on 0–10 ratings).
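
This slicing takes only a few lines of analysis code. The sketch below is a non-authoritative example, assuming the same pandas DataFrame layout as earlier plus a grouping column (for example, a hypothetical “location” field): it computes per-group means for one dimension, suppresses groups below the n ≥ 8 anonymity threshold, and flags gaps of 0.5 points or more.

```python
import pandas as pd

MIN_GROUP_SIZE = 8  # anonymity threshold: report only groups with n >= 8
GAP_FLAG = 0.5      # meaningful gap on 1-5 items

def group_gaps(responses: pd.DataFrame, items: list[str], group_col: str) -> pd.DataFrame:
    """Per-group dimension means with small-group suppression and gap flags."""
    scores = responses[items].mean(axis=1)  # each respondent's dimension score
    df = pd.DataFrame({"group": responses[group_col], "score": scores})
    summary = df.groupby("group")["score"].agg(["mean", "count"])
    summary = summary[summary["count"] >= MIN_GROUP_SIZE]  # suppress small groups
    summary["gap_vs_overall"] = summary["mean"] - scores.mean()
    summary["flagged"] = summary["gap_vs_overall"].abs() >= GAP_FLAG
    return summary.round(2)

# Example call: fairness items E13-E18, sliced by the assumed "location" column.
# group_gaps(responses, [f"E{i}" for i in range(13, 19)], "location")
```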

Typical patterns and what to do:

  • Pattern: One site rates fairness (E13–E18) ≥0.7 points lower. Response: audit local calibration and manager training within 30 days.
  • Pattern: Remote employees rate promotion predictability (E47) lower. Response: require evidence-based promotion packs, not “in-room” impressions, next cycle.
  • Pattern: Juniors score clarity (E7–E12) low while seniors score high. Response: rewrite definitions for readability; add an onboarding module in 45 days.
  • People Analytics runs group-gap checks and flags hotspots within 14 days after close.
  • HRBP validates whether gaps reflect process differences or communication gaps in 21 days.
  • Dept Heads agree corrective actions with deadlines in 30 days after close.

Examples / use cases

A) Low clarity, high motivation: Employees want to grow, but they don’t understand the ladder. You see E7–E12 average 2.9 while ER3 stays at 7/10. Decision: rewrite level descriptors and add examples per level. Action: functional SMEs run a 2-hour workshop, HR publishes an updated ladder in 60 days, and managers use it in 1:1s within 14 days.

B) Fairness concerns concentrated in one division: Company average looks “ok,” but one division is 0.6 points lower on E13–E18 and ER2. Decision: treat it as a calibration and evidence problem, not an engagement problem. Action: HRBP runs a calibration retro, introduces a shared evidence checklist, and sets a consistent “promotion case” template for the next cycle.

C) Mobility exists, but feels risky: E25–E30 average 3.0 and E38 average 2.8. Comments mention fear of being seen as disloyal. Decision: change manager incentives and messaging. Action: leaders state that internal moves are supported, managers are measured on talent development, and HR publishes a simple internal move process with timelines in 45 days.

Implementation & updates

Run this as a product: pilot, roll out, train, and refine. Your first version won’t be perfect, but you can make it reliable fast if you keep the cycle short and the follow-up visible.

  1. Pilot (6–8 weeks): 1–2 departments run blueprint (c) or (a), then you fix wording.
  2. Rollout (next 1–2 cycles): expand to more areas with the same thresholds.
  3. Manager training (2–4 weeks): teach how to use results in 1:1s and reviews.
  4. Annual review: update items, thresholds, and dimension mapping once per year.
  • HR selects a pilot org and confirms stakeholders within 14 days.
  • People Analytics sets dashboards and segmentation rules within 21 days.
  • L&D ships a 45-minute manager module within 45 days.
  • HR runs an annual item review workshop within 30 days before next annual run.
| Metric to track | Target | Why it matters |
|---|---|---|
| Participation rate | ≥70% annual; ≥55% pulse | Low response weakens fairness and hotspot detection |
| Action completion rate | ≥80% of actions delivered by promised deadline | Protects trust in the survey process |
| Clarity score trend (E7–E12, ER1) | +0.3 points or +1/10 within 12 months | Shows your framework is becoming usable |
| Fairness gap trend | Reduce gaps to <0.3 points | Signals more consistent decisions across groups |
| Mobility enablement (E25–E30, MR3) | Average ≥3.6 within 12 months | Links framework to real internal opportunities |

Conclusion

Career frameworks fail quietly: people nod in presentations, then ignore them in real conversations. A focused set of career framework survey questions shows where the gap is—clarity, fairness, manager use, mobility, or psychological safety—before it turns into attrition or promotion disputes.

If you act fast, you get three benefits: you spot problems earlier, you improve the quality of career conversations, and you prioritise framework fixes based on evidence. Next steps: pick one blueprint (baseline, annual, or pulse), load the selected items into your survey tool, and name owners for the top 3 actions before you launch. Then commit to publishing results and actions within 30 days.

FAQ

How often should you run career framework survey questions?

Run an annual health check for employees (18–22 items) to track clarity and fairness trends. Add short pulses (10–12 items) 2–6 weeks after major changes like re-leveling or a new ladder version. If you run a promotion cycle twice per year, a small post-cycle pulse can catch process issues while memories are fresh.

What should you do if scores are very low (for example, average <3.0)?

Don’t debate the numbers for weeks. First, identify which dimension is failing: clarity, fairness, manager support, mobility, or trust. Then pick 1–2 corrective actions and publish them with owners and deadlines within 30 days. If the low area is psychological safety (E37–E42), respond within 7 days with listening sessions and clear confidentiality rules.

How do you handle critical open-text comments without overreacting?

Sort comments into themes, then check whether the theme aligns with low-scoring items. Treat open text as explanation, not as a standalone decision engine. Remove identifying details before sharing excerpts. If you see allegations of unfair treatment, route them to HRBP for a separate, appropriate process. Keep the survey feedback loop focused on system fixes.

How do you involve managers without turning this into “HR policing”?

Give managers a clear role: explain results to their teams, select small improvements, and use the framework in 1:1s. Provide scripts and templates so managers don’t invent their own definitions. Make it easy to do the right thing: add prompts into your review and 1:1 workflows, and measure adoption with simple indicators like “we discussed level expectations this month.”

How do you update the question bank over time?

Review the question set once per year, right after you’ve delivered the main follow-up actions. Remove items with consistently high scores (≥4.3) to keep surveys short, and add items where your framework is evolving (for example, a new skills model or a new internal mobility process). Keep your dimension mapping stable so trends stay comparable across years.

Jürgen Ulbrich

CEO & Co-Founder of Sprad

Jürgen Ulbrich has more than a decade of experience in developing and leading high-performing teams and companies. As an expert in employee referral programs as well as feedback and performance processes, Jürgen has helped over 100 organizations optimize their talent acquisition and development strategies.
