Clear, role-based AI training makes expectations transparent: which skills belong to which level, what “good” looks like, and how you judge progress. This framework turns AI training from ad‑hoc workshops into a shared language for hiring, promotion, feedback, and development across employees, managers, HR and AI champions.
| Skill area | Starter | Practitioner | Power User | Leader / Champion |
|---|---|---|---|---|
| AI literacy & mindset | Explains basic GenAI concepts in simple words; knows company AI policy and main risks. | Identifies realistic AI use cases in own role; distinguishes facts from AI hype. | Connects AI capabilities to team workflows; explains trade‑offs and limitations to others. | Shapes AI vision for their area; challenges unrealistic expectations with data and examples. |
| Prompting & tool usage | Uses approved tools with simple prompts (summaries, rephrasing) following templates. | Designs structured prompts with context and constraints; iterates to reach reliable results. | Builds reusable prompt libraries and templates for team‑specific tasks. | Standardizes prompting patterns across teams; evaluates and selects new tools with IT. |
| Data privacy & governance | Knows which data must never go into public AI tools; follows basic do’s and don’ts. | Checks prompts and outputs for GDPR and confidentiality issues before sharing. | Designs safe workflows (anonymisation, internal sandboxes) and documents guardrails. | Co‑creates AI policies, AVVs/DPAs and works‑council agreements; monitors adherence. |
| Role‑specific AI use cases | Executes 1–2 simple use cases (e.g. email drafts, document summaries) with guidance. | Runs common role tasks with AI (e.g. job ads, 1:1 agendas, survey summaries) independently. | Redesigns processes to embed AI end‑to‑end and measures time/quality impact. | Prioritises a portfolio of AI use cases, allocates resources and reports business outcomes. |
| Collaboration & knowledge sharing | Shares helpful prompts with colleagues informally; participates in AI discussions. | Documents before/after examples; presents learnings in team meetings. | Facilitates AI clinics or office hours; mentors colleagues on safe, effective use. | Builds formal champion network; aligns learning offers with HR and L&D. |
| Critical thinking & quality control | Double‑checks AI output for obvious errors; asks for manager review when unsure. | Uses checklists to validate facts, tone and bias; corrects outputs independently. | Combines AI output with data and domain expertise; rejects low‑quality use cases. | Defines quality standards and review steps; ensures humans keep final decisions. |
| Change & enablement | Voices concerns and questions openly during AI rollouts; tries agreed pilots. | Encourages peers to test new AI workflows; gives structured feedback on tools. | Leads AI experiments in their team; adjusts roles and routines accordingly. | Owns AI enablement roadmap; aligns with works council, IT and business strategy. |
Key takeaways
- Use the framework to link AI skills directly to levels and promotions.
- Plan role‑specific AI training curricula instead of generic all‑hands workshops.
- Collect concrete evidence: prompts, outputs, use cases, before/after process changes.
- Run regular check‑ins and calibration rounds to reduce bias in AI‑related ratings.
- Update the framework yearly as tools, laws and business priorities evolve.
What this framework is for
This AI training skill framework defines observable behaviours for Starter–Leader levels across key competence areas. You use it to design role‑based curricula, run consistent performance reviews, decide promotions, and structure development plans. Together with your skills matrix and talent processes, it becomes the reference for “what good AI use looks like” in your company.
Skill levels & scope for role‑based AI training
The same level means a different scope depending on the audience. A Starter HRBP carries more responsibility than a Starter employee, even if both are just beginning their AI training. You therefore define levels by both proficiency and impact radius: own tasks, own team, function, or company‑wide.
All employees / knowledge workers
Starters focus on safe experimentation in their own tasks. Practitioners reliably use AI for everyday work (emails, summaries, simple analysis). Power Users redesign workflows and help colleagues. Champions in this group are rare; often they transition into “AI Champion” roles.
People managers
Starters use AI to prepare 1:1s and team updates. Practitioners manage performance inputs (feedback summaries, review drafts) with AI while coaching their teams on safe use. Leaders shape how AI supports goals, staffing, and performance routines across several teams.
HR & people teams
Starters improve one HR process with AI (e.g. job ads). Practitioners embed AI in multiple workflows: recruiting, surveys, performance, internal mobility. Leaders work on skills‑based talent management, governance and organisation‑wide AI capability building, often together with IT and business leaders.
AI champions / power users
Starters experiment with advanced prompts and share tips. Practitioners build simple automations and coach local teams. Leaders run pilots, connect tools and shape the internal AI roadmap. They often partner with HR to keep curricula current and realistic.
- Describe per audience which decisions each level can take independently when using AI.
- Document 3–5 typical outputs or workflows expected at each level.
- Align levels with your broader career framework, not in a separate “AI universe”.
- Link levels to onboarding, performance reviews and promotion criteria.
- Use the same labels (Starter, Practitioner, etc.) across all AI training content.
Curriculum templates by audience for AI training
Generic AI training creates excitement for a week and then disappears. Role‑based curricula make skills stick because people practise on their real tasks. You can adapt the modules below to tools like ChatGPT, Copilot, Atlas AI or embedded AI in HR platforms.
All employees / knowledge workers
| Module | Topic | Learning goals | Duration | Format | Example exercise |
|---|---|---|---|---|---|
| 1 | Generative AI fundamentals | Explain what GenAI can and cannot do; recall your company AI policy. | 60 min | Live workshop | Map 3–5 daily tasks where AI could support you safely. |
| 2 | Prompting basics | Write clear prompts with context and tone; compare outputs. | 45 min | Live lab | Iterate a vague prompt into a useful one; document quality improvements. |
| 3 | AI in productivity tools | Use AI in email/docs (Copilot, ChatGPT) for drafting and formatting. | 30 min | Demo + practice | Draft an email with AI, then refine to match your usual style. |
| 4 | Data privacy & compliance | Identify forbidden data types; choose approved tools only. | 30 min | e‑Learning | Classify example prompts as “allowed” or “not allowed”, explain why. |
| 5 | Summaries & note‑taking | Summarise emails, documents and meetings accurately; check for errors. | 45 min | Hands‑on workshop | Summarise a long internal document and compare against a human summary. |
| 6 | Mini capstone | Apply AI to a real work problem and share results. | 60–90 min | Team project | Redesign one weekly task with AI; show before/after time and quality. |
People managers
| Module | Topic | Learning goals | Duration | Format | Example exercise |
|---|---|---|---|---|---|
| 1 | AI for 1:1s and team meetings | Use AI to prepare agendas, summaries and follow‑ups for meetings. | 45 min | Workshop | Generate next week’s 1:1 agenda from notes and goals; refine wording. |
| 2 | Feedback & performance reviews | Draft fair feedback bullets from raw notes while preserving your own judgement. | 60 min | Demo + lab | Create a review draft with AI, then adjust to remove bias and add specifics. |
| 3 | Team reporting with AI | Turn KPIs and notes into concise stakeholder updates. | 30 min | Hands‑on demo | Feed weekly data into AI and iterate a clear, audience‑specific report. |
| 4 | Leading AI adoption | Plan how to introduce AI to your team and address concerns. | 30 min | Guided discussion | Draft a short AI adoption plan for your team, including risks and messages. |
| 5 | Risk & ethics in people decisions | Spot biased, inaccurate or incomplete AI suggestions. | 30 min | e‑Learning | Review a biased AI hiring suggestion and rewrite it to be fair. |
| 6 | Manager case clinic | Apply AI to your own leadership challenges. | 60 min | Peer workshop | Bring a real case (e.g. restructure, PIP) and test safe ways AI can support it. |
HR & people teams
| Module | Topic | Learning goals | Duration | Format | Example exercise |
|---|---|---|---|---|---|
| 1 | AI basics, GDPR & AI Act | Understand GenAI, legal bases, AVVs/DPAs and DACH specifics. | 45 min | Workshop | Map which existing HR tools already contain AI features and needed agreements. |
| 2 | Bias‑aware recruiting | Write inclusive job ads and outreach; check for stereotypes. | 45 min | Guided lab | Rewrite a biased job ad using ChatGPT and compare language before/after. |
| 3 | Skill‑based screening | Use AI to extract skills from CVs and match to role profiles. | 30 min | Demo + practice | Run 5 fictional CVs through an AI prompt; adjust criteria together. |
| 4 | Interview guides & scorecards | Generate behaviour‑based questions and structured scorecards. | 30 min | Workshop | Create an interview guide for one role, then pilot it with colleagues. |
| 5 | 360° & survey analysis | Summarise comments into themes while protecting anonymity. | 45 min | Live lab | Analyse a sample survey data set with AI and validate themes manually. |
| 6 | Skills taxonomy & matrices | Maintain living skill frameworks and matrices with AI support. | 30 min | Discussion + lab | Update one team’s skill matrix using AI suggestions; agree final version. |
| 7 | AI in performance & talent | Use AI to support calibration, talent reviews and IDPs. | 30 min | Demo | Generate draft development actions from review notes; refine with managers. |
| 8 | Governance & roadmap | Align training, tools and policies; plan reviews. | 60 min | Team session | Build an HR AI enablement roadmap for the next 12 months. |
AI champions / power users
| Module | Topic | Learning goals | Duration | Format | Example exercise |
|---|---|---|---|---|---|
| 1 | Advanced prompting patterns | Use chains, roles and constraints for complex tasks. | 45 min | Workshop | Design a multi‑step prompt to research, compare and summarise options. |
| 2 | Tool integrations | Connect AI tools to Slack/Teams or internal apps with low‑code tools. | 60 min | Lab | Automate a simple notification workflow based on AI‑classified messages (see the sketch after this table). |
| 3 | Custom workflows & APIs | Build and document one reusable workflow per quarter. | 60 min | Guided project | Create a script that turns raw data into an AI‑generated executive summary. |
| 4 | Safe experimentation | Apply privacy‑first design and anonymisation in pilots. | 45 min | Policy review + demo | Sanitise a real dataset and explain your decisions to legal/HR. |
| 5 | Community & enablement | Run office hours, clinics and internal communication about AI. | 30 min | Peer coaching | Prepare a 20‑minute micro‑training for a target team. |
| 6 | Trend monitoring | Scan updates and propose curriculum/tool changes. | 30 min | Self‑study | Evaluate a new feature and suggest whether to pilot or ignore it. |
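To make module 2 tangible, here is a minimal sketch of an AI‑classified notification workflow. It assumes the openai Python SDK (version 1.x) and an approved, company‑managed endpoint; the model name and webhook URL are placeholders, not recommendations, so swap in whatever your IT has signed off.

```python
# Minimal sketch for module 2: classify incoming messages with an LLM and
# notify a channel. Assumes the openai Python SDK (>= 1.0), which reads
# OPENAI_API_KEY from the environment; WEBHOOK_URL is a hypothetical
# placeholder for your own approved incoming webhook (Slack/Teams).
import requests
from openai import OpenAI

client = OpenAI()
WEBHOOK_URL = "https://example.invalid/your-webhook"  # placeholder

def classify_message(text: str) -> str:
    """Label a message as urgent, routine or spam (illustrative labels)."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumption: use whatever model IT approved
        messages=[
            {"role": "system",
             "content": "Classify the user's message strictly as one word: "
                        "urgent, routine or spam."},
            {"role": "user", "content": text},
        ],
    )
    return response.choices[0].message.content.strip().lower()

def notify_if_urgent(text: str) -> None:
    """Post urgent messages to a team channel via the webhook."""
    if classify_message(text) == "urgent":
        requests.post(WEBHOOK_URL, json={"text": f"Urgent: {text}"})

notify_if_urgent("Customer escalation: contract renewal blocked since Monday.")
```

The same pattern extends to module 3: replace the classifier with a summarisation prompt and write the result to a document instead of a webhook.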
For deeper design tips you can cross‑check with your existing skill systems, for example the guidance in Skill Management or the AI‑specific overview in AI training programs for companies.
Practice exercises and guardrails by audience
All employees
- Summarise a long email thread into three bullet points and a proposed reply.
- Ask AI to rewrite your message for a different audience (peer vs. senior leader).
- Turn meeting notes into action items, then verify with your team.
- Create a checklist for a recurring task using AI and test it next week.
Never include personal data (health, salary, performance issues, addresses), client identifiers, internal financials or unreleased product information in public AI tools. Use approved, company‑managed tools for any content that touches real people or customers.
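Where semi‑sensitive text must be processed at all, a simple pre‑prompt filter can act as a first line of defence. The sketch below is deliberately basic, using two regexes; real anonymisation needs approved tooling, because names and context clues slip straight through patterns like these.

```python
# Deliberately simple pre-prompt sanitisation sketch. The regexes catch only
# obvious patterns (emails, phone-like numbers); treat this as a first filter,
# not a substitute for approved anonymisation tooling.
import re

PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "phone": re.compile(r"\+?\d[\d\s/-]{7,}\d"),
}

def sanitise(text: str) -> str:
    """Replace obvious personal identifiers with typed placeholders."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label.upper()} REMOVED]", text)
    return text

raw = "Contact Jana at jana.mueller@example.com or +49 170 1234567."
print(sanitise(raw))
# -> Contact Jana at [EMAIL REMOVED] or [PHONE REMOVED].
# Note: the name "Jana" still leaks; names need smarter handling.
```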
People managers
- Turn raw notes from three 1:1s into structured agendas for next week.
- Generate two alternative wordings for constructive feedback, then choose and adapt one.
- Summarise team feedback into three themes and discuss them in your next retro.
- Draft a role description and growth plan for a stretch assignment with AI support.
HR & people teams
- Draft a job ad from a competency profile; remove biased wording AI might propose.
- Ask AI to suggest 10 behavioural interview questions for one key competency.
- Analyse sample survey comments into themes and sentiment; validate with peers.
- Create a first draft skill matrix for one role family and refine with managers.
AI champions
- Create a workflow where survey comments flow into an AI summary in your BI tool.
- Design a reusable prompt template library and publish it to your intranet.
- Run a mini A/B test: AI‑assisted vs. manual process; measure time and quality (see the sketch after this list).
- Document one pilot end‑to‑end, including risks, mitigations and Go/No‑Go criteria.
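For the A/B test above, a few lines of Python are enough to compare averages. This is a sketch with made‑up numbers; collect your own per‑run measurements.

```python
# Sketch for the mini A/B test: compare time spent and a simple 1-5 quality
# rating between manual and AI-assisted runs. All numbers are illustrative.
from statistics import mean

runs = {
    "manual":      {"minutes": [42, 38, 45, 40], "quality": [4, 4, 3, 4]},
    "ai_assisted": {"minutes": [25, 30, 22, 28], "quality": [4, 3, 4, 4]},
}

for variant, data in runs.items():
    print(f"{variant:12s} avg time: {mean(data['minutes']):5.1f} min, "
          f"avg quality: {mean(data['quality']):.2f}/5")

saved = mean(runs["manual"]["minutes"]) - mean(runs["ai_assisted"]["minutes"])
print(f"Average time saved per run: {saved:.1f} min")
```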
Roll‑out roadmaps for company‑wide AI training
Instead of one big AI day, run sprints. Below are three example roadmaps you can adapt. They mix live sessions, self‑paced learning and office hours, aligned with findings from AI training for employees.
4–6 week program for all employees
| Week | Main focus | Touchpoints |
|---|---|---|
| 1 | AI 101 & policy | Company‑wide kickoff, 60‑min AI 101, 20‑min e‑learning on policy and GDPR. |
| 2 | Prompting basics | Live prompt lab per department; self‑paced practice with 3–4 tasks. |
| 3 | Productivity use cases | Short demos in Teams/Slack; office hours with AI champions. |
| 4 | Role‑specific labs | Functional sessions (Sales, Ops, etc.) with hands‑on scenarios. |
| 5–6 | Capstone & reflection | Team mini‑projects, showcase session, short survey on confidence and adoption. |
6–8 week manager‑focused AI training path
| Week | Main focus | Touchpoints |
|---|---|---|
| 1 | Foundations & expectations | Kickoff with HR; clarify manager responsibilities for AI use and role‑modelling. |
| 2 | 1:1s and feedback | Workshop with exercises on AI‑assisted agendas and feedback drafts. |
| 3 | Team reporting | Lab on AI‑generated reports from KPIs and notes. |
| 4 | Performance reviews | Session on fair AI support in reviews and calibration. |
| 5 | Coaching & development | Use AI for coaching questions and IDP drafts; integrate with your performance management approach. |
| 6–8 | Team‑specific experiments | Managers run one AI experiment per team and report outcomes. |
8–12 week HR/people enablement path
| Week | Main focus | Touchpoints |
|---|---|---|
| 1–2 | AI, GDPR, works council | Deep‑dive with legal/IT; align on AVVs, data flows and Betriebsrat involvement. |
| 3 | Recruiting use cases | Labs on job ads, sourcing messages, screening prompts. |
| 4 | Surveys & feedback | Work on survey, 360° and review summarisation. |
| 5–6 | Skills & career frameworks | Design or refine skill matrices and career levels with AI support. |
| 7–8 | Governance & checklists | Create an AI governance checklist and link to HR policies. |
| 9–12 | Implementation & tooling | Select tools, pilot in one BU, measure impact, adjust curricula. |
Competence areas in AI training curricula
The framework uses six core competence areas so you don’t drown in detail. You can adapt names, but keep outcomes and behaviours clear.
Suggested competence areas
AI literacy & mindset. Understanding basic concepts, opportunities and limits. Result: realistic expectations and willingness to test AI without fear or blind faith.
Prompting & tool usage. Shaping input and context to reach reliable outputs in ChatGPT, Copilot, Atlas AI or domain tools. Result: higher quality, less trial‑and‑error (see the sketch below).
Data privacy, security & ethics. Applying GDPR, company policy and DACH specifics in daily use. Result: fewer incidents, less rework with legal/IT.
Role‑specific workflows. Using AI where it matters: job ads, 1:1s, forecasts, ticket triage, etc. Result: measurable time savings or quality improvements.
Critical thinking & quality control. Checking facts, sources and bias; combining AI with domain expertise. Result: trustworthy outputs and lower risk.
Collaboration & enablement. Sharing learnings, templates, prompts; helping others adopt AI. Result: faster spread of good practices across teams.
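To pin down what a “structured prompt with context and constraints” looks like in the prompting area, here is a minimal sketch of a reusable template. The field names are one possible shape, not a standard; adapt them to your own prompt library.

```python
# Sketch of a reusable structured-prompt template. Field names (role, context,
# task, constraints) are illustrative; adapt them to your own prompt library.

def build_prompt(role: str, context: str, task: str, constraints: list[str]) -> str:
    """Assemble a prompt with explicit role, context, task and constraints."""
    constraint_lines = "\n".join(f"- {c}" for c in constraints)
    return (
        f"You are {role}.\n\n"
        f"Context:\n{context}\n\n"
        f"Task:\n{task}\n\n"
        f"Constraints:\n{constraint_lines}"
    )

print(build_prompt(
    role="an HR assistant drafting internal communication",
    context="We are announcing a new 1:1 meeting cadence for all teams.",
    task="Draft a short announcement email (max 150 words).",
    constraints=[
        "Friendly but professional tone",
        "No personal data or names",
        "End with a clear call to action",
    ],
))
```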
For HR‑specific competence examples you can reuse parts of your existing HR skills matrix or the ideas from AI training for HR teams.
- Limit competence areas to 6–8; merge overlapping topics ruthlessly.
- Describe each area with 1–2 sentences and clear outcomes, not tool names.
- Align areas with your broader competency framework where possible.
- Map each training module to 1–2 competence areas for clarity.
- Review areas annually as AI capabilities and regulations change.
Rating scale & evidence
A simple, shared scale keeps AI skills comparable across teams. Combine it with evidence types that are easy to collect during normal work.
Rating scale (1–5)
- 1 – No exposure. Has not started using AI tools or needs constant support.
- 2 – Starter. Uses templates in simple scenarios, needs check‑ins for safety and quality.
- 3 – Practitioner. Uses AI independently in common tasks, follows policies reliably.
- 4 – Power User. Optimises workflows, supports others, measures impact.
- 5 – Leader/Champion. Shapes strategy, governance and curricula; acts as multiplier.
Evidence examples
- Prompt and output logs (screenshots, text snippets) for typical tasks.
- Before/after examples of documents, processes or communication.
- Time‑saved estimates, quality metrics, or error‑rate changes.
- Feedback from peers, stakeholders or customers on AI‑assisted work.
- Participation in AI champion activities, clinics or content creation.
Example: same task, different level
Case A – Starter. Uses AI to draft a job ad from an old template, copies text almost 1:1, forgets to remove phrases conflicting with your employer branding and must redo parts after HR review.
Case B – Practitioner. Starts from the role profile, includes required skills, checks AI output for bias, adjusts tone to your EVP, and delivers a final draft that only needs minor edits from HR.
Growth signals & warning signs
Clear signals help you decide when someone is ready for the next AI skill level and when promotion would be premature.
Growth signals
- Consistently uses AI for core tasks without breaching policy.
- Brings 2–3 solid before/after examples per quarter with clear impact.
- Colleagues ask them for help on prompts or safe use cases.
- Identifies risks proactively and involves legal/IT/HR when needed.
- Supports AI adoption beyond their own workload (coaching, templates).
Warning signs
- Copies AI output verbatim without sense‑checking facts or tone.
- Repeatedly ignores or forgets data‑privacy guardrails.
- Uses AI to avoid thinking; quality drops when prompts fail.
- Hoards knowledge; no documentation of prompts or workflows.
- Frames AI as a threat in team communication instead of an enabler.
Team check‑ins & review sessions
AI skills should show up in your existing people routines, not in separate side‑conversations. Combine this framework with your talent management and performance management approach.
Suggested formats
- Quarterly team check‑in: everyone brings one AI use case (success or failure).
- Calibration session: managers compare 2–3 examples per person against level anchors.
- Retros after each training wave: what stuck, what changed, what needs updates.
- Office hours with AI champions: open Q&A and live prompt debugging.
- Annual talent review: AI competence as one dimension in succession and promotion talks.
Tools like Sprad Growth or similar platforms – with features like Atlas AI for 1:1 agendas – can attach AI‑related evidence directly to reviews and development plans, so you don’t lose insights between cycles.
Interview questions for AI‑related skills
You can also use this framework in hiring. Behavioural questions uncover whether candidates already work like your Practitioners or Power Users.
AI literacy & mindset
- Tell me about a time you used an AI tool to improve your work. What changed?
- Describe a situation where AI did not deliver what you expected. How did you react?
- How do you decide when AI is appropriate for a task and when not?
- What AI developments do you follow, and how have they influenced your work?
Prompting & tool usage
- Describe a complex prompt you designed. How did you refine it?
- Share an example where you iterated prompts to fix a wrong or weak output.
- How do you document prompts so others can reuse them?
- Tell me about a time you combined multiple AI tools or features in one workflow.
Data privacy & ethics
- Tell me about a situation where you decided not to use AI because of data concerns.
- How do you anonymise data before putting it into AI tools?
- Describe a moment when you noticed biased AI output. What did you do next?
- How would you explain safe AI use to a colleague who is unsure?
Role‑specific workflows
- Walk me through a recurring task you significantly improved with AI. What was the outcome?
- How do you measure whether AI really saves time or improves quality?
- Share an example where you piloted AI in your team or function.
- Which of your current processes would you experiment with AI next, and why?
Collaboration & enablement
- Describe a time you taught others how to use an AI tool.
- How do you handle colleagues who are sceptical or fearful about AI?
- Give an example of AI‑related documentation or guides you created.
- How do you stay up‑to‑date on AI features relevant to your role?
Governance, GDPR & DACH specifics
DACH companies must align AI training with GDPR, works‑council rights and sector rules. Coordinate early with IT, legal and employee representatives, not after tools are already live. The governance approach in AI enablement in HR is a good reference.
- Use only tools with EU data residency and signed AVVs/DPAs for employee or candidate data.
- Clarify which data categories are strictly off‑limits for public AI tools.
- Agree with the Betriebsrat on how AI is used in performance processes, never as an automated rater.
- Document all AI use cases that touch personal data and review them yearly.
- Include governance and “human in the loop” principles directly in training modules.
Implementation & updates
Treat this AI training framework like any other core people standard: start small, assign owners, and update regularly.
Implementation steps
- Run a light skill gap/needs analysis, for example using a simple skills matrix (see the sketch after this list).
- Select 1–2 pilot groups (e.g. HR and one business team) for the first wave.
- Train managers first so they can coach their teams during AI experiments.
- Integrate AI skills and evidence fields into review and development templates.
- Define success metrics: adoption, time saved, quality, risk incidents, engagement.
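For the first step, a gap analysis can start as a tiny script before any tooling. The sketch below compares current levels against targets on the 1–5 scale from the rating section; all names and numbers are made up.

```python
# Sketch of a light skill-gap analysis against target levels (1-5 scale from
# the rating section above). All names and numbers are illustrative.
target_levels = {"prompting": 3, "data_privacy": 3, "role_use_cases": 2}

team = {
    "Alex": {"prompting": 2, "data_privacy": 3, "role_use_cases": 1},
    "Sam":  {"prompting": 3, "data_privacy": 2, "role_use_cases": 2},
}

for person, levels in team.items():
    gaps = {skill: target - levels[skill]
            for skill, target in target_levels.items()
            if levels[skill] < target}
    print(person, "->", gaps or "meets all targets")
```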
Ongoing maintenance
- Nominate an AI Skills Owner in HR/L&D; give them clear decision rights.
- Collect feedback from each training wave and update modules quarterly if needed.
- Review level descriptions and competence areas annually with managers and champions.
- Align updates with your talent management and skill management roadmaps.
- Archive old versions to stay audit‑ready and show evolution over time.
Connecting AI training with your existing HR stack
AI training works best when it plugs into your existing skill, performance and governance stack. Think sequence: assess → design → deliver → measure → adapt.
- Assess skills and needs using a skills matrix and a simple gap analysis.
- Design curricula per audience using the tables above and your career framework.
- Deliver in blended form: live, labs, micro‑learning, supported by tools like Sprad Growth.
- Measure impact with performance and talent metrics, not just training satisfaction.
- Adapt contents based on new tools, regulations and business priorities each year.
You can reuse many building blocks from existing resources such as your skill matrices, IDP templates and governance checklists. Articles like Skill matrix templates, Skill gap analysis and AI training for employees give concrete formats you can plug in.
Conclusion
Clear, role‑based AI training turns AI from buzzword into observable behaviour. When employees, managers, HR and champions share the same expectations, you gain clarity in reviews, fairness in promotions and a development path that feels achievable instead of overwhelming. AI becomes part of daily work, not a side project.
To get started, pick one audience and one pilot business area in the next 4–6 weeks. Map their current skills against this framework, choose 6–8 modules, and run the first sprint. In parallel, adapt your performance and talent templates so AI‑related evidence is visible in the next cycle.
Within 6–12 months, aim to embed AI competence in your core people processes: job profiles, performance reviews, promotion committees and learning plans. Assign an AI Skills Owner, involve works councils early, and review your curriculum annually. That way, your AI training stays practical, compliant and tightly linked to real career growth.
FAQ
How do we use this framework in everyday performance reviews?
Link each role to 3–5 relevant competence areas and a target level. In reviews, ask employees to bring two AI‑related examples per area: prompts, outputs, before/after processes. Managers rate behaviours against the anchors rather than by gut feel. Use calibration meetings to compare cases across teams and adjust ratings. Over time, this creates shared standards for AI‑related performance.
How can we avoid bias when evaluating AI skills?
Bias shows up when “loud” AI users get more credit than cautious but effective ones. Counter that with behaviourally anchored scales, evidence requirements and cross‑team calibration. Encourage managers to test outputs, not just listen to confident narratives. A simple checklist (“What evidence? Over which time? Any risks?”) keeps ratings grounded. Training managers on common review biases also helps.
How do we keep AI training from overwhelming employees?
Short, spaced learning beats marathons. Limit formal AI training to 1–2 hours per week during a sprint. Focus modules on real tasks that save time immediately, like email drafts or interview questions. Offer optional deep dives for power users. Pulse surveys after each wave reveal if pace or complexity feels too high, so you can slow down or simplify examples.
Which metrics show whether our AI training works?
Track a mix of adoption, performance and risk metrics. For adoption: percentage of employees using approved tools weekly. For performance: time saved on target workflows, quality improvements, hiring or review cycle times. For risk: data‑incident counts, governance breaches, or complaints. According to a McKinsey study, companies that link AI initiatives to clear business KPIs see larger productivity gains.
How often should we update the AI training framework?
Review competence areas and level descriptions at least once per year, or after major legal or tooling changes. Smaller tweaks – new examples, updated prompts, or extra guardrails – can happen quarterly based on feedback from champions and managers. Maintain version history and communicate changes clearly so people know what’s new. Keeping it living and lightweight is better than redesigning everything every few years.