This template gives you ready-to-use manager 360 feedback questions so you can run a fair, focused leadership review without weeks of prep. You get a full question bank, clear thresholds, and concrete follow-up actions that help you spot issues early and have better development conversations with your managers.
Survey questions: manager 360 feedback questions bank
Use the same questions for all rater groups (self, manager, peers, direct reports). Rate each statement on a 1–5 scale: 1 = Strongly disagree, 2 = Disagree, 3 = Neither agree nor disagree, 4 = Agree, 5 = Strongly agree.
2.1 Closed questions (Likert scale)
- Q1. My manager holds regular 1:1s that focus on my progress and well-being.
- Q2. My manager gives specific, timely feedback that helps me improve.
- Q3. My manager supports my long-term development, not only current tasks.
- Q4. My manager helps me create and follow through on an individual development plan (IDP).
- Q5. My manager adapts their coaching style to my experience and needs.
- Q6. My manager recognizes my efforts and achievements in a meaningful way.
- Q7. My manager addresses performance or behavior issues early and constructively.
- Q8. My manager communicates expectations and priorities clearly.
- Q9. My manager listens actively and lets others finish their thoughts.
- Q10. My manager shares relevant information from other teams or leadership in a timely way.
- Q11. My manager works effectively with other teams to solve cross-functional issues.
- Q12. My manager handles conflicts in a fair and solution-focused way.
- Q13. My manager makes sure different perspectives are heard before decisions are made.
- Q14. My manager adjusts their communication style for different audiences (team, executives, stakeholders).
- Q15. My manager provides a clear direction for our team that connects to company goals.
- Q16. My manager prioritizes work based on impact and capacity, not just urgency.
- Q17. My manager uses data and evidence when making important decisions.
- Q18. My manager explains the reasons behind key decisions in an understandable way.
- Q19. My manager anticipates risks and prepares realistic contingency plans.
- Q20. My manager regularly reviews and adjusts priorities when circumstances change.
- Q21. My manager involves the right people in strategic discussions and decisions.
- Q22. My manager follows through on commitments and agreed action items.
- Q23. My manager removes blockers so the team can progress.
- Q24. My manager sets clear ownership for tasks and decisions.
- Q25. My manager holds people accountable in a fair and consistent way.
- Q26. My manager focuses the team on outcomes, not just activities.
- Q27. My manager helps us balance quality, speed, and risk when delivering work.
- Q28. My manager monitors progress and course-corrects when needed.
- Q29. I feel safe to speak up about problems or mistakes with this manager.
- Q30. My manager encourages critical questions, even if they challenge their own views.
- Q31. My manager addresses disrespectful or toxic behavior, regardless of who shows it.
- Q32. People with different backgrounds and opinions are treated fairly in this team.
- Q33. My manager shares their own learning moments and admits mistakes.
- Q34. My manager gives credit to the team rather than taking it alone.
- Q35. In difficult situations, I can trust my manager to support the team.
- Q36. My manager behaves consistently with our stated company values.
- Q37. My manager acts with integrity, even when it is uncomfortable.
- Q38. My manager is transparent about potential conflicts of interest.
- Q39. My manager treats everyone with respect, regardless of role or seniority.
- Q40. My manager makes decisions that balance business results and people impact.
- Q41. My manager is a positive role model for the kind of leader we want in this company.
- Q42. I would be confident having this manager represent our team to senior leadership or external partners.
2.2 Overall / NPS-style question
Use a 0–10 scale: 0 = Not at all likely, 10 = Extremely likely.
- Q43. How likely are you to recommend this manager as a leader to a colleague?
2.3 Open-ended questions
Use these to gather concrete examples and development input. Encourage raters to give short, specific answers.
- O1. What is one specific behavior this manager should start doing to be more effective?
- O2. What is one specific behavior this manager should stop doing because it reduces their impact?
- O3. What is one specific behavior this manager should continue because it works well?
- O4. Please share one recent situation that best illustrates how this manager leads the team.
Decision & action table
The questions group into six domains: People leadership & coaching (Q1–Q7), Collaboration & communication (Q8–Q14), Strategic thinking & decision-making (Q15–Q21), Execution & ownership (Q22–Q28), Culture, inclusion & psychological safety (Q29–Q35), Role modeling & values (Q36–Q42).
| Domain / questions | Threshold (1–5 scale) | Recommended action | Owner | Timeline |
|---|---|---|---|---|
| People leadership & coaching (Q1–Q7) | Average <3.2 or ≥30% “Disagree/Strongly disagree” | Schedule targeted coaching skills training and set 3 concrete coaching habits. | HR + manager’s manager | Training within 30 days; habit check-in after 60 days |
| Collaboration & communication (Q8–Q14) | Average <3.4 | Run a 60–90 minute team workshop to map interface issues and agree communication norms. | Manager + HR facilitator | Workshop within 45 days; follow-up survey after 90 days |
| Strategic thinking & decision-making (Q15–Q21) | Average <3.3 or gap ≥0.5 vs. manager’s rating | Introduce quarterly priority-setting ritual; pair the manager with a more senior leader as mentor. | Manager’s manager | Ritual designed within 21 days; mentor assigned within 30 days |
| Execution & ownership (Q22–Q28) | Average <3.3 and at least 2 items below 3.0 | Define 2–3 execution KPIs; review in monthly 1:1s; clarify ownership matrix. | Manager + HRBP | KPIs agreed within 14 days; first review within 45 days |
| Culture, inclusion & psych. safety (Q29–Q35) | Any item ≤2.5 or average <3.5 | Escalate as risk; run psychological safety workshop; agree 3 team norms and track. | HR + manager’s manager | Escalation within 7 days; workshop within 30 days |
| Role modeling & values (Q36–Q42) | Any item ≤2.5, or average <3.5 for 2 cycles | Discuss in promotion/succession processes; define explicit behavior expectations and consequences. | HR + functional head | Expectations set within 21 days; review in next talent cycle |
| Overall NPS-style score (Q43) | Average ≤7.0 or ≥25% scores 0–6 | Ask manager to co-create 2–3 focus areas from feedback and document in development plan. | Manager + manager’s manager | Plan drafted within 30 days; progress review after 90 days |
| Rater group spread (self vs. others) | Self-rating >0.7 above all others in ≥3 domains | Use in coaching to address blind spots; practice asking for ongoing feedback. | Coach or HRBP | First coaching session within 30 days; follow-up in 90 days |
Key takeaways
- Use one question bank for all rater groups to compare perspectives.
- Work with 6–8 clear domains instead of dozens of scattered items.
- Trigger actions at set score thresholds, not gut feel.
- Link results to development plans, not compensation or punishment.
- Repeat 360s every 18–24 months to track leadership growth.
Definition & scope
This 360° survey measures how managers and leaders show up in six areas: coaching, collaboration, strategic direction, execution, culture & psychological safety, and values. It is designed for all leadership levels from People Manager to Director and uses four rater groups: self, manager, peers, and direct reports. Results support development plans, coaching, succession planning, and discussions about leadership culture, not one-off performance ratings.
Scoring & thresholds
The template uses a 1–5 Likert scale. In most organizations, this is enough nuance and easier to explain than 7 points. If you run large cohorts or want more detail, you can switch to a 7‑point scale, but then adjust thresholds (for example, “critical” might be <4.2 instead of <3.3).
For a 5‑point scale, use these working definitions:
- Low score: average <3.0 = critical; requires action within 30 days.
- Mid score: 3.0–3.9 = acceptable but improvement expected; action within 90 days.
- High score: ≥4.0 = strength; protect and share good practices.
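These bands can be expressed as a small helper function. This is a minimal sketch; the function name and band labels are illustrative assumptions, not part of the template itself.

```python
def score_band(avg: float) -> str:
    """Map a 1-5 domain average to the bands defined above."""
    if avg < 3.0:
        return "critical"   # requires action within 30 days
    if avg < 4.0:
        return "improve"    # improvement expected; action within 90 days
    return "strength"       # protect and share good practices
```

For example, `score_band(2.8)` returns `"critical"`, while `score_band(4.1)` returns `"strength"`.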
Group items into the six domains above. For each manager, calculate:
- Average per question and domain, split by rater group.
- Gaps between self vs. others, and between rater groups (e.g. peers vs. direct reports).
- Trends vs. previous 360 cycles (if available).
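The calculations above can be sketched in a few lines of Python. This is a minimal example; the domain map (truncated here to two questions per domain), the rater-group names, and the data shape are illustrative assumptions, not part of the template.

```python
from statistics import mean

# Illustrative domain map; in practice use the full ranges (Q1-Q7, Q22-Q28, ...).
DOMAINS = {
    "coaching": ["Q1", "Q2"],
    "execution": ["Q22", "Q23"],
}

def domain_averages(responses, domains=DOMAINS):
    """responses: list of {"rater_group": str, "scores": {question_id: 1-5}}.
    Returns {domain: {rater_group: average}}."""
    out = {}
    for domain, items in domains.items():
        by_group = {}
        for r in responses:
            vals = [r["scores"][q] for q in items if q in r["scores"]]
            if vals:
                by_group.setdefault(r["rater_group"], []).extend(vals)
        out[domain] = {g: round(mean(v), 2) for g, v in by_group.items()}
    return out

def self_vs_others_gap(domain_avgs):
    """Positive gap = self-rating above the mean of all other rater groups."""
    gaps = {}
    for domain, groups in domain_avgs.items():
        others = [v for g, v in groups.items() if g != "self"]
        if "self" in groups and others:
            gaps[domain] = round(groups["self"] - mean(others), 2)
    return gaps
```

A self-vs-others gap above roughly 0.7 in several domains is the blind-spot signal referenced in the decision table.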
Turn scores into decisions by applying simple rules:
- If any psychological safety item (Q29–Q35) scores ≤2.5, HR flags it as high risk and informs the manager’s manager within 7 days.
- If a domain average <3.3, create at least one concrete development goal in that domain, documented in the manager’s IDP.
- If the overall NPS-style score (Q43) is ≤7.0, the manager runs a debrief with their own manager and agrees on 2–3 focus behaviors.
- If all domain averages ≥4.2 and NPS ≥8.5, consider this manager for mentoring others and future succession pipelines.
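Expressed as code, these four rules might look like the sketch below. The function name and flag labels are assumptions; the thresholds are the ones stated above.

```python
def flag_actions(domain_avgs, safety_items, nps_avg):
    """Apply the decision rules above.
    domain_avgs: {domain: average}, safety_items: {Q29..Q35: average},
    nps_avg: average of Q43 on the 0-10 scale."""
    flags = []
    if any(score <= 2.5 for score in safety_items.values()):
        flags.append("ESCALATE: psychological safety risk; inform manager's manager within 7 days")
    for domain, avg in domain_avgs.items():
        if avg < 3.3:
            flags.append(f"DEVELOP: add IDP goal for '{domain}' (avg {avg})")
    if nps_avg <= 7.0:
        flags.append("DEBRIEF: agree on 2-3 focus behaviors with manager's manager")
    if all(avg >= 4.2 for avg in domain_avgs.values()) and nps_avg >= 8.5:
        flags.append("STRENGTH: consider for mentoring and succession pipeline")
    return flags
```

Running the rules per manager after survey close turns the thresholds into a reviewable task list instead of ad-hoc judgment calls.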
You can support this logic with technology. A talent platform like Sprad Growth can help automate survey sends, reminders, scoring, and follow-up tasks so HR focuses on interpretation instead of spreadsheets.
Scale design: 5 vs. 7 points and weighting by level
For most DACH companies, 5 points work well, especially when discussions with the works council (Betriebsrat) and around GDPR focus on simplicity and transparency. If you choose 7 points, communicate what each point means and avoid half-steps like “3.5” in interpretation.
To customize by level while reusing the same bank of manager 360 feedback questions:
- People Managers: weight People leadership & coaching and Execution slightly higher in development plans.
- Heads-of: weight Collaboration & communication and Strategic thinking higher, especially cross-functional items.
- Directors: weight Strategic thinking, Culture & psychological safety, and Role modeling most strongly.
Weights affect how you prioritize development, not the raw scores themselves. That keeps the survey comparable while acknowledging different roles.
Follow-up & responsibilities
Without clear follow-up, a 360 survey becomes a one-off event and erodes trust. Define upfront who does what and by when, and communicate this to all managers and to the works council.
Typical responsibilities:
- HR / People team: Owns process design, tooling, GDPR and works council alignment, and training for all raters and managers.
- Manager’s manager: Owns debrief conversation, development priorities, and consequences if values or psychological safety are violated.
- Individual manager: Owns sharing key themes with their team and updating their individual development plan.
- External or internal coach (optional): Supports interpretation and behavior change for critical or high-potential cases.
Set clear reaction times:
- ≤7 days after survey close: HR delivers reports to managers and their managers.
- ≤21 days: Each manager has a debrief conversation with their own manager and drafts 2–3 focus behaviors.
- ≤30 days: Each manager updates their IDP and, where appropriate, shares high-level themes with their team.
- ≤90 days: Follow-up check-in on progress for every manager with at least one “critical” domain.
To keep the focus on growth rather than punishment, separate this 360 cycle from pay decisions. You can still use aggregated insights in talent reviews, similar to how you’d use data from a broader 360 degree feedback program, but make sure individuals know the primary purpose is development.
Debrief conversations with managers
Debriefs work best when structured. A simple 60‑minute agenda:
- 10 min: Manager summarizes their key takeaways from the report (self-reflection first).
- 20 min: Discuss patterns by rater group and domain, including any surprises.
- 20 min: Agree 2–3 concrete behaviors to start/stop/continue; connect to business outcomes.
- 10 min: Decide how to share themes with the team and how progress will be measured.
You can reuse methods from your existing performance review templates or IDP processes, for example those described in Sprad’s guide on individual development plans.
Fairness & bias checks
360° feedback is vulnerable to bias: popularity, conflict history, or cultural differences can sway ratings. You reduce this risk by designing the process and analysis around fairness.
First, control who sees what. For GDPR and DACH practice, set anonymity rules such as:
- Direct reports: show aggregated scores only with at least 3 raters (ideally 5 or more).
- Peers: apply the same minimum of 3; otherwise merge into “Others”.
- Manager: single rating, visible but clearly marked as “manager only”.
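A small routine can enforce these anonymity rules before any report is generated. This is a sketch; the group names, labels, and merge logic are illustrative assumptions.

```python
from statistics import mean

MIN_GROUP_SIZE = 3  # raise to 5 for stricter anonymity

def suppress_small_groups(group_scores, min_n=MIN_GROUP_SIZE):
    """group_scores: {rater_group: [individual scores]}.
    Groups below the minimum are merged into 'others'; the manager's
    single rating stays visible but is clearly labelled."""
    visible, merged = {}, []
    for group, scores in group_scores.items():
        if group == "manager":
            visible["manager (manager only)"] = round(mean(scores), 2)
        elif len(scores) >= min_n:
            visible[group] = round(mean(scores), 2)
        else:
            merged.extend(scores)
    if len(merged) >= min_n:
        visible["others"] = round(mean(merged), 2)
    # If even the merged bucket is below the minimum, those scores are
    # dropped from the report entirely rather than risk de-anonymization.
    return visible
```

Applying suppression in code, before anyone sees a report, is easier to defend in works council and GDPR discussions than manual redaction.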
Second, check patterns across groups:
- Compare average domain scores by location, function, gender identity, or remote vs. office where legally and ethically appropriate.
- Look for groups where leadership scores are consistently lower on psychological safety or fairness.
- Review outliers where one rater group is ≥1.0 point below all others.
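The outlier check in the last bullet is straightforward to automate. This sketch flags any rater group scoring at least a full point below every other group in a domain; names and the data shape are illustrative.

```python
def outlier_groups(domain_group_avgs, gap=1.0):
    """domain_group_avgs: {domain: {rater_group: average}}.
    Returns (domain, group, avg) tuples where one group sits at least
    `gap` points below all other groups in that domain."""
    outliers = []
    for domain, groups in domain_group_avgs.items():
        for group, avg in groups.items():
            rest = [v for g, v in groups.items() if g != group]
            if rest and all(avg <= v - gap for v in rest):
                outliers.append((domain, group, avg))
    return outliers
```

Flagged combinations are a starting point for the qualitative review described below, not a verdict on their own.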
Typical patterns and responses:
- Pattern: Direct reports score a manager much lower than peers on safety items. Action: HR and manager’s manager review comments, speak with the manager, and, if needed, run confidential team discussions or a climate survey, guided by principles from employee survey governance.
- Pattern: Women or minority groups consistently rate leadership behavior lower in one business unit. Action: escalate to business leadership; combine with DE&I data; design targeted interventions like inclusive leadership training and sponsorship programs.
- Pattern: Self-ratings are much higher than all others across domains. Action: treat as a development signal about self-awareness; prioritize coaching on asking for feedback and reflecting on impact.
Document bias checks and decisions, especially if the works council is involved. This protects trust and supports consistent handling of similar cases.
Examples / use cases
Use case 1: Low psychological safety in a key team
Situation: A high-performing product team reports strong results, but in the 360° survey, direct reports score Q29–Q31 around 2.3, far below company average. Comments mention fear of speaking up and public criticism.
Decision and action: HR flags the case as critical. The manager’s manager meets with the team lead to review feedback and agrees on a coaching engagement. HR runs a psychological safety workshop with the team, defines three ground rules for meetings, and schedules a pulse check after 3 months.
Outcome: Six months later, safety scores rise above 3.7, and incident reports drop. The manager keeps their role, but their eligibility for promotion is paused until a second positive 360 cycle confirms sustained change.
Use case 2: Strong strategic leadership, weak coaching
Situation: A Director receives high scores (≥4.3) on Strategic thinking (Q15–Q21) but low scores (≤3.0) on People leadership & coaching (Q1–Q7). Their manager sees this as a risk for succession planning.
Decision and action: The Director starts monthly coaching with an internal mentor known for strong people leadership. They also commit to bi-weekly 1:1s, use a structured 1:1 agenda, and agree two specific behaviors to change: asking more open questions and giving more forward-looking feedback.
Outcome: In the next 360 after 18 months, coaching scores improve to 3.8 while strategic scores remain high. The Director is shortlisted as successor for a VP role, supported by both quantitative trends and qualitative comments.
Use case 3: Comparing cohorts for a leadership program
Situation: HR runs a company-wide leadership program for 40 People Managers. All participants complete this 360° before and 12 months after the program.
Decision and action: HR aggregates domain scores to compare participants vs. non-participants. They see a clear uplift in Collaboration & communication and psychological safety among program alumni. HR uses these data points, similar to an engagement or talent development study, to refine the curriculum and justify budget.
Outcome: Leadership program funding is extended, and the 360° becomes a standard diagnostic and evaluation tool for all new cohorts.
Implementation & updates
A light, predictable process beats a perfect but complex one. Think in four phases: design, communication, data collection, and debrief & action.
Mini implementation checklist
- Design (2–4 weeks): Align on goals, domains, and manager 360 feedback questions. Involve HR, leadership, and the works council. Clarify if and how results connect to talent reviews, but keep them separate from annual pay decisions.
- Tooling (parallel): Decide whether to run in a survey tool, HR suite, or spreadsheets. A platform such as Sprad Growth can centralize reviews, 1:1s, and development data.
- Communication (1–2 weeks): Explain purpose, anonymity rules, timeline, and who sees what. Share sample reports and an FAQ with raters and leaders.
- Data collection (2–3 weeks): Launch the survey, monitor response rates by rater group, send gentle reminders, and close on a fixed date.
- Debrief & action (4–6 weeks): Deliver reports, run debriefs, update IDPs, and capture 2–3 team-level actions per area or function.
Ongoing updates and governance
Review the question set and thresholds at least once per year. Use feedback from raters and managers to remove weak items and sharpen behavioral wording. Keep core domains stable so you can compare over time.
- Annually: HR and business leadership review domain averages and distribution by level (People Manager, Head, Director).
- Every 18–24 months: Run a full 360 cycle for each manager; supplement with lighter upward feedback pulses in between.
- When strategy or values change: Update 2–3 questions to reflect new priorities while maintaining historical comparability where possible.
Track a small set of KPIs across cycles:
- Participation rate per rater group (target ≥70% for direct reports and peers).
- Average domain scores company-wide and by business unit.
- Time from survey close to completed debrief and documented development plan (target ≤30 days).
- Share of managers with at least one “critical” domain who have a follow-up action logged (target 100%).
- Changes in psychological safety and leadership NPS over multiple cycles.
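If you track these KPIs in a script rather than a dashboard, a per-cycle snapshot could look like the sketch below. Field names and the data shape are assumptions; only the targets come from the list above.

```python
def kpi_snapshot(invited, responded, debrief_days, critical_with_action, critical_total):
    """invited/responded: {rater_group: count}; debrief_days: days from
    survey close to completed debrief, one entry per manager."""
    return {
        # participation rate per rater group (target >= 0.70 for reports/peers)
        "participation": {g: round(responded[g] / invited[g], 2) for g in invited},
        # median via the middle of the sorted list (assumes an odd count here)
        "median_days_to_debrief": sorted(debrief_days)[len(debrief_days) // 2],
        # share of "critical" managers with a logged follow-up (target 1.0)
        "critical_followup_share": critical_with_action / critical_total if critical_total else 1.0,
    }
```

A snapshot like this per cycle makes trend reviews with leadership a five-minute comparison instead of a spreadsheet hunt.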
Conclusion
A focused 360° manager survey helps you see leadership behavior as employees experience it, not only as executives hope it looks. With a consistent question bank, clear thresholds, and documented follow-up, you catch problems around psychological safety early, improve the quality of development conversations, and create sharper priorities for leadership programs and IDPs.
The next steps are straightforward: choose the first pilot group of managers, adapt the question bank to your context and language, and load it into your preferred survey or performance management tool. Align with the works council on purpose, anonymity, and data handling, then communicate the process in simple terms to managers and raters. Once the first reports are out, support debriefs with short guides and coaching so that feedback turns into two or three concrete behavior changes, not just a PDF in a folder.
Over time, integrate 360° results into your broader talent processes: link focus areas to learning offers, combine domain scores with promotion decisions through structured talent reviews, and review patterns regularly with leadership. Done in this way, manager 360 feedback questions become less about judging individuals and more about building a stronger, fairer leadership culture across the company.
FAQ
How often should we run this 360° survey for managers?
Most companies run a full 360° for managers every 18–24 months. That gives enough time for behavior change to show up while keeping insights current. Between full cycles, you can use lighter upward feedback or engagement pulses focused on leadership behavior. Avoid running full 360s more than once per year; fatigue and over-interpretation of small score changes can hurt trust and usefulness.
What should we do if a manager receives very low scores?
First, treat low scores as data for investigation, not instant verdicts. Check comments, rater group patterns, and context such as recent reorganizations. Then have the manager’s manager and HR review the case within 7 days and agree on next steps: coaching, training, closer supervision, or, in severe values or safety violations, formal measures. Document expectations and timelines clearly and connect them to a development or performance improvement plan.
How do we handle critical or emotional comments?
Critical comments often contain the most useful insights. Train managers to read them with curiosity, not defensiveness. Encourage them to look for repeated themes rather than single statements and to ask clarifying questions in a safe way during team discussions. For comments that hint at harassment, discrimination, or compliance issues, follow your formal investigation procedures immediately. According to research on psychological safety by Amy Edmondson, summarized by Harvard Business School, how leaders react to bad news strongly shapes future openness.
How do we integrate this with our existing performance management process?
Use 360° feedback as an input for development, not as the sole basis for ratings or pay. You can summarize key strengths and 2–3 focus areas in each manager’s individual development plan and then revisit these in regular 1:1s and check-ins. If you already use a performance or talent platform such as Sprad Growth, connect 360° insights to goals, learning recommendations, and talent review notes so you see one coherent picture instead of scattered files.
How can we update the question set over time without losing comparability?
Keep the six core domains stable and protect key items you want to trend, such as psychological safety and Role modeling. Each year, review which questions produced clear, actionable differences and which stayed flat or confused people. Replace or reword weaker items but keep at least 2–3 anchor questions per domain unchanged. When you change the survey, mark the version and document what changed so future analyses understand why trends look different.