Most referral programs get judged on hires and cost-per-hire, but that misses why people do (or don’t) refer. These employee referral program survey questions help you see the real experience: whether employees trust the process, whether managers feel supported, and where the workflow breaks. You get earlier warning signs, clearer conversations, and a short list of fixes that move participation and quality.
If you want to connect perception to outcomes, pair this survey with your monthly employee referral program metrics dashboard and review what changed after each campaign.
Employee referral program survey questions (question bank)
2.1 Closed questions (5-point Likert scale)
Answer scale (for all statements): 1 = Strongly disagree, 2 = Disagree, 3 = Neither agree nor disagree, 4 = Agree, 5 = Strongly agree.
Employees — Awareness & understanding
- Q1 (Employee-only) I know we have an employee referral program and how it works.
- Q2 (Employee-only) I know where to find the referral policy, rules, and eligible roles.
- Q3 (Employee-only) I understand who is eligible to refer (and who is excluded).
- Q4 (Employee-only) I understand what counts as a “valid referral” (timing, first touch, duplicates).
- Q5 (Employee-only) I know which channels I can use to refer (link, form, email, Teams/Slack, WhatsApp/SMS).
- Q6 (Employee-only) I know what reward(s) exist and when payout happens.
- Q7 (Employee-only) I know where to ask questions and get support about referrals.
Employees — Ease of use & process
- Q8 (Employee-only) It is easy to find open roles that are relevant for my network.
- Q9 (Employee-only) Referring someone takes ≤5 minutes end-to-end.
- Q10 (Employee-only) I can refer from my phone without friction (no laptop required).
- Q11 (Employee-only) I can track the status of my referral without chasing HR.
- Q12 (Employee-only) I understand what happens after I refer (screening steps and timelines).
- Q13 (Employee-only) The referral process respects candidate consent and privacy (GDPR-ready).
- Q14 (Employee-only) If my referral is rejected, I receive a clear reason I can share.
Employees — Fairness & transparency
- Q15 (Employee-only) Eligibility rules are clear (who can refer which roles).
- Q16 (Employee-only) Reward rules are clear (amount, tiers, milestones, taxes if relevant).
- Q17 (Employee-only) Duplicate referrals are handled fairly and consistently.
- Q18 (Employee-only) I trust that hiring decisions are not influenced by “who referred” someone.
- Q19 (Employee-only) I believe rewards are paid out fairly and on time.
- Q20 (Employee-only) If a conflict happens (e.g., two referrers), the resolution process is transparent.
- Q21 (Employee-only) The program feels fair across locations, teams, and job families.
Employees — Communication & trust
- Q22 (Employee-only) HR communicates clearly which roles are priority for referrals.
- Q23 (Employee-only) I receive timely updates on my referred candidates.
- Q24 (Employee-only) Rejections are communicated respectfully and with enough context.
- Q25 (Employee-only) I trust that referrals are taken seriously (not “parked” behind other channels).
- Q26 (Employee-only) I trust HR to handle referral data and candidate information responsibly.
- Q27 (Employee-only) The program is explained in plain language (not only HR terms).
- Q28 (Employee-only) The program’s success and learnings are shared back with employees.
Employees — Incentives & motivation
- Q29 (Employee-only) The reward(s) are attractive for the effort of referring.
- Q30 (Employee-only) Non-cash recognition (thank-you, visibility) feels meaningful in our culture.
- Q31 (Employee-only) Reward timing is motivating (not too delayed).
- Q32 (Employee-only) Reward tiers match role difficulty (hard-to-hire roles get higher rewards).
- Q33 (Employee-only) I would still refer even without a reward (intrinsic motivation exists).
- Q34 (Employee-only) The program avoids pressure; participation feels optional and safe.
- Q35 (Employee-only) Reward rules do not create tension within teams.
Employees — Channel & access (office + frontline)
- Q36 (Employee-only) Program communication reaches me in channels I actually use.
- Q37 (Employee-only) I can participate without a company email account.
- Q38 (Employee-only) Language options fit our workforce (e.g., German + English, or site languages).
- Q39 (Employee-only) Referral links and job content work well on mobile data connections.
- Q40 (Employee-only) I receive role suggestions that fit my context (site, shift, function).
- Q41 (Employee-only) The program feels equally accessible for remote, hybrid, and on-site employees.
Employees — Overall impact
- Q42 (Employee-only) Referrals improve hiring quality in my area.
- Q43 (Employee-only) Referrals help us hire faster for urgent roles.
- Q44 (Employee-only) The program strengthens our culture by involving employees in hiring.
- Q45 (Shared) I would recommend the referral program to a colleague as a good initiative.
Managers / recruiters — Awareness & ownership
- Q46 (Manager/Recruiter-only) I understand my responsibilities in the referral process.
- Q47 (Manager/Recruiter-only) I know the expected response time (SLA) for referral candidates.
- Q48 (Manager/Recruiter-only) I know how to handle duplicate referrals and conflicts.
- Q49 (Manager/Recruiter-only) I know how to communicate referral rejections appropriately.
- Q50 (Manager/Recruiter-only) I know how rewards and milestones work (so I can set expectations).
- Q51 (Manager/Recruiter-only) I have received enough enablement/training on referrals.
- Q52 (Manager/Recruiter-only) Referral roles and campaigns are aligned with our hiring priorities.
Managers / recruiters — Process & workload
- Q53 (Manager/Recruiter-only) Referral candidates are easy to identify in the ATS / workflow.
- Q54 (Manager/Recruiter-only) Referrals reduce time spent on low-fit applicants overall.
- Q55 (Manager/Recruiter-only) The referral process does not add avoidable coordination overhead.
- Q56 (Manager/Recruiter-only) The handoff between HR/TA and hiring managers works smoothly for referrals.
- Q57 (Manager/Recruiter-only) Response times to referrals are consistently met.
- Q58 (Manager/Recruiter-only) It is easy to give status updates back to referrers.
- Q59 (Manager/Recruiter-only) The process supports high-volume hiring without breaking.
Managers / recruiters — Perceived quality & fit
- Q60 (Manager/Recruiter-only) Referral candidates are higher quality than other sourcing channels.
- Q61 (Manager/Recruiter-only) Referral candidates have more realistic job expectations.
- Q62 (Manager/Recruiter-only) Referral candidates move faster through the funnel.
- Q63 (Manager/Recruiter-only) Referral candidates have higher offer acceptance rates.
- Q64 (Manager/Recruiter-only) The referrer context (why this person fits) is useful and specific.
- Q65 (Manager/Recruiter-only) Referrals improve team performance outcomes (fit, ramp-up, retention signals).
- Q66 (Shared) Referrals are a channel we should actively grow.
Managers / recruiters — Fairness & bias
- Q67 (Manager/Recruiter-only) I worry referrals create “in-groups” or favoritism.
- Q68 (Manager/Recruiter-only) I worry referrals reduce diversity in our pipeline.
- Q69 (Manager/Recruiter-only) We apply the same assessment bar to referrals as to other candidates.
- Q70 (Manager/Recruiter-only) We can monitor referral outcomes for fairness (without violating Datenschutz).
- Q71 (Manager/Recruiter-only) We have clear guardrails to avoid conflicts of interest (family reporting lines, etc.).
- Q72 (Manager/Recruiter-only) The program encourages referrals beyond “people like us”.
- Q73 (Manager/Recruiter-only) I trust the program rules reduce bias risk rather than increase it.
Managers / recruiters — Support from HR & tools
- Q74 (Manager/Recruiter-only) HR/TA provides templates and messages that managers can reuse.
- Q75 (Manager/Recruiter-only) Reporting helps me see referral performance for my org (volume, speed, quality).
- Q76 (Manager/Recruiter-only) Tooling makes it easy to launch role-specific referral pushes.
- Q77 (Manager/Recruiter-only) I can quickly trigger “thank you” and recognition for referrers.
- Q78 (Manager/Recruiter-only) Payout management is reliable and low-admin.
- Q79 (Manager/Recruiter-only) The program integrates well with our ATS and HR systems.
- Q80 (Manager/Recruiter-only) The referral experience for employees is consistent across devices and channels.
Managers / recruiters — Overall impact & future needs
- Q81 (Manager/Recruiter-only) Referrals reduce agency dependence for my hiring area.
- Q82 (Manager/Recruiter-only) Referrals help fill hard-to-hire roles that other channels miss.
- Q83 (Manager/Recruiter-only) I would personally ask my team to refer for critical roles.
- Q84 (Manager/Recruiter-only) I know what I would change to increase referral success.
- Q85 (Manager/Recruiter-only) We have the capacity to respond quickly if referral volume increases.
- Q86 (Shared) The referral program improves the candidate experience overall.
2.2 Overall / NPS-style ratings (0–10 scale)
- R1 (Employee-only) How likely are you to recommend the referral program to a colleague? (0–10)
- R2 (Employee-only) How fair does the referral program feel overall? (0–10)
- R3 (Employee-only) How easy is it to refer someone end-to-end? (0–10)
- R4 (Manager/Recruiter-only) How satisfied are you with referrals as a hiring channel? (0–10)
- R5 (Manager/Recruiter-only) How confident are you that referral decisions are fair and consistent? (0–10)
- R6 (Manager/Recruiter-only) How likely are you to actively promote referrals for your next priority role? (0–10)
2.3 Open-ended questions (open text)
- O1 (Employee-only) What is the single biggest reason you have not referred someone yet?
- O2 (Employee-only) Describe one step in the referral process that feels unclear or slow.
- O3 (Employee-only) What would make you trust the program more?
- O4 (Employee-only) What reward would motivate you most (cash, time off, vouchers, experiences, donation, other)? Why?
- O5 (Employee-only) What is one message, template, or example you wish you had when referring?
- O6 (Employee-only) Share one positive experience you had referring someone (what worked well?).
- O7 (Manager/Recruiter-only) Where does referral workflow create the most workload or friction for you?
- O8 (Manager/Recruiter-only) What would help you respond faster to referral candidates (process, tooling, staffing)?
- O9 (Manager/Recruiter-only) Describe one bias or fairness risk you have seen (or worry about) with referrals.
- O10 (Manager/Recruiter-only) What manager enablement would help most (training, playbooks, scripts, dashboards)?
- O11 (Shared) If you could change one rule in the program, what would it be?
- O12 (Shared) What should we start, stop, and continue doing to make referrals work better?
Decision table (from scores to actions)
| Question(s) / area | Score / threshold | Recommended action | Owner | Target / deadline |
|---|---|---|---|---|
| Awareness & understanding (Q1–Q7) | Avg <3,5 or ≥25% “Disagree” | Publish 1-page rules + FAQ; run 2-channel launch message; add “where to refer” link to job posts | TA Lead | Live within 14 days |
| Ease of use & tracking (Q8–Q14) | Avg <3,2 or R3 <7,0 | Remove steps; add status visibility; define referral SLAs; test mobile flow end-to-end | Recruiting Ops + IT | Fix plan in 7 days, ship within 30 days |
| Fairness & transparency (Q15–Q21, R2) | Avg <3,4 or R2 <7,0 | Clarify duplicates + conflict rules; publish payout milestones; add escalation route; align with Betriebsrat | HR Policy Owner | Draft within 21 days, approve within 45 days |
| Communication & trust (Q22–Q28) | Avg <3,3 | Standardize update cadence (e.g., weekly); add rejection reason categories; close-the-loop comms quarterly | TA Comms Owner | Cadence starts within 14 days |
| Incentives & motivation (Q29–Q35) | Avg <3,2 | Redesign reward menu; add staged payouts; pilot 1 hard-to-hire tier; document tax handling | Total Rewards + TA | Pilot within 60 days |
| Channel & access (Q36–Q41) | Avg <3,4 or Q37 <3,0 | Add non-email participation path (QR/SMS/WhatsApp); translate key assets; align timing with shifts | TA Lead + Site HR | Roll out to 1 site within 30 days |
| Manager ownership + workload (Q46–Q59, R6) | Avg <3,3 or R6 <7,0 | Define manager SLA; add manager scripts; simplify handoffs; add weekly referral review in hiring sync | Head of TA + Hiring Managers | Standards set within 14 days |
| Bias risk signals (Q67–Q73) | Q67 or Q68 Avg ≥3,5 (agreement with concern) | Run bias workshop; add structured interview bar; monitor funnel by group; publish guardrails | DEI Lead + TA | Workshop within 30 days |
Key takeaways
- Use employee referral program survey questions to spot trust issues before hires drop.
- Fix the single biggest friction step first; it usually lifts participation fastest.
- Set SLAs for updates; silence kills motivation and perceived fairness.
- Slice results by site and contract type to catch frontline access gaps.
- Always assign an owner and deadline; otherwise surveys become noise.
Definition & scope
This survey measures how employees and managers experience your employee referral program: awareness, usability, fairness, trust, incentives, and access across channels. It is designed for EU/DACH HR and TA teams (100–5,000+ FTE) and supports decisions on program relaunch, policy changes, manager enablement, reward design, and tooling improvements.
How to use these employee referral program survey questions (blueprints)
You do not need one giant survey every time. Pick a format that matches your decision. Use a baseline before a relaunch, then shorter pulses after campaigns or tool changes.
If you want to align survey insights with business impact, keep an ROI view in parallel, for example with an employee referral ROI calculator that includes bonus cost, admin time, and agency savings.
| Survey blueprint | Audience | Recommended items | When to run | Decision it supports |
|---|---|---|---|---|
| (a) Baseline before relaunch | Employees + Managers/Recruiters | 20–24 items (mix of Q + 2 ratings + 4 open) | 4–6 weeks before relaunch | What to fix in policy, process, channels, incentives |
| (b) Annual program health check | Employees only | 18–22 items (Q + 2 ratings + 4 open) | 1× per year | Trust, fairness, and accessibility trends over time |
| (c) Quick pulse after campaign/tool change | Employees + Managers/Recruiters | 10–12 items (6 Q + 1 rating + 2 open per group) | 2–3 weeks after change | Did the change reduce friction and improve clarity? |
| (d) Targeted TA/recruiter survey | Recruiters / TA | 10–12 items (process + quality + bias) | Quarterly | Funnel impact, workload, and SLA adherence |
Simple workflow (so you can run this without drama)
Keep it tight: define what you will do with results before you send the employee survey. Then commit to 1–3 actions per quarter, not 15.
- Pick blueprint and items; remove anything you will not act on this quarter.
- Define slicing (site, job family, remote vs. on-site) and anonymity thresholds (e.g., n ≥10).
- Send survey with a clear close date (≤10 days) and 2 reminders.
- Review results within 7 days; publish themes + next actions within 21 days.
- Run a 10–12 item pulse after 30–60 days to confirm improvement.
- TA Lead sets blueprint and items within 5 days.
- People Analytics configures slices + anonymity rules within 7 days.
- Comms/Employer Branding drafts the invite text within 5 days.
- Hiring managers confirm SLAs and response expectations within 14 days.
Interpreting employee results: awareness → trust → action
Employee scores usually drop in predictable places: “I don’t know the rules” (Q1–Q7), “I can’t track status” (Q11), or “payouts feel random” (Q19). Treat each as a different fix. Do not “motivate” people to use a process they do not trust.
What to prioritize (threshold logic you can reuse)
Use simple cutoffs: Avg <3,0 signals a broken experience; 3,0–3,9 needs improvement; ≥4,0 is strong. If R1 or R3 drops below 7,0, participation usually softens next.
- Check Q1–Q7: if awareness is low, everything else is noise.
- Check Q8–Q14: if process is hard, motivation will not save it.
- Check Q15–Q21 and Q22–Q28: if fairness and trust are low, fix rules and updates.
- Then look at Q29–Q35: redesign incentives only after basics work.
- Recruiting Ops maps the “refer in ≤5 minutes” journey and removes 1 step within 30 days.
- TA sets a referrer update cadence (e.g., every 7 days) within 14 days.
- HR Policy Owner clarifies duplicate/referrer conflicts and publishes rules within 45 days.
- Payroll/Total Rewards documents payout timing and milestones within 30 days.
Interpreting manager/recruiter results: quality, workload, and SLAs
Managers can like referrals and still avoid them if they fear extra work or bias risk. That shows up when Q60–Q66 is high but Q53–Q59 is weak or the bias items (Q67–Q73) signal concern. Treat it as an operating model problem, not a motivation problem.
Fast diagnosis
If Q57 (SLA adherence) is <3,5, fix response times before you push new campaigns. If managers agree with Q67/Q68 concerns (Avg ≥3,5), add guardrails before scaling volume.
- Confirm referral SLAs by stage (screening, first interview, decision).
- Make referrals visible in the ATS and manager dashboards (not a hidden tag).
- Standardize rejection reasons that can be shared with referrers.
- Set weekly capacity checks for TA and hiring teams in peak hiring periods.
- Head of TA defines SLAs per stage and publishes within 14 days.
- Recruiting Ops ensures referral tags + queues exist in ATS within 30 days.
- Hiring managers add a 10-minute “referral review” slot to weekly hiring sync within 7 days.
- DEI Lead provides a bias guardrail checklist for referrals within 30 days.
DACH/EU notes: Datenschutz, Betriebsrat, and trust
In DACH, referral surveys touch sensitive areas: fairness perceptions, incentives, and how decisions get explained. If employees think their survey responses can be linked to performance data, response quality drops fast. Make privacy rules explicit and consistent with your Betriebsrat approach.
Keep governance simple: aggregate results, separate survey data from performance files, and apply a clear retention window (for example, delete raw open text after 180–365 days). If you use a tool for distribution and follow-ups, a talent platform like Sprad Growth can help automate survey sends, reminders and follow-up tasks while keeping ownership and deadlines visible.
- People Team + DPO define anonymity threshold (e.g., n ≥10) within 14 days.
- HR confirms data separation (survey vs. performance) and documents within 21 days.
- Works council touchpoint: review survey purpose, slices, and retention rules within 30 days.
- TA confirms candidate consent language for referrals and aligns within 21 days.
Scoring & thresholds
Use the 1–5 Likert scale consistently across all employee referral program survey questions. Compute (a) item averages, (b) dimension averages (e.g., Q8–Q14), and (c) “favorability” (% selecting 4 or 5). Combine this with 0–10 ratings (R1–R6) for a simple executive view.
Thresholds that work in practice: Avg <3,0 = critical; 3,0–3,9 = needs improvement; ≥4,0 = strong. For 0–10 ratings: <7,0 = warning; 7,0–8,4 = okay; ≥8,5 = strong. Turn scores into decisions by mapping each weak dimension to one owner, one fix, and one deadline (≤30 days for process, ≤60 days for policy/incentives).
| Score signal | What it usually means | Decision | Owner |
|---|---|---|---|
| Awareness Avg <3,5 (Q1–Q7) | People do not know rules, channels, or where to start | Run re-launch comms + 1-page policy | TA Lead |
| Process Avg <3,2 (Q8–Q14) + R3 <7,0 | Workflow is too slow, unclear, or not mobile-friendly | Remove steps; add tracking; improve mobile access | Recruiting Ops |
| Fairness Avg <3,4 (Q15–Q21) + R2 <7,0 | Rules feel inconsistent; payout/duplicates create distrust | Rewrite rules; add escalation; align with Betriebsrat | HR Policy Owner |
| Manager workload Avg <3,3 (Q53–Q59) | Referrals add admin or break handoffs | Fix SLAs, queues, and templates | Head of TA |
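The three score types described above, item averages, dimension averages, and favorability (% selecting 4 or 5), can be computed from raw 1–5 responses with a few lines. The response data and the dimension grouping below are illustrative placeholders, not a prescribed format.

```python
from statistics import mean

# Raw responses: item id -> list of 1-5 answers (example data only).
responses = {
    "Q8": [4, 3, 5, 2, 4],
    "Q9": [3, 3, 4, 2, 3],
    "Q11": [2, 3, 2, 4, 3],
}

# (a) Item averages.
item_avg = {q: mean(vals) for q, vals in responses.items()}

# (b) Dimension average: mean over all answers belonging to the dimension's items.
def dimension_avg(items: list[str]) -> float:
    all_answers = [a for q in items for a in responses[q]]
    return round(mean(all_answers), 2)

# (c) Favorability: share of answers that are 4 or 5.
def favorability(items: list[str]) -> float:
    all_answers = [a for q in items for a in responses[q]]
    return round(sum(a >= 4 for a in all_answers) / len(all_answers), 2)

ease = ["Q8", "Q9", "Q11"]                    # "Ease of use" dimension (subset)
print(dimension_avg(ease), favorability(ease))  # -> 3.13 0.33
```

Averaging all answers in the dimension (rather than averaging the item averages) weights items by response count, which matters when some items get skipped; pick one method and keep it stable across waves so trends stay comparable.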
Follow-up & responsibilities
Assign routing before you launch. Otherwise, critical comments land nowhere and trust drops. Use a simple triage: “process issues” go to Recruiting Ops, “policy/fairness” to HR Policy + works council liaison, “tooling/access” to IT, and “manager behavior” to the manager’s leadership chain with HR support.
Response times keep you credible: react within 24 hours if comments indicate harassment, discrimination, or severe misconduct (route to the proper case process). For normal process feedback, publish a first readout within 7 days and a committed action plan within 21 days.
- People Analytics delivers a results pack (dimensions + slices) within 7 days.
- TA Lead hosts a 60-minute decision meeting and assigns owners within 10 days.
- Each owner posts 1 action with a deadline (≤30 days) within 21 days.
- HR publishes “you said / we did” updates within 45 days.
- Recruiting Ops re-measures with a 10–12 item pulse within 60 days.
Fairness & bias checks
Referrals can raise fairness questions even when intentions are good. Check results by relevant groups: location, business unit, job family, remote vs. office, and employment type (where legally and ethically appropriate). Keep aggregation thresholds (e.g., n ≥10) so you do not compromise anonymity.
Common patterns to watch and how to respond:
- Pattern: Frontline sites score Q37/Q39 low. Response: add SMS/WhatsApp/QR access and shift-timed comms within 30 days.
- Pattern: One region scores Q19 low (payout trust). Response: audit payout timing; publish milestone dates; fix within 45 days.
- Pattern: Managers score Q67/Q68 high (bias concerns) while employees score Q18 low (fear favoritism). Response: add structured interview guardrails; monitor referral funnel outcomes; train within 30–60 days.
| Bias check slice | What to compare | Flag threshold | Action |
|---|---|---|---|
| Remote vs. on-site | Q36–Q41 averages | Gap ≥0,4 points | Adjust channels and timing; add mobile-first path within 30 days |
| Sites / locations | Q15–Q21 + R2 | Gap ≥0,5 points | Run policy clarification + payout audit; close within 45 days |
| Departments | Q53–Q59 + Q57 | Avg <3,3 in one org | Fix SLA capacity and handoffs with that leadership team within 21 days |
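A gap check like the table above is easy to automate, as long as groups below the anonymity threshold are suppressed before anything is compared. The data shape, group names, and the n ≥10 rule in this sketch are assumptions drawn from this guide; adapt them to your own slices.

```python
from statistics import mean

MIN_GROUP_N = 10  # anonymity threshold: suppress smaller groups entirely

# Illustrative input: group -> one dimension score per respondent (e.g., Q36-Q41 mean).
groups = {
    "on-site": [3.1, 2.8, 3.4, 3.0, 2.9, 3.2, 3.1, 2.7, 3.3, 3.0, 2.8],
    "remote": [3.8, 4.1, 3.9, 4.0, 3.7, 4.2, 3.8, 3.9, 4.0, 4.1],
    "hybrid": [3.9, 4.0, 3.8],  # n=3: below threshold, must not be reported
}

def group_gap(groups: dict[str, list[float]], flag_gap: float = 0.4):
    """Compare reportable group averages; flag if the max-min gap meets the threshold."""
    reportable = {g: round(mean(v), 2) for g, v in groups.items() if len(v) >= MIN_GROUP_N}
    if len(reportable) < 2:
        return reportable, None  # not enough reportable groups to compare
    gap = round(max(reportable.values()) - min(reportable.values()), 2)
    return reportable, gap >= flag_gap

avgs, flagged = group_gap(groups)
print(avgs, flagged)  # -> ({'on-site': 3.03, 'remote': 3.95}, True)
```

Suppressing first and comparing second is the safe order: a tiny group's average is both statistically noisy and a deanonymization risk, so it should never reach the comparison step at all.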
Examples / use cases
Use case 1: Employees understand the program, but they do not trust payouts
You see Q1–Q7 around ≥4,0, but Q19 sits below 3,4 and R2 falls below 7,0. That usually means rules exist but are not experienced as consistent. The decision is not “more communication.” The decision is to tighten payout milestones, publish timelines, and create a single escalation route.
Action: Total Rewards and TA define milestone dates (e.g., after start date, after probation) and publish them in the policy. Payroll confirms a payout SLA (for example, “paid in the next payroll run after milestone”) and HR shares quarterly payout accuracy stats at an aggregated level.
Use case 2: Managers like referral quality, but response times are poor
Q60–Q66 is strong, but Q57 and Q58 are weak. Referral candidates may be good, but managers do not see fast movement and referrers get no updates. The decision is to add a referral SLA and make referrals visible in recruiter queues with clear ownership by stage.
Action: Head of TA sets stage SLAs, Recruiting Ops configures ATS queues, and hiring managers commit to a weekly timebox to review referral candidates. HR provides rejection reason categories so updates are fast and consistent.
Use case 3: Frontline participation is low due to access, not motivation
Office teams score Q36 high, but frontline sites score Q37 and Q39 low. Incentives may be fine, but access is not. The decision is to meet employees where they are: mobile-first, minimal login, and shift-timed comms.
Action: Site HR pilots QR-code posters and SMS/WhatsApp referral links for 1 site, then compares Q37/Q39 before and after with a short pulse. Keep the policy and consent language simple and visible.
Implementation & updates
Treat survey work like a product cycle: pilot, ship, measure, iterate. Start with one business unit or one country so you can align Datenschutz, works council expectations, and manager routines before you scale.
If you are rebuilding the program basics (policy, comms, channels), it helps to keep your reference assets tight: a clear policy template and ready-to-send comms reduce inconsistent interpretation. Many teams maintain both an employee referral policy template and a set of employee referral email templates so managers do not invent their own rules in chats.
Phased rollout steps
- Pilot (6–8 weeks): run baseline survey + fix 1 process bottleneck + clarify 1 policy point.
- Rollout (next 8–12 weeks): expand to more sites; train managers on SLAs and rejection scripts.
- Enablement: publish a manager one-pager; add templates; set monthly referral review rhythm.
- Review (1× per year): prune questions, adjust thresholds, refresh incentives if needed.
KPIs to track alongside the survey
- Participation rate (overall + by site) within 10 days of send.
- Dimension averages (Awareness, Process, Fairness, Trust) and R1–R6 trends.
- SLA adherence for referral stages (screening, interview, decision) per month.
- Payout timeliness (milestone reached → paid) and exception rate.
- Action completion rate: % of committed actions delivered by deadline.
For tooling updates, document requirements before you buy or build. A structured employee referral software RFP helps you test real workflows: mobile referrals, consent capture, ATS tagging, and payout automation.
Conclusion
Employee referral programs work best when people trust the rules, see progress, and feel the channel is fair. Employee referral program survey questions give you that view early, before participation drops or managers disengage. They also improve the quality of conversations: instead of debating opinions, you can point to specific breakdowns in awareness, tracking, or payout confidence.
To get value fast, pick one pilot area, implement the question set in your survey tool, and agree on owners and deadlines before you send. Then publish a short readout within 7 days and commit to 1–3 fixes within 21 days. After 60 days, run a short pulse to confirm the program feels easier, fairer, and more trustworthy.
FAQ
How often should you run employee referral program survey questions?
Run a baseline before a relaunch, then use short pulses after meaningful changes. A common rhythm is: (1) baseline every 12 months, (2) 10–12 item pulse 2–3 weeks after a major campaign, tool change, or policy update, and (3) a quarterly manager/recruiter pulse if referral volume is high. Keep the cadence predictable so teams expect follow-ups.
What should you do if scores are very low (e.g., Avg <3,0)?
Do not launch another “refer more” campaign. First, name the weak dimension (process, fairness, trust, access) and fix one bottleneck with a clear owner and deadline. If Q11/Q23 (status updates) is low, publish an update SLA within 14 days. If Q19 (payout trust) is low, run a payout audit and publish milestone timing within 45 days.
How do you handle critical open-text comments without breaking anonymity?
Separate survey feedback from case management. Tell employees upfront: the survey is anonymous and not designed for individual case resolution. Provide a parallel channel for urgent issues (HR hotline, speak-up, works council, ombudsperson). In analysis, cluster open text by theme and only quote comments if they cannot identify a person, role, or site.
How do you get managers to take referral survey results seriously?
Give managers two things: clear expectations and less admin. Share a one-page playbook with SLAs, rejection scripts, and “what good looks like.” Then show them their own dimension scores (Q53–Q59, Q60–Q66) next to a simple action plan. If you ask them to improve, remove one friction point at the same time (templates, ATS queues, or status automation).
How do you update the question bank over time without losing comparability?
Keep a stable core (10–14 items) that you never change, then rotate a smaller set based on what you are improving this quarter. Maintain consistent scales and scoring rules, and document version changes (v1, v2) with dates. If you remove a question, replace it only at the start of a new annual cycle, not mid-quarter, so trend comparisons stay clean.