CWE-CWR-05: Compassion, Welfare & Environment / Community Welfare & Relief (CORE Excellence v2.9.7)

User-impact survey results

Assesses whether the organization conducts a user-impact survey for its community support and relief services, evaluating service effectiveness from the beneficiary's perspective. The standard emphasizes methodological rigor (defined sampling, representativeness), accessibility, safeguarding protocols, and ethical data handling. Rooted in Shura (consultation) and Ihsan (excellence), this feedback process ensures that aid upholds Karamah (human dignity). By actively listening to recipients, organizations fulfill their Amanah (trust) and align with the Maqasid al-Shariah (objectives of Islamic law) for societal welfare.

Assessment Questions
  1. How does the organization systematically gather, record, and analyze feedback using the mandatory Beneficiary Impact Index items?
  2. Describe the sampling frame, fieldwork window, and contact attempts. How do you handle non-response?
  3. What specific separation controls prevent staff involved in decisions from collecting feedback from their own beneficiaries?
  4. How do you ensure representativeness? If segments deviate >10 pp, what remediation (weighting/boost) is applied?
  5. What safeguarding protocols are in place for the survey (risk assessment, distress scripts, referrals)?
  6. How do you ensure accessibility (readability scores, WCAG compliance, back-translation)?
  7. What were the key findings and overall positive feedback rate from the most recent survey cycle?
  8. Provide specific examples of service improvements or strategic changes implemented based on feedback.
  9. How does the organization 'close the loop' (including explaining what *cannot* change) via multiple channels?
  10. What lawful basis, DPIA, and ROPA entries are in place?
  11. What external benchmarks did you use and how did you adjust for case mix?
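Questions 2 and 4 above concern sampling and representativeness. A minimal sketch of the ">10 pp deviation" check and post-stratification weighting (the segment names and shares below are hypothetical, for illustration only):

```python
# Sketch: representativeness check and post-stratification weights.
# Segment names and share figures are hypothetical.

def representativeness(population_shares, sample_shares, threshold_pp=10.0):
    """Flag segments whose sample share deviates from the population
    share by more than `threshold_pp` percentage points."""
    flags = {}
    for seg, pop_share in population_shares.items():
        deviation_pp = abs(sample_shares.get(seg, 0.0) - pop_share) * 100
        flags[seg] = deviation_pp > threshold_pp
    return flags

def post_strat_weights(population_shares, sample_shares):
    """Weight each segment by population share / sample share, so the
    weighted sample mirrors the population mix."""
    return {seg: population_shares[seg] / sample_shares[seg]
            for seg in population_shares if sample_shares.get(seg)}

pop = {"women": 0.55, "men": 0.45}
samp = {"women": 0.40, "men": 0.60}
print(representativeness(pop, samp))  # both segments deviate by 15 pp
print(post_strat_weights(pop, samp))  # women up-weighted, men down-weighted
```

Where a segment's deviation exceeds the threshold and cannot be fixed by a fieldwork boost, the weights file and a weighted/unweighted comparison (see Evidence Requirements) document the remediation.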
Evidence Requirements
  • Copies of user-impact survey templates (showing 5 core items) and focus group guides.
  • Survey Safeguarding Protocol (risk assessment, scripts, training records).
  • Sampling plan with population definition, target response rate, achieved n and CI.
  • Representativeness table and remediation evidence (weighting plan, weights file, weighted/unweighted comparison).
  • Accessibility evidence: readability score screenshots, WCAG checklist, translation/back‑translation logs.
  • Anonymized summary reports of survey results and data analysis from the last 1-2 years.
  • DPIA, lawful basis assessment, privacy notice, and ROPA entry.
  • Benchmarking sources and comparative analysis memo.
  • Action tracker with owners/dates/status; board papers evidencing oversight.
  • ‘You said, we did’ communications (copies in languages used, distribution metrics).
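The sampling-plan evidence above asks for the achieved n and its confidence interval. A minimal sketch of the 95% margin of error for a proportion, with an optional finite population correction for small beneficiary populations (all figures are hypothetical):

```python
import math

def margin_of_error(p_hat, n, z=1.96, population=None):
    """95% margin of error (as a fraction) for an observed proportion
    p_hat from a sample of size n, with optional finite population
    correction (FPC) when the total population is known."""
    se = math.sqrt(p_hat * (1 - p_hat) / n)
    if population:
        se *= math.sqrt((population - n) / (population - 1))
    return z * se

# Hypothetical cycle: 78% positive from n=150 of 900 beneficiaries.
moe = margin_of_error(0.78, 150, population=900)
print(f"78% positive, ± {moe * 100:.1f} pp at 95% confidence")
```

In this hypothetical, the margin is roughly ±6 pp, which would sit inside the ±7 pp band referenced in the Level 4 scoring description below.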
Scoring Guidelines
Level 5 (5/5): ≥80% positive on the Index; strong methodology (RR ≥30% or CI ±5 pp); representativeness within ±7 pp (or weighted); benchmarking used; action plan delivered with evidenced improvements and a comprehensive ‘You said, we did’.
Level 4 (4/5): ≥75% positive on the Index; RR ≥25% or CI ±7 pp; representativeness within ±10 pp (or a remediation plan); segmented analysis; action plan approved and in delivery.
Level 3 (3/5): 70–74% positive, OR ≥75% with low response (<20%) or poor representativeness; basic analysis; survey protocol and privacy notice in place; actions identified but not yet implemented.
Level 2 (2/5): <70% positive, or serious methodological gaps (no sampling plan, no safeguarding protocol, no segmentation); limited or no actions.
Level 1 (1/5): No survey, or unusable data.
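The thresholds above can be sketched as a simplified decision function. This is an illustration of the cut-offs only, not the assessment itself: Level 1 (no survey or unusable data) is assumed to be screened out beforehand, and assessors apply judgment on methodological gaps that the sketch does not capture:

```python
def score_level(positive_rate, response_rate, ci_pp):
    """Rough mapping of a usable survey result to rubric levels 2-5.

    positive_rate: overall positive feedback rate (0-1)
    response_rate: achieved response rate (0-1)
    ci_pp: half-width of the confidence interval, in percentage points
    """
    strong_method = response_rate >= 0.30 or ci_pp <= 5   # Level 5 bar
    good_method = response_rate >= 0.25 or ci_pp <= 7     # Level 4 bar

    if positive_rate >= 0.80 and strong_method:
        return 5
    if positive_rate >= 0.75 and good_method:
        return 4
    if positive_rate >= 0.70:
        return 3  # includes >=75% with weak methodology, per the rubric
    return 2
```

For example, 76% positive with a 26% response rate lands at Level 4, while the same result with a 15% response rate and a wide CI drops to Level 3.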
