White Paper
The Loyalty Gap: Transforming Patient Experience Measurement in Medicare
From Flawed Satisfaction Scores to Predictable Patient Referrals Using Verified, Closed-Loop Feedback
Introduction
The healthcare industry is failing to measure what truly drives patient loyalty.
Current satisfaction scores are susceptible to gaming and offer little actionable
insight. This white paper details how adopting verified, closed-loop feedback
systems could accurately measure patient satisfaction, predict patient referrals, and
power organizational growth in Medicare. The focus is on CAHPS and on creating
actionable insight for physicians through CAHPS.

Author: Paul Simon Arakkal, Healthcare Innovation Evangelist
Measuring drivers of patient loyalty:
Actionable insight from physician ratings is crucial for physicians and provider
organizations. Let us dig into the mechanics of online physician ratings, including
the categories of rating platforms, the types of reviews, and their key features.
Platforms: Zocdoc, athenahealth, NextGen Healthcare, eClinicalWorks (eCW), Epic
Systems, Tebra (Kareo + PatientPop), AdvancedMD, ModMed, and others (not the
focus of this white paper)
Type of review: "Closed-loop" (verified appointments only).
Key features: Integrated EHR/PM capabilities; robust patient scheduling and online
booking; automated, closed-loop surveys triggered by verified appointment
completion; patient portal integration (MyChart, Healow, etc.); and tools for
revenue cycle management and reputation optimization.

Platforms: Healthgrades, Vitals.com, RateMDs, Google/Yelp/Facebook (not the focus
of this white paper)
Type of review: "Open-loop" sites, meaning any user can leave a rating.
Key features: These platforms operate as large, open-loop databases that allow any
user to leave star ratings and narrative comments on a doctor's profile. Key features
include displaying professional credentials, hospital affiliations, specialized rating
criteria (like bedside manner/wait time), and search filters by location or insurance
to aid patient discovery.

Platform: Physician Compare (CMS)
Type of review: Federal government-mandated system that is inherently verified
(closed-loop) and statistically reliable.
Key features: A U.S. government directory focused on providing quality information
for Medicare providers (though this data is often incomplete for individual
clinicians). Displays professional credentials, group affiliations, and quality
performance scores to help Medicare patients compare doctors. It does not send
surveys, but it publishes data derived from mandatory, standardized surveys (like
the CAHPS Survey), which are collected by CMS-approved vendors and are
inherently verified (closed-loop) and statistically reliable.
Consumer Assessment of Healthcare Providers and Systems
(CAHPS) surveys conducted by CMS (Medicare)
The Consumer Assessment of Healthcare Providers and Systems (CAHPS) surveys
published on Physician Compare (now part of Medicare Care Compare) represent
the gold standard for scientifically rigorous patient experience measurement.
However, even this validated system faces unique challenges.
Here is a detailed breakdown of the processes involved within the CMS CAHPS
methodology and operations:
(1) What is the process of choosing patients for these surveys? Do all
patients get invited?
No, not all eligible patients are invited; a statistically valid random sample is
selected.
●Sampling Procedure: CMS requires that a random sample of eligible
patients be drawn on a monthly basis for the survey period. This ensures the
results are representative of the entire eligible patient population and not
skewed toward those who are either extremely happy or extremely angry (as
is common with open-loop reviews).
●Eligibility: Eligibility is strictly defined and generally includes adult patients
(18 and older) who received care during a specific period. Patients with
certain conditions or complex discharge paths (e.g., discharged to hospice,
prisoners) are often excluded from the sample frame before the random
draw.
●Rationale: Random sampling prevents the provider or vendor from
intentionally choosing patients they think will give positive reviews (a form of
gaming known as "cherry-picking").
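To make the monthly random draw concrete, here is a minimal Python sketch (not CMS's or any vendor's actual code) that filters an eligible frame and samples from it; the field names, exclusion categories, and sample size are assumptions for illustration.

```python
import random

# Hypothetical patient records; field names and values are illustrative only.
patients = [
    {"id": "P001", "age": 72, "discharge_status": "home"},
    {"id": "P002", "age": 67, "discharge_status": "hospice"},  # excluded path
    {"id": "P003", "age": 17, "discharge_status": "home"},     # excluded (minor)
    {"id": "P004", "age": 81, "discharge_status": "home"},
]

EXCLUDED_DISCHARGES = {"hospice", "prisoner"}  # assumed exclusion categories

def monthly_sample(patients, sample_size, seed=None):
    """Draw a simple random sample from the eligible frame for one month."""
    eligible = [
        p for p in patients
        if p["age"] >= 18 and p["discharge_status"] not in EXCLUDED_DISCHARGES
    ]
    rng = random.Random(seed)
    # Random selection prevents "cherry-picking" likely-positive respondents.
    return rng.sample(eligible, min(sample_size, len(eligible)))

print(monthly_sample(patients, sample_size=2, seed=42))
```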
(2) Do all Physicians get surveyed?
No, participation requirements vary by entity and payment model.
●Hospitals/Plans: CMS mandates CAHPS surveys for nearly all major entities,
including hospitals (HCAHPS), Medicare Advantage (MA) plans, Home Health
Agencies (HHAs), and Dialysis Facilities.
●Individual Clinicians/Groups: CAHPS surveys for physician practices
(CAHPS for Merit-based Incentive Payment System or MIPS) are typically
administered at the group practice level (the Tax Identification Number, or
TIN), not for every single individual doctor, especially in smaller practices.
●MIPS is essentially the mechanism that shifts Medicare physician payment
from the traditional fee-for-service model to a pay-for-value model. A
clinician's performance in four categories (Quality, Cost, Promoting
Interoperability, and Improvement Activities) determines whether they
receive a positive, neutral, or negative adjustment to their Medicare Part B
payments two years later.
●CMS measures patient experience by surveying a practice's entire patient
base and calculating a single score for the whole clinic, rather than
calculating individual scores for every doctor. This method incentivizes the
entire staff (doctors, nurses, and receptionists) under the practice's tax
identification number to improve the overall patient experience.
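As a rough illustration of the group-level (TIN) scoring just described, the sketch below pools every respondent under one practice into a single top-box rate; the data shape and the "Always = top box" convention are assumptions, not the official MIPS computation.

```python
from statistics import mean

# Hypothetical responses for one practice (TIN); 1 = top-box ("Always"), 0 = otherwise.
responses_by_team = {
    "dr_smith": [1, 1, 0, 1],
    "dr_jones": [0, 1, 1],
    "front_desk": [1, 0, 1, 1, 0],
}

# All responses under the TIN are pooled into one clinic-wide score,
# so every role shares accountability for the result.
pooled = [r for scores in responses_by_team.values() for r in scores]
print(f"Practice-level top-box rate: {mean(pooled):.0%}")
```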
(3) How are the patient surveys validated to ensure there is no gaming?
CMS employs rigorous, multi-layered scientific and procedural safeguards:
●Standardization: The CAHPS surveys are highly standardized in content,
order, and response options. They are developed and maintained by the Agency
for Healthcare Research and Quality (AHRQ), a lead federal agency, which
ensures the metrics are reliably comparable and resistant to bias by
controlling three main variables: Question Content, Administration
Protocol, and Statistical Analysis.
●Approved third-party vendors who administer the surveys must meet
strict business requirements, adhere to quality assurance protocols, use
specific sampling methods, and are prohibited from using volunteers to
conduct surveys. The data is not collected by the hospital or practice,
which breaks the direct link between the provider and the collection process
and minimizes internal staff pressure on patients.
●Auditing and Validation: The Office of Inspector General (OIG) and CMS
continually perform validation checks and analyze data patterns to identify
aberrant patterns indicative of gaming and target them for deeper review.
(4) Are the patients required to disclose names or dates of encounter?
●Names/Anonymity: No. CAHPS surveys are designed to protect patient
confidentiality. Results are publicly reported only as aggregate data (usually
at the tax entity level) to prevent the identification of individual respondents.
●Date of Encounter: Yes (internally). The survey is closed-loop, meaning
the patient is identified as an eligible person who had a specific encounter
(date of service or discharge date) with the provider or group via claims data.
This internal linkage validates the patient's experience, but the raw data
(date, patient ID) remains confidential with the vendor and CMS.
(5) What are the scales used by the CAHPS survey?
The CAHPS instruments primarily use two scales:
●Frequency Scale (4-Point Likert-type): Used for experience questions (e.g.,
communication, respect). Typically Never / Sometimes / Usually / Always.
●Global Rating Scale (0–10 Numeric): Used for overall ratings (e.g., "Using any
number from 0 to 10, what number would you use to rate your personal doctor?").
The publicly reported measure—the Star Rating—is calculated on a 5-Star Scale (1
to 5 stars) based on the aggregate results of these surveys.
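To illustrate how a 0–10 global rating might roll up into a published star value, here is a toy conversion; the real CMS star methodology applies case-mix adjustment and statistical clustering, so the cut points below are purely hypothetical.

```python
def global_rating_to_stars(ratings):
    """Toy mapping from 0-10 global ratings to a 1-5 star value.

    Hypothetical fixed cut points; CMS uses case-mix adjustment and
    clustering rather than a simple average like this.
    """
    avg = sum(ratings) / len(ratings)
    if avg >= 9.5:
        return 5
    if avg >= 8.5:
        return 4
    if avg >= 7.0:
        return 3
    if avg >= 5.0:
        return 2
    return 1

print(global_rating_to_stars([9, 10, 8, 9, 10]))  # average 9.2 -> 4 stars
```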
(6) Are ratings calculated after eliminating fraud/mistaken identities?
Yes, the methodology is designed to minimize the impact of fraud and noise.
●Eliminating Fraud/Bias: The rigid scientific sampling and closed-loop
verification process, mandatory for approved vendors, makes system-wide
fraud difficult. Fraudulent reviews are eliminated before reporting by the
rigorous controls on who receives the survey.
●Mistaken Identity/Noise: The reported scores (Star Ratings) are adjusted
for statistical reliability. Measures with low statistical reliability (e.g., from
small sample sizes—like Home Health CAHPS needing 40 completed surveys)
are not reported or are flagged, effectively eliminating data that could result
from noise or single, mistaken identity cases.
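A minimal sketch of the low-reliability suppression rule, using the 40-completed-survey threshold cited above for Home Health CAHPS; the record structure is an assumption for illustration.

```python
MIN_COMPLETED_SURVEYS = 40  # threshold cited above for Home Health CAHPS

def reporting_status(measure):
    """Flag measures whose sample is too small to report reliably."""
    if measure["completed_surveys"] < MIN_COMPLETED_SURVEYS:
        return {**measure, "status": "suppressed (low reliability)"}
    return {**measure, "status": "reported"}

measures = [
    {"name": "care_of_patients", "completed_surveys": 112, "score": 88},
    {"name": "communication", "completed_surveys": 23, "score": 91},
]
for m in measures:
    print(reporting_status(m))
```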
(7) Can a provider add context to misleading patient ratings?
No, the provider cannot directly respond to patient ratings on the Medicare
Compare websites.
●HIPAA Restriction: CMS is bound by HIPAA laws, which prohibit a provider
from confirming that an individual is or was a patient. Direct public response
would violate this.
●Data Aggregation: The scores are presented as aggregated summary data
and Star Ratings for the group/facility, not as individual reviews. This design
inherently prevents a direct, one-to-one response from the provider.
(8) What channels are available for participation in the CAHPS survey?
CMS uses a mixed-mode approach to maximize response rates across the diverse
Medicare population, often targeting older demographics who may not be digitally
active.
The administration channels typically include:
●Mail: This has traditionally been the primary method, ensuring reach to
patients without internet access.
●Telephone: Used as a follow-up for patients who do not respond to the
initial mail or web invitation. This helps capture feedback from
underrepresented groups (like disabled beneficiaries).
●Web/Email: CMS is increasingly moving toward web-first mixed modes for
various CAHPS surveys (like HCAHPS and OAS CAHPS), whereby the patient
may receive an initial email invitation or mail to complete the survey online.
(9) Do the Medicare Patients use a computer to respond to the survey?
Do they use their mobile phones or is it a paper-based survey?
Patients use a mix of all three: paper, computer, and phone.
●Paper-Based: The mail component is a traditional, paper-based survey.
●Computer/Mobile: The "Web" mode allows patients to complete the survey
online, which can be done on a desktop computer or a mobile phone. The
survey design is optimized to accept digital input, but historically, the
majority of responses have come from mail and telephone due to the
demographics.
●Telephone/IVR: Patients can also respond by telephone interview with a
live surveyor or through an Interactive Voice Response (IVR) system.
(10) Are the responses provided as bullet answers to increase the
convenience of participation?
Yes, the core questions use forced-choice, preloaded answers, which are the
equivalent of bullet answers.
The questions are not open-ended. Instead, they require the patient to select from
a short, standardized scale, which maximizes comparability and convenience:
●Frequency Scales (e.g., for communication): Never / Sometimes / Usually
/ Always.
●Global Rating Scales (e.g., for overall care): A numerical scale from 0 to 10.
This structure is highly convenient and designed for quantitative analysis, but the
sheer length of many CAHPS surveys (often around 30 substantive questions)
offsets this convenience, leading to low response rates.
(11) What is the usual participation rate?
The participation rate for mandatory CMS CAHPS surveys is often low, especially
compared to commercial metrics, due to the survey length and the timing (often
weeks after the visit).
●Overall Median Response Rate: For Medicare Advantage (MA) contracts,
the median overall response rate typically hovers between 32% and 39%
(using 2024-2025 data).
●HCAHPS (Hospital): Historically, the response rate has been reported to be
around 25% to 30% and continues to show decline.
●Mode Breakdown (Example MA Contracts, 2024/2025 Median):
○Mail: ≈25%−33%
○Telephone: ≈2%
○Web/Email: ≈3%
The reliance on mail and telephone follow-up is necessary to achieve the quoted
overall rate, as the digital response rate alone is very low.
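The arithmetic behind those figures is straightforward: the overall rate is total completes divided by the sampled frame, summed across modes. The sketch below uses invented counts of the same order as the medians quoted above.

```python
# Hypothetical sample of 1,000 beneficiaries; counts are invented for illustration.
sampled = 1000
completes_by_mode = {"mail": 290, "telephone": 20, "web": 30}

total_completes = sum(completes_by_mode.values())
print(f"Overall response rate: {total_completes / sampled:.1%}")  # 34.0%
for mode, count in completes_by_mode.items():
    print(f"  {mode}: {count / sampled:.1%}")
```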
(12) How does this participation rate compare with the surveys sent by
closed-loop systems such as Zocdoc, athenahealth, etc.?
The CMS CAHPS median participation rate (approx. 25%−35%) is generally lower
than the response rates achieved by commercial, transactional, closed-loop
systems, primarily due to timing and length.
Commercial Closed-Loop (e.g., Zocdoc, athenahealth)
Response rate: 40%–60% (often higher for short surveys)
Key difference: Short and immediate. These surveys are typically 1–3 questions
long (NPS focus) and are sent immediately after the encounter, maximizing patient
recall and willingness to respond.

CMS CAHPS (mandatory)
Response rate: 25%–35% (median)
Key difference: Long and delayed. These surveys are lengthy (up to 30+ questions)
and are administered weeks after the service, leading to higher abandonment rates.
The trade-off is clear. Commercial systems prioritize a high volume of quick,
loyalist feedback, while CMS prioritizes scientifically rigorous, representative, and
standardized data, accepting a lower response rate as the cost of that scientific rigor.
Summary: The system is designed for scientific rigor, not ease of response.
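Because the commercial closed-loop systems above lean heavily on Net Promoter Score (NPS), the standard NPS calculation is worth spelling out: the share of promoters (9–10) minus the share of detractors (0–6) on a 0–10 "likelihood to recommend" question. The sample ratings below are made up.

```python
def net_promoter_score(ratings):
    """NPS = % promoters (9-10) minus % detractors (0-6) on a 0-10 scale."""
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return 100 * (promoters - detractors) / len(ratings)

print(net_promoter_score([10, 9, 8, 7, 6, 10, 3, 9]))  # -> 25.0
```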
(13) How did other industries solve for similar poor outcomes?
The highly standardized methodology of CMS CAHPS, while ensuring reliability,
creates its own set of challenges (poor outcomes):
1. Survey Fatigue / Low Response Rate: Lengthy questionnaires (HCAHPS is 32
items) lead to low response rates, especially among younger/healthier populations.
Solution (Online Retail / Food Delivery): Prioritize 1–3 question transactional
surveys (like NPS) to boost participation to 40%–60%.

2. Lack of Timeliness / Actionability: Surveys are sent weeks or months after the
discharge date (e.g., 48 hours to 35 days after an ED visit), making the feedback
historical.
Solution (Airlines / Hotels): Use immediate feedback loops (e.g., an in-app survey
upon landing or checkout) to capture real-time data that staff can act on
immediately.

3. Limited Diagnostic Value: The high standardization and long lag time make it
hard for providers to correlate a low score with a specific, recent operational
failure to fix (e.g., "The new receptionist was slow yesterday").
Solution (Online Retail / Tech): Use follow-up questions customized to the service
interaction (e.g., "Rate your experience with the support chat you just used").

4. Incomplete Reporting / Coverage: Data is often incomplete for individual
clinicians, particularly those in small practices or those who don't meet minimum
case thresholds, defeating the purpose of comparing individual doctors.
Solution (Marketplaces such as Uber / Airbnb): Require every single service
transaction to result in a rating for the individual provider, ensuring high coverage.

5. Reverse Robin Hood Effect / Equity Barrier: The Reverse Robin Hood Effect
occurs when low patient experience scores, often from socioeconomically vulnerable
populations who may not respond to surveys, trigger reduced Medicare payments
for their providers, thereby financially penalizing practices serving the poor while
rewarding those serving the affluent, who may respond quickly.
Solution (Hotels / Airlines, peer-group benchmarking): Rating systems compare
entities within similar peer groups (e.g., economy-class airlines vs. first-class
airlines; mid-scale hotels vs. budget motels) and geographic regions. A 4-star rating
at a city-center business hotel is benchmarked against other city-center business
hotels.

6. Attribution Error and Lack of Accountability: Attribution Error occurs when
patients misdirect feedback, often blaming the health plan for provider actions (wait
times) or the provider for plan actions (coverage denial), yielding inaccurate,
non-actionable data for improvement.
Solution (Gig Economy such as Uber / Airbnb, separate rating systems): In complex
ecosystems, the different providers are rated on their own turf, ensuring feedback
cannot be confused. The driver (service provider) and the app/platform (the
environment/user interface) are rated entirely separately; a driver's poor star rating
won't affect the platform's NPS score, and vice versa.

7. Failure to Capture Modern Care Delivery: The standardization required for
CAHPS surveys is too slow, causing them to lag behind modern care models
(telehealth, team-based care). This devalues innovation and yields data irrelevant
to the patient's current experience.
Solution (Online Banking / Finance): Focus on behavioral metrics (e.g., "Were you
able to complete the task?") rather than subjective opinions, keeping feedback
relevant and actionable despite rapid changes in technology (like telehealth) while
preserving the core service outcome.

8. Limited Actionability (Aggregated Data): The CMS data is too highly aggregated
(e.g., group-level) to protect privacy. This low utility prevents front-line staff from
identifying the specific provider, office, or day that caused a low score, hindering
quality efforts.
Solution (Customer Service Operations, proposed for healthcare): The closed-loop
patient feedback system should mandatorily tag each response with the specific
attending provider's ID, the clinic location, the date/time of the visit, and the front
desk staff shift. While CMS may only publish the aggregate group score, the
practice's internal EHR/PM system gains granular, real-time data to hold the right
person accountable for the score fluctuation.
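To show what the peer-group benchmarking idea borrowed from hotels and airlines could look like for the Reverse Robin Hood problem, the sketch below compares each practice to the mean of its own socioeconomic peer group rather than to a single national benchmark; the grouping variable and data are simplified assumptions, not a validated risk-adjustment model.

```python
from collections import defaultdict
from statistics import mean

# Hypothetical practices with raw scores and a socioeconomic peer-group label.
practices = [
    {"name": "A", "ses_group": "high", "raw_score": 92},
    {"name": "B", "ses_group": "high", "raw_score": 88},
    {"name": "C", "ses_group": "low", "raw_score": 78},
    {"name": "D", "ses_group": "low", "raw_score": 83},
]

# Group scores by SES peer group, then benchmark each practice against its own
# peers so practices serving vulnerable populations are not compared with
# affluent-area practices.
scores_by_group = defaultdict(list)
for p in practices:
    scores_by_group[p["ses_group"]].append(p["raw_score"])

for p in practices:
    peer_mean = mean(scores_by_group[p["ses_group"]])
    delta = p["raw_score"] - peer_mean
    print(f"Practice {p['name']}: raw {p['raw_score']}, vs. peer mean {delta:+.1f}")
```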
(14) Recommendations for making CAHPS more actionable:
The CAHPS system requires modernization to remain effective. The following table
outlines solutions drawn from consumer industries to leverage existing strengths,
close current gaps, and enhance data utility for both CMS and providers.
1. Poor outcome: Survey Fatigue / Low Response Rate
Core strength to leverage: Standardization (consistency of question types)
Proposed solution for CMS: Mandatory Transactional NPS/CAHPS Hybrid: adopt a
3–5 question, mobile-first model (e.g., NPS + 2 CAHPS experience drivers) sent
immediately post-visit.

2. Poor outcome: Lack of Timeliness / Historical Data
Core strength to leverage: Verification (linkage to claims data)
Proposed solution for CMS: Event-Triggered Digital Delivery: shift surveys to be
event-triggered (sent within 24–48 hours of service completion) using web/SMS to
capture real-time feedback.

3. Poor outcome: Limited Diagnostic Value
Core strength to leverage: Standardization (CAHPS focus on observable experience)
Proposed solution for CMS: Contextual Question Logic: use the patient's recent
service interaction to trigger customized follow-up questions relevant to that
specific touchpoint (e.g., a short module on telehealth quality).

4. Poor outcome: Incomplete Reporting/Coverage (no individual clinician scores)
Core strength to leverage: Scientific Reliability (minimum case thresholds needed
for validity)
Proposed solution for CMS: Required Sub-Group Reporting: require large groups
(TINs) to publish de-identified sub-group/department scores when statistically
viable, rather than a single, diluted group score.

5. Poor outcome: Reverse Robin Hood Effect / Equity Barrier
Core strength to leverage: Value-Based Payment Tie (financial incentive for quality)
Proposed solution for CMS: Risk-Adjusted Benchmarking & Equity Bonus: (1)
statistically risk-adjust public performance scores for patient Socioeconomic Status
(SES); (2) create MIPS bonuses for implementing equity-focused activities (e.g.,
social needs screening).

6. Poor outcome: Attribution Error and Lack of Accountability
Core strength to leverage: Standardization (clear, objective experience questions)
Proposed solution for CMS: Mandatory Dual Rating Systems: mandate separate,
published ratings for the Health Plan (coverage/administration) and the Provider
Group (care delivery/wait time). This prevents misattribution.

7. Poor outcome: Failure to Capture Modern Care Delivery
Core strength to leverage: Adaptability (ability to reflect changing practice)
Proposed solution for CMS: Focus on Behavioral Metrics: revise core questions to
focus on behavioral outcomes ("Were the instructions clear and easy to access?")
rather than outdated channels ("Did you like the video chat?").

8. Poor outcome: Limited Actionability (Aggregated Data)
Core strength to leverage: Scientific Reliability (high statistical confidence in group
scores)
Proposed solution for CMS: Real-Time, Granular Tagging (internal only): mandate
that approved vendors collect and report data internally to the provider (TIN),
tagged by the specific attending provider's ID and clinic location, separating
actionable micro-data from public macro-data.
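As a sketch of the internal-only granular tagging proposed above: each response carries the attending provider's ID, clinic location, visit date/time, and front-desk shift for internal drill-down, while only the aggregate TIN score is surfaced publicly. The field names and data are illustrative assumptions, not a prescribed CMS schema.

```python
from dataclasses import dataclass
from datetime import datetime
from statistics import mean

@dataclass
class TaggedResponse:
    """One closed-loop survey response tagged for internal accountability."""
    tin: str                  # practice identifier (published level)
    provider_id: str          # internal only
    clinic_location: str      # internal only
    visit_datetime: datetime  # internal only
    front_desk_shift: str     # internal only
    score_0_to_10: int

responses = [
    TaggedResponse("TIN-123", "NPI-111", "Downtown", datetime(2025, 10, 1, 9), "AM", 9),
    TaggedResponse("TIN-123", "NPI-222", "Downtown", datetime(2025, 10, 1, 14), "PM", 4),
    TaggedResponse("TIN-123", "NPI-111", "Westside", datetime(2025, 10, 2, 10), "AM", 10),
]

# Public view: one aggregate score for the whole TIN.
print("Published TIN average:", mean(r.score_0_to_10 for r in responses))

# Internal view: drill down by provider to locate the source of a score dip.
for npi in sorted({r.provider_id for r in responses}):
    scores = [r.score_0_to_10 for r in responses if r.provider_id == npi]
    print(npi, mean(scores))
```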
Conclusion: To communicate actionable insight through CAHPS
To strengthen CAHPS, CMS must embrace agile digital methods and
transactional surveys for timely, higher-volume feedback. Crucially, scores
must be risk-adjusted for socioeconomic factors to eliminate the Reverse
Robin Hood Effect, ensuring that the system reliably rewards quality care for
all populations while providing actionable, granular data to clinicians.