In this interview, Andrew Kucheriavy talks about a pioneering framework for quantifying the digital experience of patients in healthcare. Andrew is Intechnic’s Chief Experience Officer and one of the industry’s leading UX experts. He was the 9th professional in the world to receive Master UX Certification, earning him the nickname UX Master.
For over 25 years, Andrew and his team have been leading UX solutions for global brands, including a UX transformation and enterprise-wide UX strategy for the world's largest healthcare company.
Andrew has extensive experience creating world-class solutions for patients and providers, and this time, we are speaking about the new digital patient experience scale to learn what went into developing this framework, why it is critical to the industry, and how it is best used.
Alla: Andrew, thank you for finding time to chat today. This new framework is starting to get some interest, so I’d love to find out more and share this information with our audience, who are also interested in learning about it.
Andrew: Of course! I am happy to help and tell you all about DES/P. Hopefully others in the industry will find this framework useful as well.
Alla: Great, let’s begin by defining what the new scoring framework is.
What is DES/P?
Digital Experience Scale for Patients (DES/P)™ was developed by our team of UX researchers because we felt there was an ever-growing need to quantify and measure the digital experience of patients in the healthcare and pharmaceutical industries. This framework and its criteria are based on extensive UX research, patient interviews, and user testing with hundreds of patients over the course of 25+ years.
How can DES/P improve Patient UX in Healthcare?
You can't improve what you don't measure. If you're interested in achieving patient centricity, you need to be able to quantify and measure the effectiveness of your digital experience, whether this is a website, an app, or a portal.
DES/P allows you to:
- Measure patient centricity and effectiveness of your websites, apps, or portals
- Make a strong business case for digital improvements, showing data for what is/isn't working for patients
- Prove ROI and that your investment was worthwhile
- Benchmark against the competition
Let’s dive a little deeper into it. When you set business objectives, you want to make sure that they are aligned with those of your patients. Often, organizations don't fully understand patients’ digital experience needs, yet the digital experience can greatly affect patients’ overall experience, which in turn influences their adherence to therapy and ultimately their health outcomes.
This framework was created to objectively measure and quantify the patient experience, so that you fully understand how well your patients can interact with the various digital assets your company produces and how well these interactions satisfy their needs.
It allows you to compare and contrast, for example, to measure before and after a redesign. You can benchmark the starting point, that is, how good or bad the user experience was before you started redesigning it, and then measure the improvement over time.
An essential part of any successful project is to be able to establish a benchmark as well as set objectives and measure whether you're hitting those objectives throughout the entire project.
Who uses DES/P and for what purpose?
Healthcare companies, in-house teams (marketing, IT, patient support, patient experience, etc.), agencies of record, consultants. Basically, any team or individual interested in measuring and improving the patient digital experience can use DES/P. We see patient-centric, research-driven teams embracing this framework so they can objectively quantify and measure the effectiveness of their patients’ experiences.
DES/P allows you to benchmark not only before and after a redesign, but also within the industry, so you can better understand how your experience compares to that of your competitors.
Another point that I want to add here is that objectively measuring user experience through user testing with real patients helps to avoid bias. This is a mathematical model that is designed to accurately and precisely measure patient experience and avoid relying on opinions or inherent biases of the internal stakeholders.
Why was DES/P created?
Prior to DES/P, there was no modern framework that accurately measured digital patient experience. In our many years of conducting studies with patients, everything from user testing to interviews to validating research and creating strategy, we always felt there was no framework that accurately measured the modern digital experience of patients. The System Usability Scale (SUS) was created in 1986 and hasn’t changed much since; it is also not healthcare-specific. Net Promoter Score (NPS) is also generic and measures more of a “marketing stickiness.”
So, the scoring models that most healthcare companies use were not designed for healthcare, and they were not designed for a modern digital experience. These models don't measure many things that are instrumental to a world-class patient experience, especially a modern digital one. They don't measure, for example, how empathetic a patient experience is, or whether the quality of an experience can help get a patient on therapy and improve health outcomes.
Over the years of interviewing patients, we’ve learned what they want, and the things that they want are not necessarily measured by those other models. So, we felt an urge to create our own framework in order to accurately measure a true digital patient experience.
How does DES/P work?
DES/P measures the effectiveness of the patient digital experience by evaluating 12 distinct criteria on a scale from 1 (poor) to 10 (excellent). The criteria are based on user research, and I can expand on them later.
The average score is then calculated across all criteria to produce a DES/P score representing the overall experience of a patient. Criteria are also grouped into three categories: Value, Simplicity, and Connectivity. Each category also has its own score, allowing you to pinpoint “problematic” areas of the patient experience.
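For illustration, here is a minimal Python sketch of how such a score could be tallied. The criterion names, ratings, and the grouping of four criteria per category are hypothetical; the interview does not enumerate the 12 criteria themselves.

```python
# Hypothetical sketch of DES/P scoring. Criterion groupings and ratings
# below are illustrative only; the 12 actual criteria are not listed here.

from statistics import mean

# One patient's ratings on a 1 (poor) to 10 (excellent) scale,
# grouped into the three DES/P categories.
ratings = {
    "Value":        [8, 7, 9, 8],   # hypothetical Value criteria
    "Simplicity":   [6, 7, 7, 8],   # hypothetical Simplicity criteria
    "Connectivity": [9, 8, 7, 8],   # hypothetical Connectivity criteria
}

# Per-category scores help pinpoint "problematic" areas of the experience.
category_scores = {cat: mean(scores) for cat, scores in ratings.items()}

# The overall DES/P score is the average across all 12 criteria.
all_criteria = [s for scores in ratings.values() for s in scores]
desp_score = mean(all_criteria)

print(category_scores)        # {'Value': 8.0, 'Simplicity': 7.0, 'Connectivity': 8.0}
print(round(desp_score, 1))   # 7.7
```

Averaging per category first makes it easy to see which of Value, Simplicity, or Connectivity is dragging the overall score down.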
Who came up with the criteria?
Patients did! We've interviewed hundreds of patients over the course of many years, and we kept hearing about different aspects of their experience that ultimately boiled down to these 12 distinct criteria as something that an average patient wants and expects in their digital journey.
The criteria, or their relative importance, may vary from patient to patient. However, over multiple studies, we were able to validate that ultimately these 12 criteria matter the most. If those 12 fundamental patient needs are addressed, a company can expect to be rated as patient-centric and receive a good score for overall patient satisfaction with its digital experience.
Who assigns scores to each of the criteria?
Also, patients, and patients only! It is critical that the questionnaire is completed by real patients representing your demographic, and not by the internal stakeholders. Patients have unique needs, and failure to test your website or app with real patients often results in an organization-centric, and not a patient-centric, experience.
How many patients should participate?
An average score from at least 5 patients will yield a reasonably accurate result; testing with 5 patients has been shown to uncover about 85% of usability problems. Studies conducted with 5 to 15 patients will yield greater statistical accuracy (90–95%). If you have an outlier (a result that deviates from the rest by at least 50%), it is advisable to increase the sample size to at least 15 patients to ensure the accuracy of the study.
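As a rough illustration, here is a small Python sketch of aggregating scores across participants and flagging a possible outlier. Treating "deviates from the rest by at least 50%" as a relative deviation from the mean of the other participants' scores is an assumption for this example, not a formal part of the framework.

```python
# Hypothetical sketch: average a study's DES/P scores and flag outliers.
# "Outlier" here means a score at least 50% away from the mean of the
# other participants' scores (an interpretation, not an official rule).

from statistics import mean

def desp_study_average(patient_scores, outlier_threshold=0.5):
    """Return the average DES/P score and any flagged outlier scores."""
    outliers = []
    for i, score in enumerate(patient_scores):
        others = patient_scores[:i] + patient_scores[i + 1:]
        baseline = mean(others)
        if abs(score - baseline) / baseline >= outlier_threshold:
            outliers.append(score)
    return mean(patient_scores), outliers

avg, outliers = desp_study_average([7.2, 6.8, 7.5, 3.1, 7.0])
print(round(avg, 1), outliers)  # 6.3 [3.1] -> consider testing 15+ patients
```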
Who should be conducting the study?
Ideally, you want an outside trained and DES/P-certified facilitator conducting the study. It is important that you do not have the same team members who designed or built the experience conducting the study to avoid potential bias. If you need help conducting a study, Intechnic researchers have 25+ years of experience conducting user testing of healthcare and pharma websites and apps with real patients.
How does the scoring system work?
Nine to 10 is considered an exceptional, world-class, fully patient-centric experience. Anything over 8 is considered an outstanding, best-in-class experience that is mostly patient-centric and addresses most patient needs.
A score of 7 is considered a good, acceptable experience for most patients. Six is considered the lowest passing grade, satisfactory for some, but not all, patients.
Anything below 6 signals multiple UX or usability problems; it is not a satisfactory or passing score and suggests that the patient UX needs improvement. Anything below 3 signals serious UX or usability problems that should be addressed immediately.
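To make the bands concrete, here is a short Python sketch that maps a DES/P score to the interpretation described above. How to treat values between the named thresholds (for example, a 7.5 or exactly 8.0) is an assumption made for illustration.

```python
# Hypothetical mapping of a DES/P score to the interpretation bands above.
# Boundary handling between named thresholds is an assumption.

def interpret_desp(score: float) -> str:
    if score >= 9:
        return "Exceptional, world-class, fully patient-centric"
    if score > 8:
        return "Outstanding, best-in-class, mostly patient-centric"
    if score >= 7:
        return "Good; acceptable for most patients"
    if score >= 6:
        return "Lowest passing grade; satisfactory for some patients"
    if score >= 3:
        return "Not passing; patient UX needs improvement"
    return "Serious UX problems; address immediately"

print(interpret_desp(7.7))  # Good; acceptable for most patients
```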
Any other recommendations you can give to teams seeking to achieve patient centricity?
Yes. The results of any design project or effort to improve patient experience can be invalidated by losing objectivity and introducing bias. It is extremely important for teams to ensure that patient experience is measured objectively; hence the framework we created.
By testing with patients, and patients only, you remove bias that might come from stakeholders within your team and avoid “designing by committee.” You are no longer relying on the loudest or most senior voice in the room to speak on behalf of patients. Instead, you rely only on research, data, and the voice of the patients to tell you whether the experience you're creating is good enough for them.
It is also important to avoid having the team members who originally created the experience conduct these studies. That is where we sometimes see leading questions, meaning that some of the objectivity is lost.
So, the best way to conduct a study like this is not only to have patients fill out the questionnaire but also to have an independent facilitator who was not part of the team that originally designed the experience. This will ultimately help prevent bias from creeping back into the study and potentially invalidating your results.
Of course, as a UX research team and the team behind this framework, we're happy to help. If you have any questions about the framework or you need help conducting a study, please don't hesitate to reach out to our UX team. We're happy to help!