In the business world, tracking, analyzing, and visualizing data can lead to critical insights such as which products are profitable, or during which quarter sales are highest. In the world of healthcare, data is even more important: Accurate data based on precise metrics has the power to dramatically alter health outcomes, determine whether a drug is effective or not (or even harmful), and save lives. For example, clinical research data provides the basis of analytic validity, clinical validity, and clinical utility on which decisions are made about which drugs are FDA approved versus which are not. This data has the power to make or break the adoption of approaches to healthcare in every possible area, including mental health. Unfortunately, health data – both research data and patient data – are stored in a highly decentralized way, with personal health data often difficult to access even for the patient. Furthermore, most mental health practitioners do not reliably track their own patients’ data, and those who do often fail to standardize their results or to implement improvements to their practice based on the outcomes. Without the ability to define, adjust, and iterate on data stored in a centralized, secure database, developing high-quality, evidence-based practices proves difficult, if not impossible.
The Problem: A Lack of Evidence-Based Care
If you were to break your arm and go to a hospital, you would see a doctor and receive a standard treatment protocol that likely wouldn’t differ much from what any other doctor or facility would recommend. Your recovery would be monitored, and you would be given next steps at each stage of your recovery. Unlike in physical medicine, mental health treatment outcomes, protocols, and follow-up procedures are often not measured, precise, or standardized across the field. In fact, it is extremely rare for therapists and mental health centers to measure therapeutic outcomes at all. It’s like setting your broken arm and then never taking another x-ray to make sure it worked! In intensive outpatient care, patients are often assessed at admission and intake, when their treatment plans are updated every 3-6 months, and at discharge from the program, but unfortunately this data is often locked up in documents or dispersed across spreadsheets and tables, so properly analyzing and learning from it can prove challenging. Furthermore, while there are plenty of well-researched mental health studies, found in journals such as World Psychiatry and BMJ’s Evidence-Based Mental Health, there is no formal pathway to bring those studies to life in the practice of mental health care.
Part of why outcomes are not measured is that they can be very difficult to measure. In practice, even the definition of patient success is ill defined, and crude mechanisms are often used to track outcomes. If a patient self-reports “feeling good” one day but “feeling bad” the next, is such a report even meaningful in terms of the patient’s response to care, the short- and long-term impacts of care, and the overall success of the treatment? Does the occurrence of relapse or worsening symptoms – very common even for people who “fully recover” – indicate the treatment was ineffective, or do we need to rethink how we measure “effectiveness”? Without consistent data tracking, proper analysis, and well-defined success metrics, mental health care will continue to be practiced – in most instances – without a real understanding of how well it is working.
The lack of measurement-based care in mental health care is contributing to the high cost of care, confusion in care navigation, and diagnostic confusion. Without the ability to track which protocols are working best under what conditions, we aren’t able to enhance the quality of the field. Without data collection, it is challenging to learn from our mistakes and make educated steps forward. Insurance companies are able to get away with extremely low reimbursement rates in part because clinics aren’t able to provide precise metrics.
With a lack of measurement-based care across mental health centers, it is very challenging to hold providers accountable for the quality of their care. One way we’ve attempted to tell the quality of centers apart is through accrediting institutions such as JCAHO. Making sure the facility you’re attending is JCAHO accredited can offer some assurance of its quality – but not always! Although these centers are often highly regulated, many regulations exist to ensure the physical safety of clients rather than to produce strong recovery outcomes.
The Solution: Define, Adjust, and Iterate on a Centralized Architecture
Define: Precise Success Metrics
It’s difficult to track success unless it is precisely defined and measurable. In the world of mental health, defining the success of a treatment protocol or program essentially means measuring the well-being of a human, which is in and of itself difficult to do objectively. However, there are several patient-reported outcome measures, such as the PHQ-8 or PHQ-9, the EDE-Q (Eating Disorder Examination Questionnaire), the ASI-Lite (Addiction Severity Index), and the OQ-45.2, and many professionals also use a patient satisfaction survey. These tools are not perfect, but they have proven to be statistically and clinically reliable. Many therapists who collect this data wisely use it to measure themselves, and where they see internal deficiencies, they try to improve. If nothing else, the data collected serves as one of several pieces of feedback that therapists can use to better support their patients. These surveys tell us whether a patient’s mood has improved over time and whether they are satisfied with their relationships, employment, friendships, and so on.
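Part of what makes these instruments useful is that they reduce well-being to a precise, repeatable number. As a minimal sketch, here is how PHQ-9 scoring works: nine items, each rated 0-3, summed to a 0-27 total that maps to standard severity bands. The function names are our own; only the scoring scheme comes from the instrument itself.

```python
def score_phq9(responses):
    """Sum a PHQ-9: nine items, each rated 0-3
    ("not at all" to "nearly every day"), for a 0-27 total."""
    if len(responses) != 9 or any(r not in (0, 1, 2, 3) for r in responses):
        raise ValueError("PHQ-9 requires nine item scores, each 0-3")
    return sum(responses)

def phq9_severity(total):
    """Map a PHQ-9 total to the standard severity bands."""
    for cutoff, label in [(4, "minimal"), (9, "mild"), (14, "moderate"),
                          (19, "moderately severe"), (27, "severe")]:
        if total <= cutoff:
            return label
    raise ValueError("PHQ-9 total must be 0-27")

# Example: one patient's intake responses
total = score_phq9([2, 2, 1, 3, 1, 0, 2, 1, 1])  # 13 -> "moderate"
```

Administered at intake and again at regular intervals, a number like this lets a practitioner see whether symptoms are actually trending down, rather than relying on “feeling good” versus “feeling bad.”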
While these surveys are helpful tools, outcomes need to be standardized and risk-adjusted in order to be of value to anyone else outside of an individual practitioner. For an individual therapist, performing this level of analysis and reporting the data is not common practice, nor does it really make sense for someone working at a small or even medium-sized practice because they cannot leverage a large enough dataset to uncover statistically significant results. With the advent of telemedicine for mental health, we are well-positioned to lead the fields of psychology and psychiatry into a new age of evidence-based practice. Antelope aims to take a different, more systematic approach. To understand better how this could be done, let’s explore standardization and risk adjustment.
Adjust: Standardization and Risk Adjustment
Currently, when outcomes are measured, those measurements are not risk-adjusted or standardized, so they are only relevant for that single practitioner. Standardization means we are all using (and publishing) the same tools for the same patients in the same way at the same time. For example, if we are using the EDE-Q and our friends across the street are using the EAT-26 (another tool used to measure eating disorder progress), the two cannot be compared against one another, and any comparative analysis is therefore useless. Once we can all agree on which tool to use, we must standardize how we use it. Are we asking questions in an email or in person? Are we asking them at the same point in recovery (2 days post-treatment, or 30, or 265)? If these points aren’t standardized, the results cannot be accurately compared. Agreeing on which tools to use and how to use them is a process that we hope will happen soon; once we all agree, the rest is fairly straightforward.
Risk adjustment is a bit more complicated. It takes into consideration the underlying health status of the patient being measured. The more complicated the patient’s condition, the “riskier” they are, and therefore the more resources will be needed to treat them effectively. Further, the more complicated the patient, the less improvement we might generally expect to see – or, at the very least, improvement in a complex patient will look different. For example, imagine an addiction facility that publishes a 75% abstinence rate at 360 days post-discharge, but generally treats patients who have never been in treatment before, are relatively young, have not struggled with the disease for long, and have no co-occurring depression, anxiety, or trauma. What if another provider – with older clients who have generally been in and out of treatment their entire lives and who have a host of medical comorbidities – uses the same tool, measures the same way, and shows a 55% abstinence rate? Would we say this provider is producing lower quality work because their abstinence rate is 55% rather than 75%? Probably not. In fact, we might say the opposite: the second provider could well be doing better work. At first glance, though, that 75% looks fantastic.
Some of the ways we can begin introducing risk adjustment can be by asking:
- What level of care is the patient coming from previously? (Inpatient, outpatient, detox?)
- How many diagnoses does the patient have? (The more they have, the higher risk they are.)
- Has the patient been hospitalized for their addiction/mental health issue in the past 5 years?
- Does the patient have a supportive family?
- Any history of overdose or suicide attempts?
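One crude way to operationalize the questions above is an additive risk score that buckets patients into tiers, so outcomes are compared only within similar groups. The weights, field names, and tier cutoffs below are purely illustrative assumptions for this sketch, not a validated clinical model.

```python
def risk_score(patient):
    """Toy additive risk score built from the intake questions above.
    All weights are invented for illustration, not clinically validated."""
    score = 0
    # Prior level of care: inpatient suggests higher acuity than outpatient
    score += {"outpatient": 0, "detox": 1, "inpatient": 2}[patient["prior_level_of_care"]]
    # More co-occurring diagnoses -> higher risk (capped at 4)
    score += min(patient["num_diagnoses"], 4)
    score += 2 if patient["hospitalized_past_5y"] else 0
    score += 0 if patient["supportive_family"] else 1
    score += 3 if patient["overdose_or_attempt_history"] else 0
    return score

def risk_tier(patient):
    """Bucket patients so outcomes are reported per tier, not as one headline number."""
    s = risk_score(patient)
    return "low" if s <= 2 else "moderate" if s <= 5 else "high"
```

Two programs could then each publish abstinence rates per risk tier, which is what would make a 75%-versus-55% comparison like the one in the previous section meaningful rather than misleading.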
Iterate: Use the Data to Improve
Once we finally have clear success metrics, standardization, and eventually some form of risk adjustment model, outcomes will have the potential to provide far more value to patients, their families, and their communities. Doing so requires distilling the insights from the data into actionable conclusions. For example, suppose that once enough data has been collected to be statistically meaningful, DBT (Dialectical Behavior Therapy) is found to produce the best outcomes for teens aged 15-17 with BPD (Borderline Personality Disorder), while younger teens have better outcomes after mentalization-based treatment. With this data tracked in a centralized database and adjusted as described previously, we could identify the trend, shift our approach to each population accordingly, and then see whether overall outcomes improved. Data tracking can also be enhanced by maintaining a relationship with the client just after they leave the IOP program, as this is often the most vulnerable time for a teen in recovery. By doing so, Antelope will be able to improve our program (e.g. by changing how we discharge clients from the program) based on informed decisions.
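The kind of cohort comparison described above boils down to a simple aggregation once the data lives in one place: group standardized outcome records by treatment and age band, then compare average improvement. The record fields and the improvement metric (drop in a pre/post symptom score) are assumptions for illustration.

```python
from collections import defaultdict

def mean_improvement_by_cohort(records):
    """Average pre-to-post symptom-score improvement per
    (treatment, age_band) cohort. Fields are illustrative."""
    sums = defaultdict(lambda: [0.0, 0])  # key -> [total improvement, count]
    for r in records:
        key = (r["treatment"], r["age_band"])
        sums[key][0] += r["score_pre"] - r["score_post"]  # drop in symptom score
        sums[key][1] += 1
    return {key: total / n for key, (total, n) in sums.items()}

records = [
    {"treatment": "DBT", "age_band": "15-17", "score_pre": 20, "score_post": 9},
    {"treatment": "DBT", "age_band": "15-17", "score_pre": 18, "score_post": 11},
    {"treatment": "MBT", "age_band": "12-14", "score_pre": 21, "score_post": 10},
]
improvement = mean_improvement_by_cohort(records)
```

In practice the comparison would also need the risk adjustment and significance testing discussed earlier, but the point stands: with centralized, standardized records, spotting which protocol works best for which population becomes a query rather than a research project.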
This overall approach to data has the potential to benefit the healthcare system as a whole. Concepts like value-based care have recently entered the healthcare lexicon for good reason: Much of healthcare is not effective at actually improving patients’ health, due in part to misaligned incentives between physician, patient, and insurance provider. Once we are able to measure value, we can transform our insurance and healthcare system from one centered on quantity (i.e. fee-for-service) to one centered on quality. In many other areas of medicine, this transition has been shown at major health institutions to align the interests of the patient, provider, and payer, resulting in improved quality at a lower cost.
Measurement-based care is the future
Measurement-based care is the future of our mental health system – introducing even a few simple practices could transform the mental health care landscape. Antelope Recovery is on a mission to bring rigorous data practices into the field of mental health. We aim to track patient demographics, treatment protocols and timelines, and success metrics based on the leading evidence-based assessments. At Antelope, we are also well aware that progress and healing are non-linear, and we are optimistic that through virtual care, we can begin to innovate in this domain.
Please look out for future blogs on mental health data focusing on security, privacy, need assessments, and more!