In a value-based care system, the measures that determine the level of payment must accurately reflect the quality of care delivered, produce comparable and consistent results against the measure’s intent and be actionable to drive care improvement. In addition, quality measures must support the Quadruple Aim: improve health outcomes, improve patient experience, decrease clinician burnout and lower healthcare costs.
While all currently available methods of measurement have limitations, the accuracy of quality measures and the ability to report them across the U.S. are best supported by capturing the actual clinical data elements in digital quality measures (dQMs) that can validly, reliably and efficiently determine the extent to which a quality goal was met. Against a backdrop of comments and some pushback from healthcare stakeholders, the Centers for Medicare & Medicaid Services (CMS) continues to ramp up preparations for a major change in the way quality is measured in the United States: the Merit-based Incentive Payment System (MIPS) Value Pathways (MVPs).
The first major change in quality measurement by CMS is the adoption of MVPs. MVPs are disease-state- or specialty-specific subsets of measures and activities that integrate quality performance, care-enhancing improvement activities and cost. Instead of choosing measures and activities to report for each performance category, MIPS participants would report on one MVP, including all measures and activities within it. An endocrinologist, for example, would likely select the diabetic care MVP, which will likely include quality measures around HbA1c and blood pressure control, improvement activity measures determining whether the clinician offers glycemic control services or case management, and measures determining how meeting the standard of care affects cost.
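To make the mechanics concrete, the sketch below shows how a simple proportion-style quality measure of the kind an MVP might contain, loosely modeled on the familiar HbA1c poor-control concept, could be computed. The patient structure, age band and 9% threshold are illustrative assumptions, not a CMS measure specification.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Patient:
    id: str
    age: int
    has_diabetes: bool
    hba1c_results: list  # list of (result_date, value_in_percent) tuples

def hba1c_poor_control_rate(patients):
    """Proportion-style measure sketch:
    denominator = diabetic patients aged 18-75,
    numerator   = those whose most recent HbA1c is above 9% (poor control).
    Eligibility rules and threshold are illustrative only."""
    denominator = [p for p in patients if p.has_diabetes and 18 <= p.age <= 75]
    numerator = [
        p for p in denominator
        if p.hba1c_results and max(p.hba1c_results)[1] > 9.0
    ]
    return len(numerator) / len(denominator) if denominator else None

cohort = [
    Patient("a", 54, True, [(date(2021, 1, 5), 8.2), (date(2021, 7, 9), 9.6)]),
    Patient("b", 61, True, [(date(2021, 6, 1), 7.1)]),
    Patient("c", 44, False, []),
]
print(hba1c_poor_control_rate(cohort))  # 0.5: one of two eligible patients is poorly controlled
```

Even this toy example shows why the underlying data elements (diagnosis, age, lab result and result date) must be captured consistently for the measure to be comparable across clinicians.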
In addition to the shift to MVPs, CMS is looking to expand its use of submitted claims to drive overarching population health measurement. It also intends to remove large-group reporting of mostly primary care measures via the CMS web interface from the MIPS program.
The adoption of MVPs and the removal of large-group reporting via the CMS web interface from MIPS demonstrate that CMS is looking for clinician-level measurement and more accountability for adherence to the standard of care and improved outcomes. From a clinician perspective, the intended use of MVPs is therefore a positive step toward making quality measures meaningful. MVPs would allow clinicians to report measures that are relevant to their patient populations, receive feedback for identifying improvement opportunities and track their improvement progress.
CMS has also identified the need for specialty MVPs to be linked to clinicians' board maintenance-of-certification requirements. Such a linkage would help decrease physician burden by aligning regulatory-driven quality reporting with license competency requirements.
HIMSS continues to support a focus on reducing clinician burden and making quality measurement a more meaningful driver of improving patient outcomes. Insights from the HIMSS Davies Award of Excellence program and HIMSS Analytics Maturity Models repeatedly demonstrate that individualized measurement of clinician adherence to the standard of care paired with accountability for meeting that standard is a fundamental driver for quality improvement. The MVPs, as described by CMS, align with this HIMSS-supported approach to the extent that they allow eligible clinicians to focus on episodes of care, rather than narrow and isolated care processes.
Although MVPs represent an ideal future state, there are significant barriers to their widespread use in support of value-based care models. The most immediate barrier is the absence of fully tested and field-tested electronic quality measures, including dQMs, for a sufficiently broad set of specialties. For specialties without quality measures that can be incorporated, MVPs will require either new measure development or manual abstraction of clinical data from the record, a significantly labor-intensive process. There is therefore an urgent need to ramp up the measure development process for specialties with measure gaps. CMS can facilitate this process with program incentives, many of which would likely require legislative or regulatory changes, such as cooperative agreements with specialty measure developers and bonus MIPS scoring for eligible clinicians who participate in the quality measure testing and field-testing process. Unfortunately, even under the most optimistic timeline, quality measures cannot be fully tested and field tested for all the potential measure gaps by CMS's targeted rollout date for the first MVPs in 2022.
The second challenge is associated with individual clinician-level reporting. During a recent CMS town hall on development of MVPs, stakeholders from health systems and specialty organizations indicated that the move to smaller group and individual reporting of MVPs will place a significant administrative burden on health systems and clinician practices. Data elements for each clinical component of each MVP will need to be captured in the clinical documentation for each patient’s care. Those data points are most effectively captured as part of structured data fields built into the clinician’s clinical workflow. Any change to an MVP, which can occur on an annual basis, requires technical redesign of those structured data fields within the workflow and corresponding retraining for the clinical staff.
With the retirement of the CMS web interface and its group reporting option, large multispecialty organizations will no longer be able to redesign structured data fields once to capture the same measures for the entire practice; instead, they will need to redesign workflows for every physician specialty and MVP each time an MVP measure is changed. The measure development and implementation cycle currently takes, at best, one year to 18 months. Without a nimbler cycle, or guarantees from CMS that measures will not change without an 18-month ramp before becoming required, the burden on ambulatory practices will be excruciating.
Longer term, patient attribution will be one of the biggest challenges with mandatory use of MVPs across all eligible clinicians. Patient attribution, especially important when quality measurement is tied to Medicare payment, is the process of assigning patients to the eligible clinician who will be accountable for the patient’s care outcomes and costs through adjustments to the clinician’s payment. With medically complex patients who are seeing a multitude of providers, it is often not clear which provider ultimately has accountability for the patient’s outcome in a value-based care model. Any system of payment that does not fairly and accurately identify the clinician most responsible for the patient’s outcome will not gain acceptance by clinicians and healthcare organizations.
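As a simplified illustration of why attribution is hard, the sketch below applies one commonly discussed heuristic: a plurality-of-visits rule with a recency tie-break. Actual attribution models, including those CMS uses, layer on visit types, specialties and look-back windows; the rule and data shapes here are assumptions for illustration only.

```python
from collections import Counter

def attribute_patient(visits):
    """Attribute a patient to a single clinician using a plurality-of-visits
    rule: the clinician with the most qualifying visits is accountable, and
    ties go to the clinician seen most recently. `visits` is a list of
    (visit_date, clinician_id) tuples; dates must sort chronologically."""
    if not visits:
        return None
    counts = Counter(clinician for _, clinician in visits)
    top = max(counts.values())
    candidates = {c for c, n in counts.items() if n == top}
    # Tie-break: walk visits from most recent to oldest.
    for _, clinician in sorted(visits, reverse=True):
        if clinician in candidates:
            return clinician

visits = [("2021-03-02", "dr_lee"), ("2021-05-10", "dr_patel"), ("2021-08-19", "dr_patel")]
print(attribute_patient(visits))  # dr_patel
```

A medically complex patient seen regularly by a primary care physician, a cardiologist and an endocrinologist can plausibly be attributed to any of the three depending on which visits count, which is exactly the fairness problem described above.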
Currently, the effectiveness of testing mechanisms to determine whether attribution models meet reliability and validity standards varies wildly. Even with the growing adoption of HL7 FHIR facilitating semantic interoperability through more granular access to structured and standardized data elements, quality measures cannot accurately capture the quality of care delivered unless the entire set of patient data, administrative and clinical, is shared between healthcare organizations.
CMS is encouraged to look closely at implementing MVPs when it can address the critical issues associated with the burden of reporting and appropriate attribution. CMS and its partner Health and Human Services agencies also need to make a corresponding commitment to making MVPs actionable for clinicians and their organizations. Eligible clinicians need a robust quality data analytics platform to fully leverage MVP data to determine if they are delivering the quality and cost-effective care that would result in better outcomes for their patients and a positive payment adjustment in the MIPS program. Such analytics platforms are not a required part of certified health IT, although quality measure dashboards and tools are commonly available in certified EHRs and through registries and other services that support quality measurement and improvement. Without necessarily adding prescriptive certification requirements, both CMS and the Office of the National Coordinator for Health Information Technology could work collaboratively with measurement experts, health IT developers and providers to identify desired functionality and best practices for quality improvement analytics.
HIMSS's analysis of validated examples of significantly improved patient outcomes from the HIMSS Davies Award of Excellence and HIMSS Analytics Maturity Models Stage 7 programs indicates that real-time access to performance data on meaningful quality measures is an effective technology driver of improved patient outcomes. Value-based payment programs, whether MIPS or other value-based care payment systems around the globe, should encourage, either through additional financial incentives or scoring bonuses, the use of data visualization and other analytic tools to enhance opportunities for improving care.
One potential solution for making MVPs a more meaningful and more easily collected method of quality measurement is the adoption of digital measurement. CMS announced a desired future state where measuring quality for value-based care programs like the Quality Payment Program and Inpatient Quality Reporting program will be driven by dQMs. In addition, the National Committee for Quality Assurance, the organization responsible for stewarding the Healthcare Effectiveness Data and Information Set (HEDIS®), has started to transition HEDIS measures to dQMs.
A dQM is a quality measure that compiles data from one or more interoperable systems to present a more complete view of the quality of a patient's care. While a current quality measure may be built upon claims data, clinical data extracted from an electronic health record or a submitted patient satisfaction survey, a dQM could potentially combine all those data points with registry data, data from patient wearable or at-home medical monitoring devices, data from case management systems and data reported by patients through their patient portals to paint a more holistic picture of the patient's health.
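As a rough illustration of what compiling data from more than one interoperable system could look like in practice, the sketch below queries several hypothetical FHIR endpoints for a patient's HbA1c observations (LOINC 4548-4) and merges the results. The endpoint URLs are invented, and authorization (for example, SMART on FHIR with OAuth 2.0) is omitted for brevity.

```python
import requests

# Hypothetical FHIR endpoints: an EHR, a payer gateway and a remote-monitoring
# platform. Real deployments require authorization, which is omitted here.
FHIR_SOURCES = [
    "https://ehr.example.org/fhir",
    "https://payer.example.org/fhir",
    "https://remote-monitoring.example.org/fhir",
]

def collect_hba1c_observations(patient_id):
    """Pull HbA1c results (LOINC 4548-4) for one patient from several FHIR
    servers and merge them into a single list, illustrating how a dQM could
    draw on more than one interoperable system."""
    results = []
    for base in FHIR_SOURCES:
        resp = requests.get(
            f"{base}/Observation",
            params={"patient": patient_id, "code": "http://loinc.org|4548-4"},
            timeout=30,
        )
        resp.raise_for_status()
        bundle = resp.json()
        for entry in bundle.get("entry", []):
            obs = entry["resource"]
            value = obs.get("valueQuantity", {}).get("value")
            when = obs.get("effectiveDateTime")
            if value is not None:
                results.append({"source": base, "date": when, "hba1c": value})
    return results
```

The design question for dQMs is not the retrieval itself but deciding, once results arrive from several systems, which observations are duplicates, which are authoritative and which belong in the measure calculation.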
It is important to recognize that creating such a holistic look at patient health requires the ability to retrieve data from a wide array of digital sources beyond those currently utilized to support quality measurement. The expected payoff from such a dQM model would, however, be richer data and measures that could facilitate use of artificial intelligence and other tools to deliver near real-time modeling and analysis of treatment efficacy. While the use of the underlying FHIR standards is a strength, there are potential barriers that must be addressed to ensure successful adoption of dQMs. How will the industry standardize measurement methodologies when measures will require data from so many diverse sources? Government and private payers will need to collect data at an individual patient level to meaningfully measure quality, and every individual patient will have a different combination of potential data sources. This variation can lead to challenges with dQM reliability and comparability against the measure’s intent.
In addition, the adoption of dQMs will present a huge data mapping challenge. There is significant variation in clinical documentation workflows from one electronic health record to another and from one healthcare organization to another. When data must be mapped from many diverse sources using customized mappings built around the needs of individual EHRs and organizations, the possibility of error is introduced. Ultimately, providers bear the risk, because their payment will be affected by faulty measure calculations.
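To show where such mapping errors can creep in, the sketch below normalizes invented local EHR codes to standard LOINC codes before measure calculation. The local codes and the mapping table are hypothetical; in practice every organization maintains its own version of a table like this, and drift in that table quietly changes measure results.

```python
# Illustrative local-code-to-LOINC map for one hypothetical EHR feed.
LOCAL_TO_LOINC = {
    "LAB_A1C_POC": "4548-4",    # point-of-care HbA1c
    "LAB_A1C_SER": "4548-4",    # HbA1c from the reference lab
    "BP_SYS_OFFICE": "8480-6",  # systolic blood pressure
}

def normalize_result(local_code, value):
    """Translate a site-specific code into the standard terminology a dQM
    expects; unmapped codes are surfaced rather than silently dropped."""
    loinc = LOCAL_TO_LOINC.get(local_code)
    if loinc is None:
        raise KeyError(f"No LOINC mapping for local code {local_code!r}")
    return {"code": loinc, "system": "http://loinc.org", "value": value}
```

Surfacing unmapped codes, rather than discarding them, matters because a silently dropped lab result removes a patient from a numerator or denominator with no visible error.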
With CMS calling for an ambitious five-year plan to adopt dQMs for its quality reporting programs, HIMSS wants to work with the agency to develop a roadmap that addresses potential barriers to the adoption of dQMs, such as data mapping, comparability and reliability of measures, and measure calculation. Ultimately, we want to ensure that dQMs offer a more meaningful and actionable method of quality measurement that brings fundamental value to patients, rather than devolving into yet another compliance exercise for providers.
HIMSS will have multiple opportunities to advise CMS on the selection, design and incorporation of MVPs into the MIPS program. If you are interested in defining actionable guidance for regulatory agencies, please let us know.