Author information
- Received November 10, 2012
- Revision received January 2, 2013
- Accepted January 18, 2013
- Published online June 1, 2013.
- James M. McCabe, MD∗
- Karen E. Joynt, MD, MPH∗,†,
- Frederick G.P. Welt, MD, MSc∗ and
- Frederic S. Resnic, MD, MSc‡
- ∗Division of Cardiovascular Medicine, Brigham and Women's Hospital, Harvard Medical School, Boston, Massachusetts
- †Department of Health Policy and Management, Harvard School of Public Health, Boston, Massachusetts
- ‡Division of Cardiovascular Medicine, Lahey Clinic, Tufts Medical School, Burlington, Massachusetts
- ∗Reprint requests and correspondence: Dr. James M. McCabe, Brigham and Women’s Hospital, 75 Francis Street, Shapiro 5, Boston, Massachusetts 02115.
Objectives This study sought to evaluate the impact of public reporting of hospitals as negative outliers on percutaneous coronary intervention (PCI) case-mix selection.
Background Public reporting of risk-adjusted in-hospital mortality after PCI is intended to improve outcomes. However, public labeling of negative outliers based on risk-adjusted mortality rates may detrimentally affect hospitals’ willingness to care for high-risk patients.
Methods We used generalized estimating equations to examine expected in-hospital mortality rates for 116,227 PCI patients at all nonfederally funded Massachusetts hospitals performing PCI from 2003 to 2010. The main outcome measure was the change in predicted in-hospital mortality rates per hospital after outlier status identification.
Results The prevalence-weighted mean expected mortality for all PCI cases during the study period was 1.38 ± 0.36% (5.3 ± 1.96% for all shock or ST-segment elevation myocardial infarction patients, 0.58 ± 0.19% for all not-shock, not-ST-segment elevation myocardial infarction patients). After public identification as a negative outlier institution, there was an 18% relative reduction (absolute 0.25% reduction) in predicted mortality among PCI patients at outlier institutions (95% confidence interval: −0.46 to −0.04%, p = 0.021) compared with nonoutlier institutions. Throughout the study period, there was an additional 37% relative (0.51% absolute) reduction in the predicted mortality risk among all PCI patients in Massachusetts attributable to secular changes since the onset of public reporting (95% confidence interval: −0.83 to −0.20, p = 0.002).
Conclusions The risk profile of PCI patients at outlier institutions was significantly lower after public identification compared with nonoutlier institutions, suggesting that risk-aversive behaviors among PCI operators at outlier institutions may be an unintended consequence of public reporting in Massachusetts.
Public reporting of risk-adjusted in-hospital mortality rates after percutaneous coronary intervention (PCI) is intended to build public trust and encourage adoption of best practices (1). However, there is controversy regarding the ability of public reporting to improve patient outcomes while preserving access to potentially lifesaving care (1–5). Public reporting has been associated with reduced mortality rates for ST-segment elevation myocardial infarction (STEMI) and shock patients in both Massachusetts and New York State (1), but some argue that this may be largely the result of avoidance of high-risk patients (6).
One important aspect of public reporting that has been less well studied is the public labeling of institutions as negative outliers. Negative outlier status is conferred on institutions whose observed risk-adjusted in-hospital mortality rate is significantly greater than expected based on risk prediction models. Since the inception of public reporting in Massachusetts in 2003, 4 institutions have been identified as negative outliers. We hypothesized that PCI operators at institutions previously identified as negative outliers may become more intensely risk averse in PCI case selection compared with physicians at nonoutlier centers and therefore more likely to avoid performing PCI in the most severely ill individuals. Paradoxically, these critically ill patients may stand to gain the most from reperfusion therapy, albeit with a poor overall prognosis (7).
Therefore, we set out to answer the following question: Was identification as an outlier for PCI mortality rate in Massachusetts associated with a change in the risk profile of patients receiving PCI at that hospital in subsequent years compared with patients treated at nonoutlier hospitals?
Methods
The Massachusetts Data Analysis Center (Mass-DAC) collects, adjudicates, and analyzes patient-specific risk factors and outcomes for each nonfederal hospital performing PCI in Massachusetts. Mass-DAC uses validated risk prediction models to calculate the expected mortality rates for all patients based on their clinical characteristics and presentation condition. Separate prediction models are used for shock or STEMI (SOS) PCI patients and not-shock, not-STEMI (non-SOS) PCI patients (8). Both expected mortality models were regenerated yearly to ensure prediction validity; thus, model covariates and their point estimates were subject to change throughout the study period (Online Tables 1 and 2 list all model covariates and their respective point estimates per year). Model covariates include elements collected for the National Cardiovascular Data Registry, to which reporting is mandatory in Massachusetts, as well as additional elements not collected for the National Cardiovascular Data Registry but previously validated for improved model fit in exceptionally high-risk cases (6). The area under the receiver-operating characteristic curve for the resultant Mass-DAC models ranged from 0.83 to 0.91 during the study period, suggesting excellent model discrimination (Online Tables 1 and 2). Averaged expected mortality rates per PCI-performing hospital are reported as standardized expected mortality incidence rates (SMIRs). Because separate prediction models are used for SOS and non-SOS patients, separate SMIRs for each group are also reported per institution (8). These SMIRs are published yearly and represent the average expected in-hospital mortality rates per hospital based on the characteristics of that institution’s PCI population.
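The per-patient expected mortality and per-hospital SMIR calculation described above can be sketched as follows. This is an illustrative example only: the logistic intercept, coefficients, and covariate indicators are hypothetical and are not the actual Mass-DAC model.

```python
import math

# Illustrative sketch (NOT the actual Mass-DAC model): a logistic risk model
# converts each patient's covariates into an expected in-hospital mortality
# probability; the hospital's SMIR is the mean of those probabilities.
def expected_mortality(intercept, coefs, covariates):
    logit = intercept + sum(c * x for c, x in zip(coefs, covariates))
    return 1.0 / (1.0 + math.exp(-logit))

# Hypothetical coefficients and two patients' covariate vectors,
# e.g., [shock, age > 80] indicator variables.
intercept, coefs = -5.0, [1.2, 0.8]
patients = [[1, 0], [1, 1]]
probs = [expected_mortality(intercept, coefs, p) for p in patients]
smir = sum(probs) / len(probs)
print(round(100 * smir, 2))  # SMIR expressed as a percent
```

Averaging predicted probabilities (rather than observed deaths) is what makes the SMIR a pure case-mix measure: it changes only when the risk profile of the treated population changes.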
Endpoints and definitions
We collected publicly reported SMIRs for both SOS and non-SOS patients at all PCI-capable, nonfederal Massachusetts hospitals from 2003 through 2010. Prevalence-weighted mean expected mortality rates were calculated per year by weighting each hospital’s expected mortality rates for SOS and non-SOS patients by those hospitals’ respective number of SOS and non-SOS cases. The pre-specified primary outcome of this study was the change in average expected mortality rates for all patients undergoing PCI at institutions previously labeled as negative outliers compared with the changes in expected mortality rates from all nonoutlier hospitals during that time frame. Comparisons with contemporary nonoutlier hospitals account for yearly changes in the Mass-DAC expected mortality models and the possibility that public reporting and, more specifically, the identification of outlier hospitals, might lead to “risk avoidance creep” among all hospitals and not just outliers. The primary predictor was outlier status as reported by Mass-DAC. During the study period, 4 hospitals were reported to be negative outliers: 1 in 2005, 2 in 2007, and 1 in 2009. For the purposes of this analysis, institutions were not considered outliers until the year of their public identification but remained outliers for all subsequent years after their identification. To determine whether patients at outlier institutions were being routed toward coronary artery bypass graft (CABG) surgery instead of PCI after outlier status identification, we also collected publicly available SMIRs created by Mass-DAC for per-hospital projected 30-day mortality rates after isolated CABG to examine concomitant changes in the illness severity of this population.
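The prevalence weighting described above is a case-count-weighted average of hospital rates. A minimal sketch, using hypothetical hospital rates and volumes rather than the study's data:

```python
# Prevalence-weighted mean: weight each hospital's expected mortality rate
# by that hospital's number of cases in the relevant stratum (SOS or non-SOS).
def prevalence_weighted_mean(rates, counts):
    total_cases = sum(counts)
    return sum(r * n for r, n in zip(rates, counts)) / total_cases

# Hypothetical example: three hospitals' SOS expected mortality rates (%)
# and their SOS case volumes.
sos_rates = [5.0, 6.2, 4.1]
sos_counts = [150, 90, 210]
print(round(prevalence_weighted_mean(sos_rates, sos_counts), 2))  # → 4.82
```

Weighting by volume prevents a small hospital with an extreme rate from distorting the statewide mean.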
Statistical analysis
Simple comparisons of normally and non-normally distributed data were performed using the Student t test and the Wilcoxon rank sum test, respectively. We used generalized estimating equations for multivariable regression analyses comparing outliers with nonoutliers while accounting for nested and repeated measures among specific hospitals. These multivariable models were adjusted for secular trends by incorporating the years analyzed. Analyses were performed with Stata version 11 (StataCorp, College Station, Texas). Institutional review board approval was waived because all analyses were performed using aggregate publicly reported data.
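The paper's GEE models were fit in Stata; as an illustration of the approach, under an independence working correlation a Gaussian GEE's point estimates coincide with ordinary least squares, with a cluster-robust "sandwich" covariance clustered on hospital. The sketch below applies that to synthetic hospital-year data; all values, including the built-in −0.25% outlier effect, are simulated and are not the study's data.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic per-hospital-year panel (illustrative only): expected mortality (%)
# regressed on a post-outlier indicator and calendar year, clustered by hospital.
n_hosp, n_years = 24, 8
hospital = np.repeat(np.arange(n_hosp), n_years)
year = np.tile(np.arange(n_years), n_hosp)
post = ((hospital < 4) & (year >= 3)).astype(float)  # 4 "outlier" hospitals
y = 1.6 - 0.06 * year - 0.25 * post + rng.normal(0, 0.1, n_hosp * n_years)

X = np.column_stack([np.ones_like(y), post, year])
beta = np.linalg.lstsq(X, y, rcond=None)[0]  # GEE point estimates (independence)

# Cluster-robust (sandwich) covariance, clustering on hospital
resid = y - X @ beta
bread = np.linalg.inv(X.T @ X)
meat = np.zeros((3, 3))
for h in range(n_hosp):
    idx = hospital == h
    score = X[idx].T @ resid[idx]
    meat += np.outer(score, score)
cov = bread @ meat @ bread
se = np.sqrt(np.diag(cov))
print(beta[1], se[1])  # coefficient and SE on the post-outlier indicator
```

Clustering the covariance on hospital is what "accounting for nested and repeated measures" means operationally: repeated yearly observations from the same hospital are not treated as independent.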
Results
Twenty-four Massachusetts hospitals performed 116,227 PCI procedures between 2003 and 2010. The prevalence-weighted mean expected mortality rate for all PCI cases during the study period was 1.38 ± 0.36% (5.3 ± 1.96% for all SOS patients, 0.58 ± 0.19% for all non-SOS patients).
Outlier hospitals were larger on average than nonoutlier hospitals (p = 0.03) and performed significantly more PCI procedures per year (192 ± 80 vs. 112 ± 76 SOS cases, and 1,163 ± 200 vs. 780 ± 408 non-SOS cases, respectively; both p < 0.01) (Table 1).
Changes in expected mortality rate of PCI patients
On average, after hospitals were labeled as negative outliers, the expected mortality rate of their PCI patients was significantly lower than at nonoutlier institutions (1.08 ± 0.23% vs. 1.58 ± 0.29%, p < 0.01), suggesting the average PCI patient at outlier institutions was less severely ill. Specifically, outlier institutions had significantly lower rates of expected mortality among non-SOS patients compared with non-SOS patients at nonoutlier institutions (0.47 ± 0.18% vs. 0.60 ± 0.18%, p < 0.01), whereas the illness severity of SOS patients, as reflected in their expected mortality rates, did not appear to differ significantly between the 2 groups (5.22 ± 1.28% vs. 5.31 ± 2.02%, p = 0.87).
When expected mortality rates at outlier institutions after identification were compared with those same institutions' rates before identification, even greater differences were seen in both SOS and non-SOS expected mortality rates: 7.49 ± 3.47% (pre-) vs. 5.22 ± 1.78% (post-), p = 0.03; and 0.71 ± 0.18% (pre-) vs. 0.47 ± 0.18% (post-), p < 0.01, respectively.
After adjusting for temporal trends across the state, there was a significant 18% relative reduction (or an absolute 0.25% reduction) in predicted mortality among PCI patients at hospitals after public identification as an outlier compared with nonoutliers (95% confidence interval [CI]: −0.46 to −0.04%, p = 0.021) (Fig. 1). Furthermore, there was a 37% relative reduction (0.51% absolute decrease, or 0.06% per year decrease) in the predicted mortality rates of all PCI patients in Massachusetts attributable to secular changes since the onset of public reporting (95% CI: −0.83 to −0.20, p = 0.002).
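As a quick arithmetic check on the relative figures reported above, dividing the absolute reductions by the statewide prevalence-weighted mean expected mortality of 1.38% reproduces the stated 18% and 37% relative reductions:

```python
# Relative reduction = absolute reduction / baseline expected mortality.
baseline = 1.38      # statewide prevalence-weighted mean expected mortality (%)
outlier_abs = 0.25   # absolute reduction after outlier identification (%)
secular_abs = 0.51   # absolute secular reduction over the study period (%)

print(round(100 * outlier_abs / baseline))  # → 18 (% relative reduction)
print(round(100 * secular_abs / baseline))  # → 37 (% relative reduction)
```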
Changes in observed mortality rate of PCI patients
Of note, the changes in expected mortality rates calculated from patient risk profiles paralleled the actual observed in-hospital mortality rates after PCI over the study period. The overall observed statewide mortality rates significantly decreased from 1.70% in 2003 to 1.34% in 2010 (p = 0.02 for trend) (Fig. 2).
CABG surgery among outlier hospitals
All centers ultimately identified as outlier institutions also offered CABG surgery throughout the study period. Since the inception of public reporting of PCI outcomes in Massachusetts, the average expected 30-day mortality rates after CABG surgery at these 4 institutions have decreased from 2.50 ± 0.39% to 1.23 ± 0.03% (p = 0.01 for trend) (Fig. 3), suggesting that the decrement in average illness severity of the PCI population at those outlier institutions is not likely due to redirecting their most severely ill patients toward an operative reperfusion strategy.
Discussion
Using expected mortality rates for each hospital as a surrogate for the case mix of its PCI population, we found that the aggregate illness severity of PCI patients at institutions previously labeled as outliers was significantly lower than contemporaneous measures at other Massachusetts institutions that had not been labeled as negative outliers. This suggests that risk-aversive behaviors among PCI operators at outlier institutions may be an unintended consequence of public reporting in Massachusetts. Concomitantly, there was a significant temporal trend toward lower average illness severity for PCI patients across all hospitals in the state since the inception of public reporting of PCI outcomes; whether this was a result of public reporting or simply reflective of larger national trends is unknown, although previous work suggests that it may be a consequence of the public reporting process (9).
The mechanism for the decrease in expected in-hospital mortality rate among PCI patients at outlier institutions is unclear. An improvement in PCI performance or quality may improve observed mortality rates after PCI, but should not affect the expected mortality rates that were used for this analysis because they are calculated based on patients’ presentation characteristics. It is therefore likely that public reporting leads some PCI operators to avoid those patients whom they perceive to be at the highest risk of adverse outcomes and hence most likely to negatively affect their publicly reported performance. In fact, we suspect that the mechanism for the decreased average illness severity among PCI case mix at outlier hospitals is to avoid PCI altogether in the most severely ill SOS patients.
Although we did not observe a difference in the expected risk of SOS patients who underwent PCI after outlier status identification, our analysis cannot account for patients who might qualify for PCI but were no longer offered this therapy. Avoiding interventions in the most severely ill SOS patients who might otherwise have indications for PCI would decrease that institution’s aggregate expected mortality rate by relatively increasing the proportion of lower risk, nonshock, non-STEMI patients among the total PCI population. This hypothesis is consistent with the 30% reduction in STEMI patients with shock who underwent PCI after implementation of public reporting of PCI outcomes in New York State between 1997 and 2003 (1) and is in accord with a recent analysis suggesting that, compared with states without public reporting, states with PCI outcome reporting (including Massachusetts) demonstrate the most significant decrement in PCI application among patients with STEMI or shock (9). Nevertheless, an analysis of all PCI-eligible patients in Massachusetts is required to further evaluate our findings regarding the effects of outlier status on PCI case mix.
Interestingly, we also found that the average illness severity of the non-SOS patients, a generally lower risk group, significantly decreased after outlier status identification. It is unclear why statistically measurable changes not seen among the SOS cohort would be seen in this group after outlier status identification; it may be that the decision to provide PCI is generally more discretionary in a lower risk cohort than in the SOS group, or it may be related statistically to the far greater volume of nonshock, non-STEMI patients undergoing PCI.
Study limitations
First, our analyses were limited to publicly available per-hospital data, and thus specific patient-level information was not incorporated. Second, we were unable to account directly for differences in risk factor documentation. Previous experience suggests that more rigorous risk profile documentation after public reporting, sometimes referred to as up-coding, may be common (10). Up-coding has the potential to falsely inflate predicted patient mortality rates and therefore dilute any change in quantifiable risk aversion after outlier status identification; thus, up-coding would have biased this analysis toward the null, masking an even greater effect. Additionally, as noted previously, our data cannot account for patients who might have qualified for PCI during the study period but who did not receive it as a result of risk-aversive behavior, because those subjects are not recorded in this PCI-based dataset. Finally, we cannot account for the exact timing of institutional notification regarding public identification as an outlier. We stratified institutions as outliers based on the year for which they were cited, but it is possible that these institutions were not alerted to the change in their public reporting status until many months later. Stratifying institutions as outliers before they were alerted to their status change would also be expected to bias our results toward the null.
Taken as a whole, these data suggest that the public reporting of in-hospital mortality after PCI and the practice of public identification of hospitals as negative outliers may increase risk avoidance in a manner inconsistent with best practices. Arguably, an aversion to performing PCI in the sickest patients at poorly performing hospitals is, in fact, the intended consequence of public reporting of PCI outcomes. However, the hospitals identified as negative outliers in Massachusetts were larger centers with greater average procedure volumes, features classically associated with superior technical performance (11–13). Additionally, we did not observe a reciprocal increase in the aggregate expected mortality of those institutions’ CABG patients or in PCI patients at nonoutlier institutions as one might expect to find if the most severely ill patients were being redirected from “poorer performing” outlier hospitals’ catheterization laboratories to their operating rooms for CABG or to “better performing” nonoutlier hospitals for PCI.
Interestingly, in the era of public reporting, the severity of illness of all PCI patients has diminished despite a contemporaneous movement toward PCI “appropriateness,” which generally encourages reserving PCI for higher acuity and more severely symptomatic patients (14). During this period, both the expected and observed in-hospital mortality rates after PCI declined in Massachusetts, although the relative contributions of quality improvement and risk aversion to this trend remain unclear. Further studies on publicly labeling institutions as negative outliers based on in-hospital PCI mortality rate are necessary to assess its impact on operator behavior and case-mix selection.
For supplemental tables, please see the online version of this article.
The authors have reported that they have no relationships relevant to the contents of this paper to disclose.
Data previously presented as a Best Poster finalist at the Scientific Sessions of the American College of Cardiology, 2012, Chicago, Illinois.
- Abbreviations and Acronyms
- CABG = coronary artery bypass graft
- Mass-DAC = Massachusetts Data Analysis Center
- PCI = percutaneous coronary intervention
- SMIR = standardized expected mortality incidence rate
- SOS = shock or ST-segment elevation myocardial infarction
- STEMI = ST-segment elevation myocardial infarction
- American College of Cardiology Foundation
- Resnic F.S., Welt F.G.P.
- Ettinger W.H., Hylka S.M., Phillips R.A., Harrison L.H., Cyr J.A., Sussman A.J.
- Peterson E.D.
- Lilford R., Pronovost P.
- Resnic F.S., Normand S.-L.T., Piemonte T.C., et al.
- Public Reports: Percutaneous Coronary Intervention Cohort. Available at: http://www.massdac.org/reports/pci.html. Accessed May 13, 2012.
- Hawkes N.
- Katz J.N., Losina E., Barrett J., et al.
- Harrison E.M., O'Neill S., Meurs T.S., et al.