Author Information
- Paul T. Vaitkus, MD, MBA∗
- ∗Reprint requests and correspondence: Dr. Paul T. Vaitkus, Cardiology Division, Bay Pines Veterans Affairs Medical Center, Mail Stop 111, 10,000 Bay Pines Boulevard, Bay Pines, Florida 33744.
Comparative effectiveness research (CER) is a theme that will play an increasingly important role in the discourse of medical care. The Institute of Medicine defines CER as “the generation and synthesis of evidence that compares the benefits and harms of alternative methods to prevent, diagnose, treat, and monitor a clinical condition or to improve the delivery of care” (1). Most commonly, CER arises in the context of the politically charged environment surrounding the Affordable Care Act, in which the explicitly stated purpose of CER is to rein in the growth of healthcare costs, largely by requiring that newer, more expensive modalities be demonstrated to be more clinically effective and cost-effective than less expensive alternatives as a condition for reimbursement (2). Whereas these are the principal domains for policy-makers and payers, clinical providers are also concerned with a somewhat more parochial question: Given that we confront a clinical problem with a particular approach (e.g., a procedure such as carotid artery stenting [CAS]), which of several available alternative tools is the best for conducting the procedure?
In this issue, Giri et al. (3) use data from the NCDR (National Cardiovascular Data Registry) CARE (Carotid Artery Revascularization and Endarterectomy) registry to examine outcomes associated with different commercially available stent delivery embolic protection systems in contemporary U.S. practice. The current paper provides several substantive contributions to the literature. First, it provides a broadly representative overview of contemporary CAS practice in the United States. Second, the study provides point estimates for the reported complication rates of in-hospital stroke and mortality that can thus serve as benchmarks for quality-assurance programs at local levels as well as guidance for assessing future novel technologies. Last, the paper provides comparative outcomes data for CAS systems to a degree that exceeds previous efforts, either retrospective or prospective (4,5). To this last point, whereas there is general agreement that an assessment of the relative benefits or risks of several different devices intended to accomplish the same goal would be most definitively achieved in a direct, head-to-head, randomized, controlled clinical trial (and the cardiovascular community has become conditioned to expect such studies), in reality such studies may never be mounted. They are of necessity large, expensive, and resource-intensive, and the question of sponsorship will most often preclude such trials. The federal dollars slated for CER will not be prioritized for this sort of effort.
Newcomers to the marketplace may be expected to compare themselves to established technologies, both for registration/approval and market/clinical expectations, but when several existing technologies have attained approval and established themselves in the marketplace, it is not likely that any one of them will seek to expend funds or incur the potential risks of clinical-trial failure to directly compare itself to the alternatives. Thus, our best, albeit imperfect, answers will likely come only from registry efforts, and the NCDR has the advantages of size (with attendant statistical power), breadth, and impartiality in that the data gathering and analyses are not under the aegis of any commercial entity.
The main findings of the current study are as follows. First, 3 (of a half-dozen) marketed CAS systems account for approximately three-quarters of procedures performed in the United States, and market share is approximately evenly divided among them. Also, physicians are using each of these systems in an integrated fashion. That is, in the majority of cases, the stent delivery system is used in conjunction with the same manufacturer's embolic protection system with relatively little “mixing and matching.” Given that the CAS systems were developed and received regulatory approval as integrated systems, the U.S. Food and Drug Administration can take note that real-world practice is consistent with product labeling. As the investigators point out, some of this compliance with labeling may be driven by the mandated ongoing post-marketing surveillance.
Second, the complication rates were reasonably low. Although the investigators never explicitly state an overall complication rate, one can readily calculate that in-hospital mortality and stroke rates for the cohort as a whole were approximately 0.35% and 1.8%, whereas the 30-day mortality and stroke rates were approximately 0.73% and 3.2%. The real-world practice of CAS thus compares favorably with what was demonstrated in pivotal clinical trials and previous registries (6,7). This conclusion needs to be tempered, however, by several limitations of this, and all, registries. Self-reported outcomes may be under-reported outcomes. Furthermore, 30-day follow-up was incomplete, and one must always be concerned that some of the cases lost to follow-up may be lost due to death, disability, or institutionalization due to recurrent stroke.
Finally, in comparing the 3 CAS systems, the investigators report that whereas one had a seemingly higher raw (“unadjusted”) rate of complications, this trend did not reach statistical significance (by a small margin), and once baseline characteristics were controlled for, the differences were not at all significant. Indeed, in examining each item on the exhaustive list of baseline clinical and angiographic characteristics that were taken into account, it is noteworthy that many of them were not balanced among the 3 groups of patients, and, most often, those that might have an adverse impact on outcomes were more prevalent in patients receiving the device with higher complication rates. Thus, the investigators appropriately conclude that no differences emerged among the devices. Nevertheless, the trends, especially when graphically displayed, are concerning. An observer might ask whether a larger sample size would have resulted in the confidence intervals lying entirely to the right of unity. Here, the carotid-stenting community may take notice and conclude that 1 device is, indeed, inferior to the others. Regulators, as well, may ask for additional safety data. It is not unprecedented that retrospective, secondary, or indirect analyses have provided safety data leading to the withdrawal from the market of pharmacologic agents (such as some of the selective cyclooxygenase inhibitors) and devices (such as some femoral artery closure devices) (8).
This paper and the CARE registry on which it is based have some important overarching limitations. Like most NCDR efforts, CARE focuses on patients receiving a particular procedure. Thus, it cannot shed any light on comparing outcomes of CAS to those in patients who did not undergo CAS. Second, this was exclusively a safety analysis. There are no long-term outcomes data, and thus there is no parallel comparison of whether 1 stent design is more efficacious than another in preventing stroke. Certainly, when a possible safety signal has been raised (especially in the context of an imbalance of baseline characteristics), the absence of efficacy assessments, whether in comparison to nontreatment, to an alternative such as endarterectomy, or among the 3 devices themselves, means that we lack the full set of considerations needed to properly compare the 3 stents before we can impugn any 1 device.
Nevertheless, the investigators can be congratulated for their contribution and for having added to the burgeoning literature emerging from the NCDR, which provides answers, even if partial or imperfect at times, to relevant clinical questions that will probably not be addressed as well from any other source.
∗ Editorials published in JACC: Cardiovascular Interventions reflect the views of the authors and do not necessarily represent the views of JACC: Cardiovascular Interventions or the American College of Cardiology.
The views expressed are exclusively those of the author and do not represent any official position of the Department of Veterans Affairs. Dr. Vaitkus has reported that he has no relationships relevant to the contents of this paper to disclose.
- American College of Cardiology Foundation
- Committee on Comparative Effectiveness Research Prioritization. Board on Health Care Services. Institute of Medicine of the National Academies. What is comparative effectiveness research? In: Initial National Priorities for Comparative Effectiveness Research. Washington, DC: The National Academies Press, 2009. Available at: http://www.nap.edu/openbook.php?record_id=12648&page=29. Accessed October 15, 2013.
- Congressional Budget Office. Research on the comparative effectiveness of medical treatments: issues and options for an expanded federal role. Available at: http://www.cbo.gov/sites/default/files/cbofiles/ftpdocs/88xx/doc8891/12-18-comparativeeffectiveness.pdf. Accessed December 11, 2013.
- Giri J., Kennedy K.F., Weinberg I., et al.
- Schillinger M., Gschwendtner M., Reimers B., et al.