Myth: People use health system report cards to make decisions about their healthcare


Myth Busted December 2003
Busted Again September 2006

In a culture obsessed with ranking the Top 10 everything, it seems logical patients would want their hospitals and doctors to be rated as well. After all, it's nice to know Citizen Kane is the "greatest American movie," but it won't change your life. On the other hand, knowing your local hospital is a world leader in heart surgery just might.

This is the thinking behind healthcare report cards for the public. Providers and hospitals are ranked, giving patients the ability to choose between superior, average, and poor performers. As well, those being ranked can focus their quality improvement efforts where they will do the most good.

Unfortunately, research indicates that, while most Canadians want comparative information, i they are not using it when making healthcare decisions. In addition, evidence of improved quality and accountability is somewhat tempered by potential gaming of the system.

If you build it, will they come?

In the province of Ontario, hospital-specific report cards have been used since the early 1990s to rate cardiac care. ii However, people are not consulting these reports before going to the hospital. For example, in a follow-up to a one-time atlas on heart attack care, 81 percent of hospitals reported that no patients had asked about the findings within a year of its release. iii

The situation is similar south of the border. Pennsylvania produces an annual report on bypass surgery; in its fourth year, only 12 percent of patients had even heard of it before they had surgery. And fewer than one percent of people going into surgery both knew the rating of their surgeon or hospital and said the rating had a moderate or greater effect on their decisions. iv

In addition, a large study of Wisconsin hospitals that were included in a public report card showed no change in the number of people using each hospital after the report's release. Further, only 10 percent of people who were exposed to the report used it to recommend or choose a hospital within two years, and almost no one spoke with their doctors about it. v

Finally, report cards on bypass surgery, heart attack care, and spinal surgery in New York and California produced almost no sustained change in hospital volumes after their release - some patients moved towards high-ranked hospitals and away from low-ranked ones in the first few months, but these effects were only temporary. vi

Of course, it's possible the reports aren't reaching the public - people aren't using them simply because they don't know they exist. Most reports are posted on websites and may be covered only once in the newspapers, so people must either actively seek them out or happen to read a particular article on a particular day. However, in Wisconsin, the reports were also sent directly to people's homes and distributed by community groups and libraries, yet they still had very limited uptake by the public. v

A further complication is patients' ability to act on these reports. In Canada, care for many serious conditions - such as cardiac care - is often centralized in hospitals within health regions, so patients don't have many options that wouldn't involve significant travel. vii In the United States, report cards have some effect on which health plans people opt for. viii, ix But after choosing a health plan, patients are often locked into a specific network of hospitals, meaning they may have very few realistic choices to make based on the report cards. x

So what are patients consulting? Surveys show people prefer health services somewhere close to home, iv they listen to what family and friends say about the hospital, and they feel comfortable if they were referred to the same place by their doctor in the past. xi

But can they still be useful?

While the public may not use report cards in decision-making, facilities may use them to improve quality and accountability. In Ontario, 54 percent of hospitals said they made changes in response to the heart attack atlas. iii Another study found that 77 percent of Pennsylvania hospitals and 88 percent of New Jersey hospitals set up ways to monitor quality, and that 38 percent of Pennsylvania hospitals and 56 percent of New York hospitals put more money into quality improvement for heart surgery patients. xii A survey of Missouri hospitals that were publicly evaluated on their obstetrics services found improvements in nearly half the hospitals one year later. x England, which rated its NHS trusts annually between 2001 and 2005, found remarkable improvements in emergency room waiting times, ambulance response times for life-threatening calls, and waiting times for elective hospital admissions. xiii

Finally, studies suggest it is indeed the public aspect of report cards that spurs quality improvement efforts. In the Wisconsin hospital study, 33 percent of hospitals included in the public report card improved their performance two years later and only five percent performed worse; among hospitals that received confidential feedback at the same time, only 25 percent improved, and the performance of 14 percent declined. v This mirrors a large study of Quebec hospitals that compared immediate versus delayed feedback and concluded "feedback based on one-time, confidential report cards… is not an effective strategy for quality improvement regarding care of patients with [heart attacks]." xiv

Caution required

Several concerns remain about public report cards. Ontario hospitals worry the reports don't do enough risk-adjustment - factoring in things like the patient's age or other ailments. iii And it can be years between the time the data are collected and when they are published - by then, the issues that brought down a rating may no longer be problems. xv, xvi

In addition, the reports may have some unintended negative effects. In 1996, almost two-thirds of New York heart surgeons said they refused to do a bypass on at least one high-risk patient, mostly because of public reporting. xvii Cardiologists in Pennsylvania also said they were having trouble finding surgeons to operate on high-risk patients. x And "gaming" has been alleged in England; as one example, patients had to wait in ambulances outside the emergency room until staff was confident of meeting the four-hour waiting time target for treatment (the clock didn't start until patients left the ambulance). xiii

Despite these concerns, public report cards are likely here to stay. x The best bang for our buck may come from focusing on where the research says we're already making the most headway - making report cards multidimensional and useful for health professionals, institutions, and decision makers. xviii


i. Ipsos-Reid. 2002. Second Annual Report Card on Health Care in Canada.

ii. Naylor D and Slaughter P. 1999. Cardiovascular Health and Services in Ontario: An ICES Atlas. Institute for Clinical Evaluative Sciences.

iii. Tu JV and Cameron C. 2003. "Impact of an acute myocardial infarction report card in Ontario, Canada." International Journal for Quality in Health Care; 15(2): 131-137.

iv. Schneider EC and Epstein AM. 1998. "Use of public performance reports: a survey of patients undergoing cardiac surgery." Journal of the American Medical Association; 279(20): 1638-1642.

v. Hibbard JH et al. 2005. "Hospital performance reports: impact on quality, market share, and reputation." Health Affairs; 24(4): 1150-1160.

vi. Romano PS and Zhou H. 2004. "Do well-publicized risk-adjusted outcomes reports affect hospital volume?" Medical Care; 42(4): 367-377.

vii. Shahian DM. 2000. "Selection of a cardiac surgery provider in the managed care era." Journal of Thoracic Cardiovascular Surgery; 120(5): 978-987.

viii. Kluge E-HW. "Patients' views about cardiac report cards: better late than never, but do I really want to know?" Canadian Journal of Cardiology; 21(11): 949-950.

ix. Beaulieu ND. 2002. "Quality information and consumer health plan choices." Journal of Health Economics; 21(1): 43-63.

x. Scanlon DP et al. 2002. "The impact of health plan report cards on managed care enrollment." Journal of Health Economics; 21(1): 19-41.

xi. Robinowitz DL and Dudley RA. 2006. "Public reporting of provider performance: can its impact be made greater?" Annual Review of Public Health; 27: 517-536.

xii. Bentley JM and Nash DB. 1998. "How Pennsylvania hospitals have responded to publicly released reports on coronary artery bypass graft surgery." The Joint Commission Journal on Quality Improvement; 24: 40-49.

xiii. Bevan G and Hood C. 2006. "Have targets improved performance in the English NHS?" British Medical Journal; 332(7538): 419-422.

xiv. Beck CA et al. 2005. "Administrative data feedback for effective cardiac treatment: AFFECT, a cluster randomized trial." Journal of the American Medical Association; 294(3): 309-317.

xv. Davies HTO. 2001. "Public release of performance data and quality improvement: internal responses to external data by US health care providers." Quality in Health Care; 10: 104-110.

xvi. Guerriere M. 2005. "Determining the utility of public reporting - too early to judge." Healthcare Papers; 6(2): 62-67.

xvii. Burack JH et al. 1999. "Public reporting of surgical mortality: a survey of New York State cardiothoracic surgeons." The Annals of Thoracic Surgery; 68(4): 1195-1200.

xviii. Brown AD et al. 2005. "Making performance reports work." Healthcare Papers; 6(2): 8-22.


Mythbusters are prepared by staff at the Canadian Health Services Research Foundation and published only after review by experts on the topic. The Foundation is an independent, not-for-profit corporation. Interests and views expressed by those who distribute this document may not reflect those of the Foundation.