Evidence, Technology, and Cost Control

There is a growing understanding that controlling health care costs must now be considered as important as the problem of the uninsured. American health expenditures are expected to nearly double within the next decade, from $2.1 trillion to $4 trillion, and cost escalation is already one of the main reasons for the steady increase in the number of the uninsured. Moreover, as a recent study by the Congressional Budget Office shows, medical technology accounts for about 50% of the annual cost increase.

There is, unhappily, every reason to believe that cost control, and particularly control of technology costs, will be much harder to achieve than universal care. Even if it is difficult to gain political agreement on what form the latter should take (which is why we may not get it), universal care is still a popular idea, overwhelmingly supported in public opinion surveys. But while the public is concerned about costs, doing something about them will be exceedingly unpopular in practice. Universal care would mean giving millions of people what they want and need. Cost control will mean taking from people what they want and often what they need.

A key item for cost control, particularly of medical technologies, is the assessment of treatments and technologies. What works and what does not? A recent report from the Institute of Medicine, Knowing What Works in Health Care: A Roadmap for the Nation, recommends the establishment of a “National Clinical Effectiveness Program,” whose task it would be to assess treatments and technologies for their clinical efficacy. The program (which might be a private or a public/private entity) should “reflect the potential for evidence-based practices to improve health outcomes across the life span, reduce the burden of disease and health disparities, and eliminate undesirable variation.” While the report does not emphasize or develop the point in any detail, it notes the potential of such a program to constrain health care costs.

The need for technology assessment and clinical effectiveness studies is hardly a new idea. One of the problems the new organization is meant to address is that many organizations have carried out such studies, but in a fragmented way, with no clear national priorities. It has also been evident over the years that there is a strong aversion to taking on costs; policy usually focuses instead on medical efficacy. Since its creation in 1965, the Medicare program has been forbidden by Congress from taking costs into account in its decisions on benefit coverage. Only what is medically “reasonable and necessary” may be considered.

Hardly less striking is the fate of various federal efforts in recent decades to establish agencies to assess health care technology. One technology assessment agency with a scope well beyond health care, the Office of Technology Assessment, was created in 1974 and killed by Congress in 1995. The first effort devoted to health care technology was the National Center for Health Care Technology, established by Congress in 1978. The aim of the center was to assess technology for its safety, efficacy, economics, ethics, and impact on society. It was not given a formal cost-control role, but its inclusion of economics opened the way for cost impact considerations. It was deliberately not regulatory; it was limited to commissioning original research, organizing demonstration projects, and evaluating specific technologies – and remaining neutral in the process.

Despite all those limitations, the center did not survive for long. It was killed by Congress in 1981. Its undoing was opposition from physicians, who believed that only they were capable of determining how best to treat their patients, and from medical manufacturing associations, which worried about a threat to innovation. In 1989, Congress created another agency with similar aims: the Agency for Health Care Policy and Research. It was never eliminated, but when it ran afoul of back surgeons by recommending nonsurgical treatment for most lower-back pain, leading the surgeons to attack the agency as biased, it barely survived. Its budget was cut 25%, it was renamed the Agency for Healthcare Research and Quality, and its authority to recommend payment decisions to Medicare and Medicaid was eliminated.

Whether the organization proposed by the Institute of Medicine will ever see the light of day is uncertain. But if history is any guide – and in this case, why wouldn’t it be? – its congressional sponsors will need to take great care. While the IOM report did talk about cost control, it may have soft-pedaled that role (or so it seems to me) out of wariness about potential congressional and industry resistance. At the least, any such organization should be given a guaranteed life of at least 10 years – time enough to get some good work done and to survive criticism of its findings. It should probably be constituted as a public/private body to ensure that it is not seen as a one-sided government imposition on medical practice aimed at cost control.

But my own conviction is that a “government imposition on medical practice aimed at cost control” is exactly what is needed as part of any serious cost control effort. The only efforts that have worked well to rein in cost increases anywhere in the developed world are the result of government regulation. A careful effort to open that door with a new assessment agency will require, regrettably, moving delicately and with exceptional political astuteness – not only opening the door a crack at first, but perhaps finding a way to show the public and Congress that gradually opening it wider is inevitable and necessary.

Daniel Callahan is director of the international program of the Hastings Center and has just completed a book, now in press, titled Taming the Beloved Beast: Medical Technology and Health Care Costs.

Published on: February 15, 2008
Published in: Health Care Reform & Policy
