Medical malpractice—the process by which society holds medical practitioners legally and financially responsible for wrongful or substandard care—has been one of the most complicated and contentious issues in U.S. public life. The medical malpractice phenomenon is frequently viewed only as a product of the mid- to late twentieth century, but physicians protested what they viewed as a malpractice “crisis” as early as the 1830s. Since then, they and other observers have identified successive periods of intensified litigation in the 1890s, 1930s, 1970s, 1980s, and, more recently, in the early twenty-first century. Contemporary observers in each period have identified and focused primarily on short-term causes of the suits but have been unable to identify the underlying and ongoing origins of the litigation. Medical malpractice, however, has always been a multifaceted phenomenon, and its history is a chronicle of the overlapping influences of medicine, technology, politics, culture, economics, and law. Its development also serves as an imperfect reflection of society’s continually evolving view of what constitutes medical error and which errors are culpable. Widespread malpractice suits in the 1830s, as in the 2000s, were the result of a variety of short-term topical causes and long-term cultural preconditions, but the development and use of new medical technologies and procedures have played a consistent and central role throughout the history of the litigation.
The Law of Medical Malpractice.
Medical malpractice suits occur in a social, cultural, and medical context that fundamentally influences the way in which they arise and are resolved. Medical malpractice is first and foremost a legal action designed to compensate individuals for the harms that they have suffered as a result of medical negligence or other wrongdoing in the medical setting. It also serves a deterrence function by prompting physicians to alter their behavior before harm to individuals occurs. Although legal doctrine has evolved in important ways, the basic required elements of a medical malpractice action have remained essentially stable for over 150 years. Medical malpractice is based on the tort theory of professional negligence. Patients are plaintiffs in this context; physicians are defendants. To construct a successful malpractice action, a plaintiff must demonstrate, by a “preponderance of the evidence” (more likely than not), four “elements”: (1) the defendant had a duty toward the plaintiff; (2) that duty or “standard of care” was breached; (3) the plaintiff suffered harm or damage; and (4) the breach was the proximate cause of the harm the plaintiff suffered. There is no single prescribed standard of care for any particular procedure, illness, or injury that applies in all cases, for all patients. Instead, the standard of care requires that a physician possess and exercise the skill, knowledge, learning, and care that a reasonably prudent physician would possess and exercise in the same or similar circumstances. Experts for both the plaintiff and the defendant are required to offer opinions to aid the jury on whether the defendant-physician met the standard of care in the treatment of the plaintiff-patient. Although the wording of the physician’s legal duty has not changed significantly, technological development and advancements in medical science have dramatically altered the essential content of that standard over time.
The Nineteenth-Century Background: 1830–1850.
Although malpractice suits in the early republic were not unheard of, they were rare in the United States before the 1830s. Medical journals rarely mentioned suits except to comment on their infrequency. All that changed by midcentury. In 1844 the Boston Medical and Surgical Journal lamented that physicians were “constantly liable to vexatious suits instituted by ignorant and unprincipled persons.” By 1860 John Elwell, a physician-lawyer who authored a book on malpractice, claimed, “There can hardly be found a place in the country where the oldest physicians in it have not been actually sued or annoyingly threatened.” This period, from 1830 to 1860, represents a critical threshold—many of the factors that underlay the first perceived malpractice “crisis” in the United States persisted through subsequent cyclical outbursts of increased litigation (Burns, 1969; DeVille, 1990).
The short-term topical causes for this apparent sudden outburst of litigation were accurately identified by contemporary observers. For example, physicians had suffered a clear loss of status since the latter half of the eighteenth century, in part attributable to the antielitism that is said to characterize the Jacksonian period. Leaders of the profession speculated that this decline in status contributed to the unprecedented wave of suits. Patients often had little respect for physicians, who could claim few genuine cures. Moreover, the tradition of domestic and alternative medicine in the United States undermined physicians’ reputations. Some of the problem lay in the educational system. After 1800, medical schools proliferated, and the competition for students frequently led to degraded educational standards and debased the profession in the eyes of the public. The increased number of schools also produced a flood of physicians. When physicians were rare, suits were unlikely. Defense attorneys sometimes asked juries to weigh the value of an accused physician to the community against the injury suffered by the plaintiff. By 1850, such a tactic was much less likely to be successful because when one physician left a city or town, another was waiting to take his place. Finally, the burgeoning number of physicians often led to bitter competition. Malpractice accusations were occasionally used as competitive weapons against professional enemies. The lack of medical licensure control and professional discipline at midcentury may have encouraged individuals to rely on litigation to attack what they viewed as deficient practitioners and medical error (DeVille, 1990; Mohr, 2000).
Although these short-term topical factors are an important part of the causal explanation of the mid-nineteenth-century malpractice phenomenon, two long-term cultural developments also played a central and important role in the litigation. The first of these developments was a transformation in Americans’ belief in divine providence. Many Americans in the eighteenth century believed that physical misfortune was an explicit expression of divine will, inflicted upon humans either to test or to punish them. To those who held such beliefs, to sue for misfortune would have been to question God’s judgment. This attitude may have affected both potential plaintiffs and the juries who would judge their claims. Although these attitudes had been evolving for at least two centuries, the first half of the nineteenth century was marked by a period of especially dramatic and rapid religious transformation. A greater portion of society began to believe that God observed but did not intrude on the ordinary affairs of day-to-day life. Scientific and social progress strengthened the growing belief that physical ills could and should be remedied on earth. These developments allowed and even led individuals to look for earthly reasons and human culpability for human suffering. Without this large-scale religious transformation, widespread suits for malpractice would have been unimaginable (DeVille, 1990).
The second cultural transformation that allowed medical malpractice suits to flourish for the first time in the mid-nineteenth century was the dissipation of a community ethos that tended to suppress lawsuits. Legal anthropologists have suggested that the “relational distance” between members of a community influences their willingness to rely on legal remedies. In communities characterized by face-to-face relationships and populated by economically self-sufficient farmers and merchants—typical of eighteenth-century life in the United States—lawsuits were viewed as an inappropriate response to personal injuries, regardless of culpability. Litigation disturbed the peace of the community, violated religious-based community strictures against suing for misfortune, and threatened to rob the community of a valuable social resource, its physician. This species of communal structure changed only slowly, but profound changes to old ways of thinking took place in the early to middle nineteenth century, the same time that the first malpractice crisis arose. There was a clear movement from the communal mentality of colonial America to the more anonymous individualism of the nineteenth- and twentieth-century United States. As the notion of community evolved, individualism became a greater feature of American life, and community stigma against suits had less influence. Without this change, individuals would not have felt free to sue on a wide scale (DeVille, 1990). Indeed, social distance may still influence what is considered legitimate litigation. An important empirical study of malpractice in the late twentieth century contends that “urbanization is the single most powerful predictor of both frequency and severity of [malpractice] claims” (Danzon, 1985).
The vast majority of malpractice suits in the mid-nineteenth century resulted from fracture and dislocation cases. Orthopedic treatments had played an inconsequential role in the few publicized suits prior to the 1830–1850 period. The explanation is revealing. In 1800, the standard of care for severe fractures and dislocations was amputation. Amputations, however, did not generate a large number of suits. Claimants typically had no limb left to present as evidence to experts and the jury but, most significantly, public and professional expectations were low for severe orthopedic injuries. Amputation or death was the norm. By the late 1830s, however, the profession had developed a series of dramatic new orthopedic techniques that allowed physicians to save rather than amputate limbs. Accordingly, saving, rather than amputating, limbs in severe orthopedic injuries had become the standard of care by 1850. This orthopedic revolution fostered excitement and inflated expectations in both the profession and the lay public. Medical treatises referred to fracture treatment as a relatively mechanical procedure in which physicians and patients could expect perfect cures. Orthopedic care, however, was far from mechanical or predictable. The new technologies and treatment methods were complex and, in many ways, required more skill, knowledge, and care than did amputations. During the long recovery period associated with orthopedic treatment, physicians were faced with a host of new concerns and complications. Moreover, although physicians could now more frequently save limbs, fractures and dislocations usually yielded permanent injuries: shortened or deformed limbs, frozen joints, and complications associated with the long periods of convalescence. Dissatisfied patients now had clear manifestations of their alleged injuries to show jurors.
As a result, less-than-perfect results following orthopedic injuries constituted the most common type of malpractice suits well into the twentieth century (DeVille, 1990; Smith, 1941).
Clarifying the contribution of technology and medical advancement to mid-nineteenth-century litigation is central for understanding the first malpractice crisis and the subsequent waves of suits to follow. Physicians’ experience with orthopedic treatment in the mid-nineteenth century illustrates a phenomenon that would recur as medicine advanced in other areas of practice. Dramatic and genuine medical advancements are invariably followed by heightened, and frequently excessive, professional and lay expectations. As Mark Grady (1988) has explained, improved procedures more often than not require greater learning, skill, and care. Consequently, technological advancement carries with it a greater opportunity for what is now perceived as error or accident. Moreover, the cost of that error is greater because the potential benefit of the improved treatment is higher than the treatment that it replaced. As a result, in the words of Grady, medical innovation and advancement “captures” what was previously “natural risk” and transforms it into “medical risk” providing the precipitating cause of malpractice suits.
Medical Malpractice: 1850–1900.
In the late nineteenth century and beyond, many of the inciting factors of the 1830s–1850s clearly declined. Antiprofessionalism weakened. Medical education improved by 1900, and even more dramatically during the first half of the twentieth century. By 1900, the status of physicians had improved; later it skyrocketed. But despite the disappearance of most of the topical causes of the litigation in the first half of the nineteenth century, suits continued. New inciting factors arose to take the place of the old. The status-based resentment of the Jacksonian period, for example, was gradually replaced by a species of class-based resentment as physicians’ incomes slowly increased. A growing population of lawyers and more frequent use of contingency fees in the late nineteenth and early twentieth centuries provided legal services to a wider range of the population. As importantly, whereas the law of malpractice in the early century was undeveloped, by 1900 case law and legal treatises provided important and heretofore unavailable guidance to potential malpractice attorneys. Medical licensure had been largely reinstituted by 1900, but professional discipline was rarely vigorously pursued. Thus, as the nineteenth century progressed, physicians increasingly recognized the existence of more exacting profession-wide standards, but the profession only modestly enforced those norms on its membership. The absence of a robust, formal professional response to medical error and deficient practitioners left the resolution of those questions by default to the anarchic medical malpractice litigation system (Mohr, 2000; Hogan, 2003).
At the same time, the cultural preconditions for suits did not abate; they matured. The trend toward urbanism steadily continued, and community cohesion dissipated further. Religious fatalism decreased further, and individuals increasingly came to believe that human ills and suffering had an earthly cause and remedy. Medical progress, too, played an ongoing and central role in malpractice litigation in the second half of the nineteenth century. Technological advancement in the late nineteenth and early twentieth centuries helped raise the overall status of the profession and eliminated one of the early causes of suits. However, specific improvements in medical treatment, like the orthopedic revolution of the 1830s, incited cycles of heightened expectations, greater clinical demands, unforeseen complications, and resultant disappointment when expectations were unmet. For example, in the 1870s the plaster-of-Paris “revolution” and innovations related to aseptic practice again transformed orthopedic care and again generated unfounded expectations of near-flawless cures. According to some physicians, the treatment of compound fractures had been brought to a “state of perfection.” A new spate of orthopedic suits arose in the 1890s. Thus, as in the early part of the century, large numbers of suits followed periods of successful and dramatic improvement in a particular treatment modality (DeVille, 1990; Grady, 1988). X-rays, discovered in 1895, followed a similar course. By early 1896, physicians were predicting that their use was a foolproof protection against suits for medical malpractice. The X-ray indeed had jurisprudential significance, but not that anticipated by the medical profession. By the end of 1896, physicians were being sued for failing to take X-rays, and radiological evidence was being employed against physicians in court to demonstrate negligent care. By 1900, physicians had been sued for iatrogenic injuries (burns) caused by the technology.
Like fracture treatment a half century before, the X-ray generated expectations that it could not meet. Moreover, it created a record of ambiguous evidence, a record that did not previously exist. This record, susceptible to subjective interpretations, could and would be used against defendant-physicians (Hogan, 2003).
Medical Malpractice: 1900–1950.
Medical malpractice suits continued a “slow, but steady, rise” in the first half of the twentieth century, slowing during the war years. Medical societies began to play a greater role in the development of defense strategies for their members, and malpractice insurance became a prevalent feature of a physician’s professional life. Professional consciousness of suits was said to result in the so-called conspiracy of silence, in which physicians refused to testify as experts against their colleagues. At the same time, physicians continued to believe that careless comments from colleagues frequently sparked suits over another physician’s care. The new availability of insurance protected physician-litigants individually, but it may have played a role in increasing the rate and size of damage awards. The number of attorneys in the United States increased by as much as 50 percent between 1900 and the 1930s, and these attorneys developed new theories and a new sophistication for pursuing medical negligence cases. For example, suits for insufficient informed consent appeared. Courts began to accept the doctrine of res ipsa loquitur, which allowed attorneys and their clients to pursue cases without expert witnesses when the situation was such that the jury could infer negligence from the facts. The use of res ipsa loquitur expanded the range of cases for which patients could seek remedies (Mohr, 2000; Hogan, 2003).
Although the rate of suits was affected by these readily identifiable early twentieth-century topical factors, long-term causes of the litigation persisted as well. Public acceptance of litigation and the cultural proclivity to sue grew, in part because communities overall became more heterogeneous, individualism flourished, and communal restraints against suing weakened as the twentieth century progressed. Also continuing early nineteenth-century trends, Americans became even more secularized and more convinced that humans could improve their lives. Partly as a result, some commentators have suggested that individuals in the twentieth century became ever more concerned about their physical condition, a tendency historian T. J. Jackson Lears has described as a “fretful preoccupation with secular well-being” (No Place of Grace: Antimodernism and the Transformation of American Culture, 1880–1920, Pantheon, 1981). Accordingly, Americans became more convinced that there must be a remedy or a solution when something went wrong.
Despite the importance of these cultural factors in inciting or providing the social context for suits, technological and medical advancement continued to play a central role in litigation between 1900 and 1950. Continuing the trend originating in the previous century, suits resulting from the use or nonuse of X-rays multiplied (Hogan, 2003). Suits spread to new procedures as well. Before the 1880s, operative surgery was limited and, not surprisingly, generated few suits. By the turn of the twentieth century, however, the prevalence of surgical procedures had increased. Consistent success was hampered by factors such as surgical shock, infection, inadequate training, and primitive instruments. Yet despite the growing number of surgical procedures, and despite experimentation and uneven results, suits related to surgical practice did not increase dramatically in the first two decades of the twentieth century. It was not until the advent of sulfa drugs to fight deadly infections, transfusions to assuage the effects of surgical shock, the development of residency programs to train surgeons, the refinement of aseptic practices, and the development of more reliable and appropriate instruments that surgeons were able to boast noteworthy and more numerous successes. After that time, suits involving surgical treatments increased precipitously, overtaking orthopedics as the most common source of medical malpractice suits by the 1940s (DeVille, 1998).
Body-cavity surgery was performed with some regularity for over 40 years before it generated large numbers of malpractice suits. As with orthopedic care, surgical procedures, even in the face of failure, did not incite suits until the treatment modality reached a threshold of lay and professional expectation. It was not until the end of the 1920s that surgical intervention could be championed by the profession and viewed by the public as a predictable and reliable solution for injury and disease, sparking the demand for accountability and compensation. In addition, surgeons of the post-1930s medical world had to monitor and manage shock, bleeding, and vital signs while they were performing complex and new procedures, demands unheard of in previous medical care. As the Grady thesis (1988) suggests, when surgeon performance fell short or inflated expectations were not met, suits followed. This consequence is consistent as well with human performance research that demonstrates that complex medical technological environments typically increase the likelihood of knowledge-based, skill-based, and attention-based mistakes.
Medical Malpractice: 1950–2000.
The most visible increase in the incidence of malpractice suits in U.S. history occurred in the last 30 years of the twentieth century, increasing by some accounts from approximately one claim per one hundred physicians in the late 1950s to ten claims per one hundred physicians in the mid-1980s (DeVille, 1998). Physicians self-identified successive malpractice “crises” in the early 1970s, in the 1980s, and in the first decade of the twenty-first century (Sage, 2003). Unlike previous periods, the perception of “crisis” now extended not only to the number and costs of the suits, but also to the increasing expense and decreasing availability of physician medical malpractice insurance coverage (Hogan, 2003).
The rate of malpractice claims increased so dramatically after 1950 because of particular and identifiable topical factors specific to the historic period. The gradual decline of the “locality rule” meant that physicians might be held to a national, instead of a merely local, standard of care and, in theory, increased the availability of expert witnesses for patient-plaintiffs. Attorney advertising, which appeared only in the late 1970s, gave guidance and options to patients who believed they had been wrongfully injured by physicians. Increasingly specialized and organized plaintiffs’ attorneys further increased the sophistication and expertise of lawyers who would pursue complicated medical malpractice cases. Expansive media coverage of both medical issues and adverse events likely affected the public’s consciousness as well. The spectacular medical promise of the second half of the century was paradoxically juxtaposed with multiplying stories of medical error. Physicians, for the first time, began to suggest that limits in doctor–patient communication played an important part in inciting malpractice litigation (Hogan, 2003). In a society like the United States, with no universal insurance coverage, the crushing medical expenses frequently associated with medically related injuries undoubtedly played a role in encouraging reliance on litigation as a means to satisfy medical expenses.
By all accounts, health-care delivery in the late twentieth century became increasingly complicated, fragmented, and diffuse. Medical care at the beginning of the millennium inescapably involved the interaction of multiple providers, institutions, and modalities, ordinarily without the benefit of a coordinated, overarching organizational structure. Physician–patient communication and patient understanding in this setting are likely to be undermined, providing a source of dissatisfaction that has been linked to litigiousness. In addition, the role and importance of complex systems are nearly an article of faith among those who think and write about error in medicine. These complex health-delivery environments are replete with error traps that are difficult, if not impossible, for individuals to identify. Increased errors in such a context are inevitable. Thus, although individuals frequently spark error, mistakes also often have their genesis in interactions among various critical actors and in organizational structures and the distribution of tasks and work responsibilities. The endemic and increasing complexity of medical care has itself likely played a contributing role in increasing mistakes and, as a consequence, increasing litigation. Finally, in the late twentieth century, physician and public policy discourse focused nearly as heavily on the cost and availability of medical malpractice insurance as it did on the number, cost, and justice of the lawsuits themselves. Concern over insurance premiums had remained muted for much of the century, in part because physicians frequently had been able to pass the costs on to patients and third-party payers. As health-care cost containment became a central feature of medical life at the century’s close, however, it became progressively more difficult to transfer premium-rate increases, and as a consequence physicians began to feel rate increases more acutely (Sage, 2003).
Despite these important causal explanations, the increase in suits in this period, as in previous ones, should also be traced to the dazzling number of medical technologies that were developed and reached widespread use in the last half of the twentieth century. These technologies, available for use because of the advent of widespread health-care insurance coverage, generated both a public and a professional belief in predictable results and substantial benefit. For example, obstetrical care generated relatively few medical malpractice claims until the 1970s. Maternal and fetal risk remained considerable at midcentury but declined rapidly thereafter. By the 1970s, new drugs, medical regimens, and technologies provided obstetricians with a broad range of exciting and valuable diagnostic and clinical options. Although beneficial, in some respects these advancements have complicated treatment decisions by placing greater demands on clinicians. Dramatic improvement in the safety of pregnancy, labor, and delivery has also increased expectations and contributed to resentment over tragic outcomes. By 1985, obstetric claims had increased precipitously, representing perhaps 10 percent of all malpractice suits (DeVille, 1998).
Similarly, diagnostic advancements of the last half of the twentieth century have revolutionized the practice of medicine. These technologies, as did the X-ray before them, improved care but stimulated malpractice litigation by engendering the expectation that life-threatening and debilitating illnesses can be foreseen and thwarted by early intervention. Some of this enthusiasm was well founded, but some was not. Patients, and even frontline physicians, are often insufficiently aware of the workings and limitations of the new diagnostic techniques. As a result, both are sometimes disappointed by the results. Some diagnostic technologies posed iatrogenic dangers to patients. Finally, diagnostic technology, when used, may leave a record to be used against the physician. Consider the example of electronic fetal monitors (EFMs), used to monitor fetal heart rates and to identify fetal distress. Like X-rays, EFM was seen not only as an improvement in care, but also as a prophylaxis for the growing number of obstetric malpractice suits. However, EFM proved a mixed legal blessing. EFM may be indicated in some cases, but it simultaneously increased the risk of suit following the birth of an injured infant by providing a retrospective record of the fetal heart rate that was sometimes susceptible to ambiguous interpretations. Similar conclusions might be drawn about the use and implications of such diagnostic technologies as ultrasound, bronchoscopy, endoscopy, magnetic resonance imaging, computed tomography, sophisticated laboratory tests, genetic screening, and even the modern medical record itself, all diagnostic innovations that were refined in the last half of the twentieth century. All may sometimes represent the required standard of care, and each, in the event of an untoward outcome, can provide often ambiguous evidence of oversight or negligence. 
Significantly, the fastest growing medical malpractice allegation in the closing decades of the twentieth century was the failure to diagnose an existing illness or injury (DeVille, 1998; Jacobson, 2006).
The heightened, and sometimes ill-informed, expectations inspired by diagnostic advances of the second half of the twentieth century are analogous to those generated by innovations in fracture treatment in the mid-nineteenth century and surgery in the first third of the twentieth century. Other medical innovations, too, have proceeded through similar “life cycles” of introduction, proliferation, inflated expectations, and lawsuits when expectations were not met and unforeseen complications and limitations led to injury or less-than-perfect results. Suits related to pharmaceutical therapies, for example, increased in the decades after 1950, a period during which drug treatments became both more sophisticated and more complex. When a technological innovation is first introduced, the medical profession cannot fully appreciate its potential limits, risks, or side effects; the precise degree of skill, care, or knowledge required to employ it; the patient populations on which it is most and least effective and dangerous; and the clinicians most qualified to offer it. When the new drug, device, or procedure is used on a larger and broader scale, the limitations and peculiarities of the innovation become apparent as injuries and less-than-perfect results surface. As the profession uncovers the hidden limitations and dangers of the medical innovation, physicians institute precautions to resolve them, outcomes improve, and adverse outcomes and perceived errors decline. It is likely, however, that in many cases the number of suits generated by the new procedure will still be higher than before the advancement was introduced, in part because the new procedure frequently generates higher expectations, requires greater care and skill, and provides more opportunity for oversight and accident. This analysis suggests that although the legal standard for negligence has not changed, its real-world content and requirements have burgeoned (Grady, 1988; DeVille, 1998).
Using this model, Peter Jacobson analyzed the medical malpractice claims associated with laparoscopic cholecystectomy, neonatology, diagnosis of breast cancer, and anesthesiology from 2001 to 2003. Jacobson concluded that the data are “consistent with the view that claims increase upon the introduction of a new technology, but then level off over time, in part because a new procedure is typically more complex and exacting than previous treatment methods” (Jacobson, 2006).
Although physician calls for reform of the medical malpractice litigation system surfaced in the late nineteenth century and became progressively more strident, they did not yield significant changes until much later. Successive spikes in physician concern regarding premium rates and malpractice suits in the mid-1970s, 1980s, and the early twenty-first century inspired a wave of legislative reforms to decrease the frequency and costs of the litigation. Legislative reform of medical malpractice posed a complicated challenge for policy makers. Public debate was dominated by factions of physicians, attorneys, and various public advocacy groups; the range of complaints varied, depending on the interest group evaluating the issue. As the twentieth century closed, although nearly all groups agreed that the current system of reimbursing those who are injured by medical negligence could be improved, there was little agreement on the best remedies. Empirical data on medical malpractice, made increasingly available near the end of the century, did not definitively resolve many of the key questions regarding the nature, magnitude, and source of the problem (Studdert et al., 2004). In a review of arguments for and against medical malpractice reform, Bovbjerg and Berenson (2005) concluded that many of the most strident claims of both proponents and opponents of fundamental malpractice reform were misguided. Instead, they argued that the “top five real problems” in medical malpractice litigation included the following: (1) too many patients suffer preventable injuries; (2) compensation of injuries is very poor because few patients make claims and fewer still collect; (3) claims resolution is inefficient: too slow and costly; (4) liability fears hamper physician-patient communication and disclosure of injuries; and (5) determinations of negligent medical injury are inherently subjective. 
Similarly, Tom Baker (2005), in an analysis of available empirical data titled The Medical Malpractice Myth, concluded that despite a high rate of medical error, relatively few patients file medical malpractice suits and that, despite public perception, the rate of litigation actually declined at the end of the twentieth century. Baker contended as well that medical malpractice suits had “little or nothing” to do with insurance rates and that medical malpractice had only a small impact on health-care costs in general. Such studies do not demonstrate that medical malpractice litigation poses no problem to the medical profession and society. Instead, as throughout the history of the litigation, these empirical refinements of the phenomenon confirm that reality rarely conforms to the beliefs and observations of contemporary observers.
Overall, the incidence of malpractice suits increases because a greater number of individual patients sue physicians for the use of, or failure to use, a particular technology, procedure, or therapeutic approach. Paradoxically, malpractice suits have increased in the face of what appears to be extraordinary and unceasing medical progress. And the cultural origins and the continuing foundations for the suits—individuals’ growing concern for their bodies and their increasing acceptance of litigation as a remedy for misfortune—suggest that medical malpractice suits are as much a social–cultural phenomenon as a legal one. These observations have ambivalent implications for those who wish to reform the medical malpractice liability system.
[See also Forensic Pathology and Death Investigation; Health Insurance; Journals in Science, Medicine, and Engineering; Law and Science; Medicine; Public Health; Religion and Science; and Technology.]
Baker, Tom. The Medical Malpractice Myth. Chicago: University of Chicago Press, 2005. Baker synthesizes the scholarly and empirical evidence on medical malpractice and makes a special attempt to test and dispel popular and professional misconceptions regarding the suits.
Bovbjerg, Randall R., and Robert A. Berenson. “Surmounting Myths and Mindsets in Medical Malpractice.” Urban Policy Briefs. Urban Institute, 2005. http://www.urban.org/uploadedPDF/411227_medical_malpractice.pdf (accessed 17 October 2012). This study weighs the available empirical evidence in an attempt to blunt the rampant misinformation and misunderstanding that dominate policy debates on medical malpractice.
Burns, Chester R. “Malpractice Suits in American Medicine before the Civil War.” Bulletin of the History of Medicine 43 (1969): 41–56. A pioneering work on the history of medical malpractice in antebellum America that discusses the nature of early suits and the medical profession’s response.
Danzon, Patricia. Medical Malpractice: Theory, Evidence and Public Policy. Cambridge, Mass.: Harvard University Press, 1985. An important early empirical work that analyzes the workings of the medical malpractice system in the second half of the twentieth century. Danzon considers the efficiency of the system in compensating victims and engendering deterrence, analyzes the medical malpractice insurance system, and provides guidance on potential reforms.
DeVille, Kenneth A. Medical Malpractice in Nineteenth Century America: Origins and Legacy. New York: New York University Press, 1990. Analyzes the emergence of malpractice litigation in antebellum life, tracing its origins to changing attitudes about the physical body, the role of providence, and the changing nature of community. It also explores how the first malpractice “crisis” is integrally connected to specific advances in treatment.
DeVille, Kenneth A. “Medical Malpractice in Twentieth Century United States: The Interactions of Technology, Law and Culture.” International Journal of Technology Assessment in Health Care 14, no. 2 (1998): 197–211. Examines how nineteenth-century antecedents of the litigation carry over into the next century. Provides an extended illustrated discussion of the role of medical advancement in the suits.
Grady, Mark F. “Why Are People Negligent? Technology, Nondurable Precautions, and the Medical Malpractice Explosion.” Northwestern University Law Review 82 (1988): 293–334. The seminal work explicating the role that technology plays in medical malpractice litigation and the development of the litigation over time.
Hogan, Neal C. Unhealed Wounds: Medical Malpractice in the Twentieth Century. New York: LFB Scholarly Publishing, 2003. Hogan explores the malpractice phenomenon in the first two-thirds of the twentieth century. He focuses on legal changes, the role of medical societies, public awareness of the litigation, malpractice insurance, and the growing role of hospitals.
Jacobson, Peter D. “Medical Liability and the Culture of Technology.” In Medical Malpractice and the U.S. Health Care System, edited by William M. Sage and Rogan Kersh, pp. 115–136. Cambridge, U.K.: Cambridge University Press, 2006. A systematic analysis and empirical confirmation of the thesis that the advancement of medical technology and practice has played the pivotal role in the history and continuing development of medical malpractice litigation. Jacobson considers the implications of this analysis for reform.
Jost, Kenneth. “Medical Malpractice.” CQ Researcher 13, no. 2 (2003): 129–152. Drawing on a wide range of scholarship and guidance, Jost focuses on late twentieth-century suits and prospects for reform, but connects this discussion to the historical origins of the suits.
Mohr, James C. “American Medical Malpractice Litigation in Historical Perspective.” JAMA 283 (2000): 1731–1737. Mohr provides an overview of the historical origins of medical malpractice, focusing on the recurring influences of the phenomenon. Mohr pays special attention to the role of medical professionalism and its interaction with the legal community.
Sage, William M. “Understanding the First Malpractice Crisis of the 21st Century.” In Health Law Handbook, edited by Alice C. Gosfield, pp. 1–31. St. Paul, Minn.: West Publishing, 2003. Sage discusses the first malpractice “crisis” of the twenty-first century and explains how it differs from late twentieth-century concerns.
Smith, Hubert Winston. “Legal Responsibility for Medical Malpractice.” JAMA 116, no. 24 (1941): 2670–2679. Smith’s article represents an early attempt to understand the malpractice phenomenon in a systematic way and provides a window into the first half of the twentieth century.
Studdert, David M., Michelle M. Mello, and Troyen A. Brennan. “Medical Malpractice.” New England Journal of Medicine 350, no. 3 (2004): 283–292. This article provides an overview and discussion of the functioning and efficiency of the medical malpractice system in the last third of the twentieth century. Drawing from the best available empirical data, the authors also critique various medical malpractice reforms.