Toddlers getting into Grandpa’s medicines

May 16, 2011  |  General  |  No Comments

From time to time we have children, mostly toddlers, in the PICU who are there because of an overdose of a medication meant for somebody else. I frequently see this scenario: parents are careful to keep all medicines locked away from curious toddlers, but then the child visits grandparents who, not having small children regularly around the house, are not so diligent. Many older persons take one or more of a wide variety of powerful medications that can cause serious or even lethal poisoning in small children. Child-proof caps are sometimes difficult for the elderly to open, so they may not use them. I deal with the results at least several times each year. A parent whose small child spends significant time at another house, especially if someone living there takes medicines, should make sure those medicines are stored safely. Toddlers are amazingly quick at getting into trouble.

The best and fastest way to get advice about poisonings in children is to call your regional Poison Control Center. To make this easy to do, the telephone number is the same across the nation: 1-800-222-1222.

Safe transport for critically ill and injured children

May 12, 2011  |  General  |  No Comments

Critically ill or injured children often develop their problems far from where they need to be in order to get the care they require. A common scenario is for a child to be brought initially to a facility such as a general community hospital, where they are stabilized and receive initial therapy. Following this, however, they often need transport to a larger facility with specialized resources, such as a children’s hospital. To get there they need to ride in an ambulance, an airplane, or a helicopter. Are there risks to that? Unfortunately, although the risks are small, they are real, as you can read about here. As with everything we do in medicine, the risks of transport need to be weighed against the benefit of getting the child to a facility better equipped to care for him or her.

How common are accidents with pediatric transports? The most recent data I have seen are from 2002, when a survey of all pediatric transport teams in the United States asked whether they had suffered any accidents in the preceding five years — 42% of the teams answering the survey had. These included 9 aircraft crashes and 57 ambulance accidents. To judge transport risk, of course, we need a key piece of information — the total number of transports — and that number is unknown. We do know that there were about 150 pediatric transport teams in the country at the time, and a typical team does around 100-200 transports each year. That is a pretty broad range, but it puts the estimated accident risk somewhere around 0.05-0.1% per transport. This isn’t a big number, but 5 to 10 mishaps of some kind for every 10,000 trips does indicate a real risk. Many of the ambulance accidents were minor, but, as you can read in the link above, some of the air transport accidents were not. Overall, there were 8 deaths and 10 serious injuries to patients or transport personnel during the 5-year period.
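For readers who like to see the arithmetic, here is a rough back-of-the-envelope sketch of where that estimate comes from. It uses only the figures quoted above (the 66 reported accidents over five years and the assumed range of 100-200 transports per team per year); it is an illustration, not a precise risk calculation.

```python
# Rough estimate of per-transport accident risk from the survey figures above.
accidents = 9 + 57        # aircraft crashes plus ambulance accidents over 5 years
teams = 150               # approximate number of U.S. pediatric transport teams
years = 5                 # the survey covered the preceding five years

for per_team_per_year in (100, 200):
    total_transports = teams * per_team_per_year * years
    risk = accidents / total_transports
    print(f"{per_team_per_year} transports per team per year: "
          f"about {risk:.2%} per trip, or roughly {risk * 10000:.0f} per 10,000 trips")
```

Depending on which end of the range you use, that works out to roughly 0.04-0.09% per trip, which is where the "around 0.05-0.1%" figure comes from.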

Fifteen years ago I founded, and subsequently ran for nearly a decade, a pediatric transport program, during which time I made quite a few flights in both fixed-wing aircraft and helicopters. I quickly learned how difficult that work environment can be for the medical team. Our team suffered one accident, the result of an engine malfunction in our helicopter; fortunately, no one was injured, but the incident underscored for all of us the risks of the process. (I don’t do that any more, but I continue to be a member of the Transport Section of the American Academy of Pediatrics.)

From a patient-care perspective, the main advantage of helicopter transport versus ground ambulance is speed (fixed-wing aircraft transports are a bit separate, being typically used for long-range transports over hundreds of miles). Speed is especially important for severe trauma cases. The principal disadvantage of a helicopter, compared to an ambulance, is that the former is a very difficult environment in which to work; it is noisy, cramped, and often buffeted about in the air.

In spite of the risks of transport, specially trained pediatric transport teams are an important component of all regionalized pediatric critical care systems. If your child’s doctor recommends this for your child, the slight risk of the trip is far outweighed by the benefits of getting your child to the people best equipped to care for her. Do ask about the members of the transport team — whether they are specially trained to do pediatric transports; such dedicated teams, overall, give the best results.

The right of adolescents to decide life and death medical issues for themselves

May 7, 2011  |  General  |  No Comments

One of the four key principles of standard medical ethics is the principle of autonomy, which I’ve written about here. Autonomy means that patients are in control of their own bodies and make the key decisions about what sort of medical care they will (or will not) receive. For children, this principle means that the child’s parents make these decisions.

There are exceptions, as with all things in medicine. For example, if a child’s physicians believe that the parent’s choice will harm the child, the physician can ask a court to intervene. This is a very rare occurrence, but it happens sometimes. I have been involved in a few of those cases. But that’s not what I’m writing about now — I’m writing about nearly-adults, those children who are almost independent, but not quite.

The law generally defines the age of majority, the point at which a child is no longer a child and may decide these things for herself, at age eighteen, although there are variations between states. (The age is younger for so-called emancipated minors — those children who are entirely self-supporting or who are married.) What should we do when such a near-adult and her parents disagree about the treatment the child should get? There have been several recent examples of the variety of things that can happen then.

One case is that of Dennis Lindberg, a fourteen-year-old boy who died from leukemia in 2007. Dennis was a Jehovah’s Witness and, like others in his faith, rejected blood transfusions, even in life-saving situations. It is common for the courts to mandate transfusions in very small children over the objections of Jehovah’s Witness parents. The rationale for this is that a small child is too young to decide himself if he agrees with his parents. Dennis’s doctors went to court to get such an order.

But this case was different — Dennis was not a toddler or small child. He was an aware, articulate, young man who understood the meaning of both his illness and the consequences of not getting the transfusion. The court ruled that Dennis had the right to make his own choice, which he did.

His case dramatized a very grey area in medical ethics — when ought a young person to be able to make these decisions on his own? In my own career I have had several occasions when an adolescent disagreed with the doctors, his parents, or both about what to do. In all those situations everyone eventually came to an understanding. That’s the best outcome, of course, but these will always be ambiguous situations because children mature at differing rates. Some thirteen-year-olds are wiser than seventeen-year-olds. For that matter, some young adolescents are wiser than others who have already attained the magic age beyond which we give them the right to make all these decisions.

If you are interested in these kinds of ethical questions as they relate to children, here is another example of a teen (with the support of his parents) going to court to assert his right to refuse standard therapy for cancer.

How are doctors trained anyway? (Part 3: resident life)

May 4, 2011  |  General  |  No Comments

In spite of all its scientific underpinnings, medicine is not really a science; rather, it is an art guided by science. Medical students spend long hours learning about the science of the body, but they really do not become doctors until they have learned the art at the bedside from experienced clinicians. Medical practice is called practice for a reason; we learn it by practicing it in a centuries-old apprenticeship system, which is really what a residency is. As we do so, and again, in spite of the scientific trappings, we imbibe ways of thinking, of talking, and of doing that are as old as Hippocrates. This post will show you that aspect of medicine. Seeing it is fundamental for your understanding of what doctors do and why.

Although physicians learn at the feet of their elders — the experienced practitioners — a young doctor’s peers also heavily influence his training, and through that, his outlook; resident culture is important. Residency is an intense experience that comes at a time in life when most new doctors are relatively young and still evolving their adult characters. In a manner similar to military training, residency throws young people together for lengthy, often emotion-laden duty stints in the hospital. Not surprisingly, and also like military service people, residents often form personal bonds from this shared experience that last for the rest of their lives. Most physicians carry vivid memories from their residency for the duration of their careers.

Recent regulations have limited the maximum number of hours a resident may work each week. These rules came from two sources. One was the common-sense observation that tired residents cannot learn or work well. Common sense, however, cannot change hidebound traditions; what really changed resident work hours was a famous court case in New York (the Libby Zion case), involving a young woman who died under the care of overworked residents. The particulars of that case did not clearly establish that resident fatigue caused Libby’s death, but the uproar started a sea change in how residents are trained.

The mandated maximum of an eighty-hour workweek is still long by any standard, but it had been much longer, and many of today’s doctors (myself included) trained under the old system when 110 hours or more per week was not uncommon, with perhaps the gift of every third Sunday off. My own residency program director told us, intending no irony: “The main problem with being on-call only every other night is that you miss half the interesting patients.” So, like garrulous ex-Marines, doctors swap tales of the time that, although brief in comparison to a lifelong career, was extraordinarily important in forming their professional behavior. Generalizations are tricky, especially when applied to such a diverse group of people as resident physicians. This caveat aside, parents who understand something about resident culture will gain useful insights into why many physicians think and act the way that we do.

Residents have come through a pathway that generally fosters intense competition and that values academic achievement above all else. In recent years, medical schools and residency programs have, to varying degrees, tried to emphasize the importance of more humanistic skills like empathy and compassion, and the specialty of pediatrics has been among the leaders in doing this. However, it remains true that physicians are the products of a system that rewards those who excel at competing with their colleagues over how much information they can learn, remember, and then produce on demand for a superior.

Resident culture encourages young doctors to appear and act all-knowing and self-confident even when they are not. This skill is often called “roundsmanship” and is inculcated from early on in their training. Residents get much of their teaching during the time-honored ritual of rounds, in which a team of residents and their supervising physician walk around to their patients’ rooms, pausing at each doorway to discuss the case. The discussion typically begins with the resident presenting the patient’s problem and the resident’s plan to deal with it to the assembled group, following which the supervising physician often grills the resident about the case. Residents adept at roundsmanship are quick thinkers and have rapid recall of pertinent facts. Master roundsmen, however, are best characterized as fearless when clueless—they appear assured and in control of the situation even when they are not.

I am exaggerating a little for effect, of course, but my point is to show you how years and years of this kind of environment affect most doctors to some extent. Such a background can cause doctors to seem defensive when questioned, for example by a parent, because doctors spend their formative years defending what they are doing to both their peers and to their exacting teachers. It can also make it difficult for a doctor to admit he does not know what to do with a patient, since physicians are conditioned to regard that admission as a real defeat. This attitude is encapsulated in the saying, often applied to surgeons but relevant to all physicians: “Seldom wrong, never in doubt.”

How are doctors trained anyway? (Part 2: the process)

April 28, 2011  |  General  |  No Comments

So how does one train to be a physician? The first step is to obtain a four-year undergraduate degree at a college or university. This was the most fundamental change in post-Flexner medical training; before that time, many medical schools had little or no requirement for previous education, some not even demanding a high school diploma. Today the prospective physician’s baccalaureate degree can be in anything (mine happens to be in history and religion), but all medical schools require premedical course work in biology, chemistry, biochemistry, physics, and often mathematics. As a result of these science-heavy requirements, most premedical students choose to major in one of the sciences.

The next step is to gain admission to medical school. This traditionally has been a difficult thing to accomplish, although admission statistics for individual schools are hard to interpret because virtually all students apply to several schools, often more than ten. In general, an applicant’s overall chances of being admitted to medical school somewhere have fluctuated between 25 and 35 percent over the last several decades. One thing that has changed is the drop-out rate. Fifty years ago, many students did not complete the course; these days, drop-out rates are extremely low.

Medical school generally lasts four years, at the end of which time the graduate is properly addressed as “doctor.” However, the new doctor is one in name only, because no state will allow her to practice medicine independently without further training. Medical licenses are in fact granted by the individual states, and their requirements vary, but all demand at least one year of supervised on-the-job training beyond medical school. Fifty years ago, many physicians stopped their training after doing that single year of training — called an internship — because that was all a physician needed to obtain a medical license and begin working as a general practitioner. These days virtually no one stops after one year, because nearly all physicians require more training just to find a job. You will still hear doctors in their first year out of medical school referred to as interns, but the term does not mean much now.

Medical students receive a standard training curriculum that varies little between the various medical schools; this is enforced by the organization that accredits medical schools. Toward the end of their four years, however, students generally do get some freedom to select courses geared toward what specialty they choose for their residency, the term for the several years of practical training they get after medical school. The usage comes from the fact that medical residents once actually lived in the hospital; these days, even though resident workweeks average eighty hours or so, no one literally lives in the hospital.

Residencies come in the standard broad categories of areas of expertise like internal medicine, pediatrics, surgery, and obstetrics and gynecology, as well as specialties like radiology, neurology, dermatology, and psychiatry. There are in total twenty-four recognized medical specialties, each of which sets its own requirements for the residents training in their respective fields. (You can read more about each individual specialty here.) Medical science has expanded sufficiently that a medical student who wishes to specialize in not being a specialist — that is, who wants to take care of all sorts of patients — must do a residency in family practice.

Residency lasts from three to five years after medical school, depending upon the specialty. At the end of training, the resident takes an examination. Passing it makes her “board-certified” in the field; someone who has completed the residency requirement but has not yet passed (or has failed) the examination is called “board-eligible.” Some physicians choose to continue their training even further beyond residency, to subspecialize in things like cardiology, infectious diseases, or hematology.

The person you encounter when you bring your child to her doctor’s appointment has thus spent at least eleven years getting ready to meet you: four years in college, four years in medical school, and three to five years in residency. That person has also spent much of that time being initiated, perhaps indoctrinated, into a culture, a worldview, that is shared by most physicians. It is a culture foreign to that of many nonphysicians. Its attributes come primarily from the way physicians have been trained since Flexner’s reforms of medical education a century ago. Knowing about this time-honored system will help you understand your child’s physician, and understanding improves communication. More about that in later posts.

How are doctors trained anyway? (Part 1: the historical background)

April 24, 2011  |  General  |  No Comments

A couple of conversations I’ve had with patients’ families over the past month have made me realize that many folks don’t know how our system produces a pediatrician, a radiologist, or a surgeon. And a lot of what people think they know is wrong. Physicians are so immersed in what we do that we forget that the process is a pretty arcane one. Just what are the mechanics of how doctors are trained? Understanding your physician’s educational journey should help you understand what makes him or her tick. As it turns out, a lot of standard physician behavior makes more sense when you know where we came from. This post concerns some important history about that.

Most physicians in the nineteenth century received their medical educations in what were called proprietary medical schools. These were schools started as a business enterprise, often, but not necessarily, by doctors. Anyone could start one, since there were no standards of any sort. The success of the school was not a matter of how good the school was, since that quality was then impossible to define anyway, but of how good those who ran it were at attracting paying students.

There were dozens of proprietary medical schools across America. Chicago alone, for example, had fourteen of them at the beginning of the twentieth century. Since these schools were the private property of their owners, who were usually physicians, the teaching curriculum varied enormously between schools. Virtually all the teachers were practicing physicians who taught part-time. Although being taught by actual practitioners is a good thing, at least for clinical subjects, the academic pedigrees and skills of these teachers varied as widely as the schools — some were excellent, some were terrible, and the majority were somewhere in between.

Whatever the merits of the teachers, students of these schools usually saw and treated their first patient after they had graduated because the teaching at these schools consisted nearly exclusively of lectures. Although they might see a demonstration now and then of something practical, in general students sat all day in a room listening to someone tell them about disease rather than showing it to them in actual sick people. There were no laboratories. Indeed, there was no need for them because medicine was taught exclusively as a theoretical construct, and some of its theories dated back to Roman times. It lacked much scientific basis because the necessary science was itself largely unknown at the time.

As the nineteenth century progressed, many of the proprietary schools became affiliated with universities; often several would join to form the medical school of a new state university. The medical school of the University of Minnesota, for example, was established in 1888 when three proprietary schools in Minneapolis merged, with a fourth joining the union some years later. These associations gave medical students some access to aspects of new scientific knowledge, but overall the American medical schools at the beginning of the twentieth century were a hodgepodge of wildly varying quality.

Medical schools were not regulated in any way because medicine itself was largely unregulated. It was not even agreed upon what the practice of medicine actually was; there prevailed at the time among physicians several occasionally overlapping but generally distinct views of what the real causes of disease were. All these views shared a basic fallacy — they regarded a symptom, such as fever, as a disease in itself. Thus they believed relieving the symptom was equivalent to curing the disease.

The fundamental problem was that all these warring medical factions had no idea what really caused most diseases; for example, bacteria were only just being discovered and their role in disease was still largely unknown, although this was rapidly changing. Human physiology — how the body works — was only beginning to be investigated. To America’s sick patients, none of this made much difference, because virtually none of the medical therapies available at the time did much good, and many of the treatments, such as large doses of mercury, were actually highly toxic.

There were then bitter arguments and rivalries among physicians for other reasons besides their warring theories of disease causation. In that era before experimental science, no one viewpoint could definitively prove another wrong. The chief reason for the rancor, however, was that there were more physicians than there was demand for their services. At a time when few people even went to the doctor, the number of physicians practicing primary care (which is what they all did back then), relative to the population, was three times what it is today. Competition was tough, so tough that the majority of physicians did not even support themselves through the practice of medicine alone; they had some other occupation as well — quite a difference from today.

In sum, medicine a century ago consisted of an excess of physicians, many of them badly trained, who jealously squabbled with each other as each tried to gain an advantage. Two things changed that medical world into the one we know today: the explosion of scientific knowledge, which finally gave us some insight into how diseases actually behaved in the body, and a revolution in medical education, a revolution wrought by what is known as the Flexner Report.

In 1910 Abraham Flexner, commissioned by the Carnegie Foundation, published his report after visiting all 155 medical schools in America (for comparison, there are only 125 today). What he found appalled him; only a few passed muster, principally the Johns Hopkins Medical School, which had been established on the model then prevailing in Germany. That model stressed rigorous training in the new biological sciences with hands-on laboratory experience for all medical students, followed by supervised bedside experience caring for actual sick people.

Flexner’s report changed the face of medical education profoundly; eighty-nine of the medical schools he visited closed over the next twenty years, and those remaining structured their curricula into what we have today—a combination of preclinical training in the relevant sciences followed by practical, patient-oriented instruction in clinical medicine. This standard has stood the test of time, meaning the way I was taught in 1974 was essentially unchanged from how my father was taught in 1942.

The advance of medical science had largely stopped the feuding between kinds of doctors; allopathic, homeopathic, and osteopathic schools adopted essentially the same curriculum. (Although the original homeopathic schools, such as Hahnemann in Philadelphia, joined the emerging medical mainstream, homeopathic practice similar to Samuel Hahnemann’s original theories continues to be taught in a number of places.) Osteopathy has kept its own identity. It continues to maintain its own schools, of which there are twenty-three in the United States, and to grant its own degree—the Doctor of Osteopathy (DO), rather than the Doctor of Medicine (MD). In virtually all respects, however, and most importantly in the view of state licensing boards, the skills, rights, and privileges of holders of the two degrees are equivalent.

Steroid treatment of croup: a silk purse from a sow’s ear

April 19, 2011  |  General  |  1 Comment

Clinical research — finding out which treatments help and which ones don’t (or even make the situation worse) — is tough research to do. In the laboratory a scientist can control conditions so that only one thing changes, isolating the effect of a particular thing. Clinical research is different because humans are complicated. The researcher tries to control the situation as much as possible, but ultimately she is comparing one dissimilar human to another one.

The result is that a lot of clinical studies, such as trials in which researchers give patients this or that medicine and then try to find out if it worked, are underpowered. This means the studies aren’t powerful enough to answer the simple question: does this treatment help? Usually the reason for the lack of power is that, unless the difference between the groups is extreme, you need a lot of patients in the study to demonstrate any difference at all. Sometimes this means researchers need to enroll thousands of patients.
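To make "underpowered" concrete, here is a minimal simulation sketch. The numbers are entirely made up for illustration (a 50% response rate without treatment versus 60% with it); the point is simply how the chance of detecting a real but modest effect grows with the number of patients enrolled.

```python
# Simulated statistical power for a two-arm trial (illustrative numbers only).
import math
import random

def trial_detects_effect(n_per_group, p_control=0.50, p_treated=0.60, alpha=0.05):
    """Simulate one trial and apply a simple two-proportion z-test."""
    treated_successes = sum(random.random() < p_treated for _ in range(n_per_group))
    control_successes = sum(random.random() < p_control for _ in range(n_per_group))
    p_pooled = (treated_successes + control_successes) / (2 * n_per_group)
    se = math.sqrt(2 * p_pooled * (1 - p_pooled) / n_per_group)
    if se == 0:
        return False
    z = abs(treated_successes - control_successes) / n_per_group / se
    p_value = math.erfc(z / math.sqrt(2))   # two-sided p-value
    return p_value < alpha

def power(n_per_group, simulations=2000):
    """Fraction of simulated trials that detect the (genuinely present) effect."""
    return sum(trial_detects_effect(n_per_group) for _ in range(simulations)) / simulations

for n in (25, 100, 400):
    print(f"{n} patients per group: power is roughly {power(n):.0%}")
```

With 25 patients per group, a trial like this detects the effect only about one time in ten; even 100 per group succeeds less than a third of the time, and it takes several hundred patients per group before the answer becomes reliable. That is why so many small studies end up inconclusive.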

To get around this problem, researchers devised the technique of "meta-analysis." The idea is that one can take a bunch of underpowered studies and pool their information, creating, in effect, a single study with enough power to answer the question. Critics compared meta-analysis to making a silk purse from a sow’s ear — trying to take a lot of poor studies and make a good study from them. That can indeed be a problem. But if you’ve ever taken your child to the doctor for treatment of croup, you and your child have been the beneficiaries of what meta-analysis can accomplish.
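For the curious, the core of the simplest ("fixed-effect") version of this pooling is nothing more than a weighted average, with the more precise studies counting more heavily. The numbers below are invented for illustration; they are not data from the actual croup trials.

```python
# Fixed-effect meta-analysis by inverse-variance weighting (invented numbers).
import math

# (effect estimate, standard error) from several hypothetical small studies;
# a negative effect here would mean, say, lower symptom scores with treatment.
studies = [(-0.8, 0.6), (-0.5, 0.5), (-1.1, 0.7), (-0.3, 0.4), (-0.9, 0.55)]

weights = [1 / se ** 2 for _, se in studies]     # precise studies get more weight
pooled = sum(w * est for (est, _), w in zip(studies, weights)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

print(f"pooled effect: {pooled:.2f}, 95% confidence interval "
      f"{pooled - 1.96 * pooled_se:.2f} to {pooled + 1.96 * pooled_se:.2f}")
```

In this made-up example none of the five small studies is statistically significant on its own, yet the pooled estimate clearly is, because combining them shrinks the uncertainty. Real meta-analyses add many safeguards (checking that the studies are truly comparable, for instance), but this is the basic idea.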

Croup is caused by swelling of the airway from a virus (see the link above for details), and corticosteroid medicines reduce swelling. So it seemed logical to try them for croup. But although some of the early studies suggested steroids helped, they were all underpowered to answer the question for sure. Then somebody did a meta-analysis with the data and showed steroids probably helped. This information then led other researchers to spend the large amount of time and effort to do some fully-powered studies. The results? Steroids, by mouth, injection, or even inhaled, help relieve the symptoms of croup.

So in this case it was a silk purse all along.

Children need sedation for painful or scary procedures

April 14, 2011  |  General  |  No Comments

When I started training in pediatrics nearly 35 years ago, it was common practice, when an infant or child needed something done that was going to be painful, anxiety-producing, or both, simply to hold (or tie) the child down. Looking back on it now, it reminds me of the 19th century, a time when somebody might just be given a stick to bite down on. I wonder how we could have been in the same place with children a century later.

To be fair, there were several reasons we did things that way. Chief among them was the notion — one we now know to be false — that children (infants in particular) did not feel pain in the same way as older persons. The other reason was that we simply didn’t have available many of the medications we have now to counteract pain and anxiety, and the few that we had had not been studied much in children.

Things are much different now. We have a menu of things we can use to prevent pain, ranging from numbing cream we can put on the skin to lessen (or even eliminate) the pain of a needle stick to powerful, short-acting anesthetic drugs we can use to put the child into a deep (and brief) slumber. We have reliable ways of greatly reducing or eliminating both pain and anxiety when a child needs medical procedures as varied as an MRI scan or some stitches in the scalp.

Most doctors who do these procedures are well aware of these things. But if you run across one who doesn’t seem to be, don’t be shy about speaking up and asking what can be done to make your child more comfortable.

The grey zone: the ethical right of parents to refuse complicated and high-risk treatments for their child

April 8, 2011  |  General  |  No Comments

The principle of autonomy is one of the four guiding principles of medical ethics, the others being beneficence, nonmaleficence, and justice. It means that patients have the right to decide what is done to their own bodies. For children under eighteen, the age of majority, this means their parents decide for them. What happens when parents refuse a treatment that their child’s doctors recommend? (The right of a minor child himself to refuse such treatment is an interesting and knotty related issue.)

If the doctors believe the parents are not acting in the child’s best interest, they can go to court and try to convince a judge that the court should take temporary custody of the child and appoint a guardian who will allow the treatment. I have been involved in cases like that from time to time. Usually they involve parents who, often for religious reasons, refuse a fairly standard medical treatment. A common example is a blood transfusion in a family that belongs to the Jehovah’s Witnesses. The medical treatments at issue are generally standard, well-accepted ones.

But what if the treatment the doctors want to do is a complicated, high-risk one? Perhaps a treatment that was once a highly experimental one, but which is now more mainstream, although not entirely so? What then? Do the parents have to allow the treatment or risk having the courts take custody of their child?

A recent article in the Lahey Clinic Medical Ethics Journal addresses just such a situation — surgery for an uncommon condition known as hypoplastic left heart syndrome (HLHS). In this condition a child is born without a functioning left ventricle, a key pumping chamber of the heart. Several decades ago we had no treatment for it — babies were kept comfortable, but they all died within the first few weeks of life. Then, in 1981, Dr. Norwood devised a surgical procedure to treat the condition. The outcomes for the first few years were dreadful, with most children not surviving. Over time, however, heart surgeons got better at doing the operation and the science of pediatric intensive care advanced considerably, so the majority of children now survive the initial surgery.

But what is in store for them is at least one more major operation, called the Fontan procedure, which, if all goes well, allows them to live through childhood and usually at least into adolescence. Many do well after that, although additional surgeries are commonly needed. For many children with HLHS, however, the heart eventually fails, and they then require a heart transplant to survive. Most children on the waiting list for a heart transplant die before they get one.

The article from the Lahey Clinic Ethics Journal asks whether it is ethical for parents, once they have learned all about this complicated and high-risk series of surgeries, to refuse and allow their infant to die. In other words, is the surgical treatment of HLHS so mainstream that doctors should go to court if parents refuse? I know cardiologists who think so, and the author of the article describes such a situation. But I also know several cardiologists who say they would never choose the surgery for their own baby. These are doctors who are in the trenches and know exactly what the Norwood procedure and its subsequent course can mean in suffering for a child. They would not put their own child through that. They feel it is better to allow a baby to die than to subject a child to years of often painful treatments, only to face a high risk of dying as an older child or adolescent.

I don’t know what I would do. I’m too old to have any more children myself, but I could have a grandchild in the future who is born with HLHS. There is no easy answer to this question. Many medical treatments, bone marrow transplant for example, are now standard after years as experimental treatments. Even if surgery for HLHS crosses that murky divide between experimental and standard, there are others that will confront us with the same question.

For HLHS, I agree with the essayist in the article: I think parents should be allowed to refuse the treatment.

Medical ethics: a brief primer

April 4, 2011  |  General  |  1 Comment

Medical ethics is something we deal with frequently in the PICU. It may sound esoteric, but generally it isn’t. Even so, it can be complicated. Complicated or not, it’s also something all of us should know a little about. This is because, in fact, many of us will encounter its issues quite suddenly and unexpectedly with our loved ones, or even ourselves.

So what are the accepted principles of medical ethics? There are four main principles, which on the surface are quite simple. They are these:

1. Beneficence (or, only do good things)
2. Nonmaleficence (or, don’t do bad things)
3. Autonomy (or, the patient decides important things)
4. Justice (or, be fair to everyone)

Beneficence
The first of these principles, beneficence, is the straightforward imperative that whatever we do should, before all else, benefit the patient. At first glance this seems an obvious statement. Why would we do anything that does not help the patient? In reality, we in the PICU are frequently tempted to do (or asked to do by families or other physicians) things that are of marginal or even no benefit to the patient. Common examples include a treatment or a test we think is unlikely to help, but just might.

Nonmaleficence
There is a long tradition in medicine, one encapsulated in the Latin phrase primum non nocere (“first do no harm”), which admonishes physicians to avoid harming their patients. This is the principle of nonmaleficence. Again, this seems obvious. Why would we do anything to harm our patients? But let’s consider the example of tests or treatments we consider long shots — those which probably won’t help, but possibly could. It is one thing when someone asks us to mix an innocuous herbal remedy into a child’s feeding formula. It is quite another when we’re considering giving a child with advanced cancer a highly toxic drug that might treat the cancer, but will certainly cause the child pain and suffering.

Autonomy
Our daily discussions in the PICU about the proper action to take, and particularly about who should decide, often lead us directly to the third key principle of medical ethics, which is autonomy. Autonomy means physicians should respect a patient’s wishes regarding what medical care he or she wants to receive. Years ago patients tended to believe, along with their physicians, that the doctor always knew best. The world has changed since that time, and today patients have become much more involved in decisions regarding their care. This is a good thing. Recent legal decisions have emphasized the principle that patients who are fully competent mentally may choose to ignore medical advice and do (or not do) to their own bodies as they wish.

The issue of autonomy becomes much more complicated for children, or in the situation of an adult who is not able to decide things for himself. Who decides what to do? In the PICU, the principle of autonomy generally applies to the wishes of the family for their child. But what if they want something the doctors believe is wrong or dangerous? What if the family cannot decide what they want for their child? Finally, what if the child does not want what his or her parents want — at what age and to what extent should we honor the child’s wishes? As you can see, the simple issue of autonomy is often not simple at all.

Justice
The fourth key principle of medical ethics, justice, stands somewhat apart from the other three. Justice means physicians are obligated to treat every patient the same, irrespective of age, race, sex, personality, income, or insurance status.

You can see how these ethical principles, at first glance so seemingly straightforward, can weave themselves together into a tangled knot of conflicting opinions and desires. The devil is often in the details. For example, as a practical matter, we often encounter a sort of tug-of-war between the ethical principles of beneficence and nonmaleficence — the imperative to do only helpful things and not do unhelpful ones. This is because everything we do carries some risk. We have different ways of describing the interaction between them, but we often speak of the “risk benefit ratio.” Simply put: Is the expected or potential benefit to the child worth the risk the contemplated test, treatment, or procedure will carry?

The difficult situations, of course, are those painted in shades of grey, and a good number of them are. In spite of that, thinking about how these four principles relate to each other is an excellent way of framing your thought process.

If you are interested in medical ethics, there are many good sites where you can read more. Here is a good site from the University of Washington, here is a link to the President’s Council on Bioethics (which discusses many specific issues), and here is an excellent blog specifically about the issues of end of life care maintained by Thaddeus Pope, a law professor who is expert in the legal ramifications. If you want a really detailed discussion, an excellent standard book is Principles of Biomedical Ethics, by Beauchamp and Childress.