One of the chief dilemmas of healthcare reform is that, without some sort of intervention, increased access will raise costs enormously. That is what happened in Massachusetts, which chose to attack the problem of access first and costs second. When all those uninsured people finally got insurance, there was an explosion of pent-up demand. After all, as a nation we cannot afford what we’re spending now, even with millions of Americans without access to care. If we just provide the access through universal coverage without doing anything else, the cost will bankrupt us in short order.
Michael Porter, in a recent editorial in the New England Journal of Medicine, points out the obvious solution — finally, finally, we need to get good value for our health dollars. We certainly don’t get that now; as a nation we spend far more than any other Western nation for mediocre results. He reminds us that the goal is to take care of people: “Good outcomes that are achieved efficiently are the goal, not the false ‘savings’ from cost shifting and restricted services. Indeed, the only way to truly contain costs in health care is to improve outcomes: in a value-based system, achieving and maintaining good health is inherently less costly than dealing with poor health.” We can save money, but this will come as a byproduct of doing the job right.
For America this truly can be win-win, although there will be losers. The losers will be physicians and hospitals that do too much of the wrong things, because they can and because the present system rewards doing things. These potential losers have powerful friends. The insurance companies, I think, have finally realized that they cannot keep passing increased costs on to their subscribers. No business can survive if it makes its product unaffordable, and healthcare premiums have reached that point.
Wheezing is common in small children — around a third of all children will have an episode of wheezing before they are three years old. Although it’s common, we still don’t quite know the best thing to do about it. The problem is that wheezing, like fever, is a symptom of a disease, not a disease itself. It’s not one thing. Every physician who treats small children in the office, the emergency department, or the pediatric intensive care unit often faces the dilemma of what to do with a wheezing small child.
In such children wheezing is often triggered by a viral illness. When it happens in infants, the culprit is often a virus we call RSV (short for respiratory syncytial virus), which causes a disorder called bronchiolitis. For those children, we know that not much of anything helps the symptoms — all we can do is provide supportive care and wait for the illness to run its course. What about wheezing children who don’t have bronchiolitis? Can anything help them?
The problem facing the doctor is that all the treatments we’ve tried over the years for small children who wheeze are taken from how we handle older children who have chronic, frequent wheezing — what we call asthma. These treatments work for asthma, yet they often don’t work for wheezing that isn’t asthma. A certain number of children who have their first spell of wheezing will go on, over years, to develop true asthma. But most wheezing toddlers won’t progress to asthma — they will have an episode or two (or three) of wheezing and then “grow out of it.” If you bring your infant or toddler to the doctor for a first (or second) episode of wheezing, the doctor has no way of knowing which of these two things will happen. There are a few clues (a family history of asthma, for example, increases the chances of future asthma), but there’s no good way to tell.
How do most doctors handle this problem? Most will try a dose or two of asthma medications (inhaled albuterol and/or budesonide, or oral prednisolone are commonly used) just to see if they help. If the child gets better, the medications can be continued.
My point is that you should understand that for this problem — wheezing in an infant or toddler — your doctor is handicapped by not being able to predict the future. Only time will tell. It’s a frustrating but common medical scenario.
In spite of all its scientific underpinnings, medicine is not really a science; rather, it is an art guided by science. Medical students spend long hours learning about the science of the body, but they really do not become doctors until they have learned the art at the bedside from experienced clinicians. Medical practice is called practice for a reason; we learn it by practicing it in a centuries-old apprenticeship system, which is really what a residency is. As we do so, and again, in spite of the scientific trappings, we imbibe ways of thinking, of talking, and of doing that are as old as Hippocrates. The rest of this chapter will show you that aspect of medicine. Seeing it is fundamental for your understanding of what doctors do and why.
Although physicians learn at the feet of their elders — the experienced practitioners — a young doctor’s peers also heavily influence his training, and through that, his outlook; resident culture is important. Residency is an intense experience that comes at a time in life when most new doctors are relatively young and still evolving their adult characters. In a manner similar to military training, residency throws young people together for lengthy, often emotion-laden duty stints in the hospital. Not surprisingly, and also like military service people, residents often form personal bonds from this shared experience that last for the rest of their lives. Most physicians carry vivid memories from their residency for the duration of their careers.
Recent regulations have limited the maximum number of hours a resident may work each week. These rules came from two sources. One was the common-sense observation that tired residents cannot learn well; the other was concern that exhausted residents are more likely to make mistakes that harm patients.
The mandated maximum of an eighty-hour workweek is still long by any standard, but it had been much longer, and many of today’s doctors (myself included) trained under the old system when 110 hours or more per week was not uncommon, with perhaps the gift of every third Sunday off. My own residency program director told us, intending no irony: “The main problem with being on-call only every other night is that you miss half the interesting patients.” So, like garrulous ex-Marines, doctors swap tales of the time that, although brief in comparison to a lifelong career, was extraordinarily important in forming their professional behavior. Generalizations are tricky, especially when applied to such a diverse group of people as resident physicians. This caveat aside, parents who understand something about resident culture will gain useful insights into why many physicians think and act the way that we do.
Residents have come through a pathway that generally fosters intense competition and that values academic achievement above all else. In recent years, medical schools and residency programs have, to varying degrees, tried to emphasize the importance of more humanistic skills like empathy and compassion, and the specialty of pediatrics has been among the leaders in doing this. However, it remains true that physicians are the products of a system that rewards those who excel at competing with their colleagues over how much information they can learn, remember, and then produce when asked for it by a superior.
Resident culture encourages young doctors to appear and act all-knowing and self-confident even when they are not. This skill is often called “roundsmanship” and is inculcated from early on in their training. Residents get much of their teaching during the time-honored ritual of rounds, in which a team of residents and their supervising physician walk around to their patients’ rooms, pausing at each doorway to discuss the case. The discussion typically begins with the resident presenting the patient’s problem and the resident’s plan to deal with it to the assembled group, following which the supervising physician often grills the resident about the case. Residents adept at roundsmanship are quick thinkers and have rapid recall of pertinent facts. Master roundsmen, however, are best characterized as fearless when clueless—they appear assured and in control of the situation even when they are not.
I am exaggerating a little for effect, of course, but my point is to show you how years and years of this kind of environment affect most doctors to some extent. Such a background can cause doctors to seem defensive when questioned, for example by a parent, because doctors spend their formative years defending what they are doing to both their peers and to their exacting teachers. It can also make it difficult for a doctor to admit he does not know what to do with a patient, since physicians are conditioned to regard that admission as a real defeat. This attitude is encapsulated in the saying, often applied to surgeons but relevant to all physicians: “Seldom wrong, never in doubt.”
So how does one train to be a physician? The first step is to obtain a four-year undergraduate degree at a college or university. This was the most fundamental change in post-Flexner medical training; before that time, many medical schools had little or no requirement for previous education, some not even demanding a high school diploma. Today the prospective physician’s baccalaureate degree can be in anything (mine happens to be in history and religion), but all medical schools require premedical course work in biology, chemistry, biochemistry, physics, and often mathematics. As a result of these science-heavy requirements, most premedical students choose to major in one of the sciences.
The next step is to gain admission to medical school. This traditionally has been a difficult thing to accomplish, although admission statistics for individual schools are hard to interpret because virtually all students apply to several schools, often more than ten. In general, a medical school applicant’s overall chances of being admitted somewhere have fluctuated between 25 and 35 percent over the last several decades. One thing that has changed is the drop-out rate. Fifty years ago, many students did not complete the course; these days, drop-out rates are extremely low.
Medical school generally lasts four years, at the end of which time the graduate is properly addressed as “doctor.” However, the new doctor is one in name only, because no state will allow her to practice medicine independently without further training. Medical licenses are in fact granted by the individual states, and their requirements vary, but all demand at least one year of supervised on-the-job training beyond medical school. Fifty years ago, many physicians stopped their training after doing that single year of training — called an internship — because that was all a physician needed to obtain a medical license and begin working as a general practitioner. These days virtually no one stops after one year, because nearly all physicians require more training just to find a job. You will still hear doctors in their first year out of medical school referred to as interns, but the term does not mean much now.
Medical students receive a standard training curriculum that varies little from school to school; this is enforced by the organization that accredits medical schools. Toward the end of their four years, however, students generally do get some freedom to select courses geared toward the specialty they choose for their residency, the term for the several years of practical training they get after medical school. The usage comes from the fact that medical residents once actually lived in the hospital; these days, even though resident workweeks average eighty hours or so, no one literally lives in the hospital.
Residencies come in the standard broad categories of areas of expertise like internal medicine, pediatrics, surgery, and obstetrics and gynecology, as well as specialties like radiology, neurology, dermatology, and psychiatry. There are in total twenty-four recognized medical specialties, each of which sets its own requirements for residents training in that field. (You can read more about each individual specialty here.) Medical science has expanded sufficiently that a medical student who wishes to specialize in not being a specialist — that is, who wants to take care of all sorts of patients — must do a residency in family practice.
Residency lasts from three to five years after medical school, depending upon the specialty. At the end of training, the resident takes an examination. Passing it makes her “board-certified” in the field; someone who has completed the residency requirement but has not yet passed (or has failed) the examination is called “board-eligible.” Some physicians choose to continue their training even further beyond residency, to subspecialize in things like cardiology, infectious diseases, or hematology.
The person you encounter when you bring your child to her doctor’s appointment has thus spent at least eleven years getting ready to meet you: four years in college, four years in medical school, and three to five years in residency. That person has also spent much of that time being initiated, perhaps indoctrinated, into a culture, a worldview, that is shared by most physicians. It is a culture foreign to that of many nonphysicians. Its attributes come primarily from the way physicians have been trained since Flexner’s reforms of medical education a century ago. Knowing about this time-honored system will help you understand your child’s physician, and understanding improves communication. More about that in later posts.
A couple of conversations I’ve had with patients’ families over the past month have made me realize that many folks don’t know how our system produces a pediatrician, a radiologist, or a surgeon. And a lot of what people know is wrong. Physicians are so immersed in what we do that we forget that the process is a pretty arcane one. Just what are the mechanics of how doctors are trained? Understanding your physician’s educational journey should help you understand what makes him or her tick. As it turns out, a lot of standard physician behavior makes more sense when you know where we came from. This post concerns some important history about that.
Most physicians in the nineteenth century received their medical educations in what were called proprietary medical schools. These were schools started as a business enterprise, often, but not necessarily, by doctors. Anyone could start one, since there were no standards of any sort. The success of the school was not a matter of how good the school was, since that quality was then impossible to define anyway, but of how good those who ran it were at attracting paying students.
There were dozens of proprietary medical schools across America. Chicago alone, for example, had fourteen of them at the beginning of the twentieth century. Since these schools were the private property of their owners, who were usually physicians, the teaching curriculum varied enormously between schools. Virtually all the teachers were practicing physicians who taught part-time. Although being taught by actual practitioners is a good thing, at least for clinical subjects, the academic pedigrees and skills of these teachers varied as widely as the schools — some were excellent, some were terrible, and the majority were somewhere in between.
Whatever the merits of the teachers, students of these schools usually saw and treated their first patient only after they had graduated, because the teaching at these schools consisted nearly exclusively of lectures. Although they might see a demonstration now and then of something practical, in general students sat all day in a room listening to someone tell them about disease rather than showing it to them in actual sick people. There were no laboratories. Indeed, there was no need for them because medicine was taught exclusively as a theoretical construct, and some of its theories dated back to Roman times. It lacked much scientific basis because the necessary science was itself largely unknown at the time.
As the nineteenth century progressed, many of the proprietary schools became affiliated with universities; often several would join to form the medical school of a new state university. The medical school of the University of Minnesota, for example, was established in 1888 when three proprietary schools in Minneapolis merged, with a fourth joining the union some years later. These associations gave medical students some access to aspects of new scientific knowledge, but overall the American medical schools at the beginning of the twentieth century were a hodgepodge of wildly varying quality.
Medical schools were not regulated in any way because medicine itself was largely unregulated. It was not even agreed upon what the practice of medicine actually was; there prevailed at the time among physicians several occasionally overlapping but generally distinct views of what the real causes of disease were. All these views shared a basic fallacy — they regarded a symptom, such as fever, as a disease in itself. Thus they believed relieving the symptom was equivalent to curing the disease.
The fundamental problem was that all these warring medical factions had no idea what really caused most diseases; for example, bacteria were only just being discovered and their role in disease was still largely unknown, although this was rapidly changing. Human physiology — how the body works — was only beginning to be investigated. To America’s sick patients, none of this made much difference, because virtually none of the medical therapies available at the time did much good, and many of the treatments, such as large doses of mercury, were actually highly toxic.
There were then bitter arguments and rivalries among physicians for other reasons besides their warring theories of disease causation. In that era before experimental science, no one viewpoint could definitively prove another wrong. The chief reason for the rancor, however, was that there were more physicians than there was demand for their services. At a time when few people even went to the doctor, the number of physicians practicing primary care (which is what they all did back then) relative to the population was three times what it is today. Competition was tough, so tough that the majority of physicians did not even support themselves through the practice of medicine alone; they had some other occupation as well — quite a difference from today.
In sum, medicine a century ago consisted of an excess of physicians, many of them badly trained, who jealously squabbled with each other as each tried to gain an advantage. Two things changed that medical world into the one we know today: the explosion of scientific knowledge, which finally gave us some insight into how diseases actually behaved in the body, and a revolution in medical education, a revolution wrought by what is known as the Flexner Report.
The Carnegie Foundation commissioned Abraham Flexner to visit all 155 medical schools in America (for comparison, there are only 125 today); his report appeared in 1910. What he found appalled him; only a few passed muster, principally the Johns Hopkins Medical School, which had been established on the model then prevailing in Germany. That model stressed rigorous training in the new biological sciences with hands-on laboratory experience for all medical students, followed by supervised bedside experience caring for actual sick people.
Flexner’s report changed the face of medical education profoundly; eighty-nine of the medical schools he visited closed over the next twenty years, and those remaining structured their curricula into what we have today—a combination of preclinical training in the relevant sciences followed by practical, patient-oriented instruction in clinical medicine. This standard has stood the test of time, meaning the way I was taught in 1974 was essentially unchanged from how my father was taught in 1942.
The advance of medical science had largely stopped the feuding between kinds of doctors; allopathic, homeopathic, and osteopathic schools adopted essentially the same curriculum. (Although the original homeopathic schools, such as Hahnemann in Philadelphia, joined the emerging medical mainstream, homeopathic practice similar to Samuel Hahnemann’s original theories continues to be taught in a number of places.) Osteopathy maintains its own identity: it continues to operate its own schools, of which there are twenty-three in the United States, and to grant its own degree, the Doctor of Osteopathy (DO), rather than the Doctor of Medicine (MD). In virtually all respects, however, and most importantly in the view of state licensing boards, the skills, rights, and privileges of holders of the two degrees are equivalent.
I wrote here about the recent findings that using a cell phone while driving impairs driving ability as much as being legally drunk. It should come as no surprise, then, that not paying attention to a task that demands your concentration is risky when the task itself is dangerous. It’s obvious; I’m surprised someone actually did this study, but they did. The study, in the journal Pediatrics, drove home the implications of this common-sense observation.
The study looked at children (preadolescents) in simulated street-crossing situations. To no one’s surprise, children who were talking or text-messaging on a cell phone while crossing had a higher likelihood of being hit by a (simulated) car. The younger and more generally distractible the child, of course, the more likely this was to happen.
One of the four key principles of standard medical ethics is the principle of autonomy, which I’ve written about here. Autonomy means that patients are in control of their own bodies and make the key decisions about what sort of medical care they will (or will not) receive. For children, this principle means that the child’s parents make these decisions.
There are exceptions, as with all things in medicine. For example, if a child’s physicians believe that the parents’ choice will harm the child, the physician can ask a court to intervene. This is rare, but it happens; I have been involved in a few such cases. But that’s not what I’m writing about now — I’m writing about near-adults, those children who are almost independent, but not quite.
The law generally defines the age of majority, the point at which a child is no longer a child and may decide these things for herself, at age eighteen, although there are variations between states. (The age is younger for so-called emancipated minors — those children who are entirely self-supporting or who are married.) What should we do when such a near-adult and her parents disagree about the treatment the child should get? There have been several recent examples of the variety of things that can happen then.
One case is that of Dennis Lindberg, a fourteen-year-old boy who died from leukemia in 2007. Dennis was a Jehovah’s Witness and, like others in his faith, rejected blood transfusions, even in life-saving situations. It is common for the courts to mandate transfusions in very small children over the objections of Jehovah’s Witness parents. The rationale for this is that a small child is too young to decide himself if he agrees with his parents. Dennis’s doctors went to court to get such an order.
But this case was different — Dennis was not a toddler or small child. He was an aware, articulate young man who understood both the meaning of his illness and the consequences of not getting the transfusion. The court ruled that Dennis had the right to make his own choice, which he did.
His case dramatized a very grey area in medical ethics — when should a young person be able to make these decisions on his own? In my own career I have had several occasions when an adolescent disagreed with the doctors, his parents, or both about what to do. In all those situations everyone eventually came to an understanding. That’s the best outcome, of course, but these will always be ambiguous situations because children mature at differing rates. Some thirteen-year-olds are wiser than seventeen-year-olds. For that matter, some young adolescents are wiser than others who have already attained the magic age beyond which we give them the right to make all these decisions.
If you are interested in these kinds of ethical questions as they relate to children, here is another example of a teen (with the support of his parents) going to court to assert his right to refuse standard therapy for cancer.
I’ve written before about traumatic brain injuries in children. These sorts of injuries are frustratingly common — I’ve just seen several new ones. Although we’ll never eliminate them, there are many ways to reduce both their number and the severity of those that do occur. These ways are well known and extremely low-tech. Since car accidents are the leading cause of such injuries in children, that is where we can really have an effect.
A small child who is not restrained in a car seat is particularly likely to suffer a severe brain injury if involved in an accident, and that accident need not be at highway speeds. These days more and more parents know how to use car seats for their infants and toddlers, and over the past decade I’ve seen fewer and fewer injuries to unrestrained small children.
What I continue to see, however, are teenagers who are out by themselves, away from their parents, and don’t use a seat belt. The result is that they can be ejected from the car on impact, which enormously raises the risk of severe brain injury. I’ve just seen yet another such case.
The most severe injuries come from what we call diffuse axonal injury, or shear injury. This injury results from the brain being jarred suddenly inside the skull, often with a bit of rotational effect. We call it shear injury because the force of impact shears apart the delicate wiring bundles that connect the nerve cells to one another. Most children recover to some extent, but some degree of permanent damage is common.
If you are interested in learning more about traumatic brain injury, the Brain Trauma Foundation is an excellent place to start.
We have known for a long time that poverty is associated with illness. Tiny Tim did not die at the end of Dickens’s A Christmas Carol. The reason he lived was that, just in time, Scrooge had an epiphany and raised the Cratchit family’s standard of living. That Christmas goose brought more than good cheer to the Cratchits — it brought good health, too. Some historical studies, such as those of Thomas McKeown, have linked the long population rise of the past century to improved nutrition. Experts still debate whether this is true, but either way it is old news.
It may be old news, but for today’s Tiny Tims it is very much still current news. The furious debates over what to do about health care reform are often about choice — what choices Americans should have in selecting their health care, what choices doctors should have in providing it, and what choices society has in paying for it. I take care of children, so that is the lens through which I see the issue. And children have no choice at all in this matter, because the family they are randomly born into determines everything, even whether they will live or die. Across America we have constructed what are, in effect, a series of laboratories to test what happens when different sorts of children get severely ill. These laboratories are pediatric intensive care units.
Poor children are far more likely than affluent children to end up in a PICU. The simplest indication of this is to look at the proportion of children in PICUs who are on Medicaid: it is generally at least half, often more. Yet the proportion of children in the general population who are on Medicaid is roughly a third. Why is this? Why are poorer children more likely to become critically ill or injured?
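Before getting to the reasons, it is worth putting a rough number on that over-representation. Here is a back-of-the-envelope calculation using only the approximate figures above (about half of PICU patients on Medicaid versus about a third of all children); the exact proportions, of course, vary from unit to unit and year to year:

\[
\text{relative risk} \;=\; \frac{P(\text{PICU admission}\mid\text{Medicaid})}{P(\text{PICU admission}\mid\text{no Medicaid})}
\;=\; \frac{\tfrac{1}{2}A \,/\, \tfrac{1}{3}N}{\tfrac{1}{2}A \,/\, \tfrac{2}{3}N} \;=\; 2,
\]

where \(N\) is the number of children in the population and \(A\) is the number of PICU admissions. In other words, by these rough figures a child on Medicaid is about twice as likely as other children to end up in a PICU.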
One reason is that pregnant women who are poor are more likely to deliver prematurely, and children born prematurely have a high prevalence of residual medical problems, problems that often lead to future PICU admissions. Thus more premature births mean more PICU admissions. Another reason is that, because of low reimbursement rates for providers, it is often hard for a child on Medicaid even to find a doctor. So children with chronic problems, such as asthma or diabetes, often cannot get the kind of good routine care that would keep them out of the PICU. These reasons are straightforward, ho-hum, so obvious we have become inured to their implications. (Though we should not be.)
If we dig deeper, though, we find other disturbing possibilities. For example, a study by Evans and Kim on the physiological effects of poverty found that poor children have chronically high levels of stress hormones that correlated with the length of time they were in poverty. Adolescents who were recently poor did not show these findings; what mattered most was the duration of poverty. We know childhood poverty is strongly associated with poor health as an adult, and this may be one of the reasons. Even if a poor child somehow later breaks through to affluence, the health effects linger on.
Thankfully, evidence shows that once children on Medicaid who need a PICU get there, they get the same level of care and have the same outcomes as children with private insurance. That is reassuring; poor kids on Medicaid do not get second-class care and have the same risk of mortality as the affluent ones. However, the research uncovered a very disturbing finding — children without any insurance at all were more likely to die. Why? Because they were sicker when they first arrived in the PICU, undoubtedly because their parents feared to bring them to the doctor. Because of our current dysfunctional non-system, the parents waited, and their children died as a result. Personal anecdotes are not research, but I have thirty years of them saying the same thing — uninsured kids are sicker when they get to the PICU. This is entirely predictable. Of course the prospect of a massive, bankruptcy-inducing medical bill makes even the best of parents equivocate and delay when they should not.
It is fair to debate how many adults without health insurance are in that situation owing to their own choices, although I think that argument is a straw man, as is the notion that many homeless adults choose to live in boxes under bridges. But it is not fair to inflict this debate on children, who are stuck with their birth situation. Childhood poverty carries life-long health care risks, but at least Medicaid generally gets the poorest children the care they need. Denying children health care insurance, however, kills them. I find this to be obscene.
Working as a physician in a hospital means being buried with paper — lots of it. A patient’s medical record, the medical chart, is typically a fat three-ring binder that gets fatter by the day for as long as the patient stays in the hospital. Children in the PICU may build up a medical record that weighs more than they do. Old medical records for patients — records that describe their previous hospital stays — are often delivered to the hospital floor from the medical records department in a very full shopping cart. Plowing through these old records can take hours. More importantly, one can miss important things, key nuggets buried deep in the largely unhelpful mass of paper. And, of course, if the patient has had medical experiences at another hospital, those are not even in the chart.
Many believe the answer is an electronic medical record (EMR), with everything stored on a computer. The record can be easily organized and searched for important information. Assuming that systems are standardized (a big if), the record can then be easily portable and travel with the patient on a disk or be sent over the internet.
The whole topic of the EMR is a highly emotional one among physicians. Many like the idea and many absolutely hate it, although even the latter group recognizes that the EMR is inevitable. For hospitals, the start-up costs of implementing an EMR can be huge, and thus far few have done so. A recent survey in the New England Journal of Medicine found that only 1.5 percent of hospitals had implemented an EMR in a comprehensive way, although many had begun putting various portions of one in place. The Obama administration has proposed federal funds to cover part of the costs, but each hospital will still have to spend money upfront to get an EMR system started.
As it happens, I work at one of the few hospitals with a complete EMR. I like it a lot. For PICU practice, the ability to get important data quickly is key to giving good care to critically ill children. I’ve been doing this for thirty years, since long before there were computers on every (or any) desk, and the EMR allows me to do my job better. I look forward to seeing it implemented across the nation.