I swiped this editorial cartoon by Steve Sack from the blog of the redoubtable Dr. David Gorski, who goes by the nom-de-web of Orac. Recent epidemiology shows that reducing the fraction of vaccinated children in the population rather promptly leads to a resurgence of the diseases vaccines protect against. This is the concept of community, or herd, immunity. Epidemiologists debate the concept around the margins, but overall its importance is well accepted. People who deny the effectiveness of vaccines, or even think vaccines are dangerous, don’t accept it, though, and you can find many examples of this around the internet, some sort of reasoned, some not. Although I’m trained in pediatric infectious diseases, a field that includes a lot of epidemiology, I’m not an epidemiologist. So I’m not going to chew on the whole herd immunity thing. I’m going to write about a particular form of concern trolling common among vaccine denialists: the claim that they would fully support vaccines if only vaccines could be shown to be fully safe and effective, using their own special definitions of what that means. In effect they erect an impossible standard to meet, which is of course how concern trolling works.
A common claim is that, although individual vaccines may be safe, the safety of combinations of vaccines given together has never been shown. It’s not always clear what the demand is here, but it often appears to me to mean they want a trial comparing all the many possible combinations of vaccines. If you just do the math on how many vaccine combinations are possible you can see this demand is absurdly impossible to meet. We actually do have ongoing surveillance of vaccine safety happening all the time, and the results show vaccines to be the safest medical procedure we have, with around 1 complication per million doses administered. The only thing safer is homeopathy, which does nothing but harms nothing. Vaccine denialists evade this fact by redefining what a vaccine complication is, counting nearly anything that happens to a person afterwards, even years afterwards, as vaccine-caused. I’ve read posts by many adults who claim, for example, that their fatigue, difficulty concentrating, and “metabolic problems” stem from vaccines they received as children. Even automobile accidents have been blamed as vaccine complications. The absurdity is truly mind boggling.
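To make the impossibility concrete, here is a minimal sketch of the math. This is my own illustration, not from any cited study, and the figure of 16 vaccines is an assumption standing in for a typical childhood schedule:

```python
# Rough illustration: if a schedule contains n vaccines, the number of
# distinct nonempty combinations (ignoring dose timing and order) is 2^n - 1.
def combination_count(n_vaccines: int) -> int:
    """Number of nonempty subsets of n_vaccines."""
    return 2 ** n_vaccines - 1

# Assuming roughly 16 vaccines on a childhood schedule:
print(combination_count(16))  # 65535
```

And that ignores dose timing and ordering, which multiply the possibilities further still; running a separate controlled safety trial for each combination is plainly not feasible.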
And then there is the mythical unicorn vaccine denialists claim is the only acceptable answer: a direct comparison of vaccinated and unvaccinated children, searching for differences in outcomes. Key here is the frequent claim unvaccinated children, on average, are healthier than vaccinated ones. To someone who knows little of clinical research this seems like a perfectly reasonable demand. Just compare vaccinated to unvaccinated children, see who got illnesses or complications and who didn’t. I just saw this one on my Twitter feed today:
Also note this tweet includes the common fallacy that the Amish don’t vaccinate — most do. Anyway, there have been some terrible studies of this sort reported, such as this one, which highlights the fallacy of doing such a simple comparison without controlling for any confounders. Fundamental to any proper study like this is that the two groups being compared, in this case vaccinated and unvaccinated, must differ only in the variable being tested. A common way of handling the question is a case-control study, in which each case is matched with one or more controls that, as best as can be determined, satisfy that requirement. But vaccinated and unvaccinated children, grouped by parent choice, are hopelessly self-selected right out of the gate. There are other methodological issues, such as lack of blinding, but self-selection is the main one.
Well then, as some have actually demanded, we must have a randomized controlled trial (RCT), the gold standard of clinical research. RCTs use random assignment of subjects to one group or the other, in this case vaccine or a placebo (fake vaccine), and ensure that both the subjects and the evaluation team are blinded to who got what. Think about this for a minute. They are demanding parents agree to subject their child to a trial in which they have a 50/50 chance of getting a fake vaccine. All this to satisfy the concerns of vaccine deniers. It would be incredibly unethical to do such a study, and no institutional review board (aka human studies committee) would ever approve such a thing. For such trials there must be reasonable uncertainty about which group is getting the better treatment, and in this case there is none. The bottom line is that any vaccine skeptic who demands proof like this is being massively disingenuous. It’s akin to demanding a randomized controlled trial of parachutes.
The enduring mystery in this perennial chestnut of a topic is that vaccine deniers demand a level of safety and certainty from vaccines that they demand from no other medical procedure or treatment. Absolutely every treatment I can think of is riskier than vaccination. Some are far, far riskier. I suppose it’s partly owing to a visceral resistance to injecting something into a healthy person, but vaccine denial in general has deep, deep historical roots.
Computed tomography, or CT scanning, is one of the most powerful diagnostic tools to emerge during my medical career. Just look at the detail in the brain images above, taken at 90-degree angles through the brain. And I was there at the beginning. I remember well when I was a medical student taking neurology and the first CT scanner arrived at the Mayo Clinic. By today’s standards it was incredibly crude. It displayed a tiny image on a cathode ray tube that was then photographed with a Polaroid camera. Preservative lacquer was then smeared on the photograph and it was pasted into the patient’s chart with glue. But the crude photographs were amazingly superior to what physicians had previously, which was essentially nothing: skull x-rays to look at the bone, and the very painful and very indirect imaging technique called pneumoencephalography. So neurologists and neurosurgeons were ecstatic at the new technology because it allowed them to see the brain directly.
Over the years head CT emerged as pretty much a standard test for evaluating any bonk on the head, particularly if the person clinically had a concussion or especially if they lost consciousness. This is because one of the things a head CT does particularly well is identify brain swelling or bleeding inside the skull. But then some concerns began to arise about the radiation that comes with CT scanning. CT scans deliver at least an order of magnitude more radiation than do ordinary x-rays like chest, arm, or leg x-rays. So this raised the concern that all these head CT scans were contributing to increased cancer risk, a particular concern in children, who have developing brains and their lives ahead of them. It turns out there is a measurable increase in lifetime cancer risk from CT scans. It’s tiny, but it’s measurable. How tiny? About 2 in 10,000 head CT scans. The risk is higher for abdominal CT scans, but these deliver much higher radiation doses. Radiologists recognized this issue and a decade or more ago instituted protocols for children that reduced radiation significantly (the Image Gently program). But the risk is still there. For small children there is often the additional risk of needing sedation to do the scan because the child cannot hold still enough to get a sharp image. The point is that we should use the same risk/benefit calculation when ordering a head CT that we use when ordering any other test. If the risk, however tiny, exceeds the expected benefit we shouldn’t do the test. So if the benefit of a head CT in minor head trauma in children is essentially zero we shouldn’t get the scan. But how do we determine that? To help us with that question various professional organizations have issued guidelines regarding when a head CT is needed to evaluate pediatric head trauma and when it’s not. An interesting recent study investigated how well we are adhering to those guidelines. You can read one commonly used set of guidelines here.
The authors studied the years 2007-13. The guidelines had been recently put in place at the beginning of that period. Their goal was to see whether the guidelines had any effect; they hypothesized that their implementation should result, over time, in a reduction in pediatric head CT scans. They used the enormous National Hospital Ambulatory Medical Care Survey database, a resource that includes information on over 14 million children who visited an emergency department during the 7-year study period with a diagnosis of head trauma. Their question was crude but simple: Did rates of CT scan use for pediatric head trauma change over the study period? The simple answer they found was: no change. The below graph shows the proportion of children who got head CTs over the study period. The points of implementation of various guideline initiatives are noted — Image Gently, PECARN Rules (described in the above reference), and Choosing Wisely. But the line is unchanging within the confidence intervals. I suppose we should not be surprised that most of this excess CT use occurred at community hospitals rather than academic facilities; up-to-date practice would be more expected at the latter.
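To get a rough sense of the stakes, here is a back-of-the-envelope calculation (mine, not the study’s) combining the roughly 2-in-10,000 excess cancer risk per head CT with the scale of this database. The fraction of visits resulting in a scan is an assumption, purely for illustration:

```python
# Back-of-the-envelope: expected excess lifetime cancers from head CTs.
RISK_PER_HEAD_CT = 2 / 10_000  # ~2 excess cancers per 10,000 scans

def expected_excess_cancers(n_scans: int) -> float:
    """Expected excess lifetime cancers attributable to n_scans head CTs."""
    return n_scans * RISK_PER_HEAD_CT

# If, say, one third of the ~14 million head-trauma ED visits led to a
# head CT (an assumed fraction for illustration only):
n_scans = 14_000_000 // 3
print(round(expected_excess_cancers(n_scans)))  # 933
```

However approximate the inputs, a "tiny" per-scan risk multiplied across millions of scans is why unnecessary scanning matters at the population level.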
So what does this mean? An accompanying editorial to the above study considers the implications.
It is disappointing that US children have generally not benefited from current best practice research and continue to experience unnecessary radiation exposure. This is a reminder that pediatric research and education efforts are frequently not focused where most US children receive their medical care. . . . A recent study of a community ED revealed that a maintenance of certification program sponsored by a children’s hospital was associated with lowered CT scan use from 29% to 17%.
Most discussions of this sort bring up defensive medicine, that is, doing things out of a fear of lawsuits. However, adherence to nationally recognized best practice guidelines is a pretty solid defense against later claims of negligence. In this case, it’s not at all inconceivable that not following best practice guidelines actually puts a physician at risk of being sued.
I first posted about this subject a couple of years ago but it’s so fascinating to me I’m writing about it again. I happened to run across this study containing some amazing information. It’s from a publication called The Journal of Voice. The link is to the abstract — the complete article is behind a paywall but I can get it for anybody who’s interested in reading the whole study in detail. Its title is “Fundamental frequency variation in crying of Mandarin and German neonates.” I have always assumed, like most people I suspect, that babies cry the same the world over. When they’re uncomfortable or hungry they let us know by crying. It turns out this may not be the case. If so, then language development is pushed to the very first days of life — even before that, perhaps. There is actually previous work of a similar nature that studies what the authors termed the melody of an infant’s cry and how it varies with the mother’s native language. It’s long been known that from birth infants have a particular recognition of their mother’s voice, something that appears to be associated with the melodic contours of how the mother modulates her voice.
Some languages are tonal. This means the pitch of the speaker’s voice affects the meaning of the words; the same sound can mean something entirely different depending upon the pitch. Mandarin Chinese, spoken by over a billion people, is such a language. There are four pitches that must be mastered to speak Mandarin. I have a friend who spent three years in China and really struggled with this. I don’t recall all the details, but she told me, as one example, that the word for “fish” means something entirely different when uttered in a different pitch. There is a language spoken in Cameroon that is even more complicated, sound-wise. This language has eight different pitches that affect word meaning, and there are further modulations in pitch that complicate things even more.
The particular study cited above was a collaboration between investigators in China and Germany. German is not a tonal language, of course. The investigators recorded the vocalizations of 102 newborn infants, examining in particular pitch, fluctuation, and range. The results, in the words of the lead author, were clear:
The crying of neonates whose mothers speak a tonal language is characterized by a significantly higher melodic variation as compared to – for example – German neonates.
It’s even more interesting when you consider that these are newborns. They’ve only been out of the womb a couple of days. So the clear implication is that they heard their mothers speaking while still in utero and acquired patterns of vocalization that they began to use immediately after they were born. That’s quite amazing, I think. It has implications even for those mothers who don’t speak tonal languages: your baby can hear what you’re saying, especially during the last trimester of pregnancy, so be mindful of your tone. Maybe if you are angry and yell a lot your infant may actually be affected by that, and not in a good way. Something to think about.
The stethoscope. Nothing says “I’m a doctor” more than the stethoscope in a pocket or draped around the neck. Forty-five years ago when I got my first one, a gift from my physician-father, the former was more common. Then we were more likely to wear coats — white coats or suit coats — and pockets were available. I had suit coats in which the lining was worn out from the weight of the thing, and at the Mayo Clinic back then, perhaps still, the sartorial police didn’t allow white coats. Of course medicine was overwhelmingly male then, so suit coats were the norm. These days you’re much more likely to see them draped around the neck like the guy in the picture above. Back then we did put them around our necks sometimes, but that required the springy metal arms to be around your neck, like this guy.
I found that tended to give me a headache, I think because it partly occluded blood flow down the jugular veins in my neck. Interesting to me is that the newer fashion statement of draping it around the neck, as in picture #1, is made possible because the tubing on today’s stethoscopes is much longer. The longer tubing also makes wearing it like the guy in image #2 cumbersome, because the end of it bangs on your belt rather than the middle of your chest.
Materials of course have evolved as well. My first one was made of steel with rubber tubing; today’s versions are mostly lightweight plastic. Mine was of the style called Rappaport-Sprague, named after its inventors, I assume. The very best model was sold by Hewlett-Packard, which, alas, no longer makes it. I wish I could get one again because the one my father gave me was stolen from the hospital doctors’ lounge by some evil weasel. I saw the one pictured below offered on eBay for $900, so I guess other folks share my nostalgia. It looks just like the one I had. I have to say I think the old rubber tubing gave much better sound transmission than today’s plastic stuff.
Notice the shorter tubing. If you draped the tubing of one of the older versions around your neck, the thing could fall to the floor when you moved if you weren’t careful. In contrast, the newer styles have long tubing made for neck draping. The plastic tubing versions made 50 years ago, and there were some, also had shorter tubing; now the tubing is much longer, like the one sported by the guy in image #1. So let’s focus on the interesting matter of tubing length, because I think it tells us something about the sociology of medicine. It requires that we consider for a moment the origin of the stethoscope and the conventions of how physicians examined patients.
Until late in the 19th century it was common for physicians not to examine patients at all; they would often render a diagnosis based on the patient’s story alone. The notion that touching and examining the patient is useful came with the growth of scientific medicine, as well as some changes in social conventions. The stethoscope as a tool to listen to the internal organs was invented by Laennec, who devised a wooden tube like the one below. The traditional story is that he first used a tube made of rolled paper, purportedly because the patient was a woman and it was ungentlemanly for him to touch her or place his ear to her chest.
My first stethoscope with its shorter tubing made auscultation a more intimate act than today’s version does; you had to lean in closer to the patient. I think that’s a socially significant difference. Some might say the longer tubing is there because it makes taking the blood pressure easier, but I don’t buy that. For one thing, blood pressure measurement works just fine with shorter tubing. For another, these days blood pressure is generally taken with an automatic machine rather than manually. In a real sense we’re now more distant from our patients.
The stethoscope itself is becoming less and less used in practice. I still find it very useful in the PICU every day, but it’s not unusual to find on daily rounds that a student or resident hasn’t used one in their daily examination of a patient. I think one thing that keeps it going is not so much its function as a tool but rather its role as a key part of a physician’s regalia, a badge that proclaims status. Just look at all the stock internet images of people posing with one prominently displayed. But like the white coat, which once indicated physician status but is now worn by everyone, you’ll find stethoscopes hanging on necks everywhere. What I see now is an interesting sort of countertrend; many senior physicians no longer carry one. When they want one they just borrow one from the dozens available around the PICU.
You can read another take on these trends in this essay from The Guardian. It points out how several aspects of medical practice we take for granted, including the use of the stethoscope, were fostered by the social changes of the French Revolution. I recommend it.
I would think by now that I wouldn’t have to write anything about the importance of child car seats. But I find I do, because as I drive I still see adults holding babies and toddlers over their shoulders, often while sitting in the front seat. This has been illegal in most places for many years, but it is still common and it is still stupid and dangerous. I also still see the results: several children each year come through the PICU who were unrestrained passengers in a car accident, and a few of them die. A recent CDC study found that around 600,000 children ride unrestrained at least once in a given year. Interestingly, and not surprisingly, children riding unrestrained are often in vehicles in which the driver is also not wearing a seat belt.
Here are some recent statistics on car seats and motor vehicle accidents. In 2015 nearly 59,000 children under the age of 5 were injured in motor vehicle accidents, 8% of them seriously, and 471 of them, just under 1%, died. Significantly, over one third of the children who died were unrestrained.
Most of us have been lectured about these things, but I have found many parents have difficulty understanding notions of statistical risk. For example, one study showed 72% of parents were seriously afraid their child would be abducted by a stranger. That is, I suppose, a legitimate fear, but it is not very likely to happen; in fact, it is vanishingly unlikely. It is only one-fourth as likely as your being struck by lightning.
My point is that parents should do what they can to reduce the chances of their child suffering harm: by all means tell your child about what to do when approached by strangers, but also please buckle them into a car seat, preferably in the back seat, when you drive anywhere with them, even a short distance.
You can find an excellent overview of all manner of car seats and how to use them here.
I’m being sarcastic, of course, but that’s often how it seems some days. Those are days when I’ve been busy at patients’ bedsides all day and then struggle to get my documentation done later, typically many hours later. I jot notes to myself as I go along, but it can be hard to recall at 5 PM just what I did and why at 8 AM.
It used to be very much the other way, and that wasn’t always a good thing either. Years ago I spent months going through patient charts from the era of 1920-1950. They were all paper, of course, and the hospital charts were remarkably thin, even for complicated patients. I recall one chart in particular. It was for a young child who was clearly deathly ill. The physician progress notes for her already prolonged stay in the hospital consisted of maybe 2 sheets of paper. Most of the daily notes were a single line. I could tell from the graphs of the child’s vital signs — temperature, pulse, breathing rates, and blood pressure — that one night in particular was nearly fatal. The note the next morning was written by a very famous and distinguished physician. I knew him in his retirement and he was a very loquacious man in person. His note after the child’s bad night was this: “mustard plaster did not work.” If I were caring for a patient like that today there would be, just for that day and night, multiple entries probably totaling several pages on the computer screen.
Patient charts are burdened with several purposes that don’t always work together. The modern medical record as we know it was invented by Dr. Henry Plummer of the Mayo Clinic in the first decade of the twentieth century. Up until that time each physician kept his (only rarely her) case notes essentially as notes to himself. When the multi-specialty group appeared, and Mayo was among the first, the notion of each physician having separate records for the same patient made no sense; it was far more logical to have a single record that traveled from physician to physician with the patient. That concept meant the medical record was now a means for one physician to communicate with another. So progress notes were sort of letters to your colleagues. You needed to explain what you were thinking and why. Even today’s electronic medical records are intended to do this, although they do it less and less well.
Now, however, the record is also the principal way physicians document what they did so they can get paid for it. Patient care is not at all part of that consideration. The record is also the main source for defending what you did, say in court, if you are challenged or sued. The result is that documentation, doctors entering things in the record, has eaten more and more of our time. Patients and families know this well and the chorus of complaints over it is rising. Doctors may only rarely make eye contact these days as they stare at a computer screen and type or click boxes. But we don’t have much choice if we are to get the crucial documentation done. That’s how we (and our hospitals) are paid and payers are demanding more and more complex and arcane documentation. I don’t know what the answer is, but I do think we are approaching a breaking point. We are supposed to see as many patients as we can. But the rate-limiting step is documentation.
To some extent we brought this on ourselves. In our fee-for-service system physicians once more or less said to payers: “We did this — trust us, we did it — now pay us for it.” I can’t think of a formula more guaranteed to cause over-utilization or even outright fraud. But there is only so much time in the day. In my world an ever smaller proportion of it is spent actually with the patient.
Anyone who has worked in medicine for a long time well understands the power of the statement coming from an experienced person: “This kid looks sick.” That person could be a physician or nurse. Years of experience do tend to give one a sort of sixth sense for when to worry that something serious is going on that just hasn’t shown itself fully yet. Seasoned parents can often provide the same perspective. A fascinating recent article pertaining to this appeared in Critical Care Medicine, the journal of the Society of Critical Care Medicine, entitled “What faces reveal: a novel method to identify patients at risk of deterioration using facial expressions.” It suggests an empiric approach to studying just how this phenomenon may work. It’s not about children, but the findings could easily apply to pediatric patients.
The authors include experts in empirical evaluation of facial expressions, broken down into something called “action units.” This is a scientific field I have to say I had no idea even existed. They used video recordings of 34 patients identified by nursing as potentially, but not yet, deteriorating clinically. The patients were then followed over time to identify those who ended up in intensive care for deterioration and what their faces were doing just before that. They also used a standard measure of deterioration in the UK termed the National Early Warning Score. This is based on objective measures such as heart rate, respiratory rate, level of consciousness, and other things that can be measured. The video recordings were analyzed by observers trained in this sort of thing, but blinded to who deteriorated and who didn’t, to see if subtle facial signs predicted deterioration. You can look at the paper for the minute details, but some of the most useful distinguishing features were overall head position and what the person was doing with their eyes. I sure have seen that aspect in action. For example, a very useful observation when evaluating a child with respiratory distress is to look into their eyes: Are they paying attention to anything besides breathing? Can you distract them?
The authors provide some visual illustrations of what they are talking about, including this famous painting (the AU categories are some of their analytical tools):
Painters have been capturing face expressions since antiquity. The painting “The Dead Christ Mourned” by Annibale Carracci (1560-1609) is striking in its composition. Carracci showed the same facial expression in the dead Christ and Madonna, clearly displaying . . . AU 15 (lip corner depression), AU 43 (eye closure), AU 51 (lateral position of head), and AU 25 (lips parted).
The authors think their methods might be incorporated into standard evaluation systems. Maybe. What I think is that their work validates what we have known for years. When experienced clinicians look at a patient, they unconsciously incorporate into their assessment what they have gleaned after years of looking at sick people and what happens to them.
Here’s another interesting example. Separating the very ill and liable to deteriorate from the not-so-sick is a perennial challenge in the emergency department setting, particularly in pre-verbal children. Untold numbers of research studies have tried to come up with something, anything, perhaps some blood test, that could help in this sifting process. Not surprisingly, it turns out the most useful measure for children is for the most experienced person in the room to say: “That kid looks sick.” When you hear that, believe it.
Anyway, I find this work fascinating as an example of how cross-disciplinary research can work, and I applaud whichever author first thought of it. I believe the article is behind a paywall; if anyone can’t get access to it and wants a copy, let me know via the contact form on my homepage.
(I posted a version of this little essay some years ago at the request of Maggie Mahar, but I think it’s an important issue that’s worth dusting off and putting out there again.)
We want competent physicians, but we also want compassionate ones. How do we get them? Is it nature or is it nurture? Is it more important to search out more compassionate students, or should we instill compassion somehow in the ones we start along the training pipeline? I think the answer lies in nurturing what nature has already put there.
My background is in pediatric critical care, which I have practiced for thirty-five years. Throughout most of my career I have taught medical students, residents, and fellows. So I have seen young physicians as they made their way as best they could through the long training process. I also served on a medical school admissions committee for some years and interviewed many prospective students, so I have had the opportunity to see and speak with them before the medical education system even got hold of them. I think the main principle to keep before us is not so much to figure out a way to teach compassion, but rather to devise ways such that the training process does not reduce, or even extinguish, the innate compassion all humans have toward one another. Unfortunately, our current way of doing things does not do a very good job at that task. But I do not think our present state of affairs is anyone’s fault. We are hobbled by our success. Some historical background is helpful, I think, to explain what I mean.
When my grandfather graduated from medical school in 1901 he had only a few tools to help the sick. He could do useful things to help injuries mend. He had the newly discovered techniques of aseptic surgery, as well as ether to allow him to do it painlessly. Other than that, though, he did not have much – narcotics to relieve pain, powdered digitalis leaf to help a failing heart, and a few other things. Mostly, though, he had a bagful of useless nostrums. Some of them were even harmful. Because he had little to offer, compassion figured prominently in whatever therapy he did. It had to.
When my father graduated from the same medical school in 1944, things were better. Surgery had advanced further from his father’s day, although only brave surgeons entered the chest cavity. There was sulfa, and penicillin soon became available, working miracles with previously deadly infections. Streptomycin and later drugs made the scourge of tuberculosis treatable. He soon had some drugs to treat high blood pressure, which by then had killed his father, plus a rapidly enlarging stock of other useful drugs to put in the black bag he took on house calls. But there were still many things for which he could do nothing. For a heart attack he gave some morphine to take away the pain and then waited to see what happened. If a cancer could not be removed surgically, he had nothing to offer. Although my father’s black bag held more than his father’s had contained, compassion was still a crucial part of my father’s armamentarium. As for his father, it had to be.
I graduated from medical school in 1978. If scientific medicine was just spreading its wings during my father’s training, I experienced it in full flight. By then our medical-industrial complex had rolled out nearly all of the varieties of therapies we have still, although of course we have polished and improved them. What has happened, I think, is not that we have become less compassionate on purpose, but that we came to act as if we no longer needed the compassion of my father or my grandfather’s era, now that we had so many really useful and exciting therapies to offer.
I also think one other historical change is key to understanding how our young doctors react to the experience of seeing death and dying. In my grandfather’s era, it was an unusual person, even an unusual child, who had not personally seen someone die. Children and young adults saw how those around them behaved and reacted to death. If they became doctors, both they and their patients had shared this common experience, so both knew how to act. I saw death for the first time when I was sixteen on my very first day working as an orderly in our local hospital. I was giving a bath to an old man; he looked at me oddly, and then he was dead. None of my friends or schoolmates had ever seen such a thing. I still recall it vividly. I also remember well how helpful the nurses, all women in their fifties or sixties, were to me afterwards. I watched them wash the body, a once sacramental task now largely done by nurses in hospitals instead of families in their homes. They were respectful, but matter-of-fact as they went about it. After all, it was a natural thing.
I think compassion for others is innate in all of us, although it is stronger in some than in others. All of us possess an inner light. Perhaps that opinion makes my theology show, but I think it is fair to say our medical school selection process already skews toward selecting students more compassionate than the average person. We need to encourage that quality, certainly, but that is not the key issue in my mind; mainly we need to prevent medical training from driving it into the background, belittling it, or even snuffing it out. So I do not think we need so much to ponder how to teach compassion as we need to find ways of letting students’ natural humanity shine through. For medical educators, that would seem to me to be good news. Framed that way, it ought to be doable – but how?
There are many things in medicine that can be taught with the old “see one, do one, teach one” model that those of us older than fifty remember. We also remember never seeing a faculty attending physician in the hospital at night, because, after sundown, the place belonged to the residents. Even during the day, attending physicians were more likely to be found in their offices or their research laboratories than out and about on the wards. I learned how to intubate a baby and place an umbilical artery catheter from my senior resident, who had learned the year before from her senior resident. But my senior resident was not much help when a premature baby died; she was as much at sea as I was. All she had learned about that from her senior resident was to cultivate a sort of hard-boiled persona. We aspired to it partly because it gave us a mental escape hatch in those situations. But mainly it was because nobody showed us any other way.
How to show that other way? In my mind, there is no substitute for senior, seasoned physicians demonstrating, in the moment, how to let out our own innate empathy and compassion. Good, experienced physicians are comfortable admitting their medical ignorance and failures to families; nothing terrifies residents more than that. When they see it in action, students and residents respond with a version of: "That's why I became a doctor." Structurally, medical education has already made great strides in the right direction. We now have rules for resident supervision that involve much more oversight, even at night, than I ever had. This was done mostly for patient safety, I think, with improved teaching as a secondary and largely unintended benefit.
So the opportunities are there – we just need to implement them better. For example, after an unsuccessful resuscitation and a death, the folks with the grey hair should spend as much time discussing with students and residents the psychic dimensions of the death as they do the sequence of medical decisions. Most of my colleagues already do that to varying degrees, but it should be an expectation.
We should never again send a resident, alone and emotionally at sea, to comfort a grieving family without backup. We do not do that for complicated invasive procedures; we should not do it for this other, equally important task either. Certainly some organized instruction – seminars, discussion groups, lectures and the like – can be part of the process. But the training curriculum is already stuffed with subjects. Taking residents by the hand and leading them through these experiences does not require another fat syllabus. It only takes a little time. If we want to foster compassion in our students we should ourselves show them compassion for the situations we put them in. We should let their innate, inner compassion and empathy find an outlet and breathe free.
The recent prominence of the MeToo movement has shone a light on many places in our society where insidious or even obvious sexism against women has long gone unremarked. Even when noticed, it is often just shrugged off as the way things are. In recognition of this, Time Magazine named the MeToo movement its Person of the Year for 2017. Medicine is no exception to this pervasive problem. A very interesting recent essay in the New England Journal of Medicine examines why this is and what we could do about it.
It's well documented that women are vastly underrepresented in leadership positions in medicine, such as full professorships and department chairs. This is in spite of the fact that the proportion of women to men in medical schools is roughly equal and has been for over 15 years. Last year the number of women admitted to medical school even slightly exceeded the number of men. This graph shows the trends over the last 50 years.
In spite of the steadily increasing proportion of women in medicine, the culture of medicine has not caught up. One can certainly postulate that the number of women in leadership positions will rise on its own, since these positions are typically held by physicians at mid-career or beyond; it may simply take time to produce women physicians with sufficient quantities of grey hair. But I'm not so sure about that. Note from the graph that the number of women has been close to that of men for nearly 2 decades. In my own field of pediatrics, women have made up at least half of physicians for decades, and for the last decade or so women pediatricians have actually outnumbered men. So if it were just a matter of time in rank, women should have caught up, at least in pediatrics. Yet this hasn't really happened. Why? One thing observers point to is that women are more likely to interrupt their careers for child-bearing and other family reasons. At least in academic medicine, such pauses can be huge setbacks. My answer to that is: so what? Change expectations of what an academic medical career means. That would actually be a good thing. Along with the author of the essay, I think the answer runs deeper; women physicians are simply not respected to the same degree as their male colleagues, not by the medical system and apparently not by the public. That's how deeply the sexism is ingrained. The essayist offers an example of this phenomenon.
A recent study of speaker introductions at internal medicine grand rounds revealed that even when women are acknowledged as physicians, they are more likely than men to be introduced informally: women were referred to by their professional titles 49% of the time, as compared with 72% for male speakers. This finding has important implications. Calling women by first names in a setting in which men are referred to by formal, professional titles is a tacit acknowledgment that women are perceived as less important, even as their contributions are publicly recognized during grand rounds.
I've been practicing medicine for 40 years now and have long noticed that women physicians are far more likely to be addressed by their first names, even by those who rank below them in the hierarchy. Of course the fact that the majority of nurses continue to be women can confuse patients who make assumptions. Yet this occurs constantly in spite of today's large and obvious name badges and prominent labels on coats identifying women physicians. We cannot change patients' attitudes much, although I gently correct them when they make this mistake. But we can change our own behavior. We can also give equal pay for equal work: it is well documented that women physicians make significantly less money than men do for doing the same thing.
There is another fascinating aspect to this issue. Some research suggests that women physicians provide better overall care, possibly because they are more likely to adhere to evidence-based medicine standards. Some observers add to that explanation the higher likelihood of women physicians to work collaboratively with the rest of the care team. One such study examined 30-day hospital readmission and mortality rates for a large number of Medicare patients. The differences in patient outcomes between women and men physicians were significant and persisted across multiple disease categories. That's pretty strong stuff.
The same issue of the New England Journal also provided a vignette of one of the most famous women physicians, Dr. Helen Taussig. Dr. Taussig more or less invented the specialty of pediatric cardiology, and her name remains attached (with second billing!) to a common pediatric cardiac surgical procedure, the Blalock-Taussig shunt. The essay author wonders:
Since that time, how many Helen Taussigs have we lost to discrimination, harassment, and marginalization? And how many more will we lose if things don’t change?
Forty years ago I was fortunate to have been trained by 2 extremely gifted women who took different approaches to the obstacles they faced. Both possessed spines of steel and they needed them. My fellowship mentor overcame first polio and then the grinding annoyance of belittlement at an extremely stodgy medical center, one actually renowned for its male stodginess. Her progression to full professor was inordinately delayed. She was often assumed to be some sort of social worker. Because she covered several clinical services it was her habit to wear her various pagers on a cord around her neck. Incredibly, I met one physician who assumed she was “some kind of beeper repair lady.” She was a perpetual winner in the resident polls for teacher of the year; the department chair finally told the residents they had to select someone else for a change. And, of course, her patients adored her. The higher-ups . . . not so much, as the kids say today. She was known to seek them out in their comfortable lairs and make them less comfortable by confronting them in her calm yet firm way. Another of my mentors took a quite different approach. She was one of the giants of pediatrics and was among the founders of neonatology. No one messed with her because she met sexism head on, wielding a figurative 2 by 4 that she used to whack, among others, the chief of surgery on occasion. When necessary she could swear like a sailor. Tough doesn’t even begin to describe her. She succeeded and thousands of premature babies benefited.
These women adopted very different strategies for dealing with sexism. And, as was said of Senator Elizabeth Warren, they persisted. But the thing is, it need not have been that way. That's the point.
Vaccines have been hailed by virtually all medical experts, as well as medical historians, as among the greatest triumphs of public health of the past two centuries. Yet ever since Jenner first proposed vaccination for smallpox using the vaccinia, or cowpox, virus, there have been both skeptics of its effectiveness and people who thought it dangerous. That is, they had the risk/benefit ratio of vaccination exactly backwards, believing the risk high and the benefit low. They also often ridiculed the entire procedure, even from the beginning, as this 18th century cartoon shows (Jenner is the fat gent kneeling by the cow).
Against this constant background of vaccine denial, things changed two decades ago when Andrew Wakefield published his now notorious claim of an association between the MMR (measles, mumps, and rubella) vaccine and autism. The claim has not only been soundly refuted in a large number of well-controlled population studies, but Wakefield himself has been stripped of his UK medical license for unethical and fraudulent practices related to his publication. The paper itself was retracted by the journal Lancet, an extraordinary thing. Wakefield left the UK and moved to Texas. But the damage had been done. Vaccine hesitancy increased, not just for MMR but for all vaccines. The rise of social media, particularly Facebook and Twitter, appears to have amplified the effect. But has it? We know there is more anti-vaccine noise, but has it resulted in decreased uptake of vaccines? Most importantly, has it led to an increase in vaccine-preventable diseases?
Measles offers a good example to examine, not only because it featured in Wakefield's original claim, but because measles is highly infectious, with a high attack rate among susceptibles, and the vaccine is highly protective. In the pre-vaccine era the attack rate for measles was at least 95%, and most people had had it by early adulthood. It is not a trivial illness; the death rate is around 1 per 1,000 cases, and there is a substantial risk of complications and life-long disability. It does seem clear from epidemiological work that decreased vaccine uptake has been linked to measles outbreaks in Europe and the USA. But are these isolated pockets in the population or part of a larger trend?
Before examining whether these thankfully still isolated instances represent some broader trend, it's worth looking closer at vaccine denial. A key problem is that we don't know if such denial is more common now than in the past, or if today's media environment has just made it noticeably noisier. An interesting study from the UK examines the profile of the typical vaccine denialist. Recurring themes found in surveys of such people are belief in multiple conspiracy theories, suspicion of authority, and feelings of disillusionment and powerlessness. Particularly interesting to me was the first of these. In general, conspiracy theories are attempts to explain events as the secret acts of powerful, malevolent forces. For people who believe these things, particularly those who participate in social media, it is a simple, even natural thing to add vaccine denial to their stock of other conspiracy theories. There is also often a broader animus toward mainstream medical practice.
So, to address the question in my title: what do we know about whether vaccine denialism has affected overall vaccination rates in the USA? I'm pleased to note that recent CDC reports covering at least the past five years indicate not much has changed. There are definitely pockets of low rates, and it's striking how measles finds those places where herd immunity has dropped low enough to allow disease to break out. This is an object lesson for all of us. The figures are compiled by the CDC from the individual states' vaccination records. Here is what they found about vaccine uptake for MMR, DTaP (diphtheria, tetanus, and pertussis), and varicella vaccines among children entering kindergarten.
During the 2016–17 school year, kindergarten vaccination coverage for MMR, DTaP, and varicella vaccine each approached 95%, and the median exemption rate among children attending kindergarten was 2%; these rates have been relatively consistent since the 2011–12 school year.
The legal principle that the state may compel vaccination as a condition of attending public school, for the safety of other children, was established over a century ago. All states allow some exemptions, although they vary in the specific categories allowed. The number of children with some sort of exemption from vaccination has been steady, as the CDC notes. There are medical reasons for a child not to receive vaccines, but most exemptions are for religious or philosophical reasons as determined by the parents. California recently caused quite a stir in the vaccine denialist world by eliminating the philosophical exemption for children attending public school. Opponents filed a spate of lawsuits against the state, all of which have been rejected.
When I started reading about this subject I was discouraged by the headlines from Europe and California. But the extensive CDC compilations remind us that, in spite of all the Sturm und Drang in social and other media, the overwhelming majority of Americans support vaccination.