The explosion of narcotic abuse over the past decade or so has led to a marked increase in the death rate from overdoses. More people taking narcotics inevitably means some of them are pregnant women, and the drugs cross the placenta to affect their babies. When someone addicted to narcotics suddenly stops taking them, the result is acute narcotic withdrawal syndrome, a painful and even dangerous condition. This is what happens when a baby is born to a mother who has taken significant amounts of narcotics during pregnancy: the drug supply is cut off when the umbilical cord is cut. Within a day or so the infant experiences acute withdrawal symptoms, termed neonatal abstinence syndrome (NAS). The incidence of NAS has risen dramatically, by over 300% in the last decade according to the CDC, and it varies widely by state. West Virginia has the highest rate, affecting 3.3% of all births, closely followed by Maine and Vermont, which also have rates higher than 3%; Hawaii has the lowest. Those are very disturbing statistics.
The symptoms of NAS are variable and depend to some degree on the extent of the mother’s drug habit, and therefore the doses the infant was exposed to before birth. These symptoms include irritability, a peculiar high-pitched cry, tremulousness, difficulty sleeping, sweating, poor feeding, vomiting, and diarrhea. Not all infants have all the symptoms; most of them have several, though. We can confirm the diagnosis with tests on the infant (or mother) looking for opioids. Infants with mild symptoms often do not require specific therapy. If the symptoms are more severe we give the infant sufficient narcotic to relieve the acute symptoms and then gradually reduce the dose over time to wean the child’s brain from the need for the drug.
A recent study highlights another aspect of the drug epidemic. It indicates that, whereas 5-7 years ago the increase was uniform between urban and rural parts of the country, since then rural America has been much harder hit. The graph above is from that study. It also shows the related finding, as expected, that maternal opioid use parallels the increases in NAS. The observation that the raw numbers are very similar points out another thing: women who use opioids during pregnancy have a very high risk of giving birth to an infant afflicted with NAS. Some studies suggest the risk is well over 90%.
Much has been reported in the media about the epidemic, and it is an epidemic, of drug abuse washing over rural America, particularly West Virginia and Maine. This comes at a high cost to innocent babies. One area of particular concern is that we don’t know at this time if NAS affects a child’s brain permanently. It would not surprise me at all if it did.
That’s the provocative title of a recent article in Pediatrics, the journal of the American Academy of Pediatrics. The background to this controversy is the increasing recognition that chronic traumatic encephalopathy (CTE), a severe and debilitating brain problem first identified in former professional football players, can have its beginnings in college or even high school football players. That’s deeply concerning.
The article takes the form of this hypothetical scenario:
A primary care pediatrician in a small town receives a phone call from the local school board asking her if she will come speak to them about the benefits and the risks of football for high school students. The board is worried because, during the recent high school football season, a newspaper reported about the experiences of 3 young men who sustained concussions while playing football.
Four experts are then asked to comment on the scenario. The first expert is a pediatrician and an epidemiologist. It’s interesting that he, the first one in the discussion, chooses to evaluate the question in light of the principles of medical ethics. These include primum non nocere (first do no harm), beneficence (strive to do good), autonomy (people have a right to decide what to do with their bodies), and justice. Regarding the first of these, he states we already have ample evidence football can cause severe harm.
Knowing that football causes more harm to the brain than any other sport and yet encouraging participation while we await the results of more rigorous research violates the principle of do no harm.
This expert also suggests football violates the principle of beneficence, in that the harm it does outweighs whatever positive attributes it provides.
Football may provide benefits in character building, teamwork, and physical fitness, but given the potential long-term devastating harm, football does not meet the criterion of beneficence, because it is unknown whether the benefits exceed the risks.
But what about a person’s right to autonomy in decision-making? The expert concludes, in spite of parents signing waivers, football programs as currently constructed do not meet the fairly strict criteria for informed choice we use in medicine. Those are, among other things, that patients are informed of the known risks of things, in this case brain injury.
As is apparent from a recent report on tackling and football, pediatricians do not have the knowledge required to help parents decide whether the potential health risks of sustaining these injuries (football-related) are outweighed by the recreational benefits associated with proper tackling.
Finally, the expert points to the ethical principle of justice. He concludes football, as currently played, violates this principle owing to, among other things, the huge preponderance of African-American players.
This means that African-American football players face a disproportionate exposure to the risk of concussions and their consequences. Out of fairness, the community must ask whether those who participate in football are already at increased risk of poor health due to social determinants, such as race/ethnicity and socioeconomic status. This is not to make the patently unfair suggestion that African-American males or others at risk should be singled out from participation in football, but rather to acknowledge that acquiescence toward the risks of football disproportionately affects these populations.
Abolish high school football, says expert number one. Pretty strong stuff, and I suspect most would regard it as highly controversial.
The second expert is a sports medicine physician. He notes football has always been a dangerous sport and has been regarded as such for nearly a century. He also notes that, according to the CDC, concussions are now ten times more common than a decade ago. A large component of this increase is probably increased awareness, however. This expert describes the paucity of data about the issue in children, which is partly true. He also notes football’s popularity and that concussions occur in other sports, which is also true. But his conclusion, offering the standard “we need more research” dodge beloved of climate change deniers and vaccine skeptics, seems pretty feeble to me. Yes, we always need more research. But we already have quite a bit on this topic, with more appearing constantly.
The third expert is also a sports medicine physician. He also points out concussions occur in other sports, sometimes even at higher rates. Like the second expert, he also cites the character- and teamwork-building aspects of sports participation, an argument that has always struck me as more an article of faith than of substantive proof. He advances the paradoxical argument that concern over concussions has gone too far, and that, managed appropriately by trained physicians, football’s risk is worth its benefit. He says keep football, but be proactive in promoting safety, by which I assume he means having trained medical personnel at practices and games. That’s always a good idea, since a key part of reducing brain injury is not letting a player with a concussion return to contact play until the first concussion has healed.
The last expert is a bioethicist. His contribution is brief, a sort of summary statement, and he doesn’t discuss any ethical questions the way the first expert did. Still, like the first expert, he doesn’t think high school football is worth the risk to the players — he would cancel it.
But I doubt that I would prevail. In spite of publicity about the dangers of football, the sport remains popular. More high school students participate in football than in any other sport. And many experts in sports medicine believe that football can be made safe enough. Given that, pediatricians should try to help parents and school boards understand the facts. They should also insure that a culture of safety prevails over a culture of winning at any cost.
People are deeply attached to high school football. I know this personally. My father was president of our local school board and also a pediatrician. When the local team experienced an epidemic of Staphylococcus aureus skin infections, my father, on the recommendation of the state health department, closed down the locker room so it could be thoroughly fumigated and cleared of this potentially deadly bacterium. That meant he had to suspend football for several weeks. Most people understood his action, but many were furious. A few threw bags of flaming garbage (and worse) on our front lawn. It was highly disturbing.
What do I think? I would not allow my son to play football. Beyond that, I think any parent who allows their son to play should be told the very real risks of doing so, something that is not happening now. Removing public financial support of high school football could well cripple the college and professional versions of the game, since high school play feeds both of those. But I would support the decision of any brave school board that takes such an action. If they did so, I suspect football would still exist, it just would need to find another source of financial support. I suspect that would be forthcoming from enthusiasts of the game.
We do many things in medicine to patients that are either not helpful or actually have the potential to harm. If you take the long view of medical history, this should not be surprising. After all, less than a century ago physicians were still giving toxic mercury compounds to people in the form of calomel, and a century before that physicians were bleeding people because they thought that was a good thing to do for serious illness. The dawn of scientific medicine in the late 19th century began the process of putting medicine on a scientific basis, that is, of demanding proof a particular therapy actually works, and why. But we still have many, many things we do in medicine that have never been studied rigorously and are done more because of tradition than anything else. I have been encouraged over the past decade or so to see that more and more accepted practices that have never been shown to be helpful are being questioned. Treatment of respiratory syncytial virus (RSV) infection is a good example of this; we now follow fairly specific guidelines regarding what to do, guidelines based upon actual evidence rather than tradition. Traditions die hard, though, and I still see some of my colleagues clinging to the older approach that has been shown not to help. We need to keep the stuff that works and discard that which doesn’t.
An interesting recent article asked the fundamental question of how many children receive one of our regrettably common treatments that not only don’t help, but might cause harm. The authors focused on 20 of these, such as cough and cold remedies, and analyzed how many children in a database of over 4 million received one or more dubious therapies during the preceding year. The results showed such unhelpful or even dangerous therapies are still extremely common. Around 10% of all children received them, at a cost of 27 million dollars, a third of which was paid out of pocket by the children’s families.
So what are these therapies? I noted over-the-counter cough and cold remedies above, which have been shown at best not to help and at worst to cause dangerous side effects. Other examples include testing for strep throat in children less than 3 years of age, blood tests for vitamin D deficiency in normal children, sinus x-rays in children with uncomplicated sinus infection, and head CT scans (or other neuroimaging) in a child with simple headaches. You can read the whole list at the reference cited above.
The consensus estimate is that around a third of all medical therapies done in America are at best unhelpful and at worst potentially harmful. We in pediatrics need to do our part to address this problem. A major issue is that our culture is conditioned to regard doing something as better than not doing something. We are primed to think the physician who listens to the parents (or the patient), ponders what to do, and then recommends doing nothing is somehow a poor physician because they “haven’t done anything.” We don’t value the explanation, the thinking, the diagnosis, as an important contribution to a child’s health. The irony here is that physicians are much more highly compensated for doing things and much less for offering advice. So there is a strong compulsion to do something. Also, listening to a parent and pondering takes more time than just prescribing a test or a therapy and physicians are rewarded for throughput, seeing one patient after another quickly. The market incentives are perversely stacked against practicing good medicine. I wish I could say that will change, but I don’t see any hopeful signs that it will.
A fairly recent article in the journal Pediatrics is both intriguing and sobering. It is intriguing because it lays bare something we don’t talk much about or teach our students about; it is sobering because it describes the potential harm that can come from it, harm I have personally witnessed. The issue is overdiagnosis, and it’s related to our relentless quest to explain everything.
Overdiagnosis is the term the authors use to describe a situation in which a true abnormality is discovered, but detection of that abnormality does not benefit the patient. It is not the same as misdiagnosis, meaning the diagnosis is inaccurate. It is also distinct from overtreatment or overuse, in which excessive treatment is given to patients for both correct and incorrect diagnoses. Overdiagnosis means finding something which, although abnormal, doesn’t help the patient in any way.
Some of the most controversial and compelling examples of overdiagnosis come from cancer research. Two of the most common cancers, prostate cancer for men and breast cancer for women, run smack into the issue. It is generally true that early diagnosis and treatment of cancer are better than late diagnosis and treatment . . . usually, not always. A problem can arise when we use screening tests for early cancer as a mandate to treat aggressively whatever we find. The PSA (prostate-specific antigen) blood test was developed when researchers noticed it went up in men with prostate cancer. From that observation it was a short but significant leap to use the test in men who were not known to have cancer to screen for its presence. The problem is at least two-fold. There is overlap between cancer and normal, and many small prostate cancers do not progress quickly. Since the treatment for prostate cancer is seriously invasive and has several bad side effects, the therapy may be worse than the disease, especially in older men. You can read more about the PSA controversy here. There are similar questions about screening for breast cancer; you can read a nice summary here. The controversy has caused fierce debates.
Children don’t get cancer very often, but there are plenty of examples of overdiagnosis causing mischief with them, too. The linked article above describes several common ones. A usual scenario is getting a test that, even if abnormal, will not lead to any meaningful effect on the child’s health. Additionally, an abnormal test then typically leads to getting other tests, which can lead to other tests, and so on down the rabbit hole. I have seen that many times. As the authors state:
Medical tests are more accessible, rapid, and frequently consumed than ever before. Discussions between patients [or their parents] and providers tend to focus on the potential benefits of testing, with less regard for the potential harms. Yet a single test can give rise to a cascade of events, many of which have the potential to harm.
This is kind of a new frontier in medicine, and the issue grows larger as the number of diagnostic tests available to us mushrooms every year. For a parent, a good rule of thumb is to ask the doctor not just what the benefits of a proposed test are, but also the risks. Importantly, ask what the doctor will actually do with the result. We are prone to think more information is always a good thing, but that clearly is not the case. And never, ever get a test just because you (or your doctor) are merely curious.
It has long been known that excessive exposure of your child to screens and social media (television, computers, iPads, iPhones, video games) can have profound effects on brain development. A big question is: “What counts as excessive?” No one is sure about that, and it is likely there is no clear-cut threshold. Brains being complicated things, it seems probable the threshold varies from child to child. Also keep in mind that computers and the like are necessary in modern life and can contribute significantly to learning. How to find a balance? The American Academy of Pediatrics, the organization representing most pediatricians, publishes consensus recommendations on many child health issues, including this one: “Media and Young Minds.” It’s an excellent summary of what we know about the issue and provides a list of specific suggestions. The current recommendations allow for more screen time than previous ones, but still recommend less than one hour per day for preschool children and little or none for children under eighteen months. The AAP suggests each family devise its own comprehensive media plan, rather than just letting things happen in the home without considering the implications.
I also suggest you read this article from NPR, which summarizes some of the results presented at a recent meeting of the Society for Neuroscience. It includes information about both pros and cons of screen exposure. Here are some results from mice suggesting video games function almost like a drug in their effects on the brain:
. . . a study of young mice exposed to six hours daily of a sound and light show reminiscent of a video game. The mice showed “dramatic changes everywhere in the brain,” said Jan-Marino Ramirez, director of the Center for Integrative Brain Research at Seattle Children’s Hospital. Many of those changes suggest that you have a brain that is wired up at a much more baseline excited level, Ramirez reported. You need much more sensory stimulation to get [the brain’s] attention.
Other investigators have suggested some degree of stimulation of this sort helps the developing brain stay more calm in our current environment, which is becoming ever more cacophonous and stimulatory. That viewpoint stresses we can’t turn back the clock to a simpler time and we should try to use media to prepare children for our world today. A sort of middle ground is the viewpoint that exposure to lots of screens and media is good for some children but not for others. Okay, that sounds reasonable, but how do we know who it helps and who it hurts? Nobody has an answer to that question.
What do I think? In my family we do limit screen time and virtually ban video games. It’s the rapid, flashing changes of games that appear most associated with learning problems like ADHD. I suppose I’m biased because I write books (on a computer!), but I think for older children the distinction is between using the computer as a tool versus as an amusement toy. Every parent needs to make their own decisions, of course, but developing an informed family policy and plan is better than just ignoring the issue.
Sometimes an interesting thing happens on patient rounds. Rounds are a traditional exercise in hospitals going back at least a century. In the old days, this meant the physician going from patient to patient. He (it was nearly always he back then) went over the patient’s progress with the bedside nurse, examined the patient, reviewed pertinent test results, made an assessment, decided on a plan for the day, and gave orders to implement the plan. He also explained things to the patient. That traditional system worked fine when there was only one physician running things. These days there are many caregivers involved, and intensive care units pioneered the practice of multidisciplinary rounds. What those amount to is an often fairly large group of people going together from patient to patient. Typically in the group are the bedside nurse(s), physician(s), which at an academic center includes residents, fellows and medical students, pharmacists, dieticians, physical therapists, and assorted other people involved in the patient’s care. It can be a large group, so large that some ICUs I know of hold these “rounds” sitting down around a table in a conference room.
I am often struck by the language people use at these rounds. In particular, there is an intriguing lack of assigning agency to what is going on. This is accomplished by extensive use of both the passive voice and strange sentence constructions in the third person: “The patient was thought to have pulmonary edema,” for example. Who thought this? “The patient was given furosemide (a medication to induce water loss).” Who gave this? They did — you know, them. Or you’ll hear something like this: “The patient was thought to have decreased cardiac function so an echocardiogram was ordered.” Who thought this and who ordered the test? Why did whoever it was think this?
In today’s ICUs there’s a lot of shift work. Nurses typically work 12-hour shifts. Resident physicians now also work shifts, since limits have been placed on how long they can work without relief. Often the only person who has been caring for the patient on successive days is the attending physician, the leader of the care team. A nurse or resident, someone who has not been caring for the patient consistently (or ever before), may only know what other people have told them. There is also the medical record, of course, but in these days of the electronic medical record there is typically a huge amount of extraneous stuff in the computer that obscures one’s ability to figure things out. (I’ve written about this issue before.) This can make for some amusing, or discouraging, exchanges on rounds. “They were thinking the patient might be in heart failure so they got an echocardiogram.” At which point I need to raise my hand and point out that “they” was me.
The old game of telephone (also known as Chinese whispers) has people relay the same story along a chain of listeners. The story unavoidably gets altered a bit with each retelling, such that after a few rounds key details get left out, wrong ones get added, and it can transform into a completely different tale. I have seen that happen many times in the ICU setting. Now and then I have to rummage through the medical record to track down the source of whatever it is and call that person to get the story. You know, the old-fashioned technique of one colleague talking to another, one who has firsthand knowledge of events. These days that often seems a quaint old practice.
In today’s complex hospital environment, especially at academic centers with resident physicians, it is uncommon for the same physician to admit a patient, care for him throughout his hospital stay, and then dismiss him. At one of the smaller hospitals where I work I actually do this frequently because I cover the PICU for a week-long chunk of time all by myself. The nurses change, though. So now and then I hear from one of them at morning bedside report about the mysterious “they” doing this or that when “they” is really just me.
The telephone game illustrates the problem with handoffs, times when care of a patient is transferred from one individual to another. Handoffs happen at both the nursing and the physician level. They are made necessary by the way shift work happens, but they are known to be fraught with danger. Many standardized communication tools are now available to reduce the risk of things getting missed or misrepresented but none of them are perfect. My advice is, when things are murky, to take a big breath and dive down the rabbit hole of the electronic record and identify when and why the thing started. Then if necessary call the person involved. Of course that takes time, precious minutes we often don’t have. That’s another problem for another day.
The following is from this recent study. It’s from Pediatrics, the official journal of the American Academy of Pediatrics. That’s a medical journal, but the introduction to the paper is written so clearly that anyone can understand it, so I’m quoting it pretty much as written.
Approximately 1.6 to 3.8 million sport/recreation-related concussions (SRCs) occur annually in the United States. In 2007, there were 250,000 emergency department visits for SRC, more than double the rate in 1997. Concussions result in symptoms (eg, headache, dizziness, nausea), impairment (eg, cognitive, vestibular, visual), academic and/or psychosocial problems, and recovery times ranging from days to months. Adolescents are at greatest risk for SRC and experience longer recovery than adult athletes due to maturation or other unknown etiology. Clinical guidelines recommend immediate removal from play if an athlete has a suspected SRC. These guidelines are based on . . . research indicating compromised neurometabolic function during the first 10 days postinjury that increases the risk of a subsequent SRC. These guidelines are also intended to reduce the risk of second impact syndrome, a rare but catastrophic condition that involves the loss of cerebrovascular autoregulation and brain herniation, and is often fatal among adolescent athletes who sustain brain injuries in short succession.
I’ve seen a death from second impact syndrome. The second head injury that produced it was a trivial tumble down 2 carpeted steps.
The Centers for Disease Control’s Heads Up concussion education program states, “It is better to miss one game than the whole season.” However, due to many factors, including the culture of sports (ie, play through injury), poor awareness of the signs and symptoms, and limited access to medical professionals, an estimated 50% to 70% of concussions go unreported/undetected. In fact, in 2013, the Institute of Medicine and National Research Council stated that the culture of sports negatively influences SRC reporting and that athletes, coaches, and parents do not fully acknowledge the risks of playing while injured.
There is immense social pressure to ignore the symptoms and keep playing.
Researchers suggest that exposure to physical activity immediately after concussion decreases neuroplasticity and cognitive performance and increases neuroinflammation. The physical exertion required for an athlete to remain in play after SRC may increase energy demand at a time when the brain is metabolically compromised and lead to similar outcomes reported in animal models. Axonal injury, astrocytic reactivity [two kinds of brain cells], and memory impairment are also exacerbated following a second injury 24 hours after an initial injury. The potential effects of continuing to play with an SRC have yet to be examined in adolescent and young adult athletes at risk for these adverse outcomes. The present study compared recovery time and related outcomes between athletes with an SRC who were immediately removed from play and athletes who continued to play with an SRC.
Understand this: a concussion, defined as a transient interruption in brain function, is brain damage. Just because we don’t yet have a specific scan or test to document the damage doesn’t mean it’s not there. The symptoms described above are proof enough. The good news is that the damage generally heals if allowed to do so. But if we don’t allow the brain to heal, repetitive injury leaves lasting damage. The recent recognition among professional football players of what is currently termed chronic traumatic encephalopathy is chilling evidence of what can happen. We don’t know what the lower threshold for causing this entity is — how much recurrent injury is required to produce it — but it has recently been identified in college football players.
So if your child suffers a concussion follow the rules, no matter what the coach or your overeager child wants. We’re talking about the brain here. This article well demonstrates that we’re not protecting our children well.
A manifesto has been making the rounds on Twitter (and other places) over the past year. It has been attributed to Dr. Mike Ginsberg, a California pediatrician. It reportedly was originally a Facebook post that has since been taken down, perhaps because of the controversy it generated. I can understand why — vaccines are a hot button topic and anyone who writes about them attracts attention, some of it unpleasant. I know that’s happened to me. Anyway, here’s the quotation attributed to Dr. Ginsberg:
In my practice you will vaccinate and you will vaccinate on time. You will not get your own “spaced-out” schedule that increases your child’s risk of illness or adverse event. I will not have measles-shedding children sitting in my waiting room. I will answer all your questions about vaccine and present you with facts, but if you will not vaccinate then you will leave my practice. . . . .
I have patients who are premature infants with weak lungs and hearts. I have kids with complex congenital heart disease. I have kids who are on chemotherapy for acute lymphoblastic leukemia who cannot get all of their vaccines. In short, I have patients who have true special needs and true health issues who could suffer severe injury or death because of your magical belief that your kid is somehow more special than other children and that what’s good for other children is not good for yours. This pediatrician is not putting up with it.
Never have, never will.
These are strong words indeed, and they came out of the context of the recent measles epidemic experienced by California that was driven by unvaccinated children. I’ve seen it percolate around social media and wondered how common his (assuming it’s his) stance is. Of interest is that California recently passed legislation severely restricting the ability of parents who send their children to public schools to opt out of vaccinations. A recent article in the journal Pediatrics, the official journal of the American Academy of Pediatrics, gives us some answers about how widespread Dr. Ginsberg’s viewpoint is. The title of the article was “Vaccine delays, refusals, and patient dismissals: A survey of pediatricians.”
The authors compared two surveys of a random sample of pediatricians who belong to the American Academy of Pediatrics (nearly all do). The first was from 2006, the second from 2013. The comparison found the percentage of pediatricians encountering families that refuse the standard vaccination schedule has risen from 75% to 87%; in other words, most practicing pediatricians encounter this. The surveys also asked what reasons the parents gave for declining to vaccinate. The pediatricians perceived that, although most parents declined out of fear of vaccine toxicity, a rising number (73% in 2013) did so because they believed vaccines to be unnecessary. What has changed significantly is what pediatricians do in this situation. The percentage of pediatricians who will dismiss from their practice families who refuse to vaccinate has doubled, from 6% to 12%.
This is a complicated issue. Dismissing a patient from one's practice is a formalized process that may take several months. Common reasons include nonpayment of bills or repeated failure to follow treatment advice (in this case, vaccinations). There is a legal process for doing this that involves written notification. That's the legal part. Ethically, physicians have a duty to care for their patients. When you "fire" a patient from your practice, you have a duty to help them locate another physician, and meanwhile you should continue to care for them. The fallout from this varies significantly depending on where you are. As near as I can tell, Dr. Ginsberg practices in Northern California near a large metropolitan area, and he is part of a large group, so there should be plenty of other pediatricians in the area. But what if there aren't? And what if what the patient wants is so far outside the mainstream that no physician will accept them? Are the physician and the patient stuck with each other?
It appears to me some pediatricians are handling the issue by refusing to accept new patients who won’t vaccinate. This approach gets around the problem because in that case the physician has no duty to care for the prospective patient. That’s not good enough for pediatricians like Dr. Ginsberg and, apparently, now 12% of all America’s pediatricians. We will have to see if this trend continues.
You should not have to guess my own view on vaccines. I trained in the subspecialty of pediatric infectious diseases before I trained in pediatric critical care. I well recall the children severely damaged or killed by, for example, H. influenzae infection; the current Hib vaccine has eliminated this scourge. I also have an advanced degree in the history of medicine and have studied epidemics over time. Vaccines have had an enormously beneficial effect. Recall the famous line of Santayana: "Those who cannot remember the past are condemned to repeat it."
Asthma is a complex, chronic lung problem that now affects nearly 10% of all children. Both the incidence of new cases and the prevalence of ongoing cases in the pediatric population have been rising steadily for years, although there are hints these increases may have leveled off. A wealth of research suggests a huge part of asthma causation comes from the environment the child lives in, things like air quality and exposure to various agents. Some children have clear-cut allergies as a contributing factor, but the majority don't. Genetic factors also play a role, probably because the way the lung responds to these various exposures is a tendency we inherit. Exactly what might be triggering the rise in asthma has been debated for many years. Candidate causes include a more sedentary lifestyle in children, increasing childhood obesity, and increasing urbanization of our country. A century ago most people lived on farms; now most don't. This possibility is the subject of a recently published and fascinating study in the New England Journal of Medicine. The authors were curious about asthma rates and causation in children who live in the traditional, rural setting typical many years ago. There are some data that children raised on traditional dairy farms, with early exposure to farm animals, have a reduced risk of asthma. The investigators used a clever comparison between two groups of children: Amish children in Indiana and Hutterite children in South Dakota. Both of these are religious groups descended from Anabaptist sects that originated in Europe.
I have had many Amish patients in my career, but not any Hutterites. The two groups are similar in lifestyle, but there are some key differences the authors of the study used to get at asthma causation. The Amish are predominantly farmers, although I have known many who are not. Amish farming practice is straight out of the 19th century. They use no power machinery. They use horses to plow, they manure their fields as their great-great-grandparents did, and they use horse-drawn implements to harvest their crops. If they have dairy cows, they milk them by hand into a bucket. The religious practices of Hutterites are very close to those of the Amish, but their farming practices definitely are not. The Hutterites live communally and use modern machinery on highly industrialized modern farms. The comparison between the two groups is useful because in other respects they have very similar lifestyles in things believed to be important for asthma risk. These include large families, minimal exposure to urban air, prolonged breast feeding, no indoor pets, minimal exposure to tobacco smoke, and low rates of obesity. They also have very similar diets and similar genetic backgrounds. Going into the study the authors already knew the prevalence of asthma for Hutterite children was 21% (a high rate) versus only 5% in Amish children. Why the difference?
The investigators measured many things, but key among them were studies on dust samples collected from the children's environments. They analyzed the microbial makeup of these samples and used them to challenge the lungs of mice in an experimental asthma model. They also looked at several markers of immune function in the blood cells of the two groups of children. What did they find?
The results are a bit complicated, and if you want the details look at the article. There’s also a good editorial accompanying it. The bottom line is that the innate immune response of the Amish children was profoundly different from that of the Hutterite children, and this difference appeared to be shaped by exposure to very different microbial agents in the environment as measured in the dust samples. More than that, the dust samples from the Amish farms actually protected the mice in the animal model from an asthma attack. That’s amazing. Maybe we should be treating asthma with Amish farm dust? I’m not serious, of course, but the study does suggest some reasons why the change from a traditional farming way of life to the way we live now may be part of why asthma is now so common. Early and sustained exposure to certain microbes may be a good thing, a notion that was proposed many years ago — the so-called hygiene hypothesis.
The bacterium Neisseria meningitidis, also known as meningococcus, is a horrible pathogen. It can cause rapidly lethal infection. The infection comes in two forms. It can cause meningitis, inflammation of the surface of the brain, or it can simply circulate in the bloodstream, a condition called meningococcemia. Some patients have evidence of both. You might think the brain infection is the worse of the two, but actually meningococcemia without meningitis has a far worse outcome. When the bacteria circulate in the bloodstream they cause profound shock, which can itself be fatal. They also activate the blood clotting system so the patient clots off blood vessels, leading to loss of limbs and worse. I've cared for more than a few patients with this infection, and it's a dreadful one. The bacterium itself is actually quite commonly carried in the throats of well, normal people. It gets passed around by respiratory secretions. Invasion of the body is rare, and we don't know why it happens in a few people and not everybody. We do know that once the percentage of people in a confined area, such as an army camp or a college dormitory, who carry the organism reaches a certain point, the risk of invasive infection rises quickly. There are many reports of epidemics in such settings.
Vaccines are designed to neutralize whatever pathogen they're directed against. The immune system has several components that work in concert to accomplish this. Vaccines cause the body to produce antibodies, proteins that specifically bind to the surface of the microorganism. Antibodies call down other cells and blood proteins that recognize the red flag of the bound antibody and destroy the bug. So the effectiveness of a vaccine depends on its ability to do this. Researchers typically measure blood levels of the relevant antibody as a proxy for effectiveness, once that antibody has been shown to neutralize the bug. You can verify that with what is known as a bactericidal test, in which blood components, including antibody and other things, are mixed with the bacteria in a tube to see whether they kill them.
Meningococcus comes in several related but distinct strains, as do most bacteria. We have an FDA-licensed, effective vaccine for all the strains except one, termed type B. These vaccines induce antibodies directed against the sugar coating (polysaccharide) on the bacteria. But type B has a surface polysaccharide similar to those on our own cells, and we don't want antibodies against such things, because they could also attack our own tissues. There is a vaccine, licensed in Europe, against type B meningococcus that chemists made by purifying six of the unique protein components on the bacterial wall and mixing them together in a vaccine. When injected into a person, that person makes antibody aimed at those components. But is that enough to kill the bacteria? That's the bottom line. The answer turns out to be: more often than not, but not all the time. The safety of the vaccine has been shown, but its effectiveness leaves something to be desired. A recent paper and editorial in the New England Journal of Medicine are instructive. If you want to dive into the details of how vaccines are made and how they work, it's a good article to look at, especially the editorial.
There have been 7 outbreaks of type B disease during the past 6 years at US universities. The investigators studied the vaccine when it was used (with FDA approval) during a recent outbreak of type B meningococcal disease in New Jersey that caused 9 cases and 1 death. The vaccine was offered to 6,000 students, and this paper reports results from 607 of them. Of note, the strain causing this particular epidemic was shown to have 2 of the protein components contained in the vaccine. How well did it work?
Sixty-six percent of the students developed the ability to neutralize the strain of meningococcus that caused the outbreak. That's not very good, really. When tested against the reference strain, the one used to make the vaccine, nearly 100% had neutralizing ability. Recall that the outbreak strain had at least 2 of the six protein components included in the vaccine. So one way to look at it is that yes, the vaccine induced immunity to the reference strain, but not so much to the strain that actually caused the outbreak. In the authors' words:
Our results indicate that knowledge of [bacterial killing] immunity against the vaccine reference strains is not sufficient to predict individual-level immunity against an outbreak strain, even when the [outbreak] strain expresses one or more antigens that are closely related to the vaccine antigens.
So now what? I assume researchers just need to keep trying, although it’s a daunting task because there is quite a bit of variability among the proteins in type B organisms. Clearly hitting 2 out of 6 was not enough in this case. Should the vaccine be licensed in the US even though it’s not as effective as we would like? The editorialist puts things this way:
The regulatory approval and clinical use of vaccines for pathogens that cause outbreaks will remain challenging.
Yeah, I’d say so.