I occasionally dip my toe into the constant internet flame wars over what is generously termed vaccine skepticism, less generously vaccine denial. If you’re looking to see one of these quickly, Twitter always has some ongoing vaccine fireworks. The character limit of Twitter tends to compress the exchanges into hurled invectives, only occasionally punctuated by futile pleas for calm. Many quickly devolve into exchanges between posters of what they believe to be crushing evidence in the form of an image or two. A huge variety of message boards and blog comment streams can be just as vitriolic, and there the posters are not confined to 140 characters. It’s a swamp, but it can be a fascinating swamp. Of course full disclosure requires I say I am a stout defender of vaccine efficacy and safety, which is not surprising since I am a pediatrician with subspecialty training in infectious diseases and critical care. One particular talisman vaccine skeptics (I’ll be kind here) obsess over is the package insert found in vaccine boxes. I’m often challenged on Twitter with an image of one and told to “read the insert, stupid.” Here’s a fairly benign example of what one sees:
That one’s just kind of ominous. There are plenty more that make it sound as if vaccines are highly dangerous — if you just read the insert. Here’s what can easily result:
Are you terrified yet? No? Well, if you’re not, perhaps it’s because you have a broader understanding of what drug package inserts are and how they got there. The inserts are produced by the manufacturer of the drug, and the FDA has long required manufacturers to include them. Every drug has one, and you can see a huge compendium of their contents by referring to a book that has been around for many decades: the Physicians’ Desk Reference, or PDR. For many years every licensed physician received these huge tomes in the mail every year, although I haven’t gotten one in a long time because now it’s on the internet. Mine always went straight into the trash bin anyway. Why? Because nobody really uses the package insert when deciding how to prescribe a drug. We use published scientific data and expert opinion in deciding how and when to use a drug. The PDR site has a pretty interesting disclaimer in fine print at the bottom:
PDR.net is to be used only as a reference aid. It is not intended to be a substitute for the exercise of professional judgment. You should confirm the information on the PDR.net site through independent sources and seek other professional guidance in all treatment and diagnosis decisions.
The package inserts are, in my opinion, mostly designed as an attempt to protect the manufacturers from liability for untoward outcomes. They throw in absolutely everything that seems remotely possible or has been anecdotally reported. And you should know those reports are not filtered at all, merely compiled. I’ve seen lists of side effects that include opposite things: constipation or diarrhea, high blood pressure or low blood pressure, and quite a few others. Vaccine package inserts are no different in this respect from those for a host of other drugs. Even the insert for normal saline, which is salt and water, sounds kind of scary. Saline is one of the cornerstone treatments we use in pediatric critical care, yet the insert warns: “Safety and effectiveness of Sodium Chloride Injection, USP 0.9% . . . in pediatric patients have not been established.” This is pure nonsense. We inject it as quickly as possible all the time for severe shock. I’ll end with this useful image, since vaccine skeptics do love to pepper their posts with dramatic images:
No, we don’t. That, as the kids say, is the tl;dr (too long, didn’t read). But if you are interested in why I think so, read on.
Here are some background statistics. In 2017 nearly 36,000 new physician applicants competed for about 32,000 residency positions across all specialties. For pediatrics, there were 2,738 first-year residency slots spread among 204 different training programs, about 9% of the total positions available, a share that has remained constant for some years. And, consistently, around 7-9% of graduating medical students choose pediatrics for their career. Since most first-year residents go on to finish their residency, that means we have around 2,500 new pediatricians completing training each year. How many of those choose to continue training in some subspecialty? In 2017 it was nearly 1,400, meaning around half of new pediatricians. The flip side is that the other half choose to practice primary general pediatrics, a much higher proportion than is the case for internal medicine.
Of the new pediatricians choosing to subspecialize, to train beyond general pediatrics in something like neonatology or cardiology, how are we doing in pediatric critical care? If you look at the results for the subspecialty match, you see critical care was tied with pediatric hematology for number three in popularity; neonatology is perennially number one, followed by pediatric emergency medicine at number two. In 2017, 188 pediatricians chose to continue their training to become pediatric intensivists. Nearly all of them matched to a training program. Here are the trends for the past 5 years:
Note there hasn’t been much change, maybe 10% from year to year. So the popularity of pediatric critical care has been fairly constant. Most of these fellows finish their program, so roughly 175 new intensivists enter practice each year. A few of the graduates may return to their home countries if they are not US citizens; we don’t know how large that number is. We need to staff our current PICUs and replace those intensivists who leave practice. Do we have enough people?
The American Board of Pediatrics has been certifying pediatric intensivists since 1988, the official dawn of the formal subspecialty. As of 2015 the board has certified 2,377 intensivists over that entire time span. (My certificate is #163, making me an old coot in the specialty.) How many of these intensivists are still practicing, and how many PICUs do we have in the US that need their skills?
In 2015 the American Board of Pediatrics published a detailed analysis of the workforce in pediatrics, including historical trends. Among other things, it showed the total number of pediatric subspecialty trainees across all fields had doubled over the past 15 years. Critical care, for example, has gone from 99 entering trainees in 2000 to the 188 noted above. How many of these 2,377 pediatric intensivists are still practicing in the US? A few clues help us estimate how many of my colleagues have left practice. We know that, by 2015, 399 of the 2,377 had not renewed their certification, a pretty solid indicator they are no longer practicing. That’s a lot of people — 17% of the total. If we add back the 2016 finishing fellows, we are left with around 2,200 practicing pediatric intensivists at most. The workforce data show something else: how old we are. Around half of currently practicing intensivists are in their prime career years of 35-50. A quarter of us are over age 55. I’m one of only 81 still practicing beyond age 65.
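The attrition arithmetic above can be sketched in a few lines. This is just a back-of-envelope restatement of the approximate figures cited in the post, not authoritative workforce data:

```python
# Back-of-envelope estimate of the practicing US pediatric intensivist
# workforce, using the approximate figures cited in the text.

total_certified = 2377     # certified by the American Board of Pediatrics, 1988-2015
not_renewed = 399          # had not renewed certification by 2015
finishing_fellows = 188    # approximate number of fellows finishing in 2016

still_certified = total_certified - not_renewed          # 1978
practicing_estimate = still_certified + finishing_fellows  # 2166

print(f"Estimated practicing intensivists: ~{practicing_estimate}")
```

The result, roughly 2,200, is an upper bound; the effective full-time workforce is smaller still because not everyone works full time.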
What about PICU numbers? There are just over 400 PICUs in the US at last count. They vary greatly in size from 4-6 beds at smaller community hospitals to 40 beds or more at the nation’s 200 dedicated children’s hospitals. There are around 4,000 PICU beds in total throughout the country. Now we need to match up intensivists with PICUs.
On average, there is one pediatric intensivist per 35,000 children in the US. This varies quite a bit from state to state, from a maximum density of one per 17,000 in Vermont to one per 225,000 children in Montana. Wyoming doesn’t have any, but it doesn’t have any PICUs, either. So we have 400 PICUs to work with and about 2,200 intensivists to distribute among them. Right out of the gate that’s 5-6 intensivists per PICU. But large PICUs at academic medical centers have many more than that, often 8-10 at least. This is partly because these units are large; in addition, their intensivists typically have other duties such as teaching and research. My admittedly rough but, I think, educated guess is that there are about 50 of these large units, potentially soaking up around 500 pediatric intensivists. This leaves maybe 1,700 for the 350 remaining PICUs, or perhaps 4-5 intensivists for each one. All of this assumes everybody is working full time, which is not the case, and that the workforce is evenly distributed, which it isn’t. To me the numbers look pretty tight. My own anecdotal experience bears this out: hardly a month goes by, often less, without somebody somewhere calling to recruit me. And they know I’m 65 years old, so although I have no plans to retire yet, I would just be a short-term fix for their staffing problem. Many of my colleagues around the country are working short-staffed and have been for years. Of course this overwork can contribute to burnout and make the problem worse. My off-the-cuff estimate is that there are about 2 intensivist positions for every working intensivist.
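To make the distribution argument concrete, here is the same arithmetic as a small Python sketch. The unit counts and the 50-large-unit figure are the post's own assumptions and guesses, not hard data:

```python
# Rough distribution of the estimated pediatric intensivist workforce
# across US PICUs, under the text's simplifying assumptions
# (everyone full time, evenly distributed).

intensivists = 2200        # estimated practicing workforce
total_picus = 400          # US PICUs at last count

per_picu_overall = intensivists / total_picus    # 5.5, i.e. "5-6 per PICU"

# Assume ~50 large academic units each absorb ~10 intensivists (a guess).
large_units = 50
staff_per_large_unit = 10
absorbed = large_units * staff_per_large_unit    # 500

remaining_staff = intensivists - absorbed        # 1700
remaining_picus = total_picus - large_units      # 350
per_small_picu = remaining_staff / remaining_picus

print(f"Overall: {per_picu_overall:.1f} per PICU; "
      f"smaller units: {per_small_picu:.1f} per PICU")
```

The straight subtraction leaves roughly 4-5 intensivists per smaller PICU before accounting for part-time work and uneven geography, both of which make real-world coverage thinner.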
So my bottom line is that we don’t have enough pediatric intensivists and, looking at the training statistics, we are unlikely to get significantly more. I don’t think the number of PICUs is too high; closing a bunch of them would be an unsatisfactory solution. The standard of pediatric care has advanced over the decades and every critically ill child deserves an appropriately skilled physician. I think the number of people we are training at best matches the number leaving practice. Occasionally I learn of new PICUs opening, but I think we currently have roughly the number we will have for the foreseeable future. Of course some may close owing to various reasons, primarily financial (PICUs generally lose money) or staffing-related.
So what should we do? I think we need to modify our staffing model. Some have advocated using advanced practice nurses, physician assistants, or other so-called mid-level providers. This might help, if they are supervised by intensivists. It can work in large PICUs, particularly ones with significant numbers of patients who have the same types of problems, such as dedicated cardiac units. But smaller PICUs treat a wide variety of complicated issues, things not well suited to the practice pathways and protocols often used by mid-level providers in other settings. All manner of things roll through the PICU door. So I don’t think that’s the answer. I think our future lies in the rapidly growing pediatric hospitalist movement: pediatricians who work exclusively in the hospital and can rapidly become fairly proficient in managing straightforward, less dire critical care issues, provided they have backup from pediatric intensivists when needed. For many smaller PICUs, a blended practice of intensivists and hospitalists can work; I know of several PICUs that use this model successfully. One thing seems clear: we will never have enough intensivists to care for all these children by ourselves.
The continuing controversies and debates over healthcare tend to conflate several overlapping concepts: access to health insurance, access to healthcare, and actual health. The current Republican position appears to me to be that universal access to health insurance solves all problems. Of course, if you don’t have money to pay for health insurance, all the access in the world won’t help you, and the subsidies proposed in the Ryan bill fell far short of allowing millions of Americans to buy adequate health insurance. I emphasize adequate because a cheap policy with large holes in coverage and caps on total benefits isn’t much help for many or most serious illnesses and injuries. Bare-bones, catastrophic policies also generally won’t cover mental health and addiction treatment, the latter of which is a huge and growing problem. These sorts of policies were forbidden by the Affordable Care Act.
There is also the issue of easy access to healthcare. Someone without insurance does not have that because many, many physicians will not see patients without insurance. Those who insist emergency departments are required to see all patients, even the uninsured, and thereby provide access to care, are ignorant of what emergency departments are and how they work. Medicaid traditionally covered only children, pregnant women, and the disabled. A key part of the Medicaid expansion contained in the ACA extended the program to low income adults. That expansion, for states that took the offer from the federal government, should allow research concerning the true bottom line — actual health. In the long run, of course, keeping the population healthier is the best way to reduce societal healthcare costs. It turns out we already have some data about that question. An article in The New England Journal of Medicine 5 years ago gave us some information about that true bottom line. In the current climate it’s well worth reading the paper again.
The authors took advantage of a natural experiment in which some states — Arizona, Maine, and New York — expanded Medicaid in some fashion prior to when the ACA kicked in. The investigators compared an easily measurable statistic — death rates at the county level for ages 20-64. Neighboring states without Medicaid expansion provided the control numbers. These older expansions were not as generous as those later provided in the ACA, but they yielded some interesting numbers. Here’s a summary of what they found regarding death rates. The trends were statistically significant at the p < 0.01 level.
As we typically say, association does not prove causation, but these results are pretty striking and need more discussion and recognition. If you have good healthcare insurance, you can get access to healthcare and, it appears, ultimately have a lower risk of dying. Medicaid has its flaws — it’s not the best insurance there is — but it makes a difference. I think a healthier society is a happier society.
Those of us who work in pediatric intensive care have frequent encounters with the problem of suicide and attempted suicide. It has seemed to me for some years that the numbers are increasing, and this has been shown to be the case. After years of declining, the suicide rate in our country has been increasing, now at about 125% of the rate of several decades ago. This increase accelerated after 2006. Although all age groups showed an increase, the rate among women, particularly adolescent girls, took a notable jump. In 2012 suicide was the second leading cause of death in adolescents aged 12 to 19 years, accounting for more deaths in this age group than cancer, heart disease, influenza, pneumonia, diabetes mellitus, human immunodeficiency virus, and stroke combined. Here are some recent statistics for women from the CDC (Centers for Disease Control and Prevention), although they don’t quite break out adolescents the way I would like.
Actual suicide is just the tip of the iceberg: among adolescent girls, who typically attempt it by drug overdose, there are as many as 90 attempts for every death. Since a large number of these attempts end up in the PICU, I’m not surprised we are seeing more and more of them come through our doors. A few other points are worth noting here. The proportion of attempts that end in death is unfortunately much higher for adolescent boys, because boys tend to use more violent means than girls, such as hanging, firearms, or automobiles. However, although rates for boys are up slightly, they really haven’t changed much. It’s also important to realize suicide attempts are a spectrum — some are more serious than others. Many girls take an overdose and then immediately tell somebody about it. These are often called suicide gestures and can be quite impulsive; some use the term “cry for help” to describe them. More ominous are children who plan carefully, such as by hoarding powerful drugs in secret and taking them in a setting where they won’t be found. They may leave a suicide note. I couldn’t find any data about whether these two categories differ in their rate of increase, but I assume they are tracking together. Finally, a child may not know which drugs are truly dangerous. I have seen very serious suicide attempts by children who took overdoses of medications we know to be innocuous but the child did not. Whatever the category of the attempt, of course, the child needs mental health services afterward. These days we find a child’s text messages to be very helpful. So why the increase in adolescent girls?
Presumably suicide rates are rough-and-ready markers for rates of depression. Is teen depression increasing? A 2006 study says no, at least up until then. What about the last decade, since 2006 appears to be the year suicide rates inflected upward in adolescent girls? I did find a 2015 snapshot from the CDC of the number of adolescents who experienced a major depressive episode during the year — for girls it was nearly 20%.
A recent study in Pediatrics, the journal of the American Academy of Pediatrics, found a nearly 50% increase in adolescent depression over the past 11 years. Mental health problems are notoriously difficult to study because of course we have no definitive test for them — no blood test, no fancy brain scans. We mostly rely on surveys. Still, it does seem something changed about a decade ago, and this is probably reflected in the increase in suicide attempts among girls at roughly the same rate as the increase of major depression.
There are a few other things to keep in mind. Prescriptions of anti-depressants have increased dramatically, particularly of drugs in the class we call selective serotonin re-uptake inhibitors (SSRIs). Common brand names for these are Prozac, Paxil, Celexa, and Zoloft. There has been concern that, in the short term after starting them, SSRIs may actually increase thoughts about suicide in adolescents. Another new development is social media. Teenagers, especially those in difficult home situations or who are socially isolated, are quite susceptible to bullying behavior, and cyberbullying has emerged as a new threat to such children. There have been several dramatic cases in the news about suicides following cyberbullying.
I’m sorry to say I really don’t know what explains these increasing rates, except to point out the overall rate of suicide for the whole population has also increased to some extent; it was 10.5 deaths per 100,000 persons in 1999 and is now 13 per 100,000. Middle-aged males have seen a dramatic jump in rates. It appears to me that, for many possible reasons, there is more social anxiety and depression in America, which in turn increases suicide rates. Adolescent girls are feeling this in particular. You might say our entire society is issuing a cry for help.
For many centuries medical practice was a black art. What physicians did was based upon theories of how the body worked that turned out to be fanciful at best, dangerous at worst. The late nineteenth century brought breakthroughs in the biological sciences, such as the identification of bacteria and new understandings of physiology, which increasingly placed medical practice on a scientific basis. That process has continued over the past 150 years, but parts of medicine remain a kind of black art; some of what we do is still based upon tradition, theories never completely validated, and sometimes just intuition and guesswork. I wish that weren’t the case, but there it is. Unfortunately, this can mean subjecting a patient to dubious, even dangerous therapies for which we have only sketchy evidence of efficacy. Once established, such practices can be hard to change because physicians, like everybody else, become attached to pet ways of doing things. The recent movement toward what is generally termed evidence-based medicine is an attempt to change this. Non-physicians are typically surprised, even shocked, to learn that much of what we do is not very evidence-based. The situation becomes even more interesting, if that is the right word, when we in fact have evidence that something doesn’t work but we keep doing it anyway.
A recent fascinating essay in The Atlantic, entitled “When evidence says no, but doctors say yes,” provides a good discussion of what can be at stake in this issue. In this example the financial subtext also becomes text, because several of the therapies in question make a lot of money for the doctor. So there may well be other motives here. The therapy of stenting for coronary artery disease, discussed in detail in the essay, is a good example of how common sense can be wrong.
When the coronary arteries, which supply blood flow to the heart muscle, get narrowed by disease the heart can be starved of oxygen and respond with pain. At worst the result can be a myocardial infarction, a heart attack. Forty years ago the only way we had to open up the blockages was to bypass them with a graft — major surgery. There are still times this is the best option. Subsequently cardiologists started doing other things to open up the vessels without surgery. This involved passing a thin device, a catheter, up through the vessel to the narrowing and stretching it open. Soon after came the notion of keeping the narrowed part open by placing a stent, a kind of wire expander, inside the offending vessel(s). That procedure makes intuitive sense to anybody with a rudimentary knowledge of plumbing; if the pipe is narrowed, open up the pipe and then prop it open. The procedure benefits some patients in some situations, but not most of them, especially not patients who are otherwise stable. It turns out to be better to use medicines to both eliminate the pain and reduce the chances of a heart attack. In the words of one expert, “Nobody that’s not having a heart attack needs a stent.” Importantly, putting a piece of expandable metal in the heart is not risk-free; it can cause serious complications, even death. So here we have a therapy that usually doesn’t help and can kill you. But many, even most invasive cardiologists keep doing it. The essay examines why this might be. It also gives several other examples besides coronary artery stents where research does not support continued use of a drug or a procedure.
There are a couple of issues in play — one specific, one general. The specific one is that here we have a procedure widely done for reasons that solid research has shown are not valid. Why? The essay’s author concludes physicians are driven very much by social rather than scientific considerations. The highly regarded Dartmouth Atlas of Health Care has for years been studying how physician culture, not science, substantially determines what physicians do. The more general issue is that it may surprise you to read we have solid proof of benefit for a disturbingly small list of the things doctors do. I think the main reason is that we do many, many things in medicine, and conducting a controlled trial of all of them, or even most of them, is impossible. Evidence-based medicine is fine for those things for which we have evidence. But for those things for which we have only intuition and sometimes guesswork, it is often best to remember the famous formulation of Loeb’s laws. When tempted to forge ahead into the mist, many times the best dictum is this: “Don’t just do something, stand there!”
At any rate, the Atlantic article is well worth a careful read.
The explosion of narcotic abuse over the past decade or so has led to a marked increase in the death rate from overdoses. More people taking narcotics also inevitably means some of them are pregnant women, and the drug crosses the placenta to affect the baby. When someone addicted to narcotics suddenly stops taking them, the result is acute narcotic withdrawal syndrome, a very painful and even dangerous condition. This is what happens when a baby is born to a mother who has taken significant amounts of narcotics during pregnancy — the drug supply is cut off when the umbilical cord is cut. Within a day or so the infant experiences acute withdrawal symptoms, termed neonatal abstinence syndrome (NAS). The incidence of this has risen dramatically, the CDC says by over 300% in the last decade. The incidence of NAS varies widely by state. West Virginia is the highest, affecting 3.3% of all births, closely followed by Maine and Vermont, which also have rates higher than 3%. Those are very disturbing statistics. Hawaii has the lowest rate.
The symptoms of NAS are variable and depend to some degree on the extent of the mother’s drug habit, and therefore the doses the infant was exposed to before birth. These symptoms include irritability, a peculiar high-pitched cry, tremulousness, difficulty sleeping, sweating, poor feeding, vomiting, and diarrhea. Not all infants have all the symptoms; most of them have several, though. We can confirm the diagnosis with tests on the infant (or mother) looking for opioids. Infants with mild symptoms often do not require specific therapy. If the symptoms are more severe we give the infant sufficient narcotic to relieve the acute symptoms and then gradually reduce the dose over time to wean the child’s brain from the need for the drug.
A recent study highlights another aspect of the drug epidemic. It indicates that, whereas 5-7 years ago the increase was uniform between urban and rural parts of the country, since then rural America has been much harder hit. The graph above is from that study. It also shows the related finding, as expected, that maternal opioid use parallels the increases in NAS. The observation that the raw numbers are very similar points out another thing: women who use opioids during pregnancy have a very high risk of giving birth to an infant afflicted with NAS. Some studies suggest the risk is well over 90%.
Much has been reported in the media about the epidemic, and it is an epidemic, of drug abuse washing over rural America, particularly West Virginia and Maine. This comes at a high cost to innocent babies. One area of particular concern is that we don’t know at this time if NAS affects a child’s brain permanently. It would not surprise me at all if it did.
That’s the provocative title of a recent article in Pediatrics, the journal of the American Academy of Pediatrics. The background to this controversy is the increasing recognition that chronic traumatic encephalopathy (CTE), a severe and debilitating brain problem first identified in former professional football players, can have its beginnings in college or even high school football players. That’s deeply concerning.
The article takes the form of this hypothetical scenario:
A primary care pediatrician in a small town receives a phone call from the local school board asking her if she will come speak to them about the benefits and the risks of football for high school students. The board is worried because, during the recent high school football season, a newspaper reported about the experiences of 3 young men who sustained concussions while playing football.
Four experts are then asked to comment on the scenario. The first expert is a pediatrician and an epidemiologist. It’s interesting that he, the first one in the discussion, chooses to evaluate the question in light of the principles of medical ethics. These include primum non nocere (first do no harm), beneficence (strive to do good), autonomy (people have a right to decide what to do with their bodies), and justice. Regarding the first of these, he states we already have ample evidence football can cause severe harm.
Knowing that football causes more harm to the brain than any other sport and yet encouraging participation while we await the results of more rigorous research violates the principle of do no harm.
This expert also suggests football violates the principle of beneficence, in that the harm it does outweighs whatever positive attributes it provides.
Football may provide benefits in character building, teamwork, and physical fitness, but given the potential long-term devastating harm, football does not meet the criterion of beneficence, because it is unknown whether the benefits exceed the risks.
But what about a person’s right to autonomy in decision-making? The expert concludes that, in spite of parents signing waivers, football programs as currently constructed do not meet the fairly strict criteria for informed choice we use in medicine. Those require, among other things, that patients be informed of the known risks of an activity, in this case brain injury.
As is apparent from a recent report on tackling and football, pediatricians do not have the knowledge required to help parents decide whether the potential health risks of sustaining these injuries (football-related) are outweighed by the recreational benefits associated with proper tackling.
Finally, the expert points to the ethical principle of justice. He concludes football, as currently played, violates this principle owing to, among other things, the huge preponderance of African-American players.
This means that African-American football players face a disproportionate exposure to the risk of concussions and their consequences. Out of fairness, the community must ask whether those who participate in football are already at increased risk of poor health due to social determinants, such as race/ethnicity and socioeconomic status. This is not to make the patently unfair suggestion that African-American males or others at risk should be singled out from participation in football, but rather to acknowledge that acquiescence toward the risks of football disproportionately affects these populations.
Abolish high school football, says expert number one. Pretty strong stuff, and I suspect most would regard it as highly controversial.
The second expert is a sports medicine physician. He notes football has always been a dangerous sport and has been regarded as such for nearly a century. He also notes that, according to the CDC, concussions are now ten times more common than a decade ago. A large component of this increase is probably increased awareness, however. This expert describes the paucity of data about the issue in children, which is kind of true. He also notes football’s popularity and that concussions occur in other sports, which is also true. But his conclusion, offering the standard “we need more research” dodge beloved of climate change deniers and vaccine skeptics, seems pretty feeble to me. Yes, we always need more research. But we already have quite a bit on this topic, with more appearing constantly.
The third expert is also a sports medicine physician. He too points out that concussions occur in other sports, sometimes even at higher rates. Like the second expert, he cites the character- and teamwork-building aspects of sports participation, an argument that has always struck me as more an article of faith than a matter of substantive proof. He advances the paradoxical argument that concern over concussions has gone too far, and that, managed appropriately by trained physicians, football’s risk is worth its benefit. He says keep football, but be proactive in promoting safety, by which I assume he means having trained medical personnel at practices and games. That’s always a good idea, since a key part of reducing brain injury is not letting a player with a concussion return to contact play until the first concussion has healed.
The last expert is a bioethicist. His contribution is brief, a sort of summary statement, and he doesn’t discuss any ethical questions the way the first expert did. Still, like the first expert, he doesn’t think high school football is worth the risk to the players — he would cancel it.
But I doubt that I would prevail. In spite of publicity about the dangers of football, the sport remains popular. More high school students participate in football than in any other sport. And many experts in sports medicine believe that football can be made safe enough. Given that, pediatricians should try to help parents and school boards understand the facts. They should also insure that a culture of safety prevails over a culture of winning at any cost.
People are deeply attached to high school football. I know this personally. My father was president of our local school board and also a pediatrician. When the local team experienced an epidemic of Staphylococcus aureus skin infections, my father, on the recommendation of the state health department, closed down the locker room so it could be thoroughly fumigated and cleared of this potentially deadly bacterium. That meant he had to suspend football for several weeks. Most people understood his action, but many were furious. A few threw bags of flaming garbage (and worse) on our front lawn. It was highly disturbing.
What do I think? I would not allow my son to play football. Beyond that, I think any parent who allows their son to play should be told the very real risks of doing so, something that is not happening now. Removing public financial support of high school football could well cripple the college and professional versions of the game, since high school play feeds both of those. But I would support the decision of any brave school board that takes such an action. If they did so, I suspect football would still exist, it just would need to find another source of financial support. I suspect that would be forthcoming from enthusiasts of the game.
We do many things in medicine to patients that are either not helpful or actually have the potential to harm. If you take the long view of medical history, this should not be surprising. After all, less than a century ago physicians were still giving toxic mercury compounds to people in the form of calomel, and a century before that physicians were bleeding people because they thought that was a good thing to do for serious illness. The dawn of scientific medicine in the late 19th century began the process of putting medicine on a scientific basis, that is, of demanding proof that a particular therapy actually works, and why. But we still have many, many things we do in medicine that have never been studied rigorously and are done more because of tradition than anything else. I have been encouraged over the past decade or so to see more and more accepted practices, therapies that have never been shown to be helpful, being questioned. Treatment of respiratory syncytial virus (RSV) infection is a good example of this; we now follow fairly specific guidelines regarding what to do, guidelines which are based upon actual evidence rather than tradition. Traditions die hard, though, and I still see some of my colleagues clinging to the older approach that has been shown not to help. We need to keep the stuff that works and discard that which doesn’t.
An interesting recent article asked the fundamental question of how many children receive one of our regrettably common treatments that not only don’t help, but might cause harm. The authors focused on 20 of these, such as cough and cold remedies, and analyzed how many children in a database of over 4 million children received one or more dubious therapies during the preceding year. The results showed such unhelpful or even dangerous therapies are still extremely common. Around 10% of all children received such therapies, costing 27 million dollars, a third of which was paid out of pocket by the children’s families.
So what are these therapies? I noted over-the-counter cough and cold remedies above, which have been shown at best not to help and at worst to cause dangerous side effects. Other examples include testing for strep throat in children less than 3 years of age, blood tests for vitamin D deficiency in normal children, sinus x-rays in children with uncomplicated sinus infection, and head CT scans (or other neuroimaging) in a child with simple headaches. You can read the whole list at the reference cited above.
The consensus estimate is that around a third of all medical therapies done in America are at best unhelpful and at worst potentially harmful. We in pediatrics need to do our part to address this problem. A major issue is that our culture is conditioned to regard doing something as better than not doing something. We are primed to think the physician who listens to the parents (or the patient), ponders what to do, and then recommends doing nothing is somehow a poor physician because they “haven’t done anything.” We don’t value the explanation, the thinking, the diagnosis, as an important contribution to a child’s health. The irony here is that physicians are much more highly compensated for doing things and much less for offering advice. So there is a strong compulsion to do something. Also, listening to a parent and pondering takes more time than just prescribing a test or a therapy and physicians are rewarded for throughput, seeing one patient after another quickly. The market incentives are perversely stacked against practicing good medicine. I wish I could say that will change, but I don’t see any hopeful signs that it will.
A fairly recent article in the journal Pediatrics is both intriguing and sobering. It is intriguing because it lays bare something we don’t talk much about or teach our students about; it is sobering because it describes the potential harm that can come from it, harm I have personally witnessed. The issue is overdiagnosis, and it’s related to our relentless quest to explain everything.
Overdiagnosis is the term the authors use to describe a situation in which a true abnormality is discovered, but detection of that abnormality does not benefit the patient. It is not the same as misdiagnosis, meaning the diagnosis is inaccurate. It is also distinct from overtreatment or overuse, in which excessive treatment is given to patients for both correct and incorrect diagnoses. Overdiagnosis means finding something which, although abnormal, doesn’t help the patient in any way.
Some of the most controversial and compelling examples of overdiagnosis come from cancer research. Two of the most common cancers, prostate cancer for men and breast cancer for women, run smack into the issue. It is certainly generally true early diagnosis and treatment of cancer is better than late diagnosis and treatment . . . usually, not always. A problem can arise when we use screening tests for early cancer as a mandate to treat them aggressively when we find them. The PSA (prostate-specific antigen) blood test was developed when researchers noticed it went up in men with prostate cancer. From that observation it was a short but significant leap to use the test in men who were not known to have cancer to screen for its presence. The problem is at least two-fold. There is overlap between cancer and normal, and many small prostate cancers do not progress quickly. Since the treatment for prostate cancer is seriously invasive and has several bad side effects, the therapy may be worse than the disease, especially in older men. You can read more about the PSA controversy here. There are similar questions about screening for breast cancer; you can read a nice summary here. The controversy has caused fierce debates.
Children don’t get cancer very often, but there are plenty of examples of overdiagnosis causing mischief with them, too. The linked article above describes several common ones. A usual scenario is getting a test that, even if abnormal, will not lead to any meaningful effect on the child’s health. Additionally, an abnormal test then typically leads to getting other tests, which can lead to other tests, and so on down the rabbit hole. I have seen that many times. As the authors state:
Medical tests are more accessible, rapid, and frequently consumed than ever before. Discussions between patients [or their parents] and providers tend to focus on the potential benefits of testing, with less regard for the potential harms. Yet a single test can give rise to a cascade of events, many of which have the potential to harm.
This is kind of a new frontier in medicine, and the issue grows larger as the already huge number of diagnostic tests we have mushrooms every year. For a parent, a good rule of thumb is to ask the doctor not just what the benefits of a proposed test are, but also the risks. Importantly, ask what the doctor will actually do with the result. We are prone to think more information is always a good thing, but that clearly is not the case. And never, ever get a test just because you (or your doctor) are merely curious.
It has long been known that excessive exposure of your child to screens and social media — television, computers, iPads, iPhones, video games — can have profound effects on brain development. A big question is: “What counts as excessive?” No one is sure about that, and it is likely there is no clear-cut threshold. Brains being complicated things, it seems probable the threshold varies from child to child. Also keep in mind computers and the like are necessary things in modern life and can contribute significantly to learning. How to find a balance? The American Academy of Pediatrics, the organization representing most pediatricians, issues consensus recommendations on many child health issues, including this one: “Media and Young Minds.” It’s an excellent summary of what we know about the issue and provides a list of specific suggestions. These current recommendations allow for more screen time than previous ones, but still recommend less than one hour per day for preschool children and little or none for children under eighteen months. The AAP suggests each family should devise their own comprehensive media plan, rather than just letting things happen in the home without considering the implications.
I also suggest you read this article from NPR, which summarizes some of the results presented at a recent meeting of the Society for Neuroscience. It includes information about both pros and cons of screen exposure. Here are some results from mice suggesting video games function almost like a drug in their effects on the brain:
. . . a study of young mice exposed to six hours daily of a sound and light show reminiscent of a video game. The mice showed “dramatic changes everywhere in the brain,” said Jan-Marino Ramirez, director of the Center for Integrative Brain Research at Seattle Children’s Hospital. Many of those changes suggest that you have a brain that is wired up at a much more baseline excited level, Ramirez reported. You need much more sensory stimulation to get [the brain’s] attention.
Other investigators have suggested some degree of stimulation of this sort helps the developing brain stay more calm in our current environment, which is becoming ever more cacophonous and stimulatory. That viewpoint stresses we can’t turn back the clock to a simpler time and we should try to use media to prepare children for our world today. A sort of middle ground is the viewpoint that exposure to lots of screens and media is good for some children but not for others. Okay, that sounds reasonable, but how do we know who it helps and who it hurts? Nobody has an answer to that question.
What do I think? In my family we do limit screen time and virtually ban video games. It’s the rapid, flashing changes of games that appear most associated with learning problems like ADHD. I suppose I’m biased because I write books (on a computer!), but I think for older children the distinction is between using the computer as a tool versus as an amusement toy. Every parent needs to make their own decisions, of course, but developing an informed family policy and plan is better than just ignoring the issue.