Asthma rates in children may finally be abating . . . somewhat

July 19, 2016  |  General  |  2 Comments

Asthma is by far the most common chronic lung problem in children, affecting nearly 10% of all children. It may even be the most common chronic health problem of any sort if you exclude obesity. What is it?

Here is a schematic drawing of what a normal lung looks like:

You can think of the lungs as being composed of two parts. The first is a system of conducting tubes that begin at the nose and mouth, move through the trachea (windpipe), split into ever smaller tubes, called bronchi, and end with tiny tubes called bronchioles. The job of this system is to get the air to the business portion of the lungs, which are the alveolar sacs. This second part of the lung brings the air right next to tiny blood vessels, or lung capillaries. Entering capillary blood is depleted in oxygen and loaded with carbon dioxide, one of the waste products of the body’s metabolism. What happens next is gas exchange: as the blood moves through the capillaries, oxygen from the air we breathe in goes into the blood, and carbon dioxide leaves the blood and goes into the air we breathe out. The newly recharged blood then leaves the lungs in an ever enlarging system of pulmonary veins and then goes out to the body.

The main problem in asthma is that the conducting airway system gets blocked in several ways, so the oxygen can’t get in and the carbon dioxide can’t leave. Although both are a problem in a severe asthma attack, getting the air out is usually a bigger issue than getting it in because we can generate more force sucking air in than blowing it out. So the hallmark of asthma is not getting the air out — called air trapping. Why does this happen? There are two principal reasons: for one, the small airways, the bronchioles, constrict (get smaller); for another, the walls of the airways swell and the airways themselves fill with excess mucus, blocking air flow. Here’s another schematic drawing of what that looks like.

Thus during an asthma attack these things happen, all of which act together to narrow the airways and reduce air flow:

  1. The smooth muscle bands around the tiny airways tighten
  2. The linings of the airways get inflamed and swell
  3. The mucous glands in the airways release too much mucus, filling the airways

The medicines that we use to treat asthma work by reducing (or even preventing) one or more of these things.

Between 1980 and 1995 the prevalence of asthma in children doubled. Then from 1995 to 2010 the number continued to increase, but more slowly. Where are we now? A recent report in Pediatrics, the journal of the American Academy of Pediatrics, gives some answers. The authors looked at the time period from 2000 to 2013. They separated patients by gender, race, geographical location, and socioeconomic group.

The investigators found that across all groups asthma prevalence steadily increased from 2000 to 2009, although the rate of rise had slackened compared with the previous two decades. The year 2009 was the peak. After that there was a plateau, and since then the rates have actually fallen a bit for the total group. Among poor children, however, asthma rates have continued to rise steadily. This is concerning. The reasons for these changes in asthma prevalence are complex, and experts think it is most likely an interplay of several things. If you are interested you can find more information in the article linked above. But it does appear that, on balance, the “asthma epidemic” is abating.

Linguistic theory and medical care: Sapir and Whorf join rounds in the ICU

July 17, 2016  |  General  |  No Comments

It’s common sense that language and thought are closely related. For example, any politician knows the words one uses to describe something can profoundly affect how listeners understand and react to what they hear. Some think it goes deeper than that. An old theory of linguistics, the Sapir-Whorf hypothesis, has been batted around since the mid-20th century. This notion, named for Edward Sapir and his student, Benjamin Whorf, proposed that language actually controls thought, how we regard the world. Their argument was that we think using language, and thus the idiosyncrasies of our language determine how we think. The theory implies there are some things two native speakers of different languages cannot fully explain to each other or even completely understand. In its strong form the hypothesis is not highly regarded these days among experts, but in its weaker form the notion makes considerable sense to me. You can read more about the linguistic pros and cons of Sapir-Whorf in many places, such as here, here, and here. Here is a nice PowerPoint presentation of its basic tenets. George Orwell’s 1984 is a powerful expression of the way whoever controls language can control thought. Today’s seemingly endless debates about “political correctness,” which generally seem silly to me, are getting at the same notion.

Yet language really does matter. I think that, over time, the words we physicians use to describe patients to each other, to interview patients and their families, and to explain treatments and therapies have importance far beyond the particular encounter. Our words have a cumulative impact on us, on our own attitudes and feelings. It is important to be empathetic and respectful not just because that is the correct way to behave to our patients, but also because it is the most caring way to nurture ourselves.

The intensive care unit is a place that can harden you. You hear it all around you in the language people use to describe patients and families. There can be a compulsion to sound hard-boiled and savvy. This easily degenerates into cynicism. Much has been written about the burn-out rate of people who work in the ICU environment. I think a portion of the burn-out relates to the language we use. After practicing critical care for 35 years I don’t think I’m in any danger anymore of experiencing burn-out. I also think one way we can at least partially inoculate ourselves against that possibility is to be careful of how we speak to each other, to our patients, and to their families.

From ALTE to BRUE: Changes in categorizing frightening spells in infants

June 1, 2016  |  General  |  No Comments

There is a troubling entity in pediatrics. Sometimes infants appear to have suffered some catastrophic problem, only to recover within minutes. Forty years ago these were termed “near miss sudden infant death syndrome events.” The notion was that they represented part of the spectrum of SIDS — sudden infant death syndrome — that had, for some reason, been averted. In the mid-1980s researchers realized these spells weren’t really related to SIDS because infants who experienced them were not at higher risk for having a real SIDS event later. A new term was coined: “Apparent life threatening event (ALTE).” The working definition was this:

An episode that is frightening to the observer and that is characterized by some combination of apnea [pause or cessation of breathing], color change (usually cyanotic [blue] or pallid but occasionally erythematous [red, flushed] or plethoric), marked change in muscle tone (usually marked limpness), choking, or gagging. In some cases the observer fears that the infant has died.

ALTE events are not only terrifying to parents; physicians are also worried and uncertain about what to do. I know — I’ve cared for many infants over the years who fit this description. It’s easy to know what to do if the infant looks abnormal when you examine him because whatever abnormality you find guides your course of action. But typically these babies look fine by the time the physician sees them. If the parents call 911, the paramedics also find a normal-appearing baby. Usually we admit such babies to the hospital and place them on heart and breathing monitors to see if they do whatever it was again. In my experience, they typically don’t. So now what? In at most half the cases we identify a probable cause, the most common of which is reflux of feedings from the stomach to the mouth, after which a small amount gets in the airway. When that happens a common infant reflex is to stop breathing — stimulating them gets them to start again. Respiratory infections (particularly RSV) and convulsions are also potential causes. But at least half the time we have no idea what happened.

What is the risk of an ALTE recurring? It turns out we have no idea about that, either. A recent review of 37 research articles spanning 40 years concluded such a prediction can’t be made, largely because the definitions used for what is or is not an ALTE have been quite variable in spite of the above consensus statement. A huge issue is that infants who are clearly abnormal, but who arrive for evaluation with an ALTE, are often lumped in with children who appear normal after the event. Those are very different groups.

To try to analyze this troubling condition the American Academy of Pediatrics has just issued a clinical practice guideline about what to call these events and what to do about them. They did add to the alphabet soup by coining a new term: “Brief, resolved, unexplained event (BRUE).” I’m not sure how useful that term will be, but the intent is to separate out those children in whom a detailed conversation with the parents and a thorough physical examination do not identify any potential cause. For those infants the committee (and it was a committee, with all the problems that can bring) recommends not doing so many tests. One thing is clear: a home apnea monitor does not at all reduce the risk of future harm, and so using one is not recommended. A BRUE (or ALTE) is not associated with risk for SIDS — that’s a key observation that has stood the test of time. We do know one thing that is clearly associated with SIDS: putting babies to sleep on their stomachs. Ever since the introduction of the “Back to Sleep” program of instructing parents to put their babies to sleep lying on their backs, the incidence of SIDS has dropped remarkably, although it still happens.

What do I think? We don’t know what causes these spells. However, premature infants are at higher risk for them and we know such infants often have disordered breathing and heart reflexes. I have no data at all about it, but my guesswork opinion is these spells represent immaturity of infants’ brains such that they respond to a variety of stimuli by pausing or stopping breathing or slowing their heart rate. The tendency evidently goes away with age, since we only see this problem in infants. I hope this new categorization leads to useful research about what is really happening during these mysterious events.

Is it time to toss out the traditional “pre-med” pathway? (YES)

May 29, 2016  |  General  |  3 Comments

The USA trains its physicians differently from every other Western country I know. Everyone (with rare exceptions) who goes to medical school first must get a four year undergraduate college degree in something. There are no such degrees in medicine, although the overwhelming majority of students going on to medical school major in one of the sciences, such as chemistry, biochemistry, and biology. If they don’t major in a science, they generally must take two years of chemistry (which typically includes organic and biochemistry), a year of biology, and a year each of physics and mathematics. Looking at that list of prerequisites it’s easy to see why most just major in a science. Such students are “pre-meds,” and some colleges actually offer a major in pre-med, something I think is a terrible idea because it’s not really a thing.

Many people recall from their college years the awful reputation, occasionally deserved, pre-meds have for being cutthroat competitors and generally uninteresting persons because they are so focused on getting into medical school above all else. It can be a difficult subculture to be in among college students. Can or should we do anything to change that? After all, a good share of the criticism leveled against unfeeling physicians with poor interpersonal skills has been linked to how we are trained. Would restructuring the premedical years in college help make us more rounded and empathetic humans? A recent editorial in Scientific American advocates changing things.

It’s a good question, although I should say the author to some extent conflates the Flexner Report with premedical education. That report is now over a century old, and was a survey of American medical schools. It documented how terrible the scientific training of medical students was at the time and led to a revolution in how medical education is done, demanding exposure to the sciences. But it applied to medical schools, not colleges. Still, I think it is time to think about how we teach future physicians at the first step, before they get to medical school.

I’m an anomaly. My training for a medical career was outside the mainstream. Although I took the various prerequisite courses for medical school, I didn’t major in a science. Far from that. I double majored in two very nonscience things — history and religion. In spite of that, after medical school and residency I spent half my time for twenty years doing research in the basic sciences of cellular and molecular biology. I knew very little about either when I entered medical school. The point to me is we learn what we need to know when we need to know it. I did that, and I know others who did, too. Cramming my college years with all that stuff would have denied me the opportunity to study other things which, as it turned out, have proved invaluable to my ability to practice medicine. The humanities matter, I think.

Of course medical students need a minimum grounding in science to even understand medical school. So we still need that, although I think we could shave off some of the physics and mathematics. You need some math to understand chemistry, for example. But calculus? Not really.

During the 1990s I spent four rather discouraging years on a medical school admissions committee. People on the committee kept talking about how to find well-rounded, complete applicants. But they rarely voted for such students. It was all about their grades in science courses. Sometimes I would nearly shout that they should look at me. There I was, doing basic research, yet I started out from religion and history. And I’m not special — there are many like me who just aren’t given the chance.

I think if we are to change physician culture at all we need to look at the people we are putting into the pipeline. There is no evidence whatever that high grades in science courses predict success as a physician. They just don’t, and we should stop pretending they are markers for future medical excellence. From what I’ve seen, they may even predict poor performance as an empathetic doctor.

The Joan Rivers case and patient safety

May 14, 2016  |  General  |  No Comments

I read today a settlement has been reached among the doctors, the medical facility, and the family of Joan Rivers, who died in 2014 during the course of what is usually described as a routine, minor medical procedure. The details of both the settlement and the specific events are confidential, but the Times article does offer some key information we should all think about.

Ms. Rivers had complained of a hoarse voice and a sore throat and was having these symptoms evaluated at an outpatient surgical center in New York City. The procedures she was having — laryngoscopy and endoscopy of the esophagus and stomach — are standard ways of evaluating that problem. Both are generally low risk. But they’re not zero risk — nothing in medicine is.

What apparently happened is Ms. Rivers had her airway close off, laryngospasm, something those of us who sedate and anesthetize people see now and then. There is a standard, step-wise way of dealing with this, and it appears this routine was not followed: appropriate equipment and personnel were not available. So she died from lack of oxygen.

The key point in this tragedy is that routines and protocols are essential to patient safety, and one always needs to be prepared to deal with the worst-case scenario. If you aren’t, statistics will catch up to you eventually and people will be harmed.

From the news story I think in Ms. Rivers’ case there was probably another issue: rich and famous people don’t always get the best care. They can (and do) make special requests to deviate from protocol, and physicians are tempted to allow it. I spent many years working at the Mayo Clinic. Although Mayo cares for many thousands of ordinary folks like you and me, they have a fair number of patients like Ms. Rivers. I’ve had patients in that category. What I saw at Mayo was that everybody, from Saudi royalty to children of billionaires, got treated the same. It’s a patient safety issue we should never forget.

Poverty and a child’s brain: new data on the host of bad effects

May 5, 2016  |  General  |  No Comments

As a group, children in poverty are more likely to experience developmental delay, perform worse on cognitive and achievement tests, and experience more behavioral and emotional problems than their more advantaged peers. In addition, child socioeconomic status is tied to educational attainment, health, and psychological well-being decades later. Increasingly, research is focused on understanding the extent to which these long-term outcomes are related to changes in the developing brain.

This sobering quotation comes from a recent review article in Pediatrics, the journal of the American Academy of Pediatrics. It highlights an observation we’ve sort of known about for many years, but now we have some specific research to support it. The article lays it all out.

There are decades of animal studies, research in which the brain itself can be examined directly, demonstrating that an enriched environment, one with novel experiences and social stimulation, positively affects brain growth. In contrast, chronic stress is bad for the developing brain. The actual structure of the brain is altered. Environmental exposures such as these affect which genes are expressed. These observations are tied to the notion of brain plasticity, the fact that the brain is not fully developed at birth. We know this is also true in humans. We also know sensitivity to external stimuli, both positive and negative, is heightened during periods of rapid brain growth, such as early childhood.

Childhood poverty is associated with several things known to affect brain growth and development, all of them negatively. Examples include the following:

  1. Cognitive stimulation in the home. Poor children are less likely to be exposed to books, toys, and other stimuli because their parents can’t afford them. There is also evidence poor children are exposed to less complex and stimulating words and speech patterns, possibly because they are more likely to be left alone for extended periods.
  2. Nutritional deprivation. Not surprisingly, a child’s brain needs good nutrition to grow to its fullest capacity. Poor children are less likely to get that. They are especially more likely to be deficient in iron.
  3. Exposure to stress. The phrase “toxic stress” describes stress severe and prolonged enough to have negative effects on brain development. Poor children are more likely to live in the kind of chaotic social situations that fit this description. The biological basis for this is under study, but increased levels of stress hormones, the body’s natural response to stress, have been implicated.
  4. Environmental toxins. Poor children are more likely to be exposed to things that negatively affect brain growth and development. Two examples are exposure to lead and tobacco smoke, both more prevalent in poor households.

If you are interested in the specifics, the article goes on to present the evidence for how these things affect specific areas of the brain, particularly those associated with emotions, learning, memory, and what we call executive functions. The latter refers to higher order planning, reasoning, and decision making. Poor children are more likely to have problems in all of these categories.

But there is hope, as the authors point out:

Although young people are particularly vulnerable to the negative effects of poverty, their systems are also likely more malleable in response to intervention. The success of interventions such as the Perry Preschool Program demonstrate that the impact of poverty may be preventable or reversible at cognitive and behavioral levels.

There are a couple of key points to make with this. Genetics and environment are not destiny. Certainly individual children escape from poverty to achieve normal or even extraordinary things. We are talking about populations here, about averages. And it is clear to me that a crucial part of improving child health in America is directly linked to reducing poverty. The American Academy of Pediatrics has addressed this. The deck is stacked from birth against poor children. Certainly individual efforts are important. But the known effects of poverty on the developing brain make it unfair to demand of poor families that they somehow manage on their own to give their children the same chances at success I had.

The electronic medical record and the loss of the patient’s story

May 2, 2016  |  General  |  No Comments

This is a post I wrote some years ago for Maggie Mahar’s excellent blog, HealthBeat. With the now nearly universal adoption of the electronic medical record (EMR) I think it matters more than ever.

The EMR is here to stay. Its adoption was initially slow, but over the past decade those hospitals that do not already have it are making plans for implementing it. On the whole this represents progress: the EMR has the ability to greatly improve patient care. Physicians, as well as all other caregivers, no longer have to puzzle over barely legible handwritten notes or flip through pages and pages of a patient’s paper chart to find important information.

With the EMR, it is easy to see what medications a patient is taking, when they were started, and when they were stopped. Physicians can easily find key vital signs – temperature, pulse, respirations, and blood pressure – plotted over any time frame they wish. All the past laboratory data are displayed succinctly. But it is not all gravy.

I use the EMR every day, and I am old enough to have trained and practiced when everything was on paper. While overall I am happy to have electronic records, there is a problem: the EMR is trying to serve too many masters. The needs of these various masters are different, and sometimes they are incompatible, even hostile to one another. These masters include other caregivers, the agencies paying for the care, and those interested in the medico-legal aspects of care.

What can happen, and I have seen it many times, is that the needs of the caregivers take a back seat to the needs of the payers and the lawyers. The EMR is supposed to improve patient care, but sometimes it makes it worse. Physician progress notes illustrate how this happens.

Progress notes are the lifeblood of the medical record. They tell, from day to day, what physicians did to a patient and why. They are a narrative of the patient’s care. Three decades ago we sat down, pulled out a pen, and wrote out our daily progress notes. There were standard ways of doing this, but physicians were free to organize their notes however they liked. That was both a blessing and a curse. It was a blessing because not all patients fit the standard way of note writing, so you could modify how you recorded things; it was a curse because every physician was different, and some wrote very sketchy notes indeed, notes from which it was very difficult to figure out what happened.

I once did a research project for which I was reading physician notes from the nineteen twenties, thirties and forties. I recall one patient in particular who was clearly desperately ill. He had critically abnormal vital signs (which I could tell from the nurses’ graphic chart), needed several blood transfusions, and even stopped breathing once. His progress note for the day, written by a very famous and distinguished physician, was one line: “Mustard plaster didn’t work.”

Physician notes have evolved a great deal since 1930. Certainly in my medical career, which began in 1974, physicians were expected to make some reference to what they were thinking, why they did or did not do what they did. Sometimes the notes were cryptic jottings that made it very hard to follow what was happening. But most of the time you could understand what your colleagues were thinking.

But while this worked reasonably well for physicians, other users of the medical record complained loudly. Payers, such as insurance companies and Medicare, based their reimbursement upon those notes. They were unwilling to pay for anything that was not clearly documented. They also increasingly based their payment structure on the complexity of the medical decision making; if physicians wanted to be paid at a higher rate for managing a complex and difficult patient they needed to show in their note just why that patient was complicated. They needed to show what they were thinking, and what information, such as laboratory data and the physical examination, they used to make their decisions.

Finally, for the lawyers, the operative phrase was “if it’s not documented, it didn’t happen.” In theory, the goals of all three users – caregivers, payers, and lawyers – should be in alignment. But with the EMR the needs of the caregivers, which should be paramount, are losing ground.

The EMR, since it is on a computer, can be manipulated in all the ways a computer allows. Hospitals are laying out millions to implement the EMR, and to ensure maximum payment they want to make sure it is easy for the payers to find in the EMR all the things the payers want there. This is accomplished, among other things, through the use of templates and “smart text” for progress notes. For example, a physician writing a progress note in Epic, a popular EMR system, can open a template that has many components of the evaluation already filled in. The program can bring into the note all the previous laboratory values. It has all the categories of the physical examination sitting on the screen for the physician to fill in.

It is easy to “drag and drop” information from previous notes with simple keystrokes. There’s nothing intrinsically wrong with all this. It can make producing a complete progress note quick and easy. But it also can destroy the original purpose of the progress note – to give a narrative of the patient’s progress. It can stifle the conversation between physicians embodied in traditional progress notes.

Recently I saw an example of the problems this can cause. A couple of weeks ago I heard I was getting a patient into the pediatric intensive care unit with multiple problems, most acutely a blood problem. One of these lesser issues was a heart problem that required surgery. Because of the other serious problems, though, the surgery had been postponed for the future. I read about all this in the patient’s EMR before she even arrived in the PICU, which is one of the great aspects of the EMR. We no longer have to wait for a clerk pushing a cart around the hospital to deliver the paper chart. The patient had been seen just that morning by her hematologist for the blood issue and the progress note in the EMR told me the plan for her heart problem was surgery sometime in the future when the child’s other problems had improved. It said so right there on the screen. In fact, all the notes had been saying that for over a year.

So imagine my surprise when I went in to see the child and saw an obvious and well-healed surgical scar on her chest, clearly from heart surgery. She had had her heart fixed two months before at another institution. I gave her hematologist the benefit of the doubt and assumed her doctor knew the surgery had been done, and that what had happened (I hope) was that the doctor had used the beguiling convenience of drag and drop on the progress note template to do the note. This particular incident was innocuous, but I think you can see the potential for mischief with this sort of thing.

This is not an isolated event. I have seen many examples – so many that I now cast a suspicious eye on all those uniformly formatted progress notes. The ease with which mounds and mounds of verbiage and laboratory data can be stuffed into a progress note may give the payers what they want, but it often does not give me what I want – and that is some evidence that all this information was processed through a physician’s brain and led to a carefully considered decision about what to do. I want a human voice, and that is getting harder and harder to find in the EMR’s stereotypic and bloodless documentation.

Medicine is about stories – patients’ stories. I was taught forty years ago that most of the time the history gives us the diagnosis. Osler reputedly said: “Listen to the patient. He is telling you the diagnosis.” (That attribution has been questioned, but the spirit is definitely Osler’s.)

Of course these days our wonderful scientific tools often give us the answer, and I certainly do not wish to toss all those things aside to go back to using only what Osler had. But medicine is not really a science. It is based on science, uses science, and is increasingly more scientific. But medicine also contains large measures of intuition, educated guessing, and blind luck. I do not think that aspect of medicine will ever completely disappear. When I read (or wade) through a patient’s record, I look for the story. When I cannot find a coherent story, I cannot give the best care.

For myself, even though I of course use the EMR, I refuse to use all those handy smart text templates. I type out my progress notes, organized as I did when I used a pen and chart paper. It takes me a little longer, but it makes me think things through. No billing coder has ever complained. More than a few colleagues have told me that, when we share patients, they search through the EMR to find one of my notes to understand what is happening with the patient.

My advice to other doctors is this: don’t let the templates get in your way. Tell the story.

Should children ever play tackle football?

April 28, 2016  |  General  |  No Comments

A recent editorial in the New England Journal of Medicine asked a good question: Is it acceptable for children to play tackle football? The background for this question is the emerging understanding that repeated blows to the head, even a helmeted head, can cause brain injury. That injury is cumulative over time. There has recently been a lot of media attention about this in the wake of well documented examples of professional football players suffering from an entity now called chronic traumatic encephalopathy (CTE). The disorder has actually been known about for a long time. Previously, however, it was believed mainly to be associated with boxers, who of course sustain many hundreds of blows to their exposed heads over the course of their careers. The presumption is that CTE results from the cumulative effects of a long string of concussions, even minor ones.

Although we now know professional football players are at risk for CTE, it is less clear where the risk begins — how many blows to the head it takes to put a person at risk. In other words, is there a safe threshold? Another key question is whether a child’s brain has unique properties that affect risk. Some have said teaching “safe” tackling techniques will reduce the risk, but we have no information about whether this is true. We do have some recent data on the effects of playing football on normal adolescent boys.

Investigators fitted special helmets on high school football players that collected data on all head impacts, whether or not the individual experienced an actual concussion. Key aspects of brain cellular microstructure were studied before, during, and after the end of the football season. The results are concerning. In the authors’ words:

Our findings add to a growing body of literature demonstrating that a single season of contact sports can result in brain changes regardless of clinical findings or concussion diagnosis.

Of course this is potentially a huge issue, since millions of children and adolescents play tackle football. Knowing what we know now, is this safe? Or, if not totally safe, is the risk for brain damage sufficiently tiny that parents who want their children to play football can realistically allow it? The American Academy of Pediatrics has issued a statement about it, but the recommendations waffle on the big question: it recommends more adult supervision, teaching of proper tackling techniques, and strength training, particularly of the neck muscles. (The article is also good because it reviews all of what we know about the question.) The editorial from the New England Journal is more blunt. It nearly, but not quite, comes down on the side of recommending tackling be eliminated. This reticence is understandable because it’s a potentially inflammatory opinion. Yet the editorialist makes some good points:

The AAP committee shied away from endorsing the elimination of tackling in youth football, because doing so would fundamentally change the way the game is played. Yet evidence indicates that tackle football in its current form is inconsistent with the AAP mission “to attain optimal physical, mental, and social health and well-being for all infants, children, adolescents and young adults.” Repetitive brain trauma can have serious short- and long-term consequences, including cognitive and attention deficits, headaches, mood disorders, sleep disturbances, and behavioral problems. To significantly reduce the incidence of brain trauma in young people, I believe that physicians should consider endorsing strategies that alter the way football is played.

My own view is that we know repetitive blows to the head, and certainly multiple concussions, are associated with permanent brain damage. We also know that the brain of a child or early adolescent takes longer to recover from a concussion than does that of an adult. We don’t know how many blows are safe, particularly how many severe ones. It appears even a single season of play causes changes in the brain, but we don’t know if these changes are transient or significant. But for these reasons I wouldn’t let my son play tackle football. That’s a personal decision. Other parents can make their own choices. I would not at all be surprised if we learn with further research that tackle football is unacceptably dangerous for children.

It’s time for formal classification and regionalization of pediatric critical care units

April 21, 2016  |  General  |  No Comments

There are over 400 pediatric intensive care units (PICUs) in the USA, as most recently estimated by the Society of Critical Care Medicine. These units vary widely in size, from 4 or 5 beds to 50 or more. The smaller units are generally found in community hospitals; the larger ones are usually in academic medical centers, often in designated children’s hospitals, of which there are 220. Given this size range, it is not surprising the services provided at PICUs vary widely. There are no defined standards for what a PICU should be, although the American Academy of Pediatrics (AAP) suggested some over a decade ago. The AAP also suggested dividing PICUs into two categories, Level I and Level II, although in my experience no one pays much attention to the distinction. A great many of the recommendations are about what the equipment and staffing for a PICU should be. There is little if anything about the crucial issue of range of practice. Right now a PICU cares for whatever patients the facility wishes to care for. I don’t think this is the best way to do things, and there are a couple of examples from other specialties for which there are solid recommendations regarding appropriate scope of intensive care practice.

The oldest example is neonatology, which is practiced in neonatal intensive care units (NICUs) by pediatric specialists known as neonatologists. NICUs care for sick newborn babies, the overwhelming majority of which are infants born prematurely. The neonatal guidelines date back to 1976, when the March of Dimes Foundation spearheaded an effort to classify and sort out newborn care. They proposed 3 levels of units: level I was for normal newborns, level III was for the sickest babies, and level II was somewhere in between. These designations have been broadly adopted, carrying along with them the specific expectations of just what care a level III NICU should be able to offer. The guidelines were revisited and reaffirmed in 2012. Importantly, the guidelines also stated that Level III NICUs had an obligation to provide outreach and training to their surrounding region to help smaller hospitals resuscitate and stabilize sick infants for transfer to a Level III unit.

This system has been successful in that it has been associated with greatly improved outcomes for premature infants. A new category, Level IV, has more recently been added, indicating an even higher level of care for the sickest of the sick. All of these NICUs are in major medical centers. In the beginning, this was also true of Level III units. Since then, however, neonatology has expanded from its origins into medium-sized community hospitals, many of which now have Level III units. This has been good for babies, in that it brought the skills of neonatologists to more infants. But there are some concerns it may dilute the expertise of providers, since there is good evidence that units with more admissions have better outcomes. The optimal number of admissions and NICU size is still a matter of debate. Even so, the situation is a lot more worked out than is the case with older children and PICUs. An unspoken subtext in this discussion is that NICUs, for reasons I won’t get into here, are generally money makers for hospitals, tempting them to start one mainly for that reason. PICUs generally break even financially at best, and often not.

Trauma surgery is another example of a classification system that has improved patient care and outcomes. Trauma center classification is the opposite of NICUs: Level I is the highest, ranging down to Level V for facilities that are only equipped to stabilize patients and send them on to a higher level center. Those in between have progressively more capability until they reach Level I. Unlike NICUs, there is a process of certifying the qualifications for trauma centers. A state can designate a trauma center wherever and however it likes, but the American College of Surgeons verifies that the facility meets its criteria. So it’s a two stage process. There are standard guidelines for what sorts of patients the various levels can care for, and centers lower than a Level I must have an ongoing relationship with higher centers to assure smooth transfer when needed. As with NICUs, trauma centers engage in outreach and teaching to help their regions improve care. Trauma centers also have extensive programs of quality control and outcome measurement to see how they are doing and how they measure up to national benchmarks. There are parallel pathways for adult and pediatric trauma centers. That is, a facility can be Level I for both. It can also be mixed. For example, it might be Level I for adults and Level II for children.

In contrast, PICUs, which care for critically ill and injured children from infancy up to adolescence, are kind of a disorganized mess. There is no classification system accompanied by guidelines to sort facilities into different groups according to what patients are most appropriate where. Large PICUs at children’s hospitals are de facto highest level facilities. But how should we stratify smaller places? For example, one major determinant would be if the facility offers pediatric heart surgery. Many, even most PICUs don’t. The smaller PICUs I know have a more or less worked out arrangement with a higher level PICU to transfer children they cannot care for. But, unlike the case with trauma centers, there is no requirement to have this. There’s no requirement for anything, really.

I think it is past time for some order in this chaos. Many of my colleagues believe the same. The obvious leader for such a process would be the American Academy of Pediatrics. The AAP has been active in this for decades in NICUs. There already exists an AAP Section on Critical Care, of which I am a member. Perhaps some AAP-sanctioned group is already working on the issue. If there is, I haven’t heard anything about it.

How safe are home births? And what does “safe” mean?

February 21, 2016  |  General  |  No Comments

The safety of giving birth at home, both for the mother and for the infant, has been debated for years. I’ve written about the issue myself. From time immemorial until about 75 years ago most babies were born at home. Now it’s around 1% in the USA, although it’s much higher than that in many Western European countries. The shift to hospital births paralleled the growth of hospitals, pediatrics, and obstetrics. With that shift there has been a perceived decrease in women’s autonomy over their healthcare decisions. There has also been an unsurprising jump in the proportion of cesarean section deliveries, an operative procedure, and various other medical interventions in labor and delivery, even though current data suggest the recent jump in cesarean delivery (now around 30%) has not added any benefits. The debate over whether the dominance of hospital births is a good thing or a bad thing (or neither) is much more than a medical debate; it is also a social and political one. It is also to some extent an issue of medical power, a struggle between physician obstetricians who deliver babies in the hospital and nurse midwives who often deliver babies at home. I’m very interested in the social and political aspects, but as a pediatrician I’m particularly concerned with the safety question: Is it more dangerous for your baby to be born at home?

One problem in answering this question is that most of the studies about the safety of home birth came from abroad. But now we have some data from the USA, published in a recent issue of the prestigious New England Journal of Medicine, entitled “Planned out-of-hospital birth and birth outcomes.”

One big problem with evaluating previous data has been that vital statistics from birth certificates counted home births and hospital births, but did not identify as a separate category those women who planned to deliver at home but then were admitted to a hospital to deliver there because of some issue with the labor. Such women were just counted as hospital births. Also, the recent growth of birthing centers has introduced a setting intermediate between home and hospital. A recent large study from Oregon, using data from 2012 and 2013, gives some useful information.

The bottom line is that children born to women who intended to give birth at home had an infant mortality rate of 3.9 deaths per 1,000 deliveries. This was significantly higher than the death rate of infants born in a hospital, which was 1.8 deaths per 1,000 deliveries. Not surprisingly, women who delivered in the hospital had a far higher rate of some kind of intervention, such as cesarean section.

What should we make of this? Thinking about risk can be difficult, and it is important to understand the difference between relative and absolute risk. (I’ve written about that, too.) Media reports often obscure this key point. For example, in this study the risk of infant mortality increased by roughly 100% with home birth. 100%! But twice a very small number is still a very small number. The absolute risk of a baby dying in a home delivery is very small. Still, it is higher.
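To make the relative-versus-absolute distinction concrete, here is a minimal arithmetic sketch in Python using the per-1,000 mortality rates quoted above from the Oregon data. The exact figures depend on the study’s definitions and confidence intervals, so treat the output as illustrative rather than definitive.

```python
# Illustrative only: relative vs. absolute risk, using the per-1,000 infant
# mortality rates quoted above from the Oregon data.
home_rate = 3.9 / 1000      # planned out-of-hospital births
hospital_rate = 1.8 / 1000  # hospital births

relative_risk = home_rate / hospital_rate        # about 2.2, i.e. roughly double
absolute_difference = home_rate - hospital_rate  # 0.0021, or 2.1 per 1,000 births
deliveries_per_excess_death = 1 / absolute_difference  # roughly 480 deliveries

print(f"Relative risk: {relative_risk:.1f}x")
print(f"Absolute difference: {absolute_difference * 1000:.1f} extra deaths per 1,000 births")
print(f"About one extra death per {deliveries_per_excess_death:.0f} planned home births")
```

Both numbers describe the same data; which one feels decisive depends on a family’s tolerance for risk, which is exactly the question the editorial discussed below takes up.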

What this means is that a woman deciding to deliver at home should understand all the facts. Some will not want to accept this increased risk, however small it is in absolute terms. Some will accept it. The same issue of the Journal had a good editorial discussing how to think about the issue. It’s a very good summary of the fundamental question. It’s all about the issue of acceptable risk, and how that varies with the person. The conclusion:

Ultimately, women’s choices for place of delivery will be determined by the extent of their tolerance for risk and which risks they most want to avoid.