Earlier this year a child died following a surgical procedure in California for a condition called obstructive sleep apnea. The case generated a great deal of concern among parents about both this condition and the surgery often done to treat it. I wrote a post about it myself at the time, and I still get questions because I care for quite a few children immediately after they have had surgery for it: What is it? How do we diagnose it? How do we treat it? Is surgery always necessary?
Technically the word apnea means cessation of breathing, but what we generally mean by it is a significant pause in breathing. In practice we define an apneic pause as twenty seconds without taking a breath. Most of us can hold our breath that long without difficulty if we mean to, such as when we dive beneath the water. But that is when we are awake and in control of things. Things are different when we are asleep. A person with sleep apnea is unaware of their abnormal breathing pattern because they are asleep, although as you will read it’s often not very healthy sleep.
The condition is called obstructive sleep apnea because the problem is the result of obstruction of air flow. The obstruction happens at the level of what we call the upper airway — primarily the back of the throat. When we’re awake we keep good control of the tension in the muscles around our upper airway. But when we’re asleep the muscular tissues back there relax. They may sort of flop together and this can lead to obstruction. Think of the passageway as a pipe. If you look in the mirror and open your mouth wide you can see most of the key components. The tongue is at the bottom of the pipe, and the top is composed of the soft palate toward the front and the adenoids toward the back, although you can’t see the adenoids because they are tucked up behind the soft palate at the back of the nasal passages. The sides of the pipe at the narrowest point are the tonsils. Right at the back of the throat the pipe makes a right-angle turn and dives downward, so the tissues at the back of the throat are also important. Obstruction of air flow happens when any of those components bulge out into the pipe to make it significantly smaller.
There’s a principle of physics that comes into play here. It was originally described for fluid flow through a pipe, but it also applies to air flow. The principle, known as Poiseuille’s law, is that, if other things are kept constant, flow is proportional to the fourth power of the radius of the pipe. That may sound esoteric, but it has real, practical implications for obstructive sleep apnea. It means that a relatively small reduction in the size of the opening has a huge impact on how much flows through it. So if you make the airway pipe half as wide you reduce flow by a factor of sixteen. People with obstructive sleep apnea have the size of their pipe sufficiently reduced when asleep to block air flow, and this leads to problems.
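The fourth-power scaling can be checked with a few lines of arithmetic. This is just an illustrative sketch of the physical rule, not anything from the clinical literature; all the numbers are hypothetical.

```python
# Illustrative sketch of the fourth-power (Poiseuille) scaling between
# the radius of a pipe and the flow through it. All numbers are hypothetical.

def relative_flow(radius_ratio: float) -> float:
    """Flow relative to baseline when the radius is scaled by radius_ratio,
    holding pressure, pipe length, and viscosity constant (flow is
    proportional to r**4)."""
    return radius_ratio ** 4

# Halving the radius cuts flow to 1/16 of its original value.
print(relative_flow(0.5))            # prints 0.0625, a sixteen-fold reduction

# Even a 20% narrowing cuts flow by roughly 60%.
print(round(relative_flow(0.8), 4))  # prints 0.4096
```

The steep curve is the point: modest swelling of tonsils or adenoids produces an outsized drop in air flow.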
What are those problems? They stem from not getting enough oxygen into the lungs, not getting enough carbon dioxide out of the lungs, or a combination of both. This causes a reduction in oxygen and a build up of carbon dioxide in the blood, not a good thing. The brain recognizes things are not right and responds by not really going into a deep, normal sleep; the person frequently partially awakens so that more normal muscle tone returns, briefly enlarging the airway — at least until he falls back to sleep. The heart has to work harder than normal. The vessels of the lungs can be affected in a bad way.
What are the symptoms of sleep apnea? The most common is severe snoring, especially when the person is lying on his back. If you listen to someone with sleep apnea snoring you often hear a pattern of steadily worsening snoring, then a pause in breathing, then a sort of snort as they arouse, and then a repeat of the cycle. People with the condition are often drowsy during the day because they never really get a good night’s sleep. For children, poor school performance is common because of this. Morning headaches are also common.
How common is obstructive sleep apnea in children? A reasonable estimate is that around 1% of children have symptoms of it at one time or another. It may be increasing because the problem is associated with being overweight and the prevalence of that among children is increasing.
How do we treat it? There are really two ways, both designed to keep the airway open when sleeping. A treatment often used in adults is CPAP, which stands for continuous positive airway pressure. The idea is that if you place a tight-fitting mask over the mouth and/or the nose and blow air through it down into the lungs, the pressure will hold the airway open. It can be very effective. As you can imagine, however, many children will not tolerate wearing this apparatus because it can be uncomfortable and the CPAP machine is noisy. The second option is surgery to make the airway bigger. That usually means removing the tonsils and adenoids, although sometimes working on other parts of the airway, such as the soft palate, is part of the procedure.
If you are concerned that your child has sleep apnea — perhaps she snores loudly, you hear pauses in her breathing while asleep, and she is drowsy during the day — how can you be sure? After all, we don’t want to be doing surgery needlessly. It’s an important question, and we have some good practice guidelines, published by the American Academy of Pediatrics, to go by. The most important principle is that we have specific tests we can do, tests incorporated into what we call a sleep study. A sleep study measures blood oxygen levels and airflow through the airway while the child sleeps. For many years these studies were difficult to do in children, but now many centers can do them. My own view is that, for most children, surgery should not be done without a sleep study to confirm the diagnosis. The exception would be if the diagnosis is clear (severe, obvious obstruction when asleep) or the child already has signs of stress on the heart and lungs.
How good is the surgery in children? Does it work? A recent research study helps to answer that question. For most children, surgery helped their sleep pattern significantly. The investigators didn’t demonstrate improvement in cognitive function during the day, such as school performance, but the time frame may have been too short to show that.
How dangerous is the surgery? The California case demonstrates that it, like any surgical procedure, is not without risk. There may be excessive bleeding afterwards, although we have ways to deal with that. Although this kind of surgery is often “same day,” meaning you can go home some hours afterwards, your child may need to stay in the hospital overnight. This is particularly likely if the sleep apnea was severe. The figures I’ve seen put the risk of death, the ultimate risk, at somewhere between 1 in 12,000 and 1 in 15,000. The death risk relates not only to the actual surgery but also to the anesthesia required to do it.
From time immemorial until about 75 years ago or so most babies were born at home. Now it’s around 1% in the USA, although it’s much higher than that in many Western European countries. The shift to hospital births paralleled the growth of hospitals, pediatrics, and obstetrics. With that shift there has been a perceived decrease in women’s autonomy over their healthcare decisions. There has also been an unsurprising jump in the proportion of Caesarian section deliveries, an operative procedure, and various other medical interventions in labor and delivery. So the debate over whether this is a good thing or a bad thing (or neither) is much more than a medical debate; it is also a social and political one. It is also to some extent an issue of medical power, a struggle between physician obstetricians who deliver babies in the hospital and nurse midwives who often deliver babies at home. I’m very interested in the social and political aspects, but as a pediatrician I’m particularly concerned with the safety question: Is it more dangerous for your baby to be born at home?
There have been many studies that attempt to answer that question. Many, even most, of them come from outside the USA. The results are mixed. Some say hospital birth is safest (this one, for example), others that there is no difference (this one, for example). One US study that found home delivery to be riskier has had its methodology heavily criticized. What we need are some well-designed, large cohort studies from the USA, especially since healthcare systems differ substantially from country to country. I think this recent study from the American Journal of Obstetrics and Gynecology is very useful in that regard.
The main question the authors tried to answer was how the babies did. Their measure was the frequency of two well-accepted markers of infant distress, things that correlate with trouble later in development. Bear in mind that there can be many reasons for these bad things to happen — some avoidable, some not. The notion is that if one can study a large enough group, then particular circumstances for individual births will wash out in the totals.
The potentially bad things that the authors chose were easy things to count and document. The first was a low Apgar score at 5 minutes after birth. The Apgar score, scaled 0 to 10 and recorded at 1 and 5 minutes after birth, has been a standard, well-validated measure of infant distress for many decades. A value of less than 4 is potentially very bad for the baby. The other measure the authors chose was seizures (convulsions) immediately after birth. These can be caused by many things, but most of them are bad.
The study group consisted of over 2 million infants born in 2008. Of these only 12,000 were planned home births (0.6%). This shows how uniform hospital birth has become in the USA, but 12,000 is still a very large group of babies. They excluded babies born unexpectedly outside the hospital.
The results showed that babies born in the hospital, as you would expect, had a very much larger percentage of obstetrical interventions of various sorts associated with their birth. Regarding the distress measures, 0.24% of hospital-born babies had Apgars of less than 4 at 5 minutes; this compared with a rate of 0.37% for babies delivered at home, about 1.5-fold higher. This difference was statistically significant. Also significantly different was the rate of seizures: for hospital-born babies it was 0.02% and for home-birth babies it was 0.06%, or 3-fold higher.
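The fold differences here are just the ratio of the two rates (a relative risk). A quick sketch of that arithmetic, using only the percentages already given in the text:

```python
# Quick check of the relative risks quoted above. The rates come from the
# study as cited in the text; this sketch only does the arithmetic.

def relative_risk(exposed_rate: float, baseline_rate: float) -> float:
    """Ratio of an outcome rate in one group to the rate in a comparison group."""
    return exposed_rate / baseline_rate

apgar_rr = relative_risk(0.37, 0.24)    # low 5-minute Apgar: home vs hospital
seizure_rr = relative_risk(0.06, 0.02)  # neonatal seizures: home vs hospital

print(round(apgar_rr, 2))   # prints 1.54, the "1.5 fold" in the text
print(round(seizure_rr, 1)) # prints 3.0, the "3-fold" in the text
```

Note that a relative risk says nothing by itself about how common the outcome is; that is why the next paragraph's point about low absolute rates matters.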
So what does this mean? First, the incidences of both of these bad things, although statistically higher in the home-birth group, were still very low. That is encouraging. But to understand things better you need to dig deeper into the data and see who attended at these deliveries. In the hospital it was presumably a physician, but what about at home? After all, “home delivery” can mean many things.
In this study, 26% of the home birth attendants were certified nurse-midwives, 51% were other midwives, and the remainder something else. A key finding to me is that the outcomes for babies delivered at home by certified nurse-midwives were no different than for those born in the hospital. So proper training matters — a lot. One key thing a trained midwife should offer is the knowledge of which pregnancies are higher risk and unsafe to deliver outside a hospital.
Both the American Academy of Pediatrics and the American College of Obstetricians and Gynecologists have issued policy statements about planned home births. The bottom line to me is that, while neither society is thrilled with the practice, both say properly selected, low-risk pregnancies can be safely delivered at home (they give lists of what proper selection means, which is itself a bit controversial, such as whether a previous Caesarian section should be a disqualifier). As a pediatrician, I should point out that if you choose a home birth there are some routine things that need to be done for your baby in the first days of life, such as a hearing screen. So you should bring your infant to the doctor promptly for a newborn evaluation.
Complicated medical procedures can be dangerous, even when done by highly skilled and experienced people. Why? Because, irrespective of the procedural risk itself, all of us are human and we can overlook or forget things, no matter how many times we have done the procedure. This was recognized many years ago in the airline industry. Flying an airplane is a complicated and potentially dangerous activity, and there are many steps to go through and check before takeoff. This is why, as you board a commercial airplane, you see the pilot and copilot going through a standardized list of things even though the pilot may have thirty years’ experience. Missing something can be fatal.
This process of formal checklists entered medical practice some years ago, first in the specialty of anesthesiology. It is one of the main reasons, along with new monitoring devices, that anesthesia is much, much safer than it was several decades ago. This approach then spread to other areas of medicine, in large part because of the work of patient safety guru Peter Pronovost. The idea is simple: for every procedure, rather than just tick things off in our mind like I was trained to do, we should go through a formal checklist process to make sure everything is correct and in place. Many of these are pretty simple things. Do we have the right patient? Are we doing the correct procedure on the correct body part? Do we have all the stuff we need ready to go for the procedure? This may sound sort of obvious, even silly, but there are many sad examples of physicians doing the wrong operation on the wrong patient.
The checklist concept really took off with Atul Gawande’s widely read book (it was a New York Times bestseller) The Checklist Manifesto: How To Get Things Right. The groundswell to establish checklists before and during procedures has now reached most hospitals. I know in my practice things have changed. In the past when I needed to do a procedure on a patient I just gathered up the personnel and equipment I needed and got started. Now we go through a checklist. An important part of the process is that any member of the team who has questions or issues is encouraged — mandated, really — to raise them. Now that I’m used to it, I like the new way better than the old one.
But the big question, of course, is whether this increased role of formal checklists before procedures has done anything. Have rates of, say, wrong-patient or wrong-site errors, or other bad outcomes, improved? There are data showing that complications from at least one procedure, placement of central venous catheters, are reduced by checklists. But what else do we know? A recent article and accompanying editorial in the New England Journal of Medicine examined this question. The upshot is that things are murky.
The research study is from Canada. It looked at 3,733 consecutive patients at 8 hospitals that had implemented checklists for operative procedures. The bottom line was that there was no improvement in measurable outcomes. But hold on, observed the author of the editorial. As he saw it, the problem was that the checklists were foisted upon the operating room personnel without any preparation. There was apparently some resistance at the novelty of them, accompanied by gaming of the system — “dry-labbing the experiment,” as we used to say in the laboratory. The author’s point is that we really don’t know if the demonstrable success of checklists in some aspects of patient care can be generalized to other things. We hope so, but we don’t know for sure. The editorial author’s explanation for the findings of the research study is simple:
The likely reason for the failure of the surgical checklist in Ontario is that it was not actually used.
Nearly all physicians are now subject to patient satisfaction ratings. In my case, and for many thousands of my colleagues across the country, it is via the survey tool sold to healthcare facilities by the Press Ganey Company. There are also many, many online sources that rate physicians, such as this one and this one. The idea is a good one: physicians should be subject to feedback from patients about patient perceptions of how good a job the doctors do. If nothing else, how are we otherwise to change our behavior if we don’t find out where our problems are? The surveys don’t measure medical competence, but they could be a good metric of another aspect of how good we are as physicians. But, as currently used, patient satisfaction surveys are riddled with problems. They don’t measure what they’re supposed to measure, and they can easily drive physician behavior the wrong way.
I’ve read the Press Ganey survey forms, and the questions they ask are all very reasonable. I’d like to see the results if all the parents of my patients would fill one out. But that’s the problem. It is a fundamental principle of statistics that the sample (those who fill out the survey) must be representative of the whole group (all the patients) for the analysis to be valid. This doesn’t happen. Although the forms are sent out to a random sample of patients, a very nonrandom subset of them is returned. Perhaps only the patients who are happy, or those who are unhappy, send them back. This is in fact likely. For the analysis to have any validity at all, the patients who do not return the forms must be randomly distributed among all those sent forms. But a valid survey, one in which efforts are made to get a very high return rate using such things as follow-up calls or contacts, is much more expensive to do.
There is another problem. Patient satisfaction and good medical care do not entirely overlap. It is certainly true that an experienced and skilled physician can and should deliver bad news in a way that leaves the patient feeling understood. But not infrequently doctors have to tell a patient that what the patient wants is not good medical care. This might range from something as simple as not prescribing antibiotics for a viral illness, even though the patient may want them, to refusing narcotics to a drug-seeking patient in the emergency department. Both of these scenarios are common, and so can be the result: a dissatisfied patient. This issue would also be solved by getting surveys from a truly random sample of patients, since the dissatisfied antibiotic-seeking or drug-seeking ones would be washed out by all the others. But now the mad ones fill out the forms while many others toss them in the trash.
This is not a trivial issue. Recent research has strongly suggested that the most satisfied patients often don’t get the best care; they are more likely to be admitted to the hospital (an often dangerous place), and they may even have a higher death rate. The best doctors can easily have the worst patient satisfaction scores.
I don’t want you to think I am against holding physicians accountable for what we do — I’m not. Patient satisfaction is a key component of how to do that. But we must have better tools, especially since we are now tying a doctor’s income to the satisfaction score. What we do now can easily result in statistical nonsense. Any scientist will tell you that bad data are worse than no data.
For what it’s worth, I looked for my own scores on several of the big physician rating sites. Good news! I got 4 stars (excellent)! The number of reviews I could find, out of the thousands of patients I’ve seen over 35 years of practice? One — a single review. Maybe it just means I’m not very memorable. But thanks anyway to whoever the reviewer was. Still, one out of many thousands doesn’t seem to be a very representative sample.
By now everyone should know that texting while driving is dangerous. Just talking on a cell phone while driving can have the same overall effect on attention and reaction time as driving while intoxicated. Texting while driving increases crash risk 23-fold. But like most drivers I still pass cars with drivers bent over their phones texting away. Although we oldsters are catching up, texting is still more prevalent among younger people, especially teenagers. Teenage drivers already have a higher risk of getting into accidents owing to their inexperience. What do we know about what texting adds to this? We already know that 13% of all car crashes of 18-20 year-olds occur when they are using some sort of mobile device. A recent study gives us some information about how common texting in particular is.
The study was a survey of 8,505 students older than 16 that assessed texting behavior, as well as the association of texting with other high-risk behaviors while driving. The results should cause us some concern: nearly half of the students had texted while driving during the previous month. More than that, texting while driving was associated with a significantly higher probability of the teenager not wearing a seat belt, riding in a car in which the driver had been drinking, or, most concerning, driving after drinking themselves.
I’m not surprised by these associations since risky behavior in teenagers comes as a bundle — if they do one risky thing they are more likely to do another. But parents should be aware of the particular danger of texting and talk to their kids about it. And, of course, we should set a good example by not texting while driving ourselves.
Few pediatricians doubt that ADHD — attention deficit hyperactivity disorder — is a real thing that can be quite disabling to some children. Further, few pediatricians question that stimulant medications like Adderall and Concerta can be very helpful for these children. But any reasonable person should be skeptical that 11% of all children and 20% of teenage boys have ADHD requiring medication. Those are the recent numbers reported by the Centers for Disease Control. The person most responsible for identifying ADHD, and for 30 years of research into how to treat it, is Dr. Keith Conners, emeritus professor of psychology at Duke University. In a recent interview he had this to say about this apparent epidemic:
“The numbers make it look like an epidemic. Well, it’s not. It’s preposterous,” . . . “This is a concoction to justify the giving out of medication at unprecedented and unjustifiable levels.”
There is big money to be made in ADHD. As you can see from the graph above, sales of stimulants have risen 500% since 2002. ADHD is also just the kind of disorder the drug companies love — a chronic condition that requires daily medication for many years. They are far less interested in developing drugs that cure things, because then the market goes away. Drug companies market ADHD. They sponsor a large number of conferences for physicians and mental health workers encouraging them to diagnose it and, of course, to treat it. Patient advocacy groups for ADHD are helpful, but they also get a huge chunk of their funding from the drug industry, something few people know. And now a whole new frontier for Big Pharma has opened up with the identification that some adults have ADHD and respond to stimulant therapy. For those who suffer from ADHD the therapy helps significantly. But you can see the temptation to over-diagnose it in adults, too. With all these prescriptions you can also see the ease with which they can be diverted to the illicit drug market. That happens frequently.
Is there any hope of moderating this trend, of getting the prescription numbers back down out of the stratosphere? A recent editorial in Psychiatric Times by Allen Frances sees some hope. That hope is based in the common sense of people pushing back. Here is what he has to say about that:
The percentage of kids being diagnosed (11% overall and 20% of teenage boys) is so absurdly high that reasonable people can no longer accept that the label is being applied with anything approaching sufficient care and caution.
I hope he’s correct. The principal problem with diagnosing ADHD is that there is no specific test for it, no blood test or scan that doctors can use to decide who has it. The diagnosis is made by ticking off items on a checklist, a checklist that was devised by committees of experts, committees which periodically change their minds and modify the diagnostic criteria. Inevitably the diagnosis has a fair amount of subjectivity built into it. Dr. Frances also reminds us that psychiatry has always wrestled with the subjective nature of mental illness, a situation that is ripe for diagnostic fads and fashions:
The history of psychiatry is littered with the periodic recurrence of fad diagnoses that suddenly achieve prominence and then just as suddenly fade away. Human distress is always hard to explain and sometimes hard to tolerate. Diagnostic labels, even false ones, can gain great and undeserved popularity because they seem to explain the otherwise unexplainable and provide hope that describing a problem will lead to improving it. And once you have a diagnostic hammer, everything begins looking like a nail.
I think that is quite perceptive.
A newly minted physician, one who has just graduated from medical school, is not yet ready (or licensed) to practice medicine. The next phase in medical training is called residency – a 3- to 5-year span during which the new doctor is given teaching and supervision and is increasingly allowed to function independently in his or her chosen specialty.
Since 2003 residents have been limited to working 80 hours per week, averaged over 4 weeks, with no individual stretch being longer than 16 hours. The rationale for this time restriction is reasonable and difficult to argue with: it dates back to the famous Libby Zion case. She died, and analysis of the case implicated resident fatigue and lack of supervision as contributing significantly to her death. The tragedy focused attention on the ways that overworked, overtired, and poorly supervised residents can harm patients. We don’t want that. But we don’t know the right balance between the patient care service residents provide and their education.
There is no doubt that for many years residents put in too many long hours — well over 100 per week was common. I did that when I trained in the late 1970s. The first year, long called the internship year, was the most brutal. In my case that amounted to at least 120 hours per week, often longer. We got every third Sunday afternoon off — if it was quiet. Subsequent years were less onerous, but always were 100 hours or more.
There is also no doubt that medicine is a career you learn by doing, so sitting in a lecture hall until you see your first patient as a physician is not the way to train doctors, although it was once done that way a century ago in the era before the Flexner Report. Have we found the right balance among the competing claims of resident education, practical on-the-job experience, and patient safety? A recent review article from the Annals of Surgery, “A systematic review of the affects [sic] of resident duty hour restrictions in surgery: impact on resident wellness, training, and patient outcomes,” gives us some useful information about the question. It is particularly useful because surgery residents need more than abstract cognitive skills; they also need quick and decisive decision-making skills and physical dexterity, which they can only get through practice. The American College of Surgeons has been particularly concerned about the effect reduced duty hours have on resident skills.
The review linked above is what is called a meta-analysis. This is a technique in which many smaller studies are pooled together to yield a larger data set, in this case a total of 135 separate articles. The results were disconcerting for advocates of duty hours restrictions. First, there was no improvement in patient safety. In fact, some studies suggested worse patient outcomes. Resident formal education may well have suffered; in 48% of the studies resident performance on standardized tests of their fund of knowledge declined, with 41% reporting no change. Importantly, only 4% reported improved resident performance. Resident well-being is difficult to measure, but there are some survey tools that assess burn-out; 57% of the studies that examined this showed improved resident wellness and 43% showed no change. So the bottom line is that, for surgical residents, duty hour restrictions were associated with better rested residents doing no better, and often worse, on assessments of their knowledge base. Patient safety, a key goal of the new rules, did not improve and actually may have been worse.
What are we to make of this? I can understand how resident test performance suffered. It suggests to me that most learning takes place at the patient bedside or in the operating room. In this analysis the additional free time for independent study didn’t help, either because residents didn’t do it or it’s not as effective. But what about patient safety? Why did that actually go down in more than a few of the studies?
One reason may be the problem of hand-offs of care. When residents’ duty shifts are shorter they need to hand off care of their patients to someone else more often. It’s well known that these are potentially risky times, since the resident assuming care probably doesn’t know the patient as well. Under the old system, I often would admit a sick patient and keep caring for that patient for 24-36 hours. When that happens you really get to know the details of your patients well. From an educational perspective, you also see the natural history of an illness as it evolves. Finally, you develop a closer relationship with patients and their families than happens if residents are continually coming and going.
For myself, I am conflicted over how well we are doing training residents under the new rules. I don’t want to be like an old fart sitting on the porch and yelling at the neighborhood kids to get off my lawn. The old days were not necessarily the Good Old Days. The system I trained under was brutal to residents and sometimes dangerous to patients. But it also crammed an immense amount of practical experience into the available time. Today’s residents are denied that experience, and it shows. I am occasionally astonished by encounters with senior residents who have seen only a couple of cases in their lives of several not uncommon serious ailments.
What can we do? Some medical educators think that new advances in computer simulations and the like will substitute for lack of encounters with the real thing. Procedural specialties like surgery are particularly interested in simulations. We use them in pediatric critical care as well, and they help.
The bottom line is that the duration of residency has not changed in half a century or more, yet we are demanding that residents know more and more. Then we shorten their effective training time by duty hour restrictions; for some specialties it’s the equivalent of lopping a year off the residency. From what I have seen in my young colleagues, the practical result is that the first year or two of independent practice amounts to finishing the residency, acquiring the needed experience. Perhaps we should be honest about that and have the first couple post-residency years of being a “real doctor” be structured as getting mentorship from an experienced physician. As things stand, I think a fair number of finishing residents aren’t quite — almost, but not quite — ready to have the training wheels taken off their bikes.
Anyone who has worked in a hospital knows that there are a lot of treats around. These include cakes, cookies, and — most prized — boxes of candy, usually chocolates. Most of these are brought in by appreciative patients’ families and, in my experience, are always intended to be shared among the staff. The contents disappear quickly. A fun little paper in the British Medical Journal entitled “The survival time of chocolates on hospital wards” examines how fast they go.
The authors worked at three different hospitals in the United Kingdom. They placed two unopened boxes of chocolates (without revealing their origin) in each of four different patient care wards. They then kept these boxes “under continuous covert surveillance” to see what happened. They used two common UK brands — one by Cadbury and another by Nestle. The tongue-in-cheek use of clinical study language is probably the most fun part of the whole thing.
“The median survival time of a chocolate was 51 minutes (39 to 63). The model of chocolate consumption was non-linear, with an initial rapid rate of consumption that slowed with time. An exponential decay model best fitted these findings (model R²=0.844, P&lt;0.001), with a survival half life (time taken for 50% of the chocolates to be eaten) of 99 minutes. The mean time taken to open a box of chocolates from first appearance on the ward was 12 minutes (95% confidence interval 0 to 24). . . . The highest percentages of chocolates were consumed by healthcare assistants (28%) and nurses (28%), followed by doctors (15%).”
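For readers curious what an “exponential decay model” with a 99-minute half-life actually implies, here is a minimal sketch. This is not the authors’ actual fitting code; it simply shows the standard relationship between a half-life and a decay curve, using the half-life the paper reports.

```python
import math

# Exponential decay: fraction remaining at time t is exp(-k * t),
# where the decay rate k relates to the half-life by k = ln(2) / t_half.
# The paper reports a survival half-life of about 99 minutes.
HALF_LIFE_MIN = 99.0
decay_rate = math.log(2) / HALF_LIFE_MIN  # per minute

def fraction_remaining(minutes: float) -> float:
    """Fraction of the chocolates still uneaten after `minutes`."""
    return math.exp(-decay_rate * minutes)

# After one half-life, half remain; after two half-lives, a quarter.
print(round(fraction_remaining(99), 2))   # 0.5
print(round(fraction_remaining(198), 2))  # 0.25
```

So under this model roughly three quarters of a box is gone within about three and a quarter hours — which matches the authors’ observation that consumption is fastest right after the box is opened and slows as the less popular chocolates are left behind.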
These findings are in accord with my own experience. Nurses and healthcare assistants get the first bites owing to their increased access opportunities. Passing doctors, if they are lucky enough to be there when the box is opened, may get first crack, but generally they rank further back.
I wasn’t familiar with the chocolate brands, or whether the contents were the same, but I’d add my own observations on what disappears first from a mixed box. The caramels always go first, followed by anything crunchy. The cream-filled items, especially if the filling is fruity, are the last to go — they can languish for hours. Sometimes it’s difficult to tell what’s inside an individual chocolate. I’ve actually turned over a prospective candidate chocolate to find that the bottom had been pierced with a needle by some ingenious person checking what the filling was, then replaced because the findings were apparently unsatisfactory.
Also in my experience, hospital workers whose job entails covering several units or wards (such as respiratory therapists) often have acute antennae for detecting an open box of chocolates and make multiple passes through those particular areas, nipping one with each pass. Also in my experience, dieticians rarely partake. That is to be expected, I suppose.
Clinical medical research — finding out which treatments help and which ones don’t (or even make the situation worse) — is tough research to do. In the laboratory a scientist can control conditions so that only one thing is different between the control and the experimental groups. This isolates the effect of the particular thing and one can see if there is any difference in outcomes depending upon what is done with that thing. Clinical research is different because humans are complicated. The researcher tries to control the situation as much as possible, but ultimately she is comparing one dissimilar human to another one. Clinical research is also very expensive, and the costs grow as the number of study participants rises.
The result is that a lot of clinical studies, interventions in which researchers give patients this or that medicine and then try to find out if it worked, are underpowered. This means the studies aren’t powerful enough to answer the simple question: does this treatment help? And if one can’t answer that key question the whole enterprise is more or less a waste of time.
Partly to solve this problem the concept of “meta-analysis” was devised. The idea is that one can take a bunch of underpowered studies and lump the information together. This can create, in effect, a single study with enough power to answer the question. Critics have compared meta-analysis to making a silk purse from a sow’s ear — trying to take a lot of poor studies and make a good study from them. This can be a problem, although investigators do their best to choose only studies with usable data. But this kind of analysis can yield very important information. It is also comparatively cheap research to do because one is essentially doing research on the research. If you’ve ever taken your child to the doctor for treatment of croup you and your child have been the beneficiaries of what meta-analysis can accomplish.
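To make the “lumping together” concrete, here is a minimal sketch of the most basic pooling technique, fixed-effect inverse-variance weighting: each study contributes its effect estimate weighted by the inverse of its variance, so larger and more precise studies count for more. The effect sizes and standard errors below are made up purely for illustration; real meta-analyses (including the croup ones discussed next) involve much more careful study selection and statistics.

```python
# Fixed-effect inverse-variance meta-analysis: pool several small
# studies into one combined estimate with a smaller standard error.

def pool_fixed_effect(studies):
    """studies: list of (effect_estimate, standard_error) tuples.
    Returns (pooled_effect, pooled_standard_error)."""
    weights = [1.0 / (se ** 2) for _, se in studies]  # precision weights
    total_weight = sum(weights)
    pooled = sum(w * est for w, (est, _) in zip(weights, studies)) / total_weight
    pooled_se = (1.0 / total_weight) ** 0.5
    return pooled, pooled_se

# Three hypothetical underpowered trials of the same treatment effect:
trials = [(0.30, 0.25), (0.45, 0.30), (0.20, 0.20)]
effect, se = pool_fixed_effect(trials)
print(round(effect, 3), round(se, 3))
```

Notice that the pooled standard error comes out smaller than that of any single trial — that shrinking uncertainty is exactly the extra statistical power a meta-analysis buys.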
Croup is caused by swelling of the airway from a virus (see the link above for details), and corticosteroid medicines reduce swelling. So it seemed logical to try them for croup. But although some of the early studies suggested steroids helped, they were all underpowered to answer the question for sure. Then somebody did a meta-analysis with the data and showed steroids probably helped. This information then led other researchers to spend the large amount of time and effort to do some fully-powered studies. The results? Steroids, by mouth, injection, or even inhaled, help relieve the symptoms of croup.
So in this case it was a silk purse all along.
Welcome to my more or less monthly newsletter for parents about pediatric topics. In it I highlight and comment on new research, news stories, or anything else about children’s health I think will interest parents. I have 30 years of experience practicing pediatrics, pediatric critical care (intensive care), and pediatric emergency room care. So sometimes I’ll use examples from that experience to make a point I think is worth talking about. If you want to get the newsletter regularly you can sign up for it here, on my home page (down at the bottom).
The bad effects of bullying are cumulative
We’ve known for eons that bullying can be hard on children. Not surprisingly, bullying is also hard on children’s health. A new longitudinal study is useful in showing this: it followed over 4,000 children serially (that is, the same kids) when they were in the 5th, 7th, and 10th grades. The authors found that bullied children were far more likely to have poorer health overall; both chronic and current bullying are associated with substantially worse health. They conclude: “Clinicians who recognize bullying when it first starts could intervene to reverse the downward health trajectory experienced by youth who are repeated targets.”
One caveat is that children with chronic health problems are more likely to be bullied, so the cause and effect relationship is not totally straightforward. Still, it’s a useful study to have: bullying isn’t just mean.
Should you use retail clinics for your children?
The American Academy of Pediatrics has recently put out a policy statement about retail clinics — those free-standing places sometimes called “doc in a box.” Should you bring your child to one? In a nutshell, the AAP doesn’t like them. Of course you should not be surprised by that because in some ways they represent the competition. But the policy statement makes some good points that you should consider if you are thinking about taking your child with something simple like a sore throat or an ear ache to one.
These places won’t know your child; all they will know about her past medical history is what you tell them. Sometimes that matters, sometimes not, but it is a reality.
I’ve had some experience seeing children who have been to a retail clinic, and my experience tells me the training and skill sets of the providers working there are pretty uneven. Many seem to have poor pediatric knowledge and less than standard practice habits. It seems to me that the default for many of them is that the patient should leave with something, generally a prescription. So in my experience they over-diagnose ear infections, strep infections, and urinary tract infections. This makes for a lot of overprescription of oral antibiotics. They also tend to give antibiotics for what are clearly viral upper respiratory infections, a big no-no.
I’m not saying never use them if your child has an ear ache in the evening. But bear in mind the care you get may well be less than optimal. As I wrote above — sometimes that matters a lot, sometimes not so much.
Old foe, old remedy
We have a lot of antibiotics to choose from when treating children with pneumonia. There is always the temptation to use the newest and fanciest of them, but that can cause problems. For one thing, using the latest antibiotic on an uncomplicated case of what we call community-acquired pneumonia (that is, not caught while already in the hospital) fosters the development of bacteria resistant to most antibiotics, so when we really need the fancy ones they may not be effective. The newest ones are also typically the most expensive.
Recently the Pediatric Infectious Diseases Society has put out a recommendation that the older, cheaper, and more “narrow spectrum” antibiotics are preferred in ordinary pediatric pneumonia. So if your child has pneumonia, it would be entirely appropriate for you to bring this up with your doctor if he is ready to prescribe $150.00 worth of antibiotics.
Baby noise machines can be too loud

I’ve raised a couple of kids of my own so I know how frustrating it can be when they just won’t go to sleep. Like many parents, I found that for several months my daughter just wouldn’t go to sleep unless I drove her around in the car — the steady hum of the engine put her out. Then she was such a heavy sleeper I could bring her into the house and put her in her bed. Baby noise machines work on the same principle.
These machines make various sounds — gurgling water, a heart beat, or just “white noise.” That’s all fine, but be aware that a recent study suggested that some of them make noises loud enough to affect a baby’s hearing, which is quite delicate.
The authors measured sound levels in 14 machines at various distances from a child’s ear. They found that all of the machines were capable of producing levels of sound hazardous to hearing. The authors don’t specify the brands, but my reading of the study is that all of them can be too loud even when used according to the directions.
My take home on this is that if you use one of these machines, use the lowest settings. Nobody at home has the sophisticated equipment the study’s authors used to measure sound intensity, so there is no way for a parent to verify that a louder setting is safe.