CBO finds that 19 million would lose their health insurance if the ACA is repealed

June 19, 2015  |  General  |  No Comments

[This is important. It was written by Phil Galewitz and republished (by permission) from Kaiser Health News (KHN), a nonprofit national health policy news service.]

Repealing the federal health law would add 19 million people to the ranks of the uninsured in 2016 and increase the federal deficit over the next decade, the Congressional Budget Office said Friday.

The report is the first time the CBO has analyzed the costs of the health law using a format favored by congressional Republicans, known as dynamic scoring, that factors in the effects on the overall economy. It is also the agency’s first analysis of the law under Keith Hall, the new CBO director appointed by Republicans earlier this year.

CBO projected that a repeal would increase the federal deficit by $353 billion over 10 years because of higher direct federal spending on health programs such as Medicare and lower revenues. But when including the broader effects of a repeal on the economy, including slightly higher employment, it estimated that the federal deficit would increase by $137 billion instead.

Both estimates are higher than in 2012, the last time that the CBO scored the cost of a repeal.

The latest report from the nonpartisan congressional watchdog and the Congressional Joint Committee on Taxation comes just days before the Supreme Court is expected to rule on the health law’s premium subsidies in the nearly three dozen states that rely on the federal marketplace. Such a ruling would cut off subsidies to more than 6 million people and be a major blow to the Affordable Care Act. It could also boost Republican efforts to repeal the entire 2010 law, though a repeal would likely face a presidential veto.

Last week, President Barack Obama said nearly one in three uninsured Americans have been covered by the law—more than 16 million people.

The CBO said repealing the health law would first reduce the federal deficits in the next five years, but increase them steadily from 2021 through 2025. The initial savings would come from a reduction in government spending on the federal subsidies and on an expanded Medicaid program. But repealing the law would also eliminate cuts in Medicare payment rates to hospitals and other providers and new taxes on device makers and pharmaceutical companies.

The CBO projected that repeal would leave 14 million fewer people enrolled in Medicaid over the next decade. Medicaid enrollment has grown by more than 11 million since 2013, with more than half the states agreeing to expand their programs under the law.

By 2024, the number of uninsured would grow by an additional 24 million people if the law is repealed.

In 2012, the CBO projected that repealing the health law would increase the federal deficit by $109 billion over 10 years. It said the higher amount in Friday’s report reflected the inclusion of later years, when federal spending under the law would be greater.


Despite recent media spotlight there really is little controversy over the existence of shaken baby syndrome

June 1, 2015  |  General  |  No Comments

A recent series of articles in the Washington Post and a segment on NPR have caused quite a stir. The articles are about what we have called for decades “shaken baby syndrome.” It can be fatal. We now use the term non-accidental head trauma. This term replaced the older one because it is more specific; children can be deliberately harmed in ways other than shaking, and inflicted trauma can happen in places other than the head. The Post article was about the shaking variety, and it focused on several things. It highlighted several individuals who had apparently been wrongly convicted of injuring a child through shaking. It also interviewed physicians who do not believe in the diagnostic entity; they say shaken baby syndrome does not exist. Not surprisingly, the article generated a lot of comment and debate, debate that has actually been going on for some time. As a pediatric intensivist for over 30 years I have dealt with many unfortunate examples of this entity, and I have no doubt that it exists. But, like all disorders that lack a specific, definitive test, deciding whether or not a child has suffered shaken baby syndrome depends upon more than some x-rays and an eye examination; you need to consider the entire context of the story.

Shaken baby syndrome was first described in the 1960s as the combination of several injuries: subdural hematoma (bleeding around the brain), retinal hemorrhages (bleeding at the back of the eye), and brain swelling. Rib fractures are also common because the person doing the shaking typically squeezes the child’s chest hard enough to crack ribs. How do these injuries happen with shaking? The fundamental cause is that a small baby has a relatively large head compared to the rest of his body and is unable to hold his head firmly in place because the neck muscles aren’t yet strong enough. So shaking snaps the head back and forth, generating very large forces inside the skull as the brain bangs back and forth. This can rupture some of the small veins that surround the brain, as well as tiny vessels in the back of the eye. The brain then often swells afterwards, as any tissue does when injured. If death or severe injury follows, it is generally because of the brain swelling. If ribs are broken from squeezing the chest, the fractures happen at the back of the bones, where the ribs come off the spinal column.

There have always been some issues about diagnosing the syndrome. The main one is that all of the components of shaken baby syndrome can occur individually in other settings. Another issue is that, as in most cases of potential child abuse, the alleged assault is unwitnessed and the victim cannot give any evidence. So all the evidence is circumstantial. And, of course, the stakes are very high, not just for the injured child; adult caregivers can be convicted of murder. The Post article focuses on several cases like that. Some physicians have gone so far as to claim that the syndrome doesn’t even exist. The American Academy of Pediatrics vigorously disagrees:

Journalists can be commended for addressing child abuse. Unfortunately, the Post’s report is seriously unbalanced, sowing doubt on scientific issues that actually are well-established. It is very clear that shaking a baby is dangerous.

It is important to acknowledge that mistakes probably have been made in both over-diagnosing and under-diagnosing abuse. The Post focused on over-diagnosis, but under-diagnosis also is a problem, leaving babies vulnerable to further abuse and even death. It’s critical we get this right.

Well okay, you might say. Of course the Pediatric Establishment would say something like that. But I think it is clear the syndrome exists. I have seen it many times and have been involved in legal proceedings charging the perpetrator, the majority of whom ultimately confessed to the act. The thing is, in all the cases I have been involved in, there were other findings that pointed toward child abuse. For example, a baby’s tissues are delicate, and the squeezing and shaking often cause obvious bruising. I have seen several cases where the bruises even matched adult finger marks. If rib fractures are present in the typical places, there is essentially no way a baby could have gotten them without shaking. Finally, and very importantly, there is the history. Injuries like bleeding in the brain need to be explained. If there was no child abuse, then there has to be another coherent, logical explanation. I have never been involved in a case in which the potential perpetrators (or their lawyers) could give such an explanation. I have been involved, however, in cases in which no perpetrator was identified because several were possible and none came forward with the real story. More from the AAP:

What are the facts? [about denying the existence of the syndrome] . . . it involves a tiny cadre of physicians. These few physicians testify regularly for the defense in criminal trials — even when the medical evidence indicating abuse is overwhelming. They deny what science in this field has well-established. They are well beyond the bounds where professionals may disagree reasonably. Instead, they concoct different and changing theories, ones not based on medical evidence and scientific principles. All they need to do in the courtroom is to obfuscate the science and sow doubt.

Miscarriages of justice are tragic. But so is child abuse, and it is unfortunately not uncommon. Jury trials are imprecise and blunt tools of justice. If I were sitting in the jury box, I would like some other evidence besides just the triad of subdural hematoma, retinal hemorrhages, and brain swelling. In my experience there is generally additional evidence. I think the Post article is more than a little like the way the media portrays other scientific issues — controversy sells (or these days attracts page views). The media presents several scientific issues, for example childhood vaccinations, as he-said-she-said stories even when the scientific consensus is overwhelmingly one way and not the other.


“Cowboy” doctors mostly increase costs and risks without benefiting patients

May 12, 2015  |  General  |  2 Comments

Some months back I read an interesting interview with Jonathan Skinner, a researcher who works with the group at the renowned Dartmouth Atlas of Health Care. More than anyone else I can think of, the people at the Dartmouth Atlas have studied and tried both to understand and to explain the amazing variations we see in how medicine is practiced in various parts of the country. It turns out that specific conditions are treated in quite different ways depending upon where you live. Atul Gawande documented a detailed example of the phenomenon in an excellent New Yorker article here. A major determinant appears to be local physician culture, how we doctors “do things here.” The disturbing observation is that patient outcomes aren’t much different, just cost. Of course it’s more than cost. Doing more things to patients also increases risk, and adding risk without benefit is not what we want to be doing.

Skinner is interested in something else, a phenomenon he calls “cowboy doctors.” By this he means physicians who are individual outliers, who go against the grain by substituting their own individual judgments for those of the majority of their peers. In theory such lone wolf practitioners could go either way. They could do less than the norm, but almost invariably they do more — more tests, more treatments, more procedures. Such physicians not only may put their patients at higher risk, they also add to medical costs. I have met physicians like that and have usually found them to be defiant in their nonconformity. A few revel in it. They maintain they are doing it for the good of their patients, but there is more than a little of that old physician ego involved. There is also the subtext of what many physicians feel these days, especially old codgers like me who have been practicing for 35 years: the tension between older notions of medicine as an art, a craft, and newer evidence-based, team-driven practice. Skinner describes it this way:

It’s the individual craftsman versus the member of a team. And you could say, ‘Well, but these are the pioneers.’ But they’re less likely to be board-certified; there’s no evidence that what they’re doing is leading to better outcomes. So we conclude that this is a characteristic of a profession that’s torn between the artisan, the single Marcus Welby who knows everything, versus the idea of doctors who adapt to clinical evidence and who may drop procedures that have been shown not to be effective.

Leaving aside outcomes and moving on to costs, Skinner and his colleagues were quite surprised to discover how much these self-styled cowboys and cowgirls were adding to the nation’s medical bills. They found that such physicians accounted for around 17% of the variability in regional healthcare costs. To put that in dollars, it amounts to a half-trillion dollars. That is an astounding number.

So what we are looking at here is a dichotomous explanation for the huge regional variations in medical costs. On the one hand we have physicians who conform to the local culture, stay members of the herd and go along with the group, even if the group does things in a much more expensive way that confers no additional benefit to patients. On the other hand we have self-styled mavericks who scorn the herd and believe they have special insight into what is best, even if all the research shows they’re wrong.

I think what is coming from all this cost and outcome research is that best practice, evidence-based medicine (when we have that — often we don’t for many diseases) will be enforced by the people who pay the bills and professional organizations. Yes, some will bemoan this as the loss of physician autonomy and the reduction of medical practice to cookbooks and protocols. I sympathize with that viewpoint a little, especially since I am the son and grandson of physicians whose practice experience goes back to 1903. But really, there are many things we used to do that we know now are useless or even harmful. An old professor of mine had a favorite saying for overeager residents: “Don’t just do something — stand there!”

For those who would like to dive into the data and see the actual research paper from the National Bureau of Economic Research describing all this, you can read it here.


Recalling a public health triumph: the eradication of cretinism

April 30, 2015  |  General  |  No Comments

One of the words we don’t use any more is cretin; it has long been a derogatory slur rather than a precise description of anything. But a century ago cretinism actually meant a specific thing: a person, generally a child, who was severely damaged by a lack of thyroid hormone during early development, particularly fetal development. Now we call the condition congenital hypothyroidism. A few cases still occur, which is why we screen all newborns for thyroid function. But the most common cause a century ago was hypothyroidism — low thyroid hormone — in pregnant women, and the most common cause of that was deficiency of iodine in the diet.

The thyroid gland sits in the front of your neck, just below your voice box (larynx). It has two lobes, one on either side, connected by a little bridge. Its job is to make, store, and release thyroxine, or thyroid hormone. This hormone has several important functions, acting upon nearly every cell in the body in one way or another. It affects the metabolism of cells, how they use energy, and is key to cellular growth and development. The thyroid gland needs iodine to make thyroxine properly. A thyroid that is not making thyroxine properly may swell into a goiter, another thing that once was common and now is rare. There are various reasons adults may develop low thyroxine levels and become hypothyroid. These days the condition is easily treated by taking oral thyroid hormone every day. The problem for a baby developing in the womb is that a deficiency of thyroxine in the mother causes irreversible damage before the baby is born, and thus before we can give the infant thyroid hormone.

Congenital hypothyroidism is now rare in the developed world. Why? You can read the history lesson of why in a nice review here, but the reason is iodine supplementation of food, particularly salt. This is a fascinating example of several companies, particularly the giant Morton Salt Company, listening to the advice of medical experts and then just adding iodine to their product. This turned out to be an easy thing to do.

The result was an astounding public health triumph. Congenital hypothyroidism on the basis of iodine deficiency is still a problem in the developing world, but it has been eliminated from the developed world. To me it brings to mind the addition of fluoride to water and the subsequent dramatic reduction in dental caries in children. Interestingly, although I have a graduate degree in the history of medicine, I am unaware of any organized efforts to resist iodized salt as there have occasionally been for fluoridated water. You can buy salt without iodine, although I don’t know why you would want to, but salt is found in nearly every food product that has been processed in any way, such as bread. So you can’t really avoid it.

Again, this is an example of a simple, well targeted population intervention, like vaccination, that conquered a disease that had plagued people for millennia.

Has Obamacare made it easier or harder to get a doctor’s appointment?

April 23, 2015  |  General  |  No Comments

One of the goals of the Affordable Care Act (aka Obamacare) was to increase access to primary care physicians. The notion is that if people have insurance, it is easier for them to get appointments with primary care physicians, because many physicians are unwilling to accept new patients who are uninsured. Further, a key component of the ACA was to increase physician reimbursement for Medicaid, because this program was a major mechanism for expanding insurance coverage. Medicaid reimbursement has always been low — significantly lower than Medicare pays for the same encounter — so many physicians would not take it. The ACA drafters hoped higher reimbursement would entice these physicians to accept Medicaid. We don’t know whether all of these assumptions are correct, but a recent study published in The New England Journal of Medicine suggests a positive impact.

The authors’ method was a bit sneaky, I suppose. They had trained field staff call physicians’ offices posing as potential patients asking for new appointments. The callers were divided into two groups: one group said they had private insurance, the other said they had Medicaid. The authors compared two time periods, before and after the early implementation of the ACA. A sample of states was compared to see if the rates of acceptance of new Medicaid patients were associated with a particular state increasing physician Medicaid reimbursement.

The results were not striking, but they suggest a significant positive trend. This is what the results showed, in the authors’ words:

The availability of primary care appointments in the Medicaid group increased by 7.7 percentage points, from 58.7% to 66.4%, between the two time periods. The states with the largest increases in availability tended to be those with the largest increases in reimbursements, with an estimated increase of 1.25 percentage points in availability per 10% increase in Medicaid reimbursements (P=0.03). No such association was observed in the private-insurance group.
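To make the arithmetic concrete, here is a minimal sketch, in Python, of what that estimated slope implies. This is my own illustration, not the authors’ code; the baseline availability and the slope come from the passage above, while the fee increases fed into it are hypothetical.

```python
# Illustrative sketch only. The slope (1.25 percentage points of appointment
# availability per 10% increase in Medicaid reimbursement) and the 58.7%
# baseline come from the NEJM study quoted above; the fee increases below
# are hypothetical.

SLOPE_PP_PER_10PCT = 1.25     # percentage points per 10% reimbursement increase
BASELINE_AVAILABILITY = 58.7  # % of Medicaid callers offered an appointment, pre-ACA

def predicted_gain(reimbursement_increase_pct: float) -> float:
    """Predicted gain in appointment availability, in percentage points."""
    return SLOPE_PP_PER_10PCT * (reimbursement_increase_pct / 10.0)

for bump in (10, 20, 30):  # hypothetical fee increases, in percent
    gain = predicted_gain(bump)
    print(f"{bump:>3}% fee increase -> +{gain:.2f} pp "
          f"(roughly {BASELINE_AVAILABILITY + gain:.1f}% availability)")
```

Treat this as a rough linear extrapolation of the study’s point estimate, nothing more; the real relationship is surely not exactly linear.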

Again, these are data from the early days of ACA implementation. But they are encouraging. One of the most important components of slowing the seemingly inexorable rise in healthcare costs is getting people good primary and preventative care. This keeps people with a chronic, manageable condition out of the emergency room and, one hopes, out of the hospital. This is particularly the case with common conditions like diabetes and asthma. For both of those disorders regular care by a primary care physician can spare patients much suffering and save many thousands of dollars.

I hope this kind of research continues as the ACA matures. It’s a good way to see if the overall goals are being met. Of course it raises a new challenge: making sure we have enough primary care physicians. Right now we don’t.

Measuring the economic good of Medicaid and CHIP over the long term

April 16, 2015  |  General  |  No Comments

The CHIP program (Children’s Health Insurance Program) has just been reauthorized by Congress. This program provides health insurance for children of lower-income families who still earn too much to qualify for Medicaid. Both CHIP and Medicaid provide essential, even life-saving healthcare for kids. That’s a good thing. A recent research study asked a deeper question: what are the long-term economic effects of providing this care, of keeping children healthy into adulthood? The study doesn’t address the humanitarian aspects, which are huge, but rather the cold, hard economic ones.

The authors used the expansion of Medicaid and the implementation of CHIP that occurred in the 1990s to follow children who had obtained healthcare via those programs and were now adults. The bottom line is that just over half of those healthcare dollars spent were recouped in the form of taxes over the working lifetime of the subjects. Again, this doesn’t even take into account the global benefit to society of keeping people from suffering. In the words of the authors:

The government will recoup 56 cents of each dollar spent on childhood Medicaid by the time these children reach age 60. This return on investment does not take into account other benefits that accrue directly to the children, including estimated decreases in mortality and increases in college attendance.
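To see how a figure like 56 cents per dollar can be built up, here is a back-of-the-envelope sketch. Every input is a hypothetical placeholder of my own, not the study’s data; the point is only the mechanics of discounting future tax payments back to the year the Medicaid dollar was spent.

```python
# Back-of-the-envelope sketch: how a "cents recouped per dollar spent"
# figure can be assembled. All numbers below are hypothetical placeholders,
# not the study's data.

def recouped_per_dollar(extra_tax_by_age: dict[int, float],
                        spend_age: int = 5,
                        discount_rate: float = 0.03) -> float:
    """Present value, at spend_age, of the extra taxes paid at each later age."""
    return sum(
        tax / (1.0 + discount_rate) ** (age - spend_age)
        for age, tax in extra_tax_by_age.items()
    )

# Hypothetical: 4.5 cents of extra annual tax from ages 25 through 60,
# discounted at 3% back to age 5, works out to roughly 56 cents.
extra_taxes = {age: 0.045 for age in range(25, 61)}
print(f"Recouped per dollar: {recouped_per_dollar(extra_taxes):.2f}")
```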

There were several less measurable multiplier effects that pushed the return even higher than that. One of these was the reduced likelihood that the subjects would collect Earned Income Tax Credit payments in the future. They conclude:

We find that by expanding Medicaid to children, the government recoups much of its investment over time in the form of higher future tax payments. Moreover, children exposed to Medicaid collect less money from the government in the form of the Earned Income Tax Credit, and the women have higher cumulative earnings by age 28. Aside from the positive return on the government investment, the eligible children themselves also experience decreases in mortality and increases in college attendance.

To me it seems pretty intuitive that keeping children healthy makes them more likely to be healthy adults, and healthy adults are more likely to become able-bodied, working taxpayers. They also have longer lifespans. This study gives important, long-term data to support that intuition. Plus, it’s the right thing to do.

Pediatric Newsletter #15: Food Allergies, Gluten, and Pizza

March 29, 2015  |  General  |  No Comments

Welcome to the latest edition of my newsletter for parents about pediatric topics. In it I highlight and comment on new research, news stories, or anything else about children’s health I think will interest parents. In this particular issue I tell you about a couple of new findings about allergies in children, as well as some new information about gluten sensitivity. I have over 30 years of experience practicing pediatrics, pediatric critical care (intensive care), and pediatric emergency room care. So sometimes I’ll use examples from that experience to make a point I think is worth talking about. If you would like to subscribe, there is a sign-up form on the home page.

Big News About Peanut Allergies

This one made a big splash both in the medical news sites and in the general media. Peanut allergy is common. It has doubled in the past decade, now affecting between 1 and 3% of all children. And it can be a big deal for children who have it, even life-threatening. For years we recommended that children not be given peanut products early in life, especially if they are at risk (based on their other medical issues) for developing allergy. Unfortunately, avoiding peanuts in the first year of life doesn’t make a child less likely to develop the allergy. So what, if anything, can?

This recent, very well done study published in the prestigious New England Journal of Medicine is really ground-breaking. It took 4- to 11-month-old children at high risk for developing peanut allergy and divided them into two groups. One group got the “standard” approach — being told to avoid peanut exposure. The other group was fed peanuts three times per week, in the form of either a peanut snack or peanut butter.

At age 5 years (the long follow-up time is a particularly strong feature of the study) the children who had been fed the peanuts had nearly a 90% reduction in the development of peanut allergy. This is a huge difference.
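For readers who want the arithmetic behind a phrase like “nearly a 90% reduction”: relative risk reduction compares the incidence of allergy in the two groups. Here is a minimal sketch using hypothetical round numbers, not the trial’s exact figures.

```python
# Relative risk reduction (RRR) arithmetic. The incidences below are
# hypothetical round numbers for illustration, not the trial's data.

def relative_risk_reduction(control_rate: float, treated_rate: float) -> float:
    """RRR = (control - treated) / control."""
    return (control_rate - treated_rate) / control_rate

control = 0.17  # e.g., 17% of avoidance-group children develop the allergy
treated = 0.02  # e.g., 2% of early-exposure children develop the allergy
print(f"RRR = {relative_risk_reduction(control, treated):.0%}")  # ~88%
```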

The study also was able to provide a scientific explanation for the difference. The children fed the peanuts developed protective antibodies that cancel out the ones that provoke the allergic response.

Washing Dishes by Hand May Reduce the Risk of Food Allergies

This report comes from Pediatrics, the journal of the American Academy of Pediatrics. There has been a long-standing theory about how allergies develop in children called the “hygiene hypothesis.”

The notion is that children, particularly in Western countries, are more prone to allergies (and asthma) because their exposure to microbes is delayed by our more sanitized environment.

In this study from Sweden, children in households that washed dishes by hand rather than using a dishwasher had a lower risk of subsequent allergies. The authors speculated that the association was causal. They couldn’t prove that, but they also noted that early exposure to fermented foods, and buying food directly from farms, correlated with fewer allergies as well. I’m not totally convinced, but it is an interesting study worth thinking about. I expect to see more on the topic.

Does the Age at Which You Introduce Gluten Into Your Child’s Diet Affect Future Risk of Gluten Sensitivity?

Gluten sensitivity is in the news, with signs everywhere advertising “gluten free” as if this is always a good thing. I hear a lot of misconceptions about gluten sensitivity. Gluten is a protein found in grains such as wheat and barley. There is a condition, called celiac disease or sprue, in which a person can develop moderate or severe intestinal symptoms triggered by gluten. It is one of the eighty or so autoimmune diseases. The prevalence of celiac disease in the US is about 0.7%. The risk of developing celiac disease is closely linked to a genetic predisposition. Importantly, if you don’t have the disorder, there is no benefit to eliminating gluten from your diet. In fact, the great majority of people who think they have sensitivity to gluten . . . don’t.

But for those children who do have a higher risk of developing celiac disease because of their genetic makeup, it has long been a question whether delaying gluten exposure will affect their chances of actually getting the disease. A good recent study gives an answer to that question, and the answer is no: there is no correlation.

If you think your child (or you yourself) has problems with gluten, there is a useful blood test that looks for a specific antibody. However, many people who have the antibody never develop symptoms of celiac disease. The ultimate test is an intestinal biopsy.

My take-away conclusion is that all this gluten-free stuff you see in, for example, restaurants, is just the latest dietary fad. For over 99% of us there is no health benefit to avoiding gluten.

So How Much Pizza Do Teenagers Eat?

This is kind of a quirky item. If nothing else, it demonstrates how weird the medical literature can be sometimes. Every parent knows kids, teenagers in particular, mostly love pizza. A recent article in Pediatrics, a fairly respected journal, used food surveys to find out how much pizza kids eat and the percentage of their daily calories they get, on average, from pizza.

The answer? The authors report that in 2010, 21% of kids ages 12 to 19 said they had eaten pizza sometime in the past 24 hours. That number is actually down from a similar survey in 2003. What about calories? For those kids who reported eating pizza, it accounted for about 25% of their daily calories, and that hasn’t changed. The authors primly suggest that we should make pizza more nutritious. I wish them luck with that. And I’m 63 years old and still like pizza.

SLAPPS in Medicine: Can medical researchers ever be at risk for being sued for libel?

March 5, 2015  |  General  |  No Comments

It has become a common technique for large companies or other powerful organizations, when they meet public opposition, to use a strategy called a strategic lawsuit against public participation, or SLAPP for short. I have seen one of these in action. In the case I observed, a large development company wanted to obtain a parcel of public land by offering the US Forest Service a swap for an obviously inferior piece of land the company owned. Many citizens objected and organized against the proposal. Their actions had a reasonable chance of blocking the land swap. The company responded by suing the leaders of the citizens’ group — a SLAPP. The key concept of these suits is not that the instigators expect to win them. They almost never do, even on the rare occasions when they make it to trial. But just the threat of a lawsuit and huge monetary damages has a chilling effect on ordinary people, who do not have armies of lawyers. It puts them through stress and, most importantly, great financial cost to contest the SLAPP. It has the effect of frightening off opposition.

Could a similar process happen in medical research? A recent example suggests that this is possible. The details are presented here. The authors ask this question:

Does fear of libel lawsuits influence what gets published in medical journals? We suggest it may, especially when the conclusions run counter to corporate interests.

The particular case involved a study in which the researchers investigated the relationship between TV advertisements for fast food and children’s perception of the product. As it happens, 99% of fast food advertising directed at children comes from only two companies: McDonald’s and Burger King. The investigators concluded that the companies failed to comply with the guidelines of the Children’s Advertising Review Unit of the Better Business Bureau. The authors then submitted their findings to Pediatrics, a journal of the American Academy of Pediatrics. The manuscript review process stopped when the legal department of the Academy recommended that the names of the fast food companies be removed from the paper. The lawyers were concerned about being sued by one or both of these fast food giants. However, the authors believed that naming names was important, and they withdrew the paper from consideration. Here is what they were told by the journal editor:

In the event that a defamation claim is brought as a result of the publication of this article, the publishing company could be named as defendants. Based on these findings and advice from counsel, we recommend the article not be published.

The article eventually was published by the journal PLOS One, with the company names included. This series of events raises important questions for medical research. Remember, the point of SLAPPs is not to actually win a libel suit. Rather, it is to put the SLAPP target through trouble and expense sufficient to warn them off. Valid medical research cannot be libel. There is actually a 1994 court decision (Underwager v. Salter) holding that scientific disagreements should be decided in the scientific, not the legal, arena. And truth is always a defense against libel.

I have no idea if this sort of thing is an isolated instance or happens more frequently. I have long been concerned that, with the decline of federal NIH support for medical research, industrial financial support carries the risk of compromising the work. This is what cannot be allowed to happen:

. . . any article that reaches negative conclusions about a company’s practices or products risks rejection, as it is company practice today to strategically threaten libel suits to ward off legitimate criticism.

This is a serious issue, one all of us who use the medical research literature need to think about.

Vaccines, quarantines, and compulsion in supporting the public health

February 9, 2015  |  General  |  2 Comments

I posted a version of this one last year, but the recent outbreak of measles has once again ignited the debate over just what the government has the right to do, or not do, in compelling individual actions in support of public health. This is an old question, and it’s worth considering in historical context.

One aspect of the endless vaccine debate is the coercion some parents feel about requiring children to be vaccinated before they can go to school. The government mandates vaccination. But this isn’t really an absolute requirement. Although all 50 states ostensibly require vaccination, all but two (Mississippi and West Virginia) allow parents to opt out for religious reasons, and 19 states allow opting out for philosophical reasons. (See here for a list.) Still, in general vaccines are required unless the child has a medical reason not to get them, such as a problem with the immune system. Is this an unprecedented use of state power? I don’t think it is.

In fact, historically there have been many examples of the government inserting itself into the healthcare decisions of individuals and families in order to protect the public health. Some of these go back many years. Quarantine, for example, goes back to medieval times, centuries before germs were discovered. Since 1944 it has been a power of the federal government; federal agents may detain and send for medical examination persons entering the country suspected of carrying one of a list of communicable diseases. Quarantine has also been used by local and state governments, particularly in the pre-antibiotic era; diphtheria is a good example. Quarantine can be abused, and has in fact been abused in the past to discriminate against certain minority groups. A brief paper from the American Bar Association details some of those instances here. The paper even suggests that quarantine should be abolished for these reasons. But the practice is a very old one.

Of course the government mandates many things for the protection of public health. Milk is pasteurized (although there are raw milk enthusiasts who object), water is purified, and dirty restaurants can be closed. Like quarantine, these measures restrict our personal freedom a little, but what about government-mandated medical treatment? That sounds a bit more like the situation with compulsory vaccination of children. As it happens, there are more recent examples of compulsory treatment, particularly involving tuberculosis.

A couple of decades ago I was involved in the case of a woman with active tuberculosis who refused to take treatment for it. Worse, her particular strain of TB was highly resistant to many antibiotics, so if it spread it would represent a real public health emergency. The district judge agreed. He confined the woman to the hospital against her will so she could be given anti-TB medications until she was no longer infectious to others. At the time I thought this was pretty unusual. When I looked into it, though, I found that there have been many instances of people with TB being confined against their will until they were no longer a threat to others. The ABA link above lists several examples of this.

So it’s clear to me there is a long tradition of the state restricting personal freedom in the service of protecting the public health. Like everything, of course, the devil is in the details. To me the guiding principle is that your right to swing your fist ends where my nose begins.

Pediatric Newsletter #13: childhood sleep, concussions, and more

January 31, 2015  |  General  |  No Comments

Here is the latest of my more or less monthly newsletter on pediatric topics. In it I highlight and comment on new research, news stories, or anything else about children’s health that I think will interest parents. If you want to subscribe to it and get it in the form of an email each month there is a sign-up form at the very bottom of my home page.


New Study Shows How and Why Sleep Patterns Change During Adolescence 

Every parent of a teenager knows that teens tend to go to sleep later and are harder to rouse out of bed in the morning. It turns out that as elementary school children become early and then mid-teenagers, these changing sleep patterns are a normal result of the hormonal changes their bodies are going through. According to the data in a new study, conducted with 94 children in all, children are programmed to get less sleep as they mature.

A typical 9-year-old fell asleep at 9:30 p.m. on a weekday upon first enrolling in the study and would wake up at 6:40 a.m. By age 11, the same child would go to sleep at 10 p.m. The net result for that child – and many others in the cohort of 38 children who joined the study at 9 or 10 years old – would be steadily less sleep every night.
The study is one of the few to track individual kids for longer than a year. It showed wide individual differences in these trajectories. For some children, the data show, the shift to a later bedtime without a later wakeup time was abrupt, possibly putting them at a greater disadvantage relative to their peers in school.

The American Academy of Pediatrics has suggested later school start times for teenagers, advising that middle and high schools start no earlier than 8:30 a.m.

For Kids With Simple Concussions, a Couple of Days’ Rest Is Enough

The past few years have brought increasing attention to concussions, particularly the long-term effects of repeated ones. We need to take them seriously. But they are common, and the vast majority of children recover without any further brain issues. The optimal way to care for children following a concussion is still unknown, although there is one key principle: a child should not do anything that could lead to another head injury, such as returning to contact sports, until the symptoms of the concussion have resolved. Common symptoms are headache, vomiting, and difficulty concentrating.

Some authorities have long recommended extensive bed rest following a concussion. A new study indicates that this is not needed and does not help the brain heal any faster. In fact, the authors noted that children placed on strict bed rest tended to focus on their symptoms even more, which is not surprising to me.

You can read more about concussions — what they are, what they mean — in a blog post I wrote here.


Stress During Pregnancy Can Affect Fetal Development

This study is in mice, not people, but it has very suggestive findings. The bottom line was that pregnant mice who had high levels of stress hormones had smaller offspring, and low birth weight is an important marker for later problems in infants. The effect was not because the stressed mothers ate less — they actually ate more. Although the causes of low birth weight are many, it makes sense that a stressful environment for the fetus might be one of them.


Starting Serious Contact Football Before the Age of 12 Linked to Later Brain Problems

While we’re talking about concussions and head injury (see above), another important study, in the journal Neurology, found, at least in NFL players, a correlation between later degenerative brain problems and the age at which the player first began to play. At least minor head injury is almost inevitable in football. It is likely that there are many injuries that don’t reach the level of concussion but which, over time, add up. The identification of age 12 as the threshold for increasing risk of later problems makes sense given what we know about brain development in children.


Small Screens in a Child’s Bedroom Interfere With Sleep

We should probably file this one in the common sense department: if you allow your child to have a small screen in the bedroom, such as a smartphone, it will interfere with his or her sleep. I know we found that to be the case with my own son. I guess it’s good to know that research confirms it.


Massive Study Confirms Safety of Measles Vaccine

Measles is very much in the news these days after the outbreak of the infection in California, which was linked to higher numbers of unvaccinated children. An inevitable byproduct has been the resurrection of fear of the vaccine. Multiple past studies have debunked any links with autism or any other serious ailments. So this study is timely.

Researchers at Kaiser Permanente studied nearly 800,000 vaccine doses over 12 years and found no serious issues.