“Cowboy” doctors mostly increase costs and risks without benefiting patients

May 12, 2015  |  General  |  No Comments

Some months back I read an interesting interview with Jonathan Skinner, a researcher who works with the group at the renowned Dartmouth Atlas of Health Care. More than anyone else I can think of, the people at the Dartmouth Atlas have studied, and tried to understand and explain, the striking variations we see in how medicine is practiced in different parts of the country. It turns out that specific conditions are treated in quite different ways depending upon where you live. Atul Gawande documented a detailed example of the phenomenon in an excellent New Yorker article here. A major determinant appears to be local physician culture, how we doctors “do things here.” The disturbing observation is that patient outcomes aren’t much different; only the costs are. Of course it’s more than cost. Doing more things to patients also increases risk, and adding risk without benefit is not what we want to be doing.

Skinner is interested in something else, a phenomenon he calls “cowboy doctors.” By this he means physicians who are individual outliers, who go against the grain by substituting their own individual judgments for those of the majority of their peers. In theory such lone wolf practitioners could go both ways. They could do less than the norm, but almost invariably they do more — more tests, more treatments, more procedures. Such physicians not only may put their patients at higher risk, they also add to medical costs. I have met physicians like that and have usually found them to be defiant in their nonconformity. A few revel in it. They maintain they are doing it for the good of their patients, but there is more than a little of that old physician ego involved. There is also the subtext of what many physicians feel these days, especially old codgers like me who have been practicing for 35 years: the tension between older notions of medicine as an art, a craft, and newer evidence-based, team-driven practice. Skinner describes it this way:

It’s the individual craftsman versus the member of a team. And you could say, ‘Well, but these are the pioneers.’ But they’re less likely to be board-certified; there’s no evidence that what they’re doing is leading to better outcomes. So we conclude that this is a characteristic of a profession that’s torn between the artisan, the single Marcus Welby who knows everything, versus the idea of doctors who adapt to clinical evidence and who may drop procedures that have been shown not to be effective.

Leaving aside outcomes and moving on to costs, Skinner and his colleagues were quite surprised to discover how much these self-styled cowboys and cowgirls were adding to the nation’s medical bills. They found that such physicians accounted for around 17% of the variability in regional healthcare costs. To put that in dollars, it amounts to a half-trillion dollars. That is an astounding number.

So what we are looking at here is a dichotomous explanation for the huge regional variations in medical costs. On the one hand we have physicians who conform to the local culture, stay members of the herd and go along with the group, even if the group does things in a much more expensive way that confers no additional benefit to patients. On the other hand we have self-styled mavericks who scorn the herd and believe they have special insight into what is best, even if all the research shows they’re wrong.

I think what is coming from all this cost and outcome research is that best-practice, evidence-based medicine (when we have that — often we don’t for many diseases) will be enforced by the people who pay the bills and by professional organizations. Yes, some will bemoan this as the loss of physician autonomy and the reduction of medical practice to cookbooks and protocols. I sympathize with that viewpoint a little, especially since I am the son and grandson of physicians whose practice experience goes back to 1903. But really, there are many things we used to do that we now know are useless or even harmful. An old professor of mine had a favorite saying for overeager residents: “Don’t just do something — stand there!”

For those who would like to dive into the data and see the actual research paper from the National Bureau of Economic Research describing all this, you can read it here.

 

Recalling a public health triumph: the eradication of cretinism

April 30, 2015  |  General  |  No Comments

One of the words we don’t use any more is cretin; it has long been a derogatory slur rather than a precise description of anything. But a century ago cretinism actually meant a specific thing: a person, generally a child, who was severely damaged by a lack of thyroid hormone during early development, particularly fetal development. Now we call the condition congenital hypothyroidism. A few cases still exist, which is why we screen all newborns for thyroid function. But by far the most common cause a century ago was hypothyroidism — low thyroid hormone — in pregnant women, and by far the most common cause of that was a deficiency of iodine in the diet.

The thyroid gland sits in the front of your neck, just below your voice box (larynx). It has two lobes, one on either side, connected by a little bridge. Its job is to make, store, and release thyroxine, or thyroid hormone. This hormone has several important functions, acting upon nearly every cell in the body in one way or another. It affects the metabolism of cells, how they use energy, and is key to cellular growth and development. The thyroid gland needs iodine to make thyroxine properly. A thyroid that is not making thyroxine properly may swell into a goiter, another thing that once was common and now is rare. There are various reasons adults may develop low thyroxine levels and become hypothyroid. These days the condition is easily treated by taking oral thyroid hormone every day. The problem for a baby developing in the womb is that a deficiency of thyroxine in the mother causes irreversible damage before the baby is born, and thus before we can give the infant thyroid hormone.

Congenital hypothyroidism is now rare in the developed world. Why? You can read the history of how this came about in a nice review here, but the reason is iodine supplementation of food, particularly salt. This is a fascinating example of several companies, particularly the giant Morton Salt Company, listening to the advice of medical experts and then just adding iodine to their product. This turned out to be an easy thing to do.

The result was an astounding public health triumph. Congenital hypothyroidism on the basis of iodine deficiency is still a problem in the developing world, but it has been eliminated from the developed world. To me it brings to mind the addition of fluoride to water and the subsequent dramatic reduction in dental caries in children. Interestingly, although I have a graduate degree in the history of medicine, I am unaware of any organized efforts to resist iodized salt as there have occasionally been against fluoridated water. You can buy salt without iodine, although I don’t know why you would want to, and in any case salt is found in nearly every food product that has been processed in any way, such as bread, so you can’t really avoid it.

Again, this is an example of a simple, well targeted population intervention, like vaccination, that conquered a disease that had plagued people for millennia.

Has Obamacare made it easier or harder to get a doctor’s appointment?

April 23, 2015  |  General  |  No Comments

One of the goals of the Affordable Care Act (aka Obamacare) was to increase access to primary care physicians. The notion was that if people had insurance it would be easier for them to get appointments with primary care physicians, because many physicians are unwilling to accept new patients who are uninsured. Further, a key component of the ACA was to increase physician reimbursement for Medicaid, because this program was a major mechanism for expanding insurance coverage. Medicaid reimbursement has always been low — significantly lower than Medicare pays for the same encounter — so many physicians would not take it. The ACA drafters hoped higher reimbursement would entice these physicians to accept Medicaid. We don’t know if these assumptions are correct, but a recent study published in The New England Journal of Medicine suggests a positive impact.

The authors’ method was a bit sneaky, I suppose. They had trained field staff call physicians’ offices posing as potential patients asking for new appointments. The callers were divided into two groups; one group said they had private insurance, the other said they had Medicaid. The authors compared two time periods, before and after the early implementation of the ACA, and a sample of states was examined to see whether the rate of acceptance of new Medicaid patients was associated with a particular state increasing its physician Medicaid reimbursement.

The results were not striking, but they suggest a significant positive trend. This is what the results showed, in the authors’ words:

The availability of primary care appointments in the Medicaid group increased by 7.7 percentage points, from 58.7% to 66.4%, between the two time periods. The states with the largest increases in availability tended to be those with the largest increases in reimbursements, with an estimated increase of 1.25 percentage points in availability per 10% increase in Medicaid reimbursements (P=0.03). No such association was observed in the private-insurance group.
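To get a feel for what that estimate means in practice, here is a small back-of-envelope sketch. It is my own illustration, not the authors’ analysis, and it assumes the relationship stays roughly linear over the range they studied:

```python
# Back-of-envelope illustration of the quoted estimate (not the study's own code).
# Assumption: appointment availability rises about 1.25 percentage points for every
# 10% increase in Medicaid reimbursement, roughly linearly over the range studied.

POINTS_PER_TEN_PERCENT = 1.25  # percentage points of availability per 10% fee increase

def projected_availability(baseline_pct: float, fee_increase_pct: float) -> float:
    """Project Medicaid appointment availability after a reimbursement increase."""
    return baseline_pct + POINTS_PER_TEN_PERCENT * (fee_increase_pct / 10.0)

# Hypothetical state: 58.7% baseline availability and a 40% Medicaid fee increase.
print(projected_availability(58.7, 40.0))  # about 63.7, i.e. roughly a 5-point gain
```

The real data are of course much noisier than this, but it shows why the states with the biggest fee increases tended to see the biggest gains in availability.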

Again, these are data from the early days of ACA implementation. But they are encouraging. One of the most important components of slowing the seemingly inexorable rise in healthcare costs is getting people good primary and preventative care. This keeps people with a chronic, manageable condition out of the emergency room and, one hopes, out of the hospital. This is particularly the case with common conditions like diabetes and asthma. For both of those disorders regular care by a primary care physician can spare patients much suffering and save many thousands of dollars.

I hope this kind of research continues as the ACA matures. It’s a good way to see if the overall goals are being met. Of course it raises a new challenge: making sure we have enough primary care physicians. Right now we don’t.

Measuring the economic good of Medicaid and CHIP over the long term

April 16, 2015  |  General  |  No Comments

The CHIP program (Children’s Health Insurance Program) has just been reauthorized by Congress. This is a program that provides health insurance for children of lower-income families who nonetheless make too much to qualify for Medicaid. Both CHIP and Medicaid provide essential, even life-saving healthcare for kids. That’s a good thing. A recent research study asked a deeper question: What are the long-term economic effects of providing this care, of keeping children healthy into adulthood? The study doesn’t address the humanitarian aspects, which are huge, but rather the cold, hard economic ones.

The authors used the expansion of Medicaid and the implementation of CHIP that occurred in the 1990s to follow children who had obtained healthcare via those programs and were now adults. The bottom line is that more than half of the healthcare dollars spent were recouped in the form of taxes over the working lifetime of the subjects. Again, this doesn’t even take into account the global benefit to society of keeping people from suffering. In the words of the authors:

The government will recoup 56 cents of each dollar spent on childhood Medicaid by the time these children reach age 60. This return on investment does not take into account other benefits that accrue directly to the children, including estimated decreases in mortality and increases in college attendance.

There were several less measurable multiplier effects that pushed the return even higher than that. One of these was a reduced likelihood that the subjects would collect the Earned Income Tax Credit in the future. The authors conclude:

We find that by expanding Medicaid to children, the government recoups much of its investment over time in the form of higher future tax payments. Moreover, children exposed to Medicaid collect less money from the government in the form of the Earned Income Tax Credit, and the women have higher cumulative earnings by age 28. Aside from the positive return on the government investment, the eligible children themselves also experience decreases in mortality and increases in college attendance.

To me it seems pretty intuitive that keeping children healthy makes them more likely to be healthy adults, and healthy adults are more likely to become able-bodied, working taxpayers. They also have longer lifespans. This study gives important, long-term data to support that intuition. Plus, it’s the right thing to do.

Pediatric Newsletter #15: Food Allergies, Gluten, and Pizza

March 29, 2015  |  General  |  No Comments

Welcome to the latest edition of my newsletter for parents about pediatric topics. In it I highlight and comment on new research, news stories, or anything else about children’s health I think will interest parents. In this particular issue I tell you about a couple of new findings about allergies in children, as well as some new information about gluten sensitivity. I have over 30 years of experience practicing pediatrics, pediatric critical care (intensive care), and pediatric emergency room care. So sometimes I’ll use examples from that experience to make a point I think is worth talking about. If you would like to subscribe, there is a sign-up form on the home page.

Big News About Peanut Allergies

This one made a big splash both in the medical news sites and in the general media. Peanut allergy is common. It has doubled in the past decade, now affecting between 1 and 3% of all children. And it can be a big deal for children who have it, even life-threatening. For years we recommended that children not be given peanut products early in life, especially if they are at risk (based on their other medical issues) for developing allergy. Unfortunately, avoiding peanuts in the first year of life doesn’t make a child less likely to develop the allergy. So what, if anything, can?

This recent, very well done study published in the prestigious New England Journal of Medicine is really ground-breaking. It took 4- to 11-month-old children at high risk for developing peanut allergy and divided them into two groups. One group got the “standard” approach — being told to avoid peanut exposure. The other group was fed peanuts three times per week, in the form of either a peanut snack or peanut butter.

At age 5 years (the long follow-up time is a particularly strong feature of the study) the children who had been fed the peanuts had nearly a 90% reduction in the development of peanut allergy. This is a huge difference.

The study also was able to provide a scientific explanation for the difference. The children fed the peanuts developed protective antibodies that cancel out the ones that provoke the allergic response.

Washing Dishes by Hand May Reduce the Risk of Food Allergies

This report comes from Pediatrics, the journal of the American Academy of Pediatrics. There has been a long-standing theory about how allergies develop in children called the “hygiene hypothesis.”

The notion is that children, particularly in Western countries, are more prone to allergies (and asthma) because their exposure to microbes is delayed by our more sanitized environment.

In this study from Sweden, children in households that washed dishes by hand rather than using a dishwasher had a lower risk of subsequent allergies. The authors speculated that the association was causal. They couldn’t prove that, but they also noted that early exposure to fermented foods, and buying food directly from farms, correlated with fewer allergies as well. I’m not totally convinced, but it is an interesting study worth thinking about. I expect to see more on the topic.

Does the Age at Which You Introduce Gluten Into Your Child’s Diet Affect Future Risk of Gluten Sensitivity?

Gluten sensitivity is in the news, with signs everywhere advertising “gluten free” as if this were always a good thing. I hear a lot of misconceptions about gluten sensitivity. Gluten is a protein found in grains such as wheat and barley. There is a condition, called celiac disease or sprue, in which a person can develop moderate or severe intestinal symptoms triggered by gluten. It is one of the eighty or so autoimmune diseases. About 0.7% of people in the US have celiac disease, and the risk of developing it is closely linked to genetic predisposition. Importantly, if you don’t have the disorder, there is no benefit to eliminating gluten from your diet. In fact, the great majority of people who think they have sensitivity to gluten . . . don’t.

But for those children who do have a higher risk for developing celiac disease because of their genetic makeup, it has long been a question whether delaying gluten exposure affects their chances of actually getting the disease. A good recent study gives an answer to that question, and the answer is no. There is no correlation.

If you think your child (or you) has a problem with gluten, there is a useful blood test that looks for a specific antibody. However, many people who have the antibody never get symptoms of celiac disease. The ultimate test is an intestinal biopsy.

My take-away conclusion is that all this gluten-free stuff you see in, for example, restaurants, is just the latest dietary fad. For over 99% of us there is no health benefit to avoiding gluten.

So How Much Pizza Do Teenagers Eat?

This is kind of a quirky item. If nothing else, it demonstrates how weird the medical literature can be sometimes. Every parent knows kids, teenagers in particular, mostly love pizza. A recent article in Pediatrics, a fairly respected journal, used food surveys to find out how much pizza kids eat and what percentage of their daily calories, on average, they get from pizza.

The answer? The authors claim that in 2010 21% of kids ages 12-19 reported eating pizza sometime in the past 24 hours. That number is actually down from a similar survey in 2003. What about calories? For those kids who reported eating pizza, it accounted for about 25% of their daily calories, and that hasn’t changed. The authors primly suggest that we should make pizza more nutritious. I wish them luck with that. And I’m 63 years old and still like pizza.

SLAPPS in Medicine: Can medical researchers ever be at risk for being sued for libel?

March 5, 2015  |  General  |  No Comments

It has become a common technique for large companies or other powerful organizations, when they meet public opposition, to use a strategy called strategic lawsuits against public participation, or SLAPP for short. I have seen one of these in action. In the case I observed, a large development company wanted to obtain a parcel of public land by offering the US Forest Service a swap for an obviously inferior piece of land the company owned. Many citizens objected and organized against the proposal. Their actions had a reasonable chance of blocking the land swap. The company responded by suing the leaders of the citizens’ group — a SLAPP. The key concept of these suits is not that the instigators expect to win them. They almost never do, even on the rare occasions when they make it to trial. But just the threat of a lawsuit and huge monetary damages has a chilling effect on ordinary people, who do not have armies of lawyers. It puts them through stress and, most importantly, great financial cost to contest the SLAPP. It has the effect of frightening off opposition.

Could a similar process happen in medical research? A recent example suggests that this is possible. The details are presented here. The authors ask this question:

Does fear of libel lawsuits influence what gets published in medical journals? We suggest it may, especially when the conclusions run counter to corporate interests.

The particular case involved a study in which the researchers investigated the relationship between TV advertisements for fast food and children’s perception of the product. As it happens, 99% of fast food advertising directed at children comes from only two companies: McDonald’s and Burger King. The investigators concluded that the companies failed to comply with the guidelines of the Children’s Advertising Review Unit of the Better Business Bureau. The authors then submitted their findings to Pediatrics, a journal of the American Academy of Pediatrics. The manuscript review process stopped when the legal department of the Academy recommended that the names of the fast food companies be removed from the paper. The lawyers were concerned about being sued by one or both of these fast food giants. However, the authors believed that naming names was important, and they withdrew the paper from consideration. Here is what they were told by the journal editor:

In the event that a defamation claim is brought as a result of the publication of this article, the publishing company could be named as defendants. Based on these findings and advice from counsel, we recommend the article not be published.

The article eventually was published by the journal PLOS One, with the company names included. This series of events raises important questions for medical research. Remember, the point of SLAPPs is not to actually win a libel suit. Rather, it is to put the SLAPP target through trouble and expense sufficient to warn them off. Valid medical research cannot be libelous. There is actually a 1994 court decision (Underwager v. Salter) which held that scientific disagreements should be decided in the scientific arena, not the legal one. And truth is always a defense against libel.

I have no idea if this sort of thing is an isolated instance or happens more frequently. I have long been concerned that, with the decline of federal NIH support for medical research, industrial financial support carries the risk of compromising the work. This is what cannot be allowed to happen:

. . . any article that reaches negative conclusions about a company’s practices or products risks rejection, as it is company practice today to strategically threaten libel suits to ward off legitimate criticism.

This is a serious issue, one all of us who use the medical research literature need to think about.

Vaccines, quarantines, and compulsion in supporting the public health

February 9, 2015  |  General  |  2 Comments

I posted a version of this one last year, but the recent outbreak of measles has once again ignited the debate over just what the government has the right to do, or not do, in compelling individual actions in support of public health. This is an old question, and it’s worth considering in historical context.

One aspect of the endless vaccine debate is the coercion some parents feel in the requirement that children be vaccinated before they can go to school. The government mandates vaccination. But this isn’t really an absolute requirement. Although all 50 states ostensibly require vaccination, all but two (Mississippi and West Virginia) allow parents to opt out for religious reasons, and 19 states allow this for philosophical reasons. (See here for a list.) Still, in general vaccines are required unless the child has a medical reason not to get them, such as having a problem with the immune system. Is this an unprecedented use of state power? I don’t think it is.

In fact, historically there have been many examples of the government inserting itself into the healthcare decisions of individuals and families in order to protect the public health. Some of these go back many years. Quarantine, for example, goes back to medieval times, centuries before germs were discovered. Since 1944 it has been a power of the federal government; federal agents may detain and send for medical examination persons entering the country suspected of carrying one of a list of communicable diseases. Quarantine has also been used by local and state governments, particularly in the pre-antibiotic era. Diphtheria is a good example, as you can see from the photograph above. Quarantine can be abused, and has in fact been abused in the past to discriminate against certain minority groups. A brief paper from the American Bar Association details some of those instances here. The paper even suggests that quarantine should be abolished for these reasons. But the practice is a very old one.

Of course the government mandates many things for the protection of public health. Milk is pasteurized (although there are raw milk enthusiasts who object), water is purified, and dirty restaurants can be closed. Like quarantine, these measures restrict our personal freedom a little, but what about government-mandated medical treatment? That sounds a bit more like the situation with compulsory vaccination of children. As it happens, there are more recent examples of compulsory treatment, particularly involving tuberculosis.

A couple of decades ago I was involved in the case of a woman with active tuberculosis who refused to take treatment for it. Worse, her particular strain of TB was highly resistant to many antibiotics, so if it spread it would represent a real public health emergency. The district judge agreed. He confined the woman to the hospital against her will so she could be given anti-TB medications until she was no longer infectious to others. At the time I thought this was pretty unusual. When I looked into it, though, I found that there have been many instances of people with TB being confined against their will until they were no longer a threat to others. The ABA link above lists several examples of this.

So it’s clear to me there is a long tradition of the state restricting personal freedom in the service of protecting the public health. Like everything, of course, the devil is in the details. To me the guiding principle is that your right to swing your fist ends where my nose begins.

Pediatric Newsletter #13: childhood sleep, concussions, and more

January 31, 2015  |  General  |  No Comments

Here is the latest of my more or less monthly newsletter on pediatric topics. In it I highlight and comment on new research, news stories, or anything else about children’s health that I think will interest parents. If you want to subscribe to it and get it in the form of an email each month there is a sign-up form at the very bottom of my home page.

 

New Study Shows How and Why Sleep Patterns Change During Adolescence 

Every parent of a teenager knows that teens tend to go to sleep later and are harder to rouse out of bed in the morning. It turns out that as elementary school children become early and then mid-teenagers, these changing sleep patterns are a normal result of the hormonal changes their bodies are going through. According to the data in a new study, conducted with 94 children in all, children are programmed to get less sleep as they mature.

A typical 9-year-old fell asleep at 9:30 p.m. on a weekday upon first enrolling in the study and would wake up at 6:40 a.m. By age 11, the same child would go to sleep at 10 p.m. The net result for that child – and many others in the cohort of 38 children who joined the study at 9 or 10 years old – would be steadily less sleep every night.

The study is one of the few to track individual kids for longer than a year. It showed wide individual differences in these trajectories. For some children, the data show, the shift to a later bedtime without a later wakeup time was abrupt, possibly putting them at a greater disadvantage relative to their peers in school.

The American Academy of Pediatrics has recommended later school start times for teenagers, advising that middle and high schools start no earlier than 8:30 a.m.

For Kids With Simple Concussions, a Couple of Days Rest is Enough

The past few years have brought increasing attention to concussions, particularly the long-term effects of repeated concussions. We need to take them seriously. But they are common, and the vast majority of children recover without any further brain issues. The optimal way to care for children following a concussion is still unknown, although there is one key principle: a child should not do anything that could lead to another head injury, such as returning to contact sports, until the symptoms of the concussion have resolved. Common symptoms are headache, vomiting, and difficulty concentrating.

Some authorities have long recommended extensive bed rest following a concussion. A new study indicates that this is not needed and does not help the brain heal any faster. In fact, the authors noted that children placed on strict bed rest tended to focus on their symptoms even more, which is not surprising to me.

You can read more about concussions — what they are, what they mean — in a blog post I wrote here.

 

Stress During Pregnancy Can Affect Fetal Development

This study is in mice, not people, but its findings are very suggestive. The bottom line was that pregnant mice with high levels of stress hormones had smaller offspring, and low birth weight is an important marker for later problems in infants. The effect was not because the stressed mothers ate less — they actually ate more. Although the causes of low birth weight are many, it makes sense that a stressful environment for the fetus might be one of them.

 

Starting Serious Contact Football Before the Age of 12 Linked to Later Brain Problems

While we’re talking about concussions and head injury (see above), another important study, in the journal Neurology, found, at least in NFL players, a correlation between later degenerative brain problems and the age at which the player first began to play. At least minor head injury is almost inevitable in football. It is likely that there are many injuries that don’t reach the level of concussion but which, over time, add up. The identification of age 12 as the threshold for increasing risk of later problems makes sense given what we know about brain development in children.

 

Small Screens in a Child’s Bedroom Interfere With Sleep

We should probably file this one in the common sense department: if you allow your child to have a small screen in the bedroom, such as a smartphone, it will interfere with his or her sleep. I know we found that to be the case with my own son. I guess it’s good to know that research confirms it.

 

Massive Study Confirms Safety of Measles Vaccine

Measles is very much in the news these days after the outbreak of the infection in California, which was linked to higher numbers of unvaccinated children. An inevitable byproduct has been the resurrection of fear of the vaccine. Multiple past studies have debunked any links with autism or any other serious ailments. So this study is timely.

Researchers at Kaiser-Permanente studied a total of nearly 800,000 vaccine doses over 12 years and found no serious issues.

 

The mathematics and politics of allocation of organs for transplant

January 13, 2015  |  General  |  No Comments

We have a problem in this country with how precious organs for transplant are allocated. The problem has been brewing for years and is well recognized in the transplant community, the physicians and institutions that perform transplants. Two recent opinion pieces review the issue well — here and here. Since PICUs such as mine are closely involved in the practice of organ transplantation, on both the donor and the recipient sides, pediatric intensivists like me have a great interest in the process. Above all else, we want it to be fair, because the supply of organs always falls short of the need. Many patients die on the waiting list.

The way the system works now is “locals first.” The country is divided into 58 geographic zones called donation service areas, which are in turn grouped into 11 regions. When an organ becomes available, the United Network for Organ Sharing (UNOS) first tries to match the organ with the neediest compatible person in the donation service area, and then in the region. Transplanted organs need to match the recipient in several key ways or they will be rejected. If there is no patient match in either of these, the organ can be listed nationally. If a national match is found, we have a sophisticated system in place to scramble the team at the distant facility to fly to the place where the donor is, pick up the organ, and get it back in time to transplant it, although there are some constraints on timing depending upon the particular organ.
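To make the “locals first” ordering concrete, here is a much-simplified sketch in code. This is my own illustration of the tiered idea described above, not the actual UNOS algorithm, which weighs many more factors (blood type, organ size, tissue matching, waiting time, and formal urgency scores):

```python
# A much-simplified sketch of "locals first" allocation. This is an illustration only,
# not the real UNOS system; "urgency" and "compatible" stand in for the real medical
# matching criteria (blood type, organ size, tissue type, waiting time, and so on).

from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Candidate:
    name: str
    service_area: str  # one of the 58 donation service areas
    region: str        # one of the 11 regions
    urgency: int       # higher means sicker
    compatible: bool   # placeholder for the real match criteria

def allocate(donor_area: str, donor_region: str, waitlist: List[Candidate]) -> Optional[Candidate]:
    """Offer the organ first within the donation service area, then the region, then nationally."""
    matches = [c for c in waitlist if c.compatible]
    tiers = (
        [c for c in matches if c.service_area == donor_area],  # local tier
        [c for c in matches if c.region == donor_region],      # regional tier
        matches,                                                # national tier
    )
    for tier in tiers:
        if tier:
            return max(tier, key=lambda c: c.urgency)  # sickest first, but only within the tier
    return None  # no compatible recipient anywhere
```

The unfairness described in the opinion pieces falls directly out of this ordering: a moderately sick but compatible patient in the local tier is offered the organ before a far sicker patient in another region is ever considered.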

The boundaries of these zones and regions were drawn decades ago. The problem is that some geographic areas have longer lists of patients waiting for organs than others, and different places also vary in how many organs for transplant they produce. So, even though there is a “sickest first” priority system, a less sick patient in a region with a shorter list, and for whom an organ matches, may get that organ ahead of a much sicker patient in a less fortunate region. Patients can also choose to be listed in a region where they don’t live, as long as they can be at the hospital within several hours. Steve Jobs, for example, chose to be listed for a liver transplant in Tennessee rather than where he lived in Northern California, which has an average waiting time of 6 years, because he was more likely to get a new liver in Memphis, where the average waiting time is 3 months. When the call came, he chartered a jet to fly him there in time.

This doesn’t seem fair. But there are strong political reasons for the debate going on in the transplant community over the issue. If the system is changed, some smaller transplant centers might close down and some regions could become net exporters of organs. For example, the head of the transplant program at the University of Kansas estimates that his institution would lose 30-40% of its transplant practice.

There are some ethical issues to consider, too. For one, an individual physician is responsible for the care of his or her patient. It’s personal. How can a surgeon say to one of them that, although there is a match for an organ in the same city, that organ is going to go halfway across the country to a recipient to whom the surgeon has no medical duty other than the abstract social principle of fairness? (To be fair, though, justice is one of the four principles of medical ethics.)

From the ongoing debate it seems clear that the system will be revised. For institutions, there will be winners and losers. But for patients, which is after all why we do transplants, it will be fairer. From one of the essays:

One way or another, I believe, the U.S. organ-transplantation system needs to change. The availability of the benefits of organ transplantation should depend neither on a patient’s ability to charter a private jet nor on whether he or she is lucky enough to live near a hospital that, thanks to our “local first” system, has a relatively short waiting list. When it comes to lifesaving transplants, geography should not be destiny.

Some clues why exercise helps depression

January 5, 2015  |  General  |  No Comments

Researchers have known for many years that regular exercise helps relieve some of the symptoms of depression, but there has been little cellular or biochemical data on why that might be so, other than the generalization that exercising just makes us feel better about ourselves. There are some new and interesting findings that shed some light on what may be happening. The graphic above, from this article, illustrates the details.

Tryptophan is an amino acid, one of the building blocks of proteins. It is normally metabolized, broken down, in the tissues, including muscle. The system that does this is called the kynurenine pathway because the product of the process is kynurenine. This substance penetrates the brain and is itself broken down there, yielding some molecules that have been implicated in several brain disorders, including depression. This is shown on the left side of the graphic.

On the right side of the graphic you see what happens with exercise. Muscle that is being actively exercised produces enzymes called KATs that take the kynurenine and, before it can penetrate into the brain, change it into a similar substance called kynurenic acid. The latter substance does not go into the brain and activate depression-causing pathways.

I think this is quite interesting. It is always fascinating when we find biochemical and cellular explanations for something we’ve observed before but for which we had no explanation.