I would think by now that I wouldn’t have to write anything about the importance of child car seats. But I find I do, because as I drive I still see adults holding babies and toddlers over their shoulders, often while sitting in the front seat. This has been illegal in most places for many years, but it is still common and it is still stupid and dangerous. I also still see the results: several children each year come through the PICU who were unrestrained passengers in car accidents, and a few of them die. A recent CDC study found that around 600,000 children ride unrestrained at least once in a given year. Interestingly, and not surprisingly, children riding unrestrained are often in vehicles in which the driver is also not wearing a seat belt.
Here are some recent statistics on car seats and motor vehicle accidents. In 2015 nearly 59,000 children under the age of 5 were injured in motor vehicle accidents, 8% of them seriously, and about 1% died. This amounted to 471 children. Significantly, over one third of the children who died were unrestrained.
Most of us have been lectured about these things, but I have found many parents have difficulty understanding notions of statistical risk. For example, one study showed 72% of parents were seriously afraid their child would be abducted by a stranger. That is, I suppose, a legitimate fear, but it is vanishingly unlikely to happen; in fact, it is only one-fourth as likely as your being struck by lightning.
My point is that parents should do what they can to reduce the chances of their child suffering harm: by all means tell your child about what to do when approached by strangers, but also please buckle them into a car seat, preferably in the back seat, when you drive anywhere with them, even a short distance.
You can find an excellent overview of all manner of car seats and how to use them here.
I’m being sarcastic, of course, but that’s often how it seems some days. Those are days when I’ve been busy at patients’ bedsides all day and then struggle to get my documentation done later, typically many hours later. I jot notes to myself as I go along, but it can be hard to recall at 5 PM just what I did and why at 8 AM.
It used to be very much the other way, and that wasn’t always a good thing either. Years ago I spent months going through patient charts from the era of 1920-1950. They were all paper, of course, and the hospital charts were remarkably thin, even for complicated patients. I recall one chart in particular. It was for a young child who was clearly deathly ill. The physician progress notes for her already prolonged stay in the hospital consisted of maybe 2 sheets of paper. Most of the daily notes were a single line. I could tell from the graphs of the child’s vital signs — temperature, pulse, breathing rates, and blood pressure — that one night in particular was nearly fatal. The note the next morning was written by a very famous and distinguished physician. I knew him in his retirement and he was a very loquacious man in person. His note after the child’s bad night was this: “mustard plaster did not work.” If I were caring for a patient like that today there would be, just for that day and night, multiple entries probably totaling several pages on the computer screen.
Patient charts are burdened with several purposes that don’t always work together. The modern medical record as we know it was invented by Dr. Henry Plummer of the Mayo Clinic in the first decade of the twentieth century. Up until that time each physician kept his (only rarely her) case notes really as notes to himself. When the multi-specialty group appeared, and Mayo was among the first, the notion of each physician having separate records for the same patient made no sense; it was far more logical to have a single record that traveled from physician to physician with the patient. That concept meant the medical record now was a means for one physician to communicate with another. So progress notes were sort of letters to your colleagues. You needed to explain what you were thinking and why. Even today’s electronic medical records are intended to do this, although they do it less and less well.
Now, however, the record is also the principal way physicians document what they did so they can get paid for it. Patient care is not at all part of that consideration. The record is also the main source for defending what you did, say in court, if you are challenged or sued. The result is that documentation, doctors entering things in the record, has eaten more and more of our time. Patients and families know this well and the chorus of complaints over it is rising. Doctors may only rarely make eye contact these days as they stare at a computer screen and type or click boxes. But we don’t have much choice if we are to get the crucial documentation done. That’s how we (and our hospitals) are paid and payers are demanding more and more complex and arcane documentation. I don’t know what the answer is, but I do think we are approaching a breaking point. We are supposed to see as many patients as we can. But the rate-limiting step is documentation.
To some extent we brought this on ourselves. In our fee-for-service system physicians once more or less said to payers: “We did this — trust us, we did it — now pay us for it.” I can’t think of a formula more guaranteed to cause over-utilization or even outright fraud. But there is only so much time in the day. In my world an ever smaller proportion of it is spent actually with the patient.
Anyone who has worked in medicine for a long time well understands the power of the statement coming from an experienced person: “This kid looks sick.” That person could be a physician or nurse. Years of experience tend to give one a sort of sixth sense for when to worry that something serious is going on that just hasn’t fully shown itself yet. Seasoned parents can often provide the same perspective. A fascinating recent article pertaining to this appeared in Critical Care Medicine, the journal of the Society of Critical Care Medicine, entitled “What faces reveal: a novel method to identify patients at risk of deterioration using facial expressions.” It suggests an empiric perspective for studying just how this phenomenon may work. It’s not about children, but the findings could easily apply to pediatric patients.
The authors include experts in empirical evaluation of facial expressions, broken down into something called “action units.” This is a scientific field I have to say I had no idea even existed. They used video recordings of 34 patients identified by nursing as potentially, but not yet, deteriorating clinically. The patients were then followed in time to identify those who ended up in intensive care for deterioration and what their faces were doing just before that. They also used a standard measure in the UK for deterioration termed the National Early Warning Score. This is based on objective measures such as heart rate, respiratory rate, level of consciousness, and other things that can be measured. The video recordings were analyzed by observers trained in this sort of thing but who were blinded to who deteriorated and who didn’t to see if subtle facial signs predicted this. You can look at the paper for the minute details, but some of the most useful distinguishing features were overall head position and what the person was doing with their eyes. I sure have seen that aspect in action. For example, a very useful observation when evaluating a child with respiratory distress is to look into their eyes: Are they paying attention to anything besides breathing? Can you distract them?
The authors provide some visual illustrations of what they are talking about, including this famous painting (the AU categories are some of their analytical tools):
Painters have been capturing face expressions since antiquity. The painting “The Dead Christ Mourned” by Annibale Carracci (1560-1609) is striking in its composition. Carracci showed the same facial expression in the dead Christ and Madonna, clearly displaying . . . AU 15 (lip corner depression), AU 43 (eye closure), AU 51 (lateral position of head), and AU 25 (lips parted).
The authors think their methods might be incorporated into standard evaluation systems. Maybe. What I think is their work validates what we have known for years. When experienced clinicians look at a patient, they unconsciously incorporate into their assessment what they have gleaned after years of looking at sick people and what happens to them.
Here’s another interesting example. Separating the very ill and liable to deteriorate from the not-so-sick is a perennial challenge in the emergency department setting, particularly in pre-verbal children. Untold numbers of research studies have tried to come up with something, anything, perhaps some blood test, that could help in this sifting process. Not surprisingly, it turns out the most useful measure for children is for the most experienced person in the room to say: “That kid looks sick.” When you hear that, believe it.
Anyway, I find this work fascinating as an example of how cross-disciplinary research can work, and I applaud whichever author first thought of it. I believe the article is behind a paywall; if anyone can’t get access to it and wants a copy, let me know via the contact form on my homepage.
(I posted a version of this little essay some years ago at the request of Maggie Mahar, but I think it’s an important issue that’s worth dusting off and putting out there again.)
We want competent physicians, but we also want compassionate ones. How do we get them? Is it nature or is it nurture? Is it more important to search out more compassionate students, or should we instill compassion somehow in the ones we start along the training pipeline? I think the answer lies in nurturing what nature has already put there.
My background is in pediatric critical care, which I have practiced for thirty-five years. Throughout most of my career I have taught medical students, residents, and fellows. So I have seen young physicians as they made their way as best they could through the long training process. I also served on a medical school admissions committee for some years and interviewed many prospective students, so I have had the opportunity to see and speak with them before the medical education system even got hold of them. I think the main principle to keep before us is not so much to figure out a way to teach compassion, but rather to devise ways such that the training process does not reduce, or even extinguish, the innate compassion all humans have toward one another. Unfortunately, our current way of doing things does not do a very good job at that task. But I do not think our present state of affairs is anyone’s fault. We are hobbled by our success. Some historical background is helpful, I think, to explain what I mean.
When my grandfather graduated from medical school in 1901 he had only a few tools to help the sick. He could do useful things to help injuries mend. He had the newly discovered techniques of aseptic surgery, as well as ether to allow him to do it painlessly. Other than that, though, he did not have much – narcotics to relieve pain, powdered digitalis leaf to help a failing heart, and a few other things. Mostly, though, he had bagful of useless nostrums. Some of them were even harmful. Because he had little to offer, compassion figured prominently in whatever therapy he did. It had to.
When my father graduated from the same medical school in 1944, things were better. Surgery had advanced further from his father’s day, although only brave surgeons entered the chest cavity. There was sulfa, and penicillin soon became available, working miracles with previously deadly infections. Streptomycin and later drugs made the scourge of tuberculosis treatable. He soon had some drugs to treat high blood pressure, which by then had killed his father, plus a rapidly enlarging stock of other useful drugs to put in the black bag he took on house calls. But there were still many things for which he could do nothing. For a heart attack he gave some morphine to take away the pain and then waited to see what happened. If a cancer could not be removed surgically, he had nothing to offer. Although my father’s black bag held more than his father’s had contained, compassion was still a crucial part of my father’s armamentarium. As for his father, it had to be.
I graduated from medical school in 1978. If scientific medicine was just spreading its wings during my father’s training, I experienced it in full flight. By then our medical-industrial complex had rolled out nearly all of the varieties of therapies we have still, although of course we have polished and improved them. What has happened, I think, is not that we have become less compassionate on purpose, but that we came to act as if we no longer needed the compassion of my father or my grandfather’s era, now that we had so many really useful and exciting therapies to offer.
I also think one other historical change is key to understanding how our young doctors react to the experience of seeing death and dying. In my grandfather’s era, it was an unusual person, even an unusual child, who had not personally seen someone die. Children and young adults saw how those around them behaved and reacted to death. If they became doctors, both they and their patients had shared this common experience, so both knew how to act. I saw death for the first time when I was sixteen on my very first day working as an orderly in our local hospital. I was giving a bath to an old man; he looked at me oddly, and then he was dead. None of my friends or schoolmates had ever seen such a thing. I still recall it vividly. I also remember well how helpful the nurses, all women in their fifties or sixties, were to me afterwards. I watched them wash the body, a once sacramental task now largely done by nurses in hospitals instead of families in their homes. They were respectful, but matter-of-fact as they went about it. After all, it was a natural thing.
I think compassion for others is innate in all of us, although it is stronger in some than in others. All of us possess an inner light. Perhaps that opinion makes my theology show, but I think it is fair to say our medical school selection process already skews toward selecting students more compassionate than the average person. We need to encourage that quality, certainly, but that is not the key issue in my mind; mainly we need to prevent medical training from driving it into the background, belittling it, or even snuffing it out. So I do not think we need so much to ponder how to teach compassion as we need to find ways of letting students’ natural humanity shine through. For medical educators, that would seem to me to be good news. Framed that way, it ought to be doable – but how?
There are many things in medicine that can be taught with the old “see one, do one, teach one” model that those of us older than fifty remember. We also remember never seeing a faculty attending physician in the hospital at night, because, after sundown, the place belonged to the residents. Even during the day, attending physicians were more likely to be found in their offices or their research laboratories than out and about on the wards. I learned how to intubate a baby and place an umbilical artery catheter from my senior resident, who had learned the year before from her senior resident. But my senior resident was not much help when a premature baby died; she was as much at sea as I was. All she had learned about that from her senior resident was to cultivate a sort of hard-boiled persona. We aspired to it partly because it gave us a mental escape hatch in those situations. But mainly it was because nobody showed us any other way.
How to show that other way? In my mind, there is no substitute for senior, seasoned physicians demonstrating, in the moment, how to let out our own innate empathy and compassion. Good, experienced physicians are comfortable admitting their medical ignorance and failures to families; nothing terrifies residents more than that. When they see it in action, students and residents respond with a version of: “That’s why I became a doctor.” Structurally, medical education has already made great strides in the right direction. We now have rules for resident supervision that involve much more oversight, even at night, than I ever had. This was done mostly for patient safety, I think, with education as a secondary and really unintended consequence.
So the opportunities are there – we just need to implement them better. For example, after an unsuccessful resuscitation and a death, the folks with the grey hair should spend as much time discussing with students and residents the psychic dimensions of the death as they do the sequence of medical decisions. Most of my colleagues already do that to varying degrees, but it should be an expectation.
We should never again send a resident, alone and emotionally at sea, to comfort a grieving family without backup. We do not do that for complicated invasive procedures; we should not do it for this other, equally important task either. Certainly some organized instruction – seminars, discussion groups, lectures and the like – can be part of the process. But the training curriculum is already stuffed with subjects. Taking residents by the hand and leading them through these experiences does not require another fat syllabus. It only takes a little time. If we want to foster compassion in our students we should ourselves show them compassion for the situations we put them in. We should let their innate, inner compassion and empathy find an outlet and breathe free.
The recent prominence of the MeToo movement has shone a light on many places in our society where insidious or even overt sexism against women has long gone unremarked. Even when noticed, it is often just shrugged off as the way things are. In recognition of this, Time Magazine named the movement’s “Silence Breakers” its Person of the Year for 2017. Medicine is no exception to this pervasive problem. A very interesting recent essay in the New England Journal of Medicine examines why this is and what we could do about it.
It’s well documented that women are vastly underrepresented in leadership positions in medicine, such as full professorships and department chairs. This is in spite of the fact that the proportion of women to men in medical schools is roughly equal and has been so for over 15 years. Last year the number of women admitted to medical school even slightly outnumbered that of men. This graph shows the trends over the last 50 years.
In spite of the steadily increasing proportion of women in medicine, the culture of medicine has not caught up. Certainly one can postulate that the number of women in leadership positions will increase because these positions are typically held by physicians at mid-career or older; it may take time to generate women physicians with sufficient quantities of grey hair. But I’m not so sure about that. Note from the graph that the number of women has been close to that of men for nearly 2 decades. My own field of pediatrics has had at least equal proportions of men and women for decades, and for the last decade or so the number of women pediatricians has actually been larger than that of men. So if it were just a matter of time in rank, women should have caught up, at least in pediatrics. Yet this hasn’t really happened. Why? One thing observers point to is that women are more likely to interrupt their careers for child-bearing and other family reasons. At least in academic medicine such pauses can be huge set-backs. My answer to that is: so what? Change expectations of what an academic medical career means. That would actually be a good thing. Along with the author of the essay, I think the answer clearly runs deeper; women physicians are simply not respected to the same degree as their male colleagues, not by the medical system and apparently not by the public. That’s how deeply the sexism is ingrained. The essayist offers an example of this phenomenon.
A recent study of speaker introductions at internal medicine grand rounds revealed that even when women are acknowledged as physicians, they are more likely than men to be introduced informally: women were referred to by their professional titles 49% of the time, as compared with 72% for male speakers. This finding has important implications. Calling women by first names in a setting in which men are referred to by formal, professional titles is a tacit acknowledgment that women are perceived as less important, even as their contributions are publicly recognized during grand rounds.
I’ve been practicing medicine for 40 years now and have long noticed that women physicians are far more likely to be addressed by their first names, even by those who rank below them in the hierarchy. Of course the fact that the majority of nurses continue to be women can be a bit confusing to patients who make assumptions. Yet this occurs constantly in spite of today’s large and obvious name badges and prominent labels on coats identifying women physicians. We cannot change patients’ attitudes much, although I gently correct them when they make this mistake. But we can change our own behavior. We can also give equal pay for equal work. It’s well documented that women physicians make significantly less money than men do for doing the same thing.
There is another fascinating aspect to this issue. There is some research suggesting women physicians provide overall better care, possibly by being more likely to adhere to evidence-based medicine standards. Some observers have added to that explanation the higher likelihood of women physicians to work in a collaborative manner with the rest of the care team. The study examined 30 day hospital readmission and mortality rates for a large number of Medicare patients. The differences in patient outcomes between women and men physicians were significant and persisted across multiple disease categories. That’s pretty strong stuff.
The same issue of the New England Journal also provided a vignette of one of the most famous of women physicians, Dr. Helen Taussig. Dr. Taussig more or less invented the specialty of pediatric cardiology and her name remains attached (with second billing!) to a common pediatric cardiac surgical procedure, the Blalock-Taussig shunt. The essay author wonders:
Since that time, how many Helen Taussigs have we lost to discrimination, harassment, and marginalization? And how many more will we lose if things don’t change?
Forty years ago I was fortunate to have been trained by 2 extremely gifted women who took different approaches to the obstacles they faced. Both possessed spines of steel and they needed them. My fellowship mentor overcame first polio and then the grinding annoyance of belittlement at an extremely stodgy medical center, one actually renowned for its male stodginess. Her progression to full professor was inordinately delayed. She was often assumed to be some sort of social worker. Because she covered several clinical services it was her habit to wear her various pagers on a cord around her neck. Incredibly, I met one physician who assumed she was “some kind of beeper repair lady.” She was a perpetual winner in the resident polls for teacher of the year; the department chair finally told the residents they had to select someone else for a change. And, of course, her patients adored her. The higher-ups . . . not so much, as the kids say today. She was known to seek them out in their comfortable lairs and make them less comfortable by confronting them in her calm yet firm way. Another of my mentors took a quite different approach. She was one of the giants of pediatrics and was among the founders of neonatology. No one messed with her because she met sexism head on, wielding a figurative 2 by 4 that she used to whack, among others, the chief of surgery on occasion. When necessary she could swear like a sailor. Tough doesn’t even begin to describe her. She succeeded and thousands of premature babies benefited.
These women took very different strategies dealing with sexism. And, as was said of Senator Elizabeth Warren, they persisted. But the thing is, it need not have been that way. That’s the point.
Vaccines have been hailed by virtually all medical experts, as well as medical historians, as among the greatest triumphs of public health of the past two centuries. Yet ever since Jenner first proposed vaccination for smallpox using the vaccinia, or cowpox, virus there have been both skeptics of its effectiveness and people who thought it was dangerous. That is, they had the risk/benefit ratio of vaccination exactly backwards, believing the risk high and the benefit low. They also often ridiculed the entire procedure, even from the beginning, as this 18th century cartoon shows — Jenner is the fat gent kneeling by the cow.
Against this constant background of vaccine denial, things changed two decades ago when Andrew Wakefield published his now notorious claim of an association between the MMR (measles, mumps, and rubella) vaccine and autism. The claim has not only been soundly refuted in a large number of well-controlled population studies, but Wakefield himself has been stripped of his UK medical license for unethical and fraudulent practices related to his publication. The paper itself was retracted by the journal Lancet, an extraordinary thing. Wakefield left the UK and moved to Texas. But the damage had been done. Vaccine hesitancy increased, not just for MMR but for all vaccines. The rise of social media, particularly Facebook and Twitter, appears to have amplified the effect. But did it? We do know there is more anti-vaccine noise, but has this resulted in decreased uptake of vaccines? Most importantly, has this led to an increase in vaccine-preventable diseases?
Measles offers a good example to examine because, not only did it feature in Wakefield’s original claim, but measles is highly infectious with a high attack rate among susceptibles and the vaccine is highly protective. In the pre-vaccine era the attack rate for measles was at least 95% and most persons had had it by early adulthood. It is not a trivial illness; the death rate is around 1 per 1,000 cases and there is a substantial risk for complications and life-long disability. It does seem clear from epidemiological work that decreased vaccine prevalence has been linked to measles outbreaks in Europe and the USA. But are these isolated pockets in the population or part of a larger trend?
Before examining whether these thankfully still isolated instances represent some broader trend, it’s worth looking more closely at vaccine denial. A key problem is that we don’t know if such denial is more common now than in the past or if today’s media environment has just made it noticeably noisier. This interesting study from the UK examines the profile of the typical vaccine denialist. Recurring themes found in surveys of such people are a belief in many conspiracy theories, suspicion of authority, and feelings of disillusionment and powerlessness. Particularly interesting to me was the first of these. In general, conspiracy theories are attempts to explain events as the secret acts of powerful, malevolent forces. For people who believe these things, particularly those who participate in social media, it is a simple, even natural thing to add vaccine denial to their stock of other conspiracy theories. There is also often an animus toward mainstream medical practice in general.
So, to address the question in my title: What do we know about whether vaccine denialism has affected overall vaccination rates in the USA? I’m pleased to note that recent reports from the CDC covering the past five years indicate not much has changed. There are definitely pockets of low rates, and it’s interesting how measles seems to find those places where herd immunity has dropped sufficiently low to allow disease to break out. This is an object lesson for all of us. The figures are compiled by the CDC from vaccination records from the individual states. Here is what they found about vaccine uptake for MMR, DTaP (diphtheria, tetanus, and pertussis), and varicella vaccines among children entering kindergarten.
During the 2016–17 school year, kindergarten vaccination coverage for MMR, DTaP, and varicella vaccine each approached 95%, and the median exemption rate among children attending kindergarten was 2%; these rates have been relatively consistent since the 2011–12 school year.
The legal principle that the state may compel vaccination as a condition of attending public school, for the safety of other children, was established over a century ago. All states allow some exemptions, although they vary in the specific categories allowed. The number of children who have some sort of exemption from vaccination has been steady, as the CDC notes. There are medical reasons for a child not to receive vaccines, but most of the exemptions are for religious or philosophical reasons as determined by the parents. California recently caused quite a stir in the vaccine denialist world by eliminating the philosophical exemption for children attending public school. Opponents filed a spate of lawsuits against the state, all of which have been dismissed.
When I started reading about this subject I was discouraged by the headlines from Europe and California. But the extensive CDC compilations remind us that, in spite of all the Sturm und Drang in social and other media, the overwhelming majority of Americans support vaccination.
I’ve been practicing pediatric critical care for over 35 years. Like many of my colleagues in my age cohort, when I started there was no formal certification process and few formally organized training programs. Those of us who were interested just started doing it and learned a lot of it on the job. Most of us came to critical care after training first in other pediatric subspecialties, such as pediatric anesthesiology, pulmonology (lung diseases), or cardiology.
Few medical subspecialties can point to a founding mother or father, but in pediatric critical care it’s fair to say our founder was Dr. Jack Downes of the University of Pennsylvania and Children’s Hospital of Philadelphia. Dr. Downes began the first designated PICU in 1967. He was a young faculty member trained in pediatric anesthesia who drew on his experience working in the polio wards to bring together in one place all critically ill children, rather than scattering them all over the hospital as had been the previous practice. His original six-bed PICU has now ballooned to over a hundred beds and remains one of the premier PICUs in the country. Its contribution to the subspecialty is immense, both for what was discovered there and for the large number of prominent pediatric intensivists Dr. Downes has trained. He recently sat down for an interview to describe his and our specialty’s journey.
Dr. Downes’ fundamental observation is really very much common sense: if you bring together the sickest children in one spot, the physicians, nurses, and respiratory therapists who care for them are much more likely to get very good at what they do. Mortality statistics have reflected this. Specialized training also really matters. Yet our subspecialty is unusual in medicine in that we’re not confined to a single organ system, as most (although not all) are; we are generalists for the very sick child, caring for all aspects of their illness or injury. I did my general pediatric residency at Vanderbilt University Children’s Hospital in the 1970s. We had a room we called the PICU, with excellent nurses, but we had no formally trained pediatric intensivists — none. This is quite astonishing in retrospect because even at the time Vanderbilt was one of the leading pediatric facilities in the nation. Yet it had no intensivists because such people hardly existed. I believe Vanderbilt now has 20 of them or more. The development of pediatric critical care as a formal subspecialty of pediatrics has dramatically lowered the mortality and morbidity of critically ill children. And it all began with Jack Downes. One key footnote to all this is that his initial experience came with polio, a disease we have eliminated from our country thanks to vaccination.
Most physicians are increasingly forced to grapple with the problem of shortages in generic drugs. These are drugs for which the patent has expired, so any company can make them. It is certainly a chronic problem for those of us in the PICU, because the majority of drugs we use are injectable medications that have been generic for many years. Hardly a week goes by that I don’t receive a notification from the hospital pharmacy that there is a nationwide shortage of multiple vital drugs. On occasion we have been down literally to our last vial or two of a key drug. Hospitals often have to scramble to find them and sometimes share with each other. Injectables, medications we give intravenously or intramuscularly, are a particular problem because production costs are high compared with simply making a pill or capsule that can be put in a bottle.
There are companies that specialize in making generics, but there are fewer of them than previously. This raises the possibility of price gouging when only a couple of companies, or even a single one, market a drug. Patients who need the drug may be forced to pay enormous prices if the company jacks up the price. There was a big media splash when so-called “Pharma-Bro” Martin Shkreli started a drug company and obtained the license to market pyrimethamine; he then raised the price from $13.50 to $750 per pill. This is an old drug for treating parasites, rarely used these days, with one main exception: immunocompromised patients with toxoplasmosis infections. Those patients will die without it. (Shkreli himself ended up going to jail for fraud unrelated to this.) Besides financial shenanigans, it’s also easy to see what could happen when only one or two facilities make a needed medication and a production facility goes down. This recently happened with makers of intravenous solutions, whose production is heavily concentrated in Puerto Rico. Hurricane Maria in 2017 devastated production, and it’s still not completely back up.
Medications, especially generics, are vital to our medical infrastructure and our current system is not the best way to ensure a steady supply. A recent opinion piece in the New England Journal of Medicine discusses the problem and suggests an innovative solution. The authors’ idea is to create nonprofit entities with the mission of marketing generic drugs. Their profits, rather than going to stockholders, would go to production costs.
A nonprofit generic-drug manufacturer, which cannot sell equity shares, can initially be funded by philanthropic contributions. It can contract with existing manufacturing facilities or, if necessary, establish its own facilities and rely on guaranteed purchases by institutional partners, such as hospitals, health plans, and government agencies. These institutions, which need uninterrupted access to generic drugs and have a financial incentive to purchase them at reasonable prices, will provide a stable revenue source for the manufacturer.
Institutions such as hospitals can predict with good accuracy what their needs will be for common generics and could enter into contracts with producers to guarantee minimum purchases. This would protect the nonprofit from going under if a for-profit company were to start making the drug, then drastically drop the price to drive the first company out of business, then raise the price again; such a thing has happened in the past. The governing board of such a nonprofit could include some of its major clients, ensuring the company adhered to its core principle of producing generic drugs at an affordable cost.
In fact, such an entity, called Project Rx, is already underway. It is a consortium of hospitals and health plans whose members include some big players: Intermountain Healthcare, Trinity Health, and the Veterans Affairs system. The authors conclude:
The complex nature of market failures for generic drugs implies that a single alternative business model cannot address all aspects of this problem. We believe that Project Rx may drive other nonprofit and for-profit manufacturers to enter generic-drug markets, compete among themselves, and collectively improve market efficiency and broaden access to generic drugs.
I find the proposed scheme innovative and workable. I wish Project Rx well.
It’s been a while since I’ve written about concussions in children, so I want to share with you some updates on the subject. The term concussion itself is centuries old, but even forty years ago when I was in training the actual definition of concussion was vague. What was usually meant was that the patient got hit on the head and either lost memory or consciousness briefly, or at least wasn’t quite himself for some period of time afterward. These days we’re more precise than that, but concussion is still an inexact term. This is mainly because of our ignorance of the subtleties of how the brain works and how it responds to injury. Estimates put the number of concussions in children at around three million each year.
The formal definition of concussion is a transient interruption in brain function. By implication, scans of the brain, such as CT or MRI, show no abnormalities. Since all the imaging studies are normal, defining concussion is necessarily imprecise. I’m sure one day we’ll have some kind of test that detects the cause of the symptoms of concussion, but right now we don’t have such a thing — concussion is an entirely clinical diagnosis, meaning there’s no specific test for it. The list of symptoms that can come from a concussion is a long one. Headache, dizziness, vomiting, and ringing in the ears are common. Various behavioral changes are also common, such as lethargy, difficulty concentrating, and irritability. The overwhelming majority of children who suffer a concussion, especially a mild one, recover completely. But around a fifth of children who have had severe concussions continue to have problems many months afterward.
There are several traditional systems for grading concussions. A commonly used one was published in 1997 by the American Academy of Neurology. It was based on a grading system that ranged from Grade I (no loss of consciousness) up to Grade III (loss of consciousness, no memory of the event). You will still see this system quoted in many places, but in 2013 the Academy revised its guidelines to stress the continuous spectrum of concussions and to focus on the neurological examination of the child rather than on loss of consciousness or memory of the event. A major focus of the new guidelines was on sports and when an athlete can safely return to play, a common practical issue for young athletes. They emphasized that the younger the child, the more conservative the approach should be. Children who suffer a concussion are much more likely to suffer another, potentially much more severe, brain injury if they sustain a blow to the head before the symptoms of the first one have completely cleared. So-called second impact syndrome is a particular fear; I have seen a death from that. This is the important concept: a concussion is a form of brain injury. Some experts want to discard the term concussion completely in favor of something like mild traumatic brain injury. Old sports terms, such as “having your bell rung,” tend to downplay this reality. The wealth of recent research about chronic traumatic encephalopathy in football players, even those at the college level, demonstrates the long-term risks of repetitive blows to the head, even those not sufficiently severe to cause immediate symptoms. It is important to know that research on various kinds of helmets has not shown any benefit, at least not yet. What has been shown is that early removal from contact sports makes concussions heal faster. Contact sports like football and hockey carry the highest risk, but as the image above shows, other sports like soccer and volleyball are often associated with blows to the head.
And in those sports no helmets are worn. Here’s the bottom line, from Dr. Jeffrey Kutcher of the University of Michigan Medical School:
If in doubt, sit it out. . . . Being seen by a trained professional is extremely important after a concussion. If headaches or other symptoms return with the start of exercise, stop the activity and consult a doctor. You only get one brain; treat it well.
The American Academy of Neurology has an excellent resource page here, where you can find much useful information about concussions.
A recent article and accompanying commentary in the journal Pediatrics describe what we currently know about children who have died from influenza over the past decade or more. The Centers for Disease Control and Prevention (CDC) has collected information about this since the 2003–2004 influenza season. In that first report there were 153 deaths. Since then there have been at least 100 influenza deaths annually among children. Several characteristics have not changed. About half of the deaths occur in children who were otherwise normal; that is, they had no underlying chronic condition that would predispose them to more severe cases. Although the median age was 6 years, mortality was highest among the youngest children — those younger than 6 months. Most of the children who died, over 70%, had not been vaccinated against influenza. This is not a completely surprising finding, since influenza vaccine is recommended only for children 6 months or older. However, an important way to confer at least some immunity on infants too young for the vaccine is to vaccinate pregnant women, since some protective antibody crosses from mother to infant and lasts for at least several months. Currently only about a third of pregnant women are vaccinated.
Influenza is a stubborn and wily virus, traits that make designing a highly effective vaccine challenging. Its natural reservoir is several species of birds. It has three main types, with the most serious disease typically coming from influenza A. The virus replicates itself in a way that results in frequent mutations, causing what is termed antigenic drift. This means the virus changes rapidly, making it something of a moving target for vaccine development. This is why our influenza vaccine changes every season in an effort to keep up with the rapid alterations. Immunity to last year’s version often doesn’t help much for this year. This also helps explain why influenza is milder in some years and deadlier in others. There have been several severe pandemics when a particularly nasty version of influenza emerged, the most severe being in 1918–1919. That one killed over 50 million people.
These things make influenza vaccine one of our less effective vaccines. It reduces the incidence and severity of disease, but it cannot eliminate the disease outright, as vaccines directed against stable viruses like polio and smallpox can. Influenza is also notorious for paving the way for secondary infection by bacteria that happen to be resident in the respiratory system, such as staphylococci and streptococci. Many of the deaths are from such secondary infections. You can read much more about the details of the virus and its vaccine here and here.
In the USA the influenza season runs from fall until early spring. The graph above shows the past season’s trends in both documented and presumed cases (“influenza-like illness”). The horizontal axis shows the week of the year. The vaccine is based upon the best guess of experts who survey viral trends around the world. Some years they guess better than others. It is usually available in October. The current vaccine is a killed one, meaning that, whatever you have read on the internet, you cannot get the infection from the vaccine. Two doses are recommended for children who have not previously received it (specifics here). Like all pediatric experts, I highly recommend it. Health care workers like me are required to get it. I agree with the author of the commentary cited above that our best approach for reducing the death rate in children too young to get the vaccine themselves is increasing the vaccination rate in pregnant women.