So how does one train to be a physician? The first step is to obtain a four-year undergraduate degree at a college or university. This was the most fundamental change in post-Flexner medical training; before that time, many medical schools had little or no requirement for previous education, some not even demanding a high school diploma. Today the prospective physician’s baccalaureate degree can be in anything (mine happens to be in history and religion), but all medical schools require premedical course work in biology, chemistry, biochemistry, physics, and often mathematics. As a result of these science-heavy requirements, most premedical students choose to major in one of the sciences.
The next step is to gain admission to medical school. This traditionally has been difficult to accomplish, although admission statistics for individual schools are hard to interpret because virtually all students apply to several schools, often more than ten. In general, an applicant’s overall chances of being admitted to medical school have fluctuated between 25 and 35 percent over the last several decades. One thing that has changed is the drop-out rate. Fifty years ago, many students did not complete the course; these days, drop-out rates are extremely low.
Medical school generally lasts four years, at the end of which time the graduate is properly addressed as “doctor.” However, the new doctor is one in name only, because no state will allow her to practice medicine independently without further training. Medical licenses are in fact granted by the individual states, and their requirements vary, but all demand at least one year of supervised on-the-job training beyond medical school. Fifty years ago, many physicians stopped after that single year — called an internship — because that was all a physician needed to obtain a medical license and begin working as a general practitioner. These days virtually no one stops after one year, because nearly all physicians require more training just to find a job. You will still hear doctors in their first year out of medical school referred to as interns, but the term does not mean much now.
Medical students receive a standard training curriculum that varies little between medical schools; this is enforced by the organization that accredits them. Toward the end of their four years, however, students generally do get some freedom to select courses geared toward the specialty they have chosen for their residency, the term for the several years of practical training they get after medical school. The usage comes from the fact that medical residents once actually lived in the hospital; these days, even though resident workweeks average eighty hours or so, no one literally lives there.
Residencies come in the standard broad categories of expertise like internal medicine, pediatrics, surgery, and obstetrics and gynecology, as well as specialties like radiology, neurology, dermatology, and psychiatry. There are in total twenty-four recognized medical specialties, each of which sets its own requirements for residents training in that field. (You can read more about each individual specialty here.) Medical science has expanded sufficiently that a medical student who wishes to specialize in not being a specialist — that is, who wants to take care of all sorts of patients — must do a residency in family practice.
Residency lasts from three to five years after medical school, depending upon the specialty. At the end of training, the resident takes an examination. Passing it makes her “board-certified” in the field; someone who has completed the residency requirement but has not yet passed (or has failed) the examination is called “board-eligible.” Some physicians choose to continue their training even further beyond residency, to subspecialize in things like cardiology, infectious diseases, or hematology.
The person you encounter when you bring your child to her doctor’s appointment has thus spent at least eleven years getting ready to meet you: four years in college, four years in medical school, and three to five years in residency. That person has also spent much of that time being initiated, perhaps indoctrinated, into a culture, a worldview, that is shared by most physicians. It is a culture foreign to that of many nonphysicians. Its attributes come primarily from the way physicians have been trained since Flexner’s reforms of medical education a century ago. Knowing about this time-honored system will help you understand your child’s physician, and understanding improves communication. More about that in later posts.
A couple of conversations I’ve had with patients’ families over the past month have made me realize that many folks don’t know how our system produces a pediatrician, a radiologist, or a surgeon. And a lot of what people know is wrong. Physicians are so immersed in what we do that we forget that the process is a pretty arcane one. Just what are the mechanics of how doctors are trained? Understanding your physician’s educational journey should help you understand what makes him or her tick. As it turns out, a lot of standard physician behavior makes more sense when you know where we came from. This post covers some important history about that.
Most physicians in the nineteenth century received their medical educations in what were called proprietary medical schools. These were schools started as a business enterprise, often, but not necessarily, by doctors. Anyone could start one, since there were no standards of any sort. The success of the school was not a matter of how good the school was, since that quality was then impossible to define anyway, but of how good those who ran it were at attracting paying students.
There were dozens of proprietary medical schools across America. Chicago alone, for example, had fourteen of them at the beginning of the twentieth century. Since these schools were the private property of their owners, who were usually physicians, the teaching curriculum varied enormously between schools. Virtually all the teachers were practicing physicians who taught part-time. Although being taught by actual practitioners is a good thing, at least for clinical subjects, the academic pedigrees and skills of these teachers varied as widely as the schools — some were excellent, some were terrible, and the majority were somewhere in between.
Whatever the merits of the teachers, students of these schools usually saw and treated their first patient after they had graduated because the teaching at these schools consisted nearly exclusively of lectures. Although they might see a demonstration now and then of something practical, in general students sat all day in a room listening to someone tell them about disease rather than showing it to them in actual sick people. There were no laboratories. Indeed, there was no need for them because medicine was taught exclusively as a theoretical construct, and some of its theories dated back to Roman times. It lacked much scientific basis because the necessary science was itself largely unknown at the time.
As the nineteenth century progressed, many of the proprietary schools became affiliated with universities; often several would join to form the medical school of a new state university. The medical school of the University of Minnesota, for example, was established in 1888 when three proprietary schools in Minneapolis merged, with a fourth joining the union some years later. These associations gave medical students some access to aspects of new scientific knowledge, but overall the American medical schools at the beginning of the twentieth century were a hodgepodge of wildly varying quality.
Medical schools were not regulated in any way because medicine itself was largely unregulated. It was not even agreed upon what the practice of medicine actually was; there prevailed at the time among physicians several occasionally overlapping but generally distinct views of what the real causes of disease were. All these views shared a basic fallacy — they regarded a symptom, such as fever, as a disease in itself. Thus they believed relieving the symptom was equivalent to curing the disease.
The fundamental problem was that all these warring medical factions had no idea what really caused most diseases; for example, bacteria were only just being discovered and their role in disease was still largely unknown, although this was rapidly changing. Human physiology — how the body works — was only beginning to be investigated. To America’s sick patients, none of this made much difference, because virtually none of the medical therapies available at the time did much good, and many of the treatments, such as large doses of mercury, were actually highly toxic.
There were then bitter arguments and rivalries among physicians for other reasons besides their warring theories of disease causation. In that era before experimental science, no one viewpoint could definitively prove another wrong. The chief reason for the rancor, however, was that there were more physicians than there was demand for their services. At a time when few people even went to the doctor, the number of physicians practicing primary care (which is what they all did back then) relative to the population was three times what it is today. Competition was tough, so tough that the majority of physicians did not even support themselves through the practice of medicine alone; they had some other occupation as well — quite a difference from today.
In sum, medicine a century ago consisted of an excess of physicians, many of them badly trained, who jealously squabbled with each other as each tried to gain an advantage. Two things changed that medical world into the one we know today: the explosion of scientific knowledge, which finally gave us some insight into how diseases actually behaved in the body, and a revolution in medical education, a revolution wrought by what is known as the Flexner Report.
Commissioned by the Carnegie Foundation, Abraham Flexner visited all 155 medical schools in America (for comparison, there are only 125 today) and published his findings in 1910. What he found appalled him; only a few passed muster, principally the Johns Hopkins Medical School, which had been established on the model then prevailing in Germany. That model stressed rigorous training in the new biological sciences with hands-on laboratory experience for all medical students, followed by supervised bedside experience caring for actual sick people.
Flexner’s report changed the face of medical education profoundly; eighty-nine of the medical schools he visited closed over the next twenty years, and those remaining structured their curricula into what we have today—a combination of preclinical training in the relevant sciences followed by practical, patient-oriented instruction in clinical medicine. This standard has stood the test of time, meaning the way I was taught in 1974 was essentially unchanged from how my father was taught in 1942.
The advance of medical science had largely stopped the feuding between kinds of doctors; allopathic, homeopathic, and osteopathic schools adopted essentially the same curriculum. (Although the original homeopathic schools, such as Hahnemann in Philadelphia, joined the emerging medical mainstream, homeopathic practice similar to Samuel Hahnemann’s original theories continues to be taught in a number of places.) Osteopathy maintains its own identity. It continues to run its own schools, of which there are twenty-three in the United States, and to grant its own degree — the Doctor of Osteopathy (DO), rather than the Doctor of Medicine (MD). In virtually all respects, however, and most importantly in the view of state licensing boards, the skills, rights, and privileges of holders of the two degrees are equivalent.
Clinical research — finding out which treatments help and which ones don’t (or even make the situation worse) — is tough research to do. In the laboratory a scientist can control conditions so that only one thing changes, isolating the effect of a particular thing. Clinical research is different because humans are complicated. The researcher tries to control the situation as much as possible, but ultimately she is comparing one dissimilar human to another one.
The result is that a lot of clinical studies, such as interventions in which researchers give patients this or that medicine and then try to find out if it worked, are underpowered. This means the studies aren’t powerful enough to answer the simple question: does this treatment help? Usually the reason is that, unless the differences between the groups are extreme, you need a lot of patients in the study to demonstrate any difference. Sometimes this means researchers need to enroll thousands of patients.
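To get a feel for why so many patients are needed, here is a minimal sketch of the standard sample-size calculation for comparing two proportions. The cure rates in the example are invented for illustration, and the formula is the textbook normal-approximation one, not taken from any particular study:

```python
from math import ceil
from statistics import NormalDist

def patients_per_group(p1, p2, alpha=0.05, power=0.80):
    """Approximate patients needed per group to detect a difference
    between two event rates p1 and p2 with a two-sided test."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # ~1.96 for alpha = 0.05
    z_beta = NormalDist().inv_cdf(power)           # ~0.84 for 80% power
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return ceil((z_alpha + z_beta) ** 2 * variance / (p1 - p2) ** 2)

# A dramatic difference (50% vs 25% improved) needs only modest numbers...
print(patients_per_group(0.50, 0.25))
# ...but a subtle one (10% vs 8%) needs thousands of patients per group.
print(patients_per_group(0.10, 0.08))
```

Notice how shrinking the difference between the groups makes the required enrollment balloon; that is exactly why so many single studies end up underpowered.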
Recognizing this problem, the concept of “meta-analysis” was devised. The idea is that one can take a bunch of underpowered studies and lump the information together. This can create, in effect, a single study with enough power to answer the question. Critics compared meta-analysis to making a silk purse from a sow’s ear — trying to take a lot of poor studies and make a good study from them. This can be a problem. But if you’ve ever taken your child to the doctor for treatment of croup, you and your child have been the beneficiaries of what meta-analysis can accomplish.
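The lumping-together step has a simple core: each study’s result is weighted by how precise it is, and the pooled result is more precise than any single study. Here is a toy sketch of that fixed-effect, inverse-variance pooling; the study numbers are invented for the example:

```python
from math import sqrt

def pool(estimates, std_errors):
    """Fixed-effect inverse-variance pooling of study estimates
    (e.g., log odds ratios). Returns (pooled_estimate, pooled_se)."""
    weights = [1 / se ** 2 for se in std_errors]          # precise studies count more
    pooled = sum(w * e for w, e in zip(weights, estimates)) / sum(weights)
    pooled_se = sqrt(1 / sum(weights))                    # always smaller than any input
    return pooled, pooled_se

# Three small, individually inconclusive studies of the same treatment
# (log odds ratios; negative means the treatment looked helpful):
effects = [-0.4, -0.2, -0.5]
ses = [0.30, 0.35, 0.40]
estimate, se = pool(effects, ses)
# The pooled standard error is smaller than any single study's,
# so the combined analysis can detect what none could alone.
```

Real meta-analyses add a great deal on top of this (study quality weighting, tests for heterogeneity), but the gain in power all flows from this pooling step.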
Croup is caused by swelling of the airway from a virus (see the link above for details), and corticosteroid medicines reduce swelling. So it seemed logical to try them for croup. But although some of the early studies suggested steroids helped, they were all underpowered to answer the question for sure. Then somebody did a meta-analysis with the data and showed steroids probably helped. This information then led other researchers to spend the large amount of time and effort to do some fully-powered studies. The results? Steroids, by mouth, injection, or even inhaled, help relieve the symptoms of croup.
So in this case it was a silk purse all along.
When I started training in pediatrics nearly 35 years ago, it was common practice, when an infant or child needed something done that was going to be painful, anxiety-producing, or both, simply to hold (or tie) the child down. Looking back on it now, it reminds me of the 19th century, a time when somebody might just be given a stick to bite down on. I wonder how we could have been in the same place with children a century later.
To be fair, there were several reasons we did things that way. Chief among them was the notion — one we now know to be false — that children (infants in particular) did not feel pain in the same way as older persons. The other reason was that we simply didn’t have available many of the medications we have now to counteract pain and anxiety, and the few that we had had not been studied much in children.
Things are much different now. We have a menu of things we can use to prevent pain, ranging from numbing cream we can put on the skin to lessen (or even eliminate) the pain of a needle stick to powerful, short-acting anesthetic drugs we can use to put the child into a deep (and brief) slumber. We have reliable ways of greatly reducing or eliminating both pain and anxiety when a child needs medical procedures as varied as an MRI scan or some stitches in the scalp.
Most doctors who do these procedures are well aware of these things. But if you run across one who doesn’t seem to be, don’t be shy about speaking up and asking what can be done to make your child more comfortable.
The principle of autonomy is one of the four guiding principles of medical ethics, the others being beneficence, nonmaleficence, and justice. It means that patients have the right to decide what is done to their own bodies. For children under eighteen, the age of majority, this means their parents decide for them. What happens when parents refuse a treatment that their child’s doctors recommend? (The right of a minor child himself to refuse such treatment is an interesting and knotty related issue.)
If the doctors believe the parents are not acting in the child’s best interest, they can go to court and try to convince a judge that the court should take temporary custody of the child and appoint a guardian who will allow the treatment. I have been involved in cases like that from time to time. Usually they involve parents who, often for religious reasons, refuse a fairly standard medical treatment. A common example is a blood transfusion in a family that belongs to the Jehovah’s Witnesses. The medical treatments at issue are generally standard, well-accepted ones.
But what if the treatment the doctors want to do is a complicated, high-risk one? Perhaps a treatment that was once a highly experimental one, but which is now more mainstream, although not entirely so? What then? Do the parents have to allow the treatment or risk having the courts take custody of their child?
A recent article in the Lahey Clinic Medical Ethics Journal addresses just such a situation — surgery for an uncommon condition known as hypoplastic left heart syndrome (HLHS). In this condition a child is born without a functioning left ventricle, a key pumping chamber of the heart. Several decades ago we had no treatment for it — babies were kept comfortable, but they all died within the first few weeks of life. Then, in 1981, Dr. Norwood devised a surgical procedure to treat the condition. The outcomes for the first few years were dreadful, with most children not surviving. Over time, however, heart surgeons got better at doing it and the science of pediatric intensive care advanced considerably, so the majority of children now survive the initial surgery.
But what is in store for them is at least one more major surgical procedure, called the Fontan procedure, which, if all goes well, allows them to live through childhood and usually into adolescence. Many do well subsequently, although additional surgeries are commonly needed. However, in many children with HLHS the heart eventually fails, and they then require a heart transplant to survive. Most children on the waiting list for a heart transplant die before they get one.
The article from the Lahey Clinic Ethics Journal asks if it is ethical for parents, once they have learned all about this complicated and high-risk series of surgeries, to refuse and allow their infant to die. In other words, is the surgical treatment of HLHS so mainstream that doctors should go to court if parents refuse? I know cardiologists who think so, and the author of the article describes such a situation. But I also know several cardiologists who say they would never choose the surgery for their own baby. These are doctors who are in the trenches and know exactly what the Norwood procedure and its subsequent course can mean in suffering for a child. They would not put their child through that. They feel it is preferable to allow a baby to die than to subject a child to years of often painful treatments, only to have a high risk of dying as an older child or adolescent.
I don’t know what I would do. I’m too old to have any more children myself, but I could have a grandchild in the future who is born with HLHS. There is no easy answer to this question. Many medical treatments, bone marrow transplant for example, are now standard after years as experimental treatments. Even if surgery for HLHS crosses that murky divide between experimental and standard, there are others that will confront us with the same question.
For HLHS, I agree with the essayist in the article: I think parents should be allowed to refuse the treatment.
Medical ethics is something we deal with frequently in the PICU. It may sound esoteric, but generally it isn’t. Even so, it can be complicated. Complicated or not, it’s also something all of us should know a little about. This is because, in fact, many of us will encounter its issues quite suddenly and unexpectedly with our loved ones, or even ourselves.
So what are the accepted principles of medical ethics? There are four main principles, which on the surface are quite simple. They are these:
1. Beneficence (or, only do good things)
2. Nonmaleficence (or, don’t do bad things)
3. Autonomy (or, the patient decides important things)
4. Justice (or, be fair to everyone)
The first of these principles, beneficence, is the straightforward imperative that whatever we do should, before all else, benefit the patient. At first glance this seems an obvious statement. Why would we do anything that does not help the patient? In reality, we in the PICU are frequently tempted to do (or asked to do by families or other physicians) things that are of marginal or even no benefit to the patient. Common examples include a treatment or a test we think is unlikely to help, but just might.
There is a long tradition in medicine, one encapsulated in the Latin phrase primum non nocere (“first do no harm”), which admonishes physicians to avoid harming our patients. This is the principle of nonmaleficence. Again, this seems obvious. Why would we do anything to harm our patients? But let’s consider the example of tests or treatments we consider long shots — those which probably won’t help, but possibly could. It is one thing when someone asks us to mix an innocuous herbal remedy into a child’s feeding formula. It is quite another when we’re considering giving a child with advanced cancer a highly toxic drug that might treat the cancer, but will certainly cause the child pain and suffering.
Our daily discussions in the PICU about the proper action to take, and particularly about who should decide, often lead us directly to the third key principle of medical ethics, which is autonomy. Autonomy means physicians should respect a patient’s wishes regarding what medical care he or she wants to receive. Years ago patients tended to believe, along with their physicians, that the doctor always knew best. The world has changed since that time, and today patients have become much more involved in decisions regarding their care. This is a good thing. Recent legal decisions have emphasized the principle that patients who are fully competent mentally may choose to ignore medical advice and do (or not do) to their own bodies as they wish.
The issue of autonomy becomes much more complicated for children, or in the situation of an adult who is not able to decide things for himself. Who decides what to do? In the PICU, the principle of autonomy generally applies to the wishes of the family for their child. But what if they want something the doctors believe is wrong or dangerous? What if the family cannot decide what they want for their child? Finally, what if the child does not want what his or her parents want — at what age and to what extent should we honor the child’s wishes? As you can see, the simple issue of autonomy is often not simple at all.
The fourth key principle of medical ethics, justice, stands somewhat apart from the other three. Justice means physicians are obligated to treat every patient the same, irrespective of age, race, sex, personality, income, or insurance status.
You can see how these ethical principles, at first glance so seemingly straightforward, can weave themselves together into a tangled knot of conflicting opinions and desires. The devil is often in the details. For example, as a practical matter, we often encounter a sort of tug-of-war between the principles of beneficence and nonmaleficence — the imperatives to do helpful things and to avoid harmful ones. This is because everything we do carries some risk. We have different ways of describing the interaction between the two, but we often speak of the “risk-benefit ratio.” Simply put: Is the expected or potential benefit to the child worth the risk the contemplated test, treatment, or procedure will carry?
The difficult situations, of course, are those painted in shades of grey, and a good number of them are. In spite of that, thinking about how these four principles relate to each other is an excellent way of framing your thought process.
If you are interested in medical ethics, there are many good sites where you can read more. Here is a good site from the University of Washington, here is a link to the President’s Council on Bioethics (which discusses many specific issues), and here is an excellent blog specifically about the issues of end of life care maintained by Thaddeus Pope, a law professor who is expert in the legal ramifications. If you want a really detailed discussion, an excellent standard book is Principles of Biomedical Ethics, by Beauchamp and Childress.
Intuitively, we all know that an overworked nurse can’t give each patient the care he or she needs; if you stretch nursing staffing too thin, bad things will happen. But do we have any idea what the appropriate staffing level is? There’s a fascinating recent study in the New England Journal of Medicine that tries to get at the answer to this key question. The investigators studied adult patients, deliberately excluding children from their analysis, but I see no reason why the results would be different for children.
The first thing to note about the study is that it took place in a hospital that already had an outstanding safety record, with far fewer deaths than would have been predicted. Yet even in this outstanding institution, patients who were cared for during shifts when the nurses were overworked and short-staffed had a higher risk of death. The risk was also cumulative: the more such high-risk nursing shifts a patient was exposed to, the higher the risk of death.
Besides looking at simple short-staffed shifts, those for which there were just plain not enough nurses working, the study looked at another variable — patient turnover. This is significant because admitting new patients, discharging patients from the hospital, and transferring them to another part of the hospital generates an enormous amount of administrative work for the nurses, consuming time that otherwise could go to bedside care.
I find this to be a compelling study. All of us who do hospital medicine know that some shifts are busier than others, and that during such busy shifts the nurses have less time for each patient. This can be irritating to patients and their families. But now we know it can actually be dangerous. I also think the study emphasizes the fact that the administrative burden on the nurse of getting patients into, out of, and around the hospital is huge. I’ve long felt that much of the busywork of that process is unnecessary; now we know that it, too, can be dangerous. As the authors point out:
Our finding that below-target nurse staffing and high patient turnover are independently associated with the risk of death among patients suggests that hospitals, payers, and those concerned with the quality of care should pay increased attention to assessing the frequency with which actual staffing matches patients’ needs for nursing care.
I’ve written before about how poor children and children without health insurance are far more likely to need PICU care than are more affluent children. For example, although children on Medicaid account for 20 – 25% (depending upon the state) of children in America, about half of all children in America’s PICUs are on Medicaid. Once in the PICU, though, do the poorer kids have worse outcomes than the richer kids? Does their chronically disadvantaged situation set them up for being more difficult to treat and cure?
I’ve been looking for information about this crucial question for some time and recently found some disturbing data about it, in the form of an article in the journal Pediatric Critical Care Medicine (volume 7, pages 2-6, 2006). You need a subscription to the journal to get the article, but I’ll summarize its important findings for you.
First, the study confirmed that children without insurance are far more likely to suffer critical illness: “. . . far more serious illness and injuries were associated with uninsured children admitted to the PICU.” But did that make it more likely that these children would have worse outcomes, or even die?
Unfortunately, uninsured children did have poorer chances of survival. In fact, they were three to four times more likely to die in the PICU. Why was that? The answer was not that they received different care in the PICU once they got there; the answer was that they were much sicker to start with. Compared to children with either private insurance or public assistance (Medicaid), the uninsured children came into the PICU in much worse shape, with far worse derangements in their physiological state. Most likely their parents, fearful of the cost, delayed bringing them to the hospital until sometimes it was too late to save them.
What can we learn from this? Lack of health insurance kills children. That is both a tragedy and a terrible indictment of how we presently care for America’s children.
I’ve worked in several PICUs over the years. Some were as large as 36 beds (which counts as pretty large in the PICU world), and some were as small as 4 beds. Inevitably, larger PICUs can offer services that smaller ones cannot. This is particularly the case with more specialized services, like some kinds of surgery and access to super-specialists. When I’ve been in a smaller unit, there have been times when I’ve needed to transfer children to a larger one so they could get these more esoteric services. When I’ve been in a larger unit, I’ve received transfers of kids like that. Would these children who needed transfer have been better off going to the larger PICU in the first place?
The dilemma for smaller PICUs is that they can never become as experienced in caring for children with rare conditions, and it is hard for someone working in one of the smaller units to keep their skill levels up. Research has shown, not surprisingly, that physicians who do the same thing a lot are better at doing it than physicians who don’t do it so often. On the other hand, transferring a child from a local, smaller PICU to a bigger one is often hard on families, since often the larger unit is in another city — sometimes in another state. And many PICU problems can be handled just fine in a smaller place, nearer to home.
The process of transferring a critically ill child — by ambulance, helicopter, or airplane — carries risks, too. These risks are not just those inherent in traffic or flight. I can tell you from personal experience that no matter how many supplies and how much equipment you bring on the transport, you still can’t recreate a PICU. And the working environment of a transport vehicle, especially a helicopter, is cramped and noisy — far from optimal. So sometimes a critically ill child is safer staying where she is, at least until she can be made more stable.
What to do? As pediatric intensivists, we are sort of feeling our way as we figure this out. Most smaller PICUs have formal or informal relationships with larger units to which they can send children they cannot handle. But these relationships are a patchwork across the nation — we simply don’t know the ideal size for a PICU. When PICUs began several decades ago they were rare, found only in large children’s hospitals. In those days people’s expectations were different about what smaller community hospitals needed to provide. In today’s world, we believe all children should have access to the same life-saving PICU care. So smaller hospitals began to open PICUs to provide that care as best they could. Someday PICU care may be truly regionalized, with formal relationships between big and small units in the region, complete with standardized criteria for appropriate care at one unit or the other. We don’t have anything like that yet.
What parents should realize is that there are differences between what a smaller and a larger PICU can do. If your child has a particularly unusual or difficult problem, it is never inappropriate to ask your child’s doctor if transfer to a larger unit makes sense.
Our bodies are mostly water — about 60% water, in fact. This varies a little with age and sex, but it is a good rough estimate. Of that water, about a third is outside the body’s cells, so-called extracellular fluid, and two thirds is inside the body’s cells, so-called intracellular fluid. The easiest way to remember this is the “60/40/20” rule: total body water is 60% of body weight, intracellular water is 40% of body weight, and extracellular water is 20%. Water can move back and forth between these compartments as needed. Dehydration is a relative deficit of body water, and children are especially prone to developing it.
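The 60/40/20 rule is easy to turn into rough numbers for a particular child. Here is a small sketch; the percentages are just the rule of thumb above, not precise physiology, and one liter of water weighs one kilogram:

```python
def body_water_liters(weight_kg):
    """Rough fluid-compartment estimates from the 60/40/20 rule."""
    return {
        "total": 0.60 * weight_kg,          # 60% of body weight
        "intracellular": 0.40 * weight_kg,  # 40% of body weight
        "extracellular": 0.20 * weight_kg,  # 20% of body weight
    }

# A 30 kg (roughly 66 lb) child carries about 18 liters of water:
# around 12 liters inside cells and 6 liters outside them.
print(body_water_liters(30))
```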
Dehydration results from loss of body water exceeding replacement of it. Our bodies lose fluid constantly: our kidneys must make a minimum amount of urine to stay functional; we lose water as sweat; and our breath, being fully humidified (saturated with water), takes water from our bodies. These are called obligatory, or insensible, water losses. Children have proportionately higher insensible losses than adults because their ratio of surface area to body mass is higher. So, compared with adults, children need to take in a relatively larger amount of water to keep all those body compartments full.
What causes dehydration in children? The most common causes are those that increase losses, such as diarrhea, vomiting, or rapid breathing from some respiratory problem. Sick children also tend to take in less fluid, so decreased intake of fluids also contributes.
How can we tell if a child is dehydrated? The most common early sign is decreased urine production, since the kidneys respond to the problem by conserving water. The urine also becomes more concentrated because it contains less water. As a rule of thumb, the kidneys of a child weighing 10 pounds normally put into her bladder about 1-2 teaspoons of urine per hour. What a parent can tell, for infants, is whether the baby is wetting diapers at her usual rate.
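The teaspoon rule scales with weight. The figure underlying it — my generalization of the example above, not stated in the text — is the common rule of thumb of roughly 1 to 2 milliliters of urine per kilogram of body weight per hour, with a teaspoon taken as about 5 mL:

```python
def expected_urine_tsp_per_hour(weight_lb):
    """Rough normal hourly urine output range, in teaspoons,
    from the 1-2 mL/kg/hr rule of thumb."""
    weight_kg = weight_lb / 2.2           # convert pounds to kilograms
    low_ml, high_ml = 1 * weight_kg, 2 * weight_kg
    return low_ml / 5, high_ml / 5        # one teaspoon is about 5 mL

# A 10-pound infant works out to roughly 1 to 2 teaspoons per hour,
# matching the rule of thumb in the text.
low, high = expected_urine_tsp_per_hour(10)
```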
As dehydration becomes more advanced, there are other signs we look for. These include a decrease in weight (because so much of our body is water), listlessness, poor color with a doughy feel to the skin, and a more rapid heart rate.
Most cases of dehydration can be treated with increased oral fluids, but sometimes, particularly if the child is too listless to drink, we use intravenous fluids for a day or so until the child is better. We have a good rough guide, based on body weight, about how much fluid a child needs to keep from becoming dehydrated when they are sick.
A common version of the formula divides children into three categories: less than twenty-five pounds, twenty-five to fifty pounds, and over fifty pounds. The first group needs about a half teaspoon of fluid each hour for each pound of body weight. This means a ten-pound child needs five teaspoons an hour, which is a bit less than an ounce, or roughly two and a half ounces every three hours. A twenty-pound child then needs twice that: about five ounces every three hours.
The second group of children, those weighing twenty-five to fifty pounds, need about four to six ounces every three hours. Children weighing over fifty pounds need about six to eight ounces every three hours.
A cup of juice is usually about four ounces and a large glass closer to eight ounces. So offering your child something to drink every three to four hours should keep them well hydrated.
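The three weight bands above can be summarized in a short sketch. This simply restates the rough guide in the text (using one fluid ounce as six teaspoons); it is an illustration, not a dosing tool:

```python
def fluid_oz_per_3_hours(weight_lb):
    """Rough maintenance fluid needs, in fluid ounces per three hours,
    following the three weight bands described above.
    Returns a (low, high) range."""
    if weight_lb < 25:
        # Half a teaspoon per pound per hour, summed over three hours:
        tsp = 0.5 * weight_lb * 3
        oz = tsp / 6                 # six teaspoons per fluid ounce
        return oz, oz
    elif weight_lb <= 50:
        return 4.0, 6.0
    else:
        return 6.0, 8.0

# A ten-pound infant: about 2.5 ounces every three hours.
print(fluid_oz_per_3_hours(10))
```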
You can read an excellent discussion about dehydration in children here.