Life on the bell curve

Opinion by Mindy Perkins
May 18, 2015, 5:43 p.m.

Pedometers, nutrition facts, FitBit: We’re saturated with apps and devices for keeping track of our individual habits so that we can live more healthful lives. We can monitor how much we sleep or exactly how many calories we eat in an effort to conform to numerical recommendations derived from measurements of populations and statistical analyses of the resulting data.

While the guidelines are valuable and have life-saving clinical applications, focusing too much on numbers can divorce us from the reality we’re attempting to understand. For one, bias is inherent in what we choose to measure: Since it’s impossible to measure everything, we have to pick the characteristics we think are most likely to relate to what we want to understand, which can lead us to overlook other contributing factors or alternate explanations. Then there’s also the question of what we do with the statistics we generate. In making diagnoses, we have to choose where to draw the line between “normal” and “abnormal,” to decide what worries us and what doesn’t. We reason that if we can quantify, we can control–a principle we should rethink in light of probabilistic uncertainties and individual variation.

A probability distribution tells you how common a characteristic is in a population. In other words, if you choose an individual from a population and measure some characteristic, how likely are you to get a certain value?
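To make that concrete, here is a minimal sketch in Python with made-up numbers: suppose adult heights in some population were roughly normally distributed with a mean of 170 cm and a standard deviation of 10 cm (purely illustrative values, not real statistics). We could then ask how likely a randomly chosen person is to fall in a given range.

```python
# A toy probability distribution: adult height, modeled (hypothetically)
# as a normal distribution with mean 170 cm and standard deviation 10 cm.
import random

MEAN_CM = 170.0  # illustrative, not a real population statistic
SD_CM = 10.0

def sample_height():
    """Draw one individual's height from the assumed distribution."""
    return random.gauss(MEAN_CM, SD_CM)

# Estimate, by simulation, the probability that a randomly chosen
# individual measures between 180 cm and 190 cm.
trials = 100_000
hits = sum(180 <= sample_height() <= 190 for _ in range(trials))
print(f"P(180 cm <= height <= 190 cm) ~ {hits / trials:.3f}")
```

The distribution describes the population as a whole; any single draw can still land anywhere on the curve.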

Sometimes we use distributions to determine how justified we are in making assumptions, like associating tall athletes with basketball. Yet just as not all tall athletes play basketball, not all basketball players are tall. Distributions deal with averages across populations. In everyday life, we deal with individuals. As Daniel Kahneman writes in “Thinking, Fast and Slow,” “Statistical base rates are facts about a population to which a case belongs, but they are not relevant to the individual case.”

The principle behind personalized medicine is to tailor treatment to the patient–to recognize that people are unique, with singular genomes and circumstances that influence their health, and thus the tests and therapies that might be most effective for them. Implementing personalized medicine on a wide scale will require significant time, effort, and individual data collection and processing.

But there’s another question lurking at the fringes: Will personalized medicine be used to recognize individual variation, or to treat it? Is having a genetic propensity for higher-than-average blood pressure an individual characteristic or a risk? Can we even tell the difference except by using population statistics about the effects of high blood pressure?

Where we draw the line between “variable” and “problematic” extends beyond issues of physical wellness. For example, one possible explanation for the apparent increase in autism spectrum disorders (ASD) is ontological: Maybe we’re just recognizing more things as belonging on the spectrum than we did before. If we have indeed broadened the definition of ASD, then where does the spectrum “start”? What exactly are we considering to be “normal” human behavior? Could we continue to broaden the spectrum to include any slight deviation from “normal”?

These same questions apply to other facets of mental function and personality as well. When do you stop being melancholy and start being depressed? When do you go from being easily distracted to having ADD? Take the argument to its extreme and we could classify every personality quirk as a mental disorder in need of treatment. What are we actually treating? Are we lumping people who don’t need help with those who genuinely do?
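One way to see the stakes of where we draw those lines: if a trait falls on a bell curve, the cutoff we choose directly determines how many people get labeled. The sketch below is purely illustrative, treating a hypothetical trait as scored on a standard normal scale; it shows how moving a diagnostic threshold changes the fraction of the population classified as “abnormal.”

```python
# Purely illustrative: a trait scored on a standard normal scale (mean 0, SD 1).
# The share of people labeled "abnormal" depends entirely on where we put the cutoff.
from statistics import NormalDist

trait = NormalDist(mu=0.0, sigma=1.0)

for cutoff_sd in (3.0, 2.0, 1.5, 1.0):
    # Fraction of the population scoring above the cutoff (one-sided).
    flagged = 1 - trait.cdf(cutoff_sd)
    print(f"cutoff at +{cutoff_sd} SD: {flagged:.1%} of people labeled 'abnormal'")
```

Nothing about the underlying population changes between those lines of output; only the definition does.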

Maybe this is all alarmist. But the issue is especially relevant in light of two considerations: (a) how much data we now collect about ourselves, which increases the opportunities we have to quantify our characteristics and compare them to numerical averages; and (b) an inclination in the medical system to do too much rather than too little.

In a recent article in the New Yorker, Atul Gawande notes that there is a tendency in American society to err on the side of overdiagnosis and overcompensation–that we’re so afraid of missing something potentially harmful that we’ll go to extreme measures to address any “abnormalities.” Statistics play a vital role in this process: For every medical test, there’s some probability it will give a false positive or a false negative; for every measurement, there’s some probability of error from noise; for every symptom, there’s some probability it actually indicates a disease; for every treatment, there’s some probability it won’t work. And there is some probability that the measured phenomenon is due to individual variation from the norm that doesn’t actually pose a problem.
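A toy calculation, again with invented numbers, shows how these probabilities interact: even a test with a 5 percent false-positive rate and a 10 percent false-negative rate produces mostly false alarms when the condition it screens for is rare.

```python
# Hypothetical screening test, worked through with Bayes' rule and invented numbers.
prevalence = 0.01           # 1% of the population actually has the condition
sensitivity = 0.90          # P(positive | condition) -> 10% false negatives
false_positive_rate = 0.05  # P(positive | no condition)

# Probability of a positive result, counting both true and false positives.
p_positive = (sensitivity * prevalence
              + false_positive_rate * (1 - prevalence))

# Probability that a person who tests positive actually has the condition.
p_condition_given_positive = sensitivity * prevalence / p_positive

print(f"P(condition | positive test) = {p_condition_given_positive:.2%}")
# ~15%: most positives come from healthy people, simply because
# far more people are healthy than sick.
```

None of this makes the test useless; it means that what a positive result tells us about one person depends on a population-level base rate, which is exactly the kind of fact Kahneman warns us not to confuse with the individual case.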

We need statistics to make society and medicine safer and more effective. We need baseline values so that we know when there are problems. Yet ultimately we cannot control every number we measure, and maybe we shouldn’t try. Averages may be calculated from individuals, but individuals can’t be calculated from the average.


Contact Mindy Perkins at mindylp ‘at’ stanford.edu. 

Mindy Perkins ‘15 is an opinions columnist for The Stanford Daily. As a proud Coloradoan and electrical engineering major, her ultimate goal is to apply engineering techniques to researching animals, as well as to draw inspiration from the natural world for engineering applications. In her free time, she enjoys writing, playing the viola and piano and drawing animals, dinosaurs and dragons.
