Thursday, Feb. 28, 2008
The Science of Experience
By John Cloud/Tallahassee
The other day, a nurse at Florida State University in Tallahassee responded to an alarm in a hospital room where a patient named Stan D. Ardman lay gravely ill. Ardman's blood pressure had dropped precipitously, and when the nurse came in, Ardman wheezed and said, "I'm very nauseous and dizzy ... Having trouble breathing."
"O.K.," the nurse responded. "I'm Thomas. I'll be taking care of you." (Thomas is a pseudonym for a nurse in his mid-20s.) Then, in a tone of uncertainty, Thomas said under his breath, "Nauseous and dizzy?"
Ardman moaned, and his heart monitor squalled urgently.
Thomas looked at Ardman's chart, riffled through a book describing prescription drugs, searched a couple of drawers and accidentally dropped something on the floor. Ardman was already receiving a drip of dopamine, a compound that treats low blood pressure. Merely increasing the dosage of dopamine would almost certainly raise Ardman's blood pressure, relieve his nausea and dizziness, and bring him out of crisis.
But Thomas missed that simple solution. Instead, he asked Ardman if he had chest pain. "I'm just nauseous and dizzy," the patient replied. Just then, the monitor made an ominous noise indicating that Ardman's pressure was plummeting further. Thomas vacillated.
"Think out loud," another nurse pleaded to Thomas.
"Uh," Thomas mumbled. "Not sure."
And then he made a fatal mistake. He decided to give Ardman epinephrine, a drug that would certainly raise the patient's blood pressure but that, in combination with the dopamine Ardman had already received, would also spike his heart rate and possibly kill him. Sure enough, after epinephrine was administered, the patient lost consciousness and drifted toward death -- although just before he died, the simulation ended.
Stan D. Ardman isn't a real person but a robot simulator ("standard man") used to train medical personnel. Thomas, who is just out of nursing school, was participating in a Florida State study designed to compare the performance of novice nurses like him against that of more experienced ones. The results were surprising. After Thomas left, I watched a nurse with more than 25 years' experience go through the same simulation. At first, when the monitor indicated a drop in blood pressure, Monica (also a pseudonym) coolheadedly began to identify possible treatments. Within seconds she noticed Ardman's dopamine drip, and she knew it was the answer. "She's so fast," said James Whyte IV, an assistant professor at Florida State's School of Nursing who was controlling the robot from a hidden room where we sat watching.
Still, Monica didn't know the robot's weight, which she needed in order to calculate the dopamine increase. She moved to pick up Ardman's chart, which listed his weight, but just then the simulator's blood pressure dropped radically, prompting Monica to make the same error that Thomas had made: she went for epinephrine. After the drug sent Ardman into ventricular tachycardia, Monica was fast enough to shock him with the defibrillator. But this time poor Mr. Ardman died before the experiment ended. The expert had killed Ardman even faster than the novice had.
In making the case that she would be a better President than Barack Obama, Hillary Rodham Clinton never forgets to summon the argument that she has more experience. But as the Florida State simulations show, experience doesn't always help. In fact, three decades of research into expert performance have shown that experience itself -- the raw amount of time you spend pursuing any particular activity, from brain surgery to skiing -- can actually hinder your ability to deliver reproducibly superior performance.
How can that be? It is widely accepted that mastering most complex human endeavors requires a minimum of 10 years' experience. The 10-year rule was posited as long ago as 1899, when Psychological Review ran a paper saying it takes at least that long to become expert in telegraphy. The modern study of expert performance began in 1973, when American Scientist published an influential article by researchers Herbert Simon and William Chase saying chess enthusiasts had to play for at least 10 years before they could win international tournaments. (Bobby Fischer was an exception; he played for nine years before becoming a grand master at 15.)
The 10-year rule explains, in an obvious and intuitive way, why the novice nurse Thomas failed his simulation: he had completed only two years of training, and he got rattled. "It's funny the things that anxiety can do to people," Whyte, the nursing professor, said, as Thomas ignored the drip. Monica, by contrast, instinctively looked up to see what medications were on the line. But then she made the same error as her inexperienced counterpart. Why?
While 10 years is a necessary minimum to achieve expertise in most fields, it doesn't guarantee success. As Anders Ericsson writes in the introduction to the 901-page Cambridge Handbook of Expertise and Expert Performance (2006), "The number of years of experience in a domain is a poor predictor of attained performance." Ericsson, 60, is a professor at Florida State who moved to the U.S. from his native Sweden in 1976 to study with Simon, co-author of the seminal chess paper. (Simon went on to win a Nobel Prize in economics for his work on decision-making.) Today Ericsson runs Florida State's Human Performance Laboratory, where Thomas and Monica participated in the robot simulations.
Ericsson, a large, gentle man with unkempt salt-and-pepper hair and a button missing from his jacket, has become the world's leading expert on experts, a term he distinguishes from "expert performers" -- those individuals, possessing both experience and superior skill, who tend to win Nobel Prizes or international chess competitions or Olympic medals. Ericsson notes that some entire classes of experts -- for instance, those who pick stocks for a living -- are barely better than novices. (Experienced investors do perform a little ahead of chance, his studies show, but not enough to outweigh transaction costs.)
Experts tend to be good at their particular specialty, but when something unpredictable happens -- something that changes the rules of the game they usually play -- they're little better than the rest of us. Chess grand masters can recall almost entire chessboard layouts from their games (approximately 25 pieces, compared with an average of four for novices), but when chessmen are randomly arranged on a board, those grand masters can recall the placement of only about six pieces. Similarly, experienced actors remember script lines much better than novices do, but they are no better at remembering material other than scripts.
Ericsson's primary finding is that rather than mere experience or even raw talent, it is dedicated, slogging, generally solitary exertion -- repeatedly practicing the most difficult physical tasks for an athlete, repeatedly performing new and highly intricate computations for a mathematician -- that leads to first-rate performance. And it should never get easier; if it does, you are coasting, not improving. Ericsson calls this exertion "deliberate practice," by which he means the kind of practice we hate, the kind that leads to failure and hair-pulling and fist-pounding. You like the Tuesday New York Times crossword? You have to tackle the Saturday one to be really good.
Take figure skating. For the 2003 book Expert Performance in Sports, researchers Janice Deakin and Stephen Cobley observed 24 figure skaters as they practiced. Deakin and Cobley asked the skaters to complete diaries about their practice habits. The researchers found that elite skaters spent 68% of their sessions practicing jumps -- one of the riskiest and most demanding parts of figure-skating routines. Skaters in a second tier, who were just as experienced in terms of years, spent only 48% of their time on jumps, and they rested more often. As Deakin and her colleagues write in the Cambridge Handbook, "All skaters spent considerably more time practicing jumps that already existed in their repertoire and less time on jumps they were attempting to learn." In other words, we like to practice what we know, stretching out in the warm bath of familiarity rather than stretching our skills. Those who overcome that tendency are the real high performers.
Experience is not only insufficient for expert performance; in some cases, it can hurt. Highly experienced people tend to execute routine tasks almost unconsciously -- think of Monica immediately glancing up to see Ardman's dopamine drip -- and they retrieve the information they need quickly, rarely pausing to apply rules. Driving is a good example. In a 1991 paper in the journal Ergonomics, a team of researchers found that while new drivers and truly expert drivers (members of Britain's Institute of Advanced Motorists) checked their mirrors often and applied their brakes early, regular drivers with 20 years' experience rarely checked their mirrors and braked much later. Experience in a particular task frees space in your mind for other cognitive pursuits -- wondering what's for dinner, answering your cell, singing along with Justin Timberlake -- but those things can distract you from the accident you're about to have. Experience can also lead to overconfidence: a study in the journal Accident Analysis & Prevention found that licensed race-car drivers had more on-the-road accidents than controls did.
Which is not to say that, if elected, Clinton or John McCain would drive the country off a cliff -- or that Obama, as a comparative novice, would be more cautious and less burdened by his habits. But the study of experience does indicate that the more seasoned candidates wouldn't automatically outperform Obama as President. On the other hand, Ericsson's conclusion that deliberate practice leads to better performance might favor the punctilious, famously diligent Clinton.
The Cambridge Handbook concludes that great performance comes mostly from deliberate practice but also from another activity: regularly obtaining accurate feedback. In a 1997 study published in the journal Medical Decision Making, researchers found that only 4% of interns had known a group of elderly patients for more than a week; by comparison, nearly half the highly experienced attending physicians had known the patients for more than six months. But even with the advantages of years of medical experience and months of knowing the patients, the attending physicians were no more accurate than the interns at predicting the patients' end-of-life preferences, a crucial factor in determining whether a patient has a good death. It was attention to the patients' feelings and values that mattered, not having more knowledge of their diseases. And in the end, determining which of the presidential candidates pays more attention to your concerns requires not adding up their years of experience but a far more complex calculation: deciding what their experiences have led them to truly value.