Why is it that journalists seem to generally pick up on some new study of human cognition and act as if no earlier studies exist? Take this example, where Bob Sullivan and Hugh Thompson combine their talents to tell us about new research on the impacts of interruptions:
Do interruptions make us dumber? Quite a bit, according to new research by Carnegie Mellon University’s Human-Computer Interaction Lab.
There’s a lot of debate among brain researchers about the impact of gadgets on our brains. Most discussion has focused on the deleterious effect of multitasking. Early results show what most of us know implicitly: if you do two things at once, both efforts suffer.
In fact, multitasking is a misnomer. In most situations, the person juggling e-mail, text messaging, Facebook and a meeting is really doing something called “rapid toggling between tasks,” and is engaged in constant context switching.
As economics students know, switching involves costs. But how much? When a consumer switches banks, or a company switches suppliers, it’s relatively easy to count the added expense of the hassle of change. When your brain is switching tasks, the cost is harder to quantify.
There have been a few efforts to do so: Gloria Mark of the University of California, Irvine, found that a typical office worker gets only 11 minutes between each interruption, while it takes an average of 25 minutes to return to the original task after an interruption. But there has been scant research on the quality of work done during these periods of rapid toggling.
Hold on. What about the work of Jason Watson and David Strayer, who researched “supertaskers”? They studied 200 subjects in a controlled fashion, and determined that 2.5 percent could in fact drive a car in a difficult simulation while performing a complex set of cognitive tasks (so-called OSPAN tasks). Those researchers stated (see Supertaskers: Profiles In Extraordinary Multitasking Ability):
Supertaskers are not a statistical fluke. The single-task performance of supertaskers was in the top quartile, so the superior performance in dual-task conditions cannot be attributed to regression to the mean. However, it is important to note that being a supertasker is more than just being good at the individual tasks. While supertaskers performed well in single-task conditions, they excelled at multi-tasking.
This research is routinely overlooked, especially when someone comes up with results that seem to confirm the conventional wisdom that a) multitasking is impossible, b) people are bad at task switching, and c) it can’t be learned.
I bet there is an interesting overlap between multitasking and recovering from interruptions.
There is lots of evidence that the brain is plastic and people can get better at all sorts of challenges that involve something like multitasking through exposure. As I wrote several years ago: everyone can multitask successfully to some degree, and our ability to multitask is a combination of innate and learned behaviors.
Matt Richtel, Hooked on Gadgets, and Paying a Mental Price
Technology use can benefit the brain in some ways, researchers say. Imaging studies show the brains of Internet users become more efficient at finding information. And players of some video games develop better visual acuity. [… much of the technical discussion in the article is spread all over]
At the University of Rochester, researchers found that players of some fast-paced video games can track the movement of a third more objects on a screen than nonplayers. They say the games can improve reaction and the ability to pick out details amid clutter.
“In a sense, those games have a very strong both rehabilitative and educational power,” said the lead researcher, Daphne Bavelier, who is working with others in the field to channel these changes into real-world benefits like safer driving. […]
Other research shows computer use has neurological advantages. In imaging studies, Dr. Small observed that Internet users showed greater brain activity than nonusers, suggesting they were growing their neural circuitry.
The notion that our abilities are fixed after puberty, or limited by our genes, is simply wrong.
Even in this most recent example of journalistic oversimplification, the evidence is ambiguous. Sullivan and Thompson had researchers design an experiment to test the impact of interruptions on performance, and, as usual, the obsession with individual performance misses the real findings. Note that they completely ignore the Watson and Strayer study, and never consider for a second that multitasking, like almost all human abilities, is distributed on a bell curve, with most people having average capabilities and a few percent having high capabilities.
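The bell-curve point is easy to sanity-check with a back-of-envelope calculation. A minimal sketch in Python, assuming multitasking ability is normally distributed; the 1.96-sigma cutoff is my own illustrative choice, not a figure taken from the studies:

```python
from statistics import NormalDist

# Quick check of the bell-curve claim: if multitasking ability is
# normally distributed, what fraction of people fall in the far right
# tail? The 1.96-sigma cutoff is an illustrative assumption, not a
# figure from the Watson and Strayer study.
tail = 1 - NormalDist().cdf(1.96)
print(f"fraction above 1.96 sigma: {tail:.4f}")  # about 0.025, i.e. 2.5%
```

That roughly 2.5 percent tail lines up with the share of supertaskers Watson and Strayer report, which is consistent with (though of course does not prove) a simple bell-curve picture of the ability.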
Bob Sullivan and Hugh Thompson, Brain, Interrupted
To simulate the pull of an expected cellphone call or e-mail, we had subjects sit in a lab and perform a standard cognitive skill test. In the experiment, 136 subjects were asked to read a short passage and answer questions about it. There were three groups of subjects; one merely completed the test. The other two were told they “might be contacted for further instructions” at any moment via instant message.
During an initial test, the second and third groups were interrupted twice. Then a second test was administered, but this time, only the second group was interrupted. The third group awaited an interruption that never came. Let’s call the three groups Control, Interrupted and On High Alert.
We expected the Interrupted group to make some mistakes, but the results were truly dismal, especially for those who think of themselves as multitaskers: during this first test, both interrupted groups answered correctly 20 percent less often than members of the control group.
In other words, the distraction of an interruption, combined with the brain drain of preparing for that interruption, made our test takers 20 percent dumber. That’s enough to turn a B-minus student (80 percent) into a failure (62 percent).
But in Part 2 of the experiment, the results were not as bleak. This time, part of the group was told they would be interrupted again, but they were actually left alone to focus on the questions.
Again, the Interrupted group underperformed the control group, but this time they closed the gap significantly, to a respectable 14 percent. Dr. Peer said this suggested that people who experience an interruption, and expect another, can learn to improve how they deal with it.
But among the On High Alert group, there was a twist. Those who were warned of an interruption that never came improved by a whopping 43 percent, and even outperformed the control test takers who were left alone. This unexpected, counterintuitive finding requires further research, but Dr. Peer thinks there’s a simple explanation: participants learned from their experience, and their brains adapted.
Somehow, it seems, they marshaled extra brain power to steel themselves against interruption, or perhaps the potential for interruptions served as a kind of deadline that helped them focus even better.
The interesting question here, it seems to me, is what happens to overall performance across the group when people believe they might be interrupted? It increases. And people who are interrupted get better at recovering from interruptions, too.
So, in a business setting, where the expectation of being interrupted is common, people adopt a cognitive stance that improves their performance, and as a result the performance of the group as a whole goes up. Yes, individuals who are actually interrupted see a drop in performance (though this likely varies with the proportion of supertaskers in the group), but overall that expectation improves things in general. This suggests it is beneficial to have people expect to be interrupted, because the expectation improves performance for the group. In a sense, remaining aware that you are connected to others, and that they may call on you for help, keeps you sharp.
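The arithmetic behind this group-level claim can be sketched directly from the figures quoted above: interrupted subjects scored 14 percent worse than control on the second test, while those merely expecting an interruption scored 43 percent better. The even split between the two conditions is my own illustrative assumption:

```python
# Back-of-envelope sketch of the group-level argument, using the
# second-test figures quoted above. The 50/50 split between conditions
# is an assumption for illustration, not a number from the study.
interrupted_change = -0.14    # interrupted: 14% worse than control
on_high_alert_change = 0.43   # warned but never interrupted: 43% better
share_interrupted = 0.5       # assumed fraction of the group actually interrupted

net_change = (share_interrupted * interrupted_change
              + (1 - share_interrupted) * on_high_alert_change)
print(f"net group change: {net_change:+.3f}")  # positive: the group improves on balance
```

On these numbers the net stays positive until roughly three-quarters of the group is actually interrupted, which is what makes the expectation-of-interruption effect dominate at the group level.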
This is a corollary of Boyd’s Law, which I originally formulated in 2003, and now state in this form:
The value of a person in a social network can be measured by the increase in connections the person makes or that others make with that person, and the resulting increase in network communications.
The takeaway from this is not at all what the journos involved would have you believe. No surprise there.
Sorry for the interruption!