BBC “Brain Training” Experiment: the Good, the Bad, the Ugly

You may already have read the hundreds of media articles published today with titles like "brain training doesn't work," based on the BBC "Brain Test Britain" experiment.

Once more, claims seem to go beyond the science backing them up … except that in this case it is the researchers, not the developers, who seem responsible.

Let’s recap what we learned today.

The Good Science

The study showed that putting together a variety of brain games in one website and asking people who happen to show up to play around for a grand total of 3-4 hours (10 minutes 3 times a week for 6 weeks) didn't result in meaningful improvements in cognitive functioning. This is useful information for consumers to know, because in fact there are websites and companies making claims based on similar protocols. And this is precisely the reason SharpBrains exists, to help both consumers (through our book) and organizations (through our report) to make informed decisions. The paper only included people under 60, which is surprising, but, still, this is useful information to know.

A TIME article summarizes the lack of transfer well (we will refer to the TIME article several times below; it was one of the best today):

“But the improvement had nothing to do with the interim brain-training, says study co-author Jessica Grahn of the Cognition and Brain Sciences Unit in Cambridge. Grahn says the results confirm what she and other neuroscientists have long suspected: people who practice a certain mental task — for instance, remembering a series of numbers in sequence, a popular brain-teaser used by many video games — improve dramatically on that task, but the improvement does not carry over to cognitive function in general.”

The Bad Science

The study, which was not a gold-standard clinical trial, contained obvious flaws both in methodology and in interpretation, as some neuroscientists have started to point out. Back to the TIME article:

“Klingberg (note: Torkel Klingberg is a cognitive neuroscientist who has published multiple scientific studies on the benefits of brain training, and founded a company on the basis of that published work)…criticizes the design of the study and points to two factors that may have skewed the results.

On average the study volunteers completed 24 training sessions, each about 10 minutes long — for a total of three hours spent on different tasks over six weeks. “The amount of training was low,” says Klingberg. “Ours and others’ research suggests that 8 to 12 hours of training on one specific test is needed to get a [general improvement in cognition].”

Second, he notes that the participants were asked to complete their training by logging onto the BBC Lab UK website from home. “There was no quality control. Asking subjects to sit at home and do tests online, perhaps with the TV on or other distractions around, is likely to result in bad quality of the training and unreliable outcome measures. Noisy data often gives negative findings,” Klingberg says.”

More remarkably, a critic of brain training programs had the following to say in this Nature article:

“I really worry about this study — I think it’s flawed,” says Peter Snyder, a neurologist who studies ageing at Brown University’s Alpert Medical School in Providence, Rhode Island.

…But he says that most commercial programs are aimed at adults well over 60 who fear that their memory and mental sharpness are slipping. “You have to compare apples to apples,” says Snyder. An older test group, he adds, would have a lower mean starting score and more variability in performance, leaving more room for training to cause meaningful improvement. “You may have more of an ability to see an effect if you’re not trying to create a supernormal effect in a healthy person,” he says.

Second, the “dosage” was small, Snyder said. The participants were asked to train for at least 10 minutes a day, three times a week for at least six weeks. That adds up to only four hours over the study period, which seemed modest to Snyder.
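The two quoted dosage figures differ slightly (TIME's "three hours" versus Snyder's "four hours") because one reflects the minimum protocol and the other the average number of sessions actually completed. A quick arithmetic sketch, using only the numbers cited above, reconciles them and shows how far both fall short of the 8-12 hours Klingberg cites:

```python
# Minimum protocol: 10 minutes per session, 3 sessions a week, for 6 weeks.
protocol_hours = 10 * 3 * 6 / 60       # 3.0 hours

# Average actually completed: 24 sessions of about 10 minutes each.
completed_hours = 24 * 10 / 60         # 4.0 hours

# Klingberg's cited range for transfer to general cognition (hours).
suggested_min, suggested_max = 8, 12

print(f"Minimum protocol: {protocol_hours:.1f} hours")
print(f"Average completed: {completed_hours:.1f} hours")
print(f"Shortfall vs. suggested minimum: {suggested_min - completed_hours:.1f} hours")
```

Either way, the total training time is well under half of the lowest dose that the cited research suggests is needed for general improvement.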

The Ugly Logic

Let’s think by analogy. Aren’t the BBC-sponsored researchers basing their very broad claims on this type of faulty logic?

  1. We have decided to design and manufacture our first car ever
  2. Oops, our car doesn’t work
  3. Therefore, cars DON’T, CAN’T and WON’T work
  4. Therefore, ALL car manufacturers are stealing your money.
  5. Case closed, let’s all continue riding horses. Why change?

Klingberg points this out too, stressing to TIME that the study “draws a large conclusion from a single negative finding” and that it is “incorrect to generalize from one specific training study to cognitive training in general.”

Posit Science (SharpBrains materials have been critical of Posit Science’s claims in the past, but in this case I couldn’t agree more with what they are saying) tries to debunk the debunker:

“This is a surprising study methodology,” said Dr. Henry Mahncke, VP Research at Posit Science. “It would be like concluding that there are no compounds to fight bacteria because the compound you tested was sugar and not penicillin.”

We do need serious science and analysis on the value and limitations of scalable approaches to cognitive assessment, training and retraining. There are very promising published examples of methodologies that seem to work (which the BBC study not only ignored but directly contradicted), mixed with many claims not supported by evidence. What concerns me is that this study may not only confuse the public even more, but also hinder much-needed innovation to ensure we are better equipped over the next 5-10 years than we are today.

Resources:

Previous SharpBrains articles: