I was recently asked on Twitter if I thought brain training games like Lumosity were “any good.” My short Twitter response was that the answer is YES, NO, and MAYBE. Here is a more detailed explanation from my perspective as a psychologist, researcher and maker of serious games.
When is the answer “YES”?
Arthur Rubinstein, often considered the greatest pianist of his generation, was apparently approached on the streets of New York City by someone who asked him how to get to Carnegie Hall. The story goes that Rubinstein replied, “Practice, practice, practice!”
And so it goes. If you want to get better at something, you have to practice.
Video games are great tools that offer endless opportunities to practice and learn. In contrast, in traditional schooling approaches, you take a test, you don’t do so well, you get a low score, you feel defeated and diminished, and then you move on to the next assignment. You’re not allowed to take the test again and master the material. No wonder kids love video games. They love the FUN of mastering challenges and the feeling of accomplishment! As far as I’m concerned, video games prepare people much better for the real world, where success is based on learning from failures, mastering challenges, and reaching goals no matter how many times you have failed along the way.
The ability to practice and get better is at the core of what is fun about games and why serious games can be used to teach and train.
Now let’s turn to brain games. They do the same thing. They present players with cognitive tasks in a game format that players can take multiple times to improve their score on these tasks. So it seems at face value to make sense that you can improve your cognitive functioning by playing these games.
The cognitive tasks in brain games have been around for quite a while. Psychologists have used them for years to assess things like attention and memory. For example, the Stroop Test has been around since John Ridley Stroop first published his seminal paper on the Stroop Effect in 1935. Brain games often include Stroop Tests among the many cognitive tasks in their lineup.
Quite unfortunately for the players, the Stroop Test is not about how many addictive caramel waffle cookies you can eat in Holland (stroopwafels…YUM!). It is about how quickly you can correctly name the colors of words. It is not as easy as it sounds. You can give it a try yourself.
Take a look at the graphic below and as quickly as possible, say out loud the COLORS on left hand side and then the COLORS (not the words) on the right hand side.
It was probably really easy for you to name the colors on the left hand side, where the colors corresponded to the words. But you probably struggled to name the colors on the right hand side, where the colors differed from the meaning of the words. Pretty much everyone struggles with naming the colors on the right hand side, because reading and gathering meaning from words is a much more ingrained and practiced habit than naming colors. The brain, through years of practice and learning, is automatically ready for you to process the meaning of the word first. This ingrained process interferes with your intention to disregard the meaning of the word and focus only on the color that you see.
Psychologists count how many errors people make in naming the colors on the right hand side. They also measure how much longer it takes people to name the colors on the right compared to the left hand side. This information gives them an idea of how well someone can focus their attention, process information, and inhibit unwanted responses. People who don’t do so well on this task compared to other people might also have difficulties in the real world staying focused on boring tasks or in keeping their mouth shut when a compelling but socially unacceptable thought comes to mind.
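For readers who like to see the arithmetic, here is a minimal sketch (with made-up trial data, not from any real study) of how a Stroop “interference” score is typically computed: the mean reaction time on incongruent trials minus the mean on congruent trials, plus an error count.

```python
# Toy Stroop scoring sketch with hypothetical data.
# Each trial: (condition, reaction_time_ms, answered_correctly)
trials = [
    ("congruent", 620, True), ("congruent", 580, True), ("congruent", 640, True),
    ("incongruent", 910, True), ("incongruent", 870, False), ("incongruent", 990, True),
]

def mean_rt(condition):
    # Mean reaction time over CORRECT trials only, a common convention
    rts = [rt for cond, rt, ok in trials if cond == condition and ok]
    return sum(rts) / len(rts)

# Interference: how much the conflicting word slows you down
interference_ms = mean_rt("incongruent") - mean_rt("congruent")
errors = sum(1 for cond, _, ok in trials if cond == "incongruent" and not ok)

print(f"Interference: {interference_ms:.0f} ms, incongruent errors: {errors}")
```

A lower interference score and fewer errors indicate better attentional control; brain games essentially track how these numbers change as you replay the task.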
With brain games you basically do tasks like the Stroop Test and get a score on how well you did based on errors and how long it takes you to respond. You can then try to improve your score, which is often expressed as a game IQ (BPI with Lumosity) or an age (with Brain Age). When you start a game like Brain Age, your mentor may tell you that your brain age is something like 80 years old. Your fear of being an old person with an old brain probably motivates you (even if you are very old!) to practice many times to improve your performance.
This game mechanic of feedback loops and motivation incentives engages players to practice cognitive tests like the Stroop Test over and over. With practice, players improve and their game IQ or brain age improves.
In sum, brain training games are really good at getting you to practice a cognitive test like the Stroop Test and improve your performance on THAT cognitive test. So YES! Brain games can improve your performance on THAT cognitive task. They are “good” in this way without a doubt.
However, the assumption is that as your score improves, you are improving your ability to focus your attention, process information, and inhibit unwanted responses not just on this task but in other areas as well.
The question is whether or not taking a test again and again and improving one’s score in brain games indicates a REAL improvement.
Are performance improvements on a cognitive task “REAL” improvements?
Claims that brain games improve cognitive abilities that translate to other domains of functioning run up against what researchers call a “testing effect”: the phenomenon of getting a higher score on a test simply because you have practiced that test before.
For example, I can take an IQ test and get a score of 100. Then I can figure out what I did wrong and take the IQ test again and again until I get a score of 115. The question is then: do I REALLY have a higher IQ? Or does my new IQ of 115 simply reflect the fact that I took the IQ test multiple times? Most people would say that my score improved because I practiced the test, but that I’m still as smart (or dumb) as I was before I practiced and got better at the IQ test.
We can do things in research to help us rule out testing effects. I know research is boring for most people but please indulge me to talk a bit about research design.
If I am evaluating the impact of an intervention on outcomes, I want to make sure that the improvements I see in test performance are a result of the intervention and not the result of having previously taken the test. This is one of many reasons why it is VERY important to have a control comparison group in evaluations of interventions. If there is an improvement in performance simply due to having taken a test before, and not due to the intervention, you will see a similar improvement in the control group that is no different from the intervention group. If the intervention “works,” you might still see a testing effect in the control group with improved scores, but the improvement in the intervention group should be much bigger. Get the picture?
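The logic above can be sketched in a few lines of arithmetic. The numbers here are made up purely for illustration: both groups take the test twice, so both show a practice gain; only the gain ABOVE the control group’s gain can be credited to the intervention.

```python
# Toy illustration (hypothetical numbers) of separating a testing effect
# from a real intervention effect using a control group.

pre_control, post_control = 100, 108    # control group: retook the test, no training
pre_trained, post_trained = 100, 109    # intervention group: trained, then retook the test

gain_control = post_control - pre_control   # gain from mere retesting (testing effect)
gain_trained = post_trained - pre_trained   # gain from training PLUS retesting

# The intervention's apparent effect is whatever is left over
intervention_effect = gain_trained - gain_control
print(f"Testing effect: {gain_control}, net intervention effect: {intervention_effect}")
```

In this made-up case, 8 of the 9 points of improvement in the trained group are explained by retesting alone, which is exactly the pattern a skeptic would expect from a pure testing effect.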
A good study of brain games that used a control group
A very large scientific study conducted through the BBC and published in Nature had very convincing findings. It showed that people who trained and improved their performance on cognitive tasks in brain games did not show improvements on similar tasks, tapping the same cognitive abilities, that they had not trained on. D’oh!
This video does a really nice job of summarizing the results. Please also pay attention to what the participants in the research thought the game was helping them do and how surprised they were by the results (see 0:15 for people’s self-report of how they thought they did when they played the games and their surprise when they see the scientific evidence later).
The results showed that the apparent cognitive gains on tasks that people were trained on did not transfer to cognitive tasks they were not trained on. Humph, it looks like a testing effect. This looks like a big NO for brain games being “good.”
Brain games and the research on them point out some problematic thinking when people design games and when they set out to do research on them.
Big Problem #1: Using in-game metrics to show efficacy
Game makers and researchers use in-game metrics to show that their game “works.” They say that improvements on cognitive tasks in their game mean that players improve in the real world, without measuring what happens in the real world. Or they say that increases in a physical skill, like balance on the Wii balance board, mean that these people don’t trip and fall in the real world. The problem is that they didn’t go out into the real world and see if these people had better balance there as well.
Listen. I love games and I believe in the power of serious games. But I could give a @!#’s !@# about what people do in a game. I care about what the game actually leads them to do in the real world. And yes it is difficult to measure things in the real world, but can we at least give it a try?
Big Problem #2: Using self-report qualitative assessments to show efficacy
Big Problem #2 happens when researchers attempt to get at what is going on in the real world by asking people, “Did you think the game made you change the way you do things in the real world?” When a significant number of people say yes and believe it very strongly, the researchers then conclude that their game “works.” I don’t believe these statements if they are not supported by similar changes in observable, objective behaviors or physiological changes compared to a placebo control group. There is just too much pressure on people to say positive things about an intervention in an experimental situation, and people are very bad at objectively evaluating their own performance (have you watched the auditions for “Idols” on TV recently?). It is not that people lie, though they do that sometimes to make themselves feel good; they also have cognitive biases that make them focus on evidence that supports their wishes and views of themselves. And like I said before, if people self-report one thing but the objective evidence says another, I will always go with the objective evidence, not the other way around. (Note: You can read my 2008 study of Re-Mission to see how the objective measure of adherence told a different story than the self-report measures of adherence and decide for yourself which measure you believe.)
Also, think of the BBC study discussed above. If you saw the video, there probably would have been an intervention effect shown in the study if the researchers had only asked people for a self-assessment of the games’ impact. Great. But the objective measures did not bear it out, much to the participants’ surprise! And by the way, the study would not have been published in Nature, nor would it have been considered scientifically rigorous by most scientists who read it, if the games had been evaluated based only on subjective self-reports.
Caveat. Now, the BBC was just one study. There are other studies that look at brain training games. The findings are mixed and the scientific quality of the studies varies. There are also many, many anecdotal reports that brain games work and transfer to other areas of real life. These are usually put out by the makers of the game.
BUT, as much as I bring up criticism of brain games and the tactics they use to get people to believe they work, I am actually not fully convinced that they don’t work. Science is probabilistic and depends on converging evidence to help us understand how things work. There simply have not been enough good studies of brain games to convince me they don’t work (or do work for that matter). The time is now to do a few really good studies to gather more good evidence to get at whether or not brain games are any good.
In the meantime, I will probably still enjoy playing a brain game now and then with the hope it is helping to keep my brain young. But I would probably enjoy them even more if I saw some more really good research coming out examining their claims in the “real world.”
Research, research, research!!!