I was recently asked on Twitter if I thought brain training games like Lumosity were “any good.” My short Twitter response was that the answer is YES, NO, and MAYBE. Here is a more detailed explanation from my perspective as a psychologist, researcher and maker of serious games.
When is the answer “YES”?
Arthur Rubinstein, often considered the greatest pianist of his generation, was apparently approached on the streets of New York City by someone who asked him how to get to Carnegie Hall. The story goes that Rubinstein replied, “Practice, practice, practice!”
And so it goes. If you want to get better at something, you have to practice.
Video games are great tools that offer endless opportunities to practice and learn. In contrast, in traditional schooling approaches, you take a test, you don’t do so well, you get a low score, you feel defeated and diminished, and then you move on to the next assignment. You’re not allowed to take the test again and master the material. No wonder kids love video games. They love the FUN of mastering challenges and the feeling of accomplishment! As far as I’m concerned, video games prepare people much better for the real world in which success is based on learning from failures, mastering challenges, and reaching goals despite the number of times they failed.
The ability to practice and get better is at the core of what is fun about games and why serious games can be used to teach and train.
Now let’s turn to brain games. They do the same thing. They present players with cognitive tasks in a game format that players can take multiple times to improve their score on these tasks. So it seems at face value to make sense that you can improve your cognitive functioning by playing these games.
The cognitive tasks in brain games have been around for quite a while. Psychologists have used them for years to assess things like attention and memory. For example, the Stroop Test has been around since John Ridley Stroop first published his seminal paper on the Stroop Effect in 1935. Brain games often include Stroop Tests among the many cognitive tasks in their lineup.
Take a look at the graphic below and, as quickly as possible, say out loud the COLORS on the left-hand side and then the COLORS (not the words) on the right-hand side.
It was probably really easy for you to name the colors on the left-hand side, where the colors corresponded to the words. But you probably struggled to name the colors on the right-hand side, where the colors differed from the meanings of the words. Pretty much everyone struggles with naming the colors on the right-hand side because reading and gathering meaning from words is a much more ingrained and practiced habit than naming colors. The brain, through years of practice and learning, is automatically ready for you to process the meaning of the word first. This ingrained process interferes with your intention to disregard the meaning of the word and focus only on the color that you see.
Psychologists count how many errors people make in naming the colors on the right-hand side. They also measure how much longer it takes people to name the colors on the right compared to the left-hand side. This information gives them an idea of how well someone can focus their attention, process information, and inhibit unwanted responses. People who don’t do so well on this task compared to other people might also have difficulties in the real world staying focused on boring tasks or keeping their mouths shut when a compelling but socially unacceptable thought comes to mind.
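The scoring logic behind this is simple enough to sketch. Below is a minimal, hypothetical Python example (the trial data, field names, and response times are all invented for illustration) that computes error rate and mean response time for congruent versus incongruent Stroop trials; the “interference” is the difference between the two mean times.

```python
# Minimal sketch of Stroop-test scoring (illustrative data, not a real dataset).
# A trial is "congruent" when the word matches its ink color (the left-hand
# side of the graphic) and "incongruent" when it does not (the right-hand side).

def score_stroop(trials):
    """Return error rate and mean response time per condition.

    Each trial is a dict with keys: word, ink, response, rt_ms.
    A response is correct when it names the ink color, not the word.
    """
    stats = {}
    for condition in ("congruent", "incongruent"):
        subset = [t for t in trials
                  if (t["word"] == t["ink"]) == (condition == "congruent")]
        errors = sum(t["response"] != t["ink"] for t in subset)
        stats[condition] = {
            "error_rate": errors / len(subset),
            "mean_rt_ms": sum(t["rt_ms"] for t in subset) / len(subset),
        }
    return stats

trials = [
    {"word": "red",  "ink": "red",   "response": "red",   "rt_ms": 520},
    {"word": "blue", "ink": "blue",  "response": "blue",  "rt_ms": 540},
    {"word": "red",  "ink": "green", "response": "green", "rt_ms": 760},
    {"word": "blue", "ink": "red",   "response": "blue",  "rt_ms": 820},  # error: named the word
]

stats = score_stroop(trials)
interference = stats["incongruent"]["mean_rt_ms"] - stats["congruent"]["mean_rt_ms"]
print(stats)
print(f"Stroop interference: {interference} ms")
```

With these made-up numbers, the incongruent trials show both more errors and slower responses, which is exactly the pattern the Stroop Effect predicts.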
This game mechanic of feedback loops and motivational incentives gets players to practice cognitive tests like the Stroop Test over and over. With practice, players improve, and their game IQ or brain age improves.
In sum, brain training games are really good at getting you to practice a cognitive test like the Stroop Test and improve your performance on THAT cognitive test. So YES! Brain games can improve your performance on THAT cognitive task. They are “good” in this way without a doubt.
However, the assumption is that as your score improves, you are improving your ability to focus your attention, process information, and inhibit unwanted responses not just on this task but in other areas as well.
The question is whether or not taking a test again and again and improving one’s score in brain games indicates a REAL improvement.
Are performance improvements on a cognitive task “REAL” improvements?
Claims that brain games actually improve cognitive abilities that translate to other domains of functioning run up against what research design calls a “Testing Effect”: the phenomenon of practicing a test and getting a higher score on that test simply as a result of the practice.
For example, I can take an IQ test and get a score of 100. Then I can figure out what I did wrong and take the IQ test again and again until I get a score of 115. The question is then, do I REALLY have a higher IQ? Or does my new score of 115 simply reflect the fact that I took the IQ test multiple times? Most people would say that my score improved because I practiced the test, but that I’m still as smart (or as dumb) as I was before I practiced and got better at the IQ test.
We can do things in research to help us rule out testing effects. I know research is boring for most people, but please indulge me while I talk a bit about research design.
If I am evaluating the impact of an intervention on outcomes, I want to make sure that the improvements I see in test performance are a result of the intervention and not the result of having previously taken the test. This is one of many reasons why it is VERY important to have a control comparison group in evaluations of interventions. If the improvement in performance is simply due to having taken the test before, and not due to the intervention, you will see a similar improvement in the control group that is no different from the intervention group. If the intervention “works,” you might still see a testing effect in the control group with improved scores, but the improvement in the intervention group should be much bigger. Get the picture?
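To make that logic concrete, here is a tiny worked example in Python. All the scores are invented for illustration: both groups improve from pretest to posttest because of the testing effect, but only the gain over and above the control group’s gain is credited to the intervention.

```python
# Hypothetical pre/post test scores (invented numbers, for illustration only).
# Both groups take the same test twice; only one group also gets the intervention.
control      = {"pre": 100, "post": 105}   # +5: retaking the test alone helps
intervention = {"pre": 100, "post": 113}   # +13: testing effect + intervention

# The control group's gain estimates the testing effect by itself.
testing_effect = control["post"] - control["pre"]
raw_gain = intervention["post"] - intervention["pre"]

# The difference between the two gains is what we can credit to the intervention.
intervention_effect = raw_gain - testing_effect

print(f"Testing effect alone: +{testing_effect}")
print(f"Intervention group gain: +{raw_gain}")
print(f"Gain attributable to the intervention: +{intervention_effect}")
```

If the intervention did nothing, both groups would gain about the same amount and `intervention_effect` would be near zero, which is essentially what the BBC study discussed next found.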
A good study of brain games that used a control group
A very large scientific study, conducted through the BBC and published in Nature, had very convincing findings. It showed that people who trained on and improved their performance on cognitive tasks in brain games did not show improvements on similar tasks, tapping the same cognitive abilities, that they had not been trained on. D’oh!
This video does a really nice job of summarizing the results. Please also pay attention to what the participants in the research thought the game was helping them do and how surprised they were by the results (see 0:15 for people’s self-report of how they thought they did when they played the games and their surprise when they see the scientific evidence later).
The results showed that the apparent cognitive gains on tasks that people were trained on did not transfer to cognitive tasks they were not trained on. Humph, it looks like a testing effect. This looks like a big NO for brain games being “good.”
Brain games and the research on them point out some problematic thinking when people design games and when they set out to do research on them.
Big Problem #1: Using in-game metrics to show efficacy
Game makers use in-game metrics to show that their game “works.” They claim that improvements on cognitive tasks in their game mean that players improve in the real world, without ever measuring what happens in the real world. Or they claim that increases in a physical skill, like balance on the Wii Balance Board, mean that these people don’t trip and fall in the real world. The problem is that they didn’t go out into the real world and see whether these people had better balance there as well.
Listen. I love games and I believe in the power of serious games. But I could give a @!#’s !@# about what people do in a game. I care about what the game actually leads them to do in the real world. And yes it is difficult to measure things in the real world, but can we at least give it a try?
Big Problem #2: Using self-report qualitative assessments to show efficacy
Big Problem #2 happens when researchers try to get at what is going on in the real world by asking people, “Did the game change the way you do things in the real world?” When a significant number of people say yes and believe it very strongly, the researchers conclude that their game “works.” I don’t believe these statements unless they are supported by similar changes in observable, objective behaviors or physiological measures compared to a placebo control group. There is just too much pressure on people to say positive things about an intervention in an experimental situation, and people are very bad at objectively evaluating their own performance (have you watched the auditions for “Idols” on TV recently?). It is not that people lie, although they do that sometimes to make themselves feel good; they also have cognitive biases that lead them to focus on evidence that supports their wishes and their views of themselves. And like I said before, if people self-report one thing but the objective evidence says something else, I will go with the objective evidence, not the other way around. (Note: You can read my 2008 study of Re-Mission to see how the objective measure of adherence told a different story than the self-report measures of adherence, and decide for yourself which measure you believe.)
Also, think of the BBC study discussed above. If you watched the video, you can see that there probably would have been an intervention effect in the study if the researchers had only asked for participants’ self-assessments of the games’ impact. Great. But the objective measures did not bear it out, much to their surprise! And by the way, the study would not have been published in Nature, nor would it have been considered scientifically rigorous by most scientists who read it, if the games had been evaluated only on subjective self-reports.
Caveat. Now, the BBC was just one study. There are other studies that look at brain training games. The findings are mixed and the scientific quality of the studies varies. There are also many, many anecdotal reports that brain games work and transfer to other areas of real life. These are usually put out by the makers of the game.
BUT, as much as I bring up criticism of brain games and the tactics they use to get people to believe they work, I am actually not fully convinced that they don’t work. Science is probabilistic and depends on converging evidence to help us understand how things work. There simply have not been enough good studies of brain games to convince me they don’t work (or do work for that matter). The time is now to do a few really good studies to gather more good evidence to get at whether or not brain games are any good.
In the meantime, I will probably still enjoy playing a brain game now and then with the hope that it is helping to keep my brain young. But I would probably enjoy them even more if I saw some more really good research coming out examining their claims in the “real world.”
Research, research, research!!!
I did some research a few years back with Dr Kawashima’s Brain Training. References here:
1. Miller, D.J. & Robertson, D.P. (2010). Using a games-console in the primary classroom: effects of ‘Brain Training’ programme on computation and self-esteem. British Journal of Educational Technology 41 (2), 242-255.
2. Miller, D.J. & Robertson, D.P. (2011). Educational benefits of using games consoles in a primary classroom: a randomised controlled trial. British Journal of Educational Technology, 42 (5), 850-864.
Here is a video case study of what we did in the initial small-scale trial. (By the way, we were asked by the BBC for our research to be featured in their discussion programme, but it was in the process of being published so we couldn’t include it.)
Hopefully you find this of use. @derekrobertson
Hi Derek, Thank you for sharing the references! Is the full text of the articles available anywhere on the web to share? If not, would you mind sending me a copy of the articles? My address is firstname.lastname@example.org.
Thanks so much!
p.s. I love the video case study!
Liking your blog!
Hmm, so if I understand Derek correctly, doing maths at school was useful after all 😀
It is hard to generalize the results of maths and English interventions using ‘cognitive ability’ tests. The whole schooling system doesn’t change your cognitive ability that much. But it does make you smarter! (I presume..)
Also, be careful not to run into the whole-task / part-task segmentation difference.
Try thinking of the last time you had to teach (or explain) something to someone. Chances are you had to split the task at hand into smaller pieces. This is called ‘task segmentation’. As an illustration: I recently moved to Amsterdam and wanted to laminate part of my new floor. First I had to remove the old flooring, then roll out the underfloor foam, then start laying the laminate along one edge, etc. Because these are all tasks that can be performed integrally in the authentic situation, this is ‘whole-task segmentation’.
Now in the schooling system, this is really hard to do — the earlier in education, the harder. That’s of course because they can’t be sure what jobs their pupils are going to have to perform, forcing them to aim at a generic subset. Some pupils may join the military, such as the friendly colleague who sent me this link (thx Jur!), others may become lawyers, such as my dear sister, or maybe even serious games researchers like Pam and me :D. In order to prepare for all these different jobs, the tasks have to be segmented using a different approach. Enter schooling systems as we all know them: the tasks are divided orthogonally into clusters of skills that are far more generic, such as maths and grammar.
Sam Besselink (TNO, Netherlands)
Hi, thanks for your article, informative and funny too:) As for the Stroop effect brain games, I found this one to be the most challenging: http://gamentis.com/colors/colors.html, you may want to try it.
Where have you been all my life, Pam? I’m so glad to find someone who can apply basic educational research concepts to the question of whether computer games are effective at enhancing specific learning outcomes.
Time on task – that’s what we used to call the variable you mention in your comments about “practice, practice, practice”. I have maintained for 35 years that research that compares computer-mediated teaching with any other intervention and doesn’t control for time on task tells us nothing. There is all this verbiage about motivation and engagement but rarely does the point get made that the learner is spending **more time** practicing the target skill. The fact that the delivery medium is a computer may be completely irrelevant.
Teaching for transfer/active knowledge – when I was at Atari in 1979, we knew game players were improving their eye-hand coordination with a joystick. What we didn’t know was that there would be jobs for drone pilots 30 years later. What we’re seeing now is work environments being modified to be more like video games, rather than video game skills being transferred out of the game environment. Does that suggest that these skills don’t transfer very well, so we have to provide “assistive technologies” to a somewhat handicapped incoming workforce? What does the kid who plays Oregon Trail really learn? I suspect it has more to do with analyzing game parameters than American history.
So please keep up the good work. Bringing computers into the classroom didn’t save American (or British) education and bringing in poorly designed games won’t either.