Some Shortcomings Around Game/Simulation Research
Another question related to serious games and immersive learning simulations:
“What are the shortcomings in the scholarship/research around Serious Games/Immersive Learning Simulations?”
One of the biggest shortcomings is how comparative research is conducted. Everyone wants to do a study comparing serious games/immersive learning simulations with classroom or online instruction to see which is better. Please, we’ve done comparative research studies for decades and always found the same result: well-designed instruction is effective, no matter what the medium.
Another interesting aspect is how the studies decide what counts as “effective instruction.” Most of the time, one group of learners experiences an online game/simulation, another group experiences the same content taught in a classroom, and then BOTH groups are given a multiple-choice test to see what they have learned.
To me this is completely the wrong approach. A multiple-choice test is designed to assess linear content; it is designed to assess classroom knowledge memorization. Of course the results are going to be higher for the group that was taught in a linear fashion, because that is how you are assessing their “competence.”
Instead, what I’d like to see is an authentic assessment. If you are going to put someone in a game/simulation to teach customer service skills or leadership skills, the way to assess them is to put them in a customer service or leadership position and see how they do.
I saw a study once comparing a linear elearning course and an experience within the virtual world of Second Life. The lesson, presented in both formats, was about the attributes of a sustainable “green” house. Each lesson wanted students to learn about green features such as solar panels and tankless hot water heaters. But the final assessment was not to build a green house, or even to enter an actual house and identify its green elements. Instead, it was multiple choice. So, naturally, it favored the linear elearning course. My argument was, and still is, that the evaluation was biased toward learning and memorization, not experience, action, or behavior change. And, ultimately, in an academic or corporate setting, instruction is really about experience, action, or behavior change, ideally obtained through knowledge and applied experimentation.
So, in terms of the shortcomings of the research, I would like to see more studies assessed in an authentic environment if they are going to compare games/simulations with classroom instruction.
Another shortcoming is that to do effective research, one must isolate variables and hold certain things constant so that valid conclusions can be drawn. So a study is done with an elearning course that has audio, then the exact same course without audio; learners are quizzed and test results are compared so the researcher can definitively say whether audio impacted learning, and to what extent.
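In its simplest form, that kind of two-condition comparison comes down to comparing mean quiz scores across the two groups. As a minimal sketch (the scores and group sizes below are hypothetical, purely for illustration), here is a Welch's t statistic in plain Python:

```python
from statistics import mean, variance
from math import sqrt

def welch_t(a, b):
    """Welch's t statistic for two independent samples
    (does not assume equal variances)."""
    va, vb = variance(a), variance(b)  # sample variances
    return (mean(a) - mean(b)) / sqrt(va / len(a) + vb / len(b))

# Hypothetical quiz scores for the two conditions.
audio    = [78, 82, 75, 90, 85, 80]
no_audio = [70, 74, 68, 77, 72, 75]

t = welch_t(audio, no_audio)
print(round(t, 2))
```

The larger the t statistic relative to its degrees of freedom, the stronger the evidence that the difference between conditions is real rather than noise. The key point is that this only works because exactly one variable (audio) was changed between the two courses.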
However, how do you take a game/simulation that uses audio as a critical element of feedback and compare a version with audio to one without? You can’t: the value of a game/simulation is the combination of many variables interacting together as an entire Gestalt. So it inherently becomes difficult to isolate the variables that make a game/simulation educational or effective for learning.
Without isolation of variables, research is difficult and drawing conclusions becomes extremely problematic, so research falls into the realm of self-reporting, which has its own set of validity problems.
Somehow I’d like to see a way of looking at the entire game/simulation experience and evaluating it as a whole. I think cognitive load theory might have some insights, but it still has some gaps.
I think novice/expert research insights are helpful in this area. Sights, sounds, feedback, social aspects, competition, motivation, self-efficacy, self-regulation, meta-cognition: all of these elements work simultaneously to make a game/simulation truly effective, and drawing conclusions from statistical tests across that many variables would be almost ridiculous.
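A quick back-of-the-envelope calculation shows why: a fully controlled study of even a handful of on/off design variables requires an exponential number of treatment groups. (The factor names below are illustrative, not taken from any actual study.)

```python
# Full factorial design: isolating k binary design variables
# requires 2**k separate experimental conditions.
factors = ["audio", "visuals", "feedback", "competition",
           "social play", "self-pacing", "narrative", "scoring"]

conditions = 2 ** len(factors)
print(conditions)  # 256 treatment groups for just 8 factors
```

With even a modest sample size per group, that is thousands of participants before the interactions among factors, the very thing that makes a game work, are even considered.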
Having said all that, one other problem, which is really a non-problem, is that everyone says, “there is so little research around the effectiveness of games/simulations for learning and little indication that games/simulations are better than other forms of instruction.”
Not so. There is a large body of research surrounding games/simulations, and it is getting bigger all the time. The National Science Foundation has funded research on dozens of educational games with research as a condition of funding, the US Department of Education has funded and continues to fund research into gaming, the ADL has recently published two meta-analysis studies on games and research, the US military has done studies on the effectiveness of simulations, the Federation of American Scientists has conducted studies in this area, and private education companies have done research.
So, I really believe the research is out there, it is just not as well known.
And, finally, I ask people to look at the research related to stand-up instruction. Look at the studies of how much learning, retention, recall, and self-efficacy result from classroom instruction and you’ll be underwhelmed by the effectiveness of classroom instruction. Yet, in spite of the research about the relative ineffectiveness of classroom instruction, that technology is overwhelmingly the dominant form of instruction in the world.