Erhel, S., & Jamet, E. (2013). Digital game-based learning: Impact of instructions and feedback on motivation and learning effectiveness. Computers & Education, 67, 156–167. doi:10.1016/j.compedu.2013.02.019
EDU 800 Critical Review 1
- Identify the clarity with which this article states a specific problem to be explored.
The problems and challenges associated with digital game-based learning (DGBL) were clearly articulated in the article. Erhel and Jamet first explored how several learning theories apply to the study, in order to determine the effects of using various types of instructions and feedback in conjunction with DGBL. They explained how the two experiments they conducted, which were outlined in the reading, would demonstrate whether DGBL with enhanced instructions leads to better cognitive results, whether those instructions emphasized the learning factors or the entertainment factors with regard to motivation. They also examined, with support from many relevant theorists, whether feedback in DGBL scenarios would promote better learning. This type of study is important for furthering the use of game-based learning. Since software and hardware technology is now available to build powerful simulations, this research will go a long way toward enhancing the systems in place today.
- Comment on the need for this study and its educational significance as it relates to this problem.
They effectively defined the problem in the context of how little research has been available on virtual learning environments, whether in the form of a competitive game or a simulation, and how such environments affect motivation, engagement, and deep learning. Since this is an emerging capability for teaching and learning, it is a relatively new approach. However, non-digital game-based learning has been studied, so there is a significant body of research to draw from. The authors built new knowledge of DGBL by performing these two experiments. By applying learning theory to game development for educational purposes, the content can become more compelling and valuable.
- Comment on whether the problem is “researchable”. That is, can it be investigated through the collection and analysis of data?
The problems presented in this article are definitely researchable: determining how variations of DGBL can influence deep learning versus surface learning, whether specific or general instructions affect learning with DGBL, and how different question types and feedback can aid memorization and comprehension. The authors demonstrated through their experiments that by establishing hypotheses, screening and selecting subjects via pre-testing, and maintaining control variables, DGBL can be studied, and the data gathered can be used to draw conclusions about how instructions and feedback enhance learning in games and simulations. The study, while short, provided usable data that, although analyzed in rudimentary fashion, formed a basis for future exploration of DGBL efficacy compared to other digital multimedia-based learning.
Theoretical Perspective and Literature Review (about 3 pages)
- Critique the author’s conceptual framework.
The authors drew from many scholarly articles to establish and justify the need for further research on this relatively new type of learning. Their conceptual framework began by defining digital games and how they can be used for both education and entertainment. They also contrasted learning in serious game environments (SGEs) and digital games with conventional media such as classroom learning, and explored how various scholars have reported that games can have a positive effect, or no effect, on learning and motivation. Their approach to using and analyzing digital games and simulations for learning established a way to frame the study of their effect on cognition and learning, an area that was in need of such a framework.
- How effectively does the author tie the study to relevant theory and prior research? Are all cited references relevant to the problem under investigation?
In their literature review, the authors effectively and frequently introduced references to relevant learning theories from theorists and researchers: Liu on learning and motivation with new media, Deci on motivation and education from a self-determination perspective, Tobias and Fletcher on learning from computer games, health-games researchers such as Lieberman, Vogel on simulations and games, and Ames and Archer on achievement goals in the classroom, among many others.
- Does the literature review conclude with a brief summary of the literature and its implications for the problem investigated?
The literature review in this article, while drawing upon several other scholarly articles and theorists, thoroughly explores and summarizes only one of the two experimental hypotheses, namely how instructions may improve the learning effectiveness of digital games. The other hypothesis, concerning the efficacy of feedback in gamified digital education, was not explored until the experiment’s conclusion was discussed. This exclusion of the theoretical background for the second experiment’s hypothesis suggests that the authors should have focused on one question or the other in this article and performed the additional experiments in another study.
- Evaluate the clarity and appropriateness of the research questions or hypotheses.
The authors established a compelling case for comparing digital learning games to conventional media, but they did not create a specific experiment that made this comparison. They intermingled the up-front discussion of the experiments with the review of literature. While interesting and useful, the hypothesis that DGBL can be better than conventional learning was never resolved either way. The authors did, however, point out that contradictory studies exist, reporting both positive and neutral effects of DGBL.
The article appeared to assume that DGBL is superior to conventional learning, but only tested whether enhancing DGBL with instructions and feedback would lead to better learning than DGBL without them. The authors did provide results for the first experiment that tied back to the original hypothesis, and they extended it with the second experiment. They stated the hypothesis for the second experiment in the discussion concluding the study.
Research Design and Analysis (about 3 pages)
- Critique the appropriateness and adequacy of the study’s design in relation to the research questions or hypotheses.
While the research and experiments were based on the use of DGBL, the authors may have been better off performing the first experiment more thoroughly, with more thoughtful ways to select the sample group, and then following up this study by researching the feedback factor in DGBL. The overall design of experiment 1, regarding the use of instructions in DGBL, involved three phases, which showed that the authors wanted to home in on the issue at hand. All of the study participants were screened in the first phase, and those with too much prior knowledge, based on the results of a pre-test, were disqualified from participating. The study measured avoidance versus approach goals using simulations of people with one of four different disease presentations. The hypothesis that certain types of instructions (entertainment versus educational) aid cognition and learning of the subject matter was addressed by the first experiment. The second experiment presented the hypothesis that knowledge-of-correct-response (KCR) feedback in educational games can reduce redundant cognitive processes. As mentioned earlier, there was no literature review of the background theories regarding feedback, though the authors did provide some references, such as Cameron and Dwyer’s work on the effects of online gaming on cognition. The article seemed to be testing what Cameron and Dwyer studied with regard to how different feedback types affect the achievement of learning objectives.
- Critique the adequacy of the study’s sampling methods (e.g., choice of participants) and their implications for generalizability.
The article utilized a fairly sound way of selecting participants based upon demographic factors such as age, gender, and college attendance. For example, the authors first established generalized categories based upon age (i.e., young adults 18-26) and length of time in their college programs, and developed filters to exclude medical students. However, the second experiment had a much more uneven gender breakdown (16 male and 28 female participants) than the first. This inconsistency suggests that the two experiments were not cohesively designed to work together. The second experiment added KCR feedback to the first experiment’s design, but the authors did not maintain consistency in the sampling methods and choice of participants.
- Critique the adequacy of the study’s procedures and materials (e.g., interventions, interview protocols, data collection procedures).
The experiments utilized ASTRA, a multimedia learning environment. This simulated learning environment presented an avatar as a stand-in for a real instructor and presented the case studies of the disease presentations on a simulated television monitor. It was an adequate representation of the situation, though more a facsimile or model of a real-world instructor presenting on a screen. This may have given the subjects in the study a simulated association with an actual teacher and the interactions therein. In addition, it provided learning and entertainment instructions for the student to review while viewing the simulation. The combination of full-motion animation with text enabled a richer cognitive environment than a screen with text alone. The methods involved pre-tests, a recall quiz, and questionnaires on knowledge gained after the simulation concluded. The second experiment utilized mostly the same instrumentation and technology as the first, but interjected additional content to test whether KCR feedback promoted the learner’s emotional investment. By providing popup windows with immediate feedback on the student’s responses, the second experiment tested whether better cognition, comprehension, and learning occurred as a result.
- Critique the appropriateness and quality (e.g., reliability, validity) of the measures used.
The measures applied to the mostly quantitative data were appropriate for this simple experiment. For the first experiment, the authors provided data on the prior-knowledge pre-test, including means and standard deviations. They provided similar statistical analysis of the data collected from the recall quiz, knowledge questionnaire, and motivation questionnaire, and measured the subjects’ intrinsic motivation. However, the authors presented the second experiment’s results differently. While they applied similar statistics, such as mean scores on paraphrase-type questions versus inference-type questions, the results were presented in tabular form rather than the narrative form used in the first experiment. In addition, they utilized statistical measures such as ANOVA, standard deviations, and means for the motivation questionnaires, presenting the results in tables that compared performance and mastery goals and highlighted the differences between goal avoidance and goal approach.
- Critique the author’s discussion of the methodological and/or conceptual limitations of the results.
In the general discussion, the authors summarized the results, outlined their use of learning science, and expressed how their hypotheses were confirmed by the data acquired through the experimentation. They pointed out that the second experiment derived a value-added result by building on the first. They accomplished their goal of combining the first experiment’s knowledge of how educational or entertainment instructions contribute to learning in DGBL with the KCR feedback of the second, but they may have employed too complex a methodology and process to arrive at their results. They subsequently acknowledged that the effects of the question types they utilized did not yield what they expected, and noted that future studies may be needed.
- How consistent and comprehensive are the author’s conclusions with the reported results?
The first experiment yielded the conclusion that learning instructions were more effective than entertainment instructions in encouraging better comprehension, cognition, and learning, which is what the original hypothesis predicted. This was not, however, comprehensively explored. With regard to the second experiment, the authors viewed the results as confirming that feedback in DGBL promotes deeper learning and cognitive processing. The authors also concluded that DGBL overall enhanced memorization, and that the study was consistent with some of the other studies they cited, such as work on cognitive load theory.
- How well did the author relate the results to the study’s theoretical base?
The authors maintained their commitment to DGBL while seeking to enhance its effectiveness by providing instructional content to the student before the training module commenced. They also found that when feedback was provided during gameplay, the students had a more intense experience in terms of their cognitive results, memorization, and learning overall. The authors related these new findings about DGBL to their opening review of the literature on motivation and game-based learning theory.
- In your view, what is the significance of the study, and what are its primary implications for theory, future research, and practice?
The significance of this research study is that it advances the knowledge base in learning science regarding the gamification of educational modules. As the authors admitted, further studies are needed. However, the analysis of the effects of instructions, and of their subsequent enhancement with various types of feedback, will enable game and simulation developers and designers to implement changes to their software based upon the practical results of this study.