EDU800 Week 8 Supplemental Annotation

Means, B., Toyama, Y., Murphy, R., Bakia, M., & Jones, K. (2010). Evaluation of evidence-based practices in online learning: A meta-analysis and review of online learning studies. Washington, DC: U.S. Department of Education, Office of Planning, Evaluation, and Policy Development, Policy and Program Studies Service.

This article, a study commissioned by the U.S. Department of Education, examined the state of online education by reviewing existing empirical studies of online and blended learning that compared these modalities to face-to-face (FTF) instruction, measured student outcomes, and used rigorous research designs.  The authors looked at historical aspects of, and the evolution in, the use of eLearning.  They found that the popularity of online learning stemmed from its flexibility, time-and-place advantages, cost-effectiveness, and ability to instruct larger groups of students efficiently.  In the literature search, 50 independent effects suitable for meta-analysis were identified.  The authors noted that online learning, as a subset of distance learning, uses newer technology in addition to the traditional video- and TV-based education that, for the most part, simply stood in for FTF instruction.  Because online education entails many web-based technologies, multimedia, collaboration tools, and other new techniques, it is sufficiently different from traditional distance learning.  The research questions for the meta-analysis concerned the effectiveness of online versus FTF instruction, whether supplementing FTF with online elements enhanced learning, the practices associated with effective online learning, and the conditions that influence the effectiveness of online learning.  The authors performed a comprehensive literature search and review on online learning and conducted a meta-analysis of the findings.  The searches were limited to studies of online learning that used random-assignment or controlled quasi-experimental designs and that objectively measured student learning rather than, for example, teacher perceptions of learning or course quality.
Many of the studies they found examined the influence of media, such as video, on the learning experience and on subsequent assessments of learning, with technology used for such things as asynchronous communication, synchronous interaction, and online testing.  The authors referred to how online technologies can be used to expand and support learning communities (Bransford, Brown, and Cocking 1999; Riel and Polin 2004; Schwen and Hara 2004; Vrasidas and Glass 2004).  They also found that learning was enhanced online because increased multimedia interaction led to better reflective analysis of the content.  In addition, they concluded that the effectiveness of online learning may differ for K-12 students compared with adult learners in undergraduate studies.  They also considered conditions such as demographics, teacher qualifications, and accountability to government regulations in their analyses.

The study's meta-analysis of online versus FTF instruction yielded some key insights into the state of research on online education, including the fact that few published studies have examined the effectiveness of online learning for K–12 students.  From the available research, the authors found that student performance online was slightly better than in FTF settings, with learning outcomes roughly 20% higher for online students, but they acknowledged that the two types of education differed considerably in the time students spent on tasks.  Many of the studies did not try to normalize comparisons by equating pedagogical approaches, curricula, and time spent learning.  The authors also found that purely online and hybrid (blended) modalities yielded similar learning outcomes.  However, as with FTF instruction, maximizing the value of an online learning experience requires active learning components in the course.  Given the massive amount of research data the authors were trying to aggregate, it appeared difficult for them to reach cohesive conclusions.  The articles they reviewed varied widely in design and context, yet the study arrives at some consensus while raising further questions, such as whether online learning can replace FTF instruction, which pedagogies can be transferred into online learning spaces, and to what degree courses should balance asynchronous and synchronous activities.  Online modalities offer an efficient way to transmit or broadcast information by making any computer a portal to the content instructors provide.  Replicability and efficient content delivery are key advantages of online learning over FTF instruction.

This is a valuable big-picture study that can help students of learning science sift through and curate the research comparing online and FTF instruction, and understand how knowledge can be disseminated, acquired, created, and learned, through many empirical examinations of online scenarios.  The study provides a conceptual framework for studying online learning.  It gives us ways to build upon the mass of existing research, albeit one mixed across various states of technology adoption, so that we can anticipate future opportunities to design new learning environments.  Knowing this historical background, we can decide how to implement technology-mediated instruction, new synchronous methodologies and techniques, and enhanced virtual environments that approach the experience and advantages of FTF learning.



EDU800 Week 8 Annotation

Hrastinski, S. (2009). A theory of online learning as online participation. Computers & Education, 52(1), 78–82.

The article by Hrastinski offers a theory of student participation in distance learning and computer-supported collaborative learning (CSCL) environments: how participation actually drives learning in online education, and how the social aspects of participation positively affect achievement and learning by providing further learning opportunities outside the virtual classroom.  He also compares online learning to more traditional learning, and discusses how interaction and cooperation among students and teachers help improve learning in online settings.  He provides a literature review on topics such as constructivism, in which learners actively construct knowledge rather than simply receiving knowledge objects transferred from teacher to learner.  Participation activities in online courses also involve developing, establishing, and nurturing social relationships; dialogue and discourse; the use of various tools; and activities that engage students.  However, online learners are usually physically isolated from other learners, the instructor, and the source of the content.  He also presents data from a study by Morris, Finnegan, and Wu that measured learning outcomes (i.e., perceived learning, grades on tests, quality of assignment completion) against variables such as the number of discussion posts and the seconds spent viewing content pages and discussions.  In addition, the author reflects on the types of interactions that learners have (i.e., learner to instructor, learner to content, and learner to other learners).  He also refers to how Haythornthwaite and Kazmer found that support systems, such as those from family and colleagues, were important.  Finally, he ties the article to Wenger's definition of participation, involving a sense of and attachment to community, which becomes cyclical (Palloff and Pratt) because participation drives attachment to the community, and attachment to the community drives a higher likelihood of helping others.

The participation activities used in online learning contribute to student satisfaction and retention.  When students take part directly, synchronously and asynchronously, in online learning activities, the participation duration is finite, so the student must then go off to individually integrate and synthesize while not online.  This may appear to be a completely independent learning model, but students in online courses continue their learning outside the classroom through social interactions with other students and the instructor.  Equipped with psychological tools (language, engaging activities) and physical tools (the Internet, hardware, software, the LMS), the student has opportunities to perform work such as reading and writing (doing), interact with others (talking), reflect on the content (thinking), make choices and judgments about the content and experience (feeling), and be part of a group socially (belonging).  The social interactions generated among students in online courses through participation create interdependency and intimacy, which build trust, shared values, and a sense of belonging.

This article is essential reading for researchers in learning science who want to understand the effects of various types of student participation in online learning settings.  It gives some theoretical footing for the value of discussion boards, interactive assignments, and online content with respect to constructing knowledge.  It also offers a rationale for how the social aspects of online learning can reinforce and strengthen the acquisition and retention of knowledge through interactions.  These interactions occur among students, instructors, and social support structures such as family members and work colleagues.  The article may also help researchers and teachers develop, locate, and implement tools for online and distance learning.

EDU800 Critical Review #1: Digital Game-Based Learning: Impact of instructions and feedback on motivation and learning effectiveness. By Erhel and Jamet

Erhel, S., & Jamet, E. (2013). Digital game-based learning: Impact of instructions and feedback on motivation and learning effectiveness. Computers & Education, 67, 156–167. doi:10.1016/j.compedu.2013.02.019


  1. Identify the clarity with which this article states a specific problem to be explored.

The problems and challenges associated with the use of digital game-based learning (DGBL) were clearly articulated in the article.  Erhel and Jamet first explored how several learning theories apply to the study, in order to determine the effects of using various types of instructions and feedback in conjunction with DGBL.  They explained how the two experiments they conducted, which are outlined in the reading, would demonstrate whether DGBL with enhanced instructions leads to better cognitive results, whether the instructions emphasized the learning dimension or the entertainment dimension with regard to motivation.  They also examined, with the help of many relevant theorists, whether feedback in DGBL scenarios would promote better learning.  This type of study is important for furthering the use of game-based learning.  Since software and hardware technology is now available to build powerful simulations, this kind of research will go a long way toward enhancing the systems in place today.

  2. Comment on the need for this study and its educational significance as it relates to this problem.

They effectively defined the problem in the context of how little research has been available on virtual learning environments and on how such environments, whether in the form of a competitive game or a simulation, affect motivation, engagement, and deep learning.  Since this is an emerging capability for teaching and learning, it is a relatively new approach.  However, non-digital game-based learning has been studied, so there is a significant amount of research available to draw from.  The authors built new knowledge of DGBL by performing these two experiments.  By applying learning theory to game development for educational purposes, the content can become more compelling and valuable.

  3. Comment on whether the problem is “researchable.” That is, can it be investigated through the collection and analysis of data?

The problems presented in this article, particularly determining how variations of DGBL can influence deep versus surface learning, whether specific or general instructions affect learning with DGBL, and how different question types and feedback can aid memorization and comprehension, are definitely researchable.  The authors demonstrated through their experiments, by establishing hypotheses, screening and selecting subjects via pre-testing, and maintaining control variables, that DGBL can be studied and that the data gathered can be used to draw conclusions about how instructions and feedback enhance learning when using DGBL or simulations.  The study, while short, provided usable data that were analyzed in rudimentary fashion but that formed a basis for future exploration of DGBL efficacy in learning, as compared to other digital multimedia-based learning.

Theoretical Perspective and Literature Review (about 3 pages)

  1. Critique the author’s conceptual framework.

The authors drew from many scholarly articles to establish and justify the need for further research on this relatively new type of learning.  Their conceptual framework involved first defining digital games and how they can be used for both education and entertainment.  They also contrasted learning in serious game environments (SGEs) and digital games with learning through conventional media such as classroom instruction, and explored how various scholars have reported that games can have a positive effect, or no effect, on learning and motivation.  Their approach to using and analyzing digital games and simulations for learning established a way to frame the study of their effects on cognition and learning, bringing a needed framework to the field.

  2. How effectively does the author tie the study to relevant theory and prior research? Are all cited references relevant to the problem under investigation?

In their literature review, the authors effectively and frequently introduced references to relevant learning theories from theorists and researchers, including those who have explored learning and motivation with new media (Liu), motivation and education from a self-determination perspective (Deci), learning from computer games (Tobias & Fletcher), health games (Lieberman), simulations and games (Vogel), and achievement goals in the classroom (Ames & Archer), among many others.

  3. Does the literature review conclude with a brief summary of the literature and its implications for the problem investigated?

The literature review in this article, while drawing upon several other scholarly articles and theorists, thoroughly explores and summarizes only one of the two experimental hypotheses, namely, how instructions may improve the learning effectiveness of digital games.  The other experiment, which explored the efficacy of feedback in gamified digital education, was not addressed until the experiment's conclusion was discussed.  This exclusion of the theoretical background for the second experiment's hypothesis suggests that the authors should have focused on one experiment or the other in this article and performed the additional experiments in another study.

  4. Evaluate the clarity and appropriateness of the research questions or hypotheses.

The authors established a compelling case for comparing digital learning games to conventional media, but they did not create a specific experiment that compared DGBL to conventional learning.  They intermingled the discussion of the experiments with the review of literature at the outset.  While interesting and useful, the hypothesis that DGBL can be better than conventional learning was not resolved either way.  The authors did, however, point out that there are contradictory studies reporting both positive and neutral effects of DGBL.

The article appeared to assume that DGBL is superior to conventional learning, and set out only to test whether enhancing DGBL with instructions and feedback would lead to better learning than without them.  The authors did provide results for the first experiment that tied back to the original hypothesis and extended it with the second experiment.  They stated the hypothesis for the second experiment in the discussion concluding the study.

Research Design and Analysis (about 3 pages)

  1. Critique the appropriateness and adequacy of the study’s design in relation to the research questions or hypotheses.

While the research and experiments were based on the use of DGBL, the authors might have been better off performing the first experiment more thoroughly, with more thoughtful ways to select the sample group, and then following up with a separate study exploring the feedback factor in DGBL.  The overall design of experiment 1, regarding the use of instructions in DGBL, involved three phases, which showed that the authors wanted to home in on the issue at hand.  All of the study participants were screened in the first phase, and those with too much prior knowledge, based upon the results of a pre-test, were disqualified from participating.  The study measured avoidance versus approach in terms of simulations of people with one of four different disease presentations.  The hypothesis, which was to determine whether certain types of instructions (entertainment versus educational) aid cognition and learning of the subject matter, was addressed by the first experiment.  The second experiment presented the hypothesis that knowledge-of-correct-response (KCR) feedback in educational games can reduce redundant cognitive processes.  As mentioned earlier, there was no literature review of the background theories regarding feedback.  The authors did provide some references, such as Cameron & Dwyer's work on the effects of online gaming on cognition, and the article seemed to be testing what Cameron & Dwyer studied with regard to how different feedback types affect achievement of learning objectives.

  2. Critique the adequacy of the study’s sampling methods (e.g., choice of participants) and their implications for generalizability.

The article used a fairly sound way of selecting participants based upon demographic factors such as age, gender, and college attendance.  For example, the authors first established generalized categories based upon age (i.e., young adults 18–26) and length of time in their college programs, and decided to develop filters to exclude medical students.  However, in the second experiment the gender breakdown included many more female participants (16 male and 28 female) than in the first experiment.  This inconsistency suggests that the two experiments were not cohesively designed to work together.  The second experiment tested the addition of KCR feedback to the first experiment's design, but the authors did not maintain consistency in the sampling methods and choice of participants.

  3. Critique the adequacy of the study’s procedures and materials (e.g., interventions, interview protocols, data collection procedures).

The experiments used ASTRA, a multimedia learning environment.  This simulated learning environment presented an avatar as a stand-in for a real instructor and presented the case studies of the disease presentations on a simulated television monitor.  It was an adequate representation of the situation, though more a facsimile or model of a real-world instructor presenting on a screen.  This may have provided the subjects with a simulated association with an actual teacher and the interactions therein.  In addition, it provided learning and entertainment instructions for the student to review while viewing the simulation.  The combination of full-motion animation with text enabled a richer cognitive environment than a screen with text alone.  The methods involved pre-tests, a recall quiz, and questionnaires on knowledge gained after the simulation concluded.  The second experiment used mostly the same instrumentation and technology as the first but interjected additional content to test whether KCR feedback promoted the learner's emotional investment.  By providing popup windows with immediate feedback on the student's responses, the second experiment tested whether better cognition, comprehension, and learning occurred as a result.

  4. Critique the appropriateness and quality (e.g., reliability, validity) of the measures used.

The measures applied to the results of the mostly quantitative data were appropriate for this simple experiment.  For the first experiment, the authors provided data on the prior-knowledge pre-test, including means and standard deviations.  They provided similar statistical analysis of the data collected on the recall quiz, the knowledge questionnaire, and the motivation questionnaire, and they measured the subjects' intrinsic motivation.  However, the authors presented the second experiment's analysis differently.  While they applied similar statistics, such as mean scores on paraphrase-type versus inference-type questions, the results were presented in tabular form instead of the narrative form used in the first experiment.  In addition, they used statistical measures such as ANOVA, standard deviations, and means for the motivation questionnaires and presented those results in tabular form, reflecting how performance goals compared with mastery goals and highlighting the differences between goal avoidance and goal approach.

  1. N/A
  2. Critique the author’s discussion of the methodological and/or conceptual limitations of the results.

In the general discussion, the authors summarized the results, outlined their use of learning science, and expressed how their hypotheses were confirmed by the data acquired through the experimentation.  They pointed out that the second experiment added value by building on the first.  They accomplished their goal of combining the first experiment's knowledge of how educational or entertainment instructions contribute to learning in DGBL with the second experiment's KCR feedback, but they may have used too complex a methodology and process to arrive at their results.  They subsequently acknowledged that the effects of the question types they used did not yield what they expected, and noted that future studies may be needed.

  3. How consistent and comprehensive are the author’s conclusions with the reported results?

The first experiment yielded the conclusion that learning instructions were more effective than entertainment instructions in encouraging better comprehension, cognition, and learning, which was what the authors were looking for in their original hypothesis.  This was not, however, comprehensively explored.  With regard to the second experiment, the authors viewed the results as confirming that feedback in DGBL promotes deeper learning and cognitive processing.  They also concluded that DGBL overall enhanced memorization, and that the study aligned with some of the other studies they cited on topics such as cognitive load theory.

  4. How well did the author relate the results to the study’s theoretical base?

The authors maintained their commitment to DGBL while finding ways to enhance its effectiveness by adding value to the experience, namely, by providing meta-instructional content in the form of instructions to the student before the training module commenced.  They also found that when feedback was provided during gameplay, the students had a more intense experience in terms of their cognitive results, memorization, and overall learning.  The authors related these new findings about DGBL back to their opening literature review on motivation and game-based learning theory.

  5. In your view, what is the significance of the study, and what are its primary implications for theory, future research, and practice?

The significance of this research study is that it advances the learning-science knowledge base on the gamification of educational modules.  As the authors admitted, further studies are needed.  However, the analysis of the effects of instructions, and of their subsequent enhancement with various types of feedback, will enable game and simulation developers and designers to implement changes to their software based upon the practical results of this study.

EDU800 Week 7 Annotation

Mishra, P., & Koehler, M. J. (2006). Technological pedagogical content knowledge: A framework for teacher knowledge. Teachers College Record, 108(6), 1017–1054.

This article discusses technological pedagogical content knowledge (TPCK), which involves the interplay of content, pedagogy, and technology in teaching, in the context of teacher professional development. The TPCK framework is an attempt to formalize the examination of teaching components for many purposes, including understanding the common elements in all teaching, by breaking teaching practice into these components. Building on Shulman's original formulation, the pedagogical content knowledge (PCK) construct, Mishra and Koehler have captured a way to fit teacher practices into a simple context by categorizing the interplay among content, pedagogy, and technology. The article gives examples of learning technology by design, such as making movies, redesigning educational websites, and faculty development of online course design. The model shows us how the components overlap so that we can adjust and tweak a course, curriculum, or program as needed. It lends us the tools to evolve content, and the ways we deliver it, in a more agile fashion, since today's technological world demands that learning modules adapt to new innovations, discoveries, and accounts of a particular subject matter.

When integrating technology with learning environments, the TPCK model gives us a theoretical way to approach the design and implementation of teaching and training for various levels of education, and for preparing teachers. With the increasing use of digital tools, rather than just blackboard, chalk, paper, and pencils, we need a way to model the complexities of the interaction among the three factors. The article addresses how we can design learning environments by leveraging the practical experiences of seasoned teachers, giving us tools to create, test, and implement compelling and effective lessons, modules, courses, sessions, and even entire programs. Because teachers not only have to deliver content in which they are expert, using varied technology and pedagogy, they need a way to develop their skills beyond being a practitioner and subject-matter expert. Teachers are the best problem solvers for the challenges facing them in classrooms, so the TPCK model provides a framework for them to decouple the components involved in an instance of teaching. For example, a teacher with volumes of content knowledge may not be as good at conveying that content, and so needs training to improve and enhance their use of learning technology and to develop better pedagogical skills. One particular design-based approach involved making videos. There are very powerful and capable technologies that enable teachers to design and develop video content, so the skills to use them must be taught. Teachers need to develop production skills in authoring, editing, creating storyboards, and implementing video for delivery online, through open repositories such as YouTube, but also for integration into local resources such as the LMS. Teachers have always had to be multidisciplinary: even in a highly technical engineering or math course, they need to reinforce knowledge of writing and reading, problem solving, and even social skills.
Now we see that, beyond the main content and the ancillary content, teachers need to be technologists and develop their skills in using devices such as video and still cameras and scanners; tools for web authoring, software development, and image editing; and infrastructure such as smart classrooms, file systems and servers, networks, databases and knowledge bases, data mining, and big-data analytics. These skills are becoming essential to the design of learning environments, especially for online and blended courses. In addition, since digital content is highly replicable, we can develop and design consistent content and leverage that to deliver and apply iterative revisions as new content emerges and old content is retired or replaced.

Since teachers are researchers of what works in their classrooms, this article can help us find ways to design and build learning experiences and to integrate content, technology, and pedagogy. By finding where a particular instance of instruction fits into the TPCK framework, through empirical research, we can see where it needs to be improved and where the components need to be integrated or separated. We can use research methodologies to examine all of the combinations, intersections, and integrations of the triad of components: content knowledge (CK), pedagogical knowledge (PK), pedagogical content knowledge (PCK), technology knowledge (TK), technological content knowledge (TCK), technological pedagogical knowledge (TPK), and technological pedagogical content knowledge (TPCK). Each of the three main components can be improved on a case-by-case basis for any teacher through traditional professional development, but the added value of professionally developing the integration of the three is where applying the TPCK framework can have the most impact. In addition, we can further understand how changing or adding to one component dynamically affects the overall balance and effectiveness of the learning experience. Finally, the TPCK framework gives us a usable, scientific way to design, configure, apply, integrate, analyze, situate, contextualize, couple or decouple, improve the quality of, understand the relationships among, and transform our knowledge of the key components of technology, pedagogy, and content.

EDU800 Week 6 Supplemental Annotation

Using video in teacher education

Gamoran Sherin, M. (2004). New perspectives on the role of video in teacher education. In J. Brophy (Ed.), Using video in teacher education (Advances in Research on Teaching, Vol. 10). Amsterdam: JAI.

This chapter first gives a historical perspective on the role that video has played in teacher education over the past 40 years, discusses what video affords teacher training and education, and examines the effectiveness of using video for teacher learning, with some mixed results.  It shows how video can be an innovative way to teach teachers to teach and to develop teachers professionally.  The historical background traces the evolution from the analog video of the past, to today's digitized video, to deployment via the Internet in what the author calls video networks.  The author points out that as video costs continue to fall, video becomes more accessible and useful in both real and virtual classrooms and in teacher professional development.  It provides an alternative to live environments and may contribute to teacher motivation.  The chapter also outlines various uses for video in teacher education, such as teaching at a scaled-down, micro level, or "microteaching," where the class size or duration is smaller and different instructional strategies are needed.  Microteaching provides new opportunities for teachers to conduct whole-class discussions, and it became a standard way to deliver teacher education.  Another technique used in teacher education is interaction analysis, or lesson analysis, where teachers use video to observe and analyze student-teacher interactions.  In addition, video can be used to model expert teaching, enabling developing teachers to examine how more experienced teachers think instead of just observing their behavior.  The chapter also describes a number of ways to leverage video for student learning, such as video-based cases, including narratives, analysis, and subsequent discussions, which provide novice teachers with rich instances of classroom problems to decipher and solve.
The article also refers to the integration of video into hypermedia, enabling video to be delivered in a more intuitive manner that parallels the ways in which people think.  Another key point is that video is an immutable, lasting, and permanent record that can be viewed and reviewed many times, relieving the teacher from having to remember everything that happened in a scenario.

This chapter is an interesting account of how video, now predominantly digital, can be used not only to teach students but to teach teachers.  Because of the flexibility of video embedded in hypermedia or delivered through the Internet, this technology has a far-reaching effect.  Video is a powerful medium, and now that it is carried on networks and captured in so many settings and environments, it is a key way to learn.  Video can also provide a clear account of the classroom environment for study, as well as content-based instruction.  The chapter shows that by modeling expert teaching scenarios for newer teachers, video can be an effective way to empower teachers in training to emulate and replicate good teaching practices.  Novice teachers can learn new strategies and become better pedagogues.  These video-based strategies could lead to innovative new techniques, since they allow building on rich examples and scaffolding toward higher-value, more effective techniques.  The analysis and review of mentor-teacher examples can help novice teachers grow into expert teachers through reflection, discussion, and practical implementation of these strategies.  Video also allows the developing teacher to observe and analyze student interactions and classroom practices.

Video can play a great role in educational research, learning science, and the study of educational technology, and it should be a key part of the suite of media and data sources we use for research.  Since researchers are by nature teachers, the use of video to teach teachers is instructive for how researchers might use video in their own research.  This article provides a perspective on how video has been used, is now used, and can be used in the future for teaching both teacher-learners and students.  Because video is a permanent record of teaching and learning events, researchers can use it as a focused study aid and research tool.  It can support data gathering of a sort that goes beyond purely quantitative measures or qualitative data collected through interviews and the like.  With a video record in hand, new systems and approaches can be invented for conducting research with video-based data.  In addition, since new technologies for searching image and video content are being developed by Google and others, this article serves as a reminder of how far we have come: from using video for basic training purposes to video becoming a major source for research studies.


EDU800 Week 6 Annotation

Schwartz, D. L., & Hartman, K. (2007). It is not television anymore: Designing digital video for learning and assessment. In R. Goldman, R. Pea, B. Barron, & S. J. Derry (Eds.), Video research in the learning sciences (pp. 349-366). Mahwah, NJ: Lawrence Erlbaum Associates.

This article explores how designed video technology can be a powerful factor in learning and how it can be embedded in multimedia environments.  It also gives suggestions for educational researchers and instructional designers on using video for assessment.  It provides a framework for using video in multimedia contexts and describes how different genres of video can support different types of learning.  It outlines four common learning outcomes of using video for learning: (1) Seeing, in which video enables students to see things they have not seen in person and lets the learner leverage the visual medium to absorb large amounts of information without the logistical challenges that verbal-only content presentation (i.e., text) entails.  (2) Engaging, in which video helps draw learners in and keep them involved, providing a cognitive context that leverages the visual senses.  The authors also compare how video exploits extrinsic and intrinsic motivations to learn, emphasizing that learning is inherently intrinsic but that the extrinsic value of an engaging video event can contribute to engagement.  Video also gives learners opportunities to leverage prior knowledge, providing an anchor on which to build meaning from future learning experiences.  (3) Doing, in which models of the skills and behaviors the learner desires can be visually rendered and/or simulated.  The student's attitude is affected, and skills can be acquired by viewing, emulating, and practicing these models.  As the learner builds on previous knowledge through these models, the maturation of their knowledge, skills, and capability can be assessed dynamically.  (4) Saying, in which students verbalize what they have learned from the video, demonstrating the effectiveness of new knowledge acquisition.  The learner's ability to verbalize facts and explain newly acquired knowledge can then be assessed.
The effectiveness of this knowledge transfer is much more pronounced when the learner has prior knowledge with which to decipher and decode what the video presents.

Video, when well designed, can be a powerful tool for the learning sciences, both as content for learning activities and as a means of assessing their effectiveness.  For example, practitioners can embed video to support learning, whether it is newly created or drawn from archival sources.  Video can also be a very useful assessment tool, requiring students to view something and then demonstrate what knowledge they gleaned from the experience.  It generates different motivations for learners to prepare for and engage in learning opportunities.  As a learning strategy, video content enables students to scaffold on the skills and knowledge modeled in the video, leading to intellectual growth.  It can be useful for project-based learning, serve as the basis for collaborative activities, and trigger other types of content absorption while acting as a catalyst for synthesizing multiple sources of information.  Visual media have evolved and improved; with the exponential growth of both digitized older footage and new digital video, the acceleration of learning through visual media will continue.  The systems, networks, and processes that enable today's proliferation of video will fuel further innovation in video learning.

This article provides foundational literature for an area in need of scholarly work on digital video for instructional purposes.  It also presents ideas for teachers on incorporating video into their pedagogical activities, including using it independently for student learning and embedding it into LMS shells, other multimedia content, and other instructional resources.  As an educational technology, designed video can be used by researchers within the suite of multimedia technologies delivered for instructional and experimental purposes, because it provides learning opportunities that involve seeing, engaging, doing, and saying, which together establish, compel, reinforce, and form the basis for assessment.  In addition, by understanding how learning can be enhanced and affected by digital video, we can continue to analyze, design, and build learning systems that comprehensively combine media: text, hypertext, audio, computer simulations, and digital video.  As learning science evolves, we may find that incorporating multiple media in the learning process leads to better outcomes, especially given the digital nature of the learning environments that are emerging.

EDU800 Week 5 Annotation

Shapiro, A., & Niederhauser, D. (2004). Learning from hypertext: Research issues and findings. In D. H. Jonassen (Ed.), Handbook of Research on Educational Communications and Technology (pp. 605-620). New York: Macmillan.

This article by Shapiro and Niederhauser discusses, compares, and contrasts traditional text and hypertext.  The authors point out the dynamic, random, nonlinear structure of hypertext and how its learners/users can retrieve text in their own order.  They examine how hypertext-assisted learning (HAL) shifts the cognitive burden to the learner.  In the studies they outline, however, they note that deep learning is more likely to occur when the user has prior knowledge and active learning is employed, and they describe how contradictory some of the research has been: in some cases the findings show hypertext is better than linear text, while in others it is worse.  The concept of cognitive flexibility theory (CFT), a constructivist theory, is also discussed.  The authors' review of hypertext-based learning, whether well-structured or ill-structured, indicates that deep learning does not always occur if the subject lacks prior knowledge.  They also examine how printed text differs from hypertext: printed text formalizes the author role, whereas hypertext challenges assumptions about the roles of the author and the reader.  They characterize hypertext as emancipatory and empowering because it forces readers to participate actively in creating meaning from the text.  Issues that arise with hypertext include scrolling, limited screen size, unusual color schemes, and eye-movement patterns that can lead to difficulty in reading.

When we compare hypertext reading and navigation with the sequential, linear style of reading plain text, it is evident that hypertext offers a drastically different experience for the learner.  As the authors state, the learner can create their own path through hypertext, giving them more control over the mix of excerpts that are read.  However, when reading hypertext, one may skip over text that is essential for understanding primary concepts, as compared with reading linear text.  Hypertext gives the reader a more self-guided experience.  There can also be other elements combined with the text, such as navigational controls and use of the mouse, which may increase the cognitive burden or require metacognitive functions that are not needed for plain text.  One benefit of hypertext traversal could be that it mimics the pathways of the human brain, with seemingly random connections that the user of the text, like the brain itself, forms based on how they construct their learning experience.

The kind of research Shapiro and Niederhauser present opens up more questions for current researchers regarding the use of hypertext for learning.  The hierarchically structured nature of good hypertext documents can contribute to deep learning, but it can also have limited effect on the learning process as compared to linear text.  The authors have opened up the discussion so that, as we go deeper into digital learning and its contributions to our understanding of learning science, we can use these foundational theories and studies to develop new digital systems that improve upon hypertext, perhaps by making it more adaptive to the learner.