Critical Review of Research #2: Using Peer Feedback to Enhance the Quality of Student Online Postings: An Exploratory Study

EDU800: Critical Review of Research #2
Written By Daniel Grigoletti

Article: Ertmer, P. A., Richardson, J. C., Belland, B., Camin, D., Connolly, P., Coulthard, G., . . . Mong, C. (2007). Using Peer Feedback to Enhance the Quality of Student Online Postings: An Exploratory Study. Journal of Computer-Mediated Communication, 12(2), 412-433. doi:10.1111/j.1083-6101.2007.00331.x


1. Identify the clarity with which this article states a specific problem to be explored.

The Ertmer article clearly defines the problem: whether using peer feedback as an instructional strategy can lead to higher-quality online postings. The researchers examined how instructor-facilitated feedback enables rich learning environments, and the literature they referenced suggests that peer feedback in college courses, specifically in online discussions, could have an equal impact on student learning. The study sought to discover how students perceived giving and receiving peer feedback. The researchers posited that good discussion feedback in online coursework is essential to close the learning loop, and, since feedback is costly to instructors in terms of logistical burden and workload, that peer feedback could alleviate a significant amount of instructor time and effort while enabling students to improve their socio-cognitive engagement. The authors sought to determine how peer feedback can produce cognitive improvement in students. By replacing the instructor in a limited way, peer feedback could provide value to the recipient, the giver of the feedback, and the instructor, because it adds unique enhancements to the normal feedback process. The authors concluded that timely, high-quality peer feedback has many benefits, although students did not regard it as highly as the same feedback from an instructor. There were, however, many other social benefits for the students participating in the study: they had more opportunities to collaborate, were able to build intra-classroom relationships, and could share knowledge and opinions. Some students were nonetheless concerned that, because the instructor was not the one providing feedback, they were not getting the most value from it.

2. Comment on the need for this study and its educational significance as it relates to this problem.

Studying feedback in educational environments is a useful endeavor because it seeks to understand the cognitive benefit students gain from having their work analyzed, reviewed, and rated, and from receiving the results for reflection. Feedback in online discussions extends and amplifies these effects by showing how one of the emerging and powerful course delivery mechanisms, the online course, can be integrated with virtual, asynchronous interaction among faculty and fellow students. This study combines the need to study feedback in general, feedback in online environments, and, specifically, peer feedback in online environments. Since online courses require unprecedented self-direction and independent learning without the face-to-face presence of the instructor, fellow students can extend learning in a powerful and economical fashion. Whereas a typical onsite class of 30 may interact for only an hour or two in a given week, a hybrid or fully online course enables 24×7 interaction through an LMS, giving students the ability to exchange ideas and share responsibility for learning. This extends content exploration, supports knowledge creation, and presents unbounded reflective opportunities to learn. As a natural progression from and complement to onsite models, emerging online delivery methods and courses must meet the challenge students face in absorbing the extreme volumes of information that our technological world produces and that must be disseminated and learned. The improved and increased interactions among students in online environments can be a powerful way to build courses around new technological content. New literacies such as information literacy are especially important for the digital natives, or millennials, who comprise much of the student body within today's colleges.
Also, since the typical instructor is logistically limited in giving high-quality personalized attention to every student, peer-based learning can go a long way toward alleviating the logistical challenges educators face when teaching online.

3. Comment on whether the problem is “researchable?” That is, can it be investigated through the collection and analysis of data?

The problem of investigating how peer feedback in online discussions affects learning is very researchable, given the extensive availability of online course offerings that deliver essentially the same courses available onsite. Since online threaded discussions are asynchronous and automatically "recorded," the data representing discussion events can be readily collected and examined. The networked communication tools employed in online courses include email, discussion threads, blogs, wikis, and synchronous chat, so the opportunities to collect qualitative data from any given LMS are plentiful. In addition, cloud-based tools, large storage capacities, and ready access to the data for assessment and analysis enable examination of both qualitative information and quantitative data such as posting frequency. In this study, the researchers demonstrated that they could also examine the qualitative data using software and a variety of data collection techniques. Armed with technological tools, learning management systems, persistent data collection, and external software, they were able to address the problem comprehensively and establish a baseline for future research into online feedback, whether peer based or instructor based. Future research can extend this work to aspects of online courses not included in this study.
Theoretical Perspective and Literature Review

4. Critique the author’s conceptual framework.

The authors used a case study framework to investigate the learning impact of peer feedback versus instructor feedback in online courses, examining a graduate-level course. They used a scoring rubric based on Bloom's taxonomy to examine participant responses and determine whether high-quality feedback could be sustained across several discussion questions (DQs) over the semester. They were interested in how the quality of the postings changed over the course of the semester and whether higher levels of Bloom's taxonomy could be achieved, while remaining sensitive to the way the discussion questions were written in order to ensure consistency. They used a process of informing students of the feedback and then interviewing them about the results, covering both giving and receiving peer feedback from pre-course to post-course. Working from a constructivist approach, they hoped to see an increase in the quality of the responses and also wanted to gauge whether peer feedback was better or worse than instructor feedback. Because most of the previous research they referenced did not involve peer feedback in online courses, they were at a disadvantage in that they could not compare notes with similar studies; they acknowledged that additional research was needed and that this study was exploratory in nature. The conceptual framework rested on a very specific type of feedback: it was not applied to assignments, tests, labs, or other work performed in an online course, but only to threaded discussions, and it focused on peer-to-peer rather than traditional instructor feedback. The study was ambitious in this respect, since it sought to extend the knowledge of learning science into a relatively new medium, the online course, through the proxy for face-to-face interaction, the discussion thread.
Because of this narrow focus, the study proved effective at isolating the positive effects of peer feedback. It can further our understanding of the online modality and of how asynchronous interactions can help learners. The contrast with onsite course interaction and peer feedback is necessarily asymmetrical because of the vast difference between the two environments.

5. How effectively does the author tie the study to relevant theory and prior research? Are all cited references relevant to the problem under investigation?

The authors of this study frequently cite prior research on feedback and its importance in educational environments. As an exploratory study, this work adequately tied previous research on feedback in non-online settings to the current examination of online peer feedback. For example, the authors cited Liu, Lin, Chiu, and Yuan to reinforce the idea that peer feedback requires students to engage cognitive processes beyond reading and writing, including questioning, comparing, suggesting modifications, and reflecting on how the work being rated compares to their own. The study also refers to McConnell's finding that collaborative peer assessment allows students to be less dependent on educators, giving them more autonomy and independence. This collaborative process gives the students doing the ratings opportunities to develop and increase their own knowledge, learning, and skills in the subject area. The meaningful interaction and discourse between evaluators and the students receiving feedback gives value to both parties in the learning process. It leverages the power of teaching as a learning strategy by providing students opportunities to "micro teach" through evaluating and assessing peer discussion postings.

6. Does the literature review conclude with a brief summary of the literature and its implications for the problem investigated?

While the study drew on many good resources and references to relevant literature, the authors did not include a comprehensive literature review, nor did the conclusion summarize the literature; instead, they placed literature references strategically throughout the article. One implication of this approach for the problem under investigation is that comprehensive literature on peer feedback in courses with online discussions may simply not yet exist. Their approach to the literature review was unconventional, but they did sufficiently include relevant studies on peer feedback in other settings. The structure of the document focused more on stating the problem and presenting the research results. The authors could have included more references to draw from, but the article was relatively short and focused on a very specific sub-area of providing feedback, namely that provided in online discussion forums.

7. Evaluate the clarity and appropriateness of the research questions or hypotheses.

The research questions in this study focused on the impact of peer feedback on the quality of online student postings, whether learning increased through the use of peer feedback, students' perceptions of the value of receiving peer feedback versus instructor feedback, and their perceptions of the value of giving peer feedback. The research questions were clear and appropriate for establishing the study and for comparing and contrasting feedback from peers versus instructors in online courses. Discussion postings in an online course form an important basis for communication and learning, and the hypothesis was clearly written, leading to analysis of the impact and quality of those postings. For peer feedback in online discussions to be most valuable, the researchers reiterated from previous research on feedback in general, specifically Schwartz and White, that good feedback is prompt, timely, and thorough; provides ongoing formative and summative assessment; is constructive, supportive, and substantive; and should be specific, objective, and individual. Citing Notar, Wilson, and Ross, they added the notion that feedback should be diagnostic and prescriptive, formative and iterative, and involve both peer and group assessment. Peer interaction in online courses provides an important interpersonal connection, motivates students to check and recheck their work because peers are watching and assessing, and builds a sense of community and trust. The real learning lies in adjusting one's perspective to see how others respond to a question, then responding to that response; this discourse leads to deep learning because it drills down into new territory within the topic. Peer feedback also offloads some of the instructor's workload by transferring the task of reviewing content to students.
The article emphasized that providing feedback is one of the most time-consuming elements of teaching online, so sharing the responsibility of providing feedback with students has a twofold benefit: 1) a reduced workload for teachers and, more importantly, 2) opportunities for students to synthesize information at a high level by emulating the teacher's role. When a student gives a peer assessment, it opens up dialogue, and the recipient gains insight into their own learning. Online courses rely on quality design and interaction to be rich and valuable, but not everything can be planned, so the discussion thread adds a dynamic element to the course. Feedback in all its forms is therefore essential to make the course compelling, keep students engaged, and accelerate and amplify learning. Students are used to getting feedback from instructors; getting it from peers layers the learning by having a non-expert examine responses, allowing the sharing of ideas and diverse perspectives and leading to a more collaborative learning environment rather than a hierarchical, instructor-centered model.

Research Design and Analysis

8. Critique the appropriateness and adequacy of the study’s design in relation to the research questions or hypotheses.

The design of the study took a sound research approach to learning about peer feedback in online discussions, using multiple raters to evaluate the perceptions and effects that peer feedback delivered to participants. The hypothesis tested how peer feedback compared with instructor feedback in quality and whether it benefited learning outcomes. The study produced a wide variety of data for judging the effectiveness of the feedback, though it acknowledged the logistical problems of providing feedback and collecting information to assess its effectiveness. Both quantitative results and qualitative analysis of the responses via interviews provided valuable insight to the researchers. Data were collected through a variety of techniques, such as multiple, standardized pre- and post-interview protocols in which students answered several research questions about discussion postings; these assessed the quality of interaction and yielded data on both student and researcher perceptions of the value of giving and receiving peer feedback. The study applied learning theory, including Bloom's taxonomy, to help determine the depth of learning that resulted from peer feedback, appropriately addressing how far higher-order learning such as analysis, synthesis, and evaluation occurred.

9. Critique the adequacy of the study’s sampling methods (e.g., choice of participants) and their implications for generalizability.

The study used a number of discussion questions to measure the peer feedback process, contrasting it with instructor feedback and applying a paired-sample t-test. However, due to the small sample size, the quantitative results provided only limited insight into the effectiveness of peer feedback for learning. The researchers were able to assess the relevance and impact of student feedback, but there was no cross-reference to teacher-only feedback in comparable online courses, and no qualitative assessment of the student-to-student peer feedback itself. The sampling was adequate to generate knowledge about short-term perceptions of peer feedback as an alternative (but not a substitute) for instructor feedback, but it lacked information about how peer feedback affects learning outcomes for online students.

10. Critique the adequacy of the study’s procedures and materials (e.g., interventions, interview protocols, data collection procedures).

The researchers used various data collection instruments, including entry and exit survey questionnaires, scored ratings of weekly discussion question postings, and interviews. They applied rubrics and standardized the interview protocol, which added reliability, and analyzed data from both primary groups and subgroups. The consistency of the data sets and the variety of collection procedures let them rate the effects of giving and receiving peer feedback on student learning, and they concluded from the interviews that the students had a positive perception of the value of peer feedback. They also performed "triangulation" between the interview data and the ratings of the peer feedback, integrating quantitative and qualitative measurements and thereby strengthening the assessment of quality. They recognized patterns in the interview data using qualitative analysis software called NUD*IST. They paid attention to the validity, accuracy, and completeness of the data, looked for discrepancies, and used check-coding to verify inter-rater reliability while studying the peer feedback.
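The check-coding the authors describe compares how often independent raters assign the same code to the same posting. One common way to quantify such inter-rater agreement, chance-corrected, is Cohen's kappa; the article does not specify this statistic, so the following is only an illustrative sketch with hypothetical codes:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters assigning categorical codes.

    Kappa = (observed agreement - chance agreement) / (1 - chance agreement).
    """
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # Proportion of items on which the two raters agree outright
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Agreement expected by chance, from each rater's marginal code frequencies
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    expected = sum(counts_a[c] * counts_b.get(c, 0) for c in counts_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical Bloom's-level codes two raters assigned to ten postings
a = ["analysis", "recall", "recall", "synthesis", "analysis",
     "recall", "analysis", "recall", "synthesis", "recall"]
b = ["analysis", "recall", "analysis", "synthesis", "analysis",
     "recall", "analysis", "recall", "recall", "recall"]
print(round(cohens_kappa(a, b), 2))  # 0.67: substantial agreement
```

A kappa near 1 indicates near-perfect agreement, while a value near 0 means the raters agree no more often than chance; check-coding sessions typically continue until agreement reaches an acceptable threshold.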

11. Critique the appropriateness and quality (e.g., reliability, validity) of the measures used.

Various data collection techniques were used in the study. Qualitative data were collected during weeks 3-5 and weeks 7-13 and included standardized interviews to establish reliability. The interviews were conducted by phone and in person (lasting 20-30 minutes), then recorded and transcribed to ensure accuracy and completeness. They provided insight into participants' perceptions of giving peer feedback and of various aspects of feedback, including its quality, timeliness, and quantity. The researchers also collected specific feedback from students on the feedback process itself and measured their understanding by applying Bloom's taxonomy. They used tabular data to aggregate the sampled question responses.
Quantitative data collection included entry and exit survey questionnaires, whose results measured students' overall perceptions of giving and receiving peer feedback. Scores and ratings on discussion postings during the semester were correlated with the research questions using the same rubric the students had used. The researchers collected peer ratings of discussion postings from various peers and applied rubrics to ensure that the measurement of posting quality was consistent. However, the student peer feedback data were sporadic because students were not required to score every peer posting, so the data set was incomplete. During data collection, the raters compared results, examined discrepancies, and collaborated on the outcomes. They also ensured that timing was not a factor in scoring by removing posting dates and times from the documents. For sampling reliability, the raters scored randomly selected discussion questions. The raters provided specific examples of student responses in the qualitative data, for example on student feelings about Internet filtering, and allowed the students to elaborate on their responses.

12. Critique the adequacy of the study’s data analyses. For example: Have important statistical assumptions been met? Are the analyses appropriate for the study’s design? Are the analyses appropriate for the data collected?

In analyzing the comprehensive and adequate data they collected, the researchers used various statistical methods to measure and study the quantitative data. They compared their results with the assumptions stated in the research questions and with the outcomes anticipated in their hypotheses, employing methodologies for both the quantitative and the qualitative data. The quantitative analysis included tallying the results of pre-surveys, in which students answered not only objective questions but also open-ended ones so the researchers could assess student perceptions. A five-level rating scale measured agreement and disagreement, which the researchers then analyzed using statistical means and other measures. A post-survey in week 16 asked students to rate the importance of peer and instructor feedback and to comment on the value of both giving and receiving peer feedback, though the researchers noted that not all of the surveys (12 of 15) were returned. A final survey was used to verify the interview data. To alleviate validity concerns, after data collection was complete they triangulated the interview data with the survey results. They used a paired-sample t-test to compare the average ratings of postings made before the use of peer feedback with those made during it. Reliability was supported by using multiple interviewers and multiple evaluators to reduce bias, and by check-coding to ensure inter-rater reliability. They also reported quantitative measurements, including mean ratings for timeliness, quality, and perceived importance of feedback.
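The paired-sample t-test the researchers describe compares two ratings of the same students' postings, before and after peer feedback was introduced, by testing whether the mean of the per-student differences departs from zero. The study's actual scores are not reproduced here, so this sketch uses hypothetical rubric ratings purely to show the computation:

```python
import math

def paired_t(before, after):
    """Paired-sample t statistic: mean difference over its standard error."""
    assert len(before) == len(after)
    diffs = [b - a for a, b in zip(before, after)]
    n = len(diffs)
    mean_d = sum(diffs) / n
    # Sample variance of the differences (n - 1 denominator)
    var_d = sum((d - mean_d) ** 2 for d in diffs) / (n - 1)
    return mean_d / math.sqrt(var_d / n)

# Hypothetical rubric scores (1-5) for the same 8 students in each phase
instructor_phase = [3, 2, 4, 3, 3, 2, 4, 3]
peer_phase       = [3, 3, 4, 4, 3, 3, 4, 4]
t = paired_t(instructor_phase, peer_phase)
print(round(t, 2))  # 2.65 with n - 1 = 7 degrees of freedom
```

The resulting t is compared against a t distribution with n − 1 degrees of freedom; pairing each student with their own earlier score removes between-student variation, which matters in a study this small.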

Interpretation and Implications of Results

13. Critique the author’s discussion of the methodological and/or conceptual
limitations of the results.

Feedback, to be effective, should be of high quality and timely, and since students in online courses do not experience the physical interaction of onsite classes, learners may struggle to feel social connections to classmates in virtual environments. Having students both give and receive peer feedback goes a long way toward personalizing interactions, since students must use critical thinking to analyze others' work and then absorb and process criticism from fellow students. Instructor feedback tends to prescribe an expected response, whereas peer feedback opens up dialogical interaction grounded in common experience. Student-to-student interaction is more socially oriented and involves co-construction of knowledge. This adds a group-oriented dimension to threaded discussions, which are decidedly asynchronous communicative instruments; the peer-collaborative element adds another valuable dimension to the activity and may help with cognitive processing of the content. Peer feedback can have drawbacks, however: students may become anxious about giving and receiving feedback, may be concerned about its reliability, and may not be prepared for or comfortable with the role of evaluator.

14. How consistent and comprehensive are the author’s conclusions with the reported results?

The researchers in this study drew from many relevant theorists with regard to the effectiveness of feedback, though many of the cited studies pertained to face-to-face rather than online learning environments. They concluded that student-to-student feedback can be used effectively in place of instructor feedback. The important factors they stated and tested repeatedly were the timeliness, consistency, and quality, but not necessarily the quantity, of the feedback responses. The integrative data collection, using interviews as well as direct observation of feedback responses, provided a deeper understanding of the students' motivations and of how they internalized the learning opportunities into cognitive growth. The pre- and post-interview experience gave the students a chance to reflect on the process, and the researchers cross-referenced and corroborated the interview comments to determine students' perceptions of the effectiveness of the feedback process; this reflection appeared to have a positive effect on learning. The difficulties that arose were assessing the qualitative aspects of student postings and determining the reliability and validity of peer feedback. The results, presented as survey and interview findings (including actual quotations from respondents), coincided with the researchers' expectation that the feedback process would add value to the course experience. However, the authors conceded that, since this was an exploratory study, they were evaluating peer feedback rather than feedback in general. Even though peer interaction enables sharing and comparing of information, they did not find better critical thinking and analysis as a result of peer feedback. Peer-to-peer feedback had value in that it enabled students to form basic feedback commentary and co-construct knowledge with peers. It did provide better comprehension of the content through reflection and reinforcement of the lower levels of Bloom's taxonomy, but it did not prove to produce higher-level cognition, which face-to-face student interaction may do better.

15. How well did the author relate the results to the study’s theoretical base?

The study focused on online learners and a specific type of feedback: peer feedback in discussion threads. The authors tied this well to a number of theorists' analyses (Higgins, Hartley, and Skelton) of the importance of timely, substantive, high-quality feedback in learning environments, and to how feedback provides formative assessment (Nicol and Macfarlane-Dick) that contributes to improved self-regulation (Robyler), better socio-cognitive engagement with the content (Vygotsky), and more efficient learning. By studying discussion respondents in a variety of ways and using both qualitative and quantitative data collection methodologies, the researchers sought to learn whether feedback from peers strengthened or weakened the learning, cognition, and construction of meaning that occur through interactions with instructors. In addition, the researchers scored discussion feedback using Bloom's taxonomy, which let them examine how peer feedback lent itself to the lower levels of the taxonomy, recall and comprehension, but also how reinforcement from peers affected application, analysis, and synthesis of the knowledge being discussed. They developed a question-response-feedback cycle in which they collected and delivered the feedback responses to the participants. The raters also collaborated with one another, comparing the question-response-feedback results and integrating them with interview results through triangulation. The researchers found that higher-quality learning occurred with a combination of student-to-student and instructor feedback, concurring with the finding from the Ko and Rossen study that the learning process improves when students can cross-check their understanding, and with Mory's view that feedback is essential to the learning process.

16. In your view, what is the significance of the study, and what are its primary implications for theory, future research, and practice?

The study provided significant insight into how mixing the roles of student and teacher in the provision of quality feedback, specifically peer-to-peer feedback, can enable students to learn and reflect on their thoughts beyond the feedback from instructors and beyond the immediate discussion questions and topics. Its implications inform researchers about how peer feedback may help educators facilitate course tasks, develop alternative dialogues, disseminate information, and assess performance in online courses. The theorists cited in the article established the need for good feedback as a catalyst for deep learning, concurring that prompt, timely, and thorough feedback is essential to improve learning and develop skills in communication and in the subject matter. The researchers in this study also showed how good feedback in general leads to better retention and, in addition, how peer feedback can provide opportunities for social interaction integrated with knowledge construction and sharing. The study is a good foundation for learning about the effects of peer feedback in online courses and can lead future researchers to delve deeper into the interactions enabled by embedded LMS functions. This type of study is very relevant and applicable to online courses in their current state, but the online course is evolving and will include richer interactions that may benefit greatly from various forms of feedback.

