
ROLE: Research in Online Literacy Education, A GSOLE Publication

Developing an Ecology of Feedback in Online Courses in the Disciplines

by Alexandria Rockey and Kem Saichaie



Publication Details

 OLOR Series:  Research in Online Literacy Education
 Author(s):  Alexandria Rockey and Kem Saichaie
 Original Publication Date:  15 August 2020
 Permalink:  <gsole.org/olor/role/vol3.iss1.a>

Abstract

The growth of online courses across public and private non-profit higher education institutions (Seaman, Allen, & Seaman, 2018) establishes the need for empirically-based pedagogy to guide instruction in this unique context. Interactions between instructors and students are important for the success of an online course, yet online courses are often criticized for a lack of such interactions. Integrating feedback into online courses creates opportunities for instructor-student interaction and fosters student learning. This study examines an ecology of feedback to understand how students perceived feedback, what feedback instructors gave, and how technologies mediated this feedback in an online course in the disciplines. Findings suggest feedback varied over the term in three ways: student perceptions of the feedback, the types of feedback instructors provided, and the technologies used to provide feedback.

Resource Contents

1. Introduction

[1] With increasing online course enrollments at public institutions, pedagogies for online classes are emergent but still draw heavily on best instructional practices in higher education. One such practice is feedback, as it promotes vital interactions between instructors and students. Interactions are essential in online courses, where technologies mediate all communication between instructors and students. Despite the prevalence of online courses, there is limited research on feedback in this unique context. An ecology of feedback accounts for instructors, students, and technologies as they relate to feedback in online courses and is necessary to develop an empirically-based pedagogy to guide instructors in this unique context.

[2] Allen and Seaman (2013) define an online course as any course where more than 80% of the content is delivered online, but developments in technologies have drastically changed distance education today. Once delivered entirely asynchronously via the mail (Anderson, 2009; Sumner, 2000), distance education is now offered in a range of hybrid and online courses delivered electronically. Online courses in the 21st century incorporate both synchronous and asynchronous components (Snart, 2015). Examples of synchronous components include office hours, live lectures via web-conferencing software, and chats. Examples of asynchronous components include class announcements, email, discussions, and pre-recorded video lectures. Given the range of synchronous and asynchronous activities made possible by Web 2.0 technologies, it is difficult, and perhaps futile, to describe a typical online course.

[3] Despite the diversity of online courses, their interactions are computer-mediated, which creates a need for specific pedagogical strategies to encourage student success. Regardless of setting, feedback has been established as a best practice for any learning environment (e.g., Chickering & Gamson, 1987), including online courses (Vrasidas & McIsaac, 1999). Wolsey’s (2008) definition of feedback as “interaction designed to promote learning between professor and student or between students” (p. 311) demonstrates the potential of feedback to facilitate interactions in an online course. Feedback in online courses presents a challenge due to the lack of in-person, or face-to-face, interaction (Boyd, 2008), which is a common criticism of the modality (Arkorful & Abaidoo, 2015).

[4] We argue that integrating effective feedback practices is a way to encourage interactions between instructors and students in online courses in the disciplines, where feedback may be the only planned opportunity for this interaction. Feedback practices may include in-text or summative comments on student writing, use of a rubric, or discussions during office hours or writing conferences. However, differences in online courses mean that pedagogy guiding feedback practices for face-to-face courses cannot simply be copied into this context (Planar & Moya, 2016). Despite the growth of online classes and tools for instructional purposes, a dearth of empirical research exists on feedback in online courses. As such, an empirically-based pedagogy for feedback is needed to guide instructors in the unique context of an online course. To address the lack of empirical research on feedback practices in online courses, we propose an ecology of feedback to encourage instructors to adopt strategies that employ new technologies while leveraging existing best practices for student learning.

[5] We will define and describe an ecology of feedback approach to help answer key questions about students’ perceptions of feedback. An ecology of feedback adopts the holistic approach recommended by Planar and Moya (2016) to account for three dimensions of feedback: students, instructors, and technologies. By combining student perceptions, instructor feedback, and an analysis model created by Ferris et al. (1997) and modified to account for the Canvas LMS, this study contributes to a pedagogy specifically for online courses informed by empirical research.

2. Literature Review

[6] To establish the need for understanding the ecology of feedback in online courses, this literature review focuses primarily on the fields of distance education, response to student writing, and writing across the curriculum. These areas were identified because each provides important orientations for understanding our research and how our findings (discussed below) fit within existing conversations. What follows is a brief review of the importance of interaction in online courses, instructor feedback, feedback in online courses in the disciplines, and the mediating impacts of LMSs on feedback. The literature review ends with a description of the model of feedback that addresses many of the concerns and limitations raised in the existing literature.

[7] First, in online courses, interaction is important for both instructors and students. In these settings, feedback emerges as an important type of interaction (e.g., Swan, 2001; Vrasidas & McIsaac, 1999). Second, existing literature on instructor feedback indicates discrepancies between instructor and student perceptions of feedback (Montgomery & Baker, 2007; Mulliner & Tucker, 2017), as well as the impact of feedback on student perceptions of an instructor and a course (Boyd, 2008; Young, 2006). Third, literature on feedback in online courses in the disciplines recognizes the challenges of providing feedback in this context due to variable expectations of writing (e.g., writing to learn or learning to write). Fourth, affordances of LMSs can support integration of audiovisual feedback; however, LMSs can also impact how, or whether, a student accesses feedback (Laflen & Smith, 2017). The final section of this literature review proposes an ecology of feedback as it relates to Ferris et al.’s (1997) analysis model. While an ecology of feedback may be applied to any course context, it is especially well-suited for a large online course in the disciplines where feedback may be the only planned opportunity for instructor-student interactions.

2.1. Interaction in Online Courses

[8] Interactions in online courses between students and instructors are affected by asynchronous communications as well as the affordances and constraints of the technologies used to deliver online course material. One of the major limitations of online courses is a lack of interaction between instructors and students (Arkorful & Abaidoo, 2015). Interaction between instructors and students is vital in any course, but it is essential in online courses (Swan, 2001), where there are fewer opportunities for spontaneous interactions before, during, or after class. In asynchronous online courses, students and instructors may never participate in the course at the same time. Even if an online course integrates synchronous activities, students and instructors may still not interact. He’s (2013) study of live video streaming sessions in an online course found that students interacted more with their peers than with their instructors during these sessions, and some students never interacted with their instructors at all.

[9] A lack of interaction between instructors and students is a common problem in online courses (Dennen, Aubteen Darabi, & Smith, 2007; Swan, Shea, Fredericksen, Pickett, & Pelz, 2000; Picciano, 2002); however, interaction between instructors and students has been linked to student satisfaction in online courses. Some types of interactions may contribute more to student satisfaction than others. Dennen et al.’s (2007) study had 32 online instructors and 170 students rate and review 19 different guidelines for instructor actions in online courses. Findings suggest instructors “need to respond to learner-initiated communication and provide feedback on assignments in a timely manner” (Dennen et al., 2007, p. 77). In fact, despite limited empirical research specifically on feedback in online courses, feedback emerges as a type of interaction important to both students and instructors (e.g., Swan, 2001; Vrasidas & McIsaac, 1999).

2.2 Instructor Feedback

[10] Initial explorations of instructor feedback questioned whether commenting on student writing was even worth an instructor’s time (Knoblauch & Brannon, 1981; Brannon & Knoblauch, 1982), as the risk of appropriating students’ texts could dissuade an instructor from providing feedback (Reid, 1994). Fear of appropriating texts makes sense, as the rise of new genres, such as the personal narrative, broke traditional rules of what writing should be (Sanders, 1988) and, subsequently, unsettled the instructor’s role in guiding this writing. However, as Smith (1997) describes, “the teacher possesses the institutional power in the relationship and can use comments to motivate, educate, or chastise her students. But the student, the paper, and the institution can also exert power over the teacher” (p. 250). Complicating the concept of power as it relates to instructor feedback provided space for researchers in the field of response to student writing to empirically classify the feedback instructors give (Ferris et al., 1997; Montgomery & Baker, 2007; Straub & Lunsford, 1995), as well as student perceptions of this feedback (Ferris, 1995; Ferris, 1997; Hyland, 1998; Straub, 1997).

[11] Research on student perceptions of feedback indicates student preference for individualized (Scrocco, 2012; Wolsey, 2008) and timely feedback (Boyd, 2008). Additionally, studies suggest students prefer feedback delivered in multiple modalities (Ice, Swan, Diaz, Kupczynski, & Swan-Dagen, 2010), feedback on general strengths and weaknesses (Hyland, 2001), as well as feedback that generates a sense of connection with the instructor (Borup, West, Thomas, & Graham, 2014; de Kleijn, Meijer, Pilot, & Brekelmans, 2014).

[12] Research comparing student and instructor perceptions of feedback indicates two discrepancies: students may perceive receiving more feedback than instructors perceive giving (Montgomery & Baker, 2007), and students perceive using feedback more than instructors believe students use it (Mulliner & Tucker, 2017).

[13] In regard to online courses, students want comprehensive feedback (Gaytan, 2015). The ability to provide effective feedback is an essential quality of an effective online instructor (Young, 2006). In Boyd’s (2008) survey of students enrolled in hybrid and online first-year composition courses, student perceptions of quality and timely feedback impacted their perception of the success of the course.

2.3 Feedback in Online Courses in the Disciplines

[14] Online courses in the disciplines provide a unique context to explore student writing and instructor feedback as writing may be incorporated to assess student learning. During the course in this study, weekly writing assignments were integrated to gauge student learning and application of knowledge, but the course was not designated as a writing experience course, which would require 3,000 words of formal writing. As such, the focus is less on improving students’ writing and more on communicating an ability to apply the content covered in the course.

[15] Feedback arguably takes a different form than in a composition course when the goal of the course is not to improve student writing. As Beason (1993) notes, “existing composition research is relevant to writing done across the curriculum but should be cautiously generalized to this environment” (p. 417). Research on online courses, ranging from composition to business, identifies aspects of quality feedback, including timely feedback (Young, 2006), comprehensive feedback (Gaytan, 2015), and personalized feedback (Blair & Hoy, 2006; Gallien & Oomen-Early, 2008; Grigoryan, 2017; Planar & Moya, 2016).

[16] Planar and Moya (2016) caution against adhering to a concrete list of best practices for quality feedback; what is effective may change depending on the context (e.g., assignment type, discipline). In courses in the disciplines, feedback on writing is essential, as there can be a lack of “transparency” about expectations for writing assignments in these courses (Thaiss & Zawacki, 2006, p. 124). Feedback is a central component of helping students understand what is expected of them in their writing (Thaiss & Zawacki, 2006).

[17] Orsmond, Maw, Park, Gomez, and Crook’s (2013) approach to feedback for the online environment applies to courses in the disciplines and encourages fostering student internalization of feedback so that students eventually become self-sufficient writers. Becoming a self-sufficient writer aligns with the goal of writing in the disciplines courses to “help initiate students to the language and conventions of the disciplines, and so help them become better writers in these contexts” (Thaiss, 1992, p. 93). Orsmond et al.’s (2013) approach, while designed with a focus on peer-to-peer feedback, encourages feedback that is dialogic (encouraging a dialogue between the instructor and student), incorporates peer feedback, promotes student self-regulation and engagement, focuses on the process, and encourages a proactive response from the student.

[18] Technologies utilized in online and face-to-face courses may encourage different aspects of feedback suggested in Orsmond et al.’s (2013) approach. The feedback in this study was provided via Canvas, an LMS. The Canvas LMS allows students to respond to instructor feedback provided in the assignment comment section or to initiate a comment to the instructor. Given the potential for technologies to change how feedback is given or received, it is important to be mindful of the affordances and constraints of different technologies as they mediate feedback. For this reason, we next explore how LMSs mediate feedback.

2.4 Mediating Effect of LMSs on Feedback

[19] LMSs greatly impact how instructors give feedback. For example, some LMS features, such as SpeedGrader in Canvas, allow instructors to provide audiovisual commentary for students without leaving the page. Research suggests the potential of audiovisual commentary to facilitate more global feedback (Cavanaugh & Song, 2014), support personal connections between students and instructors (Anson, Dannels, Laboy, & Carneiro, 2016), decrease the misunderstandings common in text-based communication (Grigoryan, 2017; Sommers, 2012), and provide engaging feedback (Crook et al., 2012).

[20] One of the major constraints of an LMS interface relating to feedback is how students access the feedback. As Laflen and Smith (2017) found in their analysis of both online and face-to-face courses using LMSs, students were more likely to look at feedback if the interface required them to open the feedback to receive their grades; students were less likely to download feedback when they could see their grade separately. As Laflen and Smith (2017) note, it does not matter how wonderful feedback is if students do not look at it. In SpeedGrader, summative comments provided in the assignment comments feature are visible when students check the grade they received on an assignment. However, if an instructor annotates a student’s text in Canvas, the student must click the “View Feedback” button to see the feedback. Depending on the student’s computer, the student may be able to preview the feedback without leaving the screen or may need to download the feedback as a PDF. As Laflen and Smith (2017) note, an extra step such as this can deter students from looking at the feedback they receive. For these reasons, the particular LMS interface can affect how instructors provide feedback and how students interact with the feedback given. As LMSs are nearly universal in institutions of higher education, studies need to account for the particular interface being used and its impact on feedback.

[21] According to Dobre (2015), “the future belongs to LMSs” (p. 320). This statement does not seem exaggerated, as Dahlstrom, Brooks, and Bichsel (2014) report that 99% of higher education institutions have an LMS and 85% of instructors use an LMS in their courses. Despite the predominant use of LMSs across higher education institutions and the potential of LMSs to provide personalized and instant feedback (Dobre, 2015), the LMS interface has “remained largely invisible within studies of online response” (Laflen & Smith, 2017, p. 43). Given the rapidly changing nature of technologies, it makes sense that scholarship may focus less on specific technologies and more on what they do (Neal, 2011). However, there is a need for studies that account for the LMS interface.

[22] Due to the ubiquity of LMS usage, there is a need for research that accounts for both feedback in the context of online courses (Gallien & Oomen-Early, 2008) and the LMS interface (Laflen & Smith, 2017) in order to inform pedagogy that guides instructors as they design courses that take advantage of LMSs’ customizable features (Laflen & Smith, 2017). Our study works to address this gap in the literature by adapting Planar and Moya’s (2016) recommendation for a holistic approach to studying feedback in online courses. This holistic approach, an ecology of feedback, encourages a study incorporating three dimensions: students, instructors, and technologies (Planar & Moya, 2016). For this purpose, we adapt the analysis model proposed by Ferris et al. (1997) to account for the different feedback students may receive when it is provided via SpeedGrader in the Canvas LMS.

2.5 Ecology of Feedback Model

[23] There is a large corpus of descriptive studies that analyze instructor feedback on student writing for both pragmatic intent and linguistic form (e.g., Ferris et al., 1997; Straub & Lunsford, 1995). Both Ferris et al.’s (1997) study and Straub and Lunsford’s (1995) study developed analysis models to classify instructor feedback. The two studies differ in that Ferris et al.’s study occurred in the context of a classroom, and the model was developed by iteratively analyzing feedback an instructor gave on her own students’ assignments. Straub and Lunsford’s (1995) analysis model was developed to analyze feedback that twelve prominent scholars of response gave to student writing, though the writing was not done by their own students. As Reid (1994) notes, the social context in which feedback is given is crucial to truly understanding that feedback and how it functions within each particular classroom environment.

[24] Our adaptation of Ferris et al.’s (1997) model for feedback given through SpeedGrader within the Canvas LMS is summarized below as it relates to an ecology of feedback. Ferris et al.’s (1997) study looked at an instructor's comments on students’ first and second drafts in an ESL freshman composition course over two semesters. Their analysis model was developed to characterize both the pragmatic intent and linguistic form of feedback given, as well as to address any variation in feedback given “across student ability levels, across assignment types, and at different points during the term” (Ferris et al., 1997, p. 159). Findings suggest the instructor gave less feedback as the semester progressed, gave differing feedback depending upon the genre (e.g., asked questions when students were writing about personal experiences), and gave differing feedback to students with different ability levels (e.g., more directive feedback to students the teacher classified as weak).

[25] This analysis model is well-suited for this study and for developing an ecology of feedback. An essential aspect of an ecology of feedback is the holistic approach to understanding feedback recommended by Planar and Moya (2016). In addition to understanding students, instructors, and the mediating technologies through which feedback is provided, it is essential to understand that these components of an ecology of feedback are not static but change over time. An ecology of feedback provides an opportunity to understand the bigger picture of feedback in online courses as well as how feedback may change across a semester or across different iterations of a course.

[26] Ferris et al.’s (1997) model was designed to look at how feedback changed across a semester and is therefore suitable to developing an ecology of feedback. However, while Ferris et al.’s (1997) study accounts for the classroom context, the data was not triangulated to account for student perceptions of feedback. By analyzing only instructor feedback without accounting for student perceptions of feedback, a researcher could “miss the rationale and the result of the communication negotiation between the student and the teacher, made within the established and mutually understood content and context of the classroom” (Reid, 1994, p. 279). To examine this matter, we employ Ferris et al.’s (1997) model within the ecology of feedback model to account for student and instructor perceptions as well as the specific feedback that students received on their writing. The ecology of feedback illuminates a larger context of feedback in an online course in the disciplines by including consideration of instructor perceptions, student perceptions, and mediating technologies.

3. Methods

3.1. Research Questions

[27] The following research questions guide this study. The first question accounts for student perceptions of feedback. The last two questions, adapted from Ferris et al. (1997), build on previous research to understand what feedback instructors provided.

  1. How do students perceive the feedback they receive before and after taking an online course in the disciplines?
  2. What is the nature, considering both pragmatic intent and linguistic form, of the teacher's written comments?
  3. Is there evidence of variation in teacher response at different points during the term?

[28] Together these research questions work to develop an ecology of feedback by understanding how students’ perceptions of feedback may evolve as they take an online course, understanding the types of feedback instructors gave in the online environment, and, with the use of Ferris et al.’s (1997) adapted model, understanding how technologies mediate feedback in this online course in the disciplines.

3.2. Course Context: Brewing Science

[29] The context of this study is Brewing Science, a general education, fully online, high-enrollment course (typically 500 students in the face-to-face version). Campus departments, with specialties in pedagogy, technology, and library sciences, worked on a multi-year project redesigning this course for a fully online offering. The course launched in its fully online format in Spring quarter 2017. The data for this paper are from Fall quarter 2017, its second offering.

3.3. Participants

[30] The participants of this study include the professor (the instructor of record), the TA, and the students enrolled in the course. The professor designed the online version of the course and integrated the weekly writing assignments as well as the rubric to guide student writing. The TA was a Ph.D. candidate in the home department of the course who provided all of the feedback on students’ weekly writing assignments. The students in the course could be enrolled at any campus of the state-wide system. This study focuses on one quarter in which a total of 74 students were enrolled.

3.4. Weekly Writing Assignments

[31] The course is divided into weekly modules. Each week, students complete readings, watch the professor’s recorded lectures with embedded questions, take a quiz, and write a weekly writing assignment. This assignment asks students to analyze a different component (e.g., hops, malt, water) of an assigned beer each week. For example, a student may be assigned Sierra Nevada Pale Ale for the quarter; one week they may analyze the hops in the assigned beer, and the next week the water. The guidelines for this assignment are as follows:

  1. “Response should be at least one full paragraph.”
  2. “The response should include three different cited sources with at least one reputable source outside of the lecture book.”

[32] It is important to note that despite the weekly writing assignments, the course is not designated as a writing experience course, where the writing expectations for both instructors and students are more prescribed. A writing experience course requires at least 3,000 words and a substantial revision and feedback component. Some writing across the curriculum researchers encourage a dissolution of the concrete distinction between writing to learn and learning to write (McLeod, Miraglia, Soven, & Thaiss, 2001); however, in this case, the purpose of the weekly writing assignment is primarily writing to learn.

[33] This IRB-approved study focuses on the feedback students received on their weekly writing assignments. Feedback was provided to students via SpeedGrader in Canvas, in the assignment comments section (see Figure 1). Some students also received feedback on the first weekly writing assignment via the rubric feature (see Figure 1). The following section provides a more detailed description of the interface through which students received feedback.

Figure 1: Instructor view of feedback in Canvas (Canvas Guides, 2017).


3.5. Interface

[34] Given this study’s attention to the mediating technologies used to provide feedback, the interface through which feedback was provided deserves attention in a description of the study’s context. Figure 1 (an image from the vendor containing work that is not from the authors’ school) shows the instructor’s view when giving comments in SpeedGrader in Canvas. Here the interface provides instructors the opportunity to

  • grade with the rubric feature (3),
  • provide assignment comments (4),
  • provide in-text comments directly in the document,
  • attach media files,
  • grade without seeing the student’s name, or
  • copy comments to reuse on later papers.

[35] Instructors can also “mute” grading so that students do not receive an alert of their grade or feedback until all student assignments in a course have been graded and the instructor “unmutes” the grades.

[36] The TA in this study provided comments primarily in the assignment comments section. However, the TA did use the integrated rubric feature on the first weekly writing assignment, “malt.” The rubric was designed in a collaborative process with input from all members of the team, including the professor. Figure 2 provides an example of the TA’s feedback (all identifying information has been removed). This example from malt, the first weekly writing assignment (on the left), features the integrated rubric feature (1) and feedback in the assignment comments section (2).

Figure 2: Example of feedback provided by the TA from the instructor’s view.

[37] As evidenced in this sample, the TA typically wrote a short comment in the assignment comments section. Figure 3 shows the above comment from the student’s view. Students can see comments when they view their grade; they do not have to click or download any files to see instructor feedback given in the assignment comments section. Students do, however, have to click “Show Rubric” to view any comments given on the rubric (1).

Figure 3: Example of summative feedback provided by the TA from the student’s view.

[38] The interface allows students to add a comment when they submit their assignment. In some cases, this could be as simple as “Sorry it’s late” or “I think I got it this time.” Students can also add media to their comment or respond to instructor comments, facilitating dialogic feedback.
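Although it falls outside the study’s method, readers interested in how these assignment comments are exposed programmatically may find a sketch useful. The snippet below uses the Canvas REST API’s submissions endpoint; the base URL, course ID, assignment ID, and access token are hypothetical placeholders, and the include[] parameter is, to our knowledge, the documented way to attach the comment thread to each submission record.

```python
# Minimal sketch (not part of the study's method): pulling assignment comments
# from Canvas via its REST API. All identifiers below are hypothetical.
import requests

BASE_URL = "https://canvas.example.edu/api/v1"  # placeholder institution URL
TOKEN = "YOUR_ACCESS_TOKEN"                     # placeholder API token
COURSE_ID = 12345                               # hypothetical course ID
ASSIGNMENT_ID = 67890                           # hypothetical assignment ID

# One submission object is returned per student; include[]=submission_comments
# attaches the assignment-comment thread described in paragraph [38].
resp = requests.get(
    f"{BASE_URL}/courses/{COURSE_ID}/assignments/{ASSIGNMENT_ID}/submissions",
    headers={"Authorization": f"Bearer {TOKEN}"},
    params={"include[]": "submission_comments", "per_page": 100},
)
resp.raise_for_status()

for submission in resp.json():
    for comment in submission.get("submission_comments", []):
        # Both instructor and student comments appear in this thread,
        # which is what makes the dialogic exchange possible.
        print(f'{comment["author_name"]}: {comment["comment"]}')
```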

3.6. Data Collection

[39] Two types of data were collected for this study: surveys and feedback on the weekly writing assignments. These will be described in more detail below.

[40] 3.6.1. Survey. 
The survey was given to students in week one and week eight of the ten-week course in 2017. Developed and administered by the campus’s central center for teaching and learning, the survey was designed to understand the student experience within the online format of the course. For the purposes of this paper, we focus on student responses to two Likert-scale questions and two open-ended questions on the pre-course and late-course surveys. These questions were chosen because they relate specifically to student perceptions of feedback.

Table 1: Pre-Course and Late-Course Survey Questions

Pre-course survey
  Question: “I will receive plenty of feedback on my work and ideas”
  Options: True for me; Neither true nor untrue for me; Untrue for me; I don't know
  Responses: 80 students

Late-course survey
  Question: “I received plenty of feedback on my work and ideas”
  Options: True for me; Neither true nor untrue for me; Untrue for me; I don't know
  Responses: 37 students

Pre-course survey
  Question: “Describe the type of feedback you anticipate receiving in this course”
  Options: Open ended
  Responses: 46 students

Late-course survey
  Question: “Describe the type of feedback you received in the course. How did you use the feedback you received to guide your learning and writing?”
  Options: Open ended
  Responses: 28 students

[41] To understand emerging themes from student responses to these questions on the pre-course and late-course survey, we analyzed open-ended student responses.

[42] 3.6.2. Feedback on weekly writing assignments. 
All 74 student submissions were de-identified before the authors had access to the data. After the quarter had ended, the honest broker, as recommended by the authors’ institutional review board, took screenshots in SpeedGrader of each student submission along with any feedback, then scrubbed any information that would identify the student. This gave the authors the opportunity to review all student submissions from the entire quarter while protecting student privacy. SpeedGrader was set to anonymize student grading so that each student was assigned a number that stayed the same across all eight assignments. The weekly writing assignments covered the following eight topics: malt, hops, water, yeast, brewing process, packaging, styles, and marketing. All of the comments were provided in the assignment comments section; however, on the first weekly writing assignment (which occurred in week two, on “malt”), the TA also provided comments using the rubric feature in SpeedGrader for 27 of the 74 students.

3.7. Data Analysis

[43] 3.7.1. Survey. To understand how students’ perceptions of feedback changed over the quarter, descriptive statistics were used to summarize students’ responses on the pre-course and late-course surveys. Qualitative analysis and iterative rounds of coding (Merriam, 2009) were then used to identify emerging themes and refine the exploration of student responses to the open-ended questions on the pre-course and late-course surveys.
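To make the descriptive step concrete, the sketch below tabulates Likert responses into percentages of the kind reported in Figure 7. The helper function is our illustration, not the survey instrument’s output; the counts are reconstructed from the reported figures, with the unreported residual responses grouped together.

```python
# Sketch of the descriptive tabulation (our illustration, not the survey
# instrument's output): percentage of respondents per Likert option.
from collections import Counter

def tabulate(responses):
    """Return {option: percentage of respondents} for a list of responses."""
    counts = Counter(responses)
    total = len(responses)
    return {option: round(100 * n / total, 1) for option, n in counts.items()}

# Counts reconstructed from the reported figures (80 pre-course respondents,
# 37 late-course respondents); the split of the residual categories is not
# reported, so they are grouped here.
pre_course = (["True for me"] * 48
              + ["Neither true nor untrue for me"] * 26
              + ["Untrue for me / I don't know (split unreported)"] * 6)
late_course = (["True for me"] * 30
               + ["Neither true nor untrue for me"] * 5
               + ["Untrue for me / I don't know (split unreported)"] * 2)

print(tabulate(pre_course))   # "True for me": 60.0, matching the reported 60%
print(tabulate(late_course))  # "True for me": 81.1, matching the reported 81%
```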

[44] 3.7.2. Feedback on weekly writing assignments. 
Given the small sample size and the exploratory nature of this study, descriptive statistics using means and frequencies were used to analyze the pragmatic intent and linguistic form of feedback on the weekly writing assignments (Picciano, 2002). To analyze both the pragmatic intent as well as the linguistic form of the comments, the first author adapted Ferris et al.’s (1997) model to account for the functionality of SpeedGrader in the Canvas LMS (see Figure 4).

Figure 4. Categories Used for Analysis (Ferris et al., 1997).


[45] The first author adapted the model through an iterative process similar to the one Ferris et al. (1997) followed when creating the model: the first author used the model to analyze a subset of the weekly writing assignments, noted any changes that needed to be made to the model, and then conferred with the second author. The authors made three changes to the model due to the LMS interface. As noted previously, Ferris et al.’s (1997) study was situated in a face-to-face course with different mediating technologies (paper and pencil as opposed to SpeedGrader in Canvas). As such, all changes to the model were added categories that account for feedback features unique to the SpeedGrader interface in Canvas. First, we added the category “Language of the Participant” to code instances when the TA copied language from the student’s writing assignment and put it in quotes in the assignment comment box. For example, “antiseptic properties can certainly be claimed for Guinness, despite the marketing of ‘double the amount of hops’” was coded as “Language of the Participant” (Student 40, Hops). Next, we added the category “Student Comment” to account for the unique affordance of SpeedGrader to facilitate dialogic feedback through students’ ability to respond to their instructor’s comments or to add a comment when they submit their assignment.

Figure 5. Example of Student Comment

[46] The third and final category we added was “Rubric.” This category represented the TA’s use of the rubric feature in SpeedGrader to grade student writing assignments. For this category, the TA could use the integrated rubric in SpeedGrader to indicate how well students did on each component of the rubric. Based on where the TA clicked, SpeedGrader would automatically calculate the student’s grade depending on how much each category was worth. Figure 6 shows the different categories of the adapted model used for analysis, including these three added categories, which are highlighted in yellow.

Figure 6. Categories Used for Analysis (Adapted from Ferris et al., 1997).
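For readers who prefer to see the instrument in computable form, the sketch below restates the adapted scheme from Figure 6 as simple data structures and shows how code frequencies of the kind reported in the Findings could be tallied. The category names follow the article; the representation itself (a dataclass and a counter) is our illustrative choice, not part of the study’s coding instrument.

```python
# Sketch: the adapted coding scheme (Figure 6) as data structures, plus a
# frequency tally of the kind reported in the Findings. Category names follow
# the article; the representation is illustrative, not the study's instrument.
from collections import Counter
from dataclasses import dataclass

COMMENT_TYPES = [
    "ask for information",
    "make suggestion/request",
    "give information",
    "positive",
    "grammar/mechanics",
]
COMMENT_FORMS = ["question", "statement", "imperative"]
# Categories added to Ferris et al.'s (1997) model for the Canvas interface:
ADDED_CATEGORIES = ["language of the participant", "student comment", "rubric"]

@dataclass
class CodedComment:
    text: str
    comment_type: str    # one of COMMENT_TYPES
    form: str            # one of COMMENT_FORMS
    hedged: bool         # contains "please" or "would"
    text_specific: bool  # could not be reused verbatim on another paper

def frequency_report(comments):
    """Percentage of total comments per type, as reported in Figure 8."""
    if not comments:
        return {}
    counts = Counter(c.comment_type for c in comments)
    total = len(comments)
    return {t: round(100 * counts[t] / total, 2) for t in COMMENT_TYPES}
```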


[47] 3.7.3. Description of categories.
 The categories were refined through multiple iterations of coding. A more thorough description of the categories follows.

[48] 3.7.3.1. Comment type. There were five different comment types: ask for information, make suggestion/request, give information, positive, and grammar and mechanics. The paragraphs below provide examples of comments that fall under each category as well as a discussion of any difficulties in defining the category.

[49] The ask for information category, representing 4.57% of total comments across the quarter, had two different purposes. First, this comment type sometimes encouraged students to expand ideas (e.g., “what qualities of their beer do they emphasize, and do they seem to have a specific target audience in mind?” (Marketing, Student 43)). Second, it could be used as a corrective (e.g., “how is the sulfate concentration ‘huge’ if other minerals are in there at more than twice the amount?” (Water, Student 22)).

[50] The make suggestion/request category was at times difficult to delineate from ask for information. For example, “Where are your sources” was considered make suggestion/request in initial rounds of coding. However, upon further examination, this comment was determined to be asking for information because its intent was to get information about where students found their sources. To help delineate the difference between the make suggestion/request and ask for information categories, the first author used Ferris et al.’s (1997) logic that the ask for information category often does not specifically explain to the student how to apply the correctives. The following are two examples of comments categorized as make suggestion/request:

  • “You should know that Hefeweizen beers are NOT Lagers. Specific fermentation temperatures would have been helpful” (Brewing Process, Student 24).
  • “We would hope to have more specifics for your beer style, which you culd just look up in the style guidelines. You should integrate what you have written about before, as all those components- malt, water, hops, additional extras will modify the general brewing process you have described here” (Brewing Process, Student 35).

Both of these examples make clear to the student how they would apply this feedback to future assignments or to a revision of the present assignment. The make suggestion/request category accounted for 18.65% of total comments.

[51] The give information category, like ask for information, differed from make suggestion/request, because it did not describe exactly how to apply the information given. Give information was the most frequent category accounting for 72.32% of total comments. These comments were typically information dense and content focused. An example of a give information comment is “the reasons for using hop extracts are mostly retaining aroma instead of bitterness, as well as the stability to light since MGD is sold in clear glass bottles” (Hops, Student 37).

[52] Positive comments, while representing only 3.37% of the total, were often not extraordinarily detailed. These comments ranged from “really good job!” (Brewing Process, Student 21) to, at the most detailed, “Your formatting is a nice touch that goes beyond what we can expect from submissions in this class” (Malt, Student 18). The final category, grammar and mechanics, represented 1.08% of total comments. This category typically identified problems in grammar or mechanics when they interfered with the TA’s ability to understand the content, or referred to the student’s writing style. An example of a grammar and mechanics comment that referred to writing style is “While you did a good job arguing your point, you did not reference any sources of your information and used an informal writing style” (Malt, Student 57).

[53] 3.7.3.2. Comment form. In addition to identifying whether a comment was a question, statement, or imperative, the first author also noted whether the comment was hedged or text-specific. All comments were classified as either a question, statement, or imperative, but not all comments were hedged or text-specific. Hedged comments included the word “please” or “would.” For example: “A fermentation temperature range would have been helpful. The beer would likely be filtered at some stage before packaging” (Brewing Process, Student 73). Additionally, it was important to distinguish between a positive hedge and a positive comment. For example, “some good points, however, they certainly use several more hop varieties, and hop extracts are not always a cheaper option, but rather used for specific purposes” (Hops, Student 42) was classified as a hedged comment because the main focus of this comment was to give the student information about hops. Text-specific comments were classified as feedback that could not be generally applied from paper to paper and were also content-specific. For example, “well done” (Malt, Student 37) would not be classified as text-specific, whereas “most beers are sterile-filtered instead of being pasteurized, so no yeast would remain, or would be added back after filtration” (Packaging, Student 71) was classified as text-specific.
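Two of the surface-level codes described above (form and hedging) lend themselves to a simple first-pass heuristic, sketched below. The keyword rule for hedging follows the article’s description; the punctuation- and verb-based form rule and the function names are our simplifications, and text-specificity, which requires judging content, is left to the human coder.

```python
# First-pass heuristic for two surface-level codes (our simplification of the
# manual coding; ambiguous cases were resolved by a human coder).
HEDGE_WORDS = ("please", "would")  # hedging markers named in the article

def is_hedged(comment: str) -> bool:
    """Code a comment as hedged if it contains 'please' or 'would'."""
    lowered = comment.lower()
    return any(word in lowered for word in HEDGE_WORDS)

def guess_form(comment: str) -> str:
    """Rough form code: question, imperative, or statement."""
    stripped = comment.strip()
    if stripped.endswith("?"):
        return "question"
    # Crude imperative cue: begins with a bare directive verb (illustrative).
    if stripped.lower().startswith(("see ", "use ", "cite ", "include ")):
        return "imperative"
    return "statement"

example = ("A fermentation temperature range would have been helpful. "
           "The beer would likely be filtered at some stage before packaging")
print(is_hedged(example))   # True: contains "would"
print(guess_form(example))  # "statement"
```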

[54] 3.7.4. Limitations of the model.
Ferris et al.’s (1997) model was originally designed to identify responses that may be “problematic for L2 writers” (p. 156) in the context of a composition course. Despite this, we found the model, with the small additions described in the previous section, successfully identified “how” the TA responded: specifically, what the feedback said and how it was said (Ferris et al., 1997, p. 156). The major limitation of the model was determining differences between similar categories (e.g., the difference between make suggestion/request and give information). However, through an iterative coding process and by maintaining a focus on consistency, we minimized this limitation.

4. Findings

4.1. Feedback Types and Forms

[55] Figure 8 illustrates the percentage of total comments across the quarter for each category of feedback type and form. Overwhelmingly, the feedback students received was classified as Give Information (72.32%), Statement (95.07%), and Text-Specific (88.57%). An example of feedback classified as Give Information, Statement, and Text-Specific is as follows: “The still hot wort will need to be cooled before fermentation” (Brewing Process, Student 13). This comment is content-focused and reflects a typical type of feedback students would receive on their weekly writing assignments.

Figure 8. Feedback Types and Forms.

[56] While it is important to note what types of feedback students received, it is equally interesting to look at what types of feedback occurred infrequently. The use of the rubric feature, while representing only 3.25% of all feedback over the quarter, further establishes the need for an ecology of feedback that takes into account not only the variation of feedback amongst different types and forms, but also how feedback changes across the quarter.

4.2. Differences in Feedback Type and Form Across the Quarter

[57] Students received more feedback, both in type and form, during the initial weeks of the term and less as the course progressed. The TA used the rubric feature to grade and provide feedback on 27 students’ submissions for the first weekly writing assignment, malt, but did not use the rubric feature to score student work for the rest of the quarter. An interview with the TA illuminated this decision: the scores the rubric feature automatically calculated were lower than the instructional team desired, so the TA regraded the first 27 student assignments and discontinued use of the feature. In this way, it is important to look at how feedback changed across the quarter. The TA used the rubric feature to grade 36.49% of the first weekly writing assignment, “malt,” but did not use it for the rest of the quarter, dropping its overall use to 3.25% of all feedback. Understanding how feedback changed over the quarter provides an invaluable perspective on the rubric feature and how this tool fell out of use due to a conflict with the instructional team’s intended grade distribution, even though the rubric itself was still used to guide feedback.

Table 2: Differences in Total Comments Across Quarter

Variable          Early (Wk 2): Malt   Early (Wk 2): Hops   Mid (Wk 6): Beer Packaging   Late (Wk 8): Marketing
Total Comments    134                  114                  130                          22
No Feedback       4                    12                   14                           50

[58] Another important change in feedback across the quarter can be seen in the overall amount of feedback provided. The TA provided less feedback as the term progressed (see Table 2). This finding echoes Ferris et al.’s (1997) results in which the instructor gave less feedback over the course of the term and could be attributed to students better understanding the expectations for the weekly writing assignment. For example, in week two, when students were assigned their first two weekly writing assignments, malt and hops, comments clarifying expectations of the weekly writing assignments were common. An example of a typical comment that occurred for the first weekly writing assignment is as follows: “You may consider using outside sources next time, which you would also have to cite in the text” (Malt, Student 1).

[59] As the quarter progressed, it is possible that the TA did not need to clarify the expectations of the assignment as the guidelines of the assignment were the same every week, but with different topics (e.g., hops, marketing). For this reason, it makes sense that the frequency of feedback would decrease as students better understood the expectations of the assignment and there was less urgency on the part of the TA to provide feedback about how to write the assignment later in the quarter. This is an example of how the context of this course, as a course in the disciplines, affects the feedback students receive. Since the focus of this course was writing to learn, once students mastered the genre of the weekly writing assignment, they needed less feedback as they understood better how to demonstrate what they learned.

[60] An alternative explanation could be burn-out over the quarter, though Ferris et al. (1997) caution against this interpretation. Previous research indicates other possibilities, including an increase in shared knowledge as the quarter progresses that decreases the need for extensive feedback (Reid, 1994). However, given the context of this study, an online course in the disciplines, it is important to consider burn-out, as teaching online courses often places more time demands on the instructor than a face-to-face course (Warnock, 2015).

4.3. Student Perceptions of Feedback

[61] Student perceptions of feedback changed from the pre-course survey to the late-course survey in four important ways.

  1. More students felt they received plenty of feedback at the end of the quarter,
  2. students were less uncertain about the feedback they received at the end of the quarter,
  3. students hoped for constructive feedback at the beginning of the quarter but, by the end of the quarter, identified specific instances of what constructive feedback meant to them, and
  4. students expanded what they counted as multimodal feedback by the end of the quarter.

Each of these four changes in student perceptions of feedback across the quarter supports the need for an ecology of feedback model that holistically accounts for how student perceptions of feedback change over the duration of a course. This section describes in more detail the four ways in which student perceptions changed over the quarter.

[62] 4.3.1. More students perceived receiving plenty of feedback at the end of the quarter.
Descriptive analysis of student responses to the survey indicates more students perceived receiving plenty of feedback at the end of the quarter (see Figure 7).

Figure 7. Student Responses to the questions “I will receive plenty of feedback on my work and ideas” and “I received plenty of feedback on my work and ideas” on the pre-course and late-course survey respectively.

[63] A higher percentage (81%, n=30) of students on the late-course survey said it was true or mostly true that they received plenty of feedback, whereas on the pre-course survey 60% (n=48) of students expected to receive plenty of feedback on their work and ideas. In other words, more students felt they received ample feedback after taking the course than expected to before taking it.

[64] 4.3.2. Students’ uncertainty about feedback decreased by the end of the quarter. 
Over the quarter, a smaller percentage of students felt unsure about the amount of feedback they received on their work and ideas. On the pre-course survey, 32% (n=26) of students answered neither true nor untrue that they expected to receive plenty of feedback, whereas 13% (n=5) of students answered neither true nor untrue on the late-course survey. Students were more uncertain about the amount of feedback before the quarter started and did not necessarily anticipate receiving a lot of feedback. By the end of the course, however, more students felt they received plenty of feedback and fewer students were uncertain about the amount of feedback they received.

[65] Open-ended student responses on the pre-course and late-course surveys add student voices to the findings described above, supporting the need to look at an ecology of feedback and how student perceptions of feedback may change throughout the quarter. Incorporating open-ended student responses is important, as students may interpret what counts as “plenty of feedback” differently. In the open-ended responses on the pre-course survey, students described in more detail the uncertainty reflected in Figure 7 about the feedback they expected in this course. As one student put it, “I am unsure at the moment since I do not know what type of assignments are laid out for the course.” While some students were unsure about the type of feedback they would receive on the pre-course survey, no students indicated uncertainty about the feedback they received in the late-course open-ended responses. This is not entirely surprising, as one can imagine students easily being able to describe feedback they received throughout the quarter. However, in developing the ecology of feedback model, it is important to note that student perceptions of feedback change over the course of the quarter. Given this variation, understanding student perceptions across the quarter is necessary to develop a more holistic picture of the student experience as it relates to feedback.

[66] 4.3.3. Students’ perceptions of constructive feedback.
In terms of the type of feedback important to students, students noted their wish for “constructive” feedback on both the pre-course and late-course surveys. On the pre-course survey, students hoped to receive constructive feedback in this course. Students hoped for feedback that would allow them to improve in the future and better learn the content of the course. For example, one student said, “Hopefully, it will be positive feedback and constructive criticism that will help me learn and grow.” On the late-course survey, students noted constructive feedback that helped them improve their future weekly writing assignments. As one student noted, “The feedback was helpful because if I was given an inadequate score I was given details as to why my score was low. Thus, I could improve my score for the next assignment.” Though constructive feedback was important to students on both the pre-course and late-course survey, the application of this feedback moved from broad goals such as “learning” to more specific goals such as improving performance on weekly writing assignments.

[67] 4.3.4. Students’ perceptions of multimodal feedback.
On both the pre-course and late-course surveys, few students noted multimodal feedback. Types of multimodal feedback included emails, whole-class announcements from the TA or professor (delivered via the LMS), and verbal feedback via Zoom (the web-conferencing tool used for three synchronous class periods and office hours). Interestingly, a few students predicted on the pre-course survey that they would receive multimodal feedback but focused on email as the tool via which feedback would be delivered, reiterating the importance of established technologies to student experience and learning (Blair & Hoy, 2006; Loes & Saichaie, 2016).

[68] Student perceptions of feedback changed from the pre-course survey to the late-course survey in four important ways. First, more students felt they received plenty of feedback on their work by the end of the quarter. Second, fewer students were uncertain about the amount of feedback they received by the end of the quarter. Third, students hoped for constructive feedback at the beginning of the quarter, but at the end of the quarter noted constructive feedback that helped them improve future weekly writing assignments. Finally, students expanded their definitions of multimodal feedback to include not just emails but also feedback provided via Canvas Announcements and via Zoom in synchronous lectures or office hours. These findings further establish the need to understand an ecology of feedback in the online course that could contribute to students’ changing perceptions of feedback.

5. Discussion

[69] Findings from this study indicate that, in this online course in the disciplines, both students’ perceptions of feedback and the feedback they received on their weekly writing assignments changed over the quarter. This study worked to develop an ecology of feedback model in an online course in the disciplines that accounts for students, instructors, and mediating technologies. To accomplish this, the study addressed the following research questions:

  1. How do students perceive the feedback they receive before and after taking an online course in the disciplines?
  2. What is the nature, considering both pragmatic intent and linguistic form, of the teacher's written comments?
  3. Is there evidence of variation in teacher response at different points during the quarter?

These research questions contributed to an ecology of feedback by exploring how student perceptions of feedback changed over the quarter, what kinds of feedback instructors gave, how this feedback changed over the quarter, and how technologies mediated feedback in this online course in the disciplines. The following sections summarize findings as they relate to the three research questions. We close with a discussion of implications for practice, limitations, as well as future directions.

5.1. Research Question One: Changing Student Perceptions of Feedback

[70] Students’ perceptions of feedback changed over the quarter. Based on survey responses, more students perceived receiving ample feedback at the end of the course, and fewer students indicated uncertainty about the amount of feedback they received. Additionally, when students described the type of feedback they anticipated receiving, many hoped for feedback that was constructive, but many others were unsure about the feedback they would receive in the course. At the end of the quarter, students were no longer unsure about the type of feedback they received. They continued to note constructive feedback, but on the late-course survey constructive feedback often related to feedback that facilitated their ability to do better on future writing assignments. Additionally, students expanded their ideas of what counted as multimodal feedback to include not just emails but also feedback via web-conferencing software for synchronous lectures or office hours (e.g., Zoom) and Announcements in Canvas.

5.2. Research Question Two: Feedback Types and Forms

[71] In terms of the actual feedback provided, the most frequent type of feedback was giving information (72.32%), and the most frequent form was statements (95.07%). Additionally, feedback was overwhelmingly text-specific (88.57%). Researchers have long noted the importance of providing text-specific feedback (Ferris et al., 1997; Sommers, 1982; Zamel, 1985), or feedback that is not “rubber-stamped, from text to text” (Sommers, 1982, p. 152). The heavily content-focused feedback in this course seemed to facilitate more text-specific feedback, thereby raising important questions about the nature of feedback in courses in the disciplines. For example, does the writing in this course in the disciplines support particular feedback best practices (e.g., text-specific feedback)?

[72] In examining feedback features specific to the interface, we see infrequent use of the rubric feature in Canvas SpeedGrader (3.25%). While it may be tempting to remove this category from the coding scheme given its infrequent use, interviews and an examination of feedback variation across the quarter reveal the impact of the interface on use of the rubric feature. It is important to note that using a rubric to guide feedback in a systematic way remains a hallmark of good practice when it comes to assessing writing in courses in the disciplines.

5.3. Research Question Three: Response Variations Across the Quarter

[73] As discussed previously, the rubric feature accounted for only 3.25% of all feedback across the quarter. However, when analyzing feedback across the quarter, we see that the rubric feature was used to grade approximately the first third of the submissions for the first weekly writing assignment. After this initial use, the rubric feature was not used again for the rest of the quarter because the scores it automatically generated conflicted with the instructional team’s intended grade distribution. This finding reiterates the need for studies that explore the ecology of feedback in online courses and take into account response variation across the quarter.

[74] In addition to variations in the use of the rubric feature, the TA provided less feedback as the quarter progressed, a finding that echoes Ferris et al.’s (1997) study. Increased time demands on the TA may have contributed: his workload increased significantly compared to the face-to-face version of the course, where his grading responsibilities had been limited to scanning scantrons for two midterms and a final. Indeed, scholars have stressed the importance of considering the increased time demands online courses may place on instructors’ workloads (Blair & Hoy, 2006).

6. Implications for Practice

[75] Findings from this study help to develop an ecology of feedback to guide instructors as they give feedback in an online course in the disciplines. First, students in online courses may be unsure of the type of feedback they will receive at the beginning of the quarter. To alleviate this uncertainty, instructors should provide students with clear guidelines about the type of feedback they will receive in the course as well as how technologies will facilitate a wide range of feedback opportunities. Additionally, instructors need to be aware that students may have difficulty accessing feedback depending on the interface through which it is provided (Laflen & Smith, 2017); students may therefore benefit from direct instruction on how to access feedback. Second, instructors should consider the alignment between mediating technologies and instructional goals as they provide feedback. Established technologies, like email, have been shown to play a role in the cognitive development of students (Loes & Saichaie, 2016), but our research suggests that emergent technologies also have a role. Finally, in an online course, the technologies that mediate feedback are pervasive. As such, an ecology of feedback, as described in the sections above, pays attention to technologies as they shape how students receive feedback and how instructors provide it. Understanding how feedback changes over the course of a quarter could allow instructors to prepare carefully so that students receive the feedback they need through a variety of modalities across the quarter.

7. Limitations

[76] As mentioned previously, despite taking the holistic approach suggested by Planar and Moya (2016) to understand feedback in online courses, one that accounts for students, teachers, and technologies, this study examined only one type of mediating technology, SpeedGrader in Canvas, and only one assignment, the weekly writing assignment. In this way, while the study contributes to an understanding of how student perceptions of feedback and the feedback itself changed over the course of a quarter, it accounts for only a limited sense of how mediating technologies affected feedback. For this reason, it is important to expand the focus of this study to account for the larger ecology of feedback students receive in an online course.

8. Call for Future Research

[77] This exploratory study establishes the need for future research that develops an ecology of feedback by holistically accounting for feedback in online courses along three important dimensions: students, instructors, and technologies. Future studies need to account for the ecology of feedback to determine in what ways, and through which mediating technologies, students receive feedback in online courses. By understanding an ecology of feedback in online courses that fully accounts for students, instructors, and mediating technologies, we can begin to develop a rigorous and empirically supported pedagogy for online courses that encourages meaningful and productive interactions between instructors and students.

9. References

Allen, I. E., & Seaman, J. (2008). Staying the course: Online education in the United States. The Sloan Consortium and Babson Survey Research Group. Retrieved from https://www.onlinelearningsurvey.com/reports/staying-the-course.pdf.

Allen, I. E., & Seaman, J. (2013). Changing course: Ten years of tracking online education in the United States. The Sloan Consortium and Babson Survey Research Group. Retrieved from http://sloanconsortium.org/publications/survey/changing_course_2012

Anson, C. M., Dannels, D. P., Laboy, J. I., & Carneiro, L. (2016). Students’ perceptions of oral screencast responses to their writing: Exploring digitally mediated identities. Journal of Business and Technical Communication, 30(3), 378-411.

Arkorful, V., & Abaidoo, N. (2015). The role of e-learning, advantages and disadvantages of its adoption in higher education. International Journal of Instructional Technology and Distance Learning, 12(1), 29-42. Retrieved from http://www.ijern.com/journal/2014/December-2014/34.pdf

Beason, L. (1993). Feedback and revision in writing across the curriculum classes. Research in the Teaching of English, 27(4), 395-422. Retrieved from http://www.jstor.org/stable/40171241

Blair, K., & Hoy, C. (2006). Paying attention to adult learners online: The pedagogy and politics of community. Computers and Composition, 23(1), 32-48. https://doi.org/10.1016/j.compcom.2005.12.006

Boling, E. C., Hough, M., Krinsky, H., Saleem, H., & Stevens, M. (2012). Cutting the distance in distance education: Perspectives on what promotes positive, online learning experiences. The Internet and Higher Education, 15(2), 118-126. https://doi.org/10.1016/j.iheduc.2011.11.006

Borup, J., West, R. E., Thomas, R., & Graham, C. R. (2014). Examining the impact of video feedback on instructor social presence in blended courses. The International Review of Research in Open and Distributed Learning, 15(3). https://doi.org/10.19173/irrodl.v15i3.1821

Boyd, P.W. (2008). Analyzing students’ perceptions of their learning in online and hybrid first-year composition courses. Computers and Composition, 25(2), 224-243. https://doi.org/10.1016/j.compcom.2008.01.002

Brannon, L., & Knoblauch, C. H. (1982). On students' rights to their own texts: A model of teacher response. College Composition and Communication, 33(2), 157-166. https://doi.org/10.2307/357623

Canvas Guides. (2017). What is SpeedGrader? Retrieved from https://community.canvaslms.com/docs/DOC-10712

Cavanaugh, A. J., & Song, L. (2014). Audio feedback versus written feedback: Instructors' and students' perspectives. Journal of Online Learning and Teaching, 10(1), 122-138. Retrieved from http://jolt.merlot.org/vol10no1/cavanaugh_0314.pdf

Chickering, A., & Gamson, Z. (1987). Seven principles for good practice in undergraduate education. American Association of Higher Education Bulletin, 3-7. Retrieved from https://files.eric.ed.gov/fulltext/ED282491.pdf

Crook, A., Mauchline, A., Maw, S., Lawson, C., Drinkwater, R., Lundqvist, K., ... Park, J. (2012). The use of video technology for providing feedback to students: Can it enhance the feedback experience for staff and students? Computers & Education, 58(1), 386-396. https://doi.org/10.1016/j.compedu.2011.08.025

Dahlstrom, E., Brooks, D. C., & Bichsel, J. (2014). The current ecosystem of learning management systems in higher education: Student, faculty, and IT perspectives. Research report. Louisville, CO: ECAR. Available from http://www.educause.edu/ecar

de Kleijn, R. A. M., Meijer, P. C., Pilot, A., & Brekelmans, M. (2014). The relation between feedback perceptions and the supervisor–student relationship in master's thesis projects. Teaching in Higher Education, 19(4), 336-349. https://doi.org/10.1080/13562517.2013.860109

Dennen, V. P. (2005). From message posting to learning dialogues: Factors affecting learner participation in asynchronous discussion. Distance Education, 26(1), 127-148. https://doi.org/10.1080/01587910500081376

Dennen, V. P., Aubteen Darabi, A., & Smith, L. J. (2007). Instructor–learner interaction in online courses: The relative perceived importance of particular instructor actions on performance and satisfaction. Distance Education, 28(1), 65-79. https://doi.org/10.1080/01587910701305319

Dobre, I. (2015). Learning management systems for higher education: An overview of available options for higher education organizations. Procedia - Social and Behavioral Sciences, 180, 313-320. https://doi.org/10.1016/j.sbspro.2015.02.122

Dziuban, C., & Picciano, A. G. (2015). The evolution continues: Considerations for the future of research in online and blended learning. Research bulletin. Louisville, CO: ECAR.

Ferris, D. R. (1995). Student reactions to teacher response in multiple-draft composition classrooms. TESOL Quarterly, 29(1), 33-53. Retrieved from http://www.jstor.org/stable/3587804

Ferris, D. R. (1997). The influence of teacher commentary on student revision. TESOL Quarterly, 31(2), 315-339. Retrieved from https://www.jstor.org/stable/3588049?seq=1#page_scan_tab_contents

Ferris, D. R., Pezone, S., Tade, C. R., & Tinti, S. (1997). Teacher commentary on student writing: Descriptions & implications. Journal of Second Language Writing, 6(2), 155-182. https://doi.org/10.1016/S1060-3743(97)90032-1

Gallien, T., & Oomen-Early, J. (2008). Personalized versus collective instructor feedback in the online course room: Does type of feedback affect student satisfaction, academic performance and perceived connectedness with the instructor? International Journal on E-Learning, 7(3), 463-476.

Gaytan, J. (2015). Comparing faculty and student perceptions regarding factors that affect student retention in online education. American Journal of Distance Education, 29(1), 56-66. https://doi.org/10.1080/08923647.2015.994365

Grigoryan, A. (2017). Audiovisual commentary as a way to reduce transactional distance and increase teaching presence in online writing instruction: Student perceptions and preferences. Journal of Response to Writing, 3(1), 83-128. Retrieved from http://journalrw.org/index.php/jrw/article/viewFile/77/49

He, W. (2013). Examining students’ online interaction in a live video streaming environment using data mining and text mining. Computers in Human Behavior, 29(1), 90-102. https://doi.org/10.1016/j.chb.2012.07.020

Hyland, F. (1998). The impact of teacher written feedback on individual writers. Journal of Second Language Writing, 7(3), 255-286. https://doi.org/10.1016/S1060-3743(98)90017-0
 
Hyland, F. (2001). Providing effective support: Investigating feedback to distance language learners. Open Learning, 16(3), 233-247. https://doi.org/10.1080/02680510120084959

Ice, P., Swan, K., Diaz, S., Kupczynski, L., & Swan-Dagen, A. (2010). An analysis of students' perceptions of the value and efficacy of instructors' auditory and text-based feedback modalities across multiple conceptual levels. Journal of Educational Computing Research, 43(1), 113-134. https://doi.org/10.2190/EC.43.1.g

Knoblauch, C., & Brannon, L. (1981). Teacher commentary on student writing: The state of the art. Freshman English News, 10(2), 1-4. Retrieved from http://www.jstor.org/stable/43518564

Laflen, A., & Smith, M. (2017). Responding to student writing online: Tracking student interactions with instructor feedback in a Learning Management System. Assessing Writing, 31, 39-52. https://doi.org/10.1016/j.asw.2016.07.003

Lee, K. (2017). Rethinking the accessibility of online higher education: A historical review. The Internet and Higher Education, 33, 15-23. https://doi.org/10.1016/j.iheduc.2017.01.001

Loes, C. N., & Saichaie, K. (2016). Cognitive effects of technology over four years of college. Journal for the Study of Postsecondary and Tertiary Education, 1, 181-196. Retrieved from http://www.informingscience.org/Publications/3506

McLeod, S. H., Miraglia, E., Soven, M., & Thaiss, C. (Eds.). (2001). WAC for the new millennium: Strategies for continuing writing-across-the-curriculum programs. Urbana, IL: National Council of Teachers of English.

Merriam, S. (2009). Qualitative research: A guide to design and implementation (3rd ed.). San Francisco, CA: Jossey-Bass.

Montgomery, J. L., & Baker, W. (2007). Teacher-written feedback: Student perceptions, teacher self-assessment, and actual teacher performance. Journal of Second Language Writing, 16(2), 82-99. https://doi.org/10.1016/j.jslw.2007.04.002

Mulliner, E., & Tucker, M. (2017). Feedback on feedback practice: Perceptions of students and academics. Assessment & Evaluation in Higher Education, 42(2), 266-288. https://doi.org/10.1080/02602938.2015.1103365

Neal, M. R. (2011). Writing assessment and the revolution in digital texts and technologies. New York, NY: Teachers College Press.

Norman, D. A. (1999). Affordance, conventions, and design. Interactions, 6(3), 38-43. https://doi.org/10.1145/301153.301168

Orsmond, P., Maw, S. J., Park, J. R., Gomez, S., & Crook, A. (2013). Moving feedback forward: Theory to practice. Assessment & Evaluation in Higher Education, 38(2), 240-252. https://doi.org/10.1080/02602938.2011.625472

Parker, K., Lenhart, A., & Moore, K. (2011). The Digital Revolution and Higher Education. Retrieved from http://www.pewsocialtrends.org/2011/08/28/the-digital-revolution-and-higher-education/

Picciano, A. G. (2002). Beyond student perceptions: Issues of interaction, presence, and performance in an online course. Journal of Asynchronous Learning Networks, 6(1), 21-40.

Planar, D., & Moya, S. (2016). The effectiveness of instructor personalized and formative feedback provided by instructor in an online setting: Some unresolved issues. Electronic Journal of e-Learning, 14(3), 196-203. Retrieved from https://eric.ed.gov/?id=EJ1107130

Reid, J. (1994). Responding to ESL students' texts: The myths of appropriation. TESOL Quarterly, 28(2), 273-292. https://doi.org/10.2307/3587434

Sanders, S. R. (1988). The singular first person. The Sewanee Review, 96(4), 658-672. Retrieved from http://www.jstor.org/stable/27545966

Scrocco, D. L. A. (2012). Do you care to add something? Articulating the student interlocutor's voice in writing response dialogue. Teaching English in the Two Year College, 39(3), 274. Retrieved from https://search.proquest.com/docview/940915540?accountid=14505

Seaman, J. E., Allen, I. E., & Seaman, J. (2018). Grade increase: Tracking distance education in the United States. Babson Survey Research Group. Retrieved from https://eric.ed.gov/?id=ED580852

Smith, S. (1997). The genre of the end comment: Conventions in teacher responses to student writing. College Composition and Communication, 48(2), 249-268. https://doi.org/10.2307/358669

Snart, J. (2015). Hybrid and fully online OWI. In B. Hewett & K. DePew (Eds.), Foundational practices of online writing instruction (pp. 93-127). Fort Collins, CO: Parlor Press.

Sommers, N. (1982). Responding to student writing. College Composition and Communication, 33(2), 148-156. https://doi.org/10.2307/357622

Sommers, J. (2012). Response rethought... again: Exploring recorded comments and the teacher-student bond. Journal of Writing Assessment, 5(1). Retrieved from http://journalofwritingassessment.org/article.php?article=59

Straub, R. (1997). Students' reactions to teacher comments: An exploratory study. Research in the Teaching of English, 31(1), 91-119. Retrieved from http://www.jstor.org/stable/40171265
 
Straub, R., & Lunsford, R. F. (1995). Twelve readers reading: Responding to college student writing. Cresskill, NJ: Hampton Press.

Sumner, J. (2000). Serving the system: A critical history of distance education. Open Learning, 15(3), 267-285.

Swan, K., Shea, P., Fredericksen, E. E., Pickett, A. M., & Pelz, W. E. (2000). Course Design Factors Influencing the Success of Online Learning. Paper presented at the Webnet 2000 World Conference, San Antonio, TX.

Swan, K. (2001). Virtual interaction: Design factors affecting student satisfaction and perceived learning in asynchronous online courses. Distance Education, 22(2), 306-331. https://doi.org/10.1080/0158791010220208

Vrasidas, C., & McIsaac, M. S. (1999). Factors influencing interaction in an online course. American Journal of Distance Education, 13(3), 22-36. https://doi.org/10.1080/08923649909527033

Wolsey, T. (2008). Efficacy of instructor feedback on written work in an online program. International Journal on E-Learning, 7(2), 311-329.

Young, S. (2006). Student views of effective online teaching in higher education. The American Journal of Distance Education, 20(2), 65-77. https://doi.org/10.1207/s15389286ajde2002_2

Zamel, V. (1985). Responding to student writing. TESOL Quarterly, 19(1), 79-101. https://doi.org/10.2307/3586773
