The value of written corrective feedback

After a slight interruption to our planned schedule of posts we are back on track. Blogging was interrupted by a number of (exciting) developments. Firstly, we launched a new presessional programme – one that focuses on education and knowledge. Secondly, we have launched the MA TEAP. These events, combined with a host of other activities – start of term, away days, plenty of meetings etc. have taken over somewhat.

This blog post was written by Richard Lee – a colleague on the insessional programme here at Nottingham. Richard is also studying for an Ed.D in the School of Education. Please feel free to comment.

Written corrective feedback

I would imagine that most EAP practitioners believe in the value of written corrective feedback (WCF) when helping students develop their writing proficiency. It can help the writer to revise a particular text and, more importantly, it provides them with positive input that aids long-term improvement in writing ability. Learners seem to expect it, and research into the experiences of international students at HE institutions has shown that students value learning support systems that provide feedback on their writing skills (Andrade, 2006). It would appear, then, that WCF is a key support element which helps non-native English speakers (NNES) to adjust to their new academic surroundings and go on to successfully complete their course requirements.

We provide WCF not just because we feel it is important to meet the expectations of our students but also because we know that errors may lead to the stigmatization of the writer (Ferris, 2006). NNES writers not only have to deal with this deficit perspective but are also victims of what Williams (1981) suggested was an unconscious tendency for readers to notice more errors in a novice writer's work than in an expert's – a double-whammy, if you like.

So, there appear to be solid reasons to suggest that error correction is important. But what of the occasions where my feedback has appeared to be rather less than helpful? Why is it that some individuals make meaningful gains in accuracy and fluency in relatively short periods of time, while others do not? Is there something wrong with my approach to written feedback? Do some people have learning styles that simply do not take to written corrective feedback? Or are there hidden variables of which I am not aware?

Recent L2 writing research has begun to examine these particular questions and challenge commonly held opinions about the efficacy of WCF. This began with an article published by Truscott (1996) in which he challenged both the theoretical and pedagogic principles underpinning its use. In essence, his analysis of prior WCF research showed it to be ineffective – his case resting on the fact that much of the previous research found no significant positive effects for correcting student errors in L2 writing.

Central to Truscott’s criticism is his suggestion, based on Krashen’s monitor model, that different linguistic forms have their own particular order of acquisition and providing feedback on a form which the student isn’t yet ready to acquire is problematic.

Truscott raised concerns over other issues too. He questioned the ability of teachers to identify errors correctly and provide the appropriate correction and meta-linguistic explanation. In addition, he doubted whether students actually understand the feedback, suggesting that on many occasions they simply forget the rule or lack the motivation to apply it at a later date. Although Truscott and Hsu (2008) have more recently been willing to concede that error correction may help in the editing of a particular piece of writing, they continue to maintain that WCF does not lead to any noticeable long-term outcomes.

However, many have challenged the validity of his conclusions. Bitchener and Ferris (2011: 22) sum this up by suggesting that ‘the evidence he presented was extremely limited and the findings of the studies were conflicting’. One only has to look at Ellis’ typology of written corrective feedback (2009) to see that there is a wide array of WCF options available, and that the type of feedback we use has ramifications for how successfully its recipient will attend to error. Indeed, there are clearly both good and bad ways of providing feedback and, as EAP practitioners or researchers, we need to identify and prioritize approaches that are going to be effective. Hence, for research to provide a clearer picture of what constitutes effective WCF, or indeed whether it works at all, studies are needed in which learner, situational and methodological variables are embedded and clearly evaluated (Evans et al., 2010). Many would suggest that the studies Truscott looked at in his original analysis fell well short of this goal.

It’s interesting to note that since Truscott threw down the gauntlet, there has been a reappraisal of what WCF can realistically offer and also a closer examination of good practice which has been informed by research that has applied more ‘rigorous’ research designs to test its efficacy. This has led to some interesting findings and subsequent recommendations for improving practice when providing WCF.

Here are some of the more interesting and contentious findings (by no means an exhaustive list) from recent WCF research:

  • Much recent research has tried to provide a more robust approach to testing the efficacy of WCF by using control groups. In virtually all cases, the treatment groups that received either direct feedback (the teacher gives the correct form) or indirect feedback (the teacher indicates the error but does not give a correction) outperformed the control groups on subsequent post-tests (Bitchener and Ferris, 2011).
  • Where feedback is focused on particular linguistic forms rather than using an unfocused approach, treatment groups tended to do better in the long term (Bitchener and Ferris, 2011).
  • Where research has compared the effectiveness of direct feedback and indirect feedback, direct error correction appears to be more effective in the long term (Bitchener and Ferris, 2011).
  • Where research has looked at WCF delivered in conjunction with oral meta-linguistic explanation, outcomes appear to be more successful (Bitchener and Ferris, 2011).
  • One study by Ferris and Roberts (2001) found no significant differences in the editing success of treatment groups which received either coded feedback or feedback where errors were simply underlined.
  • Chandler (2003: 293) suggests that one crucial element in the success of WCF is that the learner needs to properly attend to the error by systematically incorporating the feedback in further revisions.

The first finding outlined above suggests that WCF appears to provide learners with clear gains in their writing development. The five additional points suggest that L2 writing research is starting to answer the questions outlined at the beginning of this article and develop a more informed approach to WCF – even if some of the findings appear to be rather disconcerting and may run counter to what we understand as standard practice.

References

Andrade, M.S., 2006. International students in English-speaking universities: Adjustment factors. Journal of Research in International Education 5, 131–154.

Bitchener, J., Ferris, D.R., 2011. Written Corrective Feedback in Second Language Acquisition and Writing, 1st ed. Routledge.

Chandler, J., 2003. The efficacy of various kinds of error feedback for improvement in the accuracy and fluency of L2 student writing. Journal of Second Language Writing 12, 267–296.

Ellis, R., 2009. A typology of written corrective feedback types. ELT Journal 63, 97–107.

Evans, N.W., Hartshorn, K.J., McCollum, R.M., Wolfersberger, M., 2010. Contextualizing corrective feedback in second language writing pedagogy. Language Teaching Research 14, 445–463.

Ferris, D., Roberts, B., 2001. Error feedback in L2 writing classes: How explicit does it need to be? Journal of Second Language Writing 10, 161–184.

Ferris, D., 2006. Does error feedback help student writers? New evidence on the short- and long-term effects of written error correction. In K. Hyland & F. Hyland (Eds.), Feedback in Second Language Writing (pp. 81–104). Cambridge: Cambridge University Press.

Truscott, J., 1996. The Case Against Grammar Correction in L2 Writing Classes. Language Learning 46, 327–369.

Truscott, J., Hsu, A.Y., 2008. Error correction, revision, and learning. Journal of Second Language Writing 17, 292–305.

Williams, J.M., 1981. The Phenomenology of Error. College Composition and Communication 32, 152–168.


4 Comments on “The value of written corrective feedback”

  1. Steve O'Sullivan says:

    Thanks very much for this, Richard.

    Again on this blog, an excellent summary of practical issues which ring true on the ground, and a very useful survey of relevant literature.

    Re. the point you report made by Truscott (1996): “the ability of teachers to identify errors correctly and provide the appropriate correction and meta-linguistic explanation …. [and] whether students actually understand the feedback”, achieving some kind of consensus on meta-linguistic ‘labels’ or codes can also be a bit of a minefield amongst teachers. One person’s ‘style error’ annotation may be another’s ‘wrong word/phrase’ annotation – both labels could be imprecise; moreover, neither, as mentioned in your post, might be understood or ‘corrected’ by the writer.

    In her recent article about the problems faced by undergraduate students who may be informed in feedback that they need to be ‘more critical’ or that their writing ‘lacks argument’, Wingate (2012) uses a neat description to sum up what can happen in this regard. She makes the point that “unknown concepts are used to explain unknown concepts, and different labels are used for the same concepts” (p. 152).

    Reference:

    Wingate, U. (2012) ‘Argument!’ Helping students understand what essay writing is about. Journal of English for Academic Purposes, 11(2), pp. 145-154.


  2. Good choice of topic. Richard and Steve have highlighted some things that have interested me for some time.

    Last year I decided to use ‘Quickmark’ (part of Blackboard) for a 10 week pre-sessional. It’s an online marking system – you highlight a word or phrase in the student’s online submission, and add from a ‘set’ of labels (‘Style’, ‘Structure’ and so on).

    There were 7 compositions, and the students had to correct and resubmit each one.

    The student responses were interesting. The fact that they all did all the corrections (not necessarily correctly!) was a pleasant surprise. Their feedback on this crude piece of research:
    -They could always read my feedback
    -They always knew what each label meant (the label is linked to a description and example)
    -They only had eight small changes to make on a page and one big one (that’s all I could fit on the document)
    -Correcting was easy

    I don’t know that their English improved – but their homework did. And it’s making me think about whether I’m wasting my time the ‘old’ way.

    Has anyone else had experience with online WCF? It would be interesting to hear some other views.


    • Steve O'Sullivan says:

      Hi James. Quickmark sounds interesting. Turnitin has something called Grademark which sounds similar. I haven’t used either, yet, though have dabbled briefly with GM.

      Word has the facility (or used to, at least) whereby you could create your own pre-recorded customised macros and attach them to your own dedicated toolbar with annotation buttons – this, with ‘insert comments’ for more detailed descriptions. I used that macro method quite a bit but, apart from an ever-burgeoning set of macros and buttons, possibly strayed into ‘over-analysis’ and ‘over-correction’ territory, with consequences possibly in terms of payback for time invested. The ‘in-built’ restriction to 8 small changes and 1 big one per page sounds as if it would have turned out more sensible. (Out of interest, did you use a particular ‘set’ of 8 change ‘types’ and apply them to all submissions? And, if so, which ones did you use?)

      Some years ago, Martin Holmes (Hot Potatoes) brought out a programme called Markin’, but having seen Grademark (and Quickmark sounds like it may be similar), these newer tools are potentially much more sophisticated. With fuller ‘meta’ description possibilities linked to the annotation clue for further help, both seem to have the potential to cut down on ‘marking’ time, as you say with regard to your experience with QM. I’m sure other TEAP colleagues will also have had experience with one or other of these tools, and it looks like there might be a bit of research already done, if a cursory search return is anything to go by.

      Steve


  3. PAUL FANNING says:

    Thanks for a stimulating read on a topic in which I have a great interest after thousands of hours of marking! I have come to believe that the wording of written feedback is all-important, and in giving more attention to my own wording I have started to appreciate the difficulty it poses and the sophistication it requires.

    My own response to the sort of worries that Truscott had about the worth of written feedback is to trust in the fairly widely accepted idea of delayed effect: that learners who are “not ready” to acquire a piece of language will still acquire it more quickly eventually if they have received some instruction on it in the past.

