We have undergone something of a marking and feedback transformation over the last six or seven years in the UK. Gone are the days of “ticking and flicking” along with simple “well done” comments (and rightly so). Feedback occupies one of the highest-impact spots in John Hattie’s “Visible Learning” research and, boiled down to its core elements, it makes perfect sense that it should. Working out what students can’t yet do and helping them learn how to do it is fundamental to the learning process, and feedback should do exactly this. But how has Remote Learning affected the role of feedback in this learning process?
Often, feedback is wrongly seen as something we do to a piece of work after it has been completed. We focus on improving the work, not on improving the student, and become the editor and proofreader rather than the teacher. The result often manifests itself in marking policies that place an emphasis on the frequency, colour or format of “marks” in an exercise book rather than on developing a culture of feedback-rich teaching and learning. When we make feedback about the marks made in a book, we get teaching and learning that is planned with the marking in mind. I am guilty of this myself. In the past I have planned lessons with the response pen colour of the tasks as much in mind as the learning itself. My planning was in danger of becoming about the feedback policy rather than the learning. My books looked great but I wasn’t entirely convinced that the learning was.
I get why these policies evolve. Books are easy to evidence should anyone come asking questions about the quality of teaching and learning. But there is so much more to feedback than just what we see in books (which can sometimes be superficial but pretty). This is where remote learning and digital media can accentuate the problem of superficial feedback: it sometimes looks great but ultimately falls a bit flat.
Feedback has been a personal research and experimentation project of mine for nearly two years, and I am leading the work to develop this aspect of teaching and learning across our academy this year. As I read, researched and experimented, it became clear that there were three main areas where feedback needed to be considered, so I started to formulate what has become the Active Feedback Cycle. There are no new ideas in what follows here. You will probably have seen this in some format or another elsewhere, but it helped me organise my thoughts around “feedback”. Shaun Allison, for example, recently blogged brilliantly about a similar cycle of feedback. This is just my interpretation of it, but it impacts on how we view remote feedback, so I will explain my thinking.
Active Feedback Cycle
This is an organic process and permeates every single thing we do. We constantly Check the extent to which our students have understood something. Every time we ask a question, for example, we are engaging in the process of checking for understanding. The often overlooked direction of travel for feedback information here is that which passes from student to teacher. When we check for understanding we are gaining feedback from our students as to what they can and can’t yet do. We should be in receive mode, gathering information that will influence how we proceed with the learning process but always prepared to Act when needed.
If our checks indicate that the new learning is understood by all then we move on, but as soon as a misconception or knowledge gap is exposed we need to act to address it before it becomes embedded. Again, this action stage involves a partnership between student and teacher. Both must take action to rectify the issue. We sometimes do this without realising, but that doesn’t mean it can’t be a conscious and deliberate action. For example, every time we rephrase a question that students have struggled to answer, we have taken action based on the feedback we received from students about what they have understood. More deliberate actions have also become embedded practice for teachers and students, such as redrafting and repeating things they struggled with in the “response to feedback” stage of lessons. I really like Tom Sherrington’s “Action Feedback” here. Go check it out.
This action stage is often the one that draws the attention in book scrutinies because it should be clear to see. We check and notice that a student has made a mistake, or that they could have done better with a different bit of input from us, so we give them some feedback and ask them to do something based on it. If they use a differently coloured pen for this, it stands out when someone looks in their book and the action becomes evidence of the learning. But what if they can’t act on the feedback because of a knowledge gap? “Can you check your spellings in the first paragraph for me?” – Well, if I could then I wouldn’t have made the mistakes in the first place! In this instance there would be a break in the feedback cycle unless the teacher uses this gap to inform what they do next.
Sometimes “Acting” isn’t enough and we need to address an issue in a more explicit way. The information we gather from the feedback process can (should?) be instrumental in informing what comes next. This is often where the feedback process falls down. We plan our schemes of work in advance to ensure curriculum content is covered and developed efficiently. But how often do we really adapt them as we go to take account of the information we gathered in the Check and Act phases of feedback? How often do we stop and take the time needed to reteach something that may have been misunderstood or needs more practice?
How has Remote Learning impacted on the Active Feedback Cycle?
There are myriad ways to check for understanding in both synchronous and asynchronous remote learning lessons. Whether you create a quiz using Google Forms, Quizizz or Kahoot, or simply ask questions for students to answer in the chat or out loud as a verbal contribution, is entirely your call. I have seen brilliant uses of all of them, but the same underlying principle is common when they are used successfully: students are being actively checked for their level of understanding. So far so good. But what happens when we spot an issue? How do we take action?
Taking Action to Address Issues in Remote Learning
Firstly, any platform that allows a teacher to monitor students as they work has a distinct advantage over systems that facilitate feedback given after the task is completed. The teacher can take action as soon as they see an issue.
I watched a brilliant lesson in Maths recently where the teacher had students working through tasks set on Desmos. As students worked through those tasks, she monitored, intervened and modelled as and when students encountered issues. She called them back to the Google Meet tab where she was sharing a Smart Notebook workspace where she could actively model and talk through the process with the struggling student(s). The active feedback cycle was embedded throughout the entire process in such a way that nobody was left behind and by the end of the lesson the next learning steps had been identified and planned out.
In our remote English lessons, we use short Google Forms-based quizzes as starters, built from the technical-accuracy feedback of previous lessons. If we see lots of students using capital letters inaccurately in a piece of writing, we can start the next lesson with some short input from the teacher alongside a quiz which acts as a recall or consolidation exercise. The quiz is being used as a targeted intervention, alongside teacher input, based on information that the checking stage gives us. Quizzes don’t have to be used only as the assessment at the end of the learning. They can be really powerful ways to lead into topics too.
The closest I have come in English to the level of live monitoring I saw in that Maths lesson has been monitoring in Google Classroom as students work on their assignments, but it never feels as “in touch” as the Maths lesson did. I have tried several different collaborative platforms such as Jamboard but found that the flaws (only allowing 16 touch points, for example) outweighed the gains for me. There are, of course, management tips and tricks to level that playing field a little, but on balance I try to keep things as simple as possible. And I can hear team Jamboard shouting at their screens right now as they read this, so I apologise in advance.
Giving Feedback or Embedding Feedback in the Learning Process?
There are lots of different ways to “give” students specific feedback on digital work that they have completed. Whether you are a rubric person or a comment bank person will come down to personal preference. Where one person will like to annotate and leave voice-typed comments, another will prefer Mote clip recordings, and someone else BitMoji stickers. Giving students feedback is really well facilitated across lots of digital platforms. But this is where we are often blinded by the technology and miss the importance of the next step in the cycle. How do we ensure students act on that feedback?
If your quiz in Google Forms is set up a certain way, it can self-mark and let students know immediately how well they did, which is great. What it can’t do is intervene to catch misconceptions. That’s the teacher’s job. If the data from the quiz shows that lots of students struggled with a particular question, that is really strong and important feedback that we should be acting on. Remember that feedback is a two-way street. That quiz has given us a strong message from student to teacher that they haven’t understood, and action is required. What do we do about it? If we leave it and move on because students got feedback about how well they did, then we really have missed the point entirely.
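To make that idea concrete, here is a minimal sketch of the kind of check a teacher (or a small script run on exported quiz results) might do to spot questions that need reteaching. The data shape, question names and the 50% threshold are all illustrative assumptions, not a real Google Forms API:

```python
# Hypothetical sketch: flag quiz questions that many students got wrong,
# so the next lesson's starter can target them. The response format and
# the 0.5 threshold are illustrative assumptions.

def questions_needing_reteach(responses, answer_key, threshold=0.5):
    """Return the questions where the proportion of wrong answers
    meets or exceeds the threshold."""
    flagged = []
    for question, correct in answer_key.items():
        wrong = sum(1 for r in responses if r.get(question) != correct)
        if responses and wrong / len(responses) >= threshold:
            flagged.append(question)
    return flagged

# Made-up example data: three students, three questions.
answer_key = {"Q1": "comma splice", "Q2": "its", "Q3": "capital letter"}
responses = [
    {"Q1": "comma splice", "Q2": "it's", "Q3": "capital letter"},
    {"Q1": "full stop",    "Q2": "it's", "Q3": "capital letter"},
    {"Q1": "comma splice", "Q2": "its",  "Q3": "capital letter"},
]

print(questions_needing_reteach(responses, answer_key))  # ['Q2']
```

Here two of three students missed Q2, so it crosses the threshold and would become the focus of the next starter quiz, closing the loop from Check back to Act.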
One solution, of course, is to bring the feedback into the lesson itself. Taking a snapshot of the work done in a lesson and using it to create the first stages of learning in the next lesson can be really powerful. Here, for example, I used work students had completed one day to give the whole group feedback on common successes and weaknesses, which we then acted on via the chat function in Google Meet during the live lesson. It took seconds to grab the work and paste it into the Slides, but it generated quality learning points that were fresh in the students’ minds as they practised the next task. It was probably as close to putting a student’s book under my visualiser as I could get in remote learning lessons. If, like me, you use your visualiser for sharing, modelling and improving work, then I strongly recommend trying something like this yourself.
The Bottom Line
Feedback is at its best when there is a dialogue between teacher and student, with the teacher taking just as much information from the student about what they can or can’t yet do as they give them in new content. It forms a continual cycle of checking for understanding, acting to address gaps or misconceptions, and using both of these to inform what we do next. If we, or the tools we choose, only ever “give” feedback, be it a score, a comment or a sticker, we have made a break in the feedback cycle and may well find that our new content is being built over gaps that will eventually combine and open into chasms. By keeping the Active Feedback Cycle alive and constantly informing our immediate interventions and next-steps teaching, we can hopefully catch the gaps and address them before they open so wide they are impossible to fill.