The purpose of formative assessment is not just to gauge where students are in their learning and give them a predicted grade; it is to help them learn more. Unless an assessment comes with feedback that students engage with and use to improve, it is not formative; it is just a test.
Luckily, formative feedback does not necessarily require a lot more time from you. Here are a few ways of providing fast feedback that is valuable to your students.
You can set up self-marking quizzes using a number of different tools including Blackboard Tests, Mentimeter, Panopto, Xerte and Microsoft Forms. See our article outlining the features of different tools.
Question types that can be automatically marked include multiple choice, fill in the blank, calculations, drag-and-drop, matching pairs and hotspots (i.e. clicking on a certain area of an image). This gives you scope to make the tests engaging and varied. Some tools will also allow you to use question pools or variables so that a student can take the test multiple times and see different questions.
Of course, some thought needs to go into writing the questions (read more in our article on writing MCQs) and the feedback provided. While simply knowing which answers were right or wrong is a form of feedback, it is more helpful for learning if students understand why an answer is wrong. Many online tools let you customise the feedback for different answers, so you can point students to further resources, give a hint or prompt further reflection. Tests can often be configured to allow multiple attempts, either at a question or at the whole test, and to withhold marks until a certain time or completion point.
Online testing tools may also include question types that require a human marker, such as short answer or essay questions. It is still possible to give feedback on these types of question within the test by using model answers. First, the student records their own answer to the question. Then they are provided with a model answer for comparison. Ask them to reflect on the differences between their answer and the model answer – did they cover all the points? Is there anything they didn’t understand? This is very much a situation where learners will get out what they put in, and a certain amount of maturity and motivation is required of them. But it could prove a good self-help resource for those who are engaged.
Generic feedback can also take the form of comments written by you after the test, addressed to the whole group rather than to individuals. You can use the results of automatically marked tests to see where the common misconceptions are. If there is a question everyone has got right, you don't need to return to it; but if there are questions most people have got wrong, you can prepare feedback explaining why those answers were wrong and release it to the whole group.
In this context, a rubric is essentially a table of marking criteria against levels of competence, with a mark and feedback assigned to each cell for you to pick from when marking. Ideally, rubrics should describe specific, hierarchical learning outcomes for each level, rather than vague labels such as "ok/good/very good at…".
There are rubric tools available in Blackboard and PebblePad. In Blackboard you can choose points, percent, or a range of points or percent for each cell, or just feedback. See Blackboard’s help pages on rubrics for more information.
Rubrics are very useful for consistency in assignments with more than one marker, but they also enable quick feedback to students by showing which criteria they are proficient in and which need improvement. You might also use rubrics to scaffold peer feedback, or to invite students to self-assess before receiving their mark.
Done well, peer feedback can be genuinely valuable to both the receiver and the giver, and while you may need to do some quality checks, it can be a very efficient way for all students to receive personalised guidance.
The real power of peer feedback is that the student giving the feedback has to think even harder about the brief and how the work compares to it, which may highlight mistakes they have made in their own work.
For more information see our article on peer feedback.