Covid-19 and cheating

[Image: empty chairs]

Covid-19 saw a whirlwind of change in higher education assessment. Lab and practical work was transformed into mailed-out kits and remote activities, whilst other assessments were abandoned or combined into capstone projects. Without the possibility of face-to-face invigilation, most higher education exams shifted online and became open book. Having responded to these difficult circumstances, universities now have a major opportunity to improve assessment for the long term (Brown & Sambell, 2020). 

Plenty of exciting transformations are taking place (more on that in another blog post), but what could derail some of them is how universities respond to rising academic misconduct. At UCL there has been an increase in institutional misconduct case numbers, and strong concerns have been voiced in conversations and internal surveys. The issue has affected the whole sector: one survey of over 900 UK students found that 1 in 6 admitted to cheating in their online assessments in the academic year 2021-22 (Alpha Academic Appeals, 2022). 

There are several reasons to respond carefully. From the staff point of view, designing open-book exams takes time and expertise. Questions must be designed for a context where a student could easily Google answers, use online resources and/or share answers. Whilst teachers went above and beyond to rapidly redesign their assessments, it’s important to understand this period as “emergency remote teaching” rather than a planned and informed shift to online education (Hodges et al., 2020). Anecdotally, UCL students reported that their assessments were much harder this year, presumably a result of staff trying to design questions that couldn’t be googled. Rather than deterring cheating, some students explained that they needed to collude to answer these more difficult questions. In ideal circumstances, the use of more challenging and applied assessment would be carefully scaffolded so that students felt prepared. 

On the student side, students are more likely to cheat when they perceive opportunities to do so, and access to the internet and a wealth of online communication tools provides just that. At the same time, the learning and wellbeing problems caused by lockdown no doubt disrupted many students’ preparation and focus, likely motivating more students to resort to cheating and collusion to complete their assessments. Additionally, the assessment technology used at UCL for its central exams was new and was deployed in a standardised way. Randomising questions, data, and variables was not widely used, but it could be an effective strategy going forward. It’s therefore important not to judge online education and online assessment too harshly by how they fared during the Covid-19 period. 

[Image: student doing an exam]

Invigilated Exams: back from the dead? 

Given the rise in academic misconduct and collusion, some UCL staff have expressed a strong desire to return to invigilated face-to-face (f2f) exams. Most are calling for f2f invigilation, and a small number have called for proctoring. In contrast, others have celebrated the shift away from invigilated exams as long overdue. UCL’s assessment operating policy for 2022-23 tries to strike a balance. It firmly advocates a “digital first” approach. However, the policy also gives staff grounds to apply for a return to invigilated f2f exams and provides the following examples as acceptable reasons: 

  • closed-book knowledge-based and applied knowledge-based recall assessments where possession of that applied knowledge is central to the students’ education and/or learning outcomes 
  • proof of specific skills (e.g., foreign language ability) ‘on the spot’ without recourse to extraneous resources where that is educationally justified and represents an authentic assessment, including the need to demonstrate professional skills and capability requirements 
  • compliance with professional, statutory or regulatory body (PSRB) requirements 
  • assessments where there is a single definitive answer that could otherwise be found through a search and where it is difficult to monitor assessment misconduct or redesign the assessment to mitigate these concerns (e.g., some numerical subjects) 
  • hand-written examinations (e.g., equations, diagrams) 

These examples certainly chime with the narratives and concerns I’ve been hearing from staff. Some are worried that students aren’t learning material as thoroughly as they would if they were preparing for a closed-book exam. Other disciplines are navigating regulatory and professional body requirements that mandate the use of invigilation. Then there is the difficulty of STEM-based exams, where writing out equations or formulas on a computer just doesn’t work as well as writing them out on paper. Whilst online exam technologies often allow hand-written answers to be uploaded, this jars with the experience and benefits of doing the exam digitally in the first place. None of these issues are intractable. 

In my discussions in the Joint Faculty of Social and Historical Sciences and Arts and Humanities, the main arguments I’ve heard for using invigilated f2f exams come from language-skill and quantitative subjects. The learning outcomes for these modules centre on the acquisition of language skills or of basic numerical, economic or mathematical concepts. The argument goes that students can easily look up this information or collude, which makes open-book exams unsuitable. Across UCL it remains to be seen which exams will stay open book and which will return to invigilated f2f exams. 

Kill your Zombies 

On the surface it seems hard to argue against the claim that online open-book exams are, in general, easier to cheat in than invigilated f2f exams. However, no assessment is completely cheat-proof, and there are several strategies that could be adopted to minimise cheating in an online open-book exam. Interestingly, surveys of staff and students show a mixed picture of online exams during this period (see Reedy et al., 2021). It is tricky to know how to interpret studies based on self-reported behaviour, particularly concerning cheating. It may be that students downplay the ease with which cheating can take place precisely because it is easier and more widespread. Alternatively, differing student views may reflect differences in exam design, with some exams being more effective than others. Understanding best practice in online exam design will be crucial going forward. 

After all, online open-book exams have a lot of benefits. They allow for a broader range of tasks to be completed by the student and can be more authentic. Students have reported finding the format less stressful, as they could take their exams conveniently from home or a library rather than travelling to an exam hall. 

Shifting back to invigilated f2f exams risks losing these benefits and reintroducing some long-standing problems. For example, invigilated f2f exams: 

  • aren’t cheat-proof; in fact, no assessment is (Harper et al., 2021) 
  • cause significant anxiety for students 
  • encourage rote learning and cramming the night before 
  • tend to occur in bulk, which adds to student and staff assessment load 
  • aren’t usually effective points of feedback when the exam occurs at the end of the semester, just before the summer holidays 
  • consist of a student working alone without resources, which is precisely the opposite of the team-based, internet-connected world in which we live and work 
  • may contribute to the BAME awarding gap (Cramer, 2021) 

If invigilated f2f exams return, these issues will need careful consideration. They raise the question of whether invigilated f2f exams are ever suitable assessments for specific disciplines and learning outcomes, or whether they are never a good idea because they cause more problems than they solve. Is it time to move on from them? And if so, what are the alternatives, and what are the barriers to adopting them? 

References: 

Alpha Academic Appeals. (2022). Press release on prevalence of cheating in online assessments, July 2022. http://www.academicappeals.co.uk/news/05072022201747-press-release-on-prevalence-of-cheating-in-online-assessment–july-2022/ 

Brown, S., & Sambell, K. (2020). Changing assessment for good: A major opportunity for educational developers. https://www.seda.ac.uk/wp-content/uploads/2021/04/Paper-for-SEDA-Changing-assessment-for-good-a-major-opportunity-for-educational-developers-4.pdf 

Hodges, C., Moore, S., Lockee, B., Trust, T., & Bond, A. (2020). The difference between emergency remote teaching and online learning. EDUCAUSE Review. https://er.educause.edu/articles/2020/3/the-difference-between-emergency-remote-teaching-and-online-learning 

Cramer, L. (2021). Alternative strategies for closing the award gap between white and minority ethnic students. eLife, 10, e58971. https://doi.org/10.7554/eLife.58971 

Harper, R., Bretag, T., & Rundle, K. (2021). Detecting contract cheating: Examining the role of assessment type. Higher Education Research & Development, 40(2), 263–278. https://doi.org/10.1080/07294360.2020.1724899 

Reedy, A., Pfitzner, D., Rook, L., & Ellis, L. (2021). Responding to the COVID-19 emergency: Student and academic staff perceptions of academic integrity in the transition to online exams at three Australian universities. International Journal for Educational Integrity, 17(1), 1–32. https://doi.org/10.1007/s40979-021-00075-9 

 
