Early in 2023, Leo Havemann and I ran workshops on Designing Assessments for Academic Integrity (including AI) – one in February and an update in March (material and links below). The original idea (way back in December 2022 when I was designing it) was to do something around academic integrity. This had been a predominant concern in conversations between Digital Assessment Advisory and academic staff over the last year (not to mention in the HE sector more generally).
Since then, the conversation has been ignited by attention-grabbing headlines around AI content generators, specifically ChatGPT, an algorithmic text-generating tool developed by OpenAI, a Microsoft-backed company. The reason it's been controversial is that it's capable of creating original, human-like texts either from scratch or in response to user prompts. The proliferation of these tools presents a profound challenge for many of the most common assessment types in HE, e.g. those that rely on students submitting written responses, being tested on factual recall or being judged on final outputs rather than process.
As Leo and I discovered, there are quite a few challenges with introducing AI into an assessment design workshop. Firstly, AI tends to steal the show from more fundamental discussions around assessment design. Secondly, in such a fast-moving field, guidance seems to date by the hour, and, although we can try, there is really little point designing assessments around what AI can or can't do. Equally, no sooner do we have a new version of an AI content generator than a new AI content detector is developed, which users quickly learn to foil. This cat-and-mouse situation isn't fun given the high stakes involved for students and the concern that many academics are feeling right now. And finally, our expertise is in teaching, learning and assessment, not the technical aspects of AI, so we came at this very much as lay folk in this arena.
In our workshops, we tried to approach this by being upfront about our limitations, taking a general look at AI and its implications for assessment, aiming to reduce panic, and providing a forum to think through some short-term and longer-term solutions. We really want to invite colleagues into this discussion to contribute their expertise and ideas, as we need to learn from each other in this new and evolving context.
So the first two workshops were a kind of 101, stepping gingerly into the AI and assessment arena. Since running the first workshops, I have realised that designing assessments around what AI cannot do is a fruitless task, as it is continually evolving. We then ran a revised version which was a bit more discussion-based and included updates, for example on the newly released GPT-4. The recording for this can be found below (at least part of it!).
AI or not, we really wanted to foreground how to create assessments that are fit for purpose and promote good academic practice – exploring what we can do to promote academic integrity and minimise students' motivation or susceptibility to engage in academic misconduct through a combination of measures such as education, environment and assessment design (drawing on the Swiss Cheese model from Rundle et al.).
Exploring the implications of Artificial Intelligence for assessment
Short term solutions
Since most assessments have already been designed and finalised for this year, we made suggestions around a) how to start conversations with students right now around AI content generators, b) how to increase student confidence and understanding of the learning process through feedback and formative activities, and c) how to make tweaks to existing assessments, e.g. revising essay and written questions, switching up the format, converting generic questions into scenario-based ones and so on, depending on the flexibility within your assessment design.
Adapting current assessment practice to promote academic integrity
Just as no (text-based) assessment is 100% resistant to academic misconduct, there is no 100% AI-proof written essay or exam question. But, combined with other measures, these suggestions will help to increase academic integrity.
Whilst AI is pretty good at answering right-answer-type questions (MCQs etc.), for essay-type questions the quality can be questionable. In a recent talk, Phil Dawson, an expert in academic integrity and security (Centre for Research in Assessment and Digital Learning, Deakin University), said that when AI-generated essays were graded according to criteria, most assessors did not pass them.
We have made suggestions about how to work proactively with AI in assessment (asking students to evaluate AI responses and so on) but please note: if you are integrating AI into formal assessments, much of the software that is currently free to access may not remain so for long. There are also data protection and IP issues to consider when using non-institutionally supported software. We need to consider this as an institution so that we approach it fairly. Also, ChatGPT is frequently over capacity, but You.com, a lesser-known but promising site suggested by Martin Compton, is currently free (for limited use).
Also in the video is some guidance on scenario-based questions. This downloadable Creating scenario-based exam questions document might prove useful (you can insert your own discipline- and context-specific material, or use current critical events to make these more robust).
The complete set of slides from the session are also available and include the most recent results from interactive activities.
Recording of version 2 Designing Assessments for Academic Integrity (including AI) and Mentimeter slides
Longer term solutions
The way forward for us at UCL can only be shaped by combining expertise from a range of areas (AI, teaching and learning, assessment, ethics and so on). We are fortunate to have people with such know-how at UCL and representatives from these different areas have come together to form the AI experts group. The group are publishing regular briefings:
As mentioned, the capabilities of AI content generators are improving all the time. But just because they can produce a credible piece of writing, code or an image doesn't mean that we don't need to know how to do this ourselves. Perhaps we can outsource some tasks, such as report or resume writing, to AI, but we humans still need to understand the process of writing and producing outputs, of synthesising complex ideas, thinking through problems and evaluating outputs. For this we will need to understand, value and develop (as Rose Luckin proposes) our experiential, embodied, social and emotional intelligence, and to engage in metacognition and epistemology – how do we know what we know, where does knowledge come from, who produces it, what world view does it represent, is it stable or contextual? Critical thinking, as many participants in our workshops acknowledged, is key to student learning, and never more so than now.
Follow the debate
For anyone who wants to follow discussions and perhaps delve more into the technical as well as the pedagogical aspects, the Jisc national centre for AI is a good place to start. They have recorded webinars such as How artificial intelligence has the potential to disrupt student assessments.
There is a plethora of teaching, learning and assessment events happening across the sector on this subject, such as Teaching with ChatGPT (advertised via the Bournemouth website here but run by Kent University).