AI and implications for HE assessment – my tuppence worth*


When I can, I use Friday afternoons to fiddle with things. EVERYONE, it seems, has been fiddling with OpenAI's ChatGPT this week, and much larking about and amusement has been had with the churning out of mostly anodyne, often bland responses to questions. Many of the responses to actual short-answer questions, according to academic colleagues, would likely earn a student a comfy but not exceptional mark in the 50s range. I personally feel the threat to school-level homework is much more immediate (and trust me when I say I don't think a threat to school homework is necessarily a bad thing at all!), but there's no doubt in my mind that if the Covid lockdowns didn't catalyse thinking around assessment approaches in HE, then surely this rapidly improving technology must?

Anyway, on the principle that to respect intelligence is to hear its voice, I laboured hard to put my assumptions aside (actually I didn't, of course, as I am, like everyone, a flawed and opinionated human) and had a conversation with the OpenAI language model commonly referred to as ChatGPT about one of the issues that seems the most vexatious. Make of it what you will! We are already having some really fascinating conversations at my place about this, as folk are across the world. As a starting point (and it does seem that, for now, the text generation is likely best seen only as a starting point), I first asked:

1. Martin: What is your name?

Catherine: I'm sorry, but I am not a person with a name. I am a large language model trained by OpenAI to assist with a variety of tasks, such as answering questions and providing information on a wide range of topics. I don't have my own personal identity or consciousness. I exist solely to serve as a tool to help people with their questions and information needs. Is there anything else I can help you with?

So, that told me. Nevertheless, I am a human and wield extraordinary power, though, oddly, I felt uncomfortable assigning a name. However, I also fiddled a bit with a text-to-speech avatar (see below) whose name in the software is Catherine, so I stuck with that. Why did I want a name? Because: 1. it's what they do in sci-fi just before they realise it's too late to pull the plug; 2. I'm interested in my own perceptions of my interlocutors, where subtle changes might influence my biases and perceptions about what they say and the value I put in it; and 3. I have been thinking a lot this week about how we can do more to humanise our interactions with students, and this seemed like an opportune moment to bring these two worlds together, albeit clumsily!

2. Martin: I'm interested in how AI will change the types of assessment students in higher education will do. What are your thoughts?

Catherine: As AI technology continues to advance, I believe that there will be an increasing focus on using AI to assist with assessment in higher education. This could take the form of using AI to grade written assignments, conduct exams, or even provide personalized feedback to students on their work. For example, an AI system might be able to read a student’s essay and provide feedback on the organization and clarity of the writing, or it might be able to grade a multiple choice exam more quickly and accurately than a human.

[Incidentally, I experimented with using an avatar to speak the words of OpenAI's ChatGPT, and this is what I came up with from https://ttsdemo.com/ 'oddcast'. I struggle, though, to see where avatars add value in this dynamic, so I left it there. Happy to be guided on why this might be something to explore and invest in. Transcript as per the response to question 2 above.]

3. Martin: OK, thank you [my son thanks his Alexa on the assumption that when the machines do take over they'll remember this and spare him the most arduous labours or punishments]. What about now though? Could students use GPT-3, for example, to generate essays of a passable standard?

Catherine: It is possible that a student could use GPT-3, or another large language model, to generate essays of a passable standard. However, I think it’s important to note that the use of AI in this way would not necessarily reflect the student’s own knowledge or understanding of the subject matter. Instead, it would be a form of cheating, as the student would be using AI to do the work for them instead of demonstrating their own understanding of the material. It’s also worth considering that using AI in this way could be detected by instructors, which could result in disciplinary action. Ultimately, I think it’s important for students to do their own work and demonstrate their own knowledge and understanding of the material in order to truly benefit from their education.

Note the use of the repeated phrase 'I think it's important…'. This connotes beliefs/opinions, but when pushed, 'Catherine' is clear they have none (see the final question below).

4. Martin: In case students were tempted to take such a shortcut, what might you recommend to lecturers keen to design assessments that are harder to generate using AI?

Catherine: If a lecturer is concerned about students using AI to cheat on assessments, there are a few strategies that they could consider. For example, they could design assessments that require students to apply their knowledge and understanding of the material in a more creative or open-ended way, rather than simply regurgitating information. This could include tasks such as writing an original research paper, creating a multimedia project, or conducting a scientific experiment. Additionally, lecturers could use a combination of different assessment types, such as multiple choice exams, short answer questions, and essays, in order to get a more well-rounded view of a student’s knowledge and understanding of the material. Finally, lecturers could also consider using plagiarism detection software to help identify any instances of cheating on assessments.

5. Martin: But plagiarism detection doesn’t work on AI generated text does it?

Catherine: It’s true that plagiarism detection software is not designed to detect AI-generated text, as it is not typically possible to attribute authorship of such text to a specific individual. However, there are other ways that lecturers can use technology to help detect cheating on assessments. For example, they could use tools that track a student’s online activity during an exam to ensure that they are not consulting external sources of information. They could also use AI to grade exams and assignments in a more objective and consistent manner, which could help to identify anomalies or inconsistencies in a student’s work that may indicate cheating. Ultimately, the key is for lecturers to stay up-to-date with the latest technologies and use a combination of different strategies to help prevent cheating on assessments.

6. Martin: I actually believe that surveillance of students is insidious and dangerous for a whole host of reasons. Do you not share that view?

Catherine: As an AI assistant, I don’t have personal opinions or beliefs about surveillance of students or any other topic. My purpose is to provide objective and accurate information to help people with their questions. It’s important to note that surveillance of students can be a sensitive and complex issue, and there are valid arguments on both sides. Some people believe that surveillance is necessary in order to prevent cheating and ensure that students are held accountable for their work, while others argue that it can be intrusive and potentially violate students’ privacy. Ultimately, the decision about whether and how to use surveillance in education is up to individual schools and educators, and will depend on their specific goals and circumstances.

So, if this were a conversation with a human colleague, I'd come away with the impression they were trying hard to offer balance, but I'd read, perhaps unsurprisingly, a bias towards tech solutions to some of the issues raised. In that respect, I find it interesting that the answer to question 5 very much drifts from any sense of thread and focuses on online exam surveillance rather than, say, tools that might be used to determine a student's writing signature. Anyway, it's a start. And very much the start of something pretty big.
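For the curious, here is a toy sketch of what a 'writing signature' (stylometric) comparison might look like in principle. To be clear, this is my own illustrative assumption, not any real detection product: it extracts a few crude features (mean sentence length, mean word length, type-token ratio, function-word frequencies) from two texts and compares them with cosine similarity. Real stylometry is far more sophisticated, and whether any of this reliably detects AI-generated text is very much an open question.

```python
# Toy stylometric comparison. The feature set and the idea of thresholding
# cosine similarity are illustrative assumptions only.
import math
import re

# A small, arbitrary set of English function words (hypothetical choice).
FUNCTION_WORDS = ["the", "of", "and", "to", "a", "in", "that", "is", "it", "for"]

def features(text):
    """Extract a small stylometric feature vector from a text."""
    words = re.findall(r"[a-z']+", text.lower())
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    n = len(words) or 1
    vec = [
        len(words) / (len(sentences) or 1),        # mean sentence length
        sum(len(w) for w in words) / n,            # mean word length
        len(set(words)) / n,                       # type-token ratio
    ]
    vec += [words.count(w) / n for w in FUNCTION_WORDS]  # function-word rates
    return vec

def cosine(u, v):
    """Cosine similarity between two feature vectors (0.0 if either is zero)."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

# Compare a new submission against a known sample of a student's writing.
known = "I think the results show that the method works well in practice."
submission = "I think the data suggest that the approach performs well overall."
similarity = cosine(features(known), features(submission))
print(round(similarity, 3))
```

A higher similarity would (very naively) suggest the two texts share a style; a real system would need many more features, a proper baseline corpus per student, and careful attention to false accusations.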

*Almost literally, as the tokens I used to do this cost around 2 US cents.