Developing an AI Assessment in the Division of Medicine

The UCL AI Expert group have recently published a resource providing guidance on adapting assessment methods to better support learning and accommodate the challenges that may arise from the global use of generative AI. The resource (Designing assessment for an AI-enabled world) includes the following:

  • Six small-scale adaptations to current assessments, presented as video guides, which can be seamlessly integrated into current module descriptions
  • Suggestions for planning more substantial changes using an ‘Assessment Menu’, a set of over 40 assessment approaches

Although implementing these changes may not be easy, some module teams have already begun contemplating revisions to ensure that their assessments remain fit for purpose in the future.


In the following video case study (20 minutes 22 seconds), Dr Andrew Williams, Associate Professor, Division of Medicine, talks about how the Molecular Basis of Disease (MEDC0010) module team have redesigned their assessment to incorporate use of AI.

While the team have made this decision in part to ensure the academic integrity of their assessment, their primary rationale is to explore the potential benefits of using AI in assessments. As Andrew explains:

It’s already becoming a reality in the workplace, and therefore I think we need to try and train our students on its use, on its capabilities, on its limitations, and also on its ethical implications.

Overview

The module Molecular Basis of Disease (MEDC0010) is a core second-year module within the BSc Applied Medical Sciences programme, taken by around 160 students. In the previous assessment, students were tasked with writing a grant proposal on a chosen human disease. However, a new approach is being adopted for the next academic year using AI-generated grant proposals created by ChatGPT. Students will evaluate these AI-generated proposals against specific criteria, including scientific accuracy, background coverage, rationale, and experimental feasibility.

This shift in assessment aims to foster critical thinking and evaluation skills among the students. The decision to embrace AI in assessments has been influenced by existing case studies in the HE sector and close collaboration with module leads and students. The module team will be using workshops and formative exercises to train students in using ChatGPT and to explore the ethical considerations involved.

In the video case study Andrew Williams explains that while introducing AI into assessments presents exciting opportunities, the module team has faced and anticipates some challenges. Redesigning the assessment, course structure, and marking criteria to accommodate the AI-based approach has demanded time and effort. Evaluating the effectiveness of this new AI assessment is also expected to be a complex task: the team plan to compare grades from the AI-incorporated assessment with those from the previous format, and may also gather student perspectives through questionnaires and focus groups.

Andrew reflects on the significance of training students in AI usage rather than resisting its adoption. In various fields, including science and medicine, AI has already found extensive application. For example, the DeepMind AlphaFold Protein Structure Database predicts protein structures from amino acid sequences, with significant implications for the pharmaceutical industry and biochemistry.

AI is now pervasive in clinical medicine, with machine learning algorithms analysing signs and symptoms to aid healthcare professionals in diagnosis and treatment planning.

However, challenges exist as AI technologies may struggle to match the scientific depth and critical appraisal required at higher academic levels. Andrew also highlights controversial AI technologies like SciSummary and Paperpal, which can accurately summarise published articles. The use of such technologies raises questions about authorship criteria and the implications for training future scientists.

Suggestions for Incorporating AI in Assessments

Andrew gives the following five tips for incorporating AI in future assessments:

  1. Accept that it is a necessary progression. Generative AI is already a prevalent tool in the workplace and can’t be ignored.
  2. Encourage your staff to familiarise themselves with the tools and discuss as a team.
  3. Listen to your students. We have been working on a project with UCL Arena in which one of our students has been looking at our existing assessments and analysing how effective they will be in light of ChatGPT.
  4. Think about building generative AI skills in both your staff and students. We are running our own workshops, but UCL will be providing institutional training packages soon.
  5. Use it as an opportunity to move away from the traditional standard assessment paradigm of MCQs, SEQs and essays.