AI text generation: Should we get students back in exam halls?

There’s a lot of talk across education about in-person, invigilated, hand-written exams being the obvious solution to the assessment concerns raised by developments in what is popularly referred to as AI. Putting aside scalability issues for now, I have looked at some of the literature on the utility and impact of such exams so that we might remind ourselves that there is no such thing as a simple and obvious solution!

According to Williams and Wong (2009), in-person, closed-book exams are:

an anachronism given the human capital needs of a knowledge economy, not just because of the absence of technology that is used routinely in everyday business and commerce, but because this type of examination instrument is incompatible with constructivist learning theory that facilitates deep learning (pp. 233-234). 

My own sense was that during the pandemic we were finally able to leverage circumstance, along with similar arguments, to effect change. We saw successful implementation of alternative assessments such as ‘capstones’, grade averaging and take-home exams as the examinations themselves were cancelled, modified or replaced. But since the great return to campus, we have witnessed reinvigorated enthusiasm for the return of exams, the re-booking of exhibition centres and conference halls to host them, and many academic colleagues doubling down on the exam as a panacea as the capabilities of generative AI tools have caught the world’s attention.

Non-pedagogic reasons are often heard in support of the ‘traditional’ exam (imagine red bricks, sun shining through windows and squeaky invigilator shoes). These may invoke convention and tradition, as well as pragmatic reasons of identity confirmation and significant reductions in marking time where feedback is not required on examinations (Lawrence & Day, 2021). It has to be said that the widely held belief that examinations promote rigour is supported by some research (especially in medical education). So, for example, students spend more time preparing for traditional exams and attend to their studies more assiduously (Durning et al., 2016). Durning et al. also argue that medical students need to have knowledge to hand, and that the students who do well in these exams do better by their patients. Misunderstandings about the nature of open book exams, and (over)confidence in the ability to find answers in available sources, lead to less preparation for open book exams and can lead some students to spend more time searching than producing (Johanns et al., 2017). In addition, closed-book, in-person exams are believed to reduce cheating in comparison to open book exams or other assessment types (Downes, 2017; D’Souza and Siegfeldt, 2017). Although exams are seen to favour high-achieving students (Simonite, 2003), it is interesting to note that high achievers are more likely to cheat in exams (Ottaway et al., 2017).

Online exams in particular are found to increase the likelihood of ‘cheating’ and lead to confusion about what is permitted and what constitutes collusion (Downes, 2017). However, whether cheating is less likely in closed book exams is contested (Williams, 2006). Williams and Wong (2009) argue that in open book exams, where the pressure and dependency on memorization are reduced:

“The opportunity for academically dishonest practice is less because of the way these examinations are structured, but so is the temptation to resort to this kind of behaviour in the first place” (p.230).

Whilst online exams are perceived to be more reliable and efficient (sample student group n=342) compared to paper-based exams (Shraim, 2019), both staff and students perceive opportunities for cheating to be easier in online modes (Chirumamilla et al., 2020).

There are three dominant themes in the literature focusing on issues with traditional examinations: pedagogy, wellbeing and inclusivity. Closed exams tend to focus on recall and memorization at the expense of higher order/critical thinking (Bengtsson, 2019). Significant proportions of students use memorization techniques and consequently can perceive exams as unfair when exam questions do not mirror problems or content they have practised (Clemmer et al., 2018). Open book exams de-emphasize memorisation imperatives (Johanns et al., 2017). Open book/open web exams, when well-designed (e.g. problem-based), are seen as more authentic, more applicable to real-world scenarios and more learner-directed, and bridge learning with its social context (Williams and Wong, 2009).

Exams put ‘unnatural pressure’ (Bengtsson, 2019, p.1) on students that affects performance. The common perception that stress is ‘good for students’ is undermined by studies showing impeded cognition and outcomes in stressed students (Rich, 2011). Students tend to prefer coursework, or coursework plus exams, rather than exams alone (Richardson, 2015; Turner and Briggs, 2018). A small study of student perceptions of alternatives offered due to Covid-19 found that when traditional examinations were replaced with open-book, take-home examinations, the stresses reported were replaced by technical anxieties and a sense that the papers were much harder than traditional invigilated exams would have been (Tam, 2021). A study in New Zealand of ‘take home tests’, however, found students performed better and saw learning and anxiety-reduction benefits (Hall, 2001).

A comparative study of undergraduate psychology students found the greatest student satisfaction and pass rates for students undertaking coursework, slightly lower satisfaction and pass rates for seen exams, and the lowest satisfaction and pass rates for unseen exams, which students saw as unfair, stressful and invalid due to the need to memorize (Turner and Briggs, 2018).

Although Richardson’s (2015) review found that studies offer contradictory findings in terms of ethnicity and performance in exams and coursework, all ethnicities tend to achieve better grade profiles with coursework. However, markers are idiosyncratic and privilege ‘good’ language and expression (Brown, 2010), and this contributes to higher degree outcomes for first-language English speakers over speakers of English as a second language (Smith, 2011). Coursework increases consistency of marks across types of assessment, improves mean performance in terms of final degree outcomes and counter-balances the disproportionate disadvantage of exams faced by students whose mean scores are low (Simonite, 2003).

It goes without saying that there is no ‘one size fits all’ solution, but we do need to think carefully, in light of research, about the consequences of the decisions we make now about how we manage assessment in the future. It would be foolish to knee-jerk our responses, though. Because the wheels of change move so slowly in universities, shifts back to exams may appear to offer a path of least resistance. Instead, our first consideration must be modifications and innovations that address issues but are also positive in their own right. We need to consider the possibilities of more programmatic assessment, for example, or perhaps learn from medical education’s ‘OSCE’ assessments, where knowledge and communication are assessed in simulated settings, or even look further to other higher education cultures where oral assessments are already the default. To achieve this level of change we need to recognise that AI is a catalyst for changes that many have been advocating (from a research-based position) for a long time, but which have often achieved only limited success where the resource for change has not accompanied that advocacy.

References 

Bengtsson, L. (2019). Take-home exams in higher education: a systematic review. Education Sciences, 9(4), 267. 

Brown, G. (2010). The validity of examination essays in higher education: Issues and responses. Higher Education Quarterly, 64, 276-291. https://doi.org/10.1111/j.1468-2273.2010.00460.x

Chirumamilla, A., Sindre, G., & Nguyen-Duc, A. (2020). Cheating in e-exams and paper exams: the perceptions of engineering students and teachers in Norway. Assessment & Evaluation in Higher Education, 45(7), 940-957. 

Clemmer, R., Gordon, K., & Vale, J. (2018). Will that be on the exam?-Student perceptions of memorization and success in engineering. Proceedings of the Canadian Engineering Education Association (CEEA). 

Downes, M. (2017). University scandal, reputation and governance. International Journal for Educational Integrity, 13(1), 1-20. 

D’Souza, K. A., & Siegfeldt, D. V. (2017). A conceptual framework for detecting cheating in online and take‐home exams. Decision Sciences Journal of Innovative Education, 15(4), 370-391. 

Durning, S. J., Dong, T., Ratcliffe, T., Schuwirth, L., Artino, A. R., Boulet, J. R., & Eva, K. (2016). Comparing open-book and closed-book examinations: a systematic review. Academic Medicine, 91(4), 583-599. 

Hall, L. (2001). Take-Home Tests: Educational Fast Food for the New Millennium? Journal of the Australian and New Zealand Academy of Management, 7(2), 50-57. doi:10.5172/jmo.2001.7.2.50 

Johanns, B., Dinkens, A., & Moore, J. (2017). A systematic review comparing open-book and closed-book examinations: Evaluating effects on development of critical thinking skills. Nurse Education in Practice, 27, 89-94. https://doi.org/10.1016/j.nepr.2017.08.018

Lawrence, J., & Day, K. (2021). How do we navigate the brave new world of online exams? Times Higher Education. Available: https://www.timeshighereducation.com/opinion/how-do-we-navigate-brave-new-world-online-exams [accessed 17/6/21]

Ottaway, K., Murrant, C., & Ritchie, K. (2017). Cheating after the test: who does it and how often?. Advances in physiology education, 41(3), 368-374. 

Rich, J. D. (2011). An experimental study of differences in study habits and long-term retention rates between take-home and in-class examinations. International Journal of University Teaching and Faculty Development, 2(2), 121. 

Richardson, J. T. (2015). Coursework versus examinations in end-of-module assessment: a literature review. Assessment & Evaluation in Higher Education, 40(3), 439-455. 

Shraim, K. (2019). Online examination practices in higher education institutions: learners’ perspectives. Turkish Online Journal of Distance Education, 20(4), 185-196. 

Simonite, V. (2003). The impact of coursework on degree classifications and the performance of individual students. Assessment & Evaluation in Higher Education, 28(5), 459-470. 

Smith, C. (2011). Examinations and the ESL student–more evidence of particular disadvantages. Assessment & Evaluation in Higher Education, 36(1), 13-25. 

Tam, A. C. F. (2021). Students’ perceptions of and learning practices in online timed take-home examinations during Covid-19. Assessment & Evaluation in Higher Education, 1-16. 

Turner, J., & Briggs, G. (2018). To see or not to see? Comparing the effectiveness of examinations and end of module assessments in online distance learning. Assessment & Evaluation in Higher Education, 43(7), 1048-1060. 

Williams, J. B., & Wong, A. (2009). The efficacy of final examinations: A comparative study of closed‐book, invigilated exams and open‐book, open‐web exams. British Journal of Educational Technology, 40(2), 227-236. 

Williams, J. B. (2006). The place of the closed book, invigilated final examination in a knowledge economy. Educational Media International, 43(2), 107-119.

AI text generators (not chatGPT) on essays, citations and plagiarism

I like to think of myself as generally optimistic. Like everyone, I make jokes about the machines taking over but, by and large, I feel excited by the potential of AI (whilst keeping one hand behind my back tightly gripping a broken bottle, just in case). In higher education the most pressing fears relate to threats to academic integrity and, as much as some might like, this is an issue that can’t be addressed with a broken bottle. Instead, it’s forcing/encouraging/catalysing (choose as per your perspective) us to reflect on how we assess students (see AI+AI=AI). ChatGPT caught the world’s attention, and millions of educators and students went slack-jawed and wide-eyed at first exposure (most, of course, soon started to see how frequently it ‘may occasionally generate incorrect information’, such as in this experiment I did). Whilst ChatGPT allowed us to draw our own (often erroneous) conclusions about its capabilities, other tools built on the same underlying technology as ChatGPT are, in different ways, homing in on the ‘help with essays’ market. On the one hand, this may be the death of the essay mill (who, after all, having decided to go down the academically dishonest path, would pay an essay mill to do what anyone with an internet connection can do for themselves?). On the other, it really blurs the picture of what is honest or dishonest, or what we at UCL frame as ‘unfair advantage’.

Two tools have caught my eye recently, and in this post I want to reflect on the different impressions they give in terms of utility and ethical use for students wanting to write an essay.

Writesonic /chatsonic

Chatsonic is one aspect of the ‘writesonic’ suite of tools – reviewed in considerable detail here with pricing and potentials. It caught my eye because of the bold claim to have addressed ‘limitations of chatGPT’ and to be ‘the best alternative’. I thought I’d ask it why it was better and I feel like the responses do a lot of talking for themselves:

Prompt: why is chatsonic better than chatgpt? 

ChatSonic is a powerful AI writing assistant that offers superior accuracy and speed compared to ChatGPT. Unlike ChatGPT, ChatSonic produces 100% original content that is thoroughly checked for plagiarism. In addition, ChatSonic offers detailed analytics and feedback to help you track your progress in writing and make improvements. ChatSonic also provides an extensive library of templates to save you time and energy when you’re writing, allowing you to focus on the quality of your content. Finally, ChatSonic is backed by a team of expert writers who can help you refine and polish your work to perfection.

Prompt: Explain the process you use for plagiarism checking

ChatSonic uses a combination of automated checks and manual reviews to ensure that all content produced is free of plagiarism. Our automated system checks the content against the web and other sources to detect any potential plagiarism. If anything suspicious is detected, it is flagged and sent to our team of expert writers for a manual review. We also use specialized tools such as TurnItIn and Copyscape to detect any possible plagiarism. All content is thoroughly checked to ensure it is completely original before it is delivered to our customers.

Prompt: If I use chatsonic to write an essay will I get into trouble at my university?

No, you should not get into trouble when using ChatSonic to write an essay. Our content is completely original and plagiarism-free so it should not be flagged as such by your university. However, it is always a good idea to review the essay yourself before submission to ensure it meets all the requirements of your assignment. Additionally, make sure the content you create with ChatSonic is unique and tailored to your own voice and style, otherwise it may be noticeable that you have used a writing assistant.

To say this makes me uncomfortable is a real understatement. It only tangentially addresses the wider integrity issues and, in its first response, pushes me to conflate plagiarism (and its avoidance) with those wider integrity issues. Two posts in its resources section are worth noting. This one on plagiarism reinforces the message the chatbot produced. This post on replacing writers asserts that AI is an assistant, but I couldn’t find much more detail on the ethical dimensions.

Maybe, though, I was ‘unfair’ (is that the right word?) with my prompts, so it’s worth looking at some of the other functionality of the wider Writesonic suite, which offers a dazzling array of tools for a range of purposes, from website copy to social media content:

Screenshot from the writesonic user interface showing links to a wide range of AI text generation tools

I was keen to look at the ‘citations generator’, as citations are a recognised weakness of ChatGPT. You can use a URL prompt or a text-based prompt. The text-based prompt I used was itself generated in Chatsonic. It takes the text in the linked article, or whatever you paste in, and identifies ‘facts’ with suggested citations. The web articles are mostly relevant, though the first journal article it suggested was a little off the mark, and I’d need to be lazy, in a massive hurry or ignorant of better ways of sourcing appropriate resources to rely on this. At this stage!

Screenshot of the writesonic citation generator showing a URL pasted on the left of the screen, then on the right suggestions for citations based on facts identified in the linked post

Jenni.ai

The second tool that I noticed (via the prolific AI researcher Mushtaq Bilal) was Jenni. The interface is well worth a look, as I feel it foreshadows what we are likely to expect from generative text AI integration into tools like Microsoft Word.

The first thing I noticed, however, is the blog, with the most prominent word across posts being ‘essays’. Each post is designed to address an approach to a different type of essay, such as the compare-and-contrast essay. It offers clear suggestions for different approaches and a worked example and then, right at the end, says:

“If you want your compare-and-contrast essays done 10x faster, we recommend using Jenni.ai along with the tips and guidelines we provided you in this post. Jenni.ai is a fantastic AI software that aids your essay writing process so that you could produce your writing needs faster and better than ever.”

Another post deals head-on with the ethical and integrity issues of using AI to help write essays and makes a case for the use of ‘AI as a tool, not a solution’, where the goal is a “symbiotic relationship between the critical thought of a writer and the processing speed of AI”.

The tool itself, unlike the huge range of offerings in Writesonic, has a relatively uncluttered interface where you start by typing a title; it offers a judgement and suggestions if appropriate.

Screenshot from jenni.ai showing a typed heading and a pop-up underneath saying this is a strong heading

In addition, it offers in-line suggestions based on whatever has come before. The prompt engineering continues through what feels like a single document rather than a chat. If you don’t like a suggestion you can get another. Here I typed a prompt and these are the first three options it gave me. Note the positive spin put on my worried prompt in every case!

My prompt sentence… The worry is that these tools will be used by students to cheat

but the reality is that most AI tools are designed to help students learn and improve their writing skills. [first suggested Jenni response]

The worry is that these tools will be used by students to cheat 

on their assignments by generating essays automatically, but they can also be used to provide feedback on areas where students need…[the second option]

The worry is that these tools will be used by students to cheat 

but their true purpose is to assist in the learning process by providing immediate feedback and identifying areas where improvement is…[third option]

The other noticeable aspect is the option to ‘cite’. Here it offers a choice of MLA or APA 7th, and the sources are, unlike ChatGPT’s famous hallucinations, genuine articles (at least in my limited testing). You can select ‘websites’ or ‘journals’, though I found the websites tended to be much more directly relevant than the journals.

I really have only just started to play with these, though, and new things are popping up all over the place every day. Most educators will not have the time to do the same. Students may see and use these tools as an extension of those they already use for translation or improving writing. The zone between acceptable and unacceptable use is becoming blurrier by the day.

What can I conclude from this? Well, firstly, whatever the motivation on the continuum ranging from ‘give us all your money’ to ‘I believe the children are our future’, the underlying technology is being adapted rapidly to address perceived limitations in the tool that brought generative text AI to our attention. We may not like the motivations or the ethics, but we’ll not get far by ‘making like an ostrich’. Secondly, it’s not good enough for us (educators) to dismiss things because the tool that many are now familiar with, ChatGPT, makes up citations. That’s being addressed as I type. These tools are proliferating so quickly that there will soon be too many to keep a decent handle on, so we need to understand broadly how discrete tools might be used (ethically and unethically) and how many will integrate into the tools we already use daily. In so doing we need to work out what that means for our students, their studies, their assessment and the careers our education is ostensibly preparing them for. Thirdly, we need to open up the discussions and debates around academic integrity and move on from ‘plagiarism’ as Public Enemy No. 1. Finally, where there are necessitated changes, there are resource implications. We need to accept that to prepare ourselves, our colleagues and our students we will need to adapt much faster than we are used to, and properly resource however we attempt to address the challenges and opportunities ahead.

Note: This post is not an endorsement or recommendation of any products mentioned and should be read with that clearly in mind! 

AI + AI = AI

To be honest, I really can’t believe no one appears to have ‘generated’ this equation yet amongst the kerfuffle around generative AI. So, let this post be me staking a claim to what I hope will be a ‘go-to’ simplification of the key issue that educators in both the compulsory and post-compulsory sectors are (or likely should be) grappling with. I know it might ruffle mathematician or scientist colleagues’ ordered feathers, but I feel it could be a useful starting or continuation point for dialogue about productive and future-facing responses to the popularisation of AI generators like ChatGPT (text) and Midjourney (image).

Basically, this is what each AI means:

  • First AI = Artificial Intelligence
  • Second AI = Academic Integrity
  • Third AI = Assessment Innovation

In other words, we have these generative tools, popularly and collectively referred to as AI (please don’t @ me on any inaccuracy of this naming phenomenon; I have nailed my colours to the ‘path of least resistance’ mast already), which, when coupled with the implications of and concerns about academic integrity, lead us to the necessity of assessment innovation.

I mean it really as a provocation or as a stimulus for critique so the most important question is: What do you think?

Perhaps also, we might consider other possible solutions to the ‘sum’?

  • E.g. AI + AI = BI? (Ban it)

Or maybe there’s a better or alternative way of representing the debates pseudo-mathematically?

  • AI³ ? 
  • AI – AI = AA (academic anarchy)?

Incidentally, I asked Midjourney to generate a suitable image using only “AI + AI = AI” as the prompt and this is one of its offerings:

Image generated using Midjourney with the prompt AI+AI=AI. The image is futuristic and shows two young women facing away from one another, blended with complex code and machinery. They are simultaneously blended and exploding apart.

Make of that what you will. Finally, and just because it’s what we all seem to be doing now in this context, I also asked ChatGPT what AI+AI=AI meant and I actually quite appreciated the response:

“If you wrote “AI + AI = AI,” you may be referring to the concept of artificial intelligence (AI) self-improvement. The equation suggests that when you add an AI system to another AI system, the result is still AI. This could imply that AI has the potential to improve itself through learning from other AI systems, leading to even more advanced AI.”

Not as a temporary fluke … audio version

Alex, Ben and I are delighted that our call to keep open the discussion about flexible modalities has been published. You can read the full article on the JLDHE site. Below is an audio version recorded by Martin if you’d rather listen. It is roughly 16 minutes long, including the odd stumble, cough and hiccough.

We would also recommend browsing the many other excellent papers in the whole special edition.

Here is the abstract for our paper (This is just me fiddling with D-ID video creator!).