Using AI tools for Science Communication projects in Natural Sciences

Dr Amy Unsworth and student Ben Vengerik share the findings from their project on using AI tools for student Science Communication projects in Natural Sciences.

They discuss:

  • Auditing AI tools for image generation
  • The pros and cons of two tools in particular (Midjourney and Bing)
  • Implications for education and supporting students
  • Equity of access to tools
  • Legal and ethical issues around image generation

 

What is gained and lost from the writing process when using AI tools?

Peter Puxon, Ayanna Prevatt-Goldstein and Jessica Brooks in conversation with their three ChangeMakers Co-Creator students Anenyah Venkatesan, Zsofia Varga and Yishan Li. They reflect on what is gained and lost from the process of writing and reading after engaging with AI tools to work on an assignment.

This project was from UCL’s Academic Communications Centre.

Slides presented at the ChangeMakers Lunch & Learn event are available here: SLIDES AI Co-Creator – ACC P Puxon J Brook + A Prevatt-Goldstein

Generative AI: Friend or Foe?

In this post I share two videos on generative AI, including (of course) reference to ChatGPT. These are designed for a general audience at UCL and will hopefully be of relevance to academic and professional services colleagues as well as students. In these unscripted videos I, a human, talk in a non-technical way about some of the tools, their affordances and implications. The summaries below were generated with GPT-4 using the transcripts of the videos.
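As an aside, for anyone curious how transcript summaries like these can be produced programmatically, here is a minimal sketch using the OpenAI Python SDK. The model name, prompt wording and transcript filename are illustrative assumptions rather than a record of the exact workflow used for these videos.

```python
# Minimal sketch: summarising a video transcript with the OpenAI Python SDK (v1.x).
# Assumes OPENAI_API_KEY is set in the environment and that the transcript has been
# saved locally as "video1_transcript.txt" (an illustrative filename).
from openai import OpenAI

client = OpenAI()

with open("video1_transcript.txt", encoding="utf-8") as f:
    transcript = f.read()

response = client.chat.completions.create(
    model="gpt-4",  # illustrative model name
    messages=[
        {"role": "system",
         "content": "Summarise the following video transcript in one short paragraph "
                    "for a general university audience."},
        {"role": "user", "content": transcript},
    ],
)

print(response.choices[0].message.content)
```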
Video 1:
In this video, Martin Compton from Arena discusses the phenomenon of generative AI, using ChatGPT as a prime example. He addresses the question of whether generative AI is a friend or foe, and suggests that how we react to, utilise and learn from these technologies will determine the outcome. He provides an example of a generative image created with AI, raising ethical concerns such as copyright infringement and the carbon footprint of AI technologies. He also talks about different manifestations of ‘large language models’ and raises questions about the ways members of the academic community could use them.

Access details and transcript for video 1 here

————————————
Video 2:
In the second video about generative AI, Martin Compton from Arena builds on discussions with a colleague, Professor Susan Smith, and explores whether generative AI is a friend or foe. He acknowledges the power and remarkable capabilities of AI tools like ChatGPT (a large language model text generator) and Midjourney, an AI image generator. However, he advises against panicking or feeling anxious about the impact of these technologies. Instead, Martin suggests that we should adapt, adjust and learn from the ethical issues and implications these tools present. By finding ways to accommodate, embrace and exploit the potential of generative AI, we can utilise these technologies for labour-saving purposes and ultimately enhance various aspects of our lives.
———————————
Podcast

AI text generation: Should we get students back in exam halls?

There’s a lot of talk about in-person, invigilated, hand-written exams being the obvious solution to the assessment concerns being discussed across education in light of developments in what is popularly referred to as AI. Putting aside scalability issues for now, I have looked at some of the literature on the utility and impact of such exams so that we might remind ourselves that there is no such thing as a simple and obvious solution!

According to Williams and Wong (2009) in-person, closed-book exams are: 

an anachronism given the human capital needs of a knowledge economy, not just because of the absence of technology that is used routinely in everyday business and commerce, but because this type of examination instrument is incompatible with constructivist learning theory that facilitates deep learning (pp. 233-234). 

My own sense was that during the pandemic we were finally able to leverage circumstance, along with similar arguments, to effect change. We saw successful implementation of alternative assessments such as ‘capstones’, grade averaging and take-home exams as the examinations themselves were cancelled, modified or replaced. But since the great return to campus, we have witnessed a reinvigoration of enthusiasm for the return of exams and the re-booking of exhibition centres and conference halls to host them, and heard many academic colleagues doubling down on the exam as a panacea as the capabilities of generative AI tools have caught the world’s attention.

Non-pedagogic reasons are often heard in support of the ‘traditional’ exam (imagine red bricks, sun shining through windows and squeaky invigilator shoes). These may invoke convention and tradition as well as pragmatic reasons such as identity confirmation and significant reductions in marking time where feedback is not required to be given on examinations (Lawrence & Day, 2021). It has to be said that the widely held belief that examinations promote rigour is supported by some research (especially in medical education). So, for example, students spend more time preparing for traditional exams and attend to their studies more assiduously (Durning et al., 2016). Durning et al. also argue that medical students need to have the knowledge to hand and that the students who do well in these exams do better by their patients. Misunderstandings about the nature of open-book exams and (over)confidence in the ability to find answers in the sources available lead to less preparation for open-book exams and can lead some students to spend more time searching than producing (Johanns et al., 2017). In addition, closed-book, in-person exams are believed to reduce cheating in comparison to open-book exams or other assessment types (Downes, 2017; D’Souza and Siegfeldt, 2017). Although exams are seen to favour high-achieving students (Simonite, 2003), it is interesting to note that high achievers are more likely to cheat in exams (Ottaway et al., 2017).

Online exams in particular are found to increase the likelihood of ‘cheating’ and lead to confusion about what is permitted and what constitutes collusion (Downes, 2017). However, whether cheating is less likely in closed-book exams is contested (Williams, 2006). Williams and Wong (2009) argue of open-book exams, where the pressure of and dependency on memorisation are reduced:

“The opportunity for academically dishonest practice is less because of the way these examinations are structured, but so is the temptation to resort to this kind of behaviour in the first place” (p.230).

Whilst online exams are perceived to be more reliable and efficient compared to paper-based exams (Shraim, 2019; a sample of 342 students), both staff and students perceive opportunities for cheating to be easier in online modes (Chirumamilla et al., 2020).

There are three dominant themes in the literature on issues with traditional examinations: pedagogy, wellbeing and inclusivity. Closed exams tend to focus on recall and memorisation at the expense of higher-order/critical thinking (Bengtsson, 2019). Significant proportions of students use memorisation techniques and consequently can perceive exams as unfair when exam questions do not mirror problems or content they have practised (Clemmer et al., 2018). Open-book exams de-emphasise memorisation imperatives (Johanns et al., 2017). Open-book/open-web assessment, when well designed (e.g. problem-based), is seen as more authentic, more applicable to real-world scenarios and more learner-directed, and bridges the learning with its social context (Williams and Wong, 2009).

Exams put ‘unnatural pressure’ (Bengtsson, 2019, p.1) on students that affects performance. The common perception that stress is ‘good for students’ is undermined by studies showing impeded cognition and outcomes in stressed students (Rich, 2011). Students tend to prefer coursework, or coursework plus exams, rather than exams alone (Richardson, 2015; Turner and Briggs, 2018). A small study of student perceptions of alternatives offered due to Covid-19 found that when traditional examinations were replaced with open-book, take-home examinations, the stresses reported were replaced by technical anxieties and a sense that the papers were much harder than traditional invigilated exams would have been (Tam, 2021). A study in New Zealand of ‘take-home tests’, however, found students performed better and saw learning and anxiety-reduction benefits (Hall, 2001).

A comparative study of undergraduate psychology students found greater student satisfaction and pass rates for students undertaking coursework, slightly lower satisfaction and pass rates for seen exams, and the lowest satisfaction and pass rate for unseen exams, which students saw as unfair, stressful and invalid due to the need to memorise (Turner and Briggs, 2018).

Although Richardson’s (2015) review found that studies offer contradictory findings in terms of ethnicity and performance in exams and coursework, all ethnicities tend to do better in terms of grade profile with coursework. However, markers are idiosyncratic and privilege ‘good’ language and expression (Brown, 2010), and this contributes to higher degree outcomes for first-language English speakers over speakers of English as a second language (Smith, 2011). Coursework increases consistency of marks across types of assessment, improves mean performance in terms of final degree outcomes and counterbalances the disproportionate disadvantage of exams faced by students whose mean scores are low (Simonite, 2003).

It goes without saying that there is no ‘one size fits all’ solution, but we do need to think carefully, in light of research, about the consequences of the decisions we make now for how we manage assessment in the future. It would be foolish to make knee-jerk responses though. Because the wheels of change move so slowly in universities, shifts back to exams may appear to offer a path of least resistance. Instead, our first consideration must be modifications and innovations that address the issues but are also positive in their own right. We need to consider the possibilities of more programmatic assessment, for example, or perhaps learn from medical education ‘OSCE’ assessments, where knowledge and communication are assessed in simulated settings, or even look further to other higher education cultures where oral assessments are already the default. To achieve this level of change we need to recognise that AI is a catalyst for changes that many have been advocating (from a research-based position) for a long time, but which have often achieved only limited success when the resource for change has not accompanied that advocacy.

References 

Bengtsson, L. (2019). Take-home exams in higher education: a systematic review. Education Sciences, 9(4), 267. 

Brown, G. (2010). The validity of examination essays in higher education: Issues and responses. Higher Education Quarterly, 64, 276-291. https://doi.org/10.1111/j.1468-2273.2010.00460.x

Chirumamilla, A., Sindre, G., & Nguyen-Duc, A. (2020). Cheating in e-exams and paper exams: the perceptions of engineering students and teachers in Norway. Assessment & Evaluation in Higher Education, 45(7), 940-957. 

Clemmer, R., Gordon, K., & Vale, J. (2018). Will that be on the exam? Student perceptions of memorization and success in engineering. Proceedings of the Canadian Engineering Education Association (CEEA).

Downes, M. (2017). University scandal, reputation and governance. International Journal for Educational Integrity, 13(1), 1-20. 

D’Souza, K. A., & Siegfeldt, D. V. (2017). A conceptual framework for detecting cheating in online and take‐home exams. Decision Sciences Journal of Innovative Education, 15(4), 370-391. 

Durning, S. J., Dong, T., Ratcliffe, T., Schuwirth, L., Artino, A. R., Boulet, J. R., & Eva, K. (2016). Comparing open-book and closed-book examinations: a systematic review. Academic Medicine, 91(4), 583-599. 

Hall, L. (2001). Take-Home Tests: Educational Fast Food for the New Millennium? Journal of the Australian and New Zealand Academy of Management, 7(2), 50-57. doi:10.5172/jmo.2001.7.2.50 

Johanns, B., Dinkens, A., & Moore, J. (2017). A systematic review comparing open-book and closed-book examinations: Evaluating effects on development of critical thinking skills. Nurse Education in Practice, 27, 89-94. https://doi.org/10.1016/j.nepr.2017.08.018

Lawrence, J., & Day, K. (2021). How do we navigate the brave new world of online exams? Times Higher Education. Available: https://www.timeshighereducation.com/opinion/how-do-we-navigate-brave-new-world-online-exams [accessed 17/6/21]

Ottaway, K., Murrant, C., & Ritchie, K. (2017). Cheating after the test: Who does it and how often? Advances in Physiology Education, 41(3), 368-374.

Rich, J. D. (2011). An experimental study of differences in study habits and long-term retention rates between take-home and in-class examinations. International Journal of University Teaching and Faculty Development, 2(2), 121. 

Richardson, J. T. (2015). Coursework versus examinations in end-of-module assessment: a literature review. Assessment & Evaluation in Higher Education, 40(3), 439-455. 

Shraim, K. (2019). Online examination practices in higher education institutions: learners’ perspectives. Turkish Online Journal of Distance Education, 20(4), 185-196. 

Simonite, V. (2003). The impact of coursework on degree classifications and the performance of individual students. Assessment & Evaluation in Higher Education, 28(5), 459-470. 

Smith, C. (2011). Examinations and the ESL student–more evidence of particular disadvantages. Assessment & Evaluation in Higher Education, 36(1), 13-25. 

Tam, A. C. F. (2021). Students’ perceptions of and learning practices in online timed take-home examinations during Covid-19. Assessment & Evaluation in Higher Education, 1-16. 

Turner, J., & Briggs, G. (2018). To see or not to see? Comparing the effectiveness of examinations and end of module assessments in online distance learning. Assessment & Evaluation in Higher Education, 43(7), 1048-1060. 

Williams, J. B., & Wong, A. (2009). The efficacy of final examinations: A comparative study of closed‐book, invigilated exams and open‐book, open‐web exams. British Journal of Educational Technology, 40(2), 227-236. 

Williams, J. B. (2006). The place of the closed book, invigilated final examination in a knowledge economy. Educational Media International, 43(2), 107-119.

AI text generators (not ChatGPT) on essays, citations and plagiarism

I like to think of myself as generally optimistic. Like everyone, I make jokes about the machines taking over but, by and large, I feel excited by the potentials of AI (whilst keeping one hand behind my back tightly gripping a broken bottle just in case). In higher education the most pressing fears relate to threats to academic integrity and, as much as some might like, this is an issue that can’t be addressed with a broken bottle. Instead, it’s forcing/encouraging/catalysing (choose as per your perspective) us to reflect on how we assess students (see AI+AI=AI). ChatGPT caught the world’s attention and millions of educators and students went slack-jawed and wide-eyed at first exposure (most, of course, soon started to see how frequently it ‘may occasionally generate incorrect information’, such as in this experiment I did). Whilst ChatGPT allowed us to draw our own (often erroneous) conclusions about its capabilities, other tools built on the same underlying technology as ChatGPT are, in different ways, homing in on the ‘help with essays’ market. On the one hand, this may be the death of the essay mill (who, after all, once they have decided to go down an academically dishonest path, would pay an essay mill to do what anyone with an internet connection can do for themselves?). On the other, it really blurs the picture of what is honest or dishonest, or what we at UCL frame as ‘unfair advantage’.

Two tools have caught my eye recently and in this post I want to reflect on the different impressions they give in terms of utility and ethical use for students wanting to write an essay.

Writesonic /chatsonic

Chatsonic is one aspect of the ‘writesonic’ suite of tools – reviewed in considerable detail here with pricing and potentials. It caught my eye because of the bold claim to have addressed the ‘limitations of chatGPT’ and to be ‘the best alternative’. I thought I’d ask it why it was better, and I feel the responses largely speak for themselves:

Prompt: why is chatsonic better than chatgpt? 

ChatSonic is a powerful AI writing assistant that offers superior accuracy and speed compared to ChatGPT. Unlike ChatGPT, ChatSonic produces 100% original content that is thoroughly checked for plagiarism. In addition, ChatSonic offers detailed analytics and feedback to help you track your progress in writing and make improvements. ChatSonic also provides an extensive library of templates to save you time and energy when you’re writing, allowing you to focus on the quality of your content. Finally, ChatSonic is backed by a team of expert writers who can help you refine and polish your work to perfection.

Prompt: Explain the process you use for plagiarism checking

ChatSonic uses a combination of automated checks and manual reviews to ensure that all content produced is free of plagiarism. Our automated system checks the content against the web and other sources to detect any potential plagiarism. If anything suspicious is detected, it is flagged and sent to our team of expert writers for a manual review. We also use specialized tools such as TurnItIn and Copyscape to detect any possible plagiarism. All content is thoroughly checked to ensure it is completely original before it is delivered to our customers.

Prompt: If I use chatsonic to write an essay will I get into trouble at my university?

No, you should not get into trouble when using ChatSonic to write an essay. Our content is completely original and plagiarism-free so it should not be flagged as such by your university. However, it is always a good idea to review the essay yourself before submission to ensure it meets all the requirements of your assignment. Additionally, make sure the content you create with ChatSonic is unique and tailored to your own voice and style, otherwise it may be noticeable that you have used a writing assistant.

To say this makes me uncomfortable is a real understatement. It only tangentially addresses the wider integrity issues and in its first response is pushing me to conflate plagiarism (and its avoidance) with those wider integrity issues. Two posts in its resources section are worth noting. This one on plagiarism reinforces the message the chatbot produced. This post on replacing writers asserts that AI is an assistant but I couldn’t find much more detail on the ethical dimensions.

Maybe, though, I was ‘unfair’ (is that the right word?) with my prompts, so it’s worth looking at some of the other functionality of the wider Writesonic suite, which offers a dazzling array of tools for a range of purposes, from website copy to social media content:

Screenshot from the writesonic user interface showing links to a wide range of AI text generation tools

I was keen to look at the ‘citations generator’, as citations are a recognised weakness of ChatGPT. You can use a URL prompt or a text-based prompt; the text-based prompt I used was itself generated in Chatsonic. It takes the text in the linked article, or whatever you paste in, and identifies ‘facts’ with suggested citations. The web articles were mostly relevant, though the first journal article it suggested was a little off the mark, and I’d need to be lazy, in a massive hurry or ignorant of better ways of sourcing appropriate resources to rely on this. At this stage!

Screenshot of the Writesonic citation generator showing a URL pasted on the left of the screen and, on the right, suggestions for citations based on facts identified in the linked post

Jenni.ai

The second tool that I noticed (via the prolific AI researcher Mushtaq Bilal) was Jenni. The interface is well worth a look, as I feel it foreshadows the kind of generative text AI integration we are likely to see in tools like Microsoft Word.

The first thing I noticed, however, was the blog, where the most prominent word across posts is ‘essays’. Each post is designed to address an approach to a different type of essay, such as the compare-and-contrast essay. It offers clear suggestions for different approaches and a worked example, and then, right at the end, says:

“If you want your compare-and-contrast essays done 10x faster, we recommend using Jenni.ai along with the tips and guidelines we provided you in this post. Jenni.ai is a fantastic AI software that aids your essay writing process so that you could produce your writing needs faster and better than ever.”

Another post deals head-on with the ethical and integrity issues of using AI to help write essays and makes a case for use of ‘AI as a tool, not a solution’, where the goal is a “symbiotic relationship between the critical thought of a writer and the processing speed of AI”.

The tool itself, unlike the huge range of offerings in Writesonic, presents a relatively uncluttered interface: you start by typing a title, and it offers a judgement and suggestions if appropriate.

Screenshot from Jenni.ai showing a typed heading and a pop-up underneath saying this is a strong heading

In addition, it offers in-line suggestions based on whatever has come before. The prompt engineering continues through what feels like a single document rather than a chat. If you don’t like a suggestion you can get another. Here I typed a prompt and these are the first three options it gave me. Note the positive slant it puts on my worried prompt in every case!

My prompt sentence….The worry is that these tools will be used by students to cheat  

but the reality is that most AI tools are designed to help students learn and improve their writing skills. [first suggested Jenni response]

The worry is that these tools will be used by students to cheat 

on their assignments by generating essays automatically, but they can also be used to provide feedback on areas where students need…[the second option]

The worry is that these tools will be used by students to cheat 

but their true purpose is to assist in the learning process by providing immediate feedback and identifying areas where improvement is…[third option]

The other noticeable aspect is the option to ‘cite’. Here it offers a choice of MLA or APA 7th, and the sources are, unlike ChatGPT’s famous hallucinations, genuine articles (at least in my limited testing). You can select ‘websites’ or ‘journals’, though I found the websites tended to be much more directly relevant than the journals.

I really have only just started to play with these, and new things are popping up all over the place every day. Most educators will not have the time to do the same. Students may see and use these tools as an extension of those they already use for translation or improving their writing. The zone between acceptable and unacceptable use is becoming more ill-defined by the day.

What can I conclude from this? Well, firstly, whatever the motivation on the continuum ranging from ‘give us all your money’ to ‘I believe the children are our future’, the underlying technology is being adapted rapidly to address perceived limitations in the tool that brought generative text AI to our attention. We may not like the motivations or the ethics but we’ll not get far by ‘making like an ostrich’. Secondly, it’s not good enough for us (educators) to dismiss things because the tool that many are now familiar with, ChatGPT, makes up citations. That’s being addressed as I type. These tools are proliferating so quickly that they will soon be too numerous to keep a decent handle on, so we need to understand broadly how discrete tools might be used (ethically and unethically) and how many will integrate into tools we already use daily. In so doing, we need to work out what that means for our students, their studies, their assessment and the careers our education is ostensibly preparing them for. Thirdly, we need to open up the discussions and debates around academic integrity and move on from ‘plagiarism’ as public enemy No. 1. Finally, where changes are necessitated, there are resource implications. We need to accept that to prepare ourselves, our colleagues and our students we will need to adapt much faster than we are used to, and properly resource however we attempt to address the challenges and opportunities ahead.

Note: This post is not an endorsement or recommendation of any products mentioned and should be read with that clearly in mind! 

AI and implications for HE assessment – my tuppence worth*


When I can, Friday afternoons are a good time to fiddle with things. EVERYONE, it seems, has been fiddling with OpenAI’s ChatGPT this week, and much larking about and amusement has been had with the churning out of mostly anodyne, often bland responses to questions. Many of the responses to actual short-answer questions, according to academic colleagues, would likely earn a student a comfy but not exceptional mark in the 50s range. I personally feel that the threat to school-level homework is much more immediate (and trust me when I say I don’t think a threat to school homework is necessarily a bad thing at all!), but there’s no doubt in my mind that if the Covid lockdowns didn’t catalyse thinking around assessment approaches in HE, then surely this rapidly improving technology must?

Anyway, on the principle that to respect intelligence is to hear its voice, I laboured hard to put my assumptions aside (actually I didn’t, of course, as I am, like everyone, a flawed and opinionated human) and had a conversation with the OpenAI language model commonly referred to as ChatGPT-3 about one of the issues that seems to be the most vexatious. Make of it what you will! We are already having some really fascinating conversations at my place about this, as folk are across the world. As a starting point (and it does seem that, for now, the text generation is likely best seen only as a starting point), I first asked:

  1. Martin: What is your name?

 Catherine: I’m sorry, but I am not a person with a name. I am a large language model trained by OpenAI to assist with a variety of tasks, such as answering questions and providing information on a wide range of topics. I don’t have my own personal identity or consciousness. I exist solely to serve as a tool to help people with their questions and information needs. Is there anything else I can help you with?

So, that told me. Nevertheless, I am a human and wield extraordinary power, though, oddly, I felt uncomfortable assigning a name. However, I had also fiddled a bit with a text-to-speech avatar (see below) whose name in the software is Catherine, so I stuck with that. Why did I want a name? Because 1. it’s what they do in sci-fi just before they realise it’s too late to pull the plug; 2. I’m interested in my own perceptions of my interlocutors, where subtle changes might influence my biases and perceptions about what they say and the value I place in it; and 3. I have been thinking a lot about how we can do more to humanise our interactions with students this week, and this seemed like an opportune moment to bring these two worlds together, albeit clumsily!

 2. Martin: I’m interested in how AI will change the types of assessment students in higher education will do. What are your thoughts?

Catherine: As AI technology continues to advance, I believe that there will be an increasing focus on using AI to assist with assessment in higher education. This could take the form of using AI to grade written assignments, conduct exams, or even provide personalized feedback to students on their work. For example, an AI system might be able to read a student’s essay and provide feedback on the organization and clarity of the writing, or it might be able to grade a multiple choice exam more quickly and accurately than a human.

[Incidentally, I experimented with using an avatar to speak the words of OpenAI ChatGPT-3, and this is what I came up with from https://ttsdemo.com/ (‘oddcast’). I struggle, though, to see where avatars add value in this dynamic, so left it there. Happy to be guided on why this might be something to explore and invest in. Transcript as per the response to question 2 above.]

3. Martin: Ok, thank you. [My son thanks his Alexa on the assumption that when the machines do take over they’ll remember this and spare him the most arduous labours or punishments.] What about now though? Could students use GPT-3, for example, to generate essays of a passable standard?

Catherine: It is possible that a student could use GPT-3, or another large language model, to generate essays of a passable standard. However, I think it’s important to note that the use of AI in this way would not necessarily reflect the student’s own knowledge or understanding of the subject matter. Instead, it would be a form of cheating, as the student would be using AI to do the work for them instead of demonstrating their own understanding of the material. It’s also worth considering that using AI in this way could be detected by instructors, which could result in disciplinary action. Ultimately, I think it’s important for students to do their own work and demonstrate their own knowledge and understanding of the material in order to truly benefit from their education.

Note the use of the repeated phrase ‘I think it’s important…’. This connotes beliefs/opinions, but when pushed ‘Catherine’ is clear they have none (see the final question below).

4. Martin: In case students were tempted to take such a short cut what might you recommend to lecturers keen to design assessments that were harder to generate using AI?

 Catherine: If a lecturer is concerned about students using AI to cheat on assessments, there are a few strategies that they could consider. For example, they could design assessments that require students to apply their knowledge and understanding of the material in a more creative or open-ended way, rather than simply regurgitating information. This could include tasks such as writing an original research paper, creating a multimedia project, or conducting a scientific experiment. Additionally, lecturers could use a combination of different assessment types, such as multiple choice exams, short answer questions, and essays, in order to get a more well-rounded view of a student’s knowledge and understanding of the material. Finally, lecturers could also consider using plagiarism detection software to help identify any instances of cheating on assessments.

5. Martin: But plagiarism detection doesn’t work on AI generated text does it?

Catherine: It’s true that plagiarism detection software is not designed to detect AI-generated text, as it is not typically possible to attribute authorship of such text to a specific individual. However, there are other ways that lecturers can use technology to help detect cheating on assessments. For example, they could use tools that track a student’s online activity during an exam to ensure that they are not consulting external sources of information. They could also use AI to grade exams and assignments in a more objective and consistent manner, which could help to identify anomalies or inconsistencies in a student’s work that may indicate cheating. Ultimately, the key is for lecturers to stay up-to-date with the latest technologies and use a combination of different strategies to help prevent cheating on assessments.

6. Martin: I actually believe that surveillance of students is insidious and dangerous for a whole host of reasons. Do you not share that view?

Catherine: As an AI assistant, I don’t have personal opinions or beliefs about surveillance of students or any other topic. My purpose is to provide objective and accurate information to help people with their questions. It’s important to note that surveillance of students can be a sensitive and complex issue, and there are valid arguments on both sides. Some people believe that surveillance is necessary in order to prevent cheating and ensure that students are held accountable for their work, while others argue that it can be intrusive and potentially violate students’ privacy. Ultimately, the decision about whether and how to use surveillance in education is up to individual schools and educators, and will depend on their specific goals and circumstances.

So, if this were a conversation with a human colleague, I’d come away with the impression that they were trying hard to offer balance, but I’d read in it, perhaps unsurprisingly, a bias towards tech solutions to some of the issues raised. In that respect I find it interesting that the answer to question 5 very much drifts from any sense of thread and focuses on online exam surveillance rather than, say, tools that might be used to determine a student’s writing signature. Anyway, it’s a start. And very much the start of something pretty big.

*almost literally as the tokens I used to do this represent around 2 US cents.