Developing AI literacy – learning by fiddling

Despite ongoing debates about whether so-called large language model / generative language (and other media) tools are ‘proper’ AI (I’m sticking with the shorthand), my own approach to trying to make sense of the ‘what’, ‘how’, ‘why’ and ‘to what end?’ is to use spare moments to read articles, listen to podcasts, watch videos, scroll through AI enthusiasts’ Twitter feeds and, above all, fiddle with various tools on my desktop or phone. When I find a tool or an approach that I think might be useful for colleagues with better things to do with their spare time, I will jot notes in my sandpit, write something like this blog post comparing different tools, record a video or podcast like those collected here or, if prodded hard enough, try to cohere my tumbling thoughts in writing. The two videos I recorded last week are an effort to help non-experts like me to think, with exemplification, about what different tools can and can’t do and how we might find benefit in amongst the uncertainty, ethical challenges, privacy questions and academic integrity anxieties.

The video summaries were generated using GPT-4 based on the video transcripts:

Can I use generative AI tools to summarise web content?

In this video, Martin Compton explores the limitations and potential inaccuracies of ChatGPT, Google Bard, and Microsoft Bing chat, particularly when it comes to summarizing external texts or web content. By testing these AI tools on an article he co-authored with Dr Rebecca Lindner, the speaker demonstrates that while ChatGPT and Google Bard may produce seemingly authoritative but false summaries, Microsoft Bing chat, which integrates GPT-4 with search functionality, can provide a more accurate summary. The speaker emphasizes the importance of understanding the limitations of these tools and communicating these limitations to students. Experimentation and keeping up to date with the latest AI tools can help educators better integrate them into their teaching and assessment practices, while also supporting students in developing AI literacy. (Transcript available via Media Central)
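For colleagues who like to tinker beyond the chat window, here is a minimal sketch of the point the video makes: a plain chat model given only a URL cannot actually read the page and may produce a confident but invented summary, whereas supplying the real page text grounds the summary in the source. The model name, the article URL and the use of the OpenAI Python client are my own illustrative assumptions, not details from the video.

```python
# Hypothetical sketch: summarise a web page by sending its *actual* text to the model,
# rather than just a URL (which a plain chat model cannot fetch and may 'summarise' from guesswork).
# Assumes: `pip install requests openai` and an OPENAI_API_KEY environment variable.
import requests
from openai import OpenAI

ARTICLE_URL = "https://example.com/our-co-authored-article"  # placeholder URL

# Fetch the real page content so the summary is grounded in the source text.
page_text = requests.get(ARTICLE_URL, timeout=30).text[:12000]  # crude truncation to fit the context window

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4",  # assumed model name; substitute whichever model you have access to
    messages=[
        {"role": "system", "content": "Summarise the supplied web page text in three sentences. Use only the supplied text."},
        {"role": "user", "content": page_text},
    ],
)
print(response.choices[0].message.content)
```

Even with the source text supplied, the output still needs the same sceptical reading the video recommends.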

 

Using a marking rubric and ChatGPT to generate extended boilerplate (and tailored) feedback

In this video, Martin Compton explores the potential of ChatGPT, a large language model, as a labour-saving tool in higher education, particularly for generating boilerplate feedback on student assessments. Using the paid GPT-4 Plus version, the speaker demonstrates how to use a marking rubric for take-home papers to create personalized feedback for students. By pasting the rubric into ChatGPT and providing specific instructions, the AI generates tailored feedback that educators can then refine and customize further. The speaker emphasizes the importance of using this technology with care and ensuring that feedback remains personalized and relevant to each student’s work. This approach is already being used by some educators and is expected to improve over time. (Transcript available via Media Central)
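For anyone curious how this workflow might be scripted rather than done by hand in the chat window, the sketch below reproduces the same idea: paste the rubric in once, then ask for boilerplate feedback at a chosen grade band. The rubric text, band name and model name are illustrative placeholders rather than the rubric used in the video, and any output would still need the checking and tailoring described above before it goes anywhere near a student.

```python
# Hypothetical sketch of the rubric-to-boilerplate-feedback workflow described in the video.
# The rubric, grade band and model name below are illustrative placeholders.
from openai import OpenAI

RUBRIC = """
Criterion 1 (Argument): ...
Criterion 2 (Use of evidence): ...
Criterion 3 (Structure and clarity): ...
"""  # paste your actual marking rubric here

def draft_feedback(client: OpenAI, band: str) -> str:
    """Ask the model for reusable feedback phrased for a given grade band."""
    response = client.chat.completions.create(
        model="gpt-4",  # assumed model name
        messages=[
            {"role": "system", "content": "You help a lecturer draft boilerplate feedback from a marking rubric."},
            {"role": "user", "content": (
                f"Rubric:\n{RUBRIC}\n\n"
                f"Write three sentences of feedback for a take-home paper in the '{band}' band, "
                "one sentence per criterion, addressed to the student."
            )},
        ],
    )
    return response.choices[0].message.content

client = OpenAI()
print(draft_feedback(client, "merit"))  # a starting point only: always check and personalise before releasing
```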

I should say that in the time since I made the first video (4 days ago) I have been shown a tool that connects ChatGPT to the web, and my initial fiddling there has re-dropped my jaw! More on that soon I hope.

 

Generative AI: Friend or Foe?

In this post I share two videos on generative AI including (of course) reference to ChatGPT. These are designed for a general audience at UCL and will hopefully be of relevance to academic and professional services colleagues as well as students. In these unscripted videos I, a human, talk in a non-technical way about some of the tools, their affordances and implications. The summaries below were generated in GPT-4 using the transcripts of the videos.
Video 1:
In this video, Martin Compton from Arena discusses the phenomenon of generative AI, using ChatGPT as a prime example. He addresses the question of whether generative AI is a friend or foe, and suggests that how we react to, utilise and learn from these technologies will determine the outcome. He provides an example of a generative image created with AI, raising ethical concerns such as copyright infringement and the carbon footprint of AI technologies. He also talks about different manifestations of ‘large language models’ and raises questions about the ways members of the academic community could use them.

Access details and transcript for video 1 here

————————————
Video 2:
In the second video about generative AI, Martin Compton from Arena builds on discussions with a colleague, Professor Susan Smith, and explores whether generative AI is a friend or enemy. He acknowledges the power and remarkable capabilities of AI tools like ChatGPT (a large language model text generator) and Midjourney, an AI image generator. However, he advises against panicking or feeling anxious about the impact of these technologies. Instead, Martin suggests that we should adapt, adjust, and learn from the ethical issues and implications these tools present. By finding ways to accommodate, embrace, and exploit the potential of generative AI, we can utilize these technologies for labor-saving purposes and ultimately enhance various aspects of our lives.
———————————
Podcast