Since the release of large language models (LLMs), higher education has alternated between apprehension and optimism. In my recent paper (https://doi.org/10.46328/ijres.1302), I argue that both positions are rational. The technology offers numerous opportunities for teaching and learning, yet it introduces new forms of risk and responsibility.
What can educators practically do, and how can we do it well?
In what follows I will distil the paper’s main themes for colleagues who want to adopt AI without abandoning academic standards. The message is simple: start small, design holistically and maintain high standards of ethics.
What are LLMs good for?
The practical roles of LLMs in higher education fall into two overlapping categories: student-facing learning support and educator-facing teaching support. Briefly, LLMs could:
- personalise tutoring and explanations, especially in concept-dense areas
- generate and enhance learning resources (from prompts and problem sets to diagrams and simulations)
- assist with summarising lectures and research, and with scaffolding scholarly reading
- support language access and academic communication
- offer quicker, more continuous feedback loops
This is not a promise of frictionless teaching, and we are only beginning to learn what works well and what doesn’t. LLMs still need human oversight for scientific accuracy and contextual judgement, whether you are a student or an educator, and over-reliance on AI tools risks replacing critical engagement and undermining the development of essential cognitive skills.

Personalised learning that respects learning
The most persuasive AI applications are probably in personalised learning. This could include adaptive materials, interactive tutoring, immediate feedback and structured study plans. These can be integrated into learning management systems, with careful attention to privacy and accessibility. Done well, LLMs may help support diverse student cohorts at scale, with a generous sprinkling of individual and personalised assistance.
LLMs can tailor difficulty to individual student needs, provide step-by-step guidance, generate formative quizzes with explanatory feedback, and keep students oriented towards their goals and progress.
However, implementation should emphasise integration with existing platforms and with existing programme learning outcomes. This will inevitably require both student and educator training, meaning that AI literacy will become a critical component of curricula and professional development strategies.
Creating (and improving) teaching resources
For over-stretched academics with limited time and resources, LLMs may offer a pragmatic co-designer. They can draft lesson outlines and case studies, propose activities aligned to intended learning outcomes, produce exam questions, suggest rubrics, and even outline virtual labs or simulations. These capabilities are likely to be most effective when educators keep themselves and their students in the loop as integral parts of the design process.
The importance of professional development and AI literacy raises its head once again! As tools evolve, so must our literacy, whether this is technical, ethical or pedagogical literacy. Without that, we risk reliance on AI automation, rather than using these new tools to rethink what and how we teach and assess.

Inclusion as a design principle, not an afterthought
LLMs can support inclusion in several ways. For example, multi-modal content for different preferences, translation and simplification for multilingual cohorts, text-to-speech and speech-to-text to widen access, and adaptive pathways that respect different starting points and levels of academic development.
However, equity is not a given. The digital divide is still a real challenge, whether this concerns connectivity, access to hardware or digital skills. Any “AI for all” claims must be backed by institutional provision and explicit teaching of AI literacy, otherwise the danger is exclusion.
Well-designed prompts can also improve how we interact with generative AI. Rather than using AI as a short-cut to the answer, asking LLMs to provide multiple responses that reflect diverse examples and perspectives will ultimately provide a richer learning experience.
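To make that concrete, the sketch below shows what a perspective-seeking prompt might look like in practice. It is a minimal illustration only, assuming access to an LLM through the OpenAI Python client; the model name, prompt wording and topic are placeholders rather than recommendations.

```python
# A minimal sketch of a perspective-seeking prompt, using the OpenAI Python client.
# The model name, prompt wording and topic are illustrative placeholders.
from openai import OpenAI

client = OpenAI()  # assumes an API key is already configured in the environment

prompt = (
    "Explain the ethics of human gene editing. Give three short responses, "
    "each written from a different disciplinary perspective (e.g., clinical, "
    "legal, patient advocacy), and note one assumption or limitation of each."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```

The point is the prompt’s structure: instead of asking for “the answer”, it asks for contrasting perspectives and their limitations, which gives students something to evaluate rather than simply accept.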
We must also consider the negative implications of models trained on unbalanced data and the reproduction of representational biases. Inclusion therefore requires both curation and critique, which will require a deep and robust understanding of AI ethics.
Curriculum design and quality assurance
Beyond transforming the classroom, LLMs can help map and strengthen curricula. They can rapidly align module outcomes, teaching activities and assessment tasks to subject benchmark statements, highlight gaps, and suggest strategies for including generic competencies (critical thinking, communication, digital literacy, collaboration). However, human judgement remains necessary to decide what “good” looks like, and this should not be compromised in favour of perceived efficiencies.
In the UK, this is especially useful when aligning programmes with QAA benchmarks while retaining discipline- or subject-specific identity. LLMs can draft curriculum mappings and propose assessment blueprints and authentic tasks, but quality assurance remains a staff responsibility, not an AI system responsibility.
Ethics: integrity, privacy, bias, and the “black box”
The ethical concerns are not cosmetic; they govern whether AI adoption is defensible, desirable, effective and accountable. Several issues stand out:
- Academic integrity. Detection tools are unreliable, so assessment redesign is the more sustainable path. Vivas, practicals, collaborative tasks, iterative drafts and critical reflection on AI use can preserve standards while also teaching students vital AI literacy skills.
- Data privacy and consent. Many public online platforms retain user inputs and use them to train models. It may therefore be inappropriate to upload student content, especially if it is identifiable. The preference should be for institutionally provisioned tools or local LLMs, which minimise data security and privacy risks.
- Bias and representation. Training data and algorithms encode patterns that may reinforce societal biases or produce harmful content. Educators and students must therefore evaluate outputs for bias and exercise their own judgement on AI-generated content.
- Opacity. The “black box” problem further complicates accountability. No one really knows how these systems make decisions or generate content. Who is responsible when an AI system gets things wrong: the AI, the parent company or the user? This poses significant challenges when attributing accountability for AI-generated content.
None of this argues against adoption. It argues for responsible adoption, where ethical literacy sits alongside technical proficiency.

A pragmatic approach to adoption
Drawing together the strands, the following practices have proven most workable in my experience and might be useful for framing conversations with colleagues:
- Start with one low-risk use case. For example, use an LLM to generate formative question banks with explanations, then check them and pilot them in a single module. Work with students as partners to evaluate how well it works.
- Design prompts as pedagogy. For example, assign the LLM a role (e.g., Socratic tutor), integrate guardrails (no final answers before probing), and ask students to critique outputs against set criteria (see the sketch after this list).
- Make assessments AI-literate. State permitted uses clearly from the outset, ask students to provide evidence (notes, drafts, prompts), and allow students to experiment and explore possibilities with AI in a safe environment.
- Keep humans in the loop. Use LLMs to draft or present options, not decide on the outcome. Aim for educational augmentation, not replacement.
- Plan for equity. Provide AI literacy training (e.g., workshops on effective and ethical AI use), ensure equal access (devices, licences, internet), and design alternatives where needed. Don’t let AI widen attainment gaps.
- Invest in staff development. Prioritise professional development on AI capabilities, limits, privacy and bias, and create communities of practice that share what works.
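As promised above, here is a minimal sketch of the “prompts as pedagogy” idea: a Socratic-tutor role with simple guardrails. It again assumes the OpenAI Python client; the role description, guardrail wording and model name are illustrative assumptions, not a tested classroom configuration.

```python
# A minimal sketch of a "Socratic tutor" role with simple guardrails, using the
# OpenAI Python client. Role wording, guardrails and model name are assumptions.
from openai import OpenAI

client = OpenAI()  # assumes an API key is already configured in the environment

SOCRATIC_TUTOR = (
    "You are a Socratic tutor for first-year bioscience students. Never give "
    "the final answer directly. Ask one probing question at a time, respond to "
    "the student's reasoning, and only confirm an answer once the student has "
    "justified it. If asked to write assessed work, decline and redirect the "
    "student to the underlying concepts."
)

def tutor_reply(student_message: str) -> str:
    """Return the tutor's next probing question or comment."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system", "content": SOCRATIC_TUTOR},
            {"role": "user", "content": student_message},
        ],
    )
    return response.choices[0].message.content

# Example exchange (illustrative):
print(tutor_reply("Why does DNA replication need a primer?"))
```

Because the guardrails sit in the system prompt, they apply to every exchange; in a real deployment they would need piloting with students and pairing with institutional data-protection arrangements.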
Where does this leave us?
It is tempting to talk about LLMs as if they were either a pedagogical panacea or an existential threat. They are neither. They are powerful language interfaces trained on vast corpora of data, with their own capabilities and limitations. It is therefore important that we design new pedagogies around AI’s limitations, not despite them. They will require human judgement to ensure responsible and ethical use. AI should be an adjunct to educator-led student learning, not a replacement for it.
The real opportunity is to use LLMs as a catalyst for better pedagogy: more formative feedback, clearer alignment between outcomes and assessment, and more authentic tasks. The risk is to outsource thinking, assessment or ethics to a system that cannot bear those responsibilities.
Our task is not to decide whether AI has a place in higher education but to decide what kind of higher education we will build with it.
Reference:
Williams, A. (2025). Critical Evaluation of the Potential of Large Language Models in Bioscience Higher Education: A Conversation with ChatGPT. International Journal of Research in Education and Science (IJRES), 11(3), 667–701. https://doi.org/10.46328/ijres.1302