AI Deep Research – Trouble for Higher Education?

Generative artificial intelligence (AI) has rapidly emerged as a transformative technology in higher education. Since the release in early 2025 of advanced deep research capabilities for models such as GPT-4o and o3, academia has been both astonished and concerned by AI’s new capabilities.

What is Deep Research?

Deep research was rolled out to various ChatGPT tiers, including a lightweight version for free users, and functions as an AI-powered research analyst. It is designed to autonomously browse the web, analyse information from multiple sources (including text, images and PDFs), and synthesise findings into comprehensive reports.  

These models can produce human-like essays, solve complex problems and even outperform students on certain tasks – allegedly! 

The capabilities afforded by deep research have triggered an urgent debate in universities, with fears of rampant cheating mixed with cheers of opportunity to reinvent teaching and assessment.  

OpenAI’s deep research models are directly related to the concept of agentic AI. Generative AI coupled with emerging agentic AI represents autonomous systems that can execute multi-step tasks. In other words, much of the control is taken out of the hands of the user.

They are capable of ‘reasoning’ and ‘decision-making’ by synthesising information, identifying key points and searching key sources across the internet, much like a human researcher would. They can even use Python to analyse data and create graphs, ultimately generating a comprehensive report based on the user’s initial query.
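
To make the ‘agentic’ idea concrete, the sketch below shows the kind of plan–act–synthesise loop such systems run. It is a toy illustration only: the class, the method names and their bodies are hypothetical stand-ins for this post, not OpenAI’s actual pipeline.

```python
# Illustrative sketch of a "deep research" style agent loop.
# Every step here is a hypothetical stand-in for what a real system
# would delegate to an LLM, a web browser or a Python sandbox.

from dataclasses import dataclass, field


@dataclass
class ResearchAgent:
    """Toy agent that plans sub-questions, gathers notes and synthesises."""

    query: str
    notes: list[str] = field(default_factory=list)

    def plan(self) -> list[str]:
        # A real agent would ask an LLM to decompose the query.
        return [f"{self.query}: background", f"{self.query}: recent evidence"]

    def search_and_read(self, sub_question: str) -> str:
        # Stand-in for autonomous browsing and text/PDF/image analysis.
        return f"summary of sources found for {sub_question!r}"

    def synthesise(self) -> str:
        # Stand-in for report writing (a real system may also run Python
        # to analyse data and draw graphs, as described above).
        return "REPORT\n" + "\n".join(f"- {note}" for note in self.notes)

    def run(self) -> str:
        for sub_question in self.plan():                            # plan...
            self.notes.append(self.search_and_read(sub_question))   # ...act
        return self.synthesise()                                    # ...synthesise


if __name__ == "__main__":
    print(ResearchAgent("impact of generative AI on assessment").run())
```

The point of the loop structure is the one made above: the user supplies only the initial query, and every intermediate decision is taken by the system.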

No wonder higher education is concerned! 

Academic Integrity and Assessment Challenges 

One of the foremost concerns is how generative AI may undermine academic integrity. Students now have unprecedented access to AI systems that can generate assignments, code or exam responses on demand. This raises obvious risks of plagiarism and cheating. 

Traditional plagiarism detectors struggle to recognise AI-generated text, and experts acknowledge that AI-detection tools remain too unreliable for widespread use.

Compounding the challenge, many students already use generative AI regularly, whether or not it is officially allowed (HEPI AI Survey). The standard assessment paradigm of essays, multiple-choice questions (MCQs) and short-answer questions (SAQs), among other formats, is increasingly vulnerable to AI assistance.

The challenge of upholding academic integrity has intensified with the emergence of deep research and agentic AI.

There is also a striking additional risk: if students become over-reliant on generative AI, their learning, critical thinking and problem-solving skills may degrade.

These integrity challenges are forcing educators to rethink assessment strategies. Simply banning AI tools or trying to “catch” cheaters is impractical in the long run, given the technology’s ubiquity. Instead, many advocate for redesigning assessments and academic policies to uphold honesty while adapting to the new AI reality. 

Redesigning Assessments

  • Redesigning assignments for authenticity: In-person, higher-order and creative assessments are harder for AI to complete independently. For instance, educators can employ more oral exams, in-class writing tasks and project-based work that require personal input. Such evaluation methods prioritise original thought and problem-solving – things that generative AI cannot reliably mimic. 
  • Integrating AI with transparency and guidance: Instead of futile bans, allow students to use AI under defined conditions. For example, with required attribution of AI assistance and a reflective commentary on how AI was used. Clear policies can delineate acceptable versus dishonest use of AI. By treating AI as a tool (to be used ethically and responsibly), educators can turn it into a learning aid rather than a cheating shortcut (Integrating AI into Higher Education Assessments). 

Ultimately, a proactive redesign of teaching and assessment can maintain academic integrity in the age of AI. By shifting assessments toward authentic, higher-level tasks and fostering a culture of honesty and transparency about AI use, educators can preserve rigour and fairness, while introducing more authenticity and critical thinking. 

AI as a Tool for Learning 

While generative AI poses challenges, it also offers significant opportunities for teaching and learning.  

  • Personalised tutoring – once a luxury for the few, personal tutoring could become accessible to all with AI assistance.
  • Democratise academic support – by giving every student on-demand, one-on-one assistance, an AI tutor can walk a student through a problem step by step or explain a difficult concept, adjusting to the student’s learning preferences.  
  • Individualised feedback – AI can deliver guidance and feedback anytime and can be especially valuable for those who struggle in large classes or cannot access human tutors.  
  • Idea generator – AI systems are adept at brainstorming ideas, suggesting alternate approaches and even generating initial drafts or prototypes that students can then refine. 
  • The language barrier – AI has the potential to enhance accessibility and inclusion by removing language barriers for non-native English speakers, and by supporting students with cognitive or sensory challenges. 
  • The caveat – generative AI is not infallible. It can generate incorrect answers (known as hallucinations) or simplistic explanations, and it is never a replacement for human instruction. 

The Rise of Agentic AI 

The next wave of AI in education is arriving in the form of agentic AI – systems that can autonomously carry out multi-step tasks and act as virtual “agents”. Agentic AI can execute complex sequences of actions autonomously, hinting at the growing capacity of AI not only to inform but to act on behalf of users.  

The development and deployment of autonomous agentic AI raises a number of significant ethical concerns. 

  • Accountability and responsibility – when an autonomous agent makes an error or causes harm, determining who is responsible becomes complex. The lack of clear responsibility can hinder redress for those affected by AI errors. Robust ethical frameworks therefore need to be put in place. 
  • Transparency and explainability – advanced AI models can be “black boxes” whose decision-making processes are opaque and difficult to understand. This lack of transparency is challenging: for agentic AI to be trustworthy, it needs to be explainable (IBM – AI and Explainability). 
  • Bias and discrimination – because autonomous agents learn from the data they are trained on, their output can reflect existing societal biases. An AI agent could perpetuate and even amplify these biases, leading to discriminatory outcomes. 
  • Safety and control – maintaining meaningful human oversight, with the ability to intervene or shut down an agent when necessary, is a critical ethical consideration (see the sketch after this list). 
  • Privacy and data security – autonomous agents often require access to vast amounts of data, including sensitive personal and proprietary information, to function effectively. This raises significant privacy concerns regarding the collection, storage and use of this data.  
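
On the safety-and-control point above, one common mitigation is a human-in-the-loop approval gate that pauses the agent before any high-risk action. The sketch below is a minimal, hypothetical illustration in plain Python – the action names, the risk list and the approval rule are invented for the example and are not drawn from any particular agent framework.

```python
# Minimal sketch of a human-in-the-loop approval gate for an AI agent.
# The action names and risk labels are hypothetical examples.

HIGH_RISK = {"submit_coursework", "email_on_behalf_of_student"}


def requires_approval(action: str) -> bool:
    """Flag actions that must not run without explicit human sign-off."""
    return action in HIGH_RISK


def execute_with_oversight(actions: list[str]) -> None:
    """Run the agent's planned actions, pausing for approval on risky ones."""
    for action in actions:
        if requires_approval(action):
            answer = input(f"Agent wants to '{action}'. Allow? [y/N] ")
            if answer.strip().lower() != "y":
                print(f"Blocked: {action}")
                continue  # the human retains the ability to intervene
        print(f"Executing: {action}")


if __name__ == "__main__":
    execute_with_oversight(["search_web", "summarise_pdf", "submit_coursework"])
```

The design choice that matters here is where the gate sits: approval is checked before execution, not logged after the fact, so a human can always stop the agent.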

As AI agents become more capable, educators will need to rethink curricular goals and teaching practices. If an AI agent can quickly gather sources, generate outlines or even complete projects on command, the value of certain traditional assignments is significantly compromised.  

Student (and staff) training on foundational AI literacy (understanding how AI works, its strengths and limitations) becomes imperative. To prepare students for an AI-driven world, universities have a responsibility to integrate AI training and ethics into curricula. 

The rise of deep research and agentic AI could profoundly influence both what we teach and how we teach. Curriculum design will need to account for the advances in AI capability. 

Universities can still honour the core values of academia – honesty, rigour, creativity, equity – while embracing a new era of human–AI collaboration. 

 

References:

Generative AI opens up vast opportunities for education – World Economic Forum.

HEPI Student Generative AI Survey 2025 (https://www.hepi.ac.uk/2025/02/26/student-generative-ai-survey-2025/).

Williams, A. (2025). Integrating Artificial Intelligence Into Higher Education Assessment. Intersection: A Journal at the Intersection of Assessment and Learning, 6(1), 128–154. https://doi.org/10.61669/001c.131915

Explainability and AI – IBM

