The gap between student GenAI use and the support students are offered

I argued a couple of days ago that the sector is unprepared for our first academic year in which the use of generative AI is completely normalised amongst students. HEPI found 92% of undergraduates using LLMs this year, up from 66% the previous year, which broadly matches Advance HE’s finding of 62% using AI in their studies “in a way that is allowed by their university” (huge caveat). This largely accords with my own experience: last year LLMs became mainstream amongst students, and this year their use appears to have become a near-uniform phenomenon.

The problem arises from the gap between near-uniform use of LLMs in some form and the lack of support being offered. Only 36% of students in the HEPI survey said they had been offered support by their university: a gap of 56 percentage points. Only 26% of students say their university provides access to AI tools: a gap of 66 percentage points. This is particularly problematic because we have evidence that wealthier students tend to use LLMs more, and in more analytical and reflective ways. They are more likely to use LLMs in a way that supports rather than hinders learning.

How do we close that gap between student LLM use and the support students are offered? My concern is that centralised training will tend towards either banality or irrelevance, because the objective of GenAI training for students needs to be learning with LLMs rather than outsourcing learning to them. There are general principles which can be offered here, but the concrete questions which have to be answered for students will vary between disciplinary areas:

  • What are students in our discipline using AI for, with which tools, and at what stages of their work?
  • Which foundational skills and ways of thinking in our discipline are enhanced vs threatened by AI use?
  • When does AI use shift from “learning with” to “outsourcing learning” in our specific field?
  • What forms of assessment still make sense and what new approaches do we need in an AI-saturated environment?
  • What discipline-specific scaffolding helps students use AI as a thinking partner rather than a thinking replacement?

Answering these questions is a process that takes place in relation to ongoing changes in the technology and the culture emerging around it. Even if those changes are now slowing down, they are certainly not stopping. We need infrastructure for continuous adaptation, in a context where the sector is already in crisis for entirely unrelated reasons. Furthermore, that infrastructure has to willingly enrol academics in a way consistent with their workload and outlook. My sense is that we have to find ways of embedding this within existing conversations and processes. The only way to do this, I think, is to genuinely give academics voice within the process, finding ways to network existing interactions so that norms and standards emerge from practice rather than the institution expecting practice to adapt to another centrally imposed policy.

#higherEducation #technology #university #academic #students #generativeAI #malpractice #LLMs #HEPI

#Extortion

#Trump Wants #UCLA to Pay $1 Billion to Restore Its #Research #Funding

The Trump administration has ended about $500 million in funding for the #LosAngeles-based #university. Trump said he wanted nearly double that to restart the flow of funds.

#law #Constitution #AcademicFreedom
https://www.nytimes.com/2025/08/08/us/trump-ucla-research-funding-deal.html?smid=nytcore-ios-share&referringSource=articleShare

The #Trump proposal calls for the #university to make a $1 billion payment to the #US government & to contribute $172 million to a claims fund that would compensate victims [privileged white men] of #CivilRights violations.

If #UCLA accedes to the demand, it would be the largest payout, by far, of any university that has so far reached a deal with Trump. #Columbia agreed to pay $221M in connection with its settlement, & Brown pledged to spend $50M on state workforce programs.

#law
