Has the higher education sector embraced AI?

Myles Hanlon

Senior Account Manager

In recent years, advancements in artificial intelligence (AI) have taken the world by storm. When the generative AI tool ChatGPT launched in November 2022, it took just five days for the number of users to reach one million. Now in 2024, interest remains strong, with 1.8 billion visits to the website in March alone. Yet, despite this level of interest, the integration of AI in education settings, including universities, is, in many ways, in its infancy.

Sector debates about AI have largely focused on academic integrity, with claims that plagiarism and cheating are inevitable if students are allowed to use AI tools. This led several universities to ban the use of AI last year.

But now that far more is known about the reality of AI, the sector is, understandably, making positive strides towards embedding AI further into teaching and learning, professional services, and student support. It has taken time to strike a balance between excessive caution and acknowledging the power of AI to deliver efficiency and productivity for both students and staff.

The challenges

For many in higher education (HE), generative AI posed an inherent threat to academic integrity. Standard academic practice requires accurate references for the sources used in research. Yet tools like ChatGPT and Google Bard do not always provide references within the responses they produce, meaning that students are unable to cite where their information has been sourced. This calls into question the credibility of the information these tools generate and has led some to suggest that the validity of the academic research process is at risk.

Likewise, commentators have claimed that over-reliance on AI for generating research ideas may cause students to lose core academic skills. Times Higher Education recently reported that using AI for idea-generation risks students producing unimaginative arguments. A poll by the Higher Education Policy Institute (HEPI) also found that 54% of the UK undergraduate students polled use generative AI to suggest research ideas. In response, many continue to question how AI can be used in harmony with ethical research practices.

However, the same poll by HEPI revealed that only 3% of students think it is acceptable to include unedited AI-generated text in submitted assessments, and only 5% of those who used AI-generated responses did not edit the text personally. Despite students using AI in some capacity, it seems that most do, in fact, appreciate the importance of academic integrity and are trying to find the right balance between using AI and adhering to good academic practice.

The opportunities

Interestingly, HEPI found that nearly three quarters of students (73%) expect to use AI after they finish their studies. It is therefore unsurprising that students are considering how they can use AI while studying, and it is up to universities to support them to use it ethically, responsibly and effectively.

Earlier this year, the Department for Education (DfE) published its guidance on generative AI in education, which calls on educators to “make the most of the opportunity that technology provides.” Similarly, the Russell Group released its principles on using generative AI in higher education, which the University of Oxford and the University of Bath – among others – have adopted and are now actively working to embed throughout their institutions. Now that guidance on the ethical and fair use of AI is beginning to emerge, universities are starting to use AI to its full advantage and to communicate its impact.

For example, at Staffordshire University, an automated digital assistant supports students with academic and pastoral wellbeing. According to HE Professional, the assistant also supports students with mental health check-ins.

At King's College London, researchers used the Remesh AI platform to facilitate real-time discussions with large groups about how to eliminate awarding gaps. The research produced significant findings about the benefits of using AI in research: notably, students preferred interacting via AI because of the increased anonymity, safety, and non-judgemental environment it offered.

The University of Warwick also received coverage for its practical use of AI in medical diagnostics.

These are just a few examples of how universities are embracing AI to understand its applications in research and professional services, and there are many more in development. By exploring innovative ways of utilising AI, these universities have successfully communicated its potential and become trailblazers in the sector.

Following guidance from the DfE and the Russell Group, the sector is now being bold in its approach to embedding AI into HE, making significant strides towards utilising its full potential. If the sector continues on this trajectory of embracing AI and engages with existing guidance, it is arguable that students will graduate well-equipped with the tools to use AI, that basic and routine work processes will become streamlined and more efficient, and that productivity will increase across the sector.

After all, if AI is the future, it seems only right that the next generation of workers are brought on the journey.
