It’s not the first time that universities have become entangled with developments that would later come to haunt them, explains Olivia Guest, a computational cognitive scientist at Radboud University and lead author of the paper. ‘From combustion engines to tobacco, universities have been used in the past to whitewash now-controversial products. For a long time, the tobacco industry pointed to research it subsidized at universities to claim its products were healthy.’

In their article, a position paper released as a preprint this month, the researchers warn that similar entanglements are now happening with artificial intelligence technologies. ‘A lot of academic AI research is currently funded by the AI industry, which creates the risk of distorting scientific knowledge, much as we’ve seen happen in the past’, adds Iris van Rooij, co-author and professor of computational cognitive science at Radboud University in the Netherlands.

The researchers argue that the current uncritical adoption of AI at the top level of universities actually runs counter to what most students and staff want. ‘AI is often introduced into our classrooms and research environments without proper debate or consent,’ says van Rooij. ‘This is not just about using tools like ChatGPT. It’s about the broader influence of the tech industry on how we teach, how we think, and how we define knowledge.’

‘Study after study shows that students want to develop these critical thinking skills, are not lazy, and that large numbers of them would favor banning ChatGPT and similar tools in universities’, says Guest. By speaking up, the researchers aim to show that the ‘inevitability’ of AI is just a marketing frame perpetuated by the industry, and that pushback is far more feasible than it often appears.

Guest, van Rooij and colleagues list a long series of problematic aspects of AI technology in their paper. These range from environmental harms (vast consumption of energy and resources) and illegal practices (such as plagiarism and theft of others’ writing) to the risk of deskilling students. Guest: ‘The uncritical adoption of AI can lead to students not developing essential academic skills such as critical thinking and writing. If students are taught to learn through automation, without learning how and why things work, they won’t be able to solve problems when something actually breaks – which, judging by the AI output we now see, will be often.’

The researchers also warn that AI technology harms future research and enables the spread of misinformation. ‘Within just a few years, AI has turbocharged the spread of bullshit and falsehoods. It is not able to produce genuine, high-quality academic work, despite the claims of some in the AI industry. As researchers, as universities, we should be clearer about pushing back against these false claims by the AI industry. We are told that AI is inevitable, that we must adapt or be left behind. But universities are not tech companies. Our role is to foster critical thinking, not to follow industry trends uncritically.’