Industry Trends

AI and HE around the world – Evolution and revolution?

GLOBAL

It has been said that journalists write the first draft of history. This is the first draft of the history of generative AI as it intersects with higher education, as told by the writers of University World News. Many of these writers are leading academics across the world, giving this ‘first draft’ greater depth.

University World News is influential among university leaders, policy-makers and higher education thought leaders worldwide and we recognise the growing importance of AI in higher education. That is why we have run more than 100 articles over the past 15 months exploring developments around AI with implications for higher education institutions and systems, students and staff, teaching and research.

University World News has been capturing the ways in which AI has been challenging and transforming higher education, using our trademark approach that draws on an international network of quality journalists and commentators, with a global perspective but also drilling down to regional, national and local levels.

Join us on a journey around the globe, through the eyes of our expert commentators and correspondents. How are universities engaging with the rise and rapid evolution of GenAI? With the ethical challenges and assessment opportunities? With the research opportunities and threats? With the AI race to lead the world of science?

As Professor Rick Stevens of the University of Chicago told a United States Senate committee in September 2023: “Whoever leads the world in AI will lead in science.”

The advent of ChatGPT

The advanced chatbot ChatGPT (Chat Generative Pre-trained Transformer) was launched on 30 November 2022 by the American technology start-up OpenAI. It represented a huge leap in generative AI. OpenAI described ChatGPT as a large language model (LLM) that interacts with people in a conversational way and can “answer follow-up questions, admit its mistakes, challenge incorrect premises, and reject inappropriate requests”.

Despite limitations, ChatGPT quickly gained worldwide acceptance, reportedly accumulating 57 million monthly active users in its first month of availability and surpassing 100 million active users in January 2023.

Students, faculty and staff the world over were also quick to react, and within months had begun using ChatGPT in myriad ways.

For their part, universities have responded in markedly different ways. Some are fully involved – an example being America’s Arizona State University and its new partnership with ChatGPT’s creator. Others are cautiously engaged, while some have ignored generative AI altogether.

In late 2023 a national survey of 1,250 students in the United Kingdom, reported by University World News, fleshed out this impression: only 63% of students said they thought their university had a clear policy on AI.

Yojana Sharma, University World News’ Asia editor, reported that ChatGPT had upended China’s declared bid to emerge as a global AI innovation power. The Chinese government has been pouring billions of yuan into R&D in the AI sector and into developing AI talent.

“But ChatGPT has left Chinese companies and universities playing catch-up and ignited a new ‘arms race’ with the United States, focused on generative AI. Since the emergence of ChatGPT, the central government and city authorities have announced new funds to entice promising start-ups that might develop a Chinese ChatGPT,” Sharma wrote.

“China wants its own intelligent natural language systems for reasons that range from the need for Chinese language systems to keep up with English and other global languages, to purely political reasons relating to its goals as a global science and technology power,” she stated.

Korea also believes that having its own LLMs, built on the Korean language and Korean knowledge, will be useful. In addition to improving its own-language GenAI applications, the country aims to build 200 types of specialised learning and language data models for non-English speaking markets, reported Sharma. The Korea Advanced Institute of Science and Technology (KAIST) is among a number of AI graduate schools that hope to train 200,000 advanced software and AI professionals by 2027 to prepare for generative AI technologies to be rolled out into the economy.

Soon after its launch, ChatGPT was followed by Google’s Bard (now Gemini), Microsoft’s AI-powered Bing and its Copilot, and a plethora of others such as HIX Chat, Chatsonic, YouChat, AlphaCode and MusicGen. Many edtech services, such as the student-directed Socratic, have been around for years but have improved with AI advances.

Writing in March on the need for African universities to prepare for generative AI, Dr Eyualem Abebe, dean of STEM at Eastern Florida State College, Cocoa, in the United States, noted: “[there are] currently more than 1,000 generative AI tools, some of them for various specialised fields of study-interest, many of which are great tools for educators, for example, Eduaide.AI, which has a content generator, teaching assistant, feedback bot, free-form chat, and assessment builder”.

ChatGPT as a ‘game-changer’

The initial view of ChatGPT as a ‘game-changer’ and a tricky tool for cheating and plagiarism dominated the higher education narrative, and so the focus first fell on teaching and learning. Perhaps for that reason, the early response to ChatGPT in universities was panic.

By January 2023, universities in many countries had urged academics to review how their courses were assessed amid warnings that students were already using generative AI to produce high-quality essays with minimal input of their own. Many institutions around the world, including France’s renowned Sciences Po and England’s Oxford and Cambridge, outlawed the unacknowledged use of AI among students.

But there were also many calm voices.

Dr Samuel Saunders, an educational developer in the Centre for Innovation in Education at the University of Liverpool in the United Kingdom, argued that banning technologies was neither a solution nor even a possibility. “In fact, banning generative AI services simply broadcasts a very strong message that universities assume that students will use them to cheat, which does students a severe disservice,” he said.

In the United States, the Council for Higher Education Accreditation told University World News that universities and accrediting organisations needed to engage in AI so that it supported – rather than replaced – authentic learning.

In a strategy paper reported by University World News in February, the European University Association (EUA) argued that banning the use of AI tools and other new technologies would be futile. “Consequently, the higher education sector must adapt its learning, teaching and assessment approaches in such a way that AI is used effectively and appropriately,” it stated.

The EUA was one of the first voices to point out shortcomings associated with GenAI, “such as lack of references to sources of information, biases in data and algorithms, intellectual property and copyright, or issues related to privacy, data security and fairness”. ChatGPT’s propensity to ‘hallucinate’, producing wildly incorrect answers, has been a major problem.

In a position paper on 14 February, the Guild of European Research-Intensive Universities – 21 European research universities in 16 countries – urged investment in fundamental research into AI as “quintessential to the European Union’s capacity to be globally competitive in artificial intelligence and digital research”.

There was no going back on the need for universities to engage with generative AI, and the Guild’s position was one of the first to raise the importance of AI to research, and to signal the implications of generative AI for the development and competitiveness of countries.

The potential benefits for universities and teaching were equally clear, such as improved efficiency and personalised learning. Soon universities and higher education stakeholders had working groups investigating GenAI and what it means for just about every facet of university work – teaching and learning, assessment and accreditation, research and publishing, university administration and student services.

All in all, the responses of universities around the world were mostly what you would hope: cautious but open-minded, well thought through and smart.

AI and higher education internationalisation

University World News, as its name implies, is a publication with an international perspective. We are proud to have a network of local journalists around the world who write about universities on the ground, from a local standpoint – no ‘helicoptering’ involved.

Feedback from readers shows that our international approach, and particularly our coverage of regions not usually reported on by Western higher education publications, is welcome and much needed by universities and staff operating in a global higher education environment.

An interesting international perspective was brought to the topic of generative AI by Sarah Knight, head of learning and teaching transformation at Jisc, Britain’s digital agency. She wrote about the ‘digital shock’ that international students from very different cultures and educational backgrounds may face when confronting digital technology, including AI, in their host institutions – and how it impacted on their learning and wellbeing.

This problem “can be tackled by universities taking a more thoughtful and inclusive approach, focusing on providing equitable digital experiences. This requires a clear commitment from leadership and a dedicated digital team”, wrote Knight – and a focus on equitable access will benefit all students, as well as staff.

Counterintuitively, research shows that universities with a more international student population tend to be more cautious and pessimistic about adoption of generative AI.

Based on a survey of the online AI guidelines of 100 highly ranked universities, Tomohiro Ioku and Sachihiko Kondo of Osaka University in Japan and Yasuhisa Watanabe of the University of Melbourne in Australia argued this caution could be due to bias.

Diverse student populations can make it difficult to anticipate how students will react to technology, and this uncertainty could have been perceived as a heightened risk in implementing AI. If so, engaging more with their students about their attitudes to AI is vital.

Another interesting perspective – on growing digital internationalisation in universities driven by new technologies such as extended reality and generative AI – was provided last month by Chris R Glass of Boston College, Melissa Whatley of the SIT Graduate Institute and Taylor C Woodman of the University of Maryland in the US: digital internationalisation needs to be about more than simply delivering Western modes of learning to global populations.

Glass and others looked at the significant impacts of technology on international education – just one example would be virtual (instead of physical) student exchange – and concerns about unequal partnerships and persistent digital divides. Universities must build partnerships that prioritise mutual respect, resource sharing and the co-creation of digital spaces that integrate non-Western paradigms and include marginalised perspectives, they argued.

Teaching and learning – a new era

It has been astonishing how many facets of higher education are being transformed by generative AI. As the months unfolded, so did the ideas and explorations of how ChatGPT and subsequent AI tools will affect teaching, learning, and how universities are managed.

Higher education expert Louise Nicol suggested in February this year that online education would continue to grow and there would be increased integration with AI for more personalised learning.

Highlighting the remarkable potential of technology, Dr Paul LeBlanc, president of Southern New Hampshire University, said AI is forcing a paradigm shift in higher education from ‘what you know’ to ‘how you will be’, with knowledge no longer the foundation it has been in the past.

“Knowledge is no longer scarce. It is just one prompt away on your phone,” he told the 2023 ASU+GSV Summit in the US, as reported by Nic Mitchell.

“To harness the full benefits of technology, we need to rethink the system and we’re only just beginning to understand this question,” LeBlanc told the audience. Under his direction, Southern New Hampshire University has grown to become the largest non-profit provider of online higher education in America, with 180,000 students.

So how are students currently using generative AI?

Unsurprisingly, many students responded rapidly to ChatGPT and soon it was commonly used. Some 53% of students in the United Kingdom have used generative AI to help with their studies, according to the first national survey of students and AI since ChatGPT arrived, published in February this year and reported by Karen MacGregor. The most common use is as an ‘AI private tutor’, with 36% using AI to help explain concepts.

The survey by the Higher Education Policy Institute (HEPI) think tank found that 37% of students used AI for enhancing and editing writing, while 30% said they had used AI tools like ChatGPT to generate text and 25% had used AI for translation. Worryingly, few students see AI ‘hallucinations’ as a problem, which may suggest they are not verifying information and may be using inaccurate information.

“Only 5% of students put AI-generated text into assessments without editing it personally,” said the report, and 65% of students are ‘quite’ or ‘very’ confident that lecturers can detect if AI is used.

Report author Josh Freeman said “students trust institutions to identify the use of AI tools and they feel staff understand how AI works. As a result, rather than having AI chatbots write their essays, students are using AI in more limited ways: to help them study but not to do all the work”.

The 2023 Global Student Survey

Beyond the UK, the 2023 Global Student Survey, produced by Chegg.org, polled 12,000 students across 15 countries and was published late last year.

It showed that globally, students are also embracing generative AI, with 40% reporting using it in their studies. Like their British counterparts, students want training in AI tools.

Augmenting the quality of their education and boosting skills needed to compete in the global economy were important motivations for students to use generative AI, according to the survey. Crucially, the top priority for improving GenAI (selected by 55% of all those surveyed worldwide) was the involvement of human expertise.

Also importantly, 65% of students worldwide said they would like their curriculum to include training in AI tools relevant to their future career.

Demand for AI training was particularly high in India (83% of students), Türkiye (73%) and Indonesia (72%). In contrast, only 47% of students in the US expressed an interest in training in career-relevant AI tools.

The possibility of using AI to improve the way students are taught to think was also a focus of a delightful article by our North America correspondent, Nathan M Greenfield, who kicked off with a limerick ChatGPT wrote in response to a prompt:

There once was a student in need,

Of an essay, a difficult deed,

So, they turned to ChatGPT,

For help, you see,

And aced the assignment with speed!

Through interviews with English and other professors, Greenfield, himself a college English professor, described GenAI’s problems and challenges for literature – including the numerous errors the chatbot makes, which have solidified into a major operational weakness, and a 2021 cut-off date for content that makes ChatGPT increasingly out of date.

Greenfield described the ‘moral panic’, focus on plagiarism and declarations on the death of the literary essay that greeted ChatGPT. But he also spoke to experts who argued that AI can be used to improve the way students are taught to think.

Art schools are AI-immersed

One of the unexpected outcomes of the ChatGPT phenomenon was the news that generative AI was considerably more prevalent in the arts than elsewhere in higher education, except perhaps in some areas of research. In June 2023, European art schools said that most students had integrated AI into their work, wrote Karen MacGregor, reporting on a European University Association webinar. But few art universities had an AI policy.

Pawel Pokutycki, a researcher and senior lecturer at the Design Academy Eindhoven and the Royal Academy of Art in The Hague, Netherlands, said of GenAI: “It’s our daily bread today. Our students and tutors have been engaging in creative work with AI machine learning quite enthusiastically, full of curiosity but also with a certain level of criticality.” Pokutycki is enthusiastic about the role of GenAI in the arts provided references are acknowledged: where there is critical reflection on the tools used, there is a level of transparency, he said.

The extraordinary development of creative technologies, tools and websites has led art experts “to perhaps see AI as a form of artistic intelligence. Obviously, it’s a discussion point to what extent AI is artistic or not, and how it is challenging us with our artistry,” he said.

First big AI fear: student assessment

Very early on it became clear that generative AI would have a major impact on student assessment – and thus also on accreditation of courses, qualifications and universities. Colleges and universities around the world formed AI working groups and held seminars and webinars for staff on what the new technology meant.

At a symposium at the University of Mississippi in March last year, reported Natalie Simon, academics looked at integrating AI into evaluation, stressing ChatGPT’s ability to aid staff, rather than replace them.

Risks such as inaccuracies and biases were acknowledged, including the flawed behaviour of AI models trained on biased data, and concerns were raised about AI exacerbating societal inequalities. The symposium agreed that the use of AI in evaluation was inevitable and highlighted the importance of engagement and regulation to tackle the ethical and equity issues it raises, and of responsible implementation.

Another big fear: the AI problem for linguistic and stylistic diversity

Linguists were also concerned, warning that widespread use of ChatGPT around the world could reduce linguistic as well as stylistic diversity. University World News Asia editor Yojana Sharma reported concerns that with widespread use of AI-assisted tools, “the gap between the languages where such tools work well and those where it does not work so well will become larger, and in some scenarios create a new global educational and digital divide based on language”.

As AI-assisted writing takes hold in some parts of the world, in other parts of the world – such as some Asian countries with complex writing systems – students could be left behind if AI companies do not swiftly develop other language versions of ChatGPT.

Jieun Kiaer, a professor of Korean linguistics at the University of Oxford in the United Kingdom, warned that tools such as ChatGPT could reinforce the global dominance of English and some other European languages. It could also reduce the richness of language.

The World Innovation Summit for Education (WISE) late last year discussed strategies to tackle these challenges, focusing on the potential of small language models (SLMs), reported Sharma.

SLMs offer efficient alternatives to costly LLMs. They require less data and computing power, making them accessible to regions that lack extensive resources. SLMs are also easier to control and correct, offering greater transparency and oversight, especially in education contexts.

Researchers at WISE, hosted by the Qatar Foundation, stressed the importance of developing localised SLMs that are also tailored to specific languages, such as Arabic, to ensure accuracy and cultural relevance. The push for SLMs represents a shift towards more sustainable and inclusive AI in diverse linguistic settings.

Scientists at the world’s first dedicated AI postgraduate research university, the Mohamed bin Zayed University of Artificial Intelligence (MBZUAI) in the United Arab Emirates, have also been exploring smaller models that can be trained more efficiently, reducing their carbon footprint and making models much more accessible, wrote Karen MacGregor.

Generative AI and student recruitment

Beyond teaching and learning, generative AI is also set to revolutionise student recruitment, according to Louise Nicol and fellow expert Alan Preece – as long as universities get their recruitment criteria in order and make key data about price, excellence and graduate employability accessible on their websites.

ChatGPT offers personalised course searches, challenging traditional aggregators and favouring universities that meet student criteria, among many other things.

“For smart universities, the changes will allow them to focus on their unique selling points with high quality customer information about price, excellence and graduate employability. The onus will be on institutions to genuinely respond to student expectations rather than using rankings as a surrogate for quality. No need to pay middlemen in the form of agents and aggregators because the students can do the searching for themselves,” they said.

Generative AI and admissions diversity

Last October, Nathan Greenfield reported on a study published in Science Advances describing an algorithm, developed by a University of Pennsylvania PhD student and nine researchers, that is able to ‘read’ university application essays and determine pro-social and leadership qualities. The algorithm reportedly makes it possible to filter out readers’ biases and contribute towards a more diverse student intake.

The research could help solve a time-consuming and exhausting problem for admissions officers: reading through what can be tens of thousands of student applications.

“It all comes down to a qualitative judgement by an expert [who has] years of expertise,” says Benjamin Lira Luttges, the PhD student who led the study. “But human judgement is subject to noise and biases. So what we set out to see is: if you could have a human with an AI collaborating, could you reduce this kind of noise, because the algorithm doesn’t get tired? The algorithm uses the exact same judgement call for every essay; in that sense, there is no ‘noise’.”

AI’s impact on qualifications recognition

AI is likely to have an impact on the recognition of qualifications as much as on any other area of education. Laws, regulation and practice will need to be reviewed, and a common understanding and practice developed in the higher education community.

Any approach should recognise the use of AI and seek to counter its abuse, wrote Sjur Bergan, one of Europe’s leading higher education thinkers and former head of education for the Council of Europe, last April.

AI in universities raises questions about learning outcomes beyond knowledge and skills to include ethical considerations and the willingness to act. Recognising qualifications in the AI era poses challenges, especially regarding fraudulent documents and the role of AI in academic work, Bergan argued. While AI can assist in identifying fraud, it also raises concerns about the authenticity of academic achievements.

Collaboration between universities, public authorities and qualifications specialists is essential to tackle these challenges. Providing clear information on AI’s implications for recognition is crucial for students, employers and society.

AI and data, equity and ethics

AI and academic integrity, post-plagiarism

One of our best-read AI articles is on plagiarism – 43,000 reads so far – by Sarah Elaine Eaton, associate professor of education at the University of Calgary in Canada. Eaton wrote that AI would soon be built into everyday word processing programmes, becoming as commonplace as predictive text on phones.

In her 2021 book on plagiarism in higher education, she argued that technology would deliver an age of post-plagiarism in which humans and technology co-write and the result is hybrid human-technology output.

“We will soon be unable to detect where the human written text ends and where the robot writing begins, as the outputs of both become intertwined and indistinguishable. The key is that even though people can relinquish full or partial control to artificial intelligence apps, allowing the technology either to write for them or to write with them, humans remain responsible for the result. It is important to prepare young learners and university students for this reality, which is not a distant future, but already the present,” she said.

Algorithms that bias searches of academic literature

But first, academics need to learn how generative AI works and what they can do about algorithms that bias searches of academic literature – favouring authors who are white, Western and male, commented Katy Jordan, a senior research associate in education at the University of Cambridge and co-author of a report on this topic for the Society for Research in Higher Education.

Many researchers are unaware of how widespread this problem is, but there are several ways in which it can be addressed. “At a minimum, databases should be more transparent about the use of ranking algorithms, making clear the risk of bias. Developers should also carefully consider whether ranking by relevance is really necessary at all,” she wrote.

The dearth of accessible African datasets

For academics in the Global South this bias is a daily reality. Winston Ojenge, a senior research fellow at the Africa Centre for Technology Studies in Kenya, wrote that despite there being a great deal of local data, there is a dearth of accessible African datasets and there are challenges around local languages.

However, broadband connectivity, 5G and the internet of things are spreading to the remotest corners of Africa, and will make possible the data collection needed for powerful AI analytics in education.

Instead of striving to eliminate biases in datasets, researchers should ask why biases are there in the first place and what power and politico-economic factors cause them, argued Chantelle Gray, a philosophy professor and chair of the Institute for Contemporary Ethics at North-West University in South Africa.

Much of the work being done to address issues of bias and other challenges is covered by ‘AI ethics’. “This work is extremely valuable, but it does not sufficiently address the deeper individual and collective consequences of digitalisation on our societies, our psychological health and even our thought processes,” she wrote.

Gray wrote that the philosopher Bernard Stiegler explored these elusive ‘disorders’ of digitisation, which he saw as a ‘generalised arrested development’ that materialises as symptoms of widespread disaffection and withdrawal.

Technophilosophy and ethics

In early 2023, researcher Rens van der Vorst of Fontys University in the Netherlands wrote that universities would need to open up debates about the ethical challenges new technologies throw up. He introduced a ‘Moral Design Game’ – an innovation that enables students to talk to each other about ethical issues around new technologies, from multiple points of view.

Fontys University teaches students to think about the impacts of technology – something Van der Vorst calls technophilosophy.

“Thinking about the impact of technology is a crucial competence, not only for IT students but also for students from all other courses. After all, disciplines such as healthcare, logistics, economics and journalism are also becoming increasingly imbued with digital technology,” he said.

It was a point also made by physicist and economist Professor Sergei Guriev, provost of the research university Sciences Po, who pointed to social scientists as key to developing AI tools and to understanding the implications of their introduction. “The most important questions are about philosophy and ethics,” he said.

He told University World News’ Karen MacGregor: “The political and social implications of AI are directly related to the ethics of AI. A lot of social scientists educate computer science colleagues to think about what they invent and how they invent it. Economists and sociologists who work on inequality, for example, can teach AI colleagues how to develop AI algorithms that do not reproduce discrimination and bias. This is a huge issue.”

Guriev, who is an advocate of interdisciplinarity – and indeed embodies it, as an economist, physicist and mathematician at a social sciences university – explained: “The idea that a machine can replace a biased, discriminating human, and fix the discrimination problem is a utopia – unless we involve social sciences, understand the biases and correct them at the level of the algorithm.”

Governance, regulation, guidelines and geopolitics

It soon became clear that education needed specific attention when it came to regulation, as a sector set to be profoundly affected by generative AI.

Universities and indeed governments around the world sat up and took proper notice of the imperative to keep some sort of watch over AI technology when the developers of ChatGPT predicted that within 10 years, “AI systems will exceed expert skill level in most domains” and be powerfully productive.

OpenAI leaders Greg Brockman, Ilya Sutskever and Sam Altman said in a note published on the OpenAI website on 22 May 2023: “We must mitigate the risks of today’s AI technology too, but superintelligence will require special treatment and coordination.

“In terms of both potential upsides and downsides, superintelligence will be more powerful than other technologies humanity has had to contend with in the past. We can have a dramatically more prosperous future; but we have to manage risk to get there. Given the possibility of existential risk, we can’t just be reactive.”

Making laws is usually a lengthy process, but technology does not linger. So in the past year, universities and higher education bodies have stepped up by debating and agreeing on guidelines to help steer university leaders, staff and faculty.

Here, we highlight just some examples of what universities and governments have been doing in the areas of AI regulation and guidance.

UK university principles promote AI literacy and integrity

Last July, Britain’s Russell Group of top research universities published a set of principles that promote AI literacy and use by staff and students, wrote Karen MacGregor. Universities are urged to adapt teaching and assessment to support the equitable and ethical deployment of AI, to uphold academic rigour and integrity, and to collaborate around evolving technologies.

The Russell Group AI principles are:

• Universities will support students and staff to become AI literate.

• Staff should be equipped to support students to use generative AI tools effectively and appropriately in their learning experience.

• Universities will adapt teaching and assessment to incorporate the ethical use of generative AI and support equal access.

• Universities will ensure that academic rigour and integrity is upheld.

• Universities will “work collaboratively to share best practice as the technology and its application in education evolves”.

China gets tough on AI cheats

China’s generative AI law, which came into effect in August 2023, was the first specific law on generative AI in the world, reported Yojana Sharma.

The law attempted to balance R&D and censorship. While regulations outlined restrictions, they were not as limiting for researchers in universities and companies as initial drafts had proposed – “a signal that China is keen not to stall research in this area,” she wrote.

The law stresses that generative AI products must be in line with China’s ‘core socialist values’ and only ‘legitimate data sources’ should be used in developing generative AI products. The provisions include exemptions from registering and obtaining licences for AI programmes that are at the research stage, and from other restrictions that kick in for software designed for use by the general public.

In September we reported that students and academics in China who use AI tools to ‘ghost write’ essays or dissertations risk having their degrees revoked. A draft Degree Law to tackle misconduct, including plagiarism, went before the National People’s Congress in September.

AI and research

Teaching and learning has stolen the limelight in generative AI discussions in higher education over the past year. But some of the most impactful developments are unfolding in the research and academic publishing arenas. AI tools are playing an expanding role in research, and are said to be transforming and accelerating scientific knowledge.

Research that uses AI is expanding rapidly across fields. In some, such as astronomy, AI is indispensable and is raising research productivity and possibilities. As Thomas Jorgensen of the EUA said last December: “AI in science is a gift that keeps on giving”.

This year, University World News has turned attention to AI and research. A special article series on ‘AI and Research’ launched on 7 April, and it will culminate in a special briefing in mid-2024.

Our journalists and experts are investigating an array of questions, such as: What are the trends around AI in science? What challenges and benefits does AI bring to research? How are universities and higher education systems implementing AI in research? How can universities and governments encourage trust and ensure quality in AI-generated science? Will AI change the way scientists write and communicate about science?

Let us start with academic publishing.

AI and research publication

ChatGPT quickly caused disruptions within academic publishing. Some of the major journal publishers immediately banned or curbed researchers from using the AI tool due to concerns about inaccuracy or plagiarised work – both of which had been growing problems for years. Several researchers had already listed ChatGPT as their co-author.

A study by Brian Lucey from Trinity College Dublin and Michael Dowling from Dublin City University showed that ChatGPT could write a finance paper that would be accepted for an academic journal.

“This has some clear ethical implications. Research integrity is already a pressing problem in academia and websites such as Retraction Watch convey a steady stream of fake, plagiarised and just plain wrong research studies. Might ChatGPT make this problem even worse?

“It might, is the short answer. But there is no putting the genie back in the bottle. The technology will also only get better (and quickly),” Lucey and Dowling wrote.

Researchers should see ChatGPT as an aide, not a threat. “It may particularly be an aide for groups of researchers who tend to lack the financial resources for traditional (human) research assistance: emerging economy researchers, graduate students and early career researchers. It is just possible that ChatGPT (and similar programs) could help democratise the research process,” they said.

Dr Mark Carrigan, a lecturer in education at the University of Manchester, explored effects that AI might have on research publication, and argued that it could significantly boost the already soaring production of scholarly papers.

“If we accept the premise that generative AI has the potential to automate parts of the writing process, then it increases how many outputs we can produce in the same amount of time. Imagine what annual outputs might look like globally if generative AI becomes a routine feature of scholarly publishing,” he said, and added: “Academics need to consider why they publish, and harness the new technology to reduce time spent on routine tasks and contribute towards a more creatively fulfilling life of the mind.”

The Nobel Prize Dialogue

On 5 March 2024, a Nobel Prize Dialogue held in Brussels, on “Fact & Fiction: The future of democracy”, discussed the huge potential of AI to support research, the contributions of science to democracy and the importance of critical thinking in the age of AI.

Demis Hassabis CBE, a British AI researcher and co-founder and CEO of Google DeepMind, told the audience: “We’re now at an incredible inflexion point. We’re about to enter, maybe in the next 10 years, a new golden era of scientific discovery, helped by AI in many fields.” His lab is working on an LLM that could work like a research assistant.

Hassabis led the development of AlphaFold, a deep learning-based algorithm for accurately predicting the structure of proteins. It has predicted the structures of more than 200 million proteins – pretty much every protein known to science. “We’re seeing a revolution in biology. It’s going to apply to other areas too, like chemistry, material science, physics and mathematics. All these scientific disciplines will benefit from AI,” he said.

Nobel Prize scientists were also on the panel: Ben Feringa, a Dutch professor who won the 2016 Nobel Prize in Chemistry for research on molecular machines, and Sir Paul Nurse, British winner of the 2001 Nobel Prize in Physiology or Medicine for work in genetics.

Feringa, of the University of Groningen, gave an example of how AI is being used. To make a new cancer drug, you need maybe 35 or 40 different chemical steps – like Lego – to build a complex molecule that treats breast cancer, for instance.

“To design these routes, there are hundreds or maybe thousands of possibilities. So from all the collective information that we have in the chemical literature and in the physical literature, we then use these programmes to design routes.” Generative AI does the donkey work; humans decide on the best routes.

Nurse suggested that time liberated by AI from routine tasks should be applied by university scientists to different purposes, such as encouraging critical thinking in students.

A primary job of university scientists is to train students to be critical, stressed Nurse. Academics must ensure that data used to generate analysis is good. Further, it is crucial to ensure that the algorithms used perform in the ways that researchers want. Finally, scientists must understand what is going on: “We can’t be stupid and just press the button on the computer.”

New areas of research, and scientific integrity

GenAI has catalysed new areas of university research. In February this year, Yojana Sharma wrote that in Taiwan – whose leading position in the semiconductor industry has helped to attract talent and investment in research – there has been “a mushrooming of research projects to resolve problems stemming from the application of generative AI technologies”.

She quoted Hsuan-Tien Lin, a computer scientist at the National Taiwan University, as saying that generative AI research is different from traditional scientific research: “Generative AI needs us to define a new process of how we do research.”

Lin said a careful evaluation process in research is critical because the results from GenAI are not always reproducible, “so how can we convince ourselves that we are getting true research results rather than ‘hallucinations’?”

Using AI to research AI research

The use of LLMs in research includes using AI to research AI. Scandinavian correspondent Jan Petter Myklebust recently wrote about two major studies at the University of Copenhagen in Denmark. Experts are mapping the growing influence of AI on science, how researchers are using AI, and how AI technology is spreading across scientific communities.

“Development is moving so fast at the moment. But right now, we have a window of opportunity where we can compare the knowledge produced by humans with the work of AI,” said research leader Professor Roberta Sinatra. “Ultimately, the aim of the project is to give us all a much clearer picture of the new and unexpected consequences of AI-infused science.”

Professor Morten Goodwin of the University of Agder in Norway told University World News that such projects “are vital for academic AI research as they provide empirical data on AI’s current and potential uses in science, helping to shape guidelines in academia and understand its transformative impact. It also prepares the academic community for AI’s evolving role, ensuring its effective and responsible integration into future research.”

‘Calm your inner Luddite …’

In conclusion, we have shown that generative AI represents both ‘evolution and revolution’ for higher education. ChatGPT and its fellow popular generative AI tools are a giant step up from the AI that has been evolving over the years, and they are driving a revolution in higher education across the world.

For the last word, let us travel to an Academy of Science of South Africa webinar held a year ago, where futurist Dr Roze Phillips warned that trying to outsmart AI was not a viable strategy.

“Our technologies are progressing faster than our wisdom,” she said before delivering her key message to educators on the best approach to AI: “Calm your inner Luddite, hold on to your inner sceptic.”

This article formed the basis of a keynote speech presented by University World News Editor-in-Chief Brendan O’Malley at the 2024 ABET Symposium earlier this month. It is co-written with Karen MacGregor.

