The use of artificial intelligence (AI) tools to assist in research remains a fiercely debated issue in the academic community. While proponents contend that technology is here to stay and should be harnessed ethically, opponents counter that essential critical thinking skills could be a casualty.
A former Director of Graduate Studies and Research at the Cave Hill campus, Dr. Peter Chami, is among those championing the ethical use of AI, but he said there are caveats for those willing to embrace the technology.
In his highly anticipated webinar on the topic, “Revolutionising Research: The AI Advantage for Graduates”, held on 26 June 2024, Dr. Chami described the technology as a game changer that has levelled the playing field between West Indian researchers and their global counterparts.
Hundreds of attendees were shown practical examples of the benefits of generative AI tools and large language models (LLMs) such as Gemini, Elicit, and Unriddle, all of which supported his argument that AI can boost research capacity and reduce the time needed to investigate issues.
“The paradigm has shifted,” he said. “There are always going to be people saying, ‘This is unethical, this doesn’t work.’ We hear you and appreciate you, but we can’t hide it away.”
Dr. Chami, a Senior Lecturer in Mathematics at the Cave Hill campus, holds a doctorate in Mathematical Statistics and an MSc in Biostatistics and Data Mining. He has published more than 40 peer-reviewed journal articles.
He had high praise for AI’s ability to level the playing field for students.
“This is an equaliser for West Indian students … this puts us at the level of Israel, which is a real centre for tech innovation. We now have a person from a village in the heights of Guapo in Trinidad who now has the same access to the kind of answers, coding, tools that an American student at Stanford has. It’s just how we use it … I still say the students’ critical thinking skills, their content expertise, their command still have to shine through.”
At the same time, Dr. Chami issued words of caution. He said all LLMs have leaky algorithms, meaning they retain whatever data is uploaded to them.
“At no time should you put confidential information into these things, or data that’s under IRB (Institutional Review Board) approval. Once you put the information in there, it’s going to bank it because the model trains off the data it gets. Gemini is trained off Google [which has access to] any search that anybody has ever done in Google or any piece of document that’s on the internet.”
He said Google has warned that some AI-generated responses may be inaccurate and may reflect biases present in historical and other source material. This was demonstrated when an internet search for one of the references cited in an AI-generated article showed that the reference did not exist.
With this in mind, the former Deputy Dean for Research and Outreach at the Faculty of Science and Technology warned against blindly accepting information generated by these systems and stressed that critical thinking remains the bedrock of academic research.
“The student still has to be a content expert; they have to show they understand the underlying theory; they still have to show they can connect concepts; [and] they still have to show they have a command of what they’re doing. The AI tool is just that — a tool.”
Dr. Chami suggested that students consult their supervisors before using AI in their research and stressed that each researcher must be guided by their own moral compass.
He made it clear he was speaking from his own perspective when he said: “We have to change how we examine. Written exams are circling back around because it’s important to check if the knowledge base is there.”
He said the university should also place more focus on oral exams to ensure students grasp the underlying theories used in their research.
For those considering using AI, Dr. Chami noted that international refereed journals have stipulations for acknowledging the use of AI in writing. His stated preference was to cite AI tools, much as is done for statistical software such as SPSS.
The researcher also advised students to speak to their supervisor or lecturer upfront and declare any use of AI tools, both to avoid problems with AI detection and to promote transparency.