How Malta’s academic institutions are navigating the AI revolution
Universities and other higher education institutions in Malta are embracing the ethical use of artificial intelligence, developing comprehensive policies, and emphasising effective detection methods
In recent years, one of the most significant technological disruptions to hit education has been the rise of artificial intelligence.
Since the launch of ChatGPT in late 2022, and with the spread of image generators and advanced writing tools, AI systems can now produce essays, solve complex problems, and create content that rivals human work.
While an asset for expediting certain jobs, AI has also created problems in several spheres, not least education, where plagiarism and the erosion of genuine student input are pressing concerns.
Nonetheless, academic institutions are embracing the AI revolution, while ensuring its use is ethical.
MaltaToday spoke to Matthew Montebello, head of the AI Department at the University of Malta, Matthew Sant, President of the MCAST Student Council, and Charles Theuma, Principal of St Martin’s Institute of Higher Education, about how their institutions are navigating the rise of AI in education. The responses highlight different approaches: the University of Malta has embraced AI through comprehensive training programmes, MCAST has developed a detailed policy framework, and St Martin’s has focused primarily on detection and control measures.
University chooses integration over restriction
The University of Malta has positioned itself at the forefront of AI integration, adopting what Matthew Montebello calls an embrace-and-train approach. “The University of Malta embraces AI and generative AI. However, it embraces it when AI is used ethically, in an academically integral way, and not when it’s misused or when abused,” Montebello said.
The university runs three monthly workshops through its Office for Professional Academic Development, covering AI use in teaching, academic research, and data analysis. These sessions consistently reach capacity with waiting lists of academics seeking training.
Rather than prohibition, the approach centres around accountability. Students are encouraged to declare their AI use transparently, detailing specific prompts in bibliographies or appendices.
“I think once a student is transparent and honest, to me, integrity is actually even higher, because I know that all students are using it,” Montebello explained.
The philosophy requires students to defend their work regardless of the tools used. “I have the right as an educator to ask them to walk me through the process. Explain to me why they wrote this paragraph. I don’t care where they got it from, but do they own it now?”
MCAST develops comprehensive framework
MCAST has taken a more structured approach, developing what appears to be Malta’s most detailed AI policy framework.
Matthew Sant described the strategy as “proactive yet cautious”. Working with the Quality Assurance Team, MCAST has established guidelines focusing on guidance rather than restriction.
“The focus isn’t to restrict AI, but to guide both students and lecturers in using it as a supportive tool, for example, to personalise learning, streamline administrative tasks, and strengthen teaching efficiency,” Sant explained.
He noted that the Student Council had presented a document, the Policy Paper On The Use Of Artificial Intelligence At MCAST, the first of its kind produced by a student organisation in Malta.
The framework covers five areas: alignment with the EU AI Act, data protection, academic integrity, AI literacy, and transparency.
MCAST’s formal policy explicitly allows AI use in coursework, provided students disclose it and the use doesn’t replace their skills or conflict with learning outcomes. Sant emphasised that “fairness must come before suspicion”, advocating dialogue-based approaches when AI detection tools flag potential issues.
He drew clear ethical boundaries around “intellectual honesty”: “Using AI for brainstorming, improving structure, or checking grammar is responsible; it enhances learning. But when AI replaces a student’s original thought, research, or creativity, it becomes misconduct.”
St Martin’s adopts a more traditional approach
St Martin’s Institute of Higher Education, a private tertiary education provider, has on the other hand adopted a more traditional approach. Its principal, Charles Theuma, described using plagiarism detection software to identify AI-generated content and requiring students to complete declaration forms when submitting assignments.
Theuma discussed how the regulation and integration of AI in education is becoming increasingly inevitable, noting that different institutions are adopting varied approaches.
He explained that at St Martin’s, plagiarism detection tools are already in use, though they are not always reliable in identifying AI-generated content.
Students are often asked to declare their AI use, stating which prompts they used and how AI contributed to their assignments.
He emphasised the importance of fairness, ensuring that students aren’t wrongly accused of using AI. To address authenticity concerns, he explained that in cases like dissertations, a significant portion of the marks come from the viva, which tests whether students genuinely understand and can discuss their thesis, making it easier to detect AI-written material.
Looking ahead, he expects AI to reshape teaching methods and assessments, but warned that without clear policies and proper regulation, the current education system might soon feel “not fit for purpose.”
Contrasting approaches
The contrasting approaches reveal fundamental differences in educational philosophy, with some institutions viewing AI as inevitable and focusing on ethical integration, while others prioritise detection and control.
For Montebello, there are historical parallels to previous technological disruptions. “It happened 300 years ago when the book was introduced in the classroom, and the academics complained because now students would have the book, and so in their mind they didn’t need teachers anymore. The same thing happened in 1993 when the internet was introduced.”
He predicted AI integration will follow similar patterns but faster: “It took the internet 10 to 15 years to gain acceptance. Generative AI will likely be accepted in about five.”
However, he warned against “cognitive unloading”, the risk that over-reliance on technology could erode critical thinking skills. “Cognitive unloading is that point where the technology is not assisting me. It’s replacing me, and this is what we want to avoid.”
Sant also predicted AI would transform education “from being reactive to proactive”, with real-time progress tracking allowing early intervention for struggling students.
“Assessments will likely shift from memorisation-based exams to process-driven evaluations that value creativity, problem-solving, and critical thinking,” he explained.
Montebello also described AI as an advanced collaborator, explaining how researchers can now analyse hundreds of papers instantly, a process that previously took weeks.
However, Theuma said the tools must be “fit for purpose” and supported by proper training. “If AI tools are used blindly, without context or adaptation, they lose their value,” he explained. “Teachers need to know how to interpret AI outputs, not just rely on them.”
Theuma also noted that while AI can simplify certain academic processes, it must not replace the core of learning. “Students still need to think critically, analyse sources, and create their own arguments,” he emphasised.
