Looking forward 2026 | Navigating the AI labyrinth

The age of Artificial Intelligence is here. AI has become embedded in our daily lives, from workplaces and smartphones to video games, cars, social media, healthcare and financial services.

Its rapid adoption and ongoing evolution mean the technology is in a constant state of change, making it difficult at times to keep up with.

AI expert Alexiei Dingli explained that AI became widely available and increasingly normal in everyday work, but adoption was uneven. Many people tried it occasionally, a smaller group used it weekly, and an even smaller group used it daily, and this varied a lot by role and sector, he said.

In the US, during the third quarter of 2025, 45% of employees said they used AI at least a few times a year, but only about 10% used it daily. “[This] is a big difference between ‘available’ and ‘fully integrated’,” Dingli explained.

Research suggests the share of work hours spent using generative AI rose from 4.1% in late 2024 to 5.7% by August 2025, which is meaningful growth.

But where is AI headed in 2026?

Your friendly AI assistant

According to Dingli, the next big thing in 2026 is not smarter chatbots like ChatGPT or Gemini, but software that can run parts of a job end to end.

“Think of AI moving from being a helpful assistant to being a junior colleague who can prepare work, check it, and hand you something close to finished. You tell it the goal and the rules, and it takes care of the steps in between,” Dingli said.

This is not only powerful, but also practical, he added. “It saves time on coordination, chasing emails, filing systems, and repeating routine decisions. The real shift is that humans stop micromanaging tasks and start supervising outcomes. That is where the productivity jump will come from, and it is also where responsibility must stay clearly with people.”

AI you direct

During 2025, major changes came to platforms that workplaces have relied on for years, such as Outlook and Google Workspace, with AI assistants built into email and documents rather than offered as separate websites or apps.

But will 2026 bring further integration of AI into existing technologies, or new technologies designed around it?

Dingli thinks we will see both, but expects the more interesting breakthroughs to come from new technologies built around AI rather than AI simply added to old tools.

“Existing platforms like email and document systems will keep getting smarter, but many new products will not look like traditional apps at all. Instead of clicking through menus, people will describe what they want done, and the system will organise the work for them,” he said.

This means screens will show what the AI did, what data it used, and what still needs human approval.

“In simple terms, software will start to feel less like tools you operate and more like systems you direct,” he said.

The dark side

But AI’s rapid advance is also opening the door to darker, more criminal uses, which often outpace the laws and safeguards meant to contain them.

In 2025 a woman was charged for using an AI-generated video of Robert Abela to scam people into fake crypto investments. One victim told police they were scammed out of at least €52,000.

Cyber Crime Unit Superintendent Anna Maria Xuereb had explained in an interview with MaltaToday that cybercriminals were generating AI child abuse material and selling it online.

Dingli said he thinks scams in 2026 will become more personal, more convincing and harder to spot.

“AI removes the effort and skill scammers used to need. Instead of poorly written emails sent to thousands of people, scams will be tailored to the individual. Messages will reference your job, your colleagues, recent events, and even your writing style. Voice scams will also grow rapidly, where a short audio clip from social media is enough to imitate a boss, a family member, or a company director asking for something ‘urgent’,” he said.

The danger, Dingli insisted, is not just that these scams exist, but that they arrive through trusted channels like email, messaging apps, and even internal work tools, making them feel routine rather than suspicious.

Another major shift is speed and scale. He explained AI allows scammers to test hundreds of versions of a message, see which one works, and improve it almost in real time.

This means scams will adapt faster than people’s awareness. We will also see more “quiet scams”, where the goal is not to steal money immediately but to slowly gather information, build trust, and then strike later with something much bigger and more damaging.

“The most important defence in 2026 will not be technical alone, but behavioural. People will need to get used to pausing, verifying, and being comfortable saying ‘I will check this first’, even when a message sounds urgent or comes from a familiar voice. Scams will feel more human than ever, which is why our response must be more deliberate, not faster,” he said.

But Dingli’s outlook is a positive one. He warns against the AI Bogeyman, saying the question is not whether the technology is too clever, but whether we can use it wisely.

“Another risk is quiet deskilling, where people rely on AI so much that they stop understanding the work themselves. The answer is not fear or bans, but clear rules, good training, and a simple principle: AI can help decide and act, but humans must always stay responsible for the final call,” he said.

The AI-fication of Malta

For Malta, Dingli believes, the biggest realistic opportunity in AI lies at a national and local level, not just inside individual companies.

“This means AI that helps run transport, public services, education, and government administration in a joined-up way. Because Malta is small, it can move faster and test ideas that are difficult in bigger countries,” Dingli said.

He said the real win would be AI systems which help public servants make better decisions, reduce paperwork, and respond faster to citizens, while keeping humans clearly in charge.

AI is here to stay, and failing to keep up or adapt will be detrimental. As the technology becomes more powerful and more deeply integrated into everyday systems, the challenge is not stopping its progress but managing it responsibly. Used wisely, AI can boost productivity, improve public services and enhance quality of life. Used carelessly, it can enable new forms of crime and render people irrelevant. Used recklessly, it can create new and dangerous weapons of war, but that is a conversation for another time.