Stefano Filletti: ‘We have to think outside the box to hold AI accountable’

How do we hold AI accountable? That is the burning question that weighed on Stefano Filletti’s mind before he was chosen to head the Council of Europe’s working group on AI tasked with finding answers to it. He sits down with Matthew Farrugia to go down the rabbit hole of the most controversial new technology of our age 

'We believe that each person has the ability to determine what is right and wrong… But this doesn’t apply to AI' (Photo: James Bianchi/MaltaToday)

Stefano Filletti, a law professor and head of the Criminal Law Department at the University of Malta, is a known name in legal circles. 

He plies his trade at the law courts in Valletta, dealing with criminal cases. But the lawyer is preoccupied with the rise of artificial intelligence (AI) and its potential impact on the criminal justice system. 

Filletti heads a Council of Europe working group on AI and criminal law, tasked to answer the question: “How can we hold AI accountable?” 

It is not a frivolous question and Filletti brings up the example of an AI machine that invests money on behalf of its owner. If, in the search for greater efficiency, the machine learns that it can earn its owner more money by breaking the law, who would be responsible? The answer is not straightforward, especially within a body of laws drafted almost three centuries ago. 

When I ask Filletti how he fell into the AI rabbit hole, he tells me that he spent sleepless nights thinking about the evolving technology. The complexity of the issue is not lost on him. 

Filletti argues that AI can be regulated as a legal person, in the same way companies are held responsible. But any such move would ideally happen across European countries. 

On a practical level, Filletti believes that AI can be used to alleviate the courts’ increasing workload by decreasing the weight of labour-intensive tasks such as writing up transcripts. 

He adds that AI can be used as an organisational tool in Malta’s courts to eliminate human error and ensure that delays are minimised when organising hearings and court sessions.  

The following is an excerpt from the interview. 

What are your views on AI? Are you worried when it comes to regulation in the sector? 

There is a difficulty in understanding what we are trying to regulate. We can’t confuse automation with intelligence. Some programmes are capable of doing people’s jobs faster, but that’s automation. That doesn’t mean the programme is intelligent. 

When we say intelligence, we refer to a system that is capable of evolving and learning on its own by becoming more efficient at its job. We are looking at a system that is able to understand its job and find ways of becoming more efficient and perfecting it… 

Let’s take as an example an AI system whose job it is to invest and generate profits for its users. The system buys, sells, monitors markets, and does the whole thing. The system then learns that it can be more efficient by breaking the law. When this happens, what do we do? Who is responsible? 

Who is responsible in your opinion? 

That is the problem. We’ve always identified three people tied to AI. You have the programmer, who is supposed to design and input the right stops and conditions for the system.  

You then have the user who makes use of the system and puts up the money for investment. Then you have the hacker, who manipulates the system… 

The legal framework we have today places all the responsibility on the programmer who should have ensured that the system wouldn’t act up.  

Like the parent of a minor who has done something illegal? 

Exactly. But there is a problem… AI is capable of evolving, learning, and changing to adapt to its circumstances. A programmer can input all the necessary stops to guard against malpractice, but the system can change and become unpredictable, in the same way a child can grow up and become unpredictable. Can a programmer legitimately determine what an AI system can do when it starts working? 

[…] 

So, is this where the working group you are heading comes in? 

Our question is: ‘When AI evolves and does something we couldn’t predict, such as committing a crime, how do we regulate and control it?’ 

Nowadays, it’s hard not to accept that AI is a person; not biological, but a thinking and evolving object that has possibly surpassed us. When you have these “beings”, can you hold them responsible in the same way you would hold a normal person responsible? Criminally speaking, can AI commit a crime today? The answer is no, especially in Malta, because we are trying to tackle a 21st century concept with 18th century legal concepts. We are three centuries behind. 

We believe that each person has the ability to determine what is right and wrong… But this doesn’t apply to AI, because it cannot determine whether something is right or wrong; its only concern is efficiency. 

And AI surely isn’t afraid of prison or a fine, so the way we apply the law to human beings can never be applied to AI. 

How is the working group coming to the answers it needs? 

I was elected to this Council of Europe body on AI. We are realising that this problem is among us and that we need a regulatory framework to tackle it. There is some legislation within the EU, but we must also consider the eventual need for civil and criminal liability in these circumstances. 

The problem grows when you realise that there are different frameworks in each member state. The working group is focusing on AI because we believe this is a creature that is living with us; it’s not a person, but it must be regulated. We need a framework, no matter how basic, that is uniform across all members of the Council, so we can define AI and create rules on the responsibilities tied to it. 

As you said, AI isn’t afraid of prison or fines. What punishments are there? 

The solution isn’t easy and that’s what we started with. It’s very difficult, but not impossible, because we have similar examples. We have a concept of criminal corporate liability. A company exists only on paper, but the law gives it a particular personhood.  

A company can have its own bank account and chequebook because it is separate from its owner; one can rent out property in the company’s name because the law grants it personhood; and the same company can be found guilty of a crime.   

But it can’t go to prison. 

No, but it can have its assets taken away. If you’re using the company to commit a crime, I can hold you responsible, and I can seize its assets. That’s an example of a non-human person that is subject to civil and criminal laws.  

We have to think outside the box and be as creative as we were when we had to hold companies accountable for their actions.