[WATCH] Brave, new (and slightly scary) world…

Sophia, the ‘intelligent robot’ unveiled in Malta this week, offered many people their first glimpse of an exciting future ahead. Some may be wary of the implications; however, AI researcher ANGELO DALLI is enthusiastic that artificial intelligence will bring enormous benefits… but only if it is properly regulated

Machine Learning Expert Angelo Dalli

Recent advances in artificial intelligence (AI) and robotics seem to indicate that we may well be on the threshold of an exciting technological revolution. Is that perception correct? Will ‘intelligent robots’ really change the way we live our everyday lives… or is it just a passing fad?

As someone who has been involved in AI research for over 20 years… yes, I think we really are on the verge of something new that is here to stay. Artificial intelligence has been around for more than 40 years, in various guises: whether we know it or not, we use it every time we use a smartphone. What has changed recently, however, is the availability of large quantities of data, and also of cheap computing power. That is what has made the new AI revolution possible. I also think that it will, in fact, bring about an upheaval in society: much like the advent of electricity, or the invention of the steam engine… with all the associated benefits, but also all the changes those inventions brought about in Victorian society at the time. I think AI will likewise bring about a new type of society soon, possibly in our lifetimes.

To extend the steam engine analogy: that invention also put a great many people out of work. Already, some people are concerned that artificial intelligence will come at the cost of human jobs. But surely, the technology will also create new jobs and new opportunities. How do you see this balancing out? Will the advantages outweigh the disadvantages?

As with anything new, there will be sectors affected in positive, and less positive, ways. In fact, one of my visions for artificial intelligence is to create an AI that is ethical, and that impacts society positively: not aiming to replace people, but rather to assist them in achieving their full potential. It can also be used to help people achieve things that wouldn’t otherwise be possible. I do think that some people whose jobs can easily be automated may have to re-skill.

This is inevitable whenever you introduce new technology that enables you to do things more efficiently. But I also think the educational system in Malta is good enough, and that there are enough support structures already in place, to enable people to be flexible. And for children who are wondering what their careers will look like in the future, I think that building in flexibility will enable them to adapt to a fast-changing society. I don’t think that AI will lead to any ‘doom and gloom’ scenarios. I think it will allow us to have more free time, and get things done more productively… and why not? Why not have a little bit more leisure time, and still enjoy the same quality of life? I don’t think there’s anything wrong with that, after all…

And yet, that same vision also evokes a few classic ‘science fiction’ nightmare scenarios. If future ‘intelligent’ (possibly even sentient) robots exist only to work for us… they would effectively be ‘slaves’. In fact, there is already talk of granting robots ‘rights’, specifically to avoid ‘robo-exploitation’. Are we jumping the gun with proposals such as these? Do you envisage a time when robots really will need – maybe even demand – ‘rights’?

Actually, I think that the timing is quite right. Of course, there is a lot of hype surrounding the popular idea of ‘sentient robots’. In reality, the consensus of most people who research AI is that [that level of sentience] is not going to be possible before around 30 to 50 years’ time…

Still, ‘30 to 50 years’ is not very far into the future…

[Nodding] It may happen within our lifetime. We may actually see it ourselves, and our children will definitely see it happening. When? [shrugs] It’s a matter of time; no one really knows for sure. But with these issues, it is important to have some form of regulation even before we get to that level. I don’t think we’re going to have robots, any time soon, waking up and saying: ‘I think, therefore I am.’ But consider, for example, the case of a driverless car. If it hits someone, who is going to be liable? The owner? The manufacturer? So even though AI has not yet reached the point where we can talk about ‘self-aware’ robots, or start discussing whether robots are ‘slaves’ or not… we do need to discuss issues such as liability, and what incentives AI may have. Right now, these issues are in a legal grey area that I think needs to be clarified…

In fact, the European Commission has just come out with proposals for a legislative framework governing AI. Leaving aside future exigencies for now: in what ways do our laws need to change in order to accommodate the new reality as it stands today… and how urgently is this legislation needed?

I think there are two main dimensions. One is the framework that handles how AI is treated in terms of legal personae, in terms of the law, in terms of rights and responsibilities. At the moment, we’re still at a limited stage – what we call ‘narrow AI’: the sort we’re already used to. Then there is ‘general AI’ [AGI], which is closer to what people might imagine from science fiction. We’re still in the ‘narrow’ AI phase; but even with narrow AI systems… if you have a financial trading system, for instance, and all of a sudden it starts developing its own trading patterns, and stocks go up and down, and crash – this has already happened, by the way – how do you prevent it? How do you regulate it? Because that can lead to serious, real-life effects. Another area that I feel very strongly about is the ‘weaponisation’ of AI. I wouldn’t want to have a ‘Terminator’ appearing any time soon…

That is precisely what many people might be afraid of. Isn’t it a valid cause for concern?

Yes, but that is why we need a legislative framework: to prevent weaponisation from happening in the first place. There are also concerns about data privacy. I wouldn’t want face-profiling information to be sent to other governments which do not adhere to EU rules. The privacy of data needs to be properly policed and properly enforced.

The second area, which is not directly related to any legal framework, is to establish a test of intelligence that can measure the capabilities of AI systems. As time goes by, AI systems are going to become more and more capable; but with higher capabilities come increased rights, and also responsibilities. So what we need is a step-by-step, ‘milestone’ approach that determines whether an AI system can perform a certain kind of interaction, or can achieve a certain kind of solution to a problem… and if so, it would be considered, for example, a ‘level one’ intelligent system. If it is capable of higher achievements, it will be ‘level two’, and so on.

Maybe the ‘sentient AI’ will be ‘level 10’. But over time, we will see systems hitting those milestones bit by bit. The law can evolve as the systems hit those targets. We could cover, say, from ‘level one’ to ‘level five’ – these are just hypothetical numbers, for now – and when there is an AI system that reaches ‘level five’, the law would need to be revised to take the new reality into consideration…

Let’s talk about the level we’re at now. Sophia, for instance. We saw her interact with people, speak in Maltese, and so on. How much of that is pre-programming, and how much of it was actual ‘intelligence’? For example: can a robot like Sophia learn a language like Maltese, on her own, and use it to express her own ‘thoughts’?

Robots like Sophia tend to capture the popular imagination, because they’re anthropomorphised. There is a humanoid form in front of you; so when you see the AI in action, you imagine it to be actually ‘human’. But the level of AI involved is still very similar to existing systems. Unfortunately, we’re still at the lower end of the scale…

So that projection of artificial intelligence was, in part, an illusion?

There is an illusion at work in it, yes; but there are also plans to make that intelligence grow more and more. For instance, you asked about machines learning languages… one example of that is Google Translate. That is all powered by an AI system that has actually learnt the language from scratch. The results are not perfect, but they’re getting there.

And Google Translate has, in fact, learnt Maltese […] so yes, I think we will be seeing more and more robots that are ‘human’ enough to start integrating into society…

In a sense, this may already be happening. There are companies manufacturing ‘sex-bots’ that are programmed to develop both a physical and (apparently) ‘emotional’ bond with humans. There are even reports of people wanting to ‘marry’ a robot. Do you see us ever reaching a point where AI extends to human emotions such as empathy… or even love?

This is actually one of the things I also point out to fellow AI researchers: intelligence isn’t just the purely mathematical definition of the word. It’s not just about giving a robot an equation to solve. Of course, a robot is going to solve an equation, and much better than most of us – better than I would, that’s for sure. But there is also the unwritten knowledge of how to integrate within society. There is a context: the knowledge of what rain feels like when it falls on your skin; whether it’s cold or not; or even things like interpreting the law. There is the actual written text of the law; but there are also society’s customs and norms. Understanding all that, I would say, also falls within the broader definition of ‘intelligence’.

Meanwhile, part of the discussion has also been about Malta’s possible role in this AI revolution. There is a perception out there that Malta is well-positioned to take full advantage of the new technology… but why, exactly? What (if anything) gives this country an advantage over others, when it comes to making the most of this new industry?  

I would say Malta is in a good position: first of all, there is a will to embrace new realities, in a way that other governments may be hesitant to do, or may simply be too big to do quickly. So I think we can take advantage of being small and ‘nimble’, so to speak, in order to legislate faster, and actually innovate. In itself, the push for innovation is a good thing for the country: because let’s face it, we don’t have many natural resources like huge oil reserves, or mineral deposits.

So we have to make the best use of what we have, and also of the infrastructure that has been built up over the years. I think the legal framework we already have is very good, and it will lead to incentives for more blockchain companies, AI companies, and so on. I think this is the right direction. We also happen to be blessed with good weather, so people actually do want to come and live here. It may seem trivial, but it’s another advantage we have. Everything helps…

At the same time, there is also the perception that AI is the next big ‘cash-cow’ that will translate directly into huge economic and quality-of-life improvements. Is this perception justified? Viewed strictly as an industry: does AI/robotics really have the potential to bring about the economic revolution it seems to promise?

If you take Malta as a case study, there have already been industries like financial services, iGaming and so on, which started from nothing and, in the space of little more than a decade, have grown to represent around 12% of GDP. I do believe that AI has the same potential to expand Malta’s economy. And because it is applicable to so many industries, it will probably have an even greater effect than almost anything else…

Could you give a few examples of such industries? Where – apart from the online search engines and smartphones that we’ve already mentioned – is robotics already being applied?

Medicine is one example. When it comes to X-rays or MRIs, there have been tests showing that AI systems can already match the performance of a radiologist, or other human medical practitioner. In some cases, they may even exceed human performance, because they never tire. They never need to take a break. AI systems can also provide a consistent baseline, possibly assisting a radiologist or doctor in detecting something that was overlooked, or providing a second opinion; because again, I don’t believe in robots ‘replacing’ people, but assisting them. Surgery is another example: you could get better precision, because a robot’s hands don’t shake. If it’s a very long operation, the results might be better.

Obviously, however, it’s not all champagne and roses. You still need trained people to oversee all this. But I think AI will certainly bring about improvements in medical science, and other areas. Another example is personal finance. You could have a robot advisor, built into your smartphone, advising you how to spend your money in a better way. It might observe certain patterns, and maybe advise against taking out this or that subscription, and suggest another one instead…

At the risk of asking the usual hackneyed, sci-fi inspired question: isn’t there also the risk of robots becoming ‘too intelligent’? ‘Intelligent’ enough to learn the science of their own creation, so that – for example – robots start creating other robots in future?

It’s a valid scenario; I do get asked that a lot. But this is also why I think we need to have an ethical framework starting now: AI has to be regulated with ethics in mind from day one. Because we need to do it right. The possibility of robots creating other robots, and improving themselves, is not – in itself – ‘wrong’. But if there are weapons involved in the manufacture of those robots… then yes, things can go wrong. There is a line not to be crossed… but we need to define that line, and ultimately make sure that no one does anything stupid, that we might all regret in the future.