Like monks discussing the printing press… the future is now | Alexiei Dingli

‘Artificial Intelligence’ may be a frightening prospect to people brought up on dystopian sci-fi novels. But for Prof. ALEXIEI DINGLI, who lectures in AI at UOM’s ICT department, it represents an exciting world of possibilities that could – if judiciously used – radically improve our quality of life

The term ‘Artificial Intelligence’ has its roots in science fiction: where it is usually portrayed as a dystopian ‘threat’. (In ‘The Matrix’, for instance, machines enslave humanity, after the latter endows technology with ‘intelligence’.) We seem to be living at a time when those futuristic predictions are actually coming true. AI is already impacting our daily lives, in ways we probably don’t even know about. So, shouldn’t we be worried, at this stage? After all, those sci-fi authors were actually trying to warn us about something, weren’t they?

[Laughing] What can I say? ‘The Future is Now!’ To be honest, however, one of the things that attracted me to AI in the first place was precisely science fiction. I used to watch programmes like ‘Star Trek’, for instance: and I would say to myself, ‘Wow, what a cool technology! Can I do something like that, myself?’

The answer, then as now, is: ‘Probably not’. But at least, those programmes gave us dreams, and ideas; and to this day, ‘science fiction’ still forms the basis of what I do. That is to say: using technology, to enact those ideas, and achieve those dreams...

On top of that, I am also a very curious person, by nature. Not just about technology; but about anything, really. And AI is one of those fields where you can actually ‘do something’ about the things you are interested in. If there is a particular area you are curious about... probably, you can use technology to come up with some kind of practical application, in that area.

For example: one of the projects I am currently working on with my students is about using AI to ‘read the thoughts of people’...

But that’s precisely what I meant.  ‘Reading the thoughts of people’. How scary is that? It sounds like something straight out of George Orwell’s ‘Nineteen Eighty-Four’...

Well, it all depends on why you're doing it, at the end of the day. In my case, I can tell you the reason. A few months ago, I had the privilege of meeting somebody who is paralyzed and unable to move, or communicate in any way. And I would really like to do something to help people in that condition. So, from that end, at least, I think AI has massive potential.

When you imagine that you're giving these people back the ability to communicate, via a new technology... that’s the sort of thing that I find exciting; and not ‘scary’ at all.

Hold on: you mean you’re working on a new technology that would enable a paralysed person to physically communicate?

Well... not exactly. Or at least, ‘not yet’. First of all, the technology I’m talking about is not something we are developing ourselves. It already exists – even if it came out only very recently – and there have already been attempts to apply it, to this particular area.

Secondly, it is still quite far away from physically enabling people in that condition to actually communicate. At this point in time, it is more a question of trying to ‘read their thoughts’. To give you a rough idea of how it works, so far: an algorithm is trained to interpret the results of fMRI (functional magnetic resonance imaging) brain-scans, in order to associate the person’s responses to certain stimuli with activity in certain parts of the brain.

Hypothetically speaking: if you show the person a picture of, say, a ‘fish’... the fMRI scan will register a corresponding pattern of brain activity, associated with that image. The algorithm will recognise this, and – in this particular example – will conclude that ‘thoughts of fish’ trigger a particular response, in a particular region of the brain.
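To make that concrete, here is a minimal sketch of the training step – using synthetic voxel data and invented stimulus labels, purely for illustration, not anything from the actual research – showing how a decoder might learn to map brain-activity patterns back to the stimulus that produced them:

```python
# Minimal, illustrative sketch of "thought decoding" from brain scans.
# NOTE: the data here is synthetic; a real system would use preprocessed
# fMRI voxel activations recorded while a person views known stimuli.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical dataset: 200 scans, each reduced to 500 voxel values,
# labelled with the stimulus shown (0 = "fish", 1 = "house", 2 = "face").
X = rng.normal(size=(200, 500))
y = rng.integers(0, 3, size=200)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Train a simple classifier to associate activity patterns with stimuli.
decoder = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# "Reading a thought" then amounts to predicting the stimulus category
# from a new, unseen activity pattern.
labels = {0: "fish", 1: "house", 2: "face"}
prediction = decoder.predict(X_test[:1])[0]
print("Decoded stimulus:", labels[prediction])
```

With random data the decoder obviously learns nothing meaningful; the point is simply the shape of the pipeline – labelled scans in, a predicted stimulus out.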

Obviously, that’s just a random example. But even at this early stage, the results have been very promising. Being able to read more complex thoughts, will naturally bring us closer to what we’re actually aiming for...

... which, I imagine, would be some kind of device that – coupled with voice-simulation technology, etc. – would enable that person to physically speak. Right?

That’s the general idea, yes.

OK, I’ll admit that sounds a lot more ‘reassuring’. At the same time, however: part of what makes AI so scary is precisely the fact that it involves ‘intelligence’. Apps like ChatGPT, for instance, are already writing things of their own; producing ‘artwork’ of their own. Surely, it’s only a matter of time before they start to actually ‘think’ for themselves, too. So... how intelligent is AI, anyway?

ChatGPT is a good example, so let’s start with that. Personally, I think that it’s an amazing tool, at the end of the day. It has managed to achieve a lot that had eluded us for the past 20-30 years: keeping in mind that it comes from a field called ‘natural language processing’ (NLP), and many of its functions are subfields of NLP that have been studied individually in the past.

‘Machine translation’, in itself, has been studied for decades: since at least the Cold War.

Even longer, I would say. The machines Alan Turing helped devise to crack the Enigma codes in WW2 are today recognised as forerunners of the modern computer...

Exactly. In fact, I'm glad you mentioned Enigma, because... what was Enigma, anyway? Basically, it was a translation machine: translating from one encoding to another. And ChatGPT is exactly the same. It's just a translation machine. There is no, let's say, ‘intelligence’ within it. The application has no ‘internal thought processes’ of its own. It's just a translation between one language and another. You give it a prompt, and it gives you back a response.
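As a rough illustration of that ‘prompt in, text out’ behaviour – a sketch only, using the small open-source GPT-2 model as a stand-in, since ChatGPT itself is a closed service – the whole interaction reduces to a single function call, with nothing carried over between calls:

```python
# Sketch: a language model as a pure text-in, text-out mapping.
# GPT-2 is used here as a small, open stand-in for ChatGPT.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "Explain the Enigma machine in one sentence:"
result = generator(prompt, max_new_tokens=40, num_return_sequences=1)

# The model simply continues the prompt; no state or 'thought' persists
# between one call and the next.
print(result[0]["generated_text"])
```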

So, to go back to your earlier question, how ‘intelligent’ is it? My answer is that ChatGPT is a very ‘smart’ thing. But I wouldn't describe it as being anywhere near the equivalent of human intelligence... yet. We are still very, very far from that, as things stand today.

That ‘yet’ sounded a little ominous, though. These technologies do, after all, tend to advance rather rapidly, over time. And if today’s ChatGPT can already threaten jobs in the creative industries... what will tomorrow’s ‘smart technology’ be capable of doing?

Well, some people have been raising the question, recently, that – with newer versions of ChatGPT (especially ChatGPT-4) – we may actually be witnessing the ‘emergence of intelligence’, right now. Because that's what we're after, at the end of the day. That, given this body of accumulated knowledge... ‘new knowledge’ emerges from it: which is how our own brain works, after all.

But recent studies by Stanford University have concluded that there is no such ‘emergence’ really happening. In other words: there aren't any processes or mechanisms going on that we do not understand. So that's why I tend to dismiss people who say: ‘We're moving towards sentience’, and stuff like that. We're not, at the moment.

Now: I'm not saying we might not get there, in the end. But with our current technology, I think we are still very far away from achieving that level of actual intelligence.

Nonetheless: millions of people around the world today are genuinely concerned that AI might sooner or later ‘take their jobs’. And even with its current levels of intelligence: ChatGPT is already replacing human employees in certain industries (such as the media; and especially copywriting, translation, etc)...

Well, those people SHOULD be worried, quite frankly. But it still doesn’t mean that there is any ‘intelligence’, within the AI itself...

Ouch! OK, point taken; but... isn’t there also a human dimension, in all this?  If those people are right to be concerned: doesn’t that also mean that AI – even with its current limitations – is indeed a ‘threat’ (if only at this level)?

Let me put it this way. At the moment, I'm doing a study about the future of education and work. A few months ago, I interviewed an investment banker in Toronto: who told me something which set me thinking, at the time.

He said: ‘Listen, we do not employ juniors anymore. Because the same stuff can now all be done by AI: maybe better, and almost certainly cheaper...’

And we're seeing this everywhere, in all sectors. Some companies are even reporting up to 40% performance improvement, because their staff are using ChatGPT. So, I think it's a transformative technology; which will radically change the way in which we operate; and how the world works. Basically, we’re at that moment in time when change is imminent; and I'm sure there were plenty of other similar moments in history, too.

Think, for instance, about the invention of the printing press. That was another ‘transformative technology’, which democratized reading, and learning, and so on...

Yes, and it also put a lot of mediaeval monks and scribes out of work....

[Laughing] That, too! There were probably monks having the same conversation we are having today, back in the Middle Ages!

Joking apart, however: while I understand that technological advancement is inevitable – probably, even ‘unstoppable’ – aren’t we also rushing into it a little blindly? In ‘The Time Machine’, for instance, H.G. Wells envisaged a future where the human race splits into two – one of which subspecies (the Eloi) being basically ‘useless’: their over-reliance on technology having rendered them incapable of actually ‘doing anything’, for themselves. Is something similar already happening, today? Aren’t we becoming too ‘over-reliant’ on technology, for our own good?

I know where you're coming from and I appreciate it. But I think that history shows us otherwise. The most classic example, perhaps, is the ‘automated teller machine’ (ATM). When they were introduced, back in the 60s or 70s, people thought that it was the end of the banking system, as we knew it. No more ‘bank clerks’: you would end up just speaking to machines...

Well: that prediction wasn’t all that far off, was it? In Malta, banks like HSBC are now closing down their local branches...

OK, but that’s a policy of that particular bank; and I can’t really comment about it. Statistically, however, it remains a fact that more people were employed by banks AFTER the introduction of ATMs than before. Even today, 40 years later: the human element still remains an important component in the banking sector.

Having said that: if you look at things like ‘Revolut’, for example. Now, I think that IS creating a new kind of banking. In fact, I can tell you about a personal experience of mine, if you don’t mind...

Sure, go ahead.

Now: I am a Revolut user, myself... but I only ever really use it when I’m abroad (on the basis that, if something were to happen, I’d much rather lose my Revolut card, than my credit card.)

In any case: last October, I was in Portugal for a conference... but when I tried to use Revolut to pay a restaurant bill: my card was declined. I tried to contact Revolut’s customer care department; but no answer whatsoever. Luckily, I had other cards on me, so paying the bill wasn’t the problem. The problem was that later on I received a message from Revolut telling me that my card was being ‘reviewed’; and that I myself was being ‘assessed’.

And then, a few days later, I received another message telling me: ‘Listen, we can't provide you the service any longer’. Basically, they had decided to terminate my account. And to this day, I still have no idea why...

What do you think happened, though?

The only thing I can imagine is that, most likely, an algorithm flagged me, and said: “Maybe he's not spending enough; or he’s only using it abroad, in ‘high risk situations’.” To be honest, however, I just don't know.

And this is the part that actually worries me, in all this. It’s not whether we’re becoming too reliant on technology... it’s whether we are trusting AI systems too blindly, when it comes to taking decisions that are going to affect people’s lives.

What worries me is – and this has been discussed a lot, in recent years – the lack of what we call ‘Explainable AI’: whereby the AI not only takes a decision; but also gives a reason WHY it took that particular decision. So that then, of course, humans can check on it afterwards...
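To give a sense of what ‘explainable’ means in practice – a toy sketch with invented account features and a deliberately simple linear model, not how Revolut or any real bank actually works – the idea is that the system reports, alongside the decision itself, which factors pushed it that way:

```python
# Toy sketch of an "explainable" decision: flag an account AND say why.
# Features and weights are invented for illustration only.
import numpy as np

FEATURES = ["monthly_spend_eur", "share_of_foreign_use", "failed_payments"]

# A deliberately transparent linear model: the weights are human-readable.
weights = np.array([-0.002, 3.0, 0.8])
bias = -1.5

def decide(account: np.ndarray) -> None:
    contributions = weights * account
    score = contributions.sum() + bias
    decision = "flag for review" if score > 0 else "no action"
    print(f"Decision: {decision} (score={score:.2f})")
    # The 'explanation': how much each factor pushed the score up or down.
    for name, value, contrib in zip(FEATURES, account, contributions):
        print(f"  {name} = {value:>8.2f}  ->  {contrib:+.2f}")

# Hypothetical account: low spend, used almost only abroad, no failed payments.
decide(np.array([120.0, 0.95, 0.0]))
```

A human reviewing that output can see at a glance that the ‘used almost only abroad’ factor is what triggered the flag – exactly the kind of disclosure that was missing in the Revolut episode described above.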

Ultimately, you always have to keep humans in the loop; because these are things which will affect people, in the end… But that's only one part of my concern. The second part is that I think we are still lacking the sort of basic rights that should accompany AI: such as – in my own case – the right to full disclosure, on how a decision about my future was actually taken.

Just a second ago, you said that ‘humans have to be kept in the loop’. Shouldn’t that also apply to all the jobs that will no doubt be ‘replaced’ by AI, in the near future? What about the human element in other sectors, apart from banking?

Well: that concern has always existed, to be fair. New technologies have, after all, displaced humans from the workplace before.

But let's not forget that new technologies also create new jobs. In fact, if you look at the World Economic Forum’s 2020 report on the future of jobs: while it estimates that AI will displace around 85 million jobs globally, it also predicts that AI will create around 97 million new jobs, over the same time period.

Having said that, though: the problem which we will face, as a society, is that you can't just take those 85 million redundant workers, and simply re-employ them somewhere else. They will need ‘upskilling’, ‘re-training’, and so on... and we also know – because it’s already happening today – that some of those people will simply not make the transition, whatever you do.

And there's a very good local example of this, which I only got to know about recently. Apparently, there's a new factory that’s just opened in Malta, which conforms to a trend that we've been seeing for a while now, in other parts of the world.

In the Far East, it’s referred to as ‘lights-out manufacturing’: you don't need any lights, as the factory is fully automated. All the work is done by robots. And this new local factory is one of the first – if not the first – of this kind in Malta. It doesn't need any lights; nor any manual labourers, either.

But then, they needed to employ 120 engineers: because, you know, somebody had to take care of all those robots. So, as you can see: what's really changing is not so much the ‘amount’ of jobs that are being created, or lost. It’s the kind of work that is needed: which is more specialized... and therefore, to a certain extent, more problematic.

Because let’s face it: to create a factory worker, you need... I don't know, maybe a couple of weeks or months of training. To create an engineer, on the other hand, you need three or four years of university. So that is the big challenge, which I believe we'll be facing in the coming years.