Francesco, in your opinion, what has been the impact of generative AI since the arrival of ChatGPT?
What has really changed is the democratisation of artificial intelligence. It used to be a tool reserved for technicians to make predictions or conduct analyses. ChatGPT has introduced a simple interface that is accessible to everyone. But this simplicity masks immense technological complexity. This is a non-deterministic technology: asking the same question twice can give two different answers. This introduces creativity, but also risks, such as hallucinations or bias.
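The non-determinism Francesco describes comes from how language models pick their next word: they sample from a probability distribution rather than always taking the single most likely option. A minimal sketch of temperature sampling, with a toy hypothetical distribution standing in for a real model:

```python
import math
import random

def sample_with_temperature(tokens, logits, temperature, rng):
    """Sample one token; higher temperature flattens the distribution,
    making less likely answers more probable."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    r, cum = rng.random(), 0.0
    for token, e in zip(tokens, exps):
        cum += e / total
        if r < cum:
            return token
    return tokens[-1]

# Hypothetical candidate answers to one prompt, for illustration only.
tokens = ["answer A", "answer B", "answer C"]
logits = [2.0, 1.5, 0.5]
rng = random.Random()

# The "same question" asked twice can come back with different answers:
print(sample_with_temperature(tokens, logits, temperature=1.0, rng=rng))
print(sample_with_temperature(tokens, logits, temperature=1.0, rng=rng))
```

At temperature close to zero the model becomes nearly deterministic; raising it is what introduces the creativity, and the risk, mentioned above.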
Accordingly, what are the major risks that you have identified?
Bias is a central issue. At LIST we have developed an open-source tool, the “LIST AI Sandbox”, for testing model biases in all languages. A racist or sexist model, for example, can have serious consequences, especially in a professional context. There is also the cybersecurity risk: “vibe coding”, that is, developing code with AI assistance, can generate fragile code that is vulnerable to attack. So, we need more robust, better-trained models.
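A classic instance of the fragile code mentioned here is a query built by pasting user input into a string, a pattern AI assistants frequently produce, which opens the door to SQL injection. A sketch using Python's standard sqlite3 module (table and data are illustrative, not from any real project):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

def find_user_fragile(name):
    # Pattern often seen in AI-generated code: the input is concatenated
    # straight into the query, so name = "' OR '1'='1" dumps every row.
    return conn.execute(
        "SELECT * FROM users WHERE name = '" + name + "'").fetchall()

def find_user_robust(name):
    # Parameterised query: the driver escapes the value, the attack fails.
    return conn.execute(
        "SELECT * FROM users WHERE name = ?", (name,)).fetchall()

attack = "' OR '1'='1"
print(find_user_fragile(attack))  # leaks the whole table
print(find_user_robust(attack))   # returns no rows
```

Both versions "work" on normal input, which is exactly why such flaws slip through when no expert reviews the generated code.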
Europe seems to be lagging behind the United States and China. What is your view on this?
It's true that the Americans are leading the race, closely followed by the Chinese. Europe has not yet demonstrated equivalent technical capabilities. But it is reacting. The “AI Factories” project, including one in Luxembourg, is a good example. With “MeluXina AI”, a machine packed with more than 2,000 GPUs (graphics processing units), we will be able to train complex models and support SMEs and start-ups. LIST is playing a key role in this by contributing its scientific expertise.
And what about technological sovereignty?
I prefer to talk about strategic independence. Today, even European data centres use American chips. And Luxembourg is already affected: it is on a list restricting access to the latest-generation GPUs. Developing European alternatives, such as RISC-V chips (an open-source processor architecture), is therefore crucial. This will take time and investment, but it is essential.
The issue of data is also central. Is GDPR a hindrance?
Not necessarily. The real problem is that large models are trained using gigantic volumes of data, often without any oversight. This raises ethical and legal issues. But above all, this paradigm of “bigger and bigger” is not sustainable. We need to move towards frugal AI, with smaller, specialised, combinable models that consume less energy and resources. This is what we are developing at LIST.
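The “smaller, specialised, combinable” idea can be pictured as a cheap router that dispatches each request to a narrow model instead of one giant generalist. A sketch in which every model name and keyword is a hypothetical placeholder, not LIST's actual architecture:

```python
# Frugal-AI sketch: small specialised "models" (stand-in functions here)
# combined behind a simple keyword router. All names are hypothetical.

SPECIALISTS = {
    "legal": lambda text: f"[legal-model] analysing: {text}",
    "code": lambda text: f"[code-model] reviewing: {text}",
    "general": lambda text: f"[small-general-model] answering: {text}",
}

KEYWORDS = {
    "legal": ("contract", "gdpr", "liability"),
    "code": ("python", "bug", "function"),
}

def route(text):
    """Pick the narrowest specialist whose keywords match, else fall back."""
    lowered = text.lower()
    for domain, words in KEYWORDS.items():
        if any(w in lowered for w in words):
            return domain
    return "general"

def answer(text):
    return SPECIALISTS[route(text)](text)

print(answer("Is this contract GDPR-compliant?"))
print(answer("Why does this Python function crash?"))
```

The design point is that only the small routed model runs per request, which is where the energy and resource savings over a single very large model come from.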
And how do we deal with the shortage of AI skills?
There will never be enough experts to keep pace. That's why we're developing “BESSER”, an open-source platform that allows you to create software incorporating AI without having to be an expert. This involves talking to a chatbot to generate applications automatically and securely. It's a way of democratising access to AI while maintaining a high level of quality.
Finally, what are the major trends for the future?
Agentic AI is already emerging: agents capable of performing complex tasks autonomously. But this amplifies the risks mentioned: bias, errors, wrong decisions. We therefore need to bolster testing, validation and, above all, governance mechanisms. Luxembourg, with its tradition of regulatory innovation, has a role to play in becoming a leading centre for the ethical and legal certification of AI systems.