Navigating Artificial Intelligence: Expert Discussion on Innovation, Challenges, and Europe’s Regulatory Landscape


The emergence of Artificial Intelligence has touched many aspects of our daily lives. As the technology continues to evolve, our interactions with AI will move more and more to the forefront of public discussion. We talked to two experts, Dr. Patrick Glauner, Professor of Artificial Intelligence at the Deggendorf Institute of Technology, and FNR PEARL Chair Dr. Jordi Cabot, Head of the Software Engineering Unit at LIST, about current developments in AI. Among other things, we discussed the role of innovation, the current state of AI adoption, bias in AI, and the EU AI Act.

Future forward: Luxembourg’s leap into Artificial Intelligence (AI) is an FNR feature series highlighting Luxembourg’s top AI researchers, showcasing findings and results of AI research, and demonstrating practical applications of AI research and its impact on society.

Hello Dr. Glauner and Dr. Cabot. Thank you for taking the time to participate in this interview.

Where are we right now in terms of adopting Artificial Intelligence?

Patrick Glauner: “I think we have made a lot of progress in deploying and integrating AI applications into our daily lives. One great example is smartphones. AI-powered features such as voice or face recognition are used frequently and exemplify the progress made in AI research. AI is also rapidly advancing in mechanical engineering, healthcare, and the automotive industry. I am very excited to see how AI will progress in the coming years and what new real-world applications will emerge.”

Jordi Cabot: “We are still in an exploratory phase. We have built great models that are very powerful, but we are still learning how to use them most effectively. The average citizen is familiar with generative AI, such as ChatGPT, and knows how to use it at a basic level. Beyond that, there are still many unknowns for the general population. The role of scientists and AI thought leaders is to map out a clear path that allows us to maximize the potential of AI applications and clearly delineate their limitations.”

There are concerns around AI, such as job displacement and data/privacy concerns. What are your thoughts on this?

Patrick Glauner: “I don’t think AI will cause mass unemployment. Yes, some jobs are changing; others will disappear. But this has been happening ever since the beginning of the Industrial Revolution. The best approach for us is to adapt and contribute to AI’s transformation. In Central Europe, we face a massive lack of talent and an aging society. We need more automation and scalability to solve this problem and maintain our prosperity; AI will be critical in this development.”

“Privacy is a more prominent concern. You should avoid putting sensitive or corporate data into ChatGPT, as this could leave you vulnerable to exploitation by malicious actors. In Europe, we have very strict privacy laws, such as the GDPR, which reduce the likelihood of personal data being exposed or taken advantage of.”

Jordi Cabot: “I agree that fear of job displacement should be at the bottom of our concerns list. Throughout history, new technologies have displaced jobs, and we have always been able to adapt accordingly. AI allows us to get rid of tedious and repetitive tasks, which will free up time for more meaningful work.”

“There are risks related to the adoption of AI. First, generative AI, such as ChatGPT, does not reflect upon or critically evaluate the text it produces. LLMs simply produce the statistically most likely continuation based on patterns learned from their training data, which can sometimes lead to hallucinations (i.e., statements that are factually incorrect or nonsensical). Secondly, LLMs are a reflection, a mirror, of our society.”

Are you talking about bias…?

Jordi Cabot: “Exactly. We can’t expect AI models trained on biased data to act differently from, or morally superior to, their human counterparts. If, as a society, we hold stereotypical or racist views, those will also be reflected in our AI applications. It is important to note that closed-source AI applications are generally less biased than open-source AI applications. However, open-source AI models and applications are easier to scrutinize, test, and improve upon, which could be helpful in decreasing bias in the future.”

Dr. Glauner, what are your views on bias in AI applications, and how can we mitigate it?

Patrick Glauner: “I agree that AI models are biased because the underlying training data is biased. Humans are the originators of the data that AI models are trained upon, and as such, they are also responsible for any bias that may occur. There are ways to reduce or mitigate this bias. For example, you can look at datasets and identify which parts most represent the ground truth. You can also look at underrepresented viewpoints in datasets and give them more weight in the training of AI models so that they are more accurately reflected. Obviously, this will not completely eliminate bias from AI models, but as research keeps evolving, better options will become available to address these concerns.”

The EU AI Act is a hot topic at the moment. What is your position on the current developments around this legislation?

Patrick Glauner: “The first version of the EU AI Act ran to around 120 pages, and after three years of discussion, it has now grown to over 400 pages. This is problematic on many levels. The EU AI Act needs to be narrower in scope and less stringent in its attempt to regulate AI innovation. Even the premise on which the EU Commission launched the regulation is faulty at best. The Commission initially claimed AI was completely unregulated. This is not true. Many sophisticated systems, such as airplanes or cars, already use AI and are subject to strict requirements. The EU AI Act boils down to 400 pages of constraints and not a single page about innovation. This could jeopardize Europe’s competitive position in the global market.”

Jordi Cabot: “I understand the need for some regulation. However, the implementation of the EU AI Act needs to be clarified in many instances. How will we put these regulations into place, and what agency will be responsible for enforcing them? The EU Commission has also talked a lot about sandboxes, experimental environments for companies to test AI products before launching them. But again, it is still very early, and we cannot yet know how this will play out in practice.”

As you pointed out, regulations from the EU AI Act could potentially damage Europe’s competitiveness compared to other global players such as the US and China. Could you elaborate on your position on this point?

Patrick Glauner: “This is absolutely true. The EU AI Act will inevitably make innovation more expensive. This could create a ripple effect, with companies looking to move to other, less restrictive markets. The legal aspect will play a significant role moving forward. If companies plan to introduce Artificial Intelligence into their product lines and daily operations, they must bring in lawyers to ensure they comply with the new regulations. Generally speaking, the EU AI Act is driven by fear and a lack of understanding of what AI actually is. Its restrictive nature will have harmful effects on innovation and damage Europe’s chances of remaining competitive with other big players such as China and the US.”

Jordi Cabot: “I would go back to my initial point. I think it’s too early to predict how the AI Act will impact Europe’s global competitiveness. It is possible that AI-focused startups might decide to leave Europe for the United States or China just to avoid strong regulations. However, I would need to know more about how the EU AI Act is implemented, and who will be in charge of enacting the rules, to make an accurate prediction.”

Some people are expressing concerns about the future of AI, suggesting that humans may eventually be removed from decision-making processes and that AI will take over. How realistic is this scenario?

Patrick Glauner: “There are fears about AI getting out of control. Especially with the anticipation of Artificial General Intelligence (i.e., an advanced form of artificial intelligence that can understand, learn, and apply knowledge across a wide range of tasks at a superhuman level), some people might worry that they will be kept out of the loop. But those scenarios are science fiction and very unlikely to happen in the foreseeable future. We can’t prevent the advancement and integration of AI. So rather than focusing on these hypothetical scenarios, we should further invest in research and development to ensure we have the best version of AI possible.”

Jordi Cabot: “I agree. This is all very hypothetical at this point in time. Many AI experts concur that the current architecture and infrastructure of LLMs are insufficient to allow for the emergence of Artificial General Intelligence. These fears are somewhat futuristic, even apocalyptic. I also think that prominent figures such as Elon Musk are feeding into this dystopian narrative, causing some people to think about AI only in negative terms.”

Lastly, could you describe in three words what excites you most about the future of AI?

Patrick Glauner: “Innovation, transformation, efficiency.”

Jordi Cabot: “Exciting, limitless, transformative.”

Thank you very much for your time.


Interview by John Petit

John Petit is a communication consultant, holding a PhD in the field. His expertise lies in exploring the intersection of technology and society, with a particular focus on Artificial Intelligence (AI) and its impact on our daily lives and broader societal norms. John combines his academic knowledge with practical experience to engage in and facilitate meaningful discussions about the role AI will play in shaping our future.