16 Nov 2018

Digital in Practice Programme: workshop on ‘Artificial Intelligence without frontiers: the EU gameplay’

The world’s best and brightest brains on AI flocked to town in the week of 5 November to attend the AI4People Summit and a meeting of the European Commission’s High-Level Expert Group (HLEG) on AI. DIGITALEUROPE would not pass up a unique opportunity to discuss this red-hot topic in the engaging, informal context of a DiPP workshop.

On 7 November, the following experts agreed to share their views on ‘AI without frontiers: the EU gameplay’:

  • Lucilla Sioli, Director, AI and Digital Industry, DG CNECT, European Commission
  • Raja Chatila, Director, Institute of Intelligent Systems and Robotics, Sorbonne University
  • Ursula Pachl, Deputy Director General, BEUC, The European Consumer Organisation
  • Leo Karkkainen, co-Chief AI Officer and Leader, Deep Learning Research Group, Nokia Bell Labs
  • Saskia Steinacker, Global Head, Digital Transformation, Bayer Business Service GmbH
  • Marc-Etienne Ouimette, Director, Public Policy and Government Relations, ElementAI

Cecilia Bonefeld-Dahl, Director General of DIGITALEUROPE, moderated this informal conversation.

Making the most of a game-changer, Europe’s way

AI can help the EU stay together. It may even provide the best vindication of the third way Europe is trying to chart between the Chinese and US models.

To put the EU’s human-centric approach on AI’s world map and keep claiming global leadership in digital technology, the Commission is determined to design and implement a strand of AI that respects and promotes EU values, AI for Good if you will. For this purpose, the dedicated HLEG – whose 2-year mandate makes it a body meant to span two Commissions – has been tasked with producing draft guidelines by year-end. Those will then be tested with Member States in order to discourage too wide a variety of national approaches. Success hinges on the group’s ability to identify mechanisms likely to help technology deliver on core European values. In short, the Commission wants to foster investment in ethical-by-design AI, particularly applications aimed at meeting societal needs.


The road ahead

This is a challenging path to tread, since the drivers of this approach may prove contradictory. Luckily, we are not starting from scratch; a number of proven tools are already at hand:

a) standards are trust-builders of choice, particularly good at providing industry with pointers that are both easy to operate and effective at securing a modicum of harmonization;

b) sandbox experiments act as catalysts: adjusting local regulatory environments to allow for testing innovative products and services has a proven track record of informing their successful scaling up to national or regional level.


Digital technology has taken research to new heights, driven costs down drastically, and given access to predictive healthcare and all sorts of quality interactive entertainment. AI will only boost the prowess of ICT as a key enabler of progress. Consumers rightly take pride in these achievements, but they also fret about the risk that this bounty might turn into disaster. They are concerned about the overall need for an appropriate, intelligible regulatory framework, both at a general level, to secure legitimacy at all times, and in specific areas such as privacy, competition, safety, liability, (cyber)security, etc.

In this respect, a key difference seems worth bearing in mind: whereas fundamental rights (e.g. privacy or transparency) are to be addressed via the legal and regulatory process, ethics lies on the voluntary side of the equation, like an invitation to go beyond mere legal compliance. Some deem it critical to keep this difference in perspective before bad faith or a lack of basic caution further undermines trust in whatever is ICT-enabled. Others think it would not necessarily be wrong if continued carelessness shifted the debate from ethics to human rights.

Industry ‘verticals’ provide a host of pointers on the regulatory front: aerospace, healthcare and banking show that strict regulation can secure safety, privacy and security. Their longstanding experience will spare governments the trouble of starting from scratch or reinventing the wheel. Legal compliance by design was floated for consideration.


What we need and how to get there

Harmonized rules intelligible to citizens will go a long way towards making everybody feel they operate in a safe, trusted environment without borders. Europe should also consider switching gear and gathering speed if it is to stay in step with the rest of the world.

This holds particularly true in healthcare, where outdated regulation keeps patients away from state-of-the-art products and services, or in self-driving vehicles, where the lack of proper regulation might spell disaster. To move forward, the associated risks must be assessed in a forward-looking fashion. Accordingly, the Commission should work harder at fixing inconsistencies between new and existing law, or between pieces of new legislation.

Talent is a critical ingredient, as some studies forecast 2.6 new jobs for every job lost. Europe has been working hard to roll back an eSkills deficit once estimated to reach 1 million by 2020 and now cut by half.

Investment is another key driver. The Commission has taken advantage of the next Multiannual Financial Framework (MFF) to intensify dialogue between stakeholders.

While not a panacea, transparency is all too often seen as a threat to IP rights. Although conflict may arise occasionally, transparency comes in several shades, depending on timing, the target constituency for disclosure and other considerations.

Several assets make the EU particularly strong: a healthy industry across the board (particularly in B2B software), prime education and research, a huge and reasonably unified market, mission-ready immigration schemes, a clean democratic record, etc. AI should be seen as an enabler of all these strengths, just as EU leadership has proved contagious on climate change, the GDPR, etc. Intriguingly, the EU’s legendary openness cuts both ways: it makes Europe a magnet for talent and investment, but also a rich stream prone to exploitation by abusive miners.

To facilitate this virtuous ‘contagion’, Canada has been using its current G7 presidency to punch above its weight: following a bilateral agreement with France signed in Montreal last June, ElementAI will host the next big G7 AI-dedicated event at its headquarters on 6 December.


Note that DIGITALEUROPE released its recommendations on Europe’s AI policy on 7 November 2018.
