DiPP - Artificial Intelligence: a winning mix, not a zero-sum game

09 Feb 2017
8:30 AM – 10:00 AM CET
DIGITALEUROPE OFFICES
14 Rue de la Science (7th floor), 1040 Brussels

You must have been traveling on the outer reaches of the solar system lately if you have not heard about Artificial Intelligence, or AI. As would befit a brand new buzzword, AI lacks a clear definition.
More importantly, there are as many excited supporters of a bright AI-powered future as there are doomsayers who predict that AI-enabled robots are bound to put the last nail in mankind’s coffin.

The magnitude of the IoT phenomenon

McKinsey estimates that the total IoT market was worth up to $900m in 2015 and is set to grow to $3.7b by 2020, a stunning 32.4% CAGR. Within the EU, the number of IoT connections is estimated to increase from approximately 1.8 billion in 2013 to almost 6 billion in 2020, taking the value of the EU IoT market to more than one trillion euros by 2020.
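
As a quick sanity check of the quoted growth rate, here is a minimal back-of-the-envelope sketch (the figures and units are simply those quoted above):

```python
# Back-of-the-envelope check of the quoted figures: $0.9b in 2015
# compounding at 32.4% a year for five years should land near $3.7b.
start_2015 = 0.9   # market size in $ billions, as quoted above
cagr = 0.324       # compound annual growth rate (32.4%)
years = 5          # 2015 -> 2020

projected_2020 = start_2015 * (1 + cagr) ** years
print(f"Projected 2020 market size: ${projected_2020:.2f}b")  # ~3.66, i.e. ~$3.7b
```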

The EU has wasted no time in making the most of IoT by investing in all its facets. The previous Commission already signed an MoU with the ‘Big Data Value Association’, which includes ATOS, Nokia Solutions and Networks, Orange, SAP, Siemens and research bodies such as Fraunhofer and the German Centre for Artificial Intelligence. Over €500 million was earmarked for investment over 5 years (2016-2020) from Horizon 2020, which private partners are expected to match at least four times over (€2 billion).

The impact of IoT and the most conspicuous of its siblings, AI, goes well beyond economic growth and jobs. Last year, US technology groups including Google, Facebook, Microsoft, Amazon and IBM set up the ‘Partnership on Artificial Intelligence to benefit people and society’ with a view to conducting research into questions that surround AI, such as ethics, and to developing ways to make the technology easier to understand.

Speakers

  • Mady Delvaux-Stehres

    MEP, Chair of the Working Group on Robotics

  • Juha Heikkila

    Head of Unit, Robotics & AI, DG CNECT, European Commission

  • Erik Mannens

    CTO, Data Science & Research Manager, Ghent University

  • Jonathan Sage

    AI Policy Lead EU, Government & Regulatory Affairs, IBM

Moderator

  • John Higgins

    Director General at DIGITALEUROPE

Definition

Even figuring out the perimeter of the issue looks challenging. Semantics reflects a top-down approach: based on explicit modelling, it churns out answers that are all correct. You can think of it as a ‘white box’. In contrast, AI proper is bottom-up: you feed as much data as possible into the system, and the resulting answers are approximate and unverifiable. This is the ‘black box’ side, a source of occasionally scary stories.
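
As a rough illustration of that contrast, here is a toy sketch (the spam-filter example and the use of scikit-learn are assumptions for illustration, not something discussed at the event):

```python
# 'White box' vs 'black box', in miniature.
from sklearn.linear_model import LogisticRegression

# Top-down / 'white box': a hand-written rule derived from an explicit model
# of the problem. Every decision path can be read and verified by a human.
def looks_like_spam(num_links: int, has_greeting: bool) -> bool:
    return num_links > 3 and not has_greeting

# Bottom-up / 'black box': behaviour is induced from examples. The learned
# weights only approximate the pattern in the data and cannot be verified
# case by case in the same way.
X = [[0, 1], [1, 1], [5, 0], [7, 0]]   # features: [num_links, has_greeting]
y = [0, 0, 1, 1]                       # labels: 0 = legitimate, 1 = spam
model = LogisticRegression().fit(X, y)

print(looks_like_spam(5, False))     # True, and we can say exactly why
print(model.predict([[5, 0]])[0])    # most likely 1, but the 'why' is a weight vector
```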

Bottom line, AI is computer power able to perform human tasks: why not call it ‘cognitive computing’, since it aims to improve on what we already do, to augment our own intelligence instead of replacing it? Cognitive computing includes machine learning, semantics, speech recognition and other human interfaces, high-performance computing, etc. None of these AI instances is really new: from the first ever reference to AI in 1956, to IBM’s Deep Blue beating Kasparov, Watson winning the ‘Jeopardy!’ quiz, or Google’s AlphaGo beating the world champion of Go, massive amounts of data have been put to spectacular use thanks to AI. But behind the headlines, AI has been powering leading-edge diagnostics and other healthcare devices, cyber-security systems, anti-fraud software for the finance industry, automated transportation networks, the retail business, etc.

Challenges ahead

Trust. Can we trust AI-enabled systems? Only huge bases of metadata will be able to reduce errors and approximations. Experts of the caliber of Tim Berners-Lee are haunted by the fact that most results are unverifiable. Accordingly, there is no denying that building trust in AI will be an uphill battle, to be fought by a coalition of the willing. While there is no reason for man, oftentimes portrayed as ‘Nature’s beautiful glitch’, to demand perfection across the board, systems tend to take shortcuts of their own devising that we had better control at all times. With innovation comes responsibility: liability is a major challenge to be addressed before connected or autonomous cars are authorized to take to the road.

Privacy. This is a typical chicken-and-egg situation: in the era of Big Data, law-mandated anonymisation runs contrary to sound research, or so argue many researchers. For instance, the high standards set by the GDPR include a right to explanation when an AI-enabled decision has discriminated against you. Research communities around the world are inclined to sweep away the need for privacy protection: they suggest, tongue in cheek, that anonymisation should be left for after they have tapped the data concerned for their own purposes. Research shouldn’t be more Catholic than the Pope, they submit, pointing to the fact that leading platforms already know a lot about us. In essence, working on the assumption that privacy is a myth, processing data that has not been anonymised shouldn’t harm anybody and would put European research on a par with the rest of the world. This is not to say that anonymisation, encryption and privacy by design bring no improvement, but the effectiveness of those tools rings truer in theory than in practice, at least to the layman’s ears.

Safety and security. Devastating hacks make the headlines on a daily basis, so there is no need to elaborate on this side of the much-needed balance to be found between the general public’s aspirations to privacy and governments’ challenging remit to cater for security in a dangerous world.

Lesser challenges arise on the way to addressing the above list: transparency of algorithms is one, needed to make black boxes more predictable and correctable, hence more trustworthy. There is also the question of how to train algorithms so that they improve and eventually match human performance. It is worth noting that we are fallible too, as illustrated by the well-documented finding that discrimination in judges’ rulings rises as lunchtime draws near and they get increasingly hungry… Skills, too, are a prerequisite if AI is to steer the right course.

By way of a conclusion

While it unveiled no earth-shattering solutions, the conversation on February 9th agreed on a number of critical points:

– The wonders of AI should be managed properly, with a view to having them meet the overarching goal of making life easier or better for humans. Therefore, the debate must expand beyond the narrow circles of experts to which it is currently confined. Left to themselves, experts will inevitably focus on what is of interest to them, a biased process that may leave issues of general interest unattended.

– In particular, ethical considerations may fall by the wayside. They should be injected into this debate urgently, as hammered home by an increasing number of politicians and as illustrated by the above-mentioned initiative from US technology groups. Policy makers, who are expected to prepare for the future, cannot but monitor all AI-related developments closely. They welcome codes of conduct and other self-regulation tools as long as these manage to elicit public trust.

– Transparency always matters, even more so with algorithms: they must be human-readable at all times, for instance by annotating a decision tree (see the sketch below). ‘Make AI as white-boxish as possible’ could be a driving principle if humans are to stay in control.

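By way of illustration of the ‘annotated decision tree’ idea in the last point, a minimal sketch (scikit-learn and the Iris dataset are assumptions chosen for brevity, not tools discussed at the event): a learned tree can be dumped as nested, human-readable rules that an auditor can follow line by line.

```python
# A decision tree rendered as human-readable rules.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

iris = load_iris()
tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(iris.data, iris.target)

# export_text prints the tree as nested if/else conditions on named features,
# so the path behind any individual prediction can be read and questioned.
print(export_text(tree, feature_names=list(iris.feature_names)))
```
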
At the end of the day, this at least is crystal clear: policy makers won’t be able to evaluate whether they need to jump in, and how, until they get their heads around what makes AI a tremendous booster of the public good, and what could make it a threat to a better life for all.

Should you like to delve into these issues, the following videos are recommended:

EuroparlTV: RoboLaw: Regulating robotics

The Science and Technology Options Assessment (STOA) panel on Cyber-Physical Systems

