High-Level Expert Group points to lack of skills, data and infrastructure as main roadblocks to AI innovation in three key sectors
With the publication of the Sectoral Considerations and the Assessment List for Trustworthy AI, DIGITALEUROPE has concluded its work within the Commission’s Artificial Intelligence High-Level Expert Group.
The Group has concluded that European citizens are well protected from potential harm or discrimination, but it has identified a lack of skills, data spaces and digital infrastructure as the main roadblock to AI innovation in healthcare, manufacturing and government.
Today, the Commission’s Artificial Intelligence High-Level Expert Group (AI HLEG) published its final deliverable: the Sectoral Considerations for the healthcare, manufacturing and public sectors. Together with the launch on 17 July of the Trustworthy AI Assessment List, a practical checklist for businesses, this concludes the AI HLEG’s mandate.
Director-General of DIGITALEUROPE, Cecilia Bonefeld-Dahl, said:
“It hasn’t always been simple or straightforward, but this diverse group of experts from industry and civil society has made great strides in our understanding of how Europe should promote AI excellence and trustworthiness. Getting everyone on board and involving the key actors from an early stage are crucial to good policymaking.
Across the three sectors we considered, a lack of AI and data management skills, of digital upskilling within key sectors, of common data standards and data spaces, and of high-speed digital infrastructure remains the key barrier to Europe reaching its AI innovation potential. Regulatory uncertainty and a lack of harmonisation compound these issues.
Our workshops vindicated the risk-based sectoral approach. In most cases, we found that European citizens are well protected from potential harm or discrimination. Where potential issues are identified, policymaking must be agile, responsive and have a laser-like focus to avoid losing out on the huge potential benefits of AI to our economy.”
Background
DIGITALEUROPE has been a member of the AI HLEG since its creation in June 2018. These two deliverables follow the publication last year of the Ethics Guidelines – which inform the structure of the Assessment List – and the Policy & Investment Recommendations. Both the Guidelines and the Recommendations were influential on the Commission’s AI White Paper, published in early 2020.
Across the sectoral workshops – whether on manufacturing, healthcare or eGovernment – participants considered that the body of existing law and protections, combined with the lessons of the Ethics Guidelines, provides a good framework for Trustworthy AI.
Following on from the risk-based approach put forward in the group’s recommendations last year, regulatory sandboxes would offer a good way to test new laws in the rare areas where a potential problem falls outside the scope of existing regulations. The bigger roadblocks to address concern skills, data and infrastructure.