[BLOG] Seven questions to ask when looking at the EU's new AI regulation tomorrow
Tomorrow, the Commission will publish its long-awaited artificial intelligence regulation. DIGITALEUROPE’s Director-General Cecilia Bonefeld-Dahl presents the seven key questions to ask when the final regulation drops.
By Cecilia Bonefeld-Dahl, Director-General, DIGITALEUROPE
Tomorrow’s AI regulation proposal is a pivotal moment for Europe’s Digital Decade. Here are the seven questions we should be asking ourselves when looking through the document:
1. What is high risk?
All indications so far are that the Commission will follow a risk-based approach. This is good, as we don’t want to duplicate many of the strong protections that already exist in sector-specific legislation. But this makes the definition of what is high-risk especially important. We need to be as precise as possible.
2. Have we learned our lessons from GDPR?
Remember the lessons of the GDPR’s implementation. Compliance costs were huge, and the roll-out was bumpy. For SMEs and start-ups especially, revising an entire workflow for GDPR compliance was a major challenge. New AI legislation should not introduce new and unfamiliar regulatory hurdles for all those AI-developing companies, at least not without proper training, guidance and sufficient implementation time. We also need to make sure that, unlike with GDPR, all Member States implement the rules in the same way – if they don’t, we will kill European AI innovation.
3. Is software included?
And if so, does it have special rules to reflect its specific needs? This is important because conformity assessments under the EU’s current market access regime are aimed at physical products. They take time and involve a lot of administrative work and expertise built over decades of experience within companies. Extending this regime to software as-is could create a tremendous bottleneck, which is unsuitable for a world where flexibility and speed are king. For example, companies may need to issue quick software updates or patches to fix a cybersecurity flaw. They cannot wait weeks for such updates to be assessed and approved. In addition, most software companies working on AI are SMEs with no experience of conformity assessments, unlike makers of hardware. The situation is a bit like Brexit, where bigger companies were able to adapt to new rules while smaller companies have suffered from the extra red tape.
4. Do SMEs have special rules to support them?
Some of this continent’s brightest innovators are working in small companies. For example, Europe has a growing hub of health scale-ups working with AI (see our Future Unicorn Award winners Corti and Oncompass). We need to support those smaller players working in areas deemed high risk with simpler rules, longer adaptation times and financial support.
5. Is innovation possible?
Even in high-risk areas, we want to make sure we are not stifling innovation with a series of regulatory hoops. Here, the legislation should set out requirements that are clear and flexible, and we should focus on industry-driven standards. This is because AI standards are a work in progress, and it is not always possible to update them fast enough to keep up with the latest developments – for example, research into reducing bias in datasets. Companies should have the opportunity to experiment and innovate with different and alternative solutions in a fast and agile manner, which is why sandboxing is so important.
6. Is sandboxing just a buzzword?
We need real possibilities for companies to test ideas in realistic settings. This should include limited liability and concrete guarantees for companies. Sandboxes should be an effective, though indeed limited, step outside the burdensome market access regime, allowing companies to test and try out innovative new AI solutions.
And the million euro question…
7. If you had €1 million to invest in an AI start-up, would you, after seeing this regulation, invest in Europe?