The regulation, however, will have a significant impact on AI companies and their investors. I will analyse the potential effects of this regulation on VC operations.
What is in the AI Regulation?
The proposed regulation defines AI as “Any software that is developed with one or more of the techniques and approaches listed in Annex I and can, for a given set of human-defined objectives, generate outputs, such as content, predictions, recommendations, or decisions influencing the environments they interact with.”
Annex I breaks this down into several AI techniques: supervised, unsupervised, and reinforcement learning; logic- and knowledge-based approaches such as inductive programming, deductive engines, and symbolic reasoning; and statistical approaches such as Bayesian estimation. The list will be amended and updated over time as technology evolves.
The core element of the proposal, however, is the ‘Risk-based Framework’, which groups AI systems into four categories, each with its own rules. Through this framework, the EC aims to regulate the implementation of AI rather than the underlying technology.
- Category 1 (Minimal/no risk): AI systems in this category (e.g., spam filters) will be permitted with no restrictions, but providers will be encouraged to adhere to voluntary codes of conduct.
- Category 2 (Limited risk): AI systems in this category (e.g., chatbots) will be subject to transparency obligations (e.g., technical documentation on function and performance).
- Category 3 (High risk): This category is divided into two clusters. The first covers AI systems embedded in products already subject to third-party assessment under sectoral legislation (e.g., medical devices). The second covers non-embedded AI systems for specific use cases (e.g., biometric identification, infrastructure management, education, workers’ management, and public services and benefits).
- Category 4 (Unacceptable risk): AI systems that violate fundamental human rights (e.g., social scoring) will be prohibited.
Category 3 is the most challenging of the four, with a complex compliance process. To mitigate the risk posed by these systems, the proposal introduces an updated CE marking process. To obtain the CE mark, products will have to comply with five requirements:
- Data governance: establish and follow good data sourcing and management rules.
- Transparency for users: inform users that they interact with an AI technology.
- Human oversight: enable human monitoring and control of AI systems.
- Accuracy, robustness, and cybersecurity: ensure good data hygiene and security.
- Traceability and auditability: identify and audit any AI process and output.
New Requirements in VC Due Diligence Processes
The Regulation will have a far-reaching impact on VC operations. This section concentrates mainly on new investments, but many items also apply to existing portfolios.
The Regulation will undoubtedly increase the complexity of AI start-ups’ due diligence processes, primarily in the technical domain. Since most new technology ventures have an AI dimension in their products and processes, the coverage of the regulation will be pretty broad.
A typical VC processes several hundred deals per year. Hiring external experts for each will not come cheap. Therefore, VCs should internally enhance their familiarity with the regulation while reserving external advice only for a few ‘high probability to invest’ deals.
The risk taxonomy in the Regulation is an overgeneralisation of AI techniques, so mapping an actual AI application to a risk category is challenging. There is also a risk of crossover: a change in the implementation of an AI system that shifts it into a different risk category.
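As a rough illustration, the four-tier taxonomy could be codified as a first-pass screening helper in a due-diligence checklist. This is a hypothetical sketch: the keyword lists are abridged from the examples above, and any real classification would need legal review.

```python
from enum import Enum

class RiskCategory(Enum):
    MINIMAL = 1       # e.g., spam filters: no restrictions
    LIMITED = 2       # e.g., chatbots: transparency obligations
    HIGH = 3          # e.g., biometric identification: CE marking
    UNACCEPTABLE = 4  # e.g., social scoring: prohibited

# Illustrative use-case lists only, abridged from the proposal's examples.
HIGH_RISK_USE_CASES = {
    "biometric identification", "infrastructure management",
    "education", "workers' management", "public services",
}
PROHIBITED_USE_CASES = {"social scoring"}

def screen_use_case(use_case: str) -> RiskCategory:
    """Rough first-pass triage of an AI use case against the taxonomy."""
    if use_case in PROHIBITED_USE_CASES:
        return RiskCategory.UNACCEPTABLE
    if use_case in HIGH_RISK_USE_CASES:
        return RiskCategory.HIGH
    # Everything else defaults to limited risk pending closer analysis.
    return RiskCategory.LIMITED

print(screen_use_case("social scoring").name)  # UNACCEPTABLE
```

The crossover risk mentioned above is exactly why such a triage can only be a starting point: a pivot in how the product applies AI can move it from one branch of this function to another.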
Therefore, as a starting point, all active portfolios and the new investees should be encouraged to adopt codes of conduct of AI ethics. VCs may be instrumental and provide templates for their active portfolio as many did for ESG (Environmental, Social, and Governance) standards.
Much of what the EU proposes amounts to publicising the identity, composition, risks, and shortcomings of the AI systems in use. Documenting all this will require building internal processes or contracting third-party service providers. The associated costs will add a new budget item for start-ups, and the process itself will add to the co-founders’ workload. We could therefore expect slightly larger checks for AI start-ups.
For the high-risk category, compliance can be complex and costly. The process involves ex-ante and ex-post conformity assessments against the five requirements defined above, followed by the CE mark application and continuous AI auditing.
The broad impact of the Regulation on VC operations may exceed the resource capacities of many VCs (especially the ones with small teams). Accordingly, Invest Europe (the umbrella organisation of European VCs) should take the lead and consider developing a governance framework with the help of subject matter experts (data scientists, legal advisors, AI ethics experts, etc.).
Could It be A New Opportunity Space?
Despite the Regulation’s challenges, one might also expect innovative start-ups to turn it into a business opportunity.
- Data management practices: overcoming the data shortcomings that can cause AI biases requires high-quality data. The market for synthetic data (information that is artificially manufactured rather than generated by real-world events) is therefore expected to boom, so that AI systems developed on limited data can overcome regulatory restrictions.
- Automation of AI system characterisation and automatic recording of logs that are ready for 3rd party evaluations may speed up the delivery of reporting demands.
- Cybersecurity innovators might prioritise incidents such as data poisoning (when an attacker infiltrates an ML database and inserts incorrect or misleading information).
- Last but not least, algorithmic impact assessment tools could allow real-time ex-ante and ex-post conformity assessments.
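To make the first opportunity concrete, one common synthetic-data approach is to fit a simple parametric model to sensitive records and then sample artificial ones from it. The sketch below assumes an invented two-column dataset (it is not from any real product) and uses a plain Gaussian fit, the simplest possible stand-in for the more sophisticated generators these start-ups would build:

```python
import numpy as np

rng = np.random.default_rng(42)

# Pretend this small table is sensitive real-world training data
# (two invented columns, e.g. age and salary).
real_data = rng.normal(loc=[40.0, 55000.0], scale=[10.0, 12000.0], size=(200, 2))

# Fit a simple parametric model (mean + covariance) to the real data...
mu = real_data.mean(axis=0)
cov = np.cov(real_data, rowvar=False)

# ...and sample new, artificial records from it instead of sharing the originals.
synthetic = rng.multivariate_normal(mu, cov, size=1000)

print(synthetic.shape)  # (1000, 2)
```

The synthetic rows preserve the aggregate statistics needed to train and audit a model while no row corresponds to a real individual, which is the property that makes this market attractive under the Regulation's data-governance requirement.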
Does It Apply Only to EU Companies?
The simple answer is ‘no’. Any AI system providing output within the European Union would be subject to the Regulation, regardless of where the provider or user is based. To avoid stifling innovation in the AI sector, some regulatory flexibilities (e.g., regulatory sandboxes) might be provided for start-ups.
What About the Penalties?
In a similar vein to the GDPR, non-compliance with the regulation could cost between 2% and 6% of a company’s annual turnover.
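For a sense of scale, the exposure follows directly from turnover. A minimal sketch of that arithmetic, where the €10M turnover figure is a purely hypothetical example:

```python
def fine_range(annual_turnover_eur: float) -> tuple[float, float]:
    """Return the (low, high) non-compliance exposure under the 2%-6% bounds."""
    return 0.02 * annual_turnover_eur, 0.06 * annual_turnover_eur

# Hypothetical start-up with €10M annual turnover.
low, high = fine_range(10_000_000)
print(f"€{low:,.0f} - €{high:,.0f}")  # €200,000 - €600,000
```

Numbers of that order would be existential for an early-stage company, which is why the due-diligence work described above matters to investors and not only to compliance teams.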
The proposal must still be approved by the European Parliament and the Council of the European Union. Those familiar with EU circles predict that the regulation could become applicable in early 2024.
Note. This opinion piece is written with the collaborative effort of the ACT Venture Partners team.