Setting the AI standards: the underlying struggle for the future of Artificial Intelligence – Tech.eu


As the EU’s AI regulation continues its legislative journey, regulators, standard-setters and innovators are called upon to define how Artificial Intelligence will be developed in practice.

Artificial Intelligence as a technology dates back to the 1950s. Nevertheless, only the last decade has seen AI increasingly applied in a variety of fields, mostly thanks to major advancements in computing power, machine learning techniques and available data.

The growing reliance on AI-powered tools has raised questions about their reliability, trustworthiness and accountability, catching the attention of regulators worldwide. In April 2021, the European Commission presented a comprehensive regulatory framework on AI with a risk-based approach largely drawn from product safety rules.

The draft Act follows the New Legislative Framework (NLF) regime, whereby manufacturers must run a conformity assessment demonstrating that the product fulfils certain essential requirements in terms of accuracy, robustness and cybersecurity. A quality management system must also be put in place to monitor the product throughout its lifecycle.

“The proposal defines common mandatory requirements applicable to the design and development of certain AI systems before they are placed on the market that will be further operationalised through harmonised technical standards,” the draft proposal reads.

Thus, standards are to be developed to provide clear guidelines for designing AI systems. Following such standards leads to a presumption of conformity, reducing both compliance costs and uncertainty.

“Standardisation is arguably where the real rulemaking in the Draft AI Act will occur,” stated Michael Veale and Frederik Zuiderveen Borgesius in their thorough analysis of the AI Act.

It is therefore not surprising that the strategic role of technical standards for such a disruptive technology has attracted the attention of major regulatory powers, including the United States, Germany and China.

“The interpretative stances that will gather more traction will also help define the future of AI. This creates a geopolitical race to establish the main international ethical standards that will likely generate and increase power asymmetries as well as normative fragmentation,” reads a report of the Centre for European Policy Studies (CEPS).

Amid these geopolitical tensions, industry practitioners have been trying to work out how to embed the new legal requirements in their business models. That is particularly the case for suppliers integrated into the AI supply chain, which, according to the draft AI Act, will need to support providers with their compliance.

“If you are a third party in the AI supply chain, and you have to give assurance that you comply with your obligations, then there is a tool that helps you do it in the same format to all the providers you are servicing. Vice versa, if you are a provider, a standardised way of accepting assurance from your myriad of third parties makes your job easier,” explained John Higgins, chair of the Global Digital Foundation.

For instance, data suppliers will have to explain how the data was obtained and selected, the labelling procedure and the representativeness of the dataset. Accuracy and cybersecurity will also be key areas, as model builders will have to detail how the model was trained, its resilience to errors, the measures used to determine accuracy levels, any fail-safe plans and so forth.

“Understanding which of the …….

Source: https://tech.eu/free/44836/setting-the-ai-standards-the-underlying-struggle-for-the-future-of-artificial-intelligence/
