Regulations in AI

A recent white paper from the European Commission lays out the framework for future regulation of AI applications.

EU regulations in AI

The paper is very clear: every organization needs to use high-impact AI applications responsibly. Organizations need to demonstrate this responsibility to society and its citizens, and they need answers to the following questions: With which data and what code was the model trained? How are individual predictions and scores calculated? What measures are in place to safeguard that the application does not discriminate on non-permissible grounds?

In short, answering those questions requires that decisions by the AI application are fair, explainable, auditable, traceable, reproducible and transparent.
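
To make this concrete, here is a minimal sketch of how a training run could record the provenance needed to answer the first of those questions. The function names and the JSON layout are illustrative assumptions, not something any regulation or platform prescribes.

```python
# Illustrative sketch: capture the provenance needed to answer
# "with which data and what code was this model trained?"
# All names here are hypothetical, not part of any platform or regulation.
import hashlib
import json
import subprocess
from datetime import datetime, timezone
from pathlib import Path


def sha256_of_file(path: Path) -> str:
    """Hash the training data so the exact snapshot can be identified later."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()


def record_training_provenance(data_path: str, params: dict, out_path: str) -> dict:
    """Write a small provenance record next to the trained model artifact."""
    record = {
        "trained_at": datetime.now(timezone.utc).isoformat(),
        "data_sha256": sha256_of_file(Path(data_path)),
        # The git commit pins the exact code version used for training.
        "code_commit": subprocess.run(
            ["git", "rev-parse", "HEAD"], capture_output=True, text=True, check=True
        ).stdout.strip(),
        "hyperparameters": params,
    }
    Path(out_path).write_text(json.dumps(record, indent=2))
    return record


# Example usage (assumes the script runs inside a git repository):
# record_training_provenance("train.csv", {"max_depth": 6}, "provenance.json")
```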

Lowering the burden of regulations

These requirements won’t surprise anyone heavily involved in the field. For most organizations, however, they sound like a very high burden to meet by default, especially in the innovative environments that organizations prefer to create for data science and AI.

In our experience, many organizations primarily rely on procedures and protocols to achieve compliance – and this is where the burden comes from. The easiest and most visible effort to organize when new regulations are introduced is human labor. Having people manually assess potential regulatory issues is highly visible, but it’s also expensive and it doesn’t scale. It wastes valuable talent and time, and it’s unlikely to yield full and durable compliance.

Luckily, it doesn’t have to be like that! The burden is much lower when the right tools are used to standardize and automate the repeatable tasks that developing AI applications involves. If an automated standard complies with a certain set of regulations, every output of that process is immediately compliant with that set of regulations. And for new regulations, only the automated process needs to change to stay compliant. Standardization and automation that fit naturally into the data science workflow will even accelerate your data science development.
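
As an illustration of such an automated standard, the sketch below shows a reusable check that a pipeline could run before every deployment. The demographic-parity criterion and the threshold are assumptions chosen for the example, not a statement of what any regulation requires.

```python
# Illustrative sketch of an automated, reusable compliance check that a
# pipeline could run before every deployment. The criterion and threshold
# are assumptions for the example.
from collections import defaultdict


def positive_rate_by_group(predictions, groups):
    """Share of positive predictions per group of a protected attribute."""
    counts = defaultdict(lambda: [0, 0])  # group -> [positives, total]
    for pred, group in zip(predictions, groups):
        counts[group][0] += int(pred == 1)
        counts[group][1] += 1
    return {g: pos / total for g, (pos, total) in counts.items()}


def check_demographic_parity(predictions, groups, max_gap=0.1):
    """Fail the pipeline run if the gap in positive rates exceeds max_gap."""
    rates = positive_rate_by_group(predictions, groups)
    gap = max(rates.values()) - min(rates.values())
    if gap > max_gap:
        raise RuntimeError(f"Demographic parity gap {gap:.2f} exceeds {max_gap}")
    return rates


# Example usage with toy data (raises, because the gap here is ~0.33):
# check_demographic_parity([1, 0, 1, 1, 0, 0], ["a", "a", "a", "b", "b", "b"])
```

Because the check runs automatically on every candidate model, a failing result blocks the deployment rather than relying on someone remembering to review it.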

The data snapshot and corresponding metadata are available directly from the model

The Cubonacci platform

Not coincidentally, achieving exactly that is the design goal of Cubonacci.

The Cubonacci platform standardizes and automates recurring machine learning tasks without sacrificing flexibility or the ability to customize. Every machine learning model on Cubonacci goes through the machine learning lifecycle in a transparent and reproducible process. Using your own model code, configuration and metrics, Cubonacci manages the training, maintenance and deployment of your model. This allows for fully explainable and traceable predictions, and for close monitoring of the model’s performance and its appropriate use.

These features enable data scientists to iterate fast during model development. They also give them the power to quickly deploy their models and provision APIs. At the same time, they are essential elements for regulatory compliance.
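
To illustrate what traceable predictions can look like in practice, here is a generic sketch of an audit log written at serving time. It illustrates the idea only and is not the Cubonacci API; all names are hypothetical.

```python
# A minimal, generic sketch of traceable serving: every prediction is logged
# with the model version and a hash of the input, so an individual score can
# be traced back later. Not the Cubonacci API; names are hypothetical.
import hashlib
import json
from datetime import datetime, timezone


def log_prediction(model_version: str, features: dict, score: float, log_file: str) -> None:
    """Append one audit record per prediction to a JSON-lines log."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        # Hashing the canonicalised input identifies the exact request
        # without storing raw personal data in the audit log.
        "input_sha256": hashlib.sha256(
            json.dumps(features, sort_keys=True).encode()
        ).hexdigest(),
        "score": score,
    }
    with open(log_file, "a") as f:
        f.write(json.dumps(record) + "\n")


# Example usage:
# log_prediction("credit-model-v3", {"income": 42000, "age": 31}, 0.87, "audit.log")
```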

If you are interested in learning how we can support you in speeding up your data science and AI efforts and in achieving compliance for AI, let us know! We are happy to demonstrate the Cubonacci platform and to show how it addresses the challenges your organization faces in extracting real business value from your data science efforts.