Department of Computer Science and Technology

Can we use AI to help regulate AI? That’s one of the intriguing questions researchers in this Department will pose at a conference this Sunday, just weeks before parts of the EU AI Act – the world’s first comprehensive law on the use of AI – take effect.

From February 2025 onwards, providers of artificially intelligent systems in the EU will need to comply with the new regulations. The first to come into force will ban certain practices, like selling AI systems designed to manipulate or deceive people. Regulations introduced subsequently will govern the use of AI systems in areas from low risk (like video games) to high risk, such as law enforcement and critical infrastructure (for example, smart motorways).

The aim of the new Act is to ensure that AI systems used across the EU are safe, transparent, traceable and non-discriminatory to users. And the legislation is being followed with interest by countries (including the UK and the US) that are currently considering how best to regulate AI.

But many EU businesses – particularly smaller ones – are worried about the time and cost of complying with the new Act. That worry is compounded by the fact that, in a growing and fragmenting industry, few systems are built by a single team anymore. Instead, developers are increasingly acquiring pre-trained AI models and datasets from a variety of sources online (including from platforms such as Hugging Face) and then incorporating them into the AI systems they create.

Making compliance simpler and easier
When the Act comes into effect, it will for the first time require developers to collect information on each of these components, as well as on their overall AI system, and then analyse it all to determine compliance. This is where the researchers suggest AI could make the process cheaper and easier.

At the moment, says Nic Lane, Professor of Machine Learning here, the compliance process "is highly manual, time-consuming and reliant on well-paid lawyers and machine learning experts. It's thus prohibitively expensive for small businesses and nonprofits." And this is a problem.

"If the AI Act is going to work, it's got to be realistic and cost-effective for companies to comply with these provisions," says Bill Marino (above) who is working under Nic Lane's supervision towards a PhD on responsible and regulation-compliant AI models and systems. "That's why we're interested in developing AI tools to help providers understand their responsibilities under the Act and comply with them."

The idea of using AI to help address issues in regulating AI may seem paradoxical, but the researchers argue it could offer real value. Automating the process, they say, would help democratise it.

And they have won industry support for their approach. They were recently awarded one of the inaugural Google Academic Research Awards for their proposal to develop an automated compliance tool using large language models and information labels that convey specific compliance-related data.

They'll be discussing a first step towards such a system this Sunday at the 2nd Workshop on Regulatable Machine Learning, part of the NeurIPS 2024 Conference.

Bill, a former lawyer who spent 10 years working in AI, will present his paper, 'Compliance Cards: Automated EU AI Act Compliance Analyses amidst a Complex AI Supply Chain'.

Information labels and algorithms
Authored by Marino, Nic Lane and collaborators at Cambridge, Brown, Warwick and Oxford Universities, the paper proposes a two-step solution.

First is the development of information labels (or 'Compliance Cards') that contain data about the AI system and all the datasets and models it is composed of. These take the concept of existing information labels, such as Model and Data Cards, and evolve them so that, crucially, they capture the data "in a computational format conducive to algorithmic manipulation".
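
To give a flavour of what "a computational format conducive to algorithmic manipulation" might look like, here is a minimal sketch in Python (the language of the researchers' own implementation, mentioned below). Every field name and type in it is an illustrative assumption, not the schema the paper actually defines.

    # Hypothetical sketch of a machine-readable Compliance Card.
    # All field names are illustrative assumptions, not the schema
    # defined in the Compliance Cards paper.
    from dataclasses import dataclass, field

    @dataclass
    class ComponentCard:
        """Compliance metadata for one acquired model or dataset."""
        name: str
        kind: str                      # e.g. "model" or "dataset"
        licence: str
        provider_documented: bool      # technical documentation supplied
        training_data_disclosed: bool  # data provenance is disclosed

    @dataclass
    class SystemCard:
        """Compliance metadata for the overall AI system."""
        intended_purpose: str          # e.g. "video game", "law enforcement"
        components: list[ComponentCard] = field(default_factory=list)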

The cards can then be assembled and fed into a Compliance Cards Algorithm, developed by the researchers and based on the rules of the new AI Act. The algorithm then analyses the information to predict whether or not the AI system is compliant.
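
To make that second step concrete, here is an equally hypothetical, toy rule-based check that consumes the card sketch above. Its categories and rules are simplified assumptions for illustration only; they are not the researchers' actual Compliance Cards Algorithm and do not restate the Act's provisions.

    # Toy, rule-based illustration of a card-driven compliance check.
    # The purpose categories below are assumed examples, not the Act's lists.
    PROHIBITED_PURPOSES = {"manipulative system"}
    HIGH_RISK_PURPOSES = {"law enforcement", "critical infrastructure"}

    def analyse(system: SystemCard) -> str:
        """Return a coarse compliance verdict for one system card."""
        if system.intended_purpose in PROHIBITED_PURPOSES:
            return "non-compliant: prohibited practice"
        if system.intended_purpose in HIGH_RISK_PURPOSES:
            # High-risk duties cascade to every acquired model and dataset.
            for c in system.components:
                if not (c.provider_documented and c.training_data_disclosed):
                    return f"non-compliant: component '{c.name}' lacks documentation"
        return "compliant under this toy rule set"

    # Example: a high-risk system built on an under-documented pre-trained model.
    card = SystemCard(
        intended_purpose="law enforcement",
        components=[ComponentCard("base-llm", "model", "apache-2.0", True, False)],
    )
    print(analyse(card))  # non-compliant: component 'base-llm' lacks documentation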

The research team has created an open source Python implementation of the Compliance Cards Algorithm. As they say in their paper, they are making it available to anyone interested in rendering a compliance analysis for an AI project.

"Our hope is that the algorithm will be integrated into various applications or platforms, and made available to their users to democratize its use," the researchers say. "We also plan to host the algorithm in a web application that anyone can use to run analyses across a set of Compliance Cards they generate and/or upload."

The future: AI systems that monitor and adjust themselves for compliance?
The goal of this work, Bill Marino says, "is to bring down the time and cost it takes to conduct these AI regulation compliance assessments. And that would be a first step towards democratizing and automating much more of this compliance work."

Further into the future, he adds, the hope is "to develop AI systems that are capable of sensing for themselves whether they are compliant with the regulations and if not, to adjust themselves so that they do comply".

 


Published by Rachel Gardner on Tuesday 10th December 2024