More than 20 hours into marathon negotiations to reach consensus on how to regulate artificial intelligence, European Union lawmakers appear to have reached a tentative agreement on one of the thorniest elements of the file: rules for foundational models/general purpose AI (GPAI), according to a leaked proposal reviewed by TechCrunch.
In recent weeks there has been a concerted lobbying effort, spearheaded by French AI startup Mistral, for a full regulatory carve-out of foundational models/GPAIs. But the proposal retains elements of the tiered approach to regulating these advanced AIs that Parliament proposed earlier this year, suggesting EU lawmakers have resisted the full-throttle push to simply let the market sort things out.
That said, GPAI systems provided under free and open source licenses (stipulated to mean that their weights, information on the model architecture, and information on model usage are made publicly available) get partial exemptions from some of the obligations, with exceptions, such as for "high risk" models.
Reuters has also reported on the partial exemptions for open source advanced AIs.
According to our sources, the open source exemption is further bounded by commercial deployment: if such an open source model is made available on the market or otherwise put into service, the carve-out no longer applies. "So, depending on how 'making available on the market' and 'commercialization' are interpreted, this law could also apply to Mistral," our source suggested.
The preliminary agreement we have seen retains a classification for GPAIs with so-called "systemic risk." A model receives this designation when it has "high impact capabilities," including when the cumulative amount of compute used for its training, measured in floating point operations (FLOPs), exceeds 10^25.
Very few current models appear to meet the systemic risk threshold at that level, which suggests few cutting-edge GPAIs would need to fulfill the ex ante obligation to proactively assess and mitigate systemic risks. So Mistral's lobbying appears to have softened the regulatory blow.
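To get a feel for why the 10^25 FLOPs bar excludes most current models, here is a minimal back-of-the-envelope sketch. It uses the common "6 × parameters × training tokens" rule of thumb for estimating training compute; that approximation and the example model size below are illustrative assumptions, not figures from the Act or from any provider.

```python
# Rough sketch of training-compute arithmetic against the EU AI Act's
# proposed 10^25 FLOPs systemic-risk threshold. The 6*N*D rule of thumb
# and the example figures are illustrative assumptions only.

SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25  # threshold cited in the leaked proposal


def estimated_training_flops(num_parameters: float, num_tokens: float) -> float:
    """Back-of-the-envelope estimate: ~6 FLOPs per parameter per training token."""
    return 6 * num_parameters * num_tokens


# Hypothetical example: a 70B-parameter model trained on 2 trillion tokens.
flops = estimated_training_flops(70e9, 2e12)
print(f"Estimated training compute: {flops:.2e} FLOPs")  # ~8.4e+23
print("Systemic risk designation?", flops > SYSTEMIC_RISK_THRESHOLD_FLOPS)  # False
```

Under these assumed numbers, even a fairly large model comes in around 8.4 × 10^23 FLOPs, more than an order of magnitude below the threshold, which is consistent with the observation that few current models would be caught.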
Under the preliminary agreement, other obligations for providers of systemic risk GPAIs include: performing model evaluations using standardized protocols and state-of-the-art tools; documenting and reporting serious incidents "without undue delay"; conducting and documenting adversarial testing; ensuring an adequate level of cybersecurity; and reporting the actual or estimated energy consumption of the model.
GPAI providers in general are obliged to test and evaluate their models and to create and retain technical documentation, which must be made available to regulators and oversight bodies on request.
They must also provide downstream deployers of their models (aka AI app makers) with an overview of the model's capabilities and limitations, to support those deployers' own ability to comply with the AI Act.
The proposal also requires foundational model makers to put in place a policy to respect EU copyright law, including the limitations copyright holders have placed on text and data mining. They must additionally provide a "sufficiently detailed" summary of the training data used to build the model and publish it, with a template for the disclosure to be supplied by the AI Office, the AI governance body the regulation proposes to establish.
We understand this copyright disclosure requirement will still apply to open source models, standing as another exception to their carve-out from the rules.
The document we have seen also references codes of practice, with the proposal stating that GPAIs, including those with systemic risk, may rely on these to demonstrate compliance until a "harmonized standard" is published.
The AI Office is envisaged as being involved in drawing up such codes. The European Commission, meanwhile, is expected to issue standardization requests starting six months after the regulation enters into force, such as asking for deliverables on reporting and documentation of ways to improve the energy and resource efficiency of AI systems, along with regular reporting on the development of these standardized elements (two years after the date of application, and every four years thereafter).
Today's trilogue on the AI Act actually kicked off yesterday afternoon, but the European Commission appears determined that it will be the final round of talks between the Council, Parliament and its own staff on this contested file. (If not, as we have previously reported, there is a risk the regulation gets put back on the shelf, with EU elections and fresh Commission appointments looming next year.)
At the time of writing, talks to resolve several other contentious elements of the file remain ongoing, with a number of highly sensitive issues, such as biometric surveillance, still on the table. So whether the file makes it over the line at all remains unclear.
Without accord on every component there can be no deal to secure the law, leaving the AI Act's fate hanging in the balance. But for those wanting to understand where the co-legislators have landed on responsibilities for advanced AI models, such as the large language model underpinning the viral AI chatbot ChatGPT, the tentative agreement offers some steer on where lawmakers look to be headed.
In the past few minutes, the EU's internal market commissioner, Thierry Breton, has tweeted confirmation that talks have finally broken up, but only until tomorrow: the marathon trilogue is set to resume at 9am Brussels time, so the Commission evidently remains determined to get the file, first proposed back in April 2021, over the line this week.
Source: techcrunch.com