19 APRIL 2023 (Brussels, Belgium) - As I noted last month, the EU lawmakers in the midst of creating the world’s first binding AI rulebook were knocked off their butts by ChatGPT, forced to take an unexpected detour to address the matter. A member of the European Parliament handling the legislative proposal drawn up by the European Commission to regulate AI said that generative AI models (like ChatGPT) had thrown a spanner in the works. In an interview last month he said:
"My God, our draft is already out of date. It is a conundrum. These systems have no ethical understanding of the world, have no sense of truth, and they’re not reliable. They are very powerful engines and algorithms that can do quite a number of things and which themselves are not yet allocated to a purpose. We need to figure out how to make them fit into our proposal to regulate AI".
And so, the tech has prompted all EU institutions to go back and rewrite their draft plans. The EU Council, which represents national capitals, approved its version of the draft AI Act this past December, which would entrust the Commission with establishing cybersecurity, transparency and risk-management requirements for general-purpose AIs. But an EU Council representative said "My parliamentary colleague is right. Our stuff is already out of date".
And it is causing a fight. The AI team at the Parliament has now proposed that AI systems generating complex texts without human oversight should be part of the “high-risk” list - an effort to stop ChatGPT from churning out disinformation at scale.
But the idea was met with skepticism by many European Parliament members and groups who have been working with ChatGPT. Axel Voss, a prominent center-right lawmaker with a formal say over Parliament’s position, said the new amendments “would make numerous activities high-risk, and if you really look at the process they are not risky at all.”
In contrast, activists and observers feel that the proposal merely scratches the surface of the general-purpose AI conundrum, arguing that nobody has done a deep dive into this technology or stepped back to take a wider look. Mark Brakel, a director of policy at the Future of Life Institute, a nonprofit focused on AI policy, said:
“It’s not great to just put text-making systems on the high-risk list: you have other general-purpose AI systems that present risks and also ought to be regulated. Nobody is stepping back and really looking at the entire AI ecosystem".
And so the "tweaking" began and the primary group of 12 lawmakers working on the European Union's AI Act - which they now describe as "a risk-based legislative tool, meant to cover specific high-risk use-cases of AI" - are stymied. As they worked on the bill they reached a "conviction that we also need a complementary set of preliminary rules for the development and deployment of powerful General Purpose AI Systems that can be easily adapted to a multitude of purposes."
So it was time to kick the can down the road. The group therefore penned an open letter calling on European Commission President Ursula von der Leyen and U.S. President Joe Biden to organize a global AI summit at which representatives from around the world would discuss and define governing principles for controlling the development and deployment of AI models, ensuring they are human-centric, safe, and trustworthy:
"The recent advent of and widespread public access to powerful AI, alongside the exponential performance improvements over the last year of AI trained to generate complex content, has prompted us to pause and reflect on our work".
Insiders tell me they just do not know how to proceed.
The letter lands just as regulators in many countries increase their efforts to understand and manage AI. Canada, France, Italy, and Spain have each launched investigations into OpenAI's ChatGPT over data privacy concerns. Italy's data protection authority (the Garante per la Protezione dei Dati Personali) has temporarily blocked access to the AI chatbot, saying it will lift the ban if OpenAI complies with privacy rules aimed at protecting personal data and minors by the end of April. Meetings and negotiations are ongoing.
The European lawmakers aren't alone in calling for action on AI. In late March the UK's Department for Science, Innovation and Technology released a whitepaper outlining a framework aimed at regulating AI (specifically noting ChatGPT) without clamping down on investments and business. Last week, the U.S. Department of Commerce's National Telecommunications and Information Administration issued a formal request for public comment to guide potential policies aimed at auditing AI systems.
At the U.S. Congressional level, Senate Majority Leader Chuck Schumer also announced plans to pass bipartisan legislation requiring companies to submit AI products for independent assessment before they're released on the market - opening yet another round of Congressional hearings.
Meanwhile the European Commission's proposed draft of the AI Act (published nearly two years ago) will continue to work its way through the European Parliament. The bill is now 108 pages long (up from 52 pages) and sources tell me a parliamentary committee "hopes to at least reach a common position with all parties" by April 26th.
The GDPR (which in hindsight was far simpler than the EU AI Act) took 5+ years from first draft to enactment, so I am not holding my breath on this one.