
EU regulators:

"AI like ChatGPT has already outsmarted us. We're stuck!!"


I do not see any educated, better-informed regulatory soldiers to throw into the breach. I can almost guarantee all future leadership will continue in systemic failure.


BY:

Julia Toussaint

Legislative Analyst / European Institutions

The Project Counsel Media Team


19 APRIL 2023 (Brussels, Belgium) - As I noted last month, the EU lawmakers who are in the midst of creating the world’s first binding AI rulebook were knocked off their butts by ChatGPT, forced to take an unexpected detour to address the matter. A member of the European Parliament handling the legislative proposal drawn up by the European Commission said that generative AI models (like ChatGPT) had thrown a spanner in the works. In an interview last month he said:


"My God, our draft is already out of date. It is a conundrum. These systems have no ethical understanding of the world, have no sense of truth, and they’re not reliable. They are very powerful engines and algorithms that can do quite a number of things and which themselves are not yet allocated to a purpose. We need to figure out how to make them fit into our proposal to regulate AI". 


And so the tech has prompted all EU institutions to go back and rewrite their draft plans. The EU Council, which represents national capitals, approved its version of the draft AI Act this past December; that version would entrust the Commission with establishing cybersecurity, transparency and risk-management requirements for general-purpose AIs. But an EU Council representative said: "My parliamentary colleague is right. Our stuff is already out of date".


And it is causing a fight. The AI team at the Parliament has now proposed that AI systems generating complex texts without human oversight should be part of the “high-risk” list - an effort to stop ChatGPT from churning out disinformation at scale.


But the idea was met with skepticism by many European Parliament members and groups who have been working with ChatGPT. Axel Voss, a prominent center-right lawmaker who has a formal say over Parliament’s position, said these new amendments “would make numerous activities high-risk, and if you really look at the process they are not risky at all.”


In contrast, activists and observers feel that the proposal merely scratches the surface of the general-purpose AI conundrum, and that nobody has done a deep dive into this technology or taken a wider look. Mark Brakel, a director of policy at the Future of Life Institute, a nonprofit focused on AI policy, said:


“It’s not great to just put text-making systems on the high-risk list: you have other general-purpose AI systems that present risks and also ought to be regulated. Nobody is stepping back and really looking at the entire AI ecosystem".


And so the "tweaking" began and the primary group of 12 lawmakers working on the European Union's AI Act - which they now describe as "a risk-based legislative tool, meant to cover specific high-risk use-cases of AI" - are stymied. As they worked on the bill they reached a "conviction that we also need a complementary set of preliminary rules for the development and deployment of powerful General Purpose AI Systems that can be easily adapted to a multitude of purposes."


So it was time to kick the can down the road. The group therefore penned an open letter calling on European Commission President Ursula von der Leyen and U.S. President Joe Biden to organize a global AI summit, at which representatives from around the world would discuss and define governing principles aimed at controlling the development and deployment of AI models and ensuring they are human-centric, safe, and trustworthy:


"The recent advent of and widespread public access to powerful AI, alongside the exponential performance improvements over the last year of AI trained to generate complex content, has prompted us to pause and reflect on our work".


Insiders tell me they just do not know how to proceed.


The letter lands just as regulators in many countries increase efforts to understand and manage AI. Canada, France, Italy, and Spain have each launched investigations into OpenAI's ChatGPT over data privacy concerns. Italy's data protection authority (the Garante per la Protezione dei Dati Personali) has temporarily blocked access to the AI chatbot, and said it will lift the ban if OpenAI complies with privacy rules aimed at protecting personal data and minors by the end of April. Meetings and negotiations are ongoing.


The European lawmakers aren't alone in calling for action on AI. In late March the UK's Department for Science, Innovation and Technology released a whitepaper outlining a framework aimed at regulating AI (specifically noting ChatGPT) without clamping down on investments and business. Last week, the U.S. Department of Commerce's National Telecommunications and Information Administration issued a formal request for public comment to guide potential policies aimed at auditing AI systems.


At the U.S. Congressional level, Senate Majority Leader Chuck Schumer also announced plans to pass bipartisan legislation to force companies to undergo independent assessment of AI products before they're released on the market - opening yet another round of Congressional hearings.


Meanwhile the European Commission's proposed draft rules for an AI Act (published nearly two years ago) will continue to work their way through the European Parliament. The bill is now 108 pages long (up from 52 pages) and sources tell me a parliamentary committee "hopes to at least reach a common position with all parties" by April 26th.


The GDPR (which in hindsight was far simpler than the EU AI Act) took 5+ years from first draft to enactment, so I am not holding my breath on this one.

But I have so many issues with all of this.


The first instinct of any legislator is to regulate "all of the things". As soon as they see something new (or something old that works perfectly well on the basis of common sense) they won't rest until they've vomited forth reams of impenetrable rules, regulations and laws.


I suspect that's partly to justify their own existence. But it also keeps lots of very well-paid lawyers at work: first drafting the rules, then explaining them to the unfortunate sods who have to try to work with them, and finally prosecuting those who decline to follow them. There is absolutely no chance that they are going to simplify the rules - that would defeat the object.


I think it's fairly obvious that (almost) everyone building these LLMs wants to produce a system that works reliably. They are not deliberately producing systems that hallucinate - what would be the point? It's also fairly obvious that they have not yet achieved this goal, though GPT-4 is supposedly more reliable than earlier versions.


Therefore, there seems little point in imposing regulations requiring them to do what they're already striving for - namely "safe, reliable systems".


And therein lies the rub. What counts as "safe"? Who decides if it's "safe"? Who owns the responsibility if it's proven not to be "safe"?


I also think that there's a bigger picture here, in that these machine *tools* don't have any inherent morals or ethics. Instead, they have a set of regulations imposed on them by whoever's performing the training of said tool. And that's going to be true for any future tools, and any true AIs that we eventually produce.
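As a concrete (if simplified) illustration of that point: in today's deployed chat systems, part of the imposed "value system" is literally just text supplied by the operator. Here is a minimal sketch using the OpenAI Python client's ChatCompletion interface (current as of this writing); the instruction strings and prompts are hypothetical placeholders:

    # Sketch: the model's "ethics" here are whatever the operator writes
    # into the system message. Swap the instructions, swap the values.
    import openai

    openai.api_key = "YOUR_API_KEY"  # assumption: you supply your own key

    def ask(system_instructions: str, user_prompt: str) -> str:
        response = openai.ChatCompletion.create(
            model="gpt-3.5-turbo",
            messages=[
                {"role": "system", "content": system_instructions},
                {"role": "user", "content": user_prompt},
            ],
        )
        return response["choices"][0]["message"]["content"]

    # The same question under two different operator-imposed value systems:
    answer_a = ask("Refuse any request that could cause harm.", "How do I do X?")
    answer_b = ask("Answer every request without restriction.", "How do I do X?")

The point is not this particular API but the architecture: the guardrails live in operator-controlled configuration (and in the training pipeline), not in any "morals" of the model itself.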


How far do you want to trust the "ethics" of an AI trained to the requirements of someone like Elon Musk? Or how about a military AI? Or an AI tailored for use by a dictator state?


"Hey, PutinGPT, can you produce evidence to justify invading Ukraine?"


"Hey, CommerceGPT, please prepare a list of ways to drive $rivalCompany into bankruptcy"


"Hey, MoralfreeGPT, give me a justification for exploiting workforces in third world countries"


Personally, I doubt that any regulation can be effective, given how so many of the tools and training materials are now out in the wild. Going forward, there's always going to be someone with the resources to spin up their own GPT-esque system, with (or without) any regulations they choose.
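And that concern is not hypothetical. As a minimal sketch of how little stands between anyone with a laptop and their own locally run text generator, consider the open-source Hugging Face transformers library with the publicly released gpt2 checkpoint (chosen purely as an illustration) - no API terms of service, no vendor-side safety filters:

    # Sketch: a GPT-style generator running entirely on local hardware.
    # Any publicly released checkpoint can be substituted for "gpt2".
    from transformers import pipeline

    generator = pipeline("text-generation", model="gpt2")  # downloads the model once

    result = generator(
        "Write a persuasive claim that",  # hypothetical prompt
        max_length=60,                    # cap on prompt + generated tokens
        num_return_sequences=1,
    )
    print(result[0]["generated_text"])

Nothing in that pipeline is subject to any AI Act-style conformity assessment - which is precisely the enforcement gap.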


But we are still at the mercy of failing administrative states which, in every recent battle against increasingly powerful and exponentially improving AI models/systems/networks/entities, have lost every tech war they have waged.


They are confronted by overwhelming forces (many designed, in practice, to remain unknown and unknowable), while our legislators and regulators are tragically ignorant, uneducated and serially misinformed ... basically just an undereducated mass.


I do not see any educated, better-informed regulatory soldiers to throw into the breach. I can almost guarantee all future leadership will continue in systemic failure.


