“a divine being walking a human into the gates of hell” / created using DALL-E
11 December 2022 - Yikes. Almost four years ago I was sitting at a conference in San Francisco as the editors of TechCrunch interviewed Sam Altman, soon after he'd left his role as president of Y Combinator to become CEO of the AI company he co-founded in 2015 with Elon Musk and others ... OpenAI.
At the time, Altman described OpenAI's potential in language that sounded outlandish to some. He said, for example, that the opportunity with artificial general intelligence - machine intelligence that can solve problems as well as a human can - is so great that if OpenAI managed to crack it, the outfit could "maybe capture the light cone of all future value in the universe." He said the company was "going to have to not release research" because it was so powerful. Asked if OpenAI was guilty of fear-mongering - Musk has repeatedly called for all organizations developing AI to be regulated - Altman talked about the dangers of not thinking about "societal consequences" when "you're building something on an exponential curve."
The audience laughed at various points in the conversation, not certain how seriously to take Altman. No one is laughing now, however. While machines are not yet as intelligent as people, the tech that OpenAI has since released is taking many aback (including Musk), with some critics fearful that it could be our undoing, especially with more sophisticated tech reportedly coming soon.
Indeed, though heavy users insist it’s not so smart, the ChatGPT model that OpenAI made available to the general public last week is so capable of answering questions like a person that professionals across a range of industries are trying to process the implications. Educators, for example, wonder how they’ll be able to distinguish original writing from the algorithmically generated essays they are bound to receive — and that can evade anti-plagiarism software.
Paul Kedrosky isn't an educator per se. He's an economist, venture capitalist and MIT fellow who calls himself a "frustrated normal with a penchant for thinking about risks and unintended consequences in complex systems." He saw the danger in that early Altman presentation and has written extensively about the risk AI poses "to our collective future". Over the years he has pointed me in many interesting investigative directions that have informed my posts, and I quote him quite a bit. More from Paul shortly.
And so millions of users have been playing around with ChatGPT since OpenAI launched it last week. The machine-learning oracle creaked under the strain of demands from more than 1.5 million users (as of this Sunday morning), ranging from producing short essays to answering questions. It wrote letters, dispensed basic medical advice, drafted legal documents, summarised history, etc., etc. It costs OpenAI 2-5 cents to run each query, and the expectation is they'll switch to a "pay-as-you-go" model sometime next year.
ChatGPT is eerily impressive, as is DALL-E, the AI generator of digital images from text prompts first unveiled by OpenAI last year; I used it to create the graphic above. Once you have tried both, it is impossible to avoid the sense that natural language agents are going to disrupt many fields, from music and video games to law, medicine and journalism. The chatbots are coming for us professionals, and rapidly. The legal field (always years behind technology) fears all of these generative AI programs will create deep fakes and other fake evidence, and vendors are hawking palliatives to authenticate data. Good for them, but an impossible task. Wait until you see what GPT-4 (the granddaddy of all this stuff) has in store next year. As Opus.AI and MargREV have noted, now that we have the tech to alter metadata (the data that describes other data and is used to authenticate it), the stack-ability of all these AI tools will allow a clever IT person to replace entire email chains. That is all in development. What fun to come.
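To make the point concrete: email headers - the very metadata that supposedly authenticates a message - are just text fields. Below is a minimal sketch, using nothing but Python's standard library, of how little effort alteration takes. The message, names and addresses are invented for illustration; real forgery of a whole email chain would be more elaborate, but the principle is the same.

```python
# A minimal sketch of how trivially mutable email metadata is, using only
# Python's standard library. Illustration only; the message is made up.
from email import message_from_string
from email.utils import formatdate

raw = """\
From: alice@example.com
To: bob@example.com
Date: Mon, 05 Dec 2022 09:00:00 -0000
Subject: Q4 numbers

Bob - the figures are attached. - A
"""

msg = message_from_string(raw)

# "Authenticating" headers are just strings; nothing in the format stops
# an editor from rewriting who sent the message and when.
msg.replace_header("From", "carol@example.com")
msg.replace_header("Date", formatdate(localtime=False))

print(msg.as_string())
```

Cryptographic signing schemes exist precisely because the format itself offers no protection - which is why "authenticate the data after the fact" is such a hard sell.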
NOTE TO READERS: there is nothing new here. The legal technology industry has developed software under the general rubric "technology assisted review" to parse terabytes of structured and unstructured data to find "relevant information" pertaining to a litigation or investigation. Not to be outdone, many parties use that same technology to "cleanse" their data silos before turning them over for examination - as seen all too clearly in the U.S. opioid litigations. Oh, yes, there are rules and regulations and sanctions to address/prevent this sort of thing. But when billions of dollars are at stake, well ...
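For readers who have not seen technology assisted review up close, the core of it is ordinary supervised text classification: lawyers hand-code a seed set of documents as relevant or not, a model learns from those labels, and the rest of the corpus is ranked by predicted relevance. Here is a minimal sketch of that pattern in Python with scikit-learn - the textbook idea, not any vendor's product, and the documents and labels are invented:

```python
# A minimal sketch of the core idea behind "technology assisted review":
# learn relevance from a small hand-coded seed set, then score the rest.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

seed_docs = [
    "pricing agreement between the two distributors",   # relevant
    "minutes of the quarterly sales strategy meeting",  # relevant
    "office holiday party planning thread",             # not relevant
    "IT ticket: password reset for new laptop",         # not relevant
]
seed_labels = [1, 1, 0, 0]  # 1 = relevant to the matter, 0 = not

vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(seed_docs)
clf = LogisticRegression().fit(X, seed_labels)

# Score an unreviewed document; high scorers go to human reviewers first.
unreviewed = ["draft side letter on distributor pricing terms"]
score = clf.predict_proba(vectorizer.transform(unreviewed))[0, 1]
print(f"predicted relevance: {score:.2f}")
```

Production systems layer active learning, sampling protocols and validation on top, but the point stands: the same ranking machinery that surfaces relevant documents can just as easily be turned around to find and scrub them.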
The danger in all of this - with ChatGPT and all the other AI agents out there - is that it creates a technology version of Gresham's Law, coined for the adulteration of 16th-century coinage: "bad money drives out good". If an unreliable linguistic mash-up is freely accessible, while original research is costly and laborious, the former will thrive.
That is why Stack Overflow, an advice forum for coders and programmers, this week imposed a temporary ban on its users sharing answers from ChatGPT. “The primary problem is that while the answers which it produces have a high rate of being incorrect, they typically look like they might be good,” its moderators wrote.
Even Google has seen the problem, as its search engine (and every other search engine) is poisoned with LLM-generated content which may or may not be right. But now you understand my essay from earlier this year: it is no wonder Google has been so insistent on making chat core to its future, and why it must deal with these mind-bending AI dynamics unfolding across the Web.
ChatGPT's creative output is less vulnerable to these issues - fiction does not have to be factually accurate - and the creative industries see it as a great boon.
But all this chatter about "deploying ChatGPT carefully" is just asinine, as is the general chatter about "regulating technology". My God, what fools these mortals be. ChatGPT has now been unleashed - and will probably only get better (a new version is due out by the end of 2023). In time, we will discover uses for natural language AI agents that we do not yet imagine. One can only hope destruction does not reign supreme.
There are, of course, valid points to be made. Later this week, in one of my end-of-the-year essays, I will address the hell that has been unleashed and include an analysis of the training data behind the GPT series of AI models - the tokens and vectors and latent space and all the obtuse coding issues - as I dive down the rabbit hole of the real tubes and wires of ChatGPT.
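As a small preview of that essay: "tokens" are simply integer IDs drawn from a learned subword vocabulary, and the "vectors" are what those IDs get mapped to inside the model. ChatGPT's own tokenizer has not been released, so the quick illustration below uses the openly published tokenizer of GPT-2, its open ancestor, via the Hugging Face transformers library as a stand-in:

```python
# What "tokens" means: language models don't see words, they see integer
# IDs from a learned subword vocabulary. GPT-2's tokenizer stands in here
# for ChatGPT's, which is not public.
from transformers import GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")

text = "ChatGPT answers questions like a person."
ids = tokenizer.encode(text)

print(ids)                                   # a short list of integer IDs
print(tokenizer.convert_ids_to_tokens(ids))  # the subword pieces behind them
```

Run it and you will see that even a familiar word can be split into several pieces - the first hint of why these models sometimes behave so strangely at the level of spelling, counting and arithmetic.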
Those valid points? I turn it over to Paul Kedrosky, who had a lot to say on his blog. Here is a short bullet-point list drawn from that blog post and his Twitter feed:
• I am very troubled by what I see everywhere all at once with ChatGPT in the last few days. College and high school essays, college applications, legal documents, coercion, threats, programming, etc. All fake, all highly credible.
• Society is not ready for a step function inflection in our ability to costlessly produce anything connected to text. Human morality does not move that quickly, and it's not obvious how we now catch up.
• It's as if EPO has been put in the drinking water and there is no test. As I've said elsewhere, shame on OpenAI for launching this pocket nuclear bomb without restrictions into an unprepared society.
• As an addendum, what I find remarkable and sad is the immediate politicization of this topic, as I predicted elsewhere. This is not a free speech issue, or an elites versus the rest issue. It is, as I said, an EPO in the drinking water problem, with no adequate test.
• All work produced while connected to the internet is now suspect, and guilty until proven innocent - a test we simply cannot do. It doesn't matter whether it's code, college applications, high school essays, or anything else. Everything is fraud. We do not have the technology to detect this stuff.
• And I'm getting all kinds of tech determinism bullshit, where people opine that we can't stop these "advances", etc., as if technology is some exogenous force that, like gravity, just does things to us. That fusion of learned helplessness and sociopathy is deeply toxic.
• Which in reality just might be true. This much is accurate: we have just had trust hacked as a society by a bad actor. A societal trust collapse, at scale.
• I, obviously, feel ChatGPT (and its ilk) should be withdrawn immediately and, if ever re-introduced, re-introduced only with tight restrictions - length of text, domains, etc. Someone suggested, given the impossibility of watermarks & detection, including designed-in random errors. But I am shouting into the wind.
• And my God, all the blithe nonsense. This is like advising Garry Kasparov on May 12, 1997, the day after losing to Deep Blue, to ... "well, just get better at chess". No, this is what it feels like to be exceeded, as systems only get better, faster and far, far more complex from here.
More from me later this week.