Will AI spell doom for humanity? One of ChatGPT’s creators thinks there’s a 50% chance
A researcher who was involved in the creation of ChatGPT has warned that AI could well lead to the doom of humankind – or at least there’s about a 50% chance of that scenario playing out.
Business Insider reports that Paul Christiano, who previously led the language model alignment team at OpenAI and now heads up the non-profit Alignment Research Center, issued the warning on the Bankless podcast.
During the interview, the hosts brought up the prospect of an ‘Eliezer Yudkowsky doom scenario’, Yudkowsky being a researcher who has been warning about the dangers of AI for many years (a couple of decades, in fact).
Christiano told the hosts: “Eliezer is into this extremely fast transformation once you develop AI. I have a little bit less of an extreme view on that.”
He went on to describe a more gradual process of moving up through the gears as AI change accelerates, observing that: “Overall, maybe you’re getting more up to a 50/50 chance of doom shortly after you have AI systems that are human level.”
Christiano also said on the podcast that there’s “something like a 10-20% chance of AI takeover” happening eventually, culminating in a pretty bleak scenario where many (or indeed most) humans are dead. “I take it quite seriously,” Christiano adds. Well, no kidding.
The mission of the Alignment Research Center is to “align future machine learning [AI] systems with human interests”.
Doom Eternal?
This is yet another in a fair old heap of recent warnings about the ways AI could negatively affect the world – and one of the more extreme ones, for sure, given the talk of the doom of humanity and the earth’s population being mostly wiped out.
Granted, even Christiano doesn’t think there’s more than a relatively small chance of the latter happening, but still, a 20% roll of the dice (in the worst case) on a hostile AI takeover is not a prospect anyone would relish.
It is, of course, interesting that an AI takeover is always assumed to be a hostile one. Can we not have the development of a considered and benevolent artificial intelligence that genuinely rules in our best interests, just for once? Well, no. Any AI may start out with good intentions, but it’ll inevitably come off the rails, and judgements made for the ‘better’ will end up going awry in spectacular ways. You’ve seen the films, right?
In all seriousness, the point being made now is that while AI isn’t really intelligent just yet – it’s still basically a gargantuan data hoover, crunching all that data, and admittedly already making some impressive use of said material – we still need guidelines and rules in place sooner rather than later to head off any potential disasters down the line.
Those disasters may take the form of privacy violations, for example, rather than the end of the world as we know it (TM), but they still need to be guarded against.
The most recent expert warning on AI comes from Geoffrey Hinton, the so-called ‘Godfather of AI’, who just quit Google. Hinton outlined the broad case against AI – or at least against its unchecked and rapid expansion, which is happening now – including the danger of AI outsmarting us far more swiftly than he expected. Not to mention the threat to jobs, which is already a very real one, and the most pressing peril in the nearer term in our book.
This follows an open letter calling for a pause in the development of ChatGPT and other AI systems for at least six months, signed by Elon Musk among others (who has his own answer in the form of an AI that he promises is “unlikely to annihilate humans”).