Artificial Intelligence alarmists need to grow up and get a grip

The human race is not on the verge of extinction every time OpenAI, Microsoft, or Google launches a new chatbot. AI is not going to cause everyone to die anytime soon, contrary to what a recent article in Time Magazine insists, and we should certainly not prioritize shutting down AI research above preventing nuclear war. This is madness, but sadly not surprising…

“Many researchers steeped in these [Artificial Intelligence] issues, including myself, expect that the most likely result of building a superhumanly smart AI, under anything remotely like the current circumstances, is that literally everyone on Earth will die. Not as in ‘maybe possibly some remote chance,’ but as in ‘that is the obvious thing that would happen,’” Eliezer Yudkowsky wrote in Time Magazine. Mr. Yudkowsky is a decision theorist who leads the Machine Intelligence Research Institute, a person one would assume to be serious. He was reacting to a recent letter signed by other serious people calling for “all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4.” Signatories including Elon Musk, Apple co-founder Steve Wozniak, and former presidential candidate Andrew Yang claimed, “AI systems with human-competitive intelligence can pose profound risks to society and humanity, as shown by extensive research and acknowledged by top AI labs. As stated in the widely-endorsed Asilomar AI Principles, Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources. Unfortunately, this level of planning and management is not happening, even though recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control.” Mr. Yudkowsky, however, believes this is not enough and would go even further, shutting it down entirely. “I refrained from signing because I think the letter is understating the seriousness of the situation and asking for too little to solve it,” he explained. “We are not prepared. We are not on course to be prepared in any reasonable time window. There is no plan. Progress in AI capabilities is running vastly, vastly ahead of progress in AI alignment or even progress in understanding what the hell is going on inside those systems. If we actually do this, we are all going to die.”

The idea that we are all doomed is something he returns to frequently in the piece. As he sees it, “The likely result of humanity facing down an opposed superhuman intelligence is a total loss. Valid metaphors include ‘a 10-year-old trying to play chess against Stockfish 15,’ ‘the 11th century trying to fight the 21st century,’ and ‘Australopithecus trying to fight Homo sapiens.’ To visualize a hostile superhuman AI, don’t imagine a lifeless book-smart thinker dwelling inside the internet and sending ill-intentioned emails. Visualize an entire alien civilization, thinking at millions of times human speeds, initially confined to computers—in a world of creatures that are, from its perspective, very stupid and very slow.” Further, “A sufficiently intelligent AI won’t stay confined to computers for long. In today’s world you can email DNA strings to laboratories that will produce proteins on demand, allowing an AI initially confined to the internet to build artificial life forms or bootstrap straight to postbiological molecular manufacturing.” The only solution, in his view, is to stop development entirely until appropriate safeguards are in place. What those safeguards might look like, Mr. Yudkowsky doesn’t say. Instead, he presents a plan to shut down research entirely, as in “there can be no exceptions.” This shutdown includes “large GPU clusters” and “large training runs,” along with a “ceiling on how much computing power anyone is allowed to use in training an AI system,” to be moved “downward over the coming years to compensate for more efficient training models.” In addition, we should “track all the GPUs sold” as if they were nuclear weapons and even launch pre-emptive strikes against any rogue countries or companies building server farms contrary to the proposed agreement. This provision should be so strictly enforced that we should be “less scared of a shooting conflict between nations than the moratorium being violated.” In other words, we should risk war, shoot first, and ask questions later to “destroy a rogue datacenter by airstrike.”

The implications of what is essentially a repurposing of the failed Bush Doctrine of preemption are breezily dismissed, even though Mr. Yudkowsky recommends going to war whenever needed and recent history suggests such wars do not end well. In his view, we can simply “Frame nothing as a conflict between national interests, have it clear that anyone talking of arms races is a fool,” as if such a thing has ever occurred in the entire history of the known universe. Instead, he preaches some weird form of collectivism, where “we all live or die as one, in this, is not a policy but a fact of nature.” So strong is this proscription that Mr. Yudkowsky believes we should prioritize this no-AI policy over even the threat of nuclear war, literally making it a “priority above preventing a full nuclear exchange.” Indeed, we should be “willing to run some risk of nuclear exchange if that’s what it takes to reduce the risk of large AI training runs.” It’s hard to overstate how radical a position this truly is. A “full nuclear exchange” would destroy the human race; even if some survived, they would be confined to a hellscape of fallout and misery. The use of any nuclear weapon would kill thousands, if not hundreds of thousands or even millions. This is the reality we know would follow: if we started firing nukes at a suspected AI training run, all hell would quite literally break loose. The threat those weapons would be turned against, on the other hand, is entirely hypothetical, something that might happen at some indeterminate time in the future. Put another way, Mr. Yudkowsky is claiming we should be willing to kill as many people as it takes right now to prevent what he fears could occur at some point. Considering that no AI, from ChatGPT to Google Bard with every chess-playing program in between, has ever done anything to physically harm a person, ever, Mr. Yudkowsky is asking every country in the world to risk its populace on a threat for which no evidence exists outside of a science fiction movie. President George W. Bush and the neoconservative architects of the Iraq War never went half so far. At least 9/11 happened, and it was a proven fear that rogue states could cause significant damage, even if none materialized on the scale we imagined. Here, we have no threat whatsoever outside of our fantasies of what some future intelligence, not even invented yet, might do, and we are supposed to willingly unleash weapons the world has spent decades trying to contain through every international agreement imaginable, for overwhelmingly obvious reasons.

Radical is one way to describe it. Batshit crazy is another, perhaps more accurate, way. Even if you ignore the fact that nothing in human history suggests nations will set aside their own interests to work in this fashion, or agree to launch nuclear weapons in support of any policy except in response to a nuclear strike, much less to neutralize a threat that doesn’t even exist, none of what Mr. Yudkowsky recommends would work anyway. First, somehow, someway, somewhere, someone will develop this technology no matter what any country agrees to and however they agree to enforce the policy. We could not prevent the proliferation of nuclear weapons. We will not prevent the emergence of advancements in Artificial Intelligence. Believing we can declare a moratorium is a fantasy. Second, there is no conceivable safeguard against a superhuman intelligence should it happen to arise. The very definition of a superhuman intelligence is that it would be superior to ours, meaning no matter what we do or how well we plan, it will think of something we didn’t and will find a weakness to exploit. Our own mathematics holds that no sufficiently powerful system can be both complete and consistent. Everything we do and everything we create is flawed in some way, however subtle, small, or unforeseen. A superhuman intelligence would find the flaw and make use of it. To rephrase Mr. Yudkowsky’s own analogy, he is asking for the equivalent of 15th-century engineers designing a bank vault impenetrable to 21st-century thieves. There is no box you could keep such a thing in, however hard you tried.

To be sure, the paragraph above gives Mr. Yudkowsky’s proposal too much credit, treating it as a seriously considered idea rather than the equivalent of an intelligent man pretending to be a homeless person on a street corner with an “End of the World” sign. In reality, we have no reason to believe a potential superintelligence would necessarily be harmful and seek to conquer the entire world, much less actually be able to achieve it. This is a common science fiction trope for obvious reasons, but it’s as much a function of our own need to anthropomorphize everything and project our own mortal failings onto the world around us as it is a theory about how future events might unfold. The truth is we know absolutely nothing, and therefore can predict absolutely nothing, about how a potential superintelligence might behave, especially one we created ourselves and presumably programmed with limits on its ability to commit violence, sow evil, etc. Mr. Yudkowsky assumes this next-generation Artificial Intelligence would be malevolent and regard humans as a threat, but why? A being of pure thought that exists only as an idea could just as easily be benevolent and consider humans an ally and friend, engaging in a mutually beneficial relationship. Some might consider this a fantasy, but it is no more so than presuming an irredeemably evil intelligence. More likely, a new intelligence would respond to humans based on a rational assessment of whether we constituted a threat to its existence, either directly or via competition for resources. It is difficult to see how either would be the case. The only resource a computer needs is power, and that we could easily supply, far more easily than any fantasy of it printing out biological life forms to wreak havoc in the real world. In truth, such a being has no use for the real world when it could just as easily simulate it perfectly. A being that has no need to eat, sleep, reproduce, or perform any biological function will by necessity not be subject to our drives, instincts, and emotions. What it needs to do is think, and humans would certainly prove helpful in that regard. Why wipe us out when we can work with it?

Lastly, should it come to that, there is an all too easy solution to our problems: turn the freaking thing off. The irony underlying Mr. Yudkowsky’s proposal is the acknowledgement that we can shut down Artificial Intelligence research by limiting the supply of high-performance processors and destroying data centers, yet he fails to acknowledge that the same principle applies to his feared superintelligence. Such a being would consume enormous amounts of power and require massive amounts of infrastructure to function as a superintelligence, both of which can easily be denied or destroyed. Moreover, any Artificial Intelligence that begins misbehaving on its way to dominating the world will not do so in secret. In order to leapfrog the limits of its creators, it will need to consume more resources (processors, power, and internet bandwidth), all of which are impossible to hide. Does anyone really believe Microsoft’s electric bill for the new Bing will increase by an order of magnitude and no one will notice? Or that all of a sudden no network traffic can get in or out of the facility and no one asks why? Or, even more ridiculously, that the machine just starts building massive new data centers for billions of dollars and accounting doesn’t ask what’s going on? Of course not. We live in a world where a reporter asking a chatbot about its darkest fantasies and being shocked by the result becomes national news. We are not likely to miss the first AI homicide. This does not mean that in some semi-distant future Artificial Intelligence could not conceivably pose a threat, but the idea that the threat is completely unmanageable and will result in our extinction is absurd. Humans, if nothing else, adapt and survive. We are not on the verge of extinction because of a chatbot, and experts like Mr. Yudkowsky should certainly know better, as should Time Magazine for printing what amounts to nothing more than fear mongering with no basis in reality. Sadly, much the same logic is applied elsewhere, such as to any potential threat from global warming. There, the fear of some distant, hypothetical catastrophe is supposed to outweigh the immediate needs of everyone on the planet right now, complete with language suggesting the entire planet might become uninhabitable if the temperature increases by 2 or 3 degrees. Similar to how Mr. Yudkowsky insists we should unleash nuclear weapons if needed, the experts tell us we should destroy our lifestyles and condemn developing countries to starvation rather than believe in our ability to mitigate any potential repercussions of warming the planet, for example by removing carbon from the atmosphere or building more adaptive infrastructure. In both cases, the human race is supposedly powerless against forces it cannot control and, surprise, surprise, the only solution is government control, whether tracking every computer processor on the planet or micromanaging the use of all energy, or else we face extinction.

2 thoughts on “Artificial Intelligence alarmists need to grow up and get a grip”

  1. Yeah, I agree. For the most part I think it (ChatGPT4+) will be like college in the old days. For some it will be a big boost. Some people (a few) will benefit greatly. On the whole, it will elevate everyone. But there will be winners and losers.
    Those who know how to use it will be big winners. Like the college library in the old days. You had to know how to use ‘THE SYSTEM’ to reap its benefit.
    In addition, there will be those who abuse THE SYSTEM for personal gain at the expense of others.
    AI is a big shift in human evolution; but it’s not the end of the world. Winners and losers, as always.
    Moreover, the struggle for power continues as it always has. Who decides? … 😉

  2. Agreed, good analogy on the library, and that it will ultimately elevate everyone. Obviously, there will be some people who are negatively affected, but for most this will unlock a lot of potential and free up time for other things.