ChatGPT and the physics evolution built into our brains

Next-generation chat software that delivers a more realistic conversational experience, can do research, and can write stories has set the technology and media worlds on fire, but beneath the surface ChatGPT reveals one of the things that makes human intelligence so unique: we have physics built into our brains by evolution.

The technology and media worlds are ablaze with stories about OpenAI's next-generation natural language processing software, ChatGPT. The GPT stands for "Generative Pre-trained Transformer." In plain language, it's free software that powers the closest thing computers have come to the ability to carry on a meaningful conversation. You can ask ChatGPT questions and receive answers in plain language. It can write cover letters and reports, even tell stories. There are also models that develop websites, write code in different programming languages, and translate between those languages. At least one person built an entire business on the platform and sold it for $30,000. Hence the astounding variety of headlines, from "Don't Ban ChatGPT in Schools. Teach With It" (The New York Times) to "Abstracts written by ChatGPT fool scientists" (Nature). The technology and lifestyle site CNET assures us that "ChatGPT Will Be Everywhere in 2023." In their view, "Chatbots aren't new. They've existed in some form since as far back as the 1960s. But there's something special about ChatGPT…The internet already abounds with ideas for how to put ChatGPT's human-like dialogue to use, from creating custom chatbots to help fight traffic tickets to creating workout and diet plans." In 2023, "artificial intelligence experts expect to see a wave of new products, apps and services powered by the tech behind ChatGPT. It could change the way we interact with customer service chatbots, voice-enabled virtual assistants like Alexa or Siri, search engines and even your email inbox." They quote Oren Etzioni, adviser and board member of the Allen Institute for Artificial Intelligence: "I would say, within six months or so, we're going to see a huge step-up in the conversational capabilities of chatbots and voice assistants." Microsoft, an investor in the OpenAI project, is already looking to upgrade its products by integrating ChatGPT into Bing, Outlook, Word, and PowerPoint. There's an obvious sense that this is a breakthrough, that we might be witnessing a paradigm shift to a new world of revolutionary technology akin to the rise of the internet and the emergence of smartphones.

Perhaps needless to say, I was a little more skeptical at first, instantly reminded of the nonsensical claims that Google's (not publicly available) LaMDA software passed the Turing Test last year, but a friend of mine asked me to give it a try and I figured why not? Surely, I can break this thing in five minutes or less, I said to myself with something like cartoonish malicious intent. I began my interaction with ChatGPT with a request that has tripped up many another purported Artificial Intelligence breakthrough: "Tell me how to walk down the stairs on my hands." At first, the response was literal, as though I should promptly try it out myself, but then something different and new happened. ChatGPT recognized that walking down the stairs on your hands might be dangerous. It recommended I use a spotter and have the appropriate skills and experience before attempting it. In addition, the language produced by the software was natural, easy to read, conversational, and grammatically correct. There were sentences and paragraph breaks that made sense, no mean achievement. The same proved true when I asked ChatGPT to tell me a story about a knight, a wizard, and a dragon. On its own, the machine added a backstory about a kingdom that needed a hero and a knight who rose to the challenge, seeking out the wizard and learning that a magic sword was needed to defeat the dragon. Nor did the story lack for emotion: bravery was required in the knight's battle, and after the dragon was vanquished, the land lived in peace and the knight became that hero. It wasn't quite J.R.R. Tolkien, but it was certainly impressive, building on the three elements I said should be in the story and telling it in a narrative manner. The same quality of response applied when I asked ChatGPT to write an introductory letter to a healthcare organization that requires a website or a logistics company that needs a distributed ledger. In both cases, the software provided additional detail; for the healthcare organization it was the importance of security, for the logistics company it was how a distributed ledger makes it easier to manage and audit relationships with multiple organizations. Overall, I am not exaggerating when I say these letters were better written and more insightful than many I have seen written by actual people. They included details of the value proposition that I myself would include when approaching either market. I have never seen an Artificial Intelligence generate anything of this quality.
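For anyone who would rather poke at the model programmatically than through the chat window, here is a minimal sketch, assuming the openai Python package and chat-completions interface as they existed in early 2023; the model name and exact call signature are assumptions that may not match later versions of the SDK.

```python
# Minimal sketch: querying ChatGPT via OpenAI's Python package,
# using the chat-completions call available in early 2023.
# Assumes `pip install openai` and an API key in OPENAI_API_KEY.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",  # the model behind ChatGPT at launch
    messages=[
        {"role": "user",
         "content": "Tell me how to walk down the stairs on my hands."},
    ],
)

# The reply arrives as plain conversational text.
print(response["choices"][0]["message"]["content"])
```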

Based on these few scenarios, one might conclude ChatGPT is truly intelligent, though to be sure it is not designed to pass a Turing Test or fool a person into thinking they are interacting with another human. When asked about emotions, it says it doesn't have any, smartly sidestepping the question entirely, unlike Google's potential competitor. Astute readers probably sense a "but" coming, and sure enough a few more probing questions reveal significant limitations in the software as well as illuminate some aspects of our own intelligence. Specifically, ChatGPT does not seem to have any rules of reality built in, the kind of hardwiring in our own minds against which we compare experience and thoughts. For example, when I asked it to tell me about Teddy Roosevelt's first wife without using her name, it began to provide accurate information, but then confused dates in a manner a human simply wouldn't: "She was the first wife of Theodore Roosevelt, the 26th President of the United States. Alice was born in 1884 and married Theodore in 1906. Tragically, Alice died on February 14, 1884, just two days after giving birth to their daughter, also named Alice. Theodore was devastated by Alice's death and never fully recovered from the loss." It is difficult to say where the mix-up with the dates occurred in the processing, but she was born in 1861, they married in 1880, and she died in 1884. Only her death date is correct, and yet the computer goes on to say they married in 1906, twenty-two years after she died. If a date was said to occur after a person's death, a human would know something was wrong immediately. ChatGPT, however, does not appear to have an understanding that death is final or an underlying arrow of time. Instead, it is assembling information from its vast databases and organizing it in plain language without being aware of what the underlying words actually mean or the real concepts they refer to.
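The check the software lacks is trivial to write down. As a purely illustrative sketch (the function and data layout are mine, not anything ChatGPT actually runs), here is the kind of arrow-of-time rule our brains apply automatically:

```python
from datetime import date

# Toy "arrow of time" check: flag any life event dated after a person's
# death. Humans apply this rule without thinking; a model that only
# predicts plausible text has no such constraint built in.
events = {
    "born":    date(1861, 7, 29),   # Alice Hathaway Lee Roosevelt
    "married": date(1880, 10, 27),
    "died":    date(1884, 2, 14),
}

def check_chronology(events):
    problems = []
    if events["born"] >= events["died"]:
        problems.append("death precedes birth")
    for name, when in events.items():
        if name != "died" and when > events["died"]:
            problems.append(f"'{name}' occurs after death")
    return problems

# Substitute ChatGPT's erroneous 1906 marriage date and the rule fires:
events["married"] = date(1906, 1, 1)  # month and day are placeholders
print(check_chronology(events))       # ["'married' occurs after death"]
```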

A friend of mine provided further evidence of this when asking ChatGPT about boxing in the 1980s. The software dutifully provided an answer in conversational language, but it was all wrong, incorrectly identifying Muhammad Ali's last fight, botching the outcome of a 1982 championship bout between Larry Holmes and Gerry Cooney, and making mistakes regarding Mike Tyson. Again, it is difficult to determine the precise source of the errors, and these are closer to the sort of mistakes a human would make, but once more the lack of a built-in arrow of time seems likely. Muhammad Ali's fight against Larry Holmes in 1980 was billed as "The Last Hurrah" because it was assumed Ali would retire afterwards. He did not, fighting again in 1981. ChatGPT likely misread "Last Hurrah" and failed to consider that something might have come after, mixing up a slogan for a fight with the actual last fight. Interestingly, if you ask it specifically when Ali's last fight was, the answer is correct, but when you ask it when Ali fought Larry Holmes, it reports that "he retired from professional boxing after that fight," suggesting ChatGPT generates a unique response each time and is not fully aware of what came before or after in the human fashion, making it incapable of logical consistency. To be certain, I asked the software to tell me what we have been discussing so far. In that response, it correctly noted that Ali retired after the 1981 fight with Trevor Berbick. I asked ChatGPT why it previously told me he retired after fighting Holmes, and the machine reported that it had misspoken. Likewise, I asked the software if it had ever provided incorrect information about Teddy Roosevelt's first wife. It offered a generic response about data being accurate through 2021, but when I asked about the specific statement and what was wrong with it, ChatGPT corrected itself: "I apologize, it appears that I have provided some incorrect information in my previous response. In the statement you have mentioned, the dates mentioned are incorrect." I followed up by asking if it could tell me why it got the dates wrong, and ChatGPT reported that "it appears that the incorrect information I provided in my previous response was a result of an error in my training data. I apologize for any confusion or inaccuracies caused by this mistake, and I will make sure to keep this in mind for future responses." I pressed it a bit further, asking if ChatGPT has any means to compare dates in its training data with dates in the real world, the arrow of time I was referring to earlier. It reported that it does not, somewhat proving my assumptions, but making me feel bad in the meantime: "As a language model, I do not have the ability to compare dates in my training data with dates in the real world…I apologize for any inaccuracies or confusion caused by my previous response, and I will make sure to be more careful in the future." I had no choice but to tell a machine there was no need to apologize; it's doing a great job.
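Part of the explanation may be mundane: the underlying model retains no memory between requests, so the chat interface simply re-sends the running transcript with every turn, and anything that falls outside that transcript effectively never happened. A sketch of that bookkeeping, under the same early-2023 API assumptions as the earlier example:

```python
import openai  # assumes openai.api_key is set as in the earlier sketch

# "Conversation" is just the client re-sending the full history each
# turn; the model itself is stateless between calls.
history = [
    {"role": "user",
     "content": "When did Muhammad Ali fight Larry Holmes?"},
]

reply = openai.ChatCompletion.create(model="gpt-3.5-turbo", messages=history)
history.append(reply["choices"][0]["message"])  # keep the model's answer

# The follow-up only "remembers" the first exchange because we re-send it.
# Drop those entries from `history` and the model has no idea they existed.
history.append({"role": "user", "content": "What have we discussed so far?"})
reply = openai.ChatCompletion.create(model="gpt-3.5-turbo", messages=history)
print(reply["choices"][0]["message"]["content"])
```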

In that regard, I was being honest. There is no doubt ChatGPT is an incredible piece of software, far more capable than anything of its kind to date, but it is equally clear that machines are far from intelligent in the way humans use the term. Of course, we perform functions similar to ChatGPT. We assemble new thoughts from knowledge stored in memory, answering a question based on disparate facts that we pull together to form coherent sentences, and we can make similar mistakes. There is a deeper, often unconscious level to our cognition, however, one that ChatGPT itself admits it doesn't have. Our brains are hard-wired with built-in rules of reality and laws of physics, and our thoughts are automatically compared against these heuristics on an ongoing basis to provide a real-time assessment of what is right and wrong, fair and unfair, likely or unlikely, etc. I mentioned the arrow of time. Whatever you call it, we automatically organize thoughts into before and after, cause and effect. This process is not perfect, but it is functional enough that we would be aware that a date we were reporting occurred after a person's death. We have a built-in expectation of what will happen based on previous events, we know death is final, and we evaluate our options accordingly. Incredibly, this trait appears to be shared with others in the animal kingdom, particularly primates, and something as dramatic as death is not required. An experiment conducted on rhesus monkeys found they have an innate sense of when something might be too good to be true. In 2013, Emily J. Knight, Kristen M. Klepac, and Jerald D. Kralik published "Too Good to Be True: Rhesus Monkeys React Negatively to Better-than-Expected Offers." They "tested the hypothesis that two evolutionarily-conserved evaluation processes underlie goal-directed behavior: (1) consistency, concerned with prediction errors, and (2) valuation, concerned with outcome utility. Rhesus monkeys (Macaca mulatta) viewed a food item and then were offered an identical, better, or worse food, which they could accept or reject. The monkeys ultimately accepted all offers, attesting to the influence of the valuation process. However, they were slower to accept the unexpected offers, and they exhibited aversive reactions, especially to the better-than-expected offers, repeatedly turning their heads and looking away before accepting the food item."
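The study's two processes are simple enough to caricature in code. In this toy model (every number invented for illustration), valuation accepts any edible offer, while consistency registers surprise, in either direction, as hesitation:

```python
# Toy model of the two evaluation processes from the monkey study:
# valuation (is the outcome worth taking?) and consistency (does it
# match the prediction?). All values are invented for illustration.
def respond_to_offer(expected_value: float, offered_value: float):
    accept = offered_value > 0           # valuation: take any edible offer
    prediction_error = offered_value - expected_value
    hesitation = abs(prediction_error)   # consistency: surprise slows acceptance
    return accept, hesitation

print(respond_to_offer(expected_value=1.0, offered_value=1.0))  # (True, 0.0) quick accept
print(respond_to_offer(expected_value=1.0, offered_value=3.0))  # (True, 2.0) "too good to be true"
```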

An arrow of time, cause and effect, and the resulting expectations are not the only physics programmed into the animal kingdom. The laws of motion that govern the simple tossing of a ball or throwing of a spear were not formalized until Sir Isaac Newton published Principia in 1687, but we use them every time we play catch, monkeys use them every time they climb trees, cats use them every time they jump on the counter, etc. ChatGPT itself described these laws as they relate to the tossing of a ball: "When you throw a ball into the air, it follows a curved path called a parabolic trajectory due to the force of gravity acting upon it. The force of gravity, acting in the direction of the center of the Earth, causes the ball to accelerate downwards. As the ball moves upward, its velocity slows down until it reaches its highest point, known as the apex of the trajectory. At the apex, the ball's velocity is zero and it is momentarily at rest. After reaching the apex, the ball begins to fall back down towards the ground, and as it falls, its velocity increases." Incredibly, all this is built into our brains, as is the force of gravity and approximations for any resistance we might encounter. We could not function without it; we would be unable to stand, walk, manipulate objects, or do anything requiring an interaction with the physical world. Along with this knowledge, there are similar rules for estimating size and distance, weight, the quantity of things, etc., not to mention inside and outside, mine and not mine, even the beginnings of right and wrong. Our sense of hearing also performs something akin to a complex Fourier transform, decomposing sound waves in real time, and other unconscious processes rely on similarly advanced physics. None of this implies that these rules are 100% accurate or that the universe really works this way at a fundamental level. Evolution did not begin coding them into our ancestors' genes so we could all be physicists one day. Rather, they are based on an organism's experience of the world as it appears to work for purposes of survival and reproduction, experience gleaned from billions of years of evolution and, for higher animals, the lifetime of the organism. This is one of the reasons the average person has such a difficult time understanding the apparent paradoxes of quantum theory, for example, where an organism can be both alive and dead at the same time and a particle can effectively be in multiple places at once. These discoveries, made after generations of scientists probed the unknown, run counter to the experience built into our brains. As far as we can tell from our own personal experiences, the world simply doesn't work that way.
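ChatGPT's description boils down to a few lines of kinematics. A quick sketch, with the initial speed and launch angle invented for the example:

```python
import math

# Kinematics of a tossed ball under gravity alone (no air resistance).
# The initial speed and launch angle are invented for the example.
g = 9.81                  # m/s^2, acceleration due to gravity
v0 = 10.0                 # m/s, initial speed
angle = math.radians(60)  # launch angle above the horizontal

vx = v0 * math.cos(angle)
vy = v0 * math.sin(angle)

t_apex = vy / g                                 # vertical velocity hits zero at the apex
apex_height = vy * t_apex - 0.5 * g * t_apex**2
t_flight = 2 * t_apex                           # symmetric rise and fall
range_x = vx * t_flight

print(f"apex after {t_apex:.2f} s at {apex_height:.2f} m; lands {range_x:.2f} m away")
```

Our brains run the equivalent estimate every time we judge where a fly ball will land, without a single number passing through conscious awareness.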

Whatever the case, until Artificial Intelligence software includes this additional layer – obviously a computer version of it, with far more precise rules – it will never be intelligent the way we use the term. This does not make it any less amazing, but we remain much more so, at least for now.

You can try ChatGPT for yourself here.


2 thoughts on “ChatGPT and the physics evolution built into our brains”

  1. Hahaha! I work in tech for my day job, so I have no choice but to stay on top of these things, but I tend to agree with you, though not for the reason most think. I am skeptical that we are anywhere near Skynet out of The Terminator franchise, but I believe our reliance on these machines is melting our brains. No one thinks or does anything anymore, personally or professionally. It’s all on some kind of device and I don’t think we can afford the loss of brain power for long. 🙂
