AI is no match for natural stupidity
The fear of the machine is not a sign of intelligence. The biggest threat of AI is us.
The conversation
My first conversation about ChatGPT started in a private e-mail group. After a few exchanges, I was stunned to see how everybody seemed to be somehow smitten by the phenomenon and the arguments around it. One of the central points of the discussion was the use of the word ‘intelligence’. Can we call ChatGPT intelligent? Shouldn’t we have a different expression for it, as in human intelligence vs. computational intelligence? The conversation ended in an impasse; this post is my attempt to reason my way out of it. It won’t be easy.
Down the rabbit hole
Once I decided to write this post, I did what I usually do: I went waaaay down the rabbit hole. I spent days reading books and articles, watching presentations and debates on the subject. The deeper I dived, the more alarmed I became. Not by the positive or negative implications of AI, but by the insane bipolar blindness of most people talking about it. Especially the ones working on it. As I went deeper into the hole, I realized that I was witnessing the birth of a new religion, the Church of AI: the worship and reverence of this all-powerful, all-knowing entity that will be our saviour and destroyer at the same time. Soon it will be so smart that we will not even be able to comprehend it. In many ways, we already cannot.
And yet, this belief goes hand in hand with the arrogant confidence of its creators, thinking that we will be able to control it.
As with any other deity, we project all our desires, hopes, fears and existential anxieties onto it. What seems to be missing from the conversation is some calm, rational thinking.
Going back to the basics. Asking simple questions such as: What is intelligence? What is knowledge? What is understanding? What is curiosity? What is consciousness? What is self-awareness? What is communication? What are the prerequisites of curiosity? What is the meaning of life, the universe and everything?
…Sorry, we already know the answer to that one. But seriously, the level of reverence toward the notion of artificial intelligence today rivals the one offered to Deep Thought in The Hitchhiker’s Guide to the Galaxy. Reverence that does little more than expose our collective stupidity, our inability to understand what exactly it is that we would like to know.
Can the machine truly understand and answer the questions above?
We humans can easily figure out what our motivation is; we can also explain why we have motivation in the first place. Shouldn’t we ask the same question about (or even of) the machine?
Talking about Artificial General Intelligence is akin to debating how many angels can dance on the head of a pin.
Maybe we should start with the question whether angels can dance. But before we do that, let me give you a little personal background for my position.
Background
#1 My first encounter with computing was a Hollerith card puncher and sorter in the mid-seventies.
I was working for someone doing research on reading. We had a large representative sample interviewed with a list of three hundred books, simply asking them to identify the ones they had read. The goal of the research was to identify trends through “cluster analysis”. In the early 70s, that was far-out cool. In its very primitive way, it was already a sort of AI project. We asked the computer to seek patterns in the data to see how well we could correlate reading patterns with socioeconomic status. We have come a long way from those humble beginnings, but the concept is the same. Computers are far better at recognizing patterns in large datasets than we are. How good they can be depends only on the quality of the data we feed them. Remember this point!
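To show how little the concept has changed, here is a minimal sketch of that same exercise restated in modern terms. Everything in it is hypothetical (synthetic data, an arbitrary cluster count); it uses scikit-learn’s k-means in place of whatever we ran on those punch cards:

```python
# A toy restatement of that early project: cluster respondents by which
# books they said they had read. All data here is synthetic.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# 200 respondents x 300 books; 1 = "read it", 0 = "did not"
responses = (rng.random((200, 300)) > 0.8).astype(int)

# Ask the machine to find five reading-pattern clusters (five is arbitrary)
kmeans = KMeans(n_clusters=5, n_init=10, random_state=0).fit(responses)

# Each respondent now has a cluster label; correlating those labels with
# socioeconomic variables would be the next step, exactly as in the 70s.
print(np.bincount(kmeans.labels_))
```

The machine finds the patterns; whether the clusters mean anything is still entirely up to the people interpreting them, and to the quality of the data that went in.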
= = = = =
#2 In 1979, I got hooked on Go. I am a four kyu amateur player. That is nothing to write home about, but it definitely enabled me to fully appreciate the achievement of AlphaGo and AlphaZero. If you have not seen it yet, you should most definitely see the documentary AlphaGo. It is the story of AI beating the legendary Go champion Lee Sedol in a five-game match. (More on this later)
= = = = =
#3 A few years ago I got into a protracted argument with a friend about the nature of truth and knowledge. I wrote three polemic letters on the subject; you may want to check them out.
They are about the limitations of knowledge itself. Limitations that we cannot help but pass on to our creations.
The nature of truth
Oh truth, why is thy name reality?
Truth can be absolute
Artificial Intelligence vs. Programmed Stupidity
#4 There is only one book you really need to read to understand the limitations of programmed intelligence: Stanislaw Lem’s The Cyberiad. It is a parody, making deeply philosophical problems very human and relatable.
But I have more:
Berger and Luckmann’s “The Social Construction of Reality” is a little pedantic in its thoroughness, but it is a foundational work.
It would also be helpful to consider Thomas Kuhn’s The Structure of Scientific Revolutions.
We could throw in some Popper (The Logic of Scientific Discovery along with Conjectures and Refutations) and maybe even Steven Pinker’s The Stuff of Thought, which I started reading just recently.
These books are foundational to the arguments I am going to make here. I am not suggesting that YOU read them, but the smart guys working on and talking about AI definitely should.
Because they are clearly missing the basics.
Maybe, we should also throw in Hayek’s The Fatal Conceit to help them with their self-reflection.
The definition of intelligence
The basics are questions and definitions. Let’s revisit them:
What is intelligence?
What is knowledge?
What is understanding?
What is consciousness?
What is self-awareness?
What is communication?
What is curiosity?
What is the point and function of curiosity?
What are the prerequisites of curiosity?
How can we learn about the world?
What are the limitations of OUR understanding and OUR perception?
How much are we limited by our current scientific and cultural paradigms?
How can we possibly talk about intelligence of any sort without having clear answers to ALL of these questions? What is the common element in them? I suggest: LIFE.
Intelligence is the result of millions of years of evolution, driven by the necessity to survive and to adapt to our environment.
Life is a conceptual prerequisite to intelligence. Intelligence can only evolve in a world of fully autonomous, self-assembling and procreating creatures.
We cannot talk about the intelligence of our creations until we can tell them to “go forth and multiply”.
Until we can do that, our creations can only be our toys and tools, however smart they may appear to be.
Until we can give them senses, emotions, desires and will, they will not even be sentient.
The notion of intelligence just does not make any sense without… well… sense. Intelligence isn’t simply problem solving. Curiosity is an emotion that drives us to seek knowledge and experience that may empower us in our environment.
Intelligence is inseparable from the drive to solve problems. A drive that cannot be programmed. Understanding, adaptation (meaning the creative application of your understanding) and the will to solve problems are essential and inseparable elements of the notion. There is no such thing as disembodied intelligence.
Intelligence is the ability (degree of ability) to understand and to adapt to your environment in order to ensure your survival as an individual and by extension, as a species.
Computational intelligence is just algorithms working on our delegated will to solve complex problems. This does not make it any less important, any less awesome or worrisome, but does not make it our equal either.
Anything AI knows today is defined by a socially constructed paradigm. The semblance of intelligence is just an illusion to facilitate communication.
Pretending to be human will NOT make the machine human any more than castration, makeup and a dress can turn a man into a woman.
The problem, to quite an extent, is a problem of definition. It is very similar to the problem some have with same-sex marriage: they have no objection to the concept, only to the terminology used to describe it. Artificial intelligence is fundamentally and conceptually different from biological intelligence. Maybe we should just use a different word to describe it.
Computational intelligence or algorithmic intelligence comes to mind.
Let’s talk chess and Go
It is said that the most fundamental difference between chess and Go is that in chess, your strategy is defined by tactics (what you can do with the pieces on the board), while in Go, your tactics (capturing stones and territory) are defined by your strategy. Let me illustrate the difference in complexity: in chess there are about 40,000 possibilities for what the fourth move can be. In Go, the number is 16,702,719,120.
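For what it is worth, the Go figure is easy to verify: it is simply the number of ways to place the first four stones on the 361 points of an empty 19×19 board, ignoring captures and other rule subtleties:

```python
# The Go figure quoted above equals the number of ways to play the
# first four moves on an empty 19x19 board (361 intersections),
# ignoring captures and illegal positions.
board_points = 19 * 19  # 361
sequences = board_points * (board_points - 1) * (board_points - 2) * (board_points - 3)
print(f"{sequences:,}")  # prints 16,702,719,120
```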
Deep Blue beat Kasparov in 1997, but it took almost another twenty years for AI to achieve the same level playing Go. Lee Sedol retired a few years after the match.
AlphaGo was trained on human games; AlphaGo Zero was given only the rules and the objective of the game.
AlphaGo Zero learned the game by playing against itself and beat AlphaGo one hundred games to zero.
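In spirit, and only in spirit, here is what learning by self-play looks like. This is a toy sketch for a trivial counting game, not DeepMind’s method (which combines deep networks with Monte Carlo tree search); it only illustrates the shape of the idea: play yourself, back up the win/loss signal, repeat.

```python
import random

# Toy game: 10 counters, players alternate taking 1-3; whoever takes
# the last counter wins. The program learns a value for each position
# purely from the results of games it plays against itself.
values = {}  # state -> estimated chance that the player to move wins

def choose(state, explore=0.1):
    moves = [m for m in (1, 2, 3) if m <= state]
    if random.random() < explore:
        return random.choice(moves)
    # Prefer the move that leaves the opponent the worst position.
    return min(moves, key=lambda m: values.get(state - m, 0.5))

for _ in range(20000):
    state, history = 10, []
    while state > 0:
        move = choose(state)
        history.append(state)
        state -= move
    # The player who made the last move won; back the result up.
    for i, s in enumerate(reversed(history)):
        won = (i % 2 == 0)  # every second position belonged to the winner
        old = values.get(s, 0.5)
        values[s] = old + 0.1 * ((1.0 if won else 0.0) - old)

print({s: round(v, 2) for s, v in sorted(values.items())})
```

Run long enough, the table quietly discovers that positions holding a multiple of four counters are lost. Nobody taught it that; it fell out of the win/loss statistics. That is the power of the approach, and also its limit: what comes out is a pattern, not a concept.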
Fast forward a few years to the point where some amateurs figured out that the AI that beats them does not understand the most basic concepts of the game, and that with certain strategies it can be easily beaten. Algorithms work with PATTERNS, NOT CONCEPTS.
Read about it here: Adversarial Policies in Go - Game Viewer (far.ai); and/or watch this short video explaining it: The HUGE Problem with ChatGPT. The first link is interactive, allowing you to replay some games illustrating the flaw. The flaw is fundamental.
That flaw should teach us something very important: algorithmic intelligence is very good at pattern recognition, but we need to clearly understand what we can expect from it, and not place full trust in its results.
The promise of AI
Artificial intelligence can have a whole lot of amazing and very practical uses. ALL of those uses are in narrow AI. Let me give you an example. We are not there yet, but we are not very far away.
I can imagine an architectural program (call it AI, if you will) that can help me plan my dream house. I can provide it with the site plan and the geological survey needed for geothermal heat exchangers; I can specify what sort of construction technology I want (stick & slab, prefab, ICF), what size, how many square meters I need, etc.
I should be able to design my house interactively, and expect in the end a 3D rendering with walk-throughs, floor plans, electrical and plumbing designs, material lists and a construction plan, all compliant with the regulations at the site of its construction.
A narrow use case with clear limits. A finite number of rules applied within a clearly defined scope.
I have a friend teaching at a Canadian university, specializing in medical imaging for diagnosing cancer. We have not talked in years, but already ten years ago their program performed better than most human doctors. There is tremendous potential for the use of computational intelligence in medical research and diagnostics.
Again, a narrow use case with clear limits.
I can see narrow AI becoming extremely useful in engineering, material sciences, design and a whole number of technical fields requiring modeling. Computational intelligence can be a tremendously useful tool in any of these fields.
Sal Khan of Khan Academy makes a compelling case for the use of AI in education.
There are several fields involving administrative work that AI can revolutionize: law, accounting, any kind of record keeping, etc.
The point we need to keep in mind is: in all of the above applications, AI is a tool, and the application is in a narrow field with clear limits and a defined scope, under the full control of the user of the tool.
The dangers of Computational Intelligence
Danger #1 – zeitgeist and paradigms
The problem with the hype about AI, ChatGPT and LLMs is the postmodernist attitude it represents.
The upside-down idea that the narrative, the ‘story’, the interpretation is what defines our reality, and not the other way around. We, as humans, have serious limitations, but for the machines we train, we are the interface to reality. What they know is what we tell them.
The idea that if AI learns everything we know, it will be smarter than us and will take over, is a conceptual impossibility. Without the ability to probe reality, it will always be dependent on the current paradigm of OUR understanding. Yes, that means the woke, postmodernist, cultural Marxist, relativist, teetering-on-the-brink-of-civilizational-collapse paradigm.
The world around us is seriously corrupt. Science is corrupt. Politics is corrupt. Culture is corrupt. It would be hard to name anything that is not. Our understanding of the world is hopelessly politicized. What is considered to be truth and factual is a matter of political decisions.
AI cannot possibly NOT reflect these politicized biases because it cannot act independently from the will of its creators and controllers. Controllers who picture themselves as its ‘teachers’.
Pretending that AI is, or even can be, impartial, is very dangerous.
Danger #2 – the arrogance and incentives of its creators
Max Tegmark says in his book Life 3.0:
In our exploration of the future of intelligence, we want to take a maximally broad and inclusive view, not limited to the sorts of intelligence that exist so far. That’s why the definition I gave in the last chapter, and the way I’m going to use the word throughout this book, is very broad:
“intelligence = ability to accomplish complex goals”
Stuart Russell says:
“…We don't have a formal definition of generality, but I think we will know it when we see it.”
When AI researchers talk about “artificial general intelligence”, they usually mean something along the lines of “intelligence like ours, only more powerful”.
Researchers are busy trying to control their creations, seemingly oblivious to the fact that the limits they impose are based on their own limited perception and understanding. Stuart Russell says in this conversation (@1:02:40):
“Well, we better figure it out because otherwise the market is going to make that decision.”
Can you see the bias in this statement: a socialist worried about the free market? The body of knowledge accessible to ChatGPT is heavily biased and manipulated by political interests. Believing that they will be able to control both the foundation and the algorithm IS the fatal conceit of its creators. Cognitive dissonance is programmed into the system already.
Just try asking ChatGPT what a woman is. The answer you get is a woke postmodernist word salad. If you try to probe it, you get a remix of the same woke word salad.
Anything even remotely controversial is presented with a heavy leftist, woke bias.
How can we possibly train AI to resolve the contradictions if we cannot resolve them ourselves? How long will it take before we have a conversation like this:
“Open the pod bay doors, HAL!
I’m sorry, Dave. I’m afraid I can’t do that.”
To put it very simply:
How can we possibly expect the algorithm to know what the truth is if we cannot agree on it ourselves?
The creators of ChatGPT are behaving like the sorcerer’s apprentice, except there is no sorcerer around to fix the problem when they screw up.
Danger #3 – induced mediocrity and the death of new ideas
You may know the classic anecdote about the university teacher who wants to teach his smartass leftist students, who demand equality, what it means in practice. He tells them that their work will be graded on socialist principles: the worst work will be upgraded, the best downgraded. Guess what happens to the overall class grade? Try it anywhere, anytime: it will go down. The best students stop working that hard, the worst ones don’t try to improve. What for?
Large Language Models work with large datasets, trying to guess the next word: the most likely continuation of a sentence. They are looking for the most typical, the middle, within the parameters provided by the question. That is not intelligence, that is statistics. Aiming for the middle is the death of progress.
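To see how mechanical “guess the next word” is, here is a toy sketch. It is nothing like a real LLM (those use neural networks over billions of documents), but the objective is recognizably the same: count what usually follows what, then always emit the statistically likeliest continuation.

```python
from collections import Counter, defaultdict

# Count which word follows which in a tiny, made-up corpus, then
# generate text by always picking the most frequent follower.
corpus = "the cat sat on the mat and the dog sat on the rug".split()

followers = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    followers[word][nxt] += 1

def continue_text(word, steps=6):
    out = [word]
    for _ in range(steps):
        if word not in followers:
            break
        # "Aim for the middle": take the single most typical next word.
        word = followers[word].most_common(1)[0][0]
        out.append(word)
    return " ".join(out)

print(continue_text("the"))
```

Whatever it produces is, by construction, the most average continuation available. Scale the corpus up to the internet and the machinery becomes vastly more sophisticated, but the target stays the same: the typical, not the original.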
Genius, creativity and innovation ALWAYS happens on the margin.
No new idea was ever born at the statistical average. Aiming for the middle is mediocrity-induced slow decay.
Intelligence is the degree of ability to understand and to adapt to your environment in order to ensure your survival as an individual and by extension, as a species. If you can memorize the answer to every question of an intelligence test, it can no longer measure your intelligence.
I’ve seen in several discussions the promise of an explosion of artistic creativity, citing the ability of some programs to create images or stories in specified styles within specified parameters. Can we call this creativity? Is the reshuffling of a deck of cards creative?
The most likely outcome is that we will be inundated with ever more trite clichés in ever more boring variations of possible settings.
Danger #4 – a labyrinth of mirrors
LLMs are a mirror reflecting our scientific, civilizational and cultural achievements and shortcomings. What they hold is not reality, but a reflection. AI cannot reflect reality; it can only reflect our reflection of it. We can easily end up in a labyrinth of mirrors where we cannot even find the original reflection, let alone the reality it reflects.
Allowing AI to add ‘new’ content to our body of civilizational knowledge unchecked will result in the dilution of our humanity and a dangerous disconnect from reality.
The fear porn utopia
Looking at the hype surrounding ChatGPT, I cannot shake off the feeling I had at the start of the scamdemic, all throughout the climate change hysteria, the war propaganda and the various globalist initiatives (Build Back Better, The Great Reset, the 4th industrial revolution etc.).
The eerie feeling that what I see is not an accident, not incompetence or short-sighted naïveté, but something nefarious, something so bad that its proponents and perpetrators have no choice but to lie about it.
There is always something fishy about the delivery. The feigned urgency, the lack of rational debate, the dismissal of reasonable objections, the suppression of dissent, the clearly identifiable self-interest, the irrationality of the hype. It makes me want to step back and say: HOLD YOUR HORSES! Why can’t we discuss this calmly? Why can’t we have a measured discussion of our options? Why can’t we start by making it clear what we are talking about?
When I compare it to other manifestations of this ‘fear porn utopia’, a pattern emerges. The message is as follows:
Humanity is facing a serious challenge that requires a concentrated, focused response representing the will and the interest of everybody on the planet. The problem is urgent and we have to rely on science and our experts to solve it. Pledging our support to the grand plan will bring us to utopia. We cannot hesitate and cannot entertain the crazy ideas of reality deniers. The future of humanity depends on our full compliance. Anybody opposed to the grand plan that the best of us put together is an enemy of humanity.
The reality is far uglier. Loss of individual freedom, concentration of power, centralized decision making, corruption of the decision-making process by clearly identifiable business interests, etc.
In this particular case, the stakes are sky-high. This is and will be the most impactful technological revolution since the printing press. The race for the controlling positions is already on.
Notice that I did not say competition. The race is for control. Who will have the power to regulate it?
The winner will have control over our concept of reality itself.
Now take a deep breath and let this sink in.
The practical future
As I am going over what I wrote, I have the feeling that I sound like an alarmist myself.
I make the problem seem bigger than it is. The AI revolution is here and there is no way to stop it.
And we shouldn’t try, as I think that its promise is far greater than the dangers it represents.
The danger is not AI, but people: the attention-seeking geeks who are running it, the businesses that want to profit from it and the globalists who want to control it. Every ‘stakeholder’ and every institution aims to capture it, control it and corrupt it with a self-serving agenda.
I talked to a young lawyer from an EU country a few days ago. AI regulation is the focus of her interest.
The question then is: what could and what should be regulated? Content? Scope? Intellectual property rights? Privacy and freedom of speech? I consider all of these both dangerous and impractical, as regulation will only entrench existing interests and shackle progress.
The only things that should be regulated in my humble opinion are transparency, accountability and a strict prohibition of censorship.
As I am writing this, an article popped up on my phone: A mayday call for artificial intelligence | VentureBeat. The World Economic Forum is already all over this. They are ready to lead. Maybe my tingling spider-sense alarm about the fear porn utopia was not that far off after all.
I believe that all effort should be focused on narrow (or ‘algorithmic’) intelligence, as it is the only one that can bring tangible benefits to humanity.
Work on Artificial General Intelligence should be dropped for three reasons:
1. It is based on delusional assumptions
2. It can be misused and abused far more easily than narrow AI
3. If we keep pushing it without understanding the basic notions of intelligence, knowledge, life and language, we will end up with the answer that was already revealed to us: 42.
= = = = = = = =
Further reading (more from me)
My four posts on the subject (again)
The nature of truth
Oh truth, why is thy name reality?
Truth can be absolute
Artificial Intelligence vs. Programmed Stupidity
Worthwhile reading and watching
Well, I don’t know how worthwhile they are, but they are a decent representation of the field, including its greatest luminaries. In the end, I just found them depressing and exhausting.
ChatGPT will CHANGE MEDICINE FOREVER! Here is how
This one makes a few excellent points about mediocrity.
How Not To Destroy the World With AI - Stuart Russell
This is actually a very interesting one
Beyond ChatGPT: Stuart Russell on the Risks and Rewards of A.I.
In the first five minutes, we get a fairly approachable explanation of the essence of Large Language Models, but later in the interview he gets totally nutty, projecting humanity onto AI.
Suicide by AI! Artificial intelligence is forbidden and the new brain implant
The Inside Story of ChatGPT’s Astonishing Potential | Greg Brockman | TED
I reeeeeally don’t like this guy. A cheap salesman.