This is my sixth post on the subject and the third in a series about artificial intelligence (links are at the end).
Reading my previous posts, you may have gotten the impression that I do not like AI, that I don’t take it seriously or that I have some sort of opposition to it. Nothing could be further from the truth. I like it, I respect it and I am excited about its potential. I am only concerned about its politicization, a clear lack of understanding of its limitations, the quasi-religious reverence afforded to it and the tremendous potential for its misuse.
This post is about the promises, but let me restate the points above to set the stage for understanding those promises.
Artificial intelligence is qualitatively different from natural, human intelligence.
It will never be our overlord.
Natural (human, biological) intelligence is the product of evolution and cannot be replicated.
The expression “artificial intelligence” is misleading, as it suggests that natural intelligence can be somehow reproduced by artificial means.
The concept of Artificial General Intelligence is nonsense.
Narrow AI is a tool and it should be treated as such. All AI should be treated as narrow.
Large Language Models (LLMs) are trying to mimic human intelligence without having access to the prerequisites that created human intelligence.
Narrow AI is a tool with tremendous promise and potential while LLMs are abominations.
The most important task for us, humans, is to clearly understand what we can and cannot expect from these most powerful tools that we are creating.
Starting with the understanding that they are nothing more than tools.
The promises of narrow AI
As I said above, all AI should be treated as narrow. Computers can do certain things much better than we can: calculation speed, data retrieval and analysis, pattern recognition, modeling and design with specific goals. Computers can be the best tools when they do not have to deal with the ambiguities and fuzziness that would prevent them from making good decisions. The game of Go and autonomous driving are good examples.
In Go, you have a finite space, simple rules, a simple goal and an extremely large number of options.
In self-driving, the goal is simple: “move from point A to point B,” while the execution is very, very complex, with an endless set of specific decisions.
In both cases, the goals are clear and the rules are simple. The range of problems with similar parameters is limited only by our imagination and our ability to ask good questions.
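To make the point concrete, here is a minimal sketch (my illustration, not anything taken from the systems mentioned above), with tic-tac-toe standing in for Go. The board, rules and goal are tiny and fully specified, so a computer can simply search the whole game, which is exactly the kind of bounded problem narrow AI thrives on. The function names and the search style are my own choices for the example.

```python
# A toy illustration of a narrow, well-bounded problem: tic-tac-toe.
# Finite board, simple rules, a clear goal -- so the whole game tree
# can be searched exhaustively (unlike Go, where the same idea needs
# far more sophisticated search and learning).

from functools import lru_cache

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
         (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
         (0, 4, 8), (2, 4, 6)]              # diagonals

def winner(board):
    """Return 'X' or 'O' if someone has three in a row, else None."""
    for a, b, c in LINES:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

@lru_cache(maxsize=None)
def value(board, player):
    """Game value for the player to move: +1 win, 0 draw, -1 loss."""
    other = 'O' if player == 'X' else 'X'
    if winner(board) == other:      # the opponent's last move already won
        return -1
    if ' ' not in board:            # full board, nobody won: draw
        return 0
    # Try every legal move and keep the best outcome for the current player.
    return max(-value(board[:i] + player + board[i + 1:], other)
               for i, cell in enumerate(board) if cell == ' ')

if __name__ == "__main__":
    # With perfect play from both sides, tic-tac-toe is a draw (value 0).
    print(value(' ' * 9, 'X'))
```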
AI can be developed in just about any field of scientific research to be an essential tool for testing, modeling and designing.
AI is much better equipped than we are to analyze thousands of samples not only to answer our original questions, but also to identify patterns and trends to design the next iteration of tests to be performed seeking answers to an evolved set of questions.
AI is much better equipped than we are to run thousands of simulations, as it did in Tesla’s latest electric motor design. Finding efficiencies is pattern recognition and AI is great at doing that.
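As an aside, the loop being described here is simple enough to sketch. The snippet below is a made-up, self-contained example (the simulate_efficiency function, the winding_angle parameter and all the numbers are invented for illustration; this has nothing to do with Tesla’s actual tooling): sweep a design parameter, “simulate” each candidate, and let a simple curve fit point at the most promising region.

```python
# A minimal sketch of "run many simulations, then find the pattern".
# Everything here is synthetic and for illustration only.

import numpy as np

rng = np.random.default_rng(0)

def simulate_efficiency(winding_angle):
    """Stand-in for an expensive physics simulation (hypothetical model)."""
    true_optimum = 37.0  # degrees, made up for the example
    return 0.92 - 0.0004 * (winding_angle - true_optimum) ** 2 + rng.normal(0, 0.002)

# Run a few thousand "simulations" across the design space.
angles = np.linspace(10, 60, 2000)
efficiencies = np.array([simulate_efficiency(a) for a in angles])

# Pattern recognition in its simplest form: fit a quadratic and read off the peak.
a2, a1, a0 = np.polyfit(angles, efficiencies, deg=2)
best_angle = -a1 / (2 * a2)
print(f"estimated best winding angle: {best_angle:.1f} degrees")
```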
The next industrial revolution could very well be based on materials science. What we know so far in this field barely scratches the surface of the possibilities. The same goes for structural engineering, chemistry, biology and the medical sciences.
Business management and financial services can also benefit from narrow AI. The key, again, is well-formulated questions applied to large sets of data.
Even LLMs (Large Language Models) can have a number of positive uses.
They could make the law much more accessible. They could make business management more agile.
They could accelerate the evolution of coding.
They could make education more effective by providing personalized tutoring tailored to specific needs.
We could go on finding more examples, each with the same conceptual framework: a narrow task with well-defined boundaries. None of them needs centralized or even globalized regulation. They are fine with scores of different efforts aimed at the same problem. They are fine with free-market competition, global cooperation or open-source initiatives. They have no use for the straitjacket of international regulations.
Every technological revolution in history took us another step away from drudgery and toward a land of previously unimaginable possibilities. The AI revolution is no different. The power of narrow AI lies in its ability to liberate our imagination and creativity.
The point of narrow AI is to take a little more perspiration out of our lives so that we can focus more on the inspiration.
Side effects – the economic fallout
Let’s start with the obvious: the number of jobs that will be created by the expansion of narrow AI research will dwarf the number of jobs that will be lost in the same fields. It could be a classic example of Luddite fears versus the Jevons paradox.
Changes may be radical and may come surprisingly fast, but they will be much smaller in scope than the hype seems to suggest. We have far greater problems than disruptive technologies.
The promises of AI far outweigh its dangers.
This does not mean that the dangers are not real, but the dangers are not intrinsic to AI.
As always, the danger is us.
The dangers of mistaking LLMs for intelligence and the delusion of AGI
Large Language Models have no direct connection to reality. They don’t ‘know’ anything.
What appears to be knowledge is what we fed them, including all the limitations we have, plus the limitations of our reigning paradigms and acceptable narratives. LLMs are reflections of snapshots of our understanding of reality.
When I see LLMs, when I interact with them, I see everything that is wrong with our culture. Heavily biased ideological narratives, partisan politics, conformist mediocrity, manipulative language, shameless propaganda, corrupted science and clear signs of censorship.
Whenever people talk about the dangers of AI, I see the dangers of its abuse, especially in LLMs.
I see people with agendas; I see people with ideology and the desire for control; I see a bunch of wannabe Big Brothers.
I cannot shake off the feeling that the discussions we are having are just part of another WEF globalist project manipulating us toward herd-like compliance, toward the acceptance of the one truth of the collective will. The whole thing stinks of globalist aspirations. I also cannot shake off a sense of desperation in it.
Maybe
is right and this desperation is a sign of the empire losing control. But at this point, on this issue, they seem to be firmly in control. I have not come across a single article, video or conversation making any of the points I have tried to make in this or any of my previous posts on the subject.
In the end, I am optimistic. I always am, but I could also be wrong. If you think I am, let me know. If I am missing something, let me know. If you see the dangers where I do, help me spread the message by sharing this post.
Further reading (more from me)
AI is no match for natural stupidity
You can find a long list of references at the end of this post.