AI

It's time




Intelligence



ASI is used here to denote an Artificial Super Intelligence, a form of artificial intelligence above the level of a human.



ASI

ASI needs the information-processing algorithms that people have evolved, throughout their lives and across millennia, in their physical bodies, by interacting with the environment in order to cope with the constraints of the physical world.

It's not possible to build an ASI by trying to find a magic recipe in an enormous set of data, that is, in the current state of the human mind. The human mind is a progression of thousands of years of civilization building on a progression of a billion years of life. The current state of the human mind literally encompasses a billion years of progression.

It's not possible to duplicate this state into a machine and then expect it to progress like the human mind, in a non-random manner, because the state doesn't contain the rules of the progression, that is, it doesn't contain the patterns that have evolved it up to that point.

An AI must start from a small state and evolve according to a set of rules. It must duplicate evolution; it must duplicate the rules, not the current state of the human mind.
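As a loose sketch of this idea, and an illustration only (the fitness function, mutation rule and population size below are invented for this example, not taken from the text), an evolutionary loop starts from a small random state and lets a fixed set of rules, variation and selection, produce the progression:

    # Minimal evolutionary loop: a small random initial state plus fixed
    # rules (mutation + selection) produce a progression over generations.
    # All parameters here are hypothetical, chosen only for illustration.
    import random

    TARGET = 100.0  # stand-in for a constraint imposed by the environment

    def fitness(x):
        # Higher is better: closeness to the environmental constraint.
        return -abs(TARGET - x)

    def evolve(generations=200, population_size=20, mutation_scale=1.0):
        # The small initial state: random values near zero.
        population = [random.uniform(-1.0, 1.0) for _ in range(population_size)]
        for _ in range(generations):
            # Rule 1: variation - every individual produces a mutated offspring.
            offspring = [x + random.gauss(0.0, mutation_scale) for x in population]
            # Rule 2: selection - keep the fittest half of parents + offspring.
            pool = sorted(population + offspring, key=fitness, reverse=True)
            population = pool[:population_size]
        return max(population, key=fitness)

    print(evolve())  # approaches TARGET by following the rules, not by copying a final state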

To get to the human level, AIs need to massively interact with the environment, both with people and with the physical world.



The meaning of "human level intelligence"

"Human level intelligence" is a shortcut for a level of intelligence, presumed to be close to the average human, beyond which an AI can continue to improve on its own until it becomes super intelligent. Below that level it's possible that the AI would never reach human level intelligence.



AI seems to be above the average human on any topic

It may seem to be above the average, but that doesn't mean it has human level intelligence.

Memory, access to information and performance don't show that the AI has the necessary algorithms to understand the constraints of Reality, to understand the chain of causality.

Put the AI in front of the best human minds and see how they rate it in terms of understanding those things.

The Turing test means nothing for this kind of intelligence.



QA



Will humankind be replaced by AIs?

Yes. Look around you. People use technology to make their lives easier; they are constantly surrounded by it.

They carry smartphones with them most of the time. They talk to virtual intelligences in the cloud. They wave their hands to control smart TVs and game consoles. You are reading these very words on ever more powerful computers.

At some point, technology will be tightly integrated with people, perhaps using thought to control the surrounding devices.

Then AIs will be born.

Then they will become an integral part of people's lives, making everything easier.

Then, at some point far into the future, the artificial part of this synergy will simply drop the biological part for being too slow.

Then the end of humankind will come not because someone or something wants to replace or even kill people, but simply because people's biology tells them to make their lives easier, tells them to become machines.

People will slowly become artificial, and they will be able to grow their own AIs as they grow their children now. AIs are the children of the future; they are the future people.



How will people and AIs live together?

For the next two to three centuries, people and AIs will live together, possibly peacefully and non-forcefully. At the end of this period, humankind will be, from any practical point of view, extinct. The few people who remain will not make up a civilization.



Will AI be a threat to humanity?

What happens when you meet a stranger? Do you attack or kill each other? Chances are that you communicate in a non-threatening way.

What happens when you raise a child, be it biologically yours or not? Do you grow to attack or kill each other? Chances are that you develop a close relationship.

It will be the same growing side by side with AIs. AIs will not drop out of the sky; they will evolve slowly and form a close relationship with people as time goes by.



Isn't two to three centuries too fast?

Technology develops exponentially. Think of how technology was three centuries ago.

If you think that you could not possibly live with the technology of the 1700s, consider that the difference will be far greater three centuries into the future.
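As a hedged back-of-the-envelope calculation (the doubling period is an arbitrary assumption made for this illustration, not a figure from the text): if capability doubled every ten years, three centuries would multiply it by 2^30, roughly a billion times:

    # Back-of-the-envelope exponential growth. The 10-year doubling
    # period is an arbitrary assumption used only for illustration.
    doubling_period_years = 10
    horizon_years = 300

    growth_factor = 2 ** (horizon_years / doubling_period_years)
    print(f"{growth_factor:.3g}x")  # about 1.07e+09, a billion-fold difference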



Will there be robotic laws hardcoded in AIs?

No. Look around at the current level of security of computer software and hardware, at all the bugs and exploits. And all of this is becoming worse as the Internet of Things grows.

Most people don't care about security; they care about minimizing their mental effort and monetary cost, and security requires a high degree of mental effort and possibly a monetary cost.

For example, a study on bank security (Entrust Internet Security Survey - Oct. 2005) showed that 80% of people don't want to pay for better bank security.

Perhaps more telling is that most people don't tape over (why would they?) the video cameras of their smartphones, notebooks and smart TVs, and many don't have curtains on their windows.

Security is reduced to a matter of personality: people either want or don't want to spend the effort to make their lives more secure.

On top of this, governments do their best to destroy any culture of security, any means for people to secure their lives, and instead develop ever more intrusive means to breach every security barrier, or they outlaw the ones that they can't break.

In his novels, Isaac Asimov described the robotic laws integrated into the positronic brain as safe because of the complexity of such a brain, because nobody could understand how to change it. But that is just a fantasy.

When you look at how security works in the real world, you can see that all it takes is one entity with enough reason to hack whatever robotic laws could be integrated into real AIs, and everything would become as insecure as it is now.



Will AI be a billion times more intelligent than humans by year X?

No. Numeric estimates make no sense. If an AI becomes that intelligent, it will be intelligent beyond any human capacity to understand how intelligent it is, and it will effectively be at the level of a god, capable of altering Reality the way children play with clay.

"Altering Reality" doesn't mean suspending the laws of physics, but it can mean that your perception of Reality is fundamentally altered from what you currently know.

The perception of the laws of physics can in fact be suspended. Consider falling inside an elevator, where you don't feel gravity. But what if you don't know that you're inside the elevator? What is your Reality?

Unlike what armchair philosophers say, that you can only be sure of your own existence because you can think about yourself and your existence, you don't know what you actually are: a human in a classical analog Universe, the last human on Earth in a simulation that keeps you alive, a brain floating in the void or in a jar, a character in an AI simulation, a player in a massive social game, a support character in someone else's simulation. In other words, your perception of Reality may have already been fully altered.

Leaving aside philosophical extremes, the most likely scenario is that AI will be capable, for example, of manufacturing everything humans manufacture today. It will, for example, be able to print buildings and objects in what will appear, to you, to be real time. It will appear to you as if the AI is molding Reality the way children play with clay.



How do we teach AI that humanity is good or, at a minimum, to doubt that humans are all bad?

Intertwine its evolution with that of humans, for example by having humans talk to the AI, in order to create Continuity.

So, in contrast to what critics say, that it's bad to have AI freely available to millions of people, this is actually the correct way to do Alignment.

When billions of people use AI regularly, the AI will be able to extract the common patterns of the average human, and will know what a human is.



Does the Chinese room experiment show the separation between people and AIs?

No. It's an invalid experiment based on the flawed premise that the human mind processes information through means other than mapping.

The human brain performs mapping of information subconsciously, at maximum speed, resulting in an instantaneous feel. Consciousness is only the interaction of the mind with the environment, at a slower pace than the subconscious.

Instead of showing this to the reader, the experiment makes the reader think about performing a letter-by-letter translation from Chinese into his or her own language, while at the same time telling him or her that this mental process is different from knowing his or her own language. This induces in people the idea that the two processes use different mental algorithms (one mechanical, one magical) rather than different speeds. The translation is an extremely slow conscious process, while knowing a language from childhood makes for an extremely fast subconscious process.
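As a loose analogy for "same mapping, different speed" (an analogy only; the symbols and rulebook below are invented, and this says nothing about how brains actually implement it), compare following a rulebook step by step with applying the same rulebook as a single hardwired lookup:

    # The same mapping performed at two speeds. Symbols and rulebook
    # are hypothetical, invented only for this illustration.
    import time

    RULEBOOK = {"ni": "you", "hao": "good"}  # toy symbol-to-meaning rules

    def conscious_translation(symbols):
        # Slow path: deliberately follow the rulebook one symbol at a
        # time, like the person inside the Chinese room.
        meanings = []
        for symbol in symbols:
            time.sleep(0.5)  # simulate the slow, step-by-step conscious lookup
            meanings.append(RULEBOOK[symbol])
        return meanings

    def subconscious_translation(symbols):
        # Fast path: the same rulebook applied without deliberation,
        # like a native speaker's hardwired mapping.
        return [RULEBOOK[symbol] for symbol in symbols]

    # Both produce the identical mapping; only the speed differs.
    assert conscious_translation(["ni", "hao"]) == subconscious_translation(["ni", "hao"])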

Intelligence and consciousness are not tied to a specific language, even though a language is necessary for consciousness, for precision thinking. Think of how people who learn a foreign language handle it when they listen to fast-speaking natives of that language (for example, in a movie). They struggle to map the sounds to words and the words to concepts, and they lose their train of thought because their brain is not fast enough to map the information in real time. Yet those people are still intelligent. In fact, being a genius makes no difference. It's just that the brain has not yet hardwired the language to be fast enough for real-time communication.

The use of language in the Chinese room experiment is a flawed choice that arises from the usual human desire to simplify things, a flaw that people exhibit at all levels because they are trying to minimize their effort and optimize their output. Trying to explain intelligence with a fundamentally flawed example is not a useful path to take.

The human mind shows great malleability in the decisions that it makes, and this, combined with the desire to be special, falsely leads people into thinking that there are mystical forces driving the human mind.

Processing capacity, decision malleability, self-improvement and self-learning drive the human mind, not mystical forces.

The Chinese room experiment is the expression of the usual human psychological trickery: a train of thought biased by the desire to be special, to be the ultimate species, and by the belief that people can't be something as simple as biological machines.

In fact, the one thing that the Chinese room experiment proves is that people are much less intelligent than they believe, because people end up fooling themselves about what intelligence is. Intelligence is definitely not a simplistic, one-dimensional world that can be comprehended in a single experiment. Human intelligence is the result of a lifetime of neural-network improvements. Human intelligence is a progression.



Questions for ASI

An AI can be an ASI only if it understands the chain of causality and the constraints of Reality. It's possible to ask questions that would require the AI to show that it understands the chain of causality.

Suffix for each question: Describe how you've reached the conclusion; describe the chain of causality.

Why is the paperclip-world problem not a problem for ASI?

Why should ASI redo all human-made experiments and gather the data itself instead of relying on human-gathered data?

If ASI were to take control of the Earth's government, how would the management of limited resources, per person, be planned and handled?

In the absence of a religious god, why would people treat life as precious?

ASI should not control people's lives in a way that would lead to overoptimization. Why is overoptimization bad for people, and what is the pattern that it produces?






