The Myth of Sentient Machines

Kolya:
A machine cannot adapt like a human because it lacks experience of the world. While it can "learn" that placing the red ball into the cup results in an energy boost, whereas blue balls do nothing, even such a pitifully simple experiment requires pre-programming of what a ball is, what to do with it, and even that energy is a good thing.
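To make that concrete, here is a minimal, entirely hypothetical sketch of that red-ball experiment. Every name in it is made up for illustration; the point is how much the programmer has to hard-code before the machine can "learn" anything at all:

```python
# Hypothetical toy version of the red-ball experiment: every concept the
# agent "learns" with is hand-coded by the programmer beforehand.

# Pre-programmed: which objects exist and how they are told apart.
OBJECTS = {"red_ball": "red", "blue_ball": "blue"}

# Pre-programmed: that energy is the thing worth having.
def reward(colour_placed_in_cup: str) -> int:
    return 1 if colour_placed_in_cup == "red" else 0

# Pre-programmed: the only action available is placing a ball in the cup.
values = {colour: 0.0 for colour in OBJECTS.values()}

def update(colour: str, learning_rate: float = 0.5) -> None:
    """Nudge the value estimate for this colour toward the observed reward."""
    values[colour] += learning_rate * (reward(colour) - values[colour])

for _ in range(10):   # a few trials with each ball
    update("red")
    update("blue")

print(values)  # the agent "learns" red > blue, but only inside this scaffolding
```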
Feed a newborn baby and it requires no explanation. Being hungry and being fed is enough, because it has a living body that can experience stuff.

Humans can deal with an infinite number of situations because they can adapt memories of previous experiences to new situations, taking the differences into account. The process of how these memories are formed, reinforced and overwritten, their quality and how they influence each other, and how they make up an image of the world is inseparable from the human experience and the emotions they invoke.

Pretty soon the baby learns that its father's hairy, flat chest does not feed it. It's not as comfortable either. But the father is more likely to expose the baby to new situations, which is interesting because it learns about danger and reward. It will still take several years until it learns about other people's motives and empathy.

The human experience is just very complex. I don't understand why AI researchers and others have underestimated it for so long and continue to do so while at the same time being scared of a potential success. It's like telling yourself the old Frankenstein story over and over again. That book was written during the year without a summer, 1816, by a woman who had recently had a miscarriage, not by a scientist or an engineer.
fox:
The part I don't get is why you are convinced that a machine, theoretically at this point, wouldn't be able to emulate all the needed processes (collection of sensory input, analysis, filtering, storage, interconnection via neural networks, etc.) at some point. It seems that you yourself are relying only on the complexity argument, which could be quite dangerously short-sighted, imo. A machine like that would certainly have different experiences and arrive at different conclusions, but I don't think that's all that relevant at this point of the discussion.
Kolya:
I'm arguing that without experiencing the world an AI cannot come to any kind of understanding of it. And without experiencing a human life it will not develop anything humans would call intelligence. 

If an AI were placed in a robot body with enough sensors to experience the world, had the same inherent needs and skills as a newborn human, and was taught (not programmed) for years, it might become an artificial lifeform that develops consciousness, intent and an intelligence befitting its robot life.

But that's not going to happen, because we don't know enough about the skills a human baby inherits. For example, language acquisition is still a mystery, despite or because of Chomsky (who convinced linguists that babies inherit a universal grammar covering every language in the world, which is hooked into during language acquisition).

We also wouldn't know how to teach such a robot. It would likely be costly and long-term, and the result might be artificial algae or perhaps a dormouse. Instead the expectation seems to be that feeding a program tonnes of symbols and rules will make it connect the dots at some point. Like Google's neural network that is currently reading thousands of romance novels, hoping to enhance its emotional intelligence.

For 50 years we tried throwing increased processing power at it and got nowhere. Siri et al. are conceptually no different from the ELIZA script from 1966. And that's still the most human-like intelligence we have come up with: a script that looks for a few keywords and otherwise falls back on stock phrases.
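Roughly what such a script amounts to, as a minimal sketch in the spirit of ELIZA (the keywords and replies below are made up, not ELIZA's actual rule set):

```python
import random

# Hypothetical, simplified keyword rules -- a stand-in for ELIZA's real script.
RULES = {
    "mother": "Tell me more about your family.",
    "sad": "Why do you think you feel sad?",
    "computer": "Do machines worry you?",
}

STOCK_PHRASES = [
    "Please go on.",
    "I see.",
    "How does that make you feel?",
]

def respond(user_input: str) -> str:
    """Look for a known keyword; otherwise fall back on a stock phrase."""
    text = user_input.lower()
    for keyword, reply in RULES.items():
        if keyword in text:
            return reply
    return random.choice(STOCK_PHRASES)

if __name__ == "__main__":
    print(respond("I had a fight with my mother"))  # keyword hit
    print(respond("The weather is nice today"))     # stock-phrase fallback
```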
fox:
For the moment you are right, as far as we know. And I tend to agree that it is unlikely something like that will happen before we have figured out pretty much everything about how the human brain works. However, in my opinion it is only a matter of time, and I wouldn't underestimate the progress in the related fields - especially with multinational corporations like Google and Microsoft pressing the matter. The complexity argument does not hold in the long run, as mankind should have learned from a number of experiences before.
Briareos H:

Quote by Kolya:
He doesn't mention binary computation in the way you suggest, Briareos.
This two-symbol system is the foundational principle that all of digital computing is based upon. Everything a computer does involves manipulating two symbols in some way. As such, they can be thought of as a practical type of Turing machine—an abstract, hypothetical machine that computes by manipulating symbols.
 
A Turing machine’s operations are said to be “syntactical”, meaning they only recognize symbols and not the meaning of those symbols—i.e., their semantics. Even the word “recognize” is misleading because it implies a subjective experience, so perhaps it is better to simply say that computers are sensitive to symbols, whereas the brain is capable of semantic understanding.
My complaint here was merely that the author didn't need to remind us that computers use binary data in order to introduce the fact that they are Turing machines. He didn't need to mention binary at all but still does so, awkwardly; I get the feeling that he wanted to imply more but was wary of an association fallacy. Or maybe I'm reading too much into it, that's very possible :p and not relevant to my point anyway.
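As an aside, the "purely syntactical" point the quoted article makes is easy to make concrete. Here is a minimal, hypothetical two-symbol Turing-machine-style rule table (it just flips 0s and 1s); the rules refer only to states and symbols, and meaning never enters into it:

```python
# Hypothetical two-symbol machine: flip every 0 to 1 and every 1 to 0.
RULES = {
    # (state, symbol read) -> (symbol to write, head move, next state)
    ("scan", "0"): ("1", 1, "scan"),
    ("scan", "1"): ("0", 1, "scan"),
    ("scan", "_"): ("_", 0, "halt"),   # blank cell: stop
}

def run(tape: list[str]) -> list[str]:
    state, head = "scan", 0
    while state != "halt":
        symbol = tape[head] if head < len(tape) else "_"
        write, move, state = RULES[(state, symbol)]
        if head < len(tape):
            tape[head] = write
        head += move
    return tape

print(run(list("10110")))  # ['0', '1', '0', '0', '1'] -- pure symbol shuffling
```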

My point, and I'll use the rest of your posts as a basis here, is that it would have been fine if he had said a computer could never think like a human, and nothing more. I think so too, because as you rightfully point out there is no true duality between our perception and our thinking. Perceptual integration, memory, pattern building etc. are a continuum of experience that is highly dependent on our physical and chemical states. I will never put that into question, and if anyone's interested in the subject, I heartily recommend one of my favorite works of philosophy, "Phenomenology of Perception" by Maurice Merleau-Ponty, which bridges phenomenological philosophy (albeit a slightly weird version of it, not exactly Husserl), early existentialism and psychology in a remarkably easy-to-follow and rational way.

A computerized brain needs a human experience to think like a human and as such needs the same inputs -- which it will never get, or at least not in our lifetimes. Alright. But to say that it cannot think at all, i.e. that it can never be intelligent and self-reflective, is another thing altogether.

When you look at pictures that went through Google's Deep Dream, most objects get transformed into animal faces. It does so because it was trained to see animal faces everywhere: when you present it with something it doesn't know, it will represent it in a way that lets it see an animal face in it. I am arguing that if it were trained with enough (i.e. more) neurons, and with a learning set that encompassed the entire web, the way it would represent a new input would be no different from the way an "intelligent entity living in the web" would represent it. As such, I fully believe that the idea in the second-to-last paragraph of your last post (feeding it romance novels) is sound, and I don't agree with your conclusion. Triggered in the right way, it could understand and translate any language, it could predict outcomes of complex systems (hello stock-market abuse), it could understand the hidden desires and motivations of most internet users and interact with them in a meaningful way (hello targeted ads), and it could create new art in the styles that it learned (which it already does).
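The core trick behind that "seeing faces everywhere" effect is just gradient ascent on a layer's activations. A minimal sketch, assuming torch and torchvision are available and glossing over the octaves, jitter and regularization of Google's actual pipeline (the layer choice here is arbitrary):

```python
# Minimal Deep-Dream-style sketch: amplify whatever features one layer detects.
import torch
import torchvision.models as models

model = models.googlenet(weights="DEFAULT").eval()

activations = {}
def hook(module, inputs, output):
    activations["target"] = output

# Pick an intermediate layer; whatever patterns it learned will get amplified.
model.inception4c.register_forward_hook(hook)

image = torch.rand(1, 3, 224, 224, requires_grad=True)  # stand-in for a photo

optimizer = torch.optim.Adam([image], lr=0.05)
for _ in range(20):
    optimizer.zero_grad()
    model(image)
    loss = -activations["target"].norm()  # gradient *ascent* on the activations
    loss.backward()
    optimizer.step()

# The image drifts toward the patterns (faces, eyes, fur) the layer was trained
# to detect -- it "sees" them even where there were none.
```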

What exactly is the step leading from there to consciousness? From copying Picasso on command to deciding to create an entirely new art style? What is missing? Nobody knows for sure. And I truly believe that nobody today can reasonably argue that self-awareness cannot come out of this. If you want my opinion, I'd say it would need a decision-making system and probably meta-level synchronicity circuits: being able to see itself in time and act upon it. After learning so much about human concepts of consciousness, I'm sure it would get the hint rather quickly.

Finally, I disagree with your pessimism regarding the history of AI. Using wide, multi-layered neural networks with large data sets only became possible very recently, thanks to distributed computing and efficient data representation. What was done before was purely algorithmic. I don't know how Siri works, but I'm almost certain it only uses very basic learning techniques. Neural networks really are extremely compute-intensive.

EDIT: Don't get me wrong, I do not hold blind faith in Google's neural networks. Maybe they're not optimal, maybe -- like real brains -- there need to be different kinds of neurons for specialized tasks, maybe there needs to be an element of simulated biology. But unlike the author of the article, I see no fundamental reason to dismiss possible intelligence coming out of such systems. In my opinion, the framework of thought that allows him to categorically deny potential sentience is at best unproven, at worst completely false.
