The Myth of Sentient Machines

Kolya:
https://www.psychologytoday.com/blog/mind-in-the-machine/201606/the-myth-sentient-machines


Quote by Bobby Azarian:
Simply put, a strict symbol-processing machine can never be a symbol-understanding machine.
rccc:
funny ... I was at a talk yesterday by John Searle, who more or less said the exact same thing: syntax != semantics.

Edit: Oh, I just saw that he's cited in the article ;-)
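
Searle's point fits in a few lines of code, by the way: a lookup table that maps symbols to symbols by rule, with no model of what any symbol means (the rulebook entries here are made up for illustration, obviously):

```python
# Searle's Chinese Room reduced to a dictionary: the program shuffles
# symbols according to rules, with no idea what either side means.
rulebook = {
    "你好": "你好！",
    "你懂中文吗？": "当然懂。",
}

def room(message: str) -> str:
    # Pure syntax: pattern in, pattern out.
    return rulebook.get(message, "请再说一遍。")

print(room("你懂中文吗？"))  # -> 当然懂。
```

From outside the room the answers look fluent; inside there is only rule-following. That's the whole syntax-vs-semantics argument in miniature.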
Briareos H:
Not a good article IMO. And thank you for highlighting the biggest problem with it.
Put simply, a strict symbol-processing machine can never be a symbol-understanding machine.
I would expect such a claim to require formal proof, which is not provided -- and I am actually convinced that the opposite is true. Instead, the author resorts to an argument from authority ("Searle and Feynman said so!") and then goes off on pointless tangents which he thinks are 'compelling'. The paragraph about the worthlessness of simulation is complete pseudo-science and really shameful.

Also, reminding the reader that computers use binary as if it were in any way meaningful is misleading. Turing machines don't have to use binary code. Even mentioning such a thing makes it appear as if the author wanted to scream "Look! Computers can only treat ones and zeroes while we are analog, we are so much better!" without saying it out loud because they realized it was a fallacy.
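
For what it's worth, the alphabet really is arbitrary. Here's a toy Turing machine over {a, b, c} (the transition rules are invented purely for illustration) that never touches a single bit:

```python
# A tiny Turing machine simulator. The tape alphabet is {"a", "b", "c", "_"},
# not {0, 1} -- nothing in the model requires binary.
def run_tm(tape, rules, state="start"):
    tape = list(tape)
    pos = 0
    while state != "halt":
        if pos >= len(tape):
            tape.append("_")          # extend the tape with blanks on demand
        sym = tape[pos]
        state, write, move = rules[(state, sym)]
        tape[pos] = write
        pos += 1 if move == "R" else -1
    return "".join(tape).rstrip("_")

# Rules: scan right, swap a <-> c, leave b alone, halt at the first blank.
rules = {
    ("start", "a"): ("start", "c", "R"),
    ("start", "b"): ("start", "b", "R"),
    ("start", "c"): ("start", "a", "R"),
    ("start", "_"): ("halt",  "_", "R"),
}
print(run_tm("abca", rules))  # -> cbac
```

The machine is exactly as "Turing-complete-flavoured" with three tape symbols as with two; binary is an engineering convenience, not a conceptual limit.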

My conviction is that consciousness, reasoning and 'symbol-understanding' are a byproduct of the specialized, synchronous, extremely powerful memory-based classification engines within our brains, controlled by a highly non-linear chemically-mediated decision system aggregating desire, initiative and willpower.

Now that first part, the classification engines: we're getting there. Computers based on neural nets are here and are becoming great at classifying. Although we're still far from the cognitive abilities of biological brains, I'm willing to bet that we're going to reach a Turing-test-shattering breakthrough in natural language recognition and production within the next 10 years; there is nothing technically unfeasible about it, we mostly lack processing power.
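
For a sense of how small the core trick is, here's a single logistic neuron (a toy example, not taken from the article or any real system) learning the OR function by gradient descent -- scale this up by a few billion units and you get today's classifiers:

```python
import math
import random

# One logistic neuron trained by gradient descent to classify OR --
# a minimal instance of a "memory-based classification engine".
random.seed(0)
w = [random.uniform(-1, 1) for _ in range(2)]
b = 0.0
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]

def predict(x):
    z = w[0] * x[0] + w[1] * x[1] + b
    return 1 / (1 + math.exp(-z))       # sigmoid activation

for _ in range(2000):                    # plain stochastic gradient descent
    for x, y in data:
        err = predict(x) - y             # gradient of log-loss w.r.t. z
        w[0] -= 0.5 * err * x[0]
        w[1] -= 0.5 * err * x[1]
        b -= 0.5 * err

print([round(predict(x)) for x, _ in data])  # -> [0, 1, 1, 1]
```

Nothing in there "understands" OR, of course, which is rather the point of the whole thread, but as a classification mechanism it demonstrably works.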

As for the decision system, an argument could be made (and it is made in the article although in a strange, indirect way) that even though a computer may understand a stimulus, it will lack the willpower to act on it on its own. But our decision mechanisms, although complicated, variable and susceptible to external influence, are hardly magical. I see no compelling argument to say similar mechanisms can't be simulated.
Acknowledged by 3 members: ThiefsieFool, fox, Marvin
Marvin:
I don't like how many times the article mixes the words "never" with "might never", "almost never", "may be impossible", and so on. Yes, there is substantial evidence a Turing machine cannot be made into a truly intelligent, thinking machine and it is true that a perfect simulation of a process is not equal to the process itself. However, that's a moot point when the actual question at hand is "Can a machine outthink humans and try to wipe us out?"
You don't need a conscious AI for that. Not even a perfectly simulated one.

Also, what Briareos said.
Acknowledged by: rccc
Kolya:
He doesn't mention binary computation in the way you suggest, Briareos. What he says is that computing symbols in a binary system does not lead to experiences, as opposed to a biochemical system. It may still be a faulty argument, and it's certainly wide open to discussion, but he's not being as polemical as you say.
I tend to agree with his view for the following reasons: the only approximation we have of what "intelligence" is, is that it would be something like thinking the way we do. (If you have a more definitive description, let's hear it!) Therefore the question is whether an artificial system could be similar to us. Whether that counts as intelligent is beside the point.
So what makes up a human then? Well, its experiences do. And that is the point Azarian is making. But to make first hand human experiences this artificial system would need to live among us undetected. And that is pretty much impossible.
Its body would also have to be extremely close to a human body, not just for camouflage but to be able to have the same experiences. Obvious ones would be affection, eating, sleep. But there are many less obvious experiences: for example, it is known that digestive problems have an as-yet-unexplained correlation with depression. Our digestive system shapes how we see the world.
I'm willing to bet that not a single AI scientist is simulating or even taking into account these kind of experiences and how they form our perception and hence what we call "intelligence". And that is what Azarian means when he says they are trying to take shortcuts that will never work.
In the end, if you really succeeded in either simulating or building everything that makes a human, what you would have is - a human. That includes death and excludes incredibly fast calculations. Of course there are cheaper ways to come by a human, so it's all rather pointless.
