The Myth of Sentient Machines

XKILLJOY98
Don't get me started.

It's no myth. What is a brain but a biological computer, and what is a computer but a technological brain?

In the past, people viewed things as impossible that are now reality; this is very much the same thing.

I for one firmly believe it is very possible.
Kolya
Put a brain on a glass platter then and see what it thinks about itself and its situation. The point is that you are not your brain. And your body isn't just a machine to carry your head-computer around. Everything you think and therefore everything you consider intelligent cannot be separated from your experience of being a human body. 
And if you don't believe that, then try communicating with a dolphin. It's another brain, just with a different body, right? You may say that it's "not intelligent", but what you are really saying is that its experiences are too different from yours, and therefore you can't understand it. From the point of view of another dolphin it's perfectly intelligent.

An interesting side note is that the human trait which allows us to somewhat imagine what the life of a dolphin or an AI might be like - empathy - is one that the "AI"s we have are very bad at. Try telling Siri/Google Now that you have a deadly disease. You probably weren't expecting an empathetic response anyway. It's just a glorified chatbot after all, a symbol processor that doesn't understand any of these symbols. No intelligence whatsoever. Even if it came up with an empathetic answer it would be because someone put it there, i.e. it is faked.
Here's an interesting paper about the importance of empathy for developing human level AI: http://www-formal.stanford.edu/cmason/circulation-ws07cmason.pdf
It becomes rather funny when they try to simulate affects programmatically.
R1: IF FEELS(In-Love-With(x)) THEN Assert(Handsome(x))
R2: IF BELIEVES(Obese(x)) THEN NOT(Handsome(x))
R3: IF BELIEVES(Proposes(x) AND Handsome(x)) THEN Accept-Proposal(x)
I think I can guess how well this will work out. It's still a processor dealing with symbols. What sexual attraction or love actually mean, and what they can do to one's thoughts, will forever escape it. And so it will stay stupid.
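To see just how little is going on in rules like R1-R3, here is a minimal toy sketch of such a forward-chaining symbol processor in Python. All predicate names and the belief/feeling representation are invented for illustration; the paper's actual system differs.

```python
# Toy forward-chaining rule system in the spirit of R1-R3.
# Predicates are just tuples in sets; the program attaches no meaning to them.

beliefs = set()   # symbols the agent "believes"
feelings = set()  # symbols the agent "feels"

def feels(p):
    return p in feelings

def believes(p):
    return p in beliefs

def apply_rules(x):
    """Run the three rules for a person x and return the 'decision'."""
    if feels(("in_love_with", x)):            # R1
        beliefs.add(("handsome", x))
    if believes(("obese", x)):                # R2
        beliefs.discard(("handsome", x))
    if believes(("proposes", x)) and believes(("handsome", x)):  # R3
        return "accept_proposal"
    return "no_action"

feelings.add(("in_love_with", "Alex"))
beliefs.add(("proposes", "Alex"))
print(apply_rules("Alex"))  # prints "accept_proposal"
```

Note that "love" here is literally a tuple sitting in a set; swap the string for anything else and the program runs identically, which is exactly the point being made above.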
fox
In theory, I believe everything could be simulated eventually - even love. When that might become technically possible and why it would be useful is another question, and what role quantum mechanics vs. binary systems will play, I don't know. But I don't think that machines need to become perfect simulations of humans to develop something that is somewhat comparable to consciousness and intent and therefore become a threat to everything else.

Kolya

Quote by fox:
develop something that is somewhat comparable to consciousness and intent and therefore becoming a threat to everything else.

A robot is supposed to bring heavy boxes from A to B. If it encounters an obstacle in its way, it is programmed to calculate whether it can safely drive over said obstacle or whether going around would be faster. And if it recognizes shape and motion in sync with itself, it's supposed to wave its robot arm. :droid:
Then place a kitten in its way and a mirror on the side. You get an AI with something that is somewhat comparable to consciousness and an intent to drive over a kitten. Fortunately the kitten ran away, singularity averted. 
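The thought experiment above boils down to decision logic this simple. A hypothetical sketch, with all costs, thresholds, and function names made up for illustration (a real robot would use sensor models, not booleans):

```python
# Toy version of the box-robot's logic from the post above.
# The program never knows what the obstacle *is* - only its path cost.

def obstacle_decision(drive_over_cost, go_around_cost, safe_to_drive_over):
    """Pick the cheaper path, indifferent to whether the obstacle is a kitten."""
    if safe_to_drive_over and drive_over_cost < go_around_cost:
        return "drive_over"
    return "go_around"

def should_wave(motion_in_sync_with_self):
    """'Mirror test': wave if the observed motion matches the robot's own."""
    return motion_in_sync_with_self

# A kitten is just a low-cost obstacle to this program:
print(obstacle_decision(drive_over_cost=1.0, go_around_cost=5.0,
                        safe_to_drive_over=True))  # prints "drive_over"
print(should_wave(True))                           # prints "True"
```

Nothing resembling consciousness or intent appears anywhere in the code, which is the joke: from the outside it "recognizes itself" and "intends" to run over the kitten.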

The classic counter argument to "symbol processing is not understanding" usually is "if the simulation is comprehensive enough it will be indistinguishable from intelligence". Wave if you think a comprehensive simulation of infinite situations is possible. What about half-comprehensive?
fox

Quote by Kolya:
The classic counter argument to "symbol processing is not understanding" usually is "if the simulation is comprehensive enough it will be indistinguishable from intelligence". Wave if you think a comprehensive simulation of infinite situations is possible. What about half-comprehensive?

I admit that I'm entirely out of my league here. But why would a machine have to simulate infinite situations? If that were possible, it would be able to foresee the future, but that is not what this is about. Humans can't do that either; we only have to be able to deal with situations as they actually emerge (using past experiences to adapt and optimize our reactions, plus some gambling and assuming). That might be incredibly complex, but I don't understand why it shouldn't be possible at some point.
