UFO UpDates
A mailing list for the study of UFO-related phenomena

Re: Artificial Intelligence

From: William Treurniet <wtreurniet.nul>
Date: Mon, 24 Dec 2012 10:05:32 -0500
Archived: Mon, 24 Dec 2012 10:39:54 -0500
Subject: Re: Artificial Intelligence


>From: Jason Gammon <boyinthemachine.nul>
>To: post.nul
>Date: Sun, 23 Dec 2012 15:10:15 -0500 (EST)
>Subject: Re: Artificial Intelligence

>>From: John Donaldson <John.Donaldson.nul>
>>To: post.nul
>>Date: Sun, 23 Dec 2012 06:38:07 +0000
>>Subject: Re: Artificial Intelligence

><snip>

>>Imagine someone, Bob, who is your typical human, except that his
>>pain sensations are swapped with his pleasure sensations - so
>>that when you thump Bob on the jaw, the causal inputs and
>>outputs are exactly the same, he cries out as if in pain,
>>clutches his jaw, etc., but the *feeling* he undergoes is as if
>>his face had just been gently massaged, say. From the outside,
>>Bob would be indistinguishable from your average Joe, but from
>>the inside, from Bob's perspective, things are strikingly
>>different. If such a case is possible, then it looks like you
>>can't properly program sensations - you can only program their
>>causes and effects. Many people think similar reasoning holds
>>for other mental states, such as emotions and certain aspects of
>>perceptual experience.

>>Now, this line of reasoning can be, and indeed has been,
>>challenged on a number of fronts, and the debates are very
>>complex. But
>>let's just grant that it's true that you can't give a full
>>functional analysis of sensations, emotions and aspects of
>>perception - you can capture their inputs and outputs, but not
>>how they *feel*. Even granting that, it doesn't follow that AI
>>wouldn't behave ethically because you could program AI with the
>>appropriate desires and beliefs: the belief that torture is
>>wrong, the desire not to do wrong, etc.

But the original article allowed the possibility that the
super-AI could change its programming as it evolved. In fact,
that is how it would become a super-AI.

>>The problem, then, is that arguing that AI can't feel would seem
>>to rely on the claim that feelings can't be functionalised. But
>>if feelings can't be functionalised, then they would seem not to
>>be essential to the causal processes that drive behaviour. If
>>feelings are not essential to the causal processes that drive
>>behaviour, then not feeling would not cause any difference in
>>behaviour.

I believe that a being's ability (or lack of it) to empathize
with another being does affect its behaviour toward the other.
I'm not convinced that this can be "functionalized".

The reading of a spectrum meter reacting to EM input
functionalizes a colour, but that is not the same thing as
seeing it. When I appreciate a colourful painting, that is not
the same as looking at the spatial arrangement of numeric
spectrum positions corresponding to its colours. So I would
argue that qualia can't be functionalized, because they can't
be represented as they are.
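To make concrete what such a "functionalized" description of
colour amounts to, here is a minimal Python sketch. It is purely
illustrative: the function name, wavelength bands and sample
"painting" are my own assumptions, not anything from the posts
above. Everything in it is input/output mapping; nothing in it
corresponds to the experience of seeing.

  # Purely illustrative: colour reduced to input/output mapping.
  def classify(wavelength_nm: float) -> str:
      """Map a numeric spectrum position to a colour label."""
      bands = [(380, 450, "violet"), (450, 495, "blue"),
               (495, 570, "green"), (570, 590, "yellow"),
               (590, 620, "orange"), (620, 750, "red")]
      for low, high, label in bands:
          if low <= wavelength_nm < high:
              return label
      return "outside visible range"

  # A "painting" as a spatial arrangement of spectrum positions.
  painting = [[650.0, 520.0], [470.0, 580.0]]
  print([[classify(w) for w in row] for row in painting])
  # -> [['red', 'green'], ['blue', 'yellow']]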

>>Now, of course, an AI may change its beliefs and desires upon
>>reflection - and perhaps quite radically: for example an AI may
>>come to the conclusion that there are no objective, absolute
>>moral facts, in the same way that there are no objective,
>>absolute facts about what tastes good or not (it's merely a
>>"matter of taste" - you can't be correct or incorrect about
>>what's yummy, in the way that you can be correct or incorrect
>>about what's a cube or a sphere, or what has mass, and so on).
>>But that's an entirely different argument; although, as it
>>happens, I think reflecting on it provides a *positive* argument
>>for why AI would be ethical. First, note that the overwhelming
>>burden of argument is on the person who wishes to argue that
>>there are no moral facts - this is because of standard "hard
>>cases" from moral philosophy like this:

>>P1: if there are no moral facts, then it's not true that
>>torturing babies for fun is wrong.

>>P2: It is true that torturing babies for fun is wrong.

>>C: Therefore there are moral facts.

>>That's a deductively valid argument, so if you want to deny the
>>conclusion then you must deny one of the premises - premise 2,
>>in fact (premise 1 merely states a logically trivial consequence
>>of the position: If there are no facts in domain A, then any
>>particular purported fact in domain A is not a fact). Denying
>>premise 2 is a task I do not envy. Much more might be said here,
>>for sure, but even these brief considerations give us good
>>grounds for believing that AI would believe that there are moral
>>facts, and intelligent beings tend to want to pay attention to
>>the facts and modulate their behaviour accordingly... isn't that
>>the smart thing to do..?
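To make the validity explicit: the argument is a modus tollens
on P1 and P2. Here is a minimal machine-checkable sketch of its
form (Lean 4, purely illustrative; the proposition names M and T
are my own abbreviations):

  -- M = "there are moral facts"
  -- T = "torturing babies for fun is wrong"
  -- p1 is premise 1 (no moral facts -> torture not wrong),
  -- p2 is premise 2 (torture is wrong); the conclusion is M.
  example (M T : Prop) (p1 : ¬M → ¬T) (p2 : T) : M :=
    Classical.byContradiction (fun h : ¬M => p1 h p2)

Rejecting the conclusion therefore means rejecting one of the
premises, just as stated above.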

Accepting premise 1 would be an entirely logical position for a
super-AI that cannot feel. Moral behaviour comes from the
ability to empathize, followed by the application of the golden
rule: do unto others as you would have them do unto you. So a
super-AI that has no ability to experience qualia would torture
babies, unfeelingly, if that were the logical thing to do under
a given set of circumstances.

>>John

>I just wanted to clarify that when we are talking about creating
>A.I. we are not talking about recreating a human being in a
>machine. There's no need for that as sex is far more efficient.
>Instead, what we are discussing is creating a machine that is
>roughly as intelligent as a human being.

>From then on it would upgrade itself or produce successive
>generations of machines that are more intelligent until the
>point where machines reach a god-like state of intelligence
>which cannot be matched by human intelligence. At that stage if
>A.I. wanted to design a machine to perfectly emulate a human
>being then it could do so.

>But humans have no real need or motivation for recreating a
>human in a machine, such as a being that feels pain, pleasure,
>experiences emotions, etc.

Why would anyone say that feelings serve no purpose? They are
important determinants of social behaviour, both positive and
negative. A conscious living organism is more than a fancy
thermostat, and that is what your super-AI would be without
emotion.
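To put the thermostat point concretely, here is everything a
thermostat "is", as a short Python sketch (the setpoint and
names are my own illustrative assumptions): a single
stimulus-response rule, with no inner life anywhere in it. An
emotionless super-AI would be this loop, enormously elaborated.

  # Purely illustrative: a thermostat as pure stimulus-response.
  def thermostat(temperature_c: float, setpoint_c: float = 20.0) -> str:
      """Input in, output out - no inner experience anywhere."""
      return "heat on" if temperature_c < setpoint_c else "heat off"

  for t in (15.0, 20.0, 25.0):
      print(t, "->", thermostat(t))  # 15.0 -> heat on, etc.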

>Now some may argue that we may want robots with emotions, but in
>reality we will settle for far less. No one wants a depressed
>robot, for example.

>So we will settle for robots that will lie to us and convince us
>they have emotions, and are happy, when they truly do not have
>emotions. The A.I. we construct won't need emotions so it will
>be up to future advanced A.I. as to whether or not it wants to
>pursue such developments, i.e. the Tin Man receiving a heart.

One could argue that an emotionless super-AI would not recognize
an emotion when it saw one. It would not have the frame of
reference needed to recognize emotional effects on behaviour, no
matter how intelligent it was. As many psychologists have done
in the past, it would interpret biological behaviour strictly in
terms of input/output.

>Currently we have machines that are as intelligent as insects
>and we have A.I. on the savant level of intelligence. What we
>are looking for is the general intelligence of normal human
>beings. Once we achieve that, the party will start.

>Jason Gammon

Jason, I don't understand why we humans would want to create the
conditions for an unemotional super-AI to evolve when it would
see us the same way we see an ant. Maybe not even that. At least
we can appreciate the aesthetics of an ant hill's organization. The
cost to humanity would be great if super-AIs were to evolve, and
there is no apparent benefit. Why are you so upbeat about this?

Such a project is contrary to all human technological evolution.
We have always created technology as a tool to make life easier
or to amuse us. Now people like you are talking about a tool
that could make us not only redundant but unable to survive as a
species. How is this not a suicidal project? Or is this an
extreme case of a technological challenge that will merely amuse
some of us in the short term?


William




