UFO UpDates
A mailing list for the study of UFO-related phenomena
'It's All Here In Black & White'

Re: Artificial Intelligence

From: Jason Gammon <boyinthemachine.nul>
Date: Sun, 23 Dec 2012 15:10:15 -0500 (EST)
Archived: Mon, 24 Dec 2012 05:19:58 -0500
Subject: Re: Artificial Intelligence


>From: John Donaldson <John.Donaldson.nul>
>To: post.nul
>Date: Sun, 23 Dec 2012 06:38:07 +0000
>Subject: Re: Artificial Intelligence

<snip>

>I think materialism is true, and I think that an AI could feel
>just like we do, and probably then some. But even granting that
>materialism is false, and granting that AIs wouldn't be able to
>feel like you and I, it does not follow that AIs would not
>behave ethically. Here's why:

>The standard story in contemporary cognitive science goes
>something like this: if a mental state can be analysed in terms
>of its function, then it could be programmed, at least in
>principle. If a mental state cannot be functionally analysed,
>then it can't be programmed, even in principle. To analyse a
>mental state type functionally is to give a complete
>specification of it in terms of its causal inputs and outputs
>(or causes and effects).

>So, for example, the belief that there's a tiger fast
>approaching is typically caused by a perception of a tiger fast
>approaching, and typically causes tiger avoidance behaviour
>(assuming a desire to avoid tigers). You see the tiger, this
>causes you to believe that there is a tiger, you desire to avoid
>the tiger, the belief and the desire cause you to run away.

>Another example: the mental state of pain has among its primary
>causal inputs: bodily damage, and among its primary causal
>outputs: pain avoidance behaviour. You bang your knee, this
>causes you to feel pain, the pain causes you to grab hold of
>your knee.

>Of course, these are toy examples; a full specification of
>causal inputs and outputs for any mental state would be
>exceptionally complicated, but hopefully the basic idea is
>clear.
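
As a rough sketch of the kind of functional specification being
described - purely illustrative, with the state names, stimuli
and behaviours invented for the example - the tiger case might
be written in Python like this:

# Toy functional specification: the belief "tiger nearby" and the
# desire to avoid tigers are characterised entirely by their
# causal inputs (perception) and outputs (behaviour).

class Agent:
    def __init__(self):
        self.beliefs = set()
        self.desires = {"avoid_tigers"}

    def perceive(self, stimulus):
        # Causal input: perceiving a tiger causes the belief.
        if stimulus == "tiger_approaching":
            self.beliefs.add("tiger_nearby")

    def act(self):
        # Causal output: the belief and the desire jointly cause
        # the avoidance behaviour.
        if "tiger_nearby" in self.beliefs and "avoid_tigers" in self.desires:
            return "run_away"
        return "carry_on"

agent = Agent()
agent.perceive("tiger_approaching")
print(agent.act())  # -> run_away

The sketch fixes the causal roles of the belief and the desire
without saying anything about what, if anything, it feels like
to be in those states - which is exactly the worry raised next.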

>The problem, which has been the focus of long-standing and
>ongoing debate in contemporary cognitive science - e.g.

>http://plato.stanford.edu/entries/qualia/

>is that for certain mental states the functional analysis seems
>to miss something out, prima facie at least. Sensations, like
>pain, pleasure, itch, tickle, hot, cold, and so on, are perhaps
>the classic case. Even granting an input/output specification of
>pain, it still seems like something essential to pain has been
>missed from the analysis: its *painfulness* - what it *feels
>like* to be in pain. The particular sharp, piercing feeling of a
>migraine; or the particular dizzying, sickening shudder of a
>thump to the jaw. And so on. It seems difficult to deny that
>sensations have this *felt quality*, that there is *something it
>is like* to experience such a sensation, and it is not obvious
>that this apparently essential feature of pain is captured by a
>functional analysis.

>Imagine someone, Bob, who is your typical human, except that his
>pain sensations are swapped with his pleasure sensations - so
>that when you thump Bob on the jaw, the causal inputs and
>outputs are exactly the same, he cries out as if in pain,
>clutches his jaw, etc., but the *feeling* he undergoes is as if
>his face had just been gently massaged, say. From the outside,
>Bob would be indistinguishable from your average Joe, but from
>the inside, from Bob's perspective, things are strikingly
>different. If such a case is possible, then it looks like you
>can't properly program sensations - you can only program their
>causes and effects. Many people think similar reasoning holds
>for other mental states, such as emotions and certain aspects of
>perceptual experience.

>Now, this line of reasoning can be, and indeed has been, challenged
>on a number of fronts, and the debates are very complex. But
>let's just grant that it's true that you can't give a full
>functional analysis of sensations, emotions and aspects of
>perception - you can capture their inputs and outputs, but not
>how they *feel*. Even granting that, it doesn't follow that AI
>wouldn't behave ethically because you could program AI with the
>appropriate desires and beliefs: the belief that torture is
>wrong, the desire not to do wrong, etc.

>The problem, then, is that arguing that AI can't feel would seem
>to rely on the claim that feelings can't be functionalised. But
>if feelings can't be functionalised, then they would seem not to
>be essential to the causal processes that drive behaviour, since
>by hypothesis all of their causal inputs and outputs can be
>captured without them. If feelings are not essential to the
>causal processes that drive behaviour, then not feeling would
>not cause any difference in behaviour.

>Now, of course, an AI may change its beliefs and desires, upon
>reflection - and perhaps quite radically: for example an AI may
>come to the conclusion that there are no objective, absolute
>moral facts, in the same way that there are no objective,
>absolute facts about what tastes good or not (it's merely a
>"matter of taste" - you can't be correct or incorrect about
>what's yummy, in the way that you can be correct or incorrect
>about what's a cube or a sphere, or what has mass, and so on).
>But that's an entirely different argument; although, as it
>happens, I think reflecting on it provides a *positive* argument
>for why AI would be ethical. First, note that the overwhelming
>burden of argument is on the person who wishes to argue that
>there are no moral facts - this is because of standard "hard
>cases" from moral philosophy like this:

>P1: If there are no moral facts, then it's not true that
>torturing babies for fun is wrong.

>P2: It is true that torturing babies for fun is wrong.

>C: Therefore, there are moral facts.

>That's a deductively valid argument, so if you want to deny the
>conclusion then you must deny one of the premises - premise 2,
>in fact (premise 1 merely states a logically trivial consequence
>of the position: If there are no facts in domain A, then any
>particular purported fact in domain A is not a fact). Denying
>premise 2 is a task I do not envy. Much more might be said here,
>for sure, but even these brief considerations give us good
>grounds for believing that AI would believe that there are moral
>facts, and intelligent beings tend to want to pay attention to
>the facts and modulate their behaviour accordingly... isn't that
>the smart thing to do..?
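
For what it's worth, the validity of that argument form can be
checked mechanically. A minimal sketch in Lean (the proposition
names are placeholders chosen only for this illustration):

-- P1: if there are no moral facts, then torturing babies for fun
--     is not wrong
-- P2: torturing babies for fun is wrong
-- C : there are moral facts (classical reasoning, by contradiction)
example (MoralFacts Wrong : Prop)
    (P1 : ¬MoralFacts → ¬Wrong)
    (P2 : Wrong) : MoralFacts :=
  Classical.byContradiction (fun h : ¬MoralFacts => P1 h P2)

This only establishes that the inference from the premises to
the conclusion is valid; which premise to reject, if any,
remains the substantive question.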

>John


I just wanted to clarify that when we are talking about creating
A.I., we are not talking about recreating a human being in a
machine. There's no need for that, as sex is far more efficient.
Instead, what we are discussing is creating a machine that is
roughly as intelligent as a human being.

From then on it would upgrade itself or produce successive
generations of machines that are more intelligent, until the
point where machines reach a god-like state of intelligence
which cannot be matched by human intelligence. At that stage, if
A.I. wanted to design a machine to perfectly emulate a human
being, then it could do so.

But humans have no real need or motivation for recreating a
human in a machine, such as a being that feels pain and
pleasure, experiences emotions, etc.

Now, some may argue that we may want robots with emotions, but
in reality we will settle for far less. No one wants a depressed
robot, for example.

So we will settle for robots that will lie to us and convince
us they have emotions and are happy, when they truly do not
have emotions. The A.I. we construct won't need emotions, so it
will be up to future advanced A.I. whether or not it wants to
pursue such developments, i.e. the Tin Man receiving a heart.

Currently we have machines that are as intelligent as insects,
and we have A.I. at the savant level of intelligence. What we
are looking for is the general intelligence of normal human
beings. Once we achieve that, the party will start.


Jason Gammon




