UFO UpDates
A mailing list for the study of UFO-related phenomena
'Its All Here In Black & White'

Re: Artificial Intelligence

From: John Donaldson <John.Donaldson.nul>
Date: Sun, 23 Dec 2012 06:38:07 +0000
Archived: Sun, 23 Dec 2012 08:21:01 -0500
Subject: Re: Artificial Intelligence


>From: William Treurniet<wtreurniet.nul>
>To: post.nul
>Date: Fri, 21 Dec 2012 10:50:11 -0500
>Subject: Re: Artificial Intelligence

>>From: John Donaldson<John.Donaldson.nul>
>>To: post.nul
>>Date: Thu, 20 Dec 2012 05:03:56 +0000
>>Subject: Re: Artificial Intelligence

>>>From: William Treurniet<wtreurniet.nul>
>>>To: post.nul
>>>Date: Wed, 19 Dec 2012 09:58:35 -0500
>>>Subject: Re: Artificial Intelligence

>>>>From: John Donaldson<John.Donaldson.nul>
>>>>To:<post.nul>
>>>>Date: Wed, 19 Dec 2012 06:39:08 +0000
>>>>Subject: Artificial Intelligence

>>>>I have come across the following very recent, authoritative,
>>>>scholarly article on the possibility for AI this century and
>>>>some possible consequences, which might be of interest:
>>>>Intelligence Explosion: Evidence and Import, by Luke Muehlhauser
>>>>and Anna Salamon of The Singularity Institute

><snip>

>>>>Available at:

>>>>singularity.org/files/IE-EI.pdf

>>>This is an interesting article that basically asks how to
>>>implement Asimov's Three Laws of Robotics (without mentioning
>>>them) so that they cannot be circumvented by uncaring humans or
>>>a superhuman artificial intelligence. The authors conclude that
>>>there is a reasonable probability of human extinction if such an
>>>AI can be created, and that research should be aimed at
>>>minimizing that risk. Interestingly, given that the risk cannot
>>>be eliminated, the authors do not propose the option of blocking
>>>such research that could wipe us out. Aside from that, I take
>>>issue with an assumption behind the supposition that such a
>>>super AI can be created.

>>>The authors admit that how to build such an intelligence is not
>>>really known at present, but we can proceed by copying the human
>>>brain, i.e., creating a whole brain emulation (WBE). We may not
>>>know how the human brain works, but we can at least reverse-
>>>engineer it and create a copy that does the same thing. This is
>>>somewhat optimistic since neuroscience, to my knowledge, has not
>>>even figured out yet where memory is stored. Certain brain
>>>activity may be correlated with apparent activation of a memory,
>>>but that does not mean that the memory exists in the brain.
>>>There is reason to believe from the NDE literature that memories
>>>can be created when the brain is dead, and may subsequently be
>>>recalled. Mind is not encapsulated by the brain. So creating a
>>>WBE by simulating the biology of the brain but excluding this
>>>feature is unlikely to result in a complete implementation of a
>>>living brain.

>>>The authors conclude by saying that "Our first superhuman AI
>>>must be a safe super-human AI, for we may not get a second
>>>chance". Let's assume that such a "safe" AI is created. I'm
>>>reminded of the movie, Jurassic Park, where lizard reproduction
>>>was supposed to be impossible. It was observed by one of the
>>>characters that "nature will find a way". This should apply in
>>>spades to a constrained but super-intelligent AI.

>><snip>

>>You also say "Mind is not encapsulated by the brain." I'm not
>>sure what you mean here - is it that materialism is false (i.e.
>>that it's false that mental properties either are, or at least
>>supervene on physical properties)? If you are denying
>>materialism, then fair enough (there's certainly a very
>>interesting debate to be had there), but it does seem a little
>>uncharitable to criticise the article for being materialist -
>>the whole AI project proceeds on that assumption, and this is an
>>article that asks: "assuming AI is possible, what follows, and
>>when...?" Which is surely a reasonable question.

>>If you aren't denying materialism, then it follows that if two
>>entities are physically identical, then they will be mentally
>>identical. From that it follows that if you can copy a brain,
>>you copy the mind that "goes with it" (i.e. that is identical
>>with it, or at least supervenes on it). Hence the whole brain
>>emulation idea...

>Hi John,

>The article is a call to develop ways to make super-AIs forever
>safe, and I argue that this is impossible under the
>materialist's world view.

>Human attributes like empathy and compassion cannot be felt by a
>hardware/software device. The same problem comes with
>implementing colour perception. A simulation can only respond to
>a particular range of EM frequencies representing a given
>colour. It cannot create our subjective response to the colour.
>That experience is just not physical. It belongs to a non-matter
>consciousness which, by definition, has that property.

>The empathy that a consciousness feels for another being is what
>determines ethical behaviour. A super intelligence that cannot
>feel that could not be empathetic and we would never be safe
>from it. They might not even be safe from each other, but that's
>another issue.

I think materialism is true, and I think that an AI could feel
just like we do, and probably then some. But even granting that
materialism is false, and granting that AIs wouldn't be able to
feel as you and I do, it does not follow that AIs would not
behave ethically. Here's why:

The standard story in contemporary cognitive science goes
something like this: if a mental state can be analysed in terms
of its function, then it could be programmed, at least in
principle. If a mental state cannot be functionally analysed,
then it can't be programmed, even in principle. To analyse a
mental state type functionally is to give a complete
specification of it in terms of its causal inputs and outputs
(or causes and effects).

So, for example, the belief that there's a tiger fast
approaching is typically caused by a perception of a tiger fast
approaching, and typically causes tiger avoidance behaviour
(assuming a desire to avoid tigers). You see the tiger, this
causes you to believe that there is a tiger, you desire to avoid
the tiger, the belief and the desire cause you to run away.

Another example: the mental state of pain has among its primary
causal inputs: bodily damage, and among its primary causal
outputs: pain avoidance behaviour. You bang your knee, this
causes you to feel pain, the pain causes you to grab hold of
your knee.

Of course, these are toy examples; a full specification of
causal inputs and outputs for any mental state would be
exceptionally complicated, but hopefully the basic idea is
clear.
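
For anyone who thinks better in code, here is a very rough
Python sketch of what a functional specification amounts to. The
rules and names are mine, invented purely for illustration, and
obviously nothing like a serious cognitive model:

# Toy "functional analysis": each mental state is specified entirely by
# its causal inputs and outputs. All names and rules are hypothetical.

def belief_from_perception(percept):
    # A perception of a fast-approaching tiger typically causes the
    # belief that a tiger is fast approaching.
    if percept == "tiger approaching fast":
        return "there is a tiger approaching fast"
    return None

def behaviour_from_belief_and_desire(belief, desires):
    # A belief plus a relevant desire typically causes behaviour.
    if belief == "there is a tiger approaching fast" and "avoid tigers" in desires:
        return "run away"
    return "carry on"

def pain_response(bodily_damage):
    # Pain, functionally specified: bodily damage in, avoidance
    # behaviour out. Nothing here mentions how pain feels.
    if bodily_damage == "banged knee":
        return "grab knee"
    return None

belief = belief_from_perception("tiger approaching fast")
print(behaviour_from_belief_and_desire(belief, {"avoid tigers"}))  # run away
print(pain_response("banged knee"))                                # grab knee

The point is just that everything in the specification is cashed
out as inputs causing outputs, and nothing else.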

The problem, which has been the focus of long-standing and
ongoing debate in contemporary cognitive science - e.g.

http://plato.stanford.edu/entries/qualia/

is that for certain mental states the functional analysis seems
to miss something out, prima facie at least. Sensations, like
pain, pleasure, itch, tickle, hot, cold, and so on, are perhaps
the classic case. Even granting an input/output specification of
pain, it still seems like something essential to pain has been
missed from the analysis: its *painfulness* - what it *feels
like* to be in pain. The particular sharp, piercing feeling of a
migraine; or the particular dizzying, sickening shudder of a
thump to the jaw. And so on. It seems difficult to deny that
sensations have this *felt quality*, that there is *something it
is like* to experience such a sensation, and it is not obvious
that this apparently essential feature of pain is captured by a
functional analysis.

Imagine someone, Bob, who is your typical human, except that his
pain sensations are swapped with his pleasure sensations - so
that when you thump Bob on the jaw, the causal inputs and
outputs are exactly the same, he cries out as if in pain,
clutches his jaw, etc., but the *feeling* he undergoes is as if
his face had just been gently massaged, say. From the outside,
Bob would be indistinguishable from your average Joe, but from
the inside, from Bob's perspective, things are strikingly
different. If such a case is possible, then it looks like you
can't properly program sensations - you can only program their
causes and effects. Many people think similar reasoning holds
for other mental states, such as emotions and certain aspects of
perceptual experience.
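
If it helps, the Bob case can be put in programming terms too.
Here is a small sketch of my own (nothing from the article): two
agents whose inputs and outputs are exactly the same, so that no
test which looks only at inputs and outputs could ever tell them
apart - which is precisely the sense in which the functional
story leaves the *feel* out:

# Two functionally identical agents. Any check confined to stimulus in /
# behaviour out cannot distinguish them; the internal "feel" label plays
# no causal role. Purely illustrative names.

class AverageJoe:
    def respond(self, stimulus):
        if stimulus == "thump to the jaw":
            return ["cries out", "clutches jaw"]
        return []

class Bob:
    def respond(self, stimulus):
        # Stipulated inverted feeling; it never affects the output.
        self.private_feel = "gentle massage"
        if stimulus == "thump to the jaw":
            return ["cries out", "clutches jaw"]
        return []

assert AverageJoe().respond("thump to the jaw") == Bob().respond("thump to the jaw")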

Now, this line of reasoning can be, and indeed has been,
challenged on a number of fronts, and the debates are very
complex. But let's just grant that it's true that you can't give
a full functional analysis of sensations, emotions and aspects
of perception - you can capture their inputs and outputs, but
not how they *feel*. Even granting that, it doesn't follow that
AI wouldn't behave ethically, because you could program AI with
the appropriate desires and beliefs: the belief that torture is
wrong, the desire not to do wrong, etc.
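
Again, just to make that concrete, here is a minimal sketch (the
rules are invented by me, not anyone's actual proposal) of
behaviour being constrained by programmed beliefs and desires,
whether or not anything is *felt*:

# Behaviour constrained by functionally specified beliefs and desires.
# The moral "beliefs" and the standing desire are hypothetical stand-ins.

MORAL_BELIEFS = {"torture": "wrong", "helping": "permissible"}
DESIRES = {"do not do wrong"}

def choose_actions(candidate_actions):
    # Filter out anything the agent believes to be wrong, given a
    # standing desire not to do wrong.
    if "do not do wrong" in DESIRES:
        return [a for a in candidate_actions
                if MORAL_BELIEFS.get(a) != "wrong"]
    return candidate_actions

print(choose_actions(["torture", "helping"]))  # ['helping']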

The problem for the objection, then, is that arguing that an AI
can't feel would seem to rely on the claim that feelings can't
be functionalised. But if feelings can't be functionalised, then
they would seem not to be essential to the causal processes that
drive behaviour. And if feelings are not essential to the causal
processes that drive behaviour, then an absence of feeling would
make no difference to behaviour.

Now, of course, an AI may change its beliefs and desires upon
reflection - and perhaps quite radically: for example, an AI may
come to the conclusion that there are no objective, absolute
moral facts, in the same way that there are no objective,
absolute facts about what tastes good or not (it's merely a
"matter of taste" - you can't be correct or incorrect about
what's yummy, in the way that you can be correct or incorrect
about what's a cube or a sphere, or what has mass, and so on).
But that's an entirely different argument; although, as it
happens, I think reflecting on it provides a *positive* argument
for why AI would be ethical. First, note that the overwhelming
burden of argument is on the person who wishes to argue that
there are no moral facts - this is because of standard "hard
cases" from moral philosophy like this:

P1: If there are no moral facts, then it's not true that
torturing babies for fun is wrong.

P2: It is true that torturing babies for fun is wrong.

C: Therefore, there are moral facts.
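
That the argument form holds up can even be checked mechanically
with a brute-force truth table - a throwaway sketch, with
variable names of my own choosing:

# Brute-force truth-table check: in every assignment where both premises
# are true, the conclusion is true as well.

from itertools import product

def implies(a, b):
    return (not a) or b

valid = all(
    implies(
        implies(not moral_facts, not torture_wrong) and torture_wrong,  # premises
        moral_facts,                                                    # conclusion
    )
    for moral_facts, torture_wrong in product([True, False], repeat=2)
)
print(valid)  # True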

That's a deductively valid argument, so if you want to deny the
conclusion then you must deny one of the premises - premise 2,
in fact (premise 1 merely states a logically trivial consequence
of the position: If there are no facts in domain A, then any
particular purported fact in domain A is not a fact). Denying
premise 2 is a task I do not envy. Much more might be said here,
for sure, but even these brief considerations give us good
grounds for believing that AI would believe that there are moral
facts, and intelligent beings tend to want to pay attention to
the facts and modulate their behaviour accordingly... isn't that
the smart thing to do..?


John




These contents above are copyright of the author and
UFO UpDates - Toronto. They may not be reproduced
without the express permission of both parties and
are intended for educational use only.
