
Re: Artificial Intelligence

From: John Donaldson <John.Donaldson.nul>
Date: Sun, 30 Dec 2012 21:51:06 +0000
Archived: Mon, 31 Dec 2012 05:07:57 -0500
Subject: Re: Artificial Intelligence


On 24-Dec-12 3:39 PM, UFO UpDates - Toronto wrote:
>From: William Treurniet<wtreurniet.nul>
>To: post.nul
>Date: Mon, 24 Dec 2012 10:05:32 -0500
>Subject: Re: Artificial Intelligence

>>From: Jason Gammon<boyinthemachine.nul>
>>To: post.nul
>>Date: Sun, 23 Dec 2012 15:10:15 -0500 (EST)
>>Subject: Re: Artificial Intelligence

>>>From: John Donaldson<John.Donaldson.nul>
>>>To: post.nul
>>>Date: Sun, 23 Dec 2012 06:38:07 +0000
>>>Subject: Re: Artificial Intelligence

>><snip>

>>>Imagine someone, Bob, who is your typical human, except that his
>>>pain sensations are swapped with his pleasure sensations - so
>>>that when you thump Bob on the jaw, the causal inputs and
>>>outputs are exactly the same, he cries out as if in pain,
>>>clutches his jaw, etc., but the *feeling* he undergoes is as if
>>>his face had just been gently massaged, say. From the outside,
>>>Bob would be indistinguishable from your average Joe, but from
>>>the inside, from Bob's perspective, things are strikingly
>>>different. If such a case is possible, then it looks like you
>>>can't properly program sensations - you can only program their
>>>causes and effects. Many people think similar reasoning holds
>>>for other mental states, such as emotions and certain aspects of
>>>perceptual experience.

>>>Now, this line of reasoning can, and indeed has been challenged
>>>on a number of fronts, and the debates are very complex. But
>>>let's just grant that it's true that you can't give a full
>>>functional analysis of sensations, emotions and aspects of
>>>perception - you can capture their inputs and outputs, but not
>>>how they *feel*. Even granting that, it doesn't follow that AI
>>>wouldn't behave ethically because you could program AI with the
>>>appropriate desires and beliefs: the belief that torture is
>>>wrong, the desire not to do wrong, etc.

>But the original article allowed the possibility that the
>super-AI could change its programming as it evolved. In fact,
>that is how it would become a super-AI.

You first argued that it was not possible to programme ethical
behaviour because "[t]he empathy that a consciousness feels for
another being is what determines ethical behaviour." I countered
that by explaining how ethical behaviour could be programmed.
Your response was to grant that it could be programmed, but you
then contended that such programming could never be "safe",
because AIs could change their own programming. I addressed that
point in my previous post, but I reiterate it below in response
to your final comment.

>>>The problem, then, is that arguing that AI can't feel would seem
>>>to rely on the claim that feelings can't be functionalised. But
>>>if feelings can't be functionalised, then they would seem not to
>>>be essential to the causal processes that drive behaviour. If
>>>feelings are not essential to the causal processes that drive
>>>behaviour, then not feeling would not cause any difference in
>>>behaviour.

>I believe that a being's ability (or lack of it) to empathize with
>another being does affect its behaviour toward the other. I'm not
>convinced that this can be "functionalized". The position taken by a
>spectrum meter reacting to EM input functionalizes it, but that's not
>the same thing as seeing it. When I appreciate a colourful painting,
>that is not the same as looking at the spatial arrangement of numeric
>spectrum positions corresponding to the colours. So I would argue that
>qualia can't be functionalized because they can't be represented as
>they are.

You haven't quite grasped my point. I granted the claim that
feelings can't be functionalised (although only for the purposes
of argument - it is an extraordinarily difficult issue to
settle). Let's take pain as the example. I pointed out that in
order to explain why pain can't be functionalised one must point
to some essential aspect of pain that isn't captured by a
functional analysis of pain. In other words, in order to explain
why pain can't be fully described in terms of the causes and
effects of pain, one must point to some aspect of pain that
isn't captured by simply describing the causes and effects of
pain. There is some plausibility to the line of thought that
there is some aspect of pain that isn't captured by simply
describing pain's causes and effects, namely: its painfulness.
Fair enough. But, and here's the rub, if you defend that line of
thought then you face the following problem:

If some essential aspect of pain isn't functionalisable then
that aspect is irrelevant to the causes and effects of pain. But
in that case, that aspect of pain which can't be functionalised
is irrelevant to programming an AI to behave as if it's in pain.

(It's actually a deeper problem than that - if there is some
essential aspect of pain that is irrelevant to pain's causes and
effects then it seems both metaphysically and scientifically
mysterious, nay - suspicious. How do we even *know* of this
aspect of pain if it is so causally inefficacious? If someone
claims "x exists, but x has no causes and effects" then at the
very least they need to explain how we can know x exists,
because *coming to know something is a causal process*.)

>>>Now, of course, an AI may change its beliefs and desires, upon
>>>reflection - and perhaps quite radically: for example an AI may
>>>come to the conclusion that there are no objective, absolute
>>>moral facts, in the same way that there are no objective,
>>>absolute facts about what tastes good or not (it's merely a
>>>"matter of taste" - you can't be correct or incorrect about
>>>what's yummy, in the way that you can be correct or incorrect
>>>about what's a cube or a sphere, or what has mass, and so on).
>>>But that's an entirely different argument; although, as it
>>>happens, I think reflecting on it provides a *positive* argument
>>>for why AI would be ethical. First, note that the overwhelming
>>>burden of argument is on the person who wishes to argue that
>>>there are no moral facts - this is because of standard "hard
>>>cases" from moral philosophy like this:

>>>P1: if there are no moral facts, then it's not true that
>>>torturing babies for fun is wrong.

>>>P2: It is true that torturing babies for fun is wrong.

>>>C: Therefore there are moral facts.

>>>That's a deductively valid argument, so if you want to deny the
>>>conclusion then you must deny one of the premises - premise 2,
>>>in fact (premise 1 merely states a logically trivial consequence
>>>of the position: If there are no facts in domain A, then any
>>>particular purported fact in domain A is not a fact). Denying
>>>premise 2 is a task I do not envy. Much more might be said here,
>>>for sure, but even these brief considerations give us good
>>>grounds for believing that AI would believe that there are moral
>>>facts, and intelligent beings tend to want to pay attention to
>>>the facts and modulate their behaviour accordingly... isn't that
>>>the smart thing to do..?

>Premise 1 would be an entirely logical position to take by a
>super-AI that cannot feel. Moral behaviour comes from the
>ability to empathize, followed by the application of the golden
>rule - do unto others as you would have them do unto you. So a
>super-AI that has no ability to experience quales would torture
>babies, unfeelingly, if it was a logical thing to do under a
>given set of circumstances.

Again, you haven't quite grasped the point. Remember the
dialectic:

Me: AIs could be programmed to behave ethically.

You: Ah, but what if they changed their programming?

Me: Super AIs are super smart, and whatever change AIs made in
their programming, the smart thing to do would be for those AIs
to continue to believe facts and act in accordance with them.
There are moral facts, therefore AIs would believe those facts
and act in accordance with them.

Let me expand on that last point, again. First: by "fact" I
simply mean "true proposition". Here are some example facts:

2+2 = 4
Bachelors are unmarried
Torturing babies for fun is wrong


It was my contention that whatever "programming change" an AI
would undergo, we have good grounds for believing that no AI
would intentionally programme itself in such a way that it
stopped believing facts in the way required for some "evil AI"
type scenario. It is surely an essential part of what we mean by
"intelligence" that the more intelligent an entity is, the more
likely it is to believe facts (and not believe falsehoods). The
non-functionalisable aspects of feeling simply don't come into
it.
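
To make the "beliefs and desires" idea a little more concrete, here
is a toy sketch of my own (in Python; every name in it is invented
for illustration, and nothing about real AI architectures is being
claimed). It only shows that behaviour can be driven by programmed
beliefs and desires, with no appeal anywhere to how anything feels:

# Toy illustration only: an "agent" whose behaviour is driven by
# programmed beliefs and desires, never by feelings.

class Agent:
    def __init__(self, moral_beliefs, desires):
        self.moral_beliefs = moral_beliefs  # e.g. {"torture": "wrong"}
        self.desires = desires              # e.g. {"do_wrong": False}

    def permissible(self, action):
        # An action believed to be wrong is ruled out whenever the
        # agent lacks the desire to do wrong.
        believed_wrong = self.moral_beliefs.get(action) == "wrong"
        return not believed_wrong or self.desires.get("do_wrong", False)

    def choose(self, candidates):
        # Pick the first candidate the agent's beliefs and desires allow.
        for action in candidates:
            if self.permissible(action):
                return action
        return None  # refuse to act if nothing permissible is offered

agent = Agent(moral_beliefs={"torture": "wrong", "charity": "good"},
              desires={"do_wrong": False, "do_good": True})
print(agent.choose(["torture", "charity"]))  # -> charity

The sketch is trivial, of course, but it puts the work in the right
place: what the agent believes and what it wants, not what it feels.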

Now, that isn't the end of matters, because prima facie
amoralists seem possible - i.e. persons that make all the
correct moral judgements (torture is wrong, charity is good,
etc.), and thus have the correct moral beliefs, but yet still
don't act morally because they lack the *desire* to do good.
Whether or not amoralists in this sense really are possible
has been disputed in moral philosophy, but I think amoralists
are possible, and let's just grant that they are for the
purposes of argument. Now, *you* could say "well, what about an
amoral AI - one that has all the correct beliefs, but yet simply
does not desire to act in accordance with those beliefs?" I
think that is a deep and interesting question that is not
easy to settle quickly. But I will make the following comments.

The answer to the question of whether AIs would be amoralists
depends upon the answer to this question: "why act morally *at
all*?" This is one of the most venerable questions in all of
moral philosophy. A number of answers have been proposed, but
let's examine one of the best known: "one should act morally
because it's the rational thing to do". Hobbes, Locke and Rousseau
proposed this answer in their various versions of social
contract theory. This is also the answer favoured in Kantian
deontology.  Assessing these moral theories in any detail is way
beyond the scope of this post; but the point I wish to make is
that such an assessment must be carried out before coming to any
*firm* conclusion about whether or not it is true that one
should act morally because it is the rational thing to do.

However, even prior to such an assessment, it does appear that
we have good grounds for believing that there must be a
persuasive, positive answer to the question "why act morally *at
all*?" Because: if there isn't a good reason to act morally,
then there isn't a good reason not to torture babies for fun.
But there *must* be a good reason not to torture babies for fun,
therefore there must be a good reason to act morally. It might
not be immediately obvious what that reason is (perhaps it's due
to implicit social contracts, or perhaps some deontological
edict, or something else), but the burden of argument is surely
on the person who wishes to argue that there isn't any reason at
all.
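
For anyone who likes to see the logical form spelled out, both this
argument and the P1/P2 argument quoted above share the same shape:
classical modus tollens. Here is a minimal sketch of my own in the
Lean proof assistant (the letters M and W are just placeholders;
read M as "there are moral facts", or "there is a good reason to
act morally", and W as the corresponding claim about torturing
babies for fun):

-- P1 : if not-M then not-W;  P2 : W;  conclusion : M.
theorem moral_facts_argument (M W : Prop)
    (P1 : ¬M → ¬W) (P2 : W) : M :=
  Classical.byContradiction (fun noM => P1 noM P2)

The validity of the form is not in question; as noted above, the
only way to resist the conclusion is to deny a premise.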

Given all of that, then, it seems like we have good grounds for
believing that super AIs would have ethical beliefs and desires,
whether or not they could "really feel" at all...





