UFO UpDates
A mailing list for the study of UFO-related phenomena
'Its All Here In Black & White'

Re: Artificial Intelligence

From: William Treurniet <wtreurniet.nul>
Date: Fri, 21 Dec 2012 10:50:11 -0500
Archived: Fri, 21 Dec 2012 11:43:48 -0500
Subject: Re: Artificial Intelligence


>From: John Donaldson <John.Donaldson.nul>
>To: post.nul
>Date: Thu, 20 Dec 2012 05:03:56 +0000
>Subject: Re: Artificial Intelligence

>>From: William Treurniet <wtreurniet.nul>
>>To: post.nul
>>Date: Wed, 19 Dec 2012 09:58:35 -0500
>>Subject: Re: Artificial Intelligence

>>>From: John Donaldson <John.Donaldson.nul>
>>>To: <post.nul>
>>>Date: Wed, 19 Dec 2012 06:39:08 +0000
>>>Subject: Artificial Intelligence

>>>I have come across the following very recent, authoritative,
>>>scholarly article on the possibility of AI this century and
>>>some possible consequences, which might be of interest:

>>>Intelligence Explosion: Evidence and Import, by Luke Muehlhauser
>>>and Anna Salamon of The Singularity Institute

>>>Taken from:

>>>Singularity Hypotheses: A scientific and philosophical
>>>assessment, ed. Amnon Eden, Johnny Søraker, James H. Moor, and
>>>Eric Steinhart. Berlin: Springer, 2012

>>>Available at:
>>>singularity.org/files/IE-EI.pdf

>>This is an interesting article that basically asks how to
>>implement Asimov's Three Laws of Robotics (without mentioning
>>them) so that they cannot be circumvented by uncaring humans or
>>a superhuman artificial intelligence. The authors conclude that
>>there is a reasonable probability of human extinction if such an
>>AI can be created, and that research should be aimed at
>>minimizing that risk. Interestingly, given that the risk cannot
>>be eliminated, the authors do not propose the option of blocking
>>such research that could wipe us out. Aside from that, I take
>>issue with an assumption behind the supposition that such a
>>super AI can be created.

>>The authors admit that how to build such an intelligence is not
>>really known at present, but suggest we can proceed by copying
>>the human brain, i.e., creating a whole brain emulation (WBE).
>>We may not know how the human brain works, but we can at least
>>reverse-engineer it and create a copy that does the same thing.
>>This is somewhat optimistic since neuroscience, to my knowledge,
>>has not yet even figured out where memory is stored. Certain brain
>>activity may be correlated with apparent activation of a memory,
>>but that does not mean that the memory exists in the brain.
>>There is reason to believe from the NDE literature that memories
>>can be created when the brain is dead, and may subsequently be
>>recalled. Mind is not encapsulated by the brain. So creating a
>>WBE by simulating the biology of the brain but excluding this
>>feature is unlikely to result in a complete implementation of a
>>living brain.

>>The authors conclude by saying that "Our first superhuman AI
>>must be a safe superhuman AI, for we may not get a second
>>chance". Let's assume that such a "safe" AI is created. I'm
>>reminded of the movie Jurassic Park, where dinosaur reproduction
>>was supposed to be impossible. As one of the characters
>>observed, "life finds a way". This should apply in spades to a
>>constrained but super-intelligent AI.

><snip>

>You also say "Mind is not encapsulated by the brain." I'm not
>sure what you mean here - is it that materialism is false (i.e.
>that it's false that mental properties either are, or at least
>supervene on, physical properties)? If you are denying
>materialism, then fair enough (there's certainly a very
>interesting debate to be had there), but it does seem a little
>uncharitable to criticise the article for being materialist -
>the whole AI project proceeds on that assumption, and this is an
>article that asks: "assuming AI is possible, what follows, and
>when...?" Which is surely a reasonable question.

>If you aren't denying materialism, then it follows that if two
>entities are physically identical, then they will be mentally
>identical. From that it follows that if you can copy a brain,
>you copy the mind that "goes with it" (i.e. that is identical
>with it, or at least supervenes on it). Hence the whole brain
>emulation idea...

Hi John,

The article is a call to develop ways to make super-AIs
permanently safe, and I argue that this is impossible under a
materialist worldview.
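
Your supervenience inference can be stated formally (my own
gloss, not notation from the article):

  \forall x \, \forall y \, ( P(x) = P(y) \rightarrow M(x) = M(y) )

where P is an entity's complete physical state and M its mental
state. Physical duplicates are mental duplicates, so a perfect
WBE would carry the mind along with the brain. I grant the
inference; it is the premise I reject, for the following
reasons.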

Human attributes like empathy and compassion cannot be felt by a
hardware/software device. The same problem arises with
implementing colour perception. A simulation can only respond to
a particular range of EM frequencies representing a given
colour. It cannot create our subjective experience of the
colour. That experience is just not physical. It belongs to a
non-material consciousness which, by definition, has that
property.
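
To make this concrete, here is a toy sketch (Python; the code
and the wavelength bands are my own illustration, using the
usual textbook approximations) of what "colour perception" means
for a machine. It maps a physical stimulus to a label, and
nothing in it experiences anything:

# Toy illustration: classify a visible wavelength into a colour name.
# The program responds to EM frequencies; it does not see colour.
def colour_name(wavelength_nm):
    if 380 <= wavelength_nm < 450:
        return "violet"
    if 450 <= wavelength_nm < 495:
        return "blue"
    if 495 <= wavelength_nm < 570:
        return "green"
    if 570 <= wavelength_nm < 590:
        return "yellow"
    if 590 <= wavelength_nm < 620:
        return "orange"
    if 620 <= wavelength_nm <= 750:
        return "red"
    return "outside the visible range"

print(colour_name(532))  # prints "green" - a label, not an experience

However complex we make such a mapping, it remains a mapping.
There is no one home to whom green looks like anything.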

The empathy that a consciousness feels for another being is what
determines ethical behaviour. A superintelligence that cannot
feel empathy could not act empathetically, and we would never be
safe from it. Such AIs might not even be safe from each other,
but that's another issue.


William





