UFO UpDates
A mailing list for the study of UFO-related phenomena
'It's All Here In Black & White'

Re: Artificial Intelligence

From: William Treurniet <wtreurniet.nul>
Date: Wed, 19 Dec 2012 09:58:35 -0500
Archived: Wed, 19 Dec 2012 11:14:37 -0500
Subject: Re: Artificial Intelligence

>From: John Donaldson <John.Donaldson.nul>
>To: <post.nul>
>Date: Wed, 19 Dec 2012 06:39:08 +0000
>Subject: Artificial Intelligence

>Dear List-Members,

>I have come across the following very recent, authoritative,
>scholarly article on the possibility for AI this century and
>some possible consequences, which might be of interest:

>Intelligence Explosion: Evidence and Import, by Luke Muehlhauser
>and Anna Salamon of The Singularity Institute

>Taken from:

>Singularity Hypotheses: A scientific and philosophical
>assessment, ed. Amnon Eden, Johnny Søraker, James H. Moor, and
>Eric Steinhart. Berlin: Springer, 2012

>Available at:


This is an interesting article that asks, in effect, how to
implement Asimov's Three Laws of Robotics (without mentioning
them) so that they cannot be circumvented by uncaring humans or
by a superhuman artificial intelligence. The authors conclude
that there is a reasonable probability of human extinction if
such an AI can be created, and that research should be aimed at
minimizing that risk. Interestingly, given that the risk cannot
be eliminated, the authors never propose the option of simply
blocking research that could wipe us out. Aside from that, I
take issue with an assumption behind the supposition that such
a super AI can be created.

The authors admit that it is not yet known how to build such an
intelligence, but they suggest that we could proceed by copying
the human brain, i.e., creating a whole brain emulation (WBE).
We may not know how the human brain works, but we can at least
reverse-engineer it and create a copy that does the same thing.
This is somewhat optimistic, since neuroscience, to my
knowledge, has not yet even established where memory is stored.
Certain brain activity may be correlated with the apparent
activation of a memory, but that does not mean the memory
resides in the brain. The NDE literature gives reason to
believe that memories can be formed while the brain is
clinically dead, and subsequently recalled. Mind is not
encapsulated by the brain. So a WBE that simulates the biology
of the brain but excludes this feature is unlikely to be a
complete implementation of a living brain.

The authors conclude by saying that "Our first superhuman AI
must be a safe superhuman AI, for we may not get a second
chance". Let's assume that such a "safe" AI is created. I'm
reminded of the movie Jurassic Park, where dinosaur
reproduction was supposed to be impossible. As one of the
characters observed, "life finds a way". This should apply in
spades to a constrained but super-intelligent AI.





These contents above are copyright of the author and
UFO UpDates - Toronto. They may not be reproduced
without the express permission of both parties and
are intended for educational use only.


UFO UpDates - Toronto - Operated by Errol Bruce-Knapp

Archive programming by Glenn Campbell at Glenn-Campbell.com