UFO UpDates
A mailing list for the study of UFO-related phenomena
'It's All Here In Black & White'

Re: Artificial Intelligence

From: John Donaldson <John.Donaldson.nul>
Date: Thu, 20 Dec 2012 05:03:56 +0000
Archived: Thu, 20 Dec 2012 11:20:32 -0500
Subject: Re: Artificial Intelligence


>From: William Treurniet <wtreurniet.nul>
>To: post.nul
>Date: Wed, 19 Dec 2012 09:58:35 -0500
>Subject: Re: Artificial Intelligence

>>From: John Donaldson <John.Donaldson.nul>
>>To: <post.nul>
>>Date: Wed, 19 Dec 2012 06:39:08 +0000
>>Subject: Artificial Intelligence

>>I have come across the following very recent, authoritative,
>>scholarly article on the possibility for AI this century and
>>some possible consequences, which might be of interest:

>>Intelligence Explosion: Evidence and Import, by Luke Muehlhauser
>>and Anna Salamon of The Singularity Institute

>>Taken from:

>>Singularity Hypotheses: A scientific and philosophical
>>assessment, ed. Amnon Eden, Johnny Søraker, James H. Moor, and
>>Eric Steinhart. Berlin: Springer, 2012

>>Available at:

>>singularity.org/files/IE-EI.pdf

>This is an interesting article that basically asks how to
>implement Asimov's Three Laws of Robotics (without mentioning
>them) so that they cannot be circumvented by uncaring humans or
>a superhuman artificial intelligence. The authors conclude that
>there is a reasonable probability of human extinction if such an
>AI can be created, and that research should be aimed at
>minimizing that risk. Interestingly, given that the risk cannot
>be eliminated, the authors do not propose the option of blocking
>such research that could wipe us out. Aside from that, I take
>issue with an assumption behind the supposition that such a
>super AI can be created.

>The authors admit that how to build such an intelligence is not
>really known at present, but we can proceed by copying the human
>brain, i.e., creating a whole brain emulation (WBE). We may not
>know how the human brain works, but we can at least reverse-
>engineer it and create a copy that does the same thing. This is
>somewhat optimistic since neuroscience, to my knowledge, has not
>even figured out yet where memory is stored. Certain brain
>activity may be correlated with apparent activation of a memory,
>but that does not mean that the memory exists in the brain.
>There is reason to believe from the NDE literature that memories
>can be created when the brain is dead, and may subsequently be
>recalled. Mind is not encapsulated by the brain. So creating a
>WBE by simulating the biology of the brain but excluding this
>feature is unlikely to result in a complete implementation of a
>living brain.

>The authors conclude by saying that "Our first superhuman AI
>must be a safe super-human AI, for we may not get a second
>chance". Let's assume that such a "safe" AI is created. I'm
>reminded of the movie, Jurassic Park, where lizard reproduction
>was supposed to be impossible. It was observed by one of the
>characters that "nature will find a way". This should apply in
>spades to a constrained but super-intelligent AI.

Hi William,

I thought the article did a good job of surveying the issues in
an accessible way, although the headline-grabbing claim of "if
super AI, then human extinction" was softened by two key caveats
that the authors didn't stress or clarify enough, I think.

First, they only said that there was a *significant probability*
that human-level AI would be created this century. But
"significant probability" is pretty vague. Taking the phrase in
an everyday sense, if someone told me I had a 10% chance of
winning the lottery, I think that could be described as a
"significant probability" - hell, even 1% would be significant
(given standard lottery odds). In the same para, the authors do
also say that "it seems misguided to be 90% confident that AI
will succeed in the coming century. But 90% confidence that AI
will not arrive before the end of the century also seems wrong"
(11). Taken literally, then, it seems the authors could have in
mind any estimate in the range from 89% confidence that AI will
happen down to 89% confidence that it will not - and that this
is what they mean by "significant probability". If that is the
case, then the authors might as
well have said, "AI probably will or probably won't happen this
century". Hardly headline news. Maybe it's uncharitable to read
the authors in that way, but without further clarification it is
not clear what else they might mean, and they only offer that
one short paragraph on the issue (11).
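
To spell out the arithmetic behind that reading (this is my own
gloss, not anything the authors state explicitly), write P(AI)
for the probability that human-level AI arrives this century.
Taken literally, their two remarks amount to:

\[
P(\mathrm{AI}) < 0.9
\quad\text{and}\quad
P(\lnot\mathrm{AI}) = 1 - P(\mathrm{AI}) < 0.9,
\quad\text{hence}\quad
0.1 < P(\mathrm{AI}) < 0.9,
\]

which is compatible with almost any attitude from "probably not"
to "probably".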

Second, the authors seem to assume that by "human" they mean
"human like us, now", ruling out genetic manipulation or
technological cognitive enhancement. I think it's clear that
given *enough* such enhancements it might make sense to stop
applying the word "human", but it's also clear that there can be
significant degrees of genetic and tech-cog enhancement prior to
that stage. Just how much is an interesting question, but one
the authors don't address. But again, this softens the "if super
AI, then human extinction" headline - if the humans that become
extinct are just the basic-like-us model, but a new and
improved, yet still recognisably human (in whatever sense),
version survives, then it's a rather less worrying kind of
apocalypse.
If you're familiar with them, perhaps the Iain M. Banks "Culture"
novels describe (in an apparently conceivable way) such a future
(although I guess that's more humanoids than humans, strictly
speaking, but the point carries mutatis mutandis).

You also say "Mind is not encapsulated by the brain." I'm not
sure what you mean here - is it that materialism is false (i.e.
that it's false that mental properties either are, or at least
supervene on, physical properties)? If you are denying
materialism, then fair enough (there's certainly a very
interesting debate to be had there), but it does seem a little
uncharitable to criticise the article for being materialist -
the whole AI project proceeds on that assumption, and this is an
article that asks: "assuming AI is possible, what follows, and
when...?" Which is surely a reasonable question.

If you aren't denying materialism, then it follows that if two
entities are physically identical, then they will be mentally
identical. From that it follows that if you can copy a brain,
you copy the mind that "goes with it" (i.e. that is identical
with it, or at least supervenes on it). Hence the whole brain
emulation idea...
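
Put schematically (my own rendering of the supervenience claim,
not a formula from the article), with Phys(x) and Ment(x)
standing for the complete physical and mental states of x:

\[
\forall x\,\forall y\;\bigl(\mathrm{Phys}(x)=\mathrm{Phys}(y)\;\rightarrow\;\mathrm{Ment}(x)=\mathrm{Ment}(y)\bigr),
\]

so if an emulation really did duplicate Phys(brain), it would
thereby duplicate Ment(brain).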

Anyway, those are my thoughts, for what they're worth!


Best wishes,

John




