UFO UpDates
A mailing list for the study of UFO-related phenomena
'Its All Here In Black & White'

Re: Artificial Intelligence

From: Jason Gammon <boyinthemachine.nul>
Date: Wed, 19 Dec 2012 21:58:39 -0500 (EST)
Archived: Thu, 20 Dec 2012 08:28:43 -0500
Subject: Re: Artificial Intelligence

>From: William Treurniet <wtreurniet.nul>
>To: post.nul
>Date: Wed, 19 Dec 2012 09:58:35 -0500
>Subject: Re: Artificial Intelligence


>This is an interesting article that basically asks how to
>implement Asimov's Three Laws of Robotics (without mentioning
>them) so that they cannot be circumvented by uncaring humans or
>a superhuman artificial intelligence. The authors conclude that
>there is a reasonable probability of human extinction if such an
>AI can be created, and that research should be aimed at
>minimizing that risk. Interestingly, given that the risk cannot
>be eliminated, the authors do not propose the option of blocking
>such research that could wipe us out. Aside from that, I take
>issue with an assumption behind the supposition that such a
>super AI can be created.

It's time we abandoned Asimov's Three Laws. Asimov was operating
under a very primitive understanding of A.I. Instead of, say,
placing safeguards in the programming of robots, we will need to
raise robots in human families so that they assimilate into
human society. Even this will not completely rule out the risk
of a robot turning against us.

The question of not building A.I. must never be entertained in
our collective thought. The very survival of the human species
depends on the creation of "the children of man".

Right now most people are aware that our sun will die in about 4
billion years; it's already middle-aged. However, few people
truly understand how little time we have left. We don't have 4
billion years. We have about 500 million years, which is
practically no time at all with regard to the history of the
earth. In about 500 million years the sun's output of energy
will have increased to the point where nothing will be able to
survive on earth. Ironically, instead of getting weaker as it
dies, our sun will get hotter and brighter. That will render the
whole greenhouse debate moot. We have to get out of the solar
system, and we need A.I. to bring us to a new home. To be fair,
some scientists say it's between 500 million and 1 billion
years, but the point is the same. We have to go!

As for projects to reverse-engineer the human brain, such as
Blue Brain, these will theoretically create an A.I. that is
equal to a human being. However, the kind of A.I. we need, the
kind that will kick off the singularity, is the kind that comes
after these A.I.s begin to upgrade themselves, making themselves
smarter and more efficient. This will create an exponential
growth in A.I. that will ultimately lead to a 'god-like' level
of intelligence. Of course it can't increase exponentially for
eternity, but from our point of view it will be like a god,
something we could never dream of competing with. It's that
god-like level of A.I. that will create the technology we may
see in UFOs, for example.

So in my opinion at least, human beings will not create 'super-
A.I.' We will create A.I.s as our equals, and then they will
upgrade themselves. Likewise, I do not believe humanity is
capable of producing craft that can reproduce everything we see
with UFOs. It may be that such craft require the god-like mind
of a super-A.I. to construct.

Jason Gammon

Listen to 'Strange Days... Indeed' - The PodCast



These contents above are copyright of the author and
UFO UpDates - Toronto. They may not be reproduced
without the express permission of both parties and
are intended for educational use only.


UFO UpDates - Toronto - Operated by Errol Bruce-Knapp

Archive programming by Glenn Campbell at Glenn-Campbell.com