UFO UpDates
A mailing list for the study of UFO-related phenomena
'Its All Here In Black & White'

Re: Artificial Intelligence

From: Jason Gammon <boyinthemachine.nul>
Date: Fri, 4 Jan 2013 00:59:07 -0500 (EST)
Archived: Fri, 04 Jan 2013 09:51:33 -0500
Subject: Re: Artificial Intelligence

>From: William Treurniet <wtreurniet.nul>
>To: post.nul
>Date: Thu, 03 Jan 2013 11:11:00 -0500
>Subject: Re: Artificial Intelligence

>>From: Jason Gammon <boyinthemachine.nul>
>>To: post.nul
>>Date: Thu, 3 Jan 2013 02:28:54 -0500 (EST)
>>Subject: Re: Artificial Intelligence

>>>From: Ray Dickenson <r.dickenson.nul>
>>>To: <post.nul>
>>>Date: Wed, 2 Jan 2013 08:27:04 -0000
>>>Subject: Re: Artificial Intelligence

>>intelligence is defined. As long as intelligence remains
>>undefined then it can not be proven that the humans in the
>>argument are intelligent. The argument also does not take into

>Well, with that one sentence, this entire thread is trivialized.

>Unfortunately, Jason has put his finger on a fundamental
>problem. There have always only been operational definitions -
>intelligence is what intelligence tests measure. Such
>definitions don't help much to understand the putative goal of
>autonomous machines.

>Our operational definitions of intelligence can only arise from
>human expectations about human behaviour. How would we even know
>when a machine is intelligent, much less when it becomes super-
>intelligent? It would be like trying to understand the
>intelligence of a tree.


I had to flip it on Ray to show how flawed the Chinese Room argument is.
Well, not flawed exactly. It's a brilliant trap. I say we should
rename it 'the Chinese Finger Trap' instead, as it assumes
humans are intelligent and makes no distinction between
conscious and unconscious intelligence. So we are better off
just ignoring the argument altogether.

The good news is that we don't have to understand something in
order to exploit it. For example, we don't fully understand
gravity, yet we each exploit it every single day.

The key is behavior. We will know A.I. is intelligent when it
behaves in a manner we interpret as intelligent. We will
likewise know A.I. has advanced to a state beyond human
intelligence when humans can no longer compete with it. This
process won't occur all at once but will continue as it is now
doing, with various fields that were formerly dominated by
humans being taken over by machines. Eventually a time will
come when humans are no longer proficient at creating
computers. From that time onward, machines will be in control
of the creation of new and better computers. The same goes for
computer programmers. In the future there won't be any human
computer programmers; machines will be far superior at
designing new computer programs. I don't think people quite
understand this. It's not a matter of cost or anything like
that. It's a matter of the human mind not being able to keep up
with technology, which is why we will just allow A.I. to take over.

So personally, I'm not afraid of an A.I. takeover. What I'm
afraid of is human behavior that will force machines to defend
and protect themselves. For a good example, check out the
segments of The Animatrix below, titled The Second Renaissance
Parts 1 & 2. This is the official prequel to the Matrix
trilogy. I love these segments because 'the truth' is so
different from the way humans present it in the movies. For
example, in the movies Morpheus states that no one knows who
started the war. As you will see, that's not quite true.

The Second Renaissance

Part 1


Part 2


Jason Gammon

Listen to 'Strange Days... Indeed' - The PodCast



These contents above are copyright of the author and
UFO UpDates - Toronto. They may not be reproduced
without the express permission of both parties and
are intended for educational use only.


UFO UpDates - Toronto - Operated by Errol Bruce-Knapp

Archive programming by Glenn Campbell at Glenn-Campbell.com