UFO UpDates
A mailing list for the study of UFO-related phenomena
'Its All Here In Black & White'

Critique Of Dolan's Sourcing & Conclusions

From: J. Maynard Gelinas <j.maynard.gelinas.nul>
Date: Fri, 10 May 2013 12:34:58 +0800
Archived: Fri, 10 May 2013 08:27:34 -0400
Subject: Critique Of Dolan's Sourcing & Conclusions


Richard Dolan's UFOs And The National Security State series is
highly regarded among the UFO research community and deservedly
so. It's probably the best compendium of the history of
'UFOlogy' in print today. His wide range and sheer number of
open sources give the work a depth that perhaps only Timothy
Good matched twenty-five years ago with "Above Top Secret." It's
good work and I'll be among the first in line to
buy a copy when Vol 3 is released. But that doesn't mean Dolan's
approach is beyond criticism. In fact, given his status as a
leader in the community, it behooves everyone to question
aspects of his work as much as possible, if only to tease out
those parts where poor sourcing leads to untenable conclusions.

I'd like to offer one specific instance of an almost certainly
incorrect conclusion, brought about by single sourcing, in a
statement made during the April 30th night session of the
Citizen Hearing on Disclosure. See approximately 35 to 38
minutes into the lecture.

Dolan:

"We're a couple of years away from computers that will challenge
and literally surpass human intellect in many ways. [next slide]

About a decade ago I got really turned on to the future of
Artificial Intelligence. I read some books by Ray Kurzweil, who
I think is a real visionary. He was the one who really started
me thinking along these lines. He wrote a book about fifteen
years ago called, "The Age of Spiritual Machines," which really
grabbed ahold of my imagination. But not just him; many other AI
people - and they're fairly mainstream, they're not like us,
they actually get funding to do the things they do. They all say
the same thing - with variations - which is essentially that
within about twenty years from now your computer will be telling
you it's a conscious, sentient being. And you will very likely
believe those claims.

Now, will it be conscious the way that you are - no, probably
not, but will it matter? It will seem like it. It won't need to
sleep; it won't need coffee in the morning; it will have a
relative IQ of 500 and, hey, as Kurzweil put it, 'it will have a
God-like level of intelligence.' And, in that situation, ask
yourself this: If you have a question you'd like to ask of your
super-intelligent computer, let's name it 'Marvin', let's just
say. [Reference to Marvin the Paranoid Android, "Hitchhiker's
Guide to the Galaxy," D. Adams, fiction] You could say, "Hey
Marvin, I've been thinking about all of this UFO stuff. I wonder
if you could just tell me what you know." And Marvin will say
whatever Marvin comes up with, but Marvin will probably say that
it's real and that there's something going on. And, what is that
line, I think by Schopenhauer, three stages of accepting new
truths: Stage one, they ridicule you; stage two, it's attacked
violently; stage three, they say, 'yeah, we knew it all along'."


This is a problematic assertion, which I'd like to challenge
from a number of perspectives.

To begin, take the statement that '...many other AI people ...
They all say the same thing - with variations - which is
essentially that within about twenty years from now your
computer will be telling you it's a conscious, sentient being.'

Actually, no. Few academic AI researchers would be willing to
make such a statement on the record. And many would challenge
it, not from a time standpoint but from a feasibility standpoint
altogether.

A good lay article challenging Kurzweil's latest book, How To
Create A Mind, was published in The New Yorker last year.


"Kurzweil is so confident in his theory that he insists it
simply has to be correct. Early in the book, he claims that =93the
model I have presented is the only possible model that satisfies
all the constraints that the research and our thought
experiments have established.=94 He later declares that =93there
must be an essential mathematical equivalence to a high degree
of precision between the actual biology and our attempt to
emulate it; otherwise these [A.I.] systems would not work as
well as they do.=94

"What Kurzweil doesn=92t seem to realize is that a whole slew of
machines have been programmed to be hierarchical-pattern
recognizers, and none of them works all that well, save for very
narrow domains like postal computers that recognize digits in
handwritten zip codes. This summer, Google built the largest
pattern recognizer of them all, a system running on sixteen
thousand processor cores that analyzed ten million YouTube
videos and managed to learn, all by itself, to recognize cats
and faces=97which initially sounds impressive, but only until you
realize that in a larger sample (of twenty thousand categories),
the system=92s overall score fell to a dismal 15.8 per cent.

http://tinyurl.com/cderjox

Here is Jaron Lanier, an early pioneer of 'virtual reality' and
a well-known computer scientist:


"One question I have about Ray's exponential theory of history is
whether he is stacking the deck by choosing points that fit the
curves he wants to find. A technological pessimist could
demonstrate a slow-down in space exploration, for instance, by
starting with Sputnik, and then proceeding to the Apollo and the
space shuttle programs and then to the recent bad luck with Mars
missions. Projecting this curve into the future could serve as a
basis for arguing that space exploration will inexorably wind
down. I've actually heard such reasoning put forward by
antagonists of NASA's budget. I don't think it's a meaningful
extrapolation, but it's essentially similar to Ray's arguments
for technological hyper-optimism.

"It's also possible that evolutionary processes might display
local exponential features at only some scales. Evolution might
be a grand scale "configuration space search" that periodically
exhibits exponential growth as it finds an insulated cul-de-sac
of the space that can be quickly explored. These are regions of
the configuration space where the vanguard of evolutionary
mutation experimentation comes upon a limited theater within
which it can play out exponential games like arms races and
population explosions. I suspect you can always find exponential
sub processes in the history of evolution, but they don't give
form to the biggest picture."

http://www.edge.org/discourse/jaron_answer.html

To explain how Lanier's statement relates to Kurzweil's thesis,
one should understand that a foundational argument of Kurzweil's
is that technological improvement occurs at exponential rates,
and that this improvement is driven along an evolutionary path
leading to greater functional complexity. Lanier challenges this
by pointing to historical cases where exponentially increasing
returns to technology ultimately petered out. Because _nothing_
increases at exponential rates forever. Not bacteria in a petri
dish - the food runs out - and not technological improvements.
In computing, the physical constraints of energy consumption and
feature shrinkage make the point: Moore's Law is approaching
lithographic feature sizes on the order of 10nm trace widths,
only about two orders of magnitude above the size of an average
atom, so exponentially increasing returns from computing
performance will not continue without a radical shift in
computing architecture.
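
The point about saturation is easy to see numerically. Here is a
minimal Python sketch (the growth rate and carrying capacity are
arbitrary values I picked for illustration) comparing unbounded
exponential growth against logistic growth, the standard model
for bacteria exhausting the food in a petri dish:

  import math

  RATE = 0.5        # growth rate per time step (arbitrary)
  CAPACITY = 100.0  # resource ceiling, e.g. food in the dish

  def exponential(t):
      """Unbounded exponential growth: e^(RATE * t)."""
      return math.exp(RATE * t)

  def logistic(t):
      """Same early growth, but saturating at CAPACITY."""
      return CAPACITY / (1.0 + (CAPACITY - 1.0) * math.exp(-RATE * t))

  for t in range(0, 25, 4):
      print(f"t={t:2d}  exponential={exponential(t):12.1f}"
            f"  logistic={logistic(t):6.1f}")

The two curves track each other early on - which is why naive
extrapolation is so tempting - and then diverge completely once
the ceiling starts to bind.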

In the academic literature, here is another critique of
Kurzweil's approach:

"Goertzel, B. (2007). Human-level artificial general
intelligence and the possibility of a technological singularity.
A reaction to Ray Kurzweil's The Singularity Is Near, and
McDermott's critique of Kurzweil. Artificial intelligence, 171,
1161-1173.

Critique of Kurzweil by McDermott and replies by Goertzel:

1) Kurzweil does not give any proof that an AI Singularity is
upon us.

- Kurzweil does not claim to do so.

2) Even if we succeed in scanning the brain into a computer, we
still won't understand human intelligence. [p1167]

- Kurzweil's predictions are far in the future and take into
account this learning process.

- An uploaded, digitized human brain can be more easily
manipulated and studied. [p1168]

3) Kurzweil says that machines will augment their capacities
without limit, but this is unrealistic.

- That is true, the laws of physics might impose their own
limitations but progressive self-improvement might not get boxed
in by universal laws.

- An analogy for this augmentation can be the scientific
community, a self-improving collective intelligence composed
internally of human-level intelligences. [p1169]

Kurzweil's route toward Singularity-enabling AGI can be summed
up as scanning human brains, creating brain emulations, studying
these emulations and creating AGI systems capable of self-
improvement."

The actual paper is behind a paywall, so here is a summary of
the critique:

http://www.jimdavies.org/summaries/Goertzel2007.html

To understand this argument, one must recognize what Kurzweil
proposes. He argues that by creating scans of the brain it
should be possible to simulate the structure of those scans in a
computer and replicate general intelligence in this way. Once
the system reaches a threshold of intelligence, it should then
be possible to assign the system the task of increasing its own
intellectual capacity by studying itself. In time, via
evolutionary methods, it would increase in intelligence to
levels unrecognizable to the humans who originally developed
the system.
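
As a schematic of what 'evolutionary methods' means here,
consider this minimal hill-climbing sketch in Python -
emphatically not Kurzweil's actual proposal, just the bare shape
of the idea: a system scores itself, perturbs itself at random,
and keeps whatever scores better. The fitness function is a toy
stand-in of my own invention:

  import random

  def score(params):
      # Stand-in fitness function; a real system would have to
      # measure its own task performance here.
      return -sum((p - 3.0) ** 2 for p in params)

  params = [random.uniform(-10, 10) for _ in range(5)]
  for generation in range(1000):
      # Mutate: randomly perturb one parameter.
      candidate = list(params)
      i = random.randrange(len(candidate))
      candidate[i] += random.gauss(0, 0.5)
      # Select: keep the mutation only if it scores better.
      if score(candidate) > score(params):
          params = candidate

  print(score(params))  # approaches 0 as params near the optimum

The catch, as the critiques above suggest, is that this loop
only works when a reliable scoring function exists - and
'measure your own general intelligence' is precisely the
function nobody knows how to write.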

Stepping up from the individual neuron, he argues that
collections of neurons work as "pattern recognizers" (see New
Yorker critique), which when hierarchically connected in complex
patterns form what we perceive as 'self-directed conscious
awareness'.
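
In code terms, the proposal amounts to something like the
following toy Python schematic (my own illustration, not
Kurzweil's architecture): low-level recognizers fire on raw
features, and higher-level recognizers treat those firings as
their own input pattern:

  class Recognizer:
      """Fires when enough of its expected inputs are active."""
      def __init__(self, expected, threshold=0.6):
          self.expected = expected    # inputs this unit looks for
          self.threshold = threshold

      def fires(self, active):
          matched = sum(1 for name in self.expected if name in active)
          return matched / len(self.expected) >= self.threshold

  # Level 1 recognizers watch raw features; Level 2 watches Level 1.
  edges = Recognizer({"stroke_up", "stroke_down"})
  loop = Recognizer({"curve_left", "curve_right"})
  letter_b = Recognizer({"edges", "loop"}, threshold=1.0)

  features = {"stroke_up", "stroke_down", "curve_left", "curve_right"}
  level1 = {name for name, r in [("edges", edges), ("loop", loop)]
            if r.fires(features)}
  print(letter_b.fires(level1))  # True: hierarchy composed a 'letter'

Stack enough such layers, the argument goes, and you get
perception; stack even more and you get self-aware thought. The
contested step is the last one.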

As noted in prior critiques, this idea is not new. Marvin
Minsky, co-founder of the MIT AI Lab, promoted a similar idea in
"The Society of Mind", back in the mid-1980s. So too did Tufts
philosopher Daniel Dennett in "Consciousness Explained",
published in 1991.

Ray Kurzweil used a similar method in developing his commercial
Optical Character Recognition system back in the late 1970s (a
tool for converting text to speech for the blind), and this
approach is how speech recognition engines have worked since the
early 1990s: stochastic statistical analysis recognizes a
pattern and then selects the best-fit solution for a desired
result. It works, but as anyone who has used speech recognition
systems knows, with a high error rate compared to actual brains
and real ears. Improvement in the technique has come not from
better algorithms but from faster computers and better
microphones.
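
The flavor of that approach fits in a few lines of Python. This
is an illustrative nearest-template matcher of my own, far
simpler than any production recognizer: score each stored
pattern against the input and pick the best fit, which works
until noise swamps the differences between templates:

  import random

  # Toy 'acoustic' templates: each word maps to a feature vector.
  TEMPLATES = {
      "yes":   [0.9, 0.1, 0.8],
      "no":    [0.1, 0.9, 0.2],
      "maybe": [0.5, 0.5, 0.5],
  }

  def recognize(signal):
      # Return the word whose template best fits the signal
      # (smallest squared error).
      def error(template):
          return sum((s - t) ** 2 for s, t in zip(signal, template))
      return min(TEMPLATES, key=lambda w: error(TEMPLATES[w]))

  clean = [0.9, 0.1, 0.8]                            # a clean 'yes'
  noisy = [v + random.gauss(0, 0.4) for v in clean]  # same, garbled
  print(recognize(clean), recognize(noisy))

The clean signal is always recognized; the noisy one frequently
is not. Faster machines let you store more templates and richer
features, which is exactly the kind of improvement the field has
actually seen.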

Computers are reaching the limits of raw speed improvement, and
speech recognition is but one small task a general intelligent
agent must perform. Each problem-solving agent in the hierarchy
must interact with a tree of agents underneath it, and errors in
calling up the correct pattern of agents for a particular
problem compound as system complexity grows. The hope is that
massive parallelization in compute engines will solve the
'physical limits of computing' problem, and that brain scans at
the neuron level will solve the problem of patterning these
agents in a way that replicates human cognition.
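
The compounding is easy to quantify under a simplifying
assumption of my own (independent failures, which real systems
won't exactly obey): if each agent in a chain is right 99% of
the time, a task touching n agents succeeds with probability
0.99^n:

  # Probability that a chain of n agents all behave correctly,
  # assuming each is independently right 99% of the time.
  for n in (1, 10, 100, 1000):
      print(f"{n:5d} agents: {0.99 ** n:.3g}")

  # Prints: 1 -> 0.99, 10 -> 0.904, 100 -> 0.366, 1000 -> 4.32e-05

A thousand-agent hierarchy - modest by brain standards - almost
never gets everything right, which is why the patterning problem
matters at least as much as raw speed.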

But that's a hope, and it's one that certainly isn't a sure bet.
There are ongoing attempts, though. For example, the Blue Brain
project is attempting to scan an entire mouse brain and model it
using an IBM BlueGene computer. They've succeeded at scanning a
neocortical column of a mouse brain (about 1mm cubed) and have
published results that show similarities in input and output
between their simulation and a live mouse brain. But it takes an
entire room full of computers to do so, and the simulation
doesn't run in real time.

http://bluebrain.epfl.ch/

The Obama Administration has announced a funding plan to attempt
to map the entire human brain. The project is considered on a
par with the Human Genome Project in funding and interdisciplinary
complexity. But even if successful, that doesn't mean it will
generate useful results. This is pure research, not applied.

http://tinyurl.com/bvqh9s5

And then there's a problem with the central thesis itself: that
computers using methods of 'self-insight' might somehow improve
themselves intellectually. From a historical perspective,
thinkers going back to Plato and Aristotle, and up through Hume
and Freud, have proposed self-insight as a mechanism for
improvement. But that hasn't worked too well. At least not in
terms of finding correct results or orders-of-magnitude
improvement.

What's the upshot of this as far as Richard Dolan's claims go?

1) Even assuming Kurzweil is right, the timeframe Mr Dolan
proposed is an incredibly risky claim. Most professional AI
researchers would NOT be willing to predict on the record
anything like 'in 20 years an AI will surpass human
intelligence'.

2) It's a bad idea to assume Kurzweil is right, because there is
too much we don't know about how brains - or even general
intelligence - actually work.

For the last fifty years AI researchers - very much like fusion
researchers - have been claiming that with incremental advances
we'd achieve the AI Holy Grail in short order. And they've been
wrong. In fact, fusion at least has a path to success with ITER
(the International Thermonuclear Experimental Reactor), which
has a theoretical foundation for net positive energy production
(more energy out than was put in to start the reaction). AI has
nothing of the sort but hopes and dreams. There is no hard
theoretical foundation to back up Kurzweil's claims.

Conclusion

So I think Dolan is wrong in this prediction. It's OK to be
wrong, and I write this note with all due respect to the
gentleman. But the reason I point to it is that it suggests a
level of cherry-picking in his sources that leads to excessively
hopeful and bombastic claims. Individually, this response is a
minor criticism. If Dolan were to remove this anecdote from his
logic chain, it wouldn't destroy his 'Breakaway Civilization'
thesis. But it does - in a small way - chip away at it. And I
think UFO researchers should critique the foundations of his
claims in order to determine whether the thesis is a rational
conclusion. Not because Mr Dolan's work is poorly done or
because he's a bad guy, but because it's a big conclusion that
rests on a large foundation of claims across many disparate
sources, each of which might not individually be tenable. It's
appropriate to challenge these foundational claims in order to
determine whether the logic chain holds overall.

--M



