UFO UpDates
A mailing list for the study of UFO-related phenomena
'It's All Here In Black & White'

Re: Inter-Dimensional Or Demonic

From: David Rudiak <drudiak.nul>
Date: Thu, 08 Mar 2012 14:06:53 -0800
Archived: Fri, 09 Mar 2012 03:41:27 -0500
Subject: Re: Inter-Dimensional Or Demonic


>From: Ray Dickenson <r.dickenson.nul>
>To: <post.nul>
>Date: Wed, 7 Mar 2012 14:55:58 -0000
>Subject: Re: Inter-Dimensional Or Demonic

>>From: Gerald O'Connell <goc.nul>
>>To: <post.nul>
>>Date: Tue, 6 Mar 2012 20:57:31 -0000
>>Subject: Re: Inter-Dimensional Or Demonic

>>>From: Ray Dickenson <r.dickenson.nul>
>>>To: <post.nul>
>>>Date: Sun, 4 Mar 2012 15:08:26 -0000
>>>Subject: Re: Inter-Dimensional Or Demonic

><snip>

>>>And, as noted by researchers, the flow of data along the optic
>>>nerve is two-way (as with hearing). Clearly the brain is
>>>constantly telling the eye _how_and_what_ to see.

>>I think you are overstating the case pretty dramatically there,
>>Ray. The return path info is probably just instructions around
>>attention: moving the antenna, focus, etc. ... There doesn't have
>>to be any interpretive data.

>>Check the next sentence in that post. The possibility of your
>>'purely optical' interpretation was the reason for me next
>>providing the data backing - the new-born kitten experiments -
>>for 'brain control' of the eye.

The new-born kitten experiments show pathology induced in the
brain and do NOT show "brain control" of the eye. More below.

>>The eye is an optical mechanism (ie. all its parts, including
>>the rods and cones, are dedicated to detecting photons) and it
>>contains _no_ cortical matter for processing or converting image
>>data.

No, again wrong. The retina begins as an outgrowth of the
primordial brain tissue in the fetus and consists of three
distinct layers, not just one of photoreceptors. The
photoreceptors are primarily transducers, converting light into
electrical signals, so they also have properties of nerve
tissue (but they are derived from cilia, as are the hair cells
in the ear that transduce vibrations into electrical signals).
Even in the photoreceptors some processing of the light takes
place, such as setting the light sensitivity of the eye, which
can span a far greater range of magnitudes than film or your
digital camera (hence the problem we've all experienced taking
outdoor pictures, where the bright sky is washed out while
darker scene details come out too dark--the retina can locally
adjust its sensitivity and thus accommodate a much larger range
of light values).
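
To make the idea concrete, here is a toy Python sketch of that
kind of local gain control (purely illustrative; the patch size
and the 1000:1 "sky vs. shadow" scene are invented numbers, not
a retinal model). Dividing each pixel by the average luminance
of its neighborhood brings a huge dynamic range down to modest
local contrasts:

import numpy as np

def local_gain_control(image, patch=16, eps=1e-6):
    """Crude local adaptation: divide each pixel by the mean
    luminance of its surrounding patch, so a scene spanning
    orders of magnitude ends up as modest local contrasts."""
    h, w = image.shape
    out = np.empty_like(image, dtype=float)
    for y in range(h):
        for x in range(w):
            y0, y1 = max(0, y - patch), min(h, y + patch + 1)
            x0, x1 = max(0, x - patch), min(w, x + patch + 1)
            out[y, x] = image[y, x] / (image[y0:y1, x0:x1].mean() + eps)
    return out

# Toy scene: a "sky" half 1000 times brighter than a "shadow" half.
scene = np.ones((64, 64))
scene[:32, :] *= 1000.0
adapted = local_gain_control(scene)
print(adapted[5, 5], adapted[60, 60])  # both ~1.0 despite the 1000:1 input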

The next two layers are called the bipolar and ganglion cell
layers and most definitely ARE neural tissue that does early
processing on the visual signal. This involves recoding the raw
light/dark/color signals into visual primitives that are
assembled into more complex interpretations in the visual
cortex. The basics of edge and form detection, light and color
constancy, motion detection, and light sensitivity are built
right into the retina.
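
To make "early processing" concrete, here is a small Python
sketch of the textbook center-surround receptive field often
used to describe retinal ganglion-cell behavior (a difference
of two Gaussians). The sizes and sigmas below are arbitrary,
chosen for illustration only; the point is simply that such a
unit responds to local contrast (edges) and stays silent on
uniform light:

import numpy as np

def dog_kernel(size=15, sigma_center=1.0, sigma_surround=3.0):
    """Center-surround receptive field: a narrow excitatory
    Gaussian minus a broad inhibitory one (difference of
    Gaussians). The kernel sums to ~0, so uniform light gives
    no response."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    r2 = xx**2 + yy**2
    center = np.exp(-r2 / (2 * sigma_center**2))
    surround = np.exp(-r2 / (2 * sigma_surround**2))
    return center / center.sum() - surround / surround.sum()

def respond(image, kernel):
    """Slide the receptive field over the image (valid region)."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            out[y, x] = np.sum(image[y:y + kh, x:x + kw] * kernel)
    return out

# A step edge: uniform regions give ~0, the edge a strong response.
img = np.zeros((40, 40))
img[:, 20:] = 1.0
resp = respond(img, dog_kernel())
print(round(float(abs(resp[:, 0]).max()), 3),   # deep in the dark half: ~0
      round(float(abs(resp).max()), 3))         # straddling the edge: strong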

The retina is NOT just a passive purveyor of lightness and color
values, but actively processes those values. Thus it is only
crudely analogous to a film camera or TV camera. The point is
that the retina does early visual processing and IS brain tissue.

Higher vertebrates like ourselves do the vast bulk of visual
processing in our large cortex, but lower vertebrates with tiny
brains may do a big chunk of processing in the retina, not their
brains. The earliest electrophysiological studies of the retina
were done in amphibians like frogs and mud puppies, which happen
to have bipolar and ganglion cells much larger than those of
higher vertebrates. Thus it was possible to record directly from the
bipolar and ganglion cells and learn what they did or did not
respond to.

One classic paper from 1959 was charmingly titled "What the
Frog's Eye Tells the Frog's Brain," by Lettvin et al. The frog
has very simple and basic visual needs. It needs to find food
and it needs to leap away from danger. That's about it.
Corresponding to these needs, they found four types of
processed visual responses in the retina, carried by four
different types of nerve cells, which mapped precisely onto
four different layers in the optic tectum (the only visual
"brain" the frog has). One
process detects only small, convex moving spots (nicknamed "bug
detectors"). The others detect large edges, gross movement, and
dimming of light (hence perhaps a predator approaching). Unless
a bug is moving, a frog doesn't interpret it as possible food.
It is indeed "blind" to its existence because its retina isn't
constructed to detect it. It will starve to death surrounded by
motionless bugs. That is how specific the frog's retinal system
is (but not ours). Incidentally, the Lettvin paper can be found
here:

http://tinyurl.com/2dtqbo
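
As a caricature of what such a "bug detector" does (a toy
Python sketch, not Lettvin's actual model, and it ignores
convexity entirely), one can build a unit that fires only when
something small has changed between two frames. A motionless
bug changes nothing and is therefore invisible to it, just as
it is to the frog:

import numpy as np

def bug_detector(prev_frame, frame, threshold=0.5, max_blob=30):
    """Respond only if something SMALL has MOVED: count the
    pixels that changed between frames and fire only when that
    count is nonzero but below a size limit."""
    changed = np.abs(frame - prev_frame) > threshold
    return bool(0 < changed.sum() <= max_blob)

frame0 = np.zeros((50, 50))
frame1 = np.zeros((50, 50))
frame1[10:13, 10:13] = 1.0        # a small spot appears/moves...
frame2 = frame1.copy()            # ...then sits perfectly still

print(bug_detector(frame0, frame1))   # True  -- small moving spot
print(bug_detector(frame1, frame2))   # False -- motionless, so "invisible"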

>>Any such operations could only start in the optical
>>tectum, the first connections of the optic nerve into the brain
>>(although further evidence - see ref below - says that other
>>processing and discrimination takes place deeper in the brain).

But early NEURAL processing takes place in the retina. Obviously
only a very limited aspect of the visual world passes through
the "filter" of the frog's retina (bug or possible predator),
but much more of the visual world passes through our retinas to
our much bigger brains, which can recognize far more than just
food or foe: they identify and classify millions of objects as
well as many kinds of motion (e.g., we can recognize an animal
or person by their gait alone).

Higher vertebrates like us have much simpler retinas than frogs,
passing off the more complex data processing to the brain. You
won't find the equivalent of a "bug detector" in a cat or human
retina.

The main "filter" in our retinas is reducing the light collected
from about 100 million photoreceptors in each retina down to
about one million fibers in the optic nerve. Signals from
multiple photoreceptors in the peripheral retina are gathered
into larger "receptive fields", whereas the very central part of
the retina where the smallest cone photoreceptors are located,
may have a one-to-one correspondence between photoreceptor and
optic nerve fiber. Our detailed vision for form is in the center
of the eye, whereas our detection of gross detail (which helps
us orient ourselves and place gross objects relative to one
another in the visual space) and motion lies more and more to
the periphery. Correspondingly, the bulk of brain tissue is
devoted to processing the central few degrees of our vision, and
less to the much larger area of our visual field in the
periphery.
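
A crude way to picture that reduction in code (a Python sketch
with invented pooling numbers, not physiology): pool a line of
"photoreceptor" samples into "optic nerve" outputs whose
windows grow with distance from the center, so the center stays
one-to-one while the periphery is heavily summarized:

import numpy as np

def foveated_pool(signal, center, base=1, growth=0.2):
    """Pool 1-D 'photoreceptor' samples into outputs whose
    receptive fields grow with eccentricity: one sample per
    output near the center, many per output in the periphery."""
    outputs, i, n = [], 0, len(signal)
    while i < n:
        ecc = abs(i - center)
        width = max(base, int(base + growth * ecc))  # wider further out
        outputs.append(signal[i:i + width].mean())
        i += width
    return np.array(outputs)

receptors = np.random.rand(10_000)        # stand-in for a strip of photoreceptors
fibers = foveated_pool(receptors, center=5_000)
print(len(receptors), "->", len(fibers))  # large reduction, finest at the center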

This was a necessary evolutionary compromise. We would have
needed optic nerves the thickness of your wrist and brains the
size of a desk to process all of the visual field with the same
detail as the central few degrees if there were a one-to-one
correspondence between all photoreceptors and all optic nerve
fibers.
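
A quick back-of-the-envelope check on that (the ~4 mm optic
nerve diameter below is an assumed round figure, not from the
post): about 100 times as many fibers means roughly 100 times
the cross-sectional area, hence about 10 times the diameter:

# Rough scaling only; the 4 mm nerve diameter is an assumed figure.
fibers_now, fibers_needed = 1e6, 1e8
diameter_mm = 4.0
scale = (fibers_needed / fibers_now) ** 0.5   # area ~ fiber count
print(round(diameter_mm * scale), "mm")       # ~40 mm: wrist territory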

Despite this, our visual systems work remarkably well, still
much better than any computer vision out there. Computer vision
systems, to avoid overwhelming themselves and to process in
something like real time, likewise look for only certain
aspects of the visual scene. Any "seeing" system, however
complex, is going to have to do this. I can imagine future more
complex artificial vision systems than our own small brains can
handle, but all vision systems involve compromises and filtering
of the scene for certain visual primitives that can be analyzed
into more complex attributes. "Seeing" of varying complexity
necessarily involves data processing or filtering of the "pure"
light image into various attributes that can be used in some way
by the vision system.
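
In code terms, "filtering the light image into attributes"
might look like the following toy Python sketch (the three
primitives chosen are arbitrary, for illustration only): the
raw pixel array is boiled down to a few cheap measures such as
edge energy, frame-to-frame motion, and mean light level, and
only those are passed downstream:

import numpy as np

def primitives(frame, prev_frame):
    """Reduce a raw luminance image to a handful of cheap
    'visual primitives' instead of shipping every pixel
    downstream."""
    edges_x = np.abs(np.diff(frame, axis=1))   # horizontal luminance steps
    edges_y = np.abs(np.diff(frame, axis=0))   # vertical luminance steps
    motion = np.abs(frame - prev_frame)        # frame-to-frame change
    return {
        "edge_energy": float(edges_x.sum() + edges_y.sum()),
        "motion_energy": float(motion.sum()),
        "mean_level": float(frame.mean()),
    }

prev = np.zeros((64, 64))
cur = np.zeros((64, 64))
cur[:, 32:] = 1.0                  # an edge (and a change) appears
print(primitives(cur, prev))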

>Therefore if the eye can be 'ordered' to ignore certain photon
>arrangements and be 'blind' to certain objects which it
>otherwise can optically 'see'

No, Ray, our eyes are not "ordered" by the higher brain to ignore
certain objects. I don't know where you get these ideas.

To repeat what I said in an earlier post, even if presented with
an extremely novel visual experience which we can't identify
from prior experience, we are not "blind" to it. We just don't
know what to make of it, but we can still describe its
attributes, such as color, form, luminosity, distance, motion,
etc. Don't confuse identification and understanding with basic
visual perception.

> - as the kitten experiments
>demonstrate - that 'order' must come from the brain.

Wrong again. The kitten experiments involve creating pathology
in the visual cortex, not the retina, by depriving the kittens
of normal visual experience, e.g., by suturing one of the eyes
shut while the kitten is very young. Neurons in the brain
connected to that eye never develop properly as a result and the
kitten will have extremely poor vision in that eye (mimicking
what can happen in human babies if deprived of normal vision in
an eye). The kittens also never develop stereopsis, or true
binocular depth perception (which requires two fully working
eyes),
just like humans with similar deprivation. One can even cause
more specific deprivations, such as exposing one eye only to
vertical stripes and the other to only horizontal ones. As you
would expect, each eye ends up being good at detecting the
orientation it experiences and poor at the orientation it
doesn't, but, again, this occurs in the visual cortex, NOT the
retina, and the brain isn't telling the retina what to "see" or
not see.
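
For what it's worth, the general idea of activity-dependent
development can be caricatured in a few lines of Python (a toy
Hebbian sketch, not a model of the actual experiments; every
number in it is arbitrary): a unit "reared" only on vertical
gratings ends up responding about twice as strongly to vertical
as to horizontal:

import numpy as np

rng = np.random.default_rng(0)

def grating(orientation, size=8):
    """Binary grating: vertical ('V') or horizontal ('H') stripes."""
    g = np.zeros((size, size))
    if orientation == "V":
        g[:, ::2] = 1.0
    else:
        g[::2, :] = 1.0
    return g.ravel()

# One 'cortical' unit with small random initial synaptic weights.
w = rng.random(64) * 0.1

# 'Rear' it on vertical gratings only: a crude Hebbian rule
# (strengthen co-active synapses) with weight normalization.
for _ in range(200):
    x = grating("V")
    y = w @ x                    # the unit's response
    w += 0.01 * y * x            # Hebbian strengthening
    w /= np.linalg.norm(w)       # keep the weights bounded

print("response to vertical:  ", round(float(w @ grating("V")), 2))
print("response to horizontal:", round(float(w @ grating("H")), 2))
# The reared orientation ends up with roughly twice the response
# of the deprived one in this toy version.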

>The Profs Ian Stewart & Jack Cohen ('Collapse of Chaos' 1994) in
>their chapter 'Eyes Are Not Cameras', show that the purely
>optical interpretation of vision is _not_ sufficient to cover
>even well known facts - and say "Indeed, it is so complicated
>that we currently have no very good idea of just how it works,
>and that's one reason why robot vision remains in a very
>rudimentary state." p. 155

Objects have more than just visual properties, which is why
babies grab, manipulate, throw, bang, mouth, taste, just about
anything they see and can get their grubby hands on. So yes,
learning to "see" things is more than just a purely optical
interpretation, and this is one of the ways robot vision has
badly lagged behind human vision. (Another way it lags is
in having only a minute fraction of the data processing power of
the brain.) We live in the complex interactive real world;
computers don't. Optical images only have meaning or utility if
we know what we can do with them or what they can do to us,
which we learn through years of experience.

Thus a blind person whose sight is restored later in life isn't
exactly "blind", but they still have trouble making use of the
vision they have, because it is dissociated from everything
else that objects are to us through our other sensory and motor
experience. They are no longer sensorily blind, but they remain
largely functionally blind.

>BTW - We primitive humans have gotten far enough to successfully
>interfere with the sensory apparatus of various beings (plant
>and insect pests - so far as is publicly known). It doesn't
>take a genius to realize that even slightly more advanced folk
>could well be interfering with _our_ sensory mechanisms.

But what exactly does that mean? Are you saying they take total
control of our brains so that even in the here and now they can
appear in a very different form than they really are? Or is it
just a matter of planting screen memories, as we can do
ourselves to some extent with drugs and hypnosis? One is
messing with the purely sensory apparatus, the other with memory
and interpretation, which, again, are not the same things.


David Rudiak



