What is mind? What is intelligence? Though queried and pondered throughout
at least the recorded history of man, little real understanding of the true
nature of understanding has ever been achieved. Indeed, the history of man's
failed insights into what intelligence is like can be viewed as
both humorous and, somehow, sad. So let us first state
a major caveat to our line of enquiry, by way of example: For Descartes,
mind was seen as hydraulics (fluid in the supposed nerve tubes). For the
ancient Greeks, marionettes controlled by strings provided the model of
mind (indeed, neuron is the Greek word for "string"). During
the Industrial Revolution, mind was assuredly based upon complex systems
of pulleys and gears. In the 1930s, mind was envisioned to be like a huge
telephone switchboard. In the 1940s, this model was updated to be a function
of Boolean logic. With the 1960s came the model of digital computers; all
one had to do was study the programs. And the 1970s brought us the mind
as hologram. But now, thanks to our greater understanding, and improved
technology, we've finally figured it out and got it right. Ahem.
One lesson to be learned from observing these blindered views of mind and
intelligence is obvious: an analogy based chauvinistically on current technology
is limited at best, and, more likely, is entirely incorrect. However, perhaps
there is a subtler lesson to be gleaned from these examples: If, instead
of trying to find just what it is that mind is like, we instead set
out to understand just exactly what mind is, we might chart a truer
course in our search. Taking this tack, let us first make an unmitigated
assumption that mind is embodied in brain. In this particular quest, we
will not once address either the likelihood or the import of non-physical
phenomena; the religion of one's choice is more (or less) than adequate
(depending upon one's personal inclinations) to address, and even answer,
those aspects of being. While it may be admitted that this leaves us open
to inadequate explanations resulting from possibly very real, very physical
forces that we have simply not yet discovered as a race, well, in such an
event we will lay the blame entirely on the physicists. And, meanwhile,
we can commence our intellectual journey. And if I might offer an early
signpost, or perhaps an inducement from the popular travel brochure: I think
we may finally begin to look forward to a time when our knowledge of the
world - its constituents and its inhabitants - will at last extend to ourselves.
To gain our first insights into the nature of intelligence, let us start
by looking at how intelligence began.
Over a time span of gods - or geology - intelligence (like everything else
you see around you) has emerged as a result of the simplest rule
of this, or perhaps any, universe: survival of the fittest. A rule
which applies equally well, and equally impartially, to basic chemicals
as it does to complex organisms. A rule which is essentially a tautology:
that which remains, persists. Those chemical compounds which are most easily
and profusely created, and which are the most difficult to break down, always
will be found in the greatest abundance. Those organisms which reproduce
and survive most effectively will, quite naturally, and unavoidably, be
present in the greatest numbers. And, conversely, if you see it around you
today, then it has been prolific, durable, or both.
Intelligence, then, must have been of adaptive value (so far, at least),
or it would not be so abundant today. In fact, given the complexity and
variability of our world ecology, it is easy to understand how inadequate
and limiting any single set of hardwired instincts would be to an organism's
survivability. No repertoire of automatic responses could possibly confer
more than a slight, and short-lived, incremental adaptive value on an organism.
Indeed, this is the root of the failure of traditional Artificial Intelligence
(AI), based as it is on repertoires of preordained, hardwired facts and
rules. The infamous "brittleness" of these traditional AI systems,
when faced with any situation not explicitly programmed for in advance,
is directly due to their fundamental formulation. Natural intelligence,
by contrast, originated as a direct result of variability in the environment.
The ability to learn and to adapt one's behaviors to a changing environment,
at however primitive a level, is the basis and origin of natural intelligence.
For all of its strengths as a force of optimization, Natural Selection
is constrained, more often than not, by its own history. Natural Selection
is not a conscious force - an engineer who has the luxury of scrapping all
previous designs and beginning from scratch. As each organism, each natural
design, climbed to the top of its particular hill of fitness, it lost the
ability to adopt radically different designs without first descending that
same hill, thus surrendering the advantages it had fought so hard for, and
opening itself to attack by predators in the surrounding fitness landscape.
Nature's Tabula Rasa was writ upon long ago, by the fundamental physical
forces, and the play of events that led to the formation of our world. And
over time, ever more writing - the successful biological building blocks
and their various methods of organization - has constrained Nature's options
even further. Early on, Nature "discovered" the cell: a conveniently
packaged, durable, and replicable little chemical factory. And special types
of cells, that we now call "neurons", were produced which had
the remarkable capability of both storing and transmitting information about
their immediate environment by simple electrochemical means. Particular
assemblies of these neuronal cells turned out to be able to work together,
in a collaborative, distributed fashion, to store and transmit larger quantities
of information about their environment. Perforce, working with the
good thing it had found, Nature produced more and more elaborate collections
of these special cells, and their attendant sensory mechanisms and musculatures,
and by their proliferation determined which assemblages "worked best".
The ones that have worked the best so far have included (varying amounts
of) the ability to learn about their environment, and to retain that learning,
and to be able to utilize that learning to better adapt to and survive in
that environment; that is, they have been intelligent, to some degree.
Seen in this way, intelligence is rooted in variability, learning, and survival;
its raison d'être is adaptability. Its form is cellular assemblies
- neural systems. And the consistency of this form throughout our
world's ecology is understandable due to nature's inherent dependency upon
its history. Importantly, I think, it is also clear that intelligence is
better viewed as a spectrum, from the simplest organisms to the most complex,
than as some unique facility manifesting only in human beings.
If we accept these arguments as to the origins and form of natural intelligence,
then perhaps we can use them to guide our investigation into the nature
of mind. Surely the nature of intelligence, in our little world, is at least
partially a result of its physical embodiment in collections of cooperating
cells. Perhaps there are information-theoretic ways, mathematical ways,
to describe the functioning of intelligence. But certainly there are ways,
right now, to study the components of brain - neurons, their interconnections,
and the other physical elements of the brain - that can yield direct insights
into the functioning of the material systems in which we would claim intelligence
has taken root. At long last, we can begin to investigate and understand
the nature of brain, and use these understandings to yield our much sought
after insights into the nature of mind.
The manner in which individual neuronal cells behave, and the mechanisms
by which they learn, strongly affect the behavior of the brain as a whole.
Early neurophysiological research focused heavily on the function of these
cells. And the knowledge so acquired, about cellular function in neural
systems, is vital to our attempt to develop an understanding of brain function.
It might also be mentioned, in passing, however, that unbridled reductionist
tendencies, coupled with the technologically challenging nature of doing
otherwise, have resulted in the expenditure of tremendous energies on the
investigation of individual cellular function without a commensurate effort
to investigate the role of the network in which those cells are embedded.
Intelligence, after all, does not manifest in a single cell... only in large
assemblages of cells. The architecture, or topology, of the connections
between those cells must also drive the behavior of the brain. Only very
recently have efforts at analyzing, modeling, and understanding the nature
of these neural networks been given a significant priority. And the
early results of these investigations are exciting, indeed!
Though I am deliberately following a very physically-based line of reasoning
throughout most of this discussion, I would like to digress for a moment
and talk about a few, important psychological insights into the nature of
mind. The relationship between the physical model of mind I am espousing,
and the psychological theories and observations I will refer to, will hopefully
be made clear as we proceed.
Jean Piaget and Jerome Bruner have both demonstrated, cleverly and effectively,
the multiplicity of human thought. That is, our thought processes
may be shown to be composed of, at least, three distinct modes of reasoning.
Though there is some argument as to the precise ages of transition, Piaget
demonstrated that children pass through first a kinesthetic (or body-learning)
stage of development, followed by an iconic (or visual) stage, finally
graduating to a symbolic (or logical) stage. Piaget demonstrated
that, within each stage, the modes of later stages were largely inaccessible
to the child, and those of earlier stages were largely ignored. Bruner demonstrated that while these
three modes of thought were indeed present and dominant during these three
phases of development, all three modes of thought were present throughout
the child's growth, and any of the modes is accessible if the dominant mode
can merely be distracted for a time. Alan Kay has said, "Doing
with images makes symbols." That is, physical interaction
with visual feedback will produce logical understanding. To
induce a deep understanding, it is important to engage all three of these
fundamental modes of thought. These tenets have been put into practice in
educational systems with striking success - an interesting set of stories
in themselves, though time constraints prohibit their elucidation here.
The fundamental truth about intelligence that I wish to expose with these
citations is that mind, by its very nature, is multifaceted.
One of the most recent, and most insightful, psychological theories of mind,
put forth by Marvin Minsky in his Society of Mind, would splinter the mind
still further. Minsky's view of intelligence is that it is based on a large
collection of simple agents, each of which represents some basic
component of human desires and behaviors. So there might be an agent that
recognizes hunger, another that recognizes pain, and another that effects
movement in a particular direction. Cooperation and competition between
these agents produces the complex sequences of behavior that are observed
in intelligent organisms. Though Minsky notes in an aside that his agents
may indeed be implemented by assemblages of neural cells, he generally steers
clear of discussing how his model of mind might be implemented in hardware,
preferring to call his model a psychological theory. Minsky argues eloquently
for the predictability of a number of human psychological phenomena based
on his theory. And a computer simulation based on Society of Mind, by Michael
Travers, has successfully emulated the foraging, pheromone marking, and
trail following behaviors of entire ant colonies, after specifying only
the agents and connections that make up the individual ants. Here again,
the main point I wish to make at this time is that mind is best seen and
understood as a collection of cooperating and competing processes.
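Though Minsky gives no such formulation himself, the flavor of agent competition can be suggested with a tiny sketch; the agent names and the winner-take-all arbitration rule below are purely my own illustrative assumptions, not Minsky's specification:

```python
# A minimal, hypothetical sketch of competing agents in the spirit of
# Society of Mind. The agent names and the winner-take-all arbitration
# rule are illustrative assumptions, not Minsky's formulation.

class Agent:
    def __init__(self, name, urgency_fn):
        self.name = name
        self.urgency_fn = urgency_fn  # maps world state -> urgency in [0, 1]

    def urgency(self, state):
        return self.urgency_fn(state)

def arbitrate(agents, state):
    """Competition: the most urgent agent wins control of behavior."""
    return max(agents, key=lambda a: a.urgency(state)).name

agents = [
    Agent("seek-food",  lambda s: s["hunger"]),
    Agent("avoid-pain", lambda s: s["pain"]),
    Agent("wander",     lambda s: 0.1),  # weak default drive
]

print(arbitrate(agents, {"hunger": 0.2, "pain": 0.9}))  # avoid-pain
print(arbitrate(agents, {"hunger": 0.6, "pain": 0.0}))  # seek-food
```

Even this toy shows the essential point: coherent, context-appropriate behavior emerges from the competition itself, with no central executive deciding what to do.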
Bridging the gap between psychological and physiological theories, Paul
Churchland's philosophy of science approach offers a "spreading activation"
model of the nature and function of mind-in-brain. In Churchland's view,
activity in the brain is transmitted, via synaptic connections, to associated
areas in the brain, causing neuronal activity in those associated areas,
which then activates further areas of association, and so on. These activations
may originate from either the sense mechanisms linking us to the outside
world, or from introspections regarding a purely internal world model. Indeed,
the seeming enigma of self-awareness follows quite naturally from simple
awareness, when an organism's internal world model becomes sophisticated
enough to include the source of the point of view - when the modeler is
included in the model. Thus Churchland sees the principal function of mind
as this "spreading activation" throughout the brain, and, again,
intelligence is seen to be the result of the interaction between a large
collection of cooperating and competing processes.
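To make the notion concrete, a toy rendition of spreading activation might look as follows; the associative network, its weights, and the decay factor are invented for illustration only:

```python
# A toy spreading-activation sketch. The associative links, their weights,
# and the decay factor are invented for illustration only.

def spread(graph, activation, decay=0.5, steps=3):
    """Propagate activation along weighted associative links for a few steps."""
    for _ in range(steps):
        new = dict(activation)
        for src, links in graph.items():
            for dst, weight in links.items():
                new[dst] = new.get(dst, 0.0) + decay * weight * activation.get(src, 0.0)
        activation = new
    return activation

graph = {
    "smell-of-bread": {"bakery": 0.9},
    "bakery": {"bread": 0.8, "warmth": 0.4},
    "bread": {"hunger": 0.6},
}

# A single sense impression activates a cascade of associated areas.
result = spread(graph, {"smell-of-bread": 1.0})
# activation is strongest near the source and attenuates with associative distance
```

Whether the initial activation comes from the senses or from a purely internal source makes no difference to the mechanism, which is just the point of Churchland's account.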
To return to purely physiological arguments, I would first like to simply
remind everyone of the widely acknowledged partitioning of mind between
the left and right hemispheres of the brain. Most everyone is aware of the
association of language and analytical skills with the left hemisphere,
and spatial reckoning (and some say creativity) with the right hemisphere.
But, in fact, the brain clearly has been shown to operate on much finer
subdivisions, including the finer elements of language, and detailed maps
of the surface of the body. Groups of neurons are dedicated separately to
grammar, to language content, to each of the fingers, the face, the tongue,
and so on. The action of the brain, then, is necessarily the result of cooperation
and competition between these various neuronal groups. Little wonder that
mind is best seen as multifaceted cooperation and competition.
From the realm of modern day neurophysiology, I would like to point to the
work of Donald Hebb, Ralph Linsker, and John Pearson. Donald Hebb first
proposed a learning paradigm for neural structures based on his investigations
of visual systems in primates in the 1940s. At the time, he had no real
physiological data to support his proposition, though recent research has
revealed some biochemical mechanisms that seem to implement just
what has come to be called Hebbian learning. Basically, Hebbian learning
provides a method for forming associations, by strengthening the connections
between neurons that tend to fire together, and weakening the connections
between neurons that don't. It turns out that application of this simple
learning rule, for appropriate network architectures, will produce just
the kind of cooperation and competition between neurons that our "spreading
activation" model needs to learn and function properly.
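The rule itself is simple enough to state in a few lines of code. The learning rate, and the centering of activities about 0.5 (so that anti-correlated pairs weaken), are illustrative choices of mine, not part of Hebb's original proposal:

```python
import numpy as np

# A bare-bones sketch of Hebbian learning. The learning rate and the
# centering of activities about 0.5 are illustrative choices only.

def hebbian_update(W, pre, post, lr=0.1):
    """Strengthen connections between co-active neurons, weaken the rest."""
    # The outer product is positive where pre- and post-synaptic activity
    # agree (both high or both low) and negative where they disagree.
    return W + lr * np.outer(post - 0.5, pre - 0.5)

rng = np.random.default_rng(0)
W = rng.normal(scale=0.01, size=(3, 3))   # tiny random initial weights
pre  = np.array([1.0, 1.0, 0.0])          # presynaptic neurons 0 and 1 fire
post = np.array([1.0, 0.0, 0.0])          # postsynaptic neuron 0 fires with them

for _ in range(100):
    W = hebbian_update(W, pre, post)

# W[0, 0] and W[0, 1] have grown large: neurons that fire together wire
# together, while connections to the silent presynaptic neuron 2 weaken.
```

Repeated co-presentation is all it takes: the association is stored in the weights themselves, with no separate memory mechanism required.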
Ralph Linsker has provided both a theoretical framework and a series of
computer simulations that explain and demonstrate functionality observed
in early visual cortex of humans. His "Infomax" principle states
that neuronal function, including learning, can be analyzed and predicted
in terms of a maximization of information transfer between layers of neurons.
From a formal statement of this principle, it is possible to show that Hebbian
learning will carry out this information maximization. Computer simulations
based on this principle have been able to reproduce known cellular response
properties and global distribution patterns observed in real organisms.
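As one concrete (if greatly simplified) bridge between Hebbian learning and information maximization, consider Oja's well-known variant of Hebb's rule, under which a single linear neuron converges to the input direction of maximum variance; this is a stand-in for illustration, not Linsker's actual Infomax derivation:

```python
import numpy as np

# Oja's variant of Hebb's rule, a simplified stand-in (NOT Linsker's full
# Infomax derivation): a single linear neuron, trained by Hebbian updates
# with built-in normalization, converges to the input direction of maximum
# variance - roughly, the direction carrying the most information about
# the input, under simple Gaussian assumptions.

rng = np.random.default_rng(1)

# Inputs whose variance is greatest along the diagonal (1, 1)/sqrt(2)
x = rng.normal(size=(5000, 2)) * np.array([2.0, 0.2])
x = x @ np.array([[1.0, 1.0], [-1.0, 1.0]]) / np.sqrt(2)

w = rng.normal(size=2)
for xi in x:
    y = w @ xi                      # the neuron's output
    w += 0.01 * y * (xi - y * w)    # Oja's rule: Hebbian term minus decay

w /= np.linalg.norm(w)
# w now points (up to sign) along (1, 1)/sqrt(2), the max-variance direction
```

The decay term does the normalizing work that keeps purely Hebbian growth from running away, and what remains is a weight vector tuned to the most informative feature of the input.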
John Pearson carried out a complex simulation of the somatosensory cortex
(that which responds to the sense of touch). Utilizing a minor modification
of Hebbian learning, he was able to demonstrate not only "self-organization"
of neurons - the automatic associational mechanism needed for specialization
of neuronal groups - but also competition for representation of sensory
input in those neuronal groups. That is, first, the connections between
neurons were shown to automatically recalibrate themselves so as to cause
the neurons to form groups that would respond to, say, the front of the
first finger in one group, and the front of the second finger in another
group. Then, excess stimulus to one of the simulated fingers would automatically
cause more neurons to be recruited from other groups, to participate in
the representation of the overstimulated group. This remarkable work was
able to demonstrate two important phenomena known to take place in living
organisms, based on simple neural models and Hebbian learning.
Summarizing, then, I have made some suggestions about why evolution gave
rise to intelligence, and how it came to be embodied in the particular neural
mechanisms we call the brain. I have also suggested that the multifaceted
nature of mind, as discerned through the discipline of psychology, is recognizable
as a direct consequence of the manner in which it is implemented in the
brain. And, finally, I have tried to suggest that theoretical and computer
models of a few, small portions of the brain have lent some credence to
our neural network model of mind-in-brain.
On a final note, I would like to briefly mention a computer simulation of
my own design and implementation: a system named PolyWorld,
which uses the principles of Natural Selection to evolve simulated organisms
in a computational ecology. These organisms use a sense of "vision"
to perceive their simulated environment, and have brains composed of simple
models of neurons, employing Hebbian learning. PolyWorld uses survival
of the fittest to evolve neural organisms capable of surviving in a
dynamic environment. A number of different "species" have already
evolved, with radically different individual and group behavioral dynamics.
While PolyWorld can be used to carry out a wide variety of interesting Evolution
Science and Ethological studies, its primary goal is the evolution of intelligent
artificial life. Taking the earlier stated view that intelligence is a near-continuum
from the simplest organisms to the most complex, and, believing that intelligence
is derived from evolution, based upon learning in a variable environment,
and built from neuronal cells, PolyWorld is an attempt to take the appropriate
first steps towards modeling, understanding, and reproducing the phenomenon
of intelligence.
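While PolyWorld itself is far more elaborate, the skeleton of such an evolutionary loop can be suggested in a few lines; the genome encoding, mutation rate, and toy fitness function below are illustrative assumptions of this sketch, not PolyWorld's actual mechanisms:

```python
import random

# A generic sketch of evolving "neural" controllers by survival of the
# fittest. This is NOT PolyWorld's actual code: the genome encoding,
# mutation rate, and toy fitness function are illustrative assumptions.

random.seed(0)

def random_genome(n=8):
    """A genome of synaptic weights, drawn at random."""
    return [random.uniform(-1.0, 1.0) for _ in range(n)]

def fitness(genome):
    # Toy stand-in for survival in an environment: organisms whose weights
    # sum near a target "behavior" of 2.0 survive best (maximum fitness 0).
    return -abs(sum(genome) - 2.0)

def evolve(generations=100, pop_size=30):
    population = [random_genome() for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        survivors = population[: pop_size // 2]      # the fittest persist
        children = [[w + random.gauss(0.0, 0.1) for w in parent]
                    for parent in survivors]          # mutated offspring
        population = survivors + children
    return max(population, key=fitness)

best = evolve()
# fitness(best) climbs toward 0 as the weights evolve toward the target behavior
```

In PolyWorld, of course, there is no explicit fitness function at all: fitness is implicit in which organisms manage to eat, mate, and avoid being eaten, which is precisely what makes it an ecology rather than an optimization.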