[Note: This has not been entirely cleaned up for conversion to HTML, so
some superscripts and tables and possibly other constructs may be misleading
or difficult to read. I have tried to catch most of the problems that would
lead to true misunderstanding, however. - larryy 4/16/96]
Notes from the Emergent Computation '89 conference
Several members of the Vivarium program - Jay Fenton, Hayden Nash, and Larry
Yaeger - attended the "Emergent Computation" (EC) Conference at
Los Alamos National Laboratory on 5/22-26/89. Jay and Hayden have separately
reported their reactions to the conference, and these are mine.
The theme of the conference was, predictably, emergent computations; i.e.,
highly complex processes arising from the cooperation of many simple processes.
This type of phenomenon - the emergence of complexity from repeated simplicity
- is at the heart of many of the front-most fields of scientific endeavor
today. Many people are familiar with the computer-graphic paisley of the
Mandelbrot set, whose complex form arises from the repeated application
of an extremely simple rule, such as z(t+1) = z(t)^2+c. Despite the simplicity
of this rule, the boundary between those points in the complex plane that
do and that do not go to infinity when repeatedly subjected
to this rule is essentially infinitely complex. This emergent computation
principle is also at the heart of the current field of Neural Networks (or
Connectionism), which finds its inspiration in the human brain's ability
to perform complex computation (reasoning) while it appears to be composed
of many (very many) fairly simple computational elements (neurons).
Cellular Automata (CA's) provide another good example of complex behavior
deriving from simple rules. Then there are classifier systems, biological
immune systems, neurophysiological simulations, autocatalytic networks,
adaptive game theory, chaos theory, nonlinear systems, artificial life,
and so on. (See? A simple concept like Emergent Computation gives rise to
quite a complex list of topics!)
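For the curious, the escape-time computation behind those computer-graphic paisley pictures is only a few lines of code. This little sketch is mine, not anything presented at the conference:

```python
# Escape-time sketch of the Mandelbrot iteration z(t+1) = z(t)^2 + c.
# A point c is (approximately) in the set if |z| stays bounded under
# repeated application of this one simple rule; the infinitely complex
# boundary emerges from nothing more than this.

def escape_time(c, max_iter=100, bound=2.0):
    """Return the iteration at which |z| exceeds `bound`, or max_iter."""
    z = 0j
    for t in range(max_iter):
        if abs(z) > bound:
            return t
        z = z * z + c
    return max_iter

# c = 0 and c = -1 never escape; c = 1 escapes quickly (z: 0, 1, 2, 5, ...).
```

Coloring each point of the complex plane by its escape time produces the familiar pictures.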
While the fields of study in the previous list are not traditionally thought
of as directly related to one another (for the most part), they do in fact
share certain fundamental underpinnings, and the purpose of this conference
was partly to explore the possibilities inherent in a cross-fertilization
between these fields of study. In addition, one of the more annoying aspects
of the complexity that arises in each of these fields is its almost complete
unpredictability. And one of the underlying hopes of this conference, if
not its actual theme, was to shed some light on the relation between
the emergent complexity and the simpler, underlying components of any such
system. That is, just as Mandelbrot's fractal dimensions gave us a simple
way to describe one form of complexity, so we would now like to develop
a technique for describing and even predicting the complexity that emerges
from the interaction of many, simpler components, in a general form.
For Vivarium, these principles certainly have significance, given our focus
on an ecology in the computer. The essentially unpredictable nature
of an emergent, complex ecology is what we wish to arrive at by a simulation
based on simple, understandable behavior models. Yet we would, indeed, like
to go that one step farther and be able to predict at least some aspects
of the resultant ecology based upon our understanding of the individuals
which will inhabit it. And a better model for the relation between the group
dynamic and the individual behavior (or an individual's behavior and the
underlying mental agents) would better allow us to see the parallels
between our intermediate ecological modeling and the ultimate goal of agents
in an information-ecology.
I personally found the conference very stimulating and rewarding, with Danny
Hillis's and Ralph Linsker's work, especially, giving me fresh enthusiasm
for an idea that I had been toying with for some time - a Braitenberg/Vehicles-inspired
"Polyworld" of polygonal creatures in a polygonal environment
capable of seeing each other and their environment through visual systems
based on traditional polygon rendering techniques. These visual systems
provide the "grounding", and layered, unsupervised networks provide
the nervous systems for these creatures. If their architectures and body
forms are put under control of a morphogenetic mapping from a simpler genetic
description, then hand-crafting creature designs may be greatly simplified,
and perhaps more significantly, they may be allowed to evolve via natural
selection to more and more competent creatures within their Vivarium, the
computer. This may, in fact, be my next major project here at Apple.
I will try to give a brief description of the more significant presentations
at the conference (well, the ones I liked best, anyway), and include at
least a sentence or two about a significant percentage of the talks (a percentage
of the significant talks?). The opinions expressed regarding merit or lack
thereof are, of course, my own. For further details, there will be a proceedings
from Santa Fe Press, and/or there are complete sets of Jay's, Hayden's,
and my notes from the conference, plus photocopies of some of the presenters'
view-foils available from Dave Jones.
Danny Hillis (Thinking Machines, Inc.) - "Intelligence as an Emergent Behavior"
This was one of the best talks, and probably the most impressive demonstration
of an evolving system (in a computer) that I have seen. He actually chose
to deviate rather widely from his title, and discuss parasitic co-evolution
in an evolutionary "ramp-world", and in the solution of a classic
sorting-algorithm problem. (His paper in the Daedalus AI issue covers the
title's topic quite well if anyone is interested.)
His ramp-world consists of diploid individuals whose genotype specifies
a series of short steps, either up or down, which give rise to what
he likes to think of as ramp phenotypes. He imposes a fitness function
that favors simple ramps with a steady sequence of short upward steps over
their entire length. The form of his fitness function, however, gives rise
(deliberately) to a great many local minima - ramps that are continuous
for say 1/3 their total length, then drop back to the baseline, and repeat
this pattern twice more are almost as good as a single continuous
ramp. This means that it would be quite difficult for his system to ever
evolve to a population of truly optimum individuals, because slight
changes in the vicinity of one of these local minima will always produce
a worse individual, even though a better individual exists
a greater distance away. At replication time, his diploid individuals are
split, employing crossover, into two haploid individuals, mutated, and mated
with other haploid individuals (who are nearby according to a Gaussian distribution)
to produce the next generation of diploid individuals. Seeding this system
with random initial genotypes and then allowing it to evolve produces a
fairly low overall fitness for the total population, getting stuck as it
does in local minima of the overall energy surface. However, he then introduces
a parasitical species, that benefits from finding negative ramp-steps in
the host species, and debilitates the host by its presence. This parasite
will grow in population whenever any large upsurge in the host population
occurs (corresponding to a local minimum in the energy surface). As a result
of the increased parasite population, a sort of plague sweeps through the
host, reducing the host's population, and then the parasite's own population
reduces accordingly. By thus preventing run-away breeding of an only slightly
superior ramp genotype, the host population, in the presence of this parasitical
population, evolves a much higher state of overall ramp-ness, relative to
a host population evolving without parasites. Animated color maps of population
density were used quite effectively to demonstrate the behavior of both
the host and the parasite populations.
Danny then discussed his application of parasitic co-evolution to the solution
of the minimal Batcher's sorting network (see Knuth Volume III). In 1962,
Bose & Nelson demonstrated a functioning Batcher network with 65 nodes;
in 1964, Batcher and Knuth produced a functioning network with 63 nodes;
this remained the best known solution until 1969 when Shapiro produced a
62 node network; later that year, Green produced a 60 node network. Danny's
system, without parasites, found a network with 65 nodes; with
parasites it found a network of 61 nodes.
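Checking whether a candidate network actually sorts is easy thanks to the zero-one principle from Knuth Volume III: a comparator network sorts all inputs if and only if it sorts every 0/1 input. Here is a small illustrative sketch of my own, using a classic 5-comparator network for 4 inputs rather than the 16-input monsters Danny's system tackled:

```python
from itertools import product

def apply_network(network, values):
    """Run a comparator network: each (i, j) pair swaps out-of-order values."""
    v = list(values)
    for i, j in network:
        if v[i] > v[j]:
            v[i], v[j] = v[j], v[i]
    return v

def is_sorting_network(network, n):
    """Zero-one principle: the network sorts iff it sorts all 2^n 0/1 inputs."""
    return all(apply_network(network, bits) == sorted(bits)
               for bits in product([0, 1], repeat=n))

# A classic 5-comparator sorting network for 4 inputs:
net4 = [(0, 1), (2, 3), (0, 2), (1, 3), (1, 2)]
```

An evolutionary system like Danny's can use exactly this kind of exhaustive 0/1 test (or the parasites' sampled version of it) as its fitness measure.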
Danny finished with a story about apes & songs evolving into man &
language, again stressing the significance of co-evolution, and finally
concluded that, based on the growth in our knowledge of the complexities
inherent in real world evolution, natural selection probably is, after all,
adequate to have given rise to intelligence.
Paul Churchland (UC San Diego) - "Perceptual Recognition, Explanatory
Understanding, and Conceptual Change within Neural Networks"
Paul's talk was perhaps a little bit softer, technically speaking, than
some of the other presentations - which is appropriate since he is a practitioner
of the Philosophy of Science - but I found it wonderfully coherent, consistent
with the facts as best they are known, and inspired in its scope. He attempted
to assess the nature and function of human reasoning by starting with the
nature of explanation itself. He began by discussing the "Deductive-Nomological"
model of explanation, whose general form consists of stating that given
1) a Law of Nature and 2) a description of some Initial Conditions, one
may always arrive at 3) a description of the Event to be Explained. Couple
this with "Intertheoretic Reductions" (where one theory contains
and supersedes another), and, he noted, cognition as a "Grand Dance
of Sentences" followed quite naturally from this deductive reasoning
model. He then pointed out a great many difficulties in this model, including:
1) Speed of Access (How does one retrieve the relevant assembly of premises
from the millions of beliefs one holds?)
2) Nomic-Ignorance (People are generally very bad at articulating the "laws"
on which their explanatory understanding rests)
3) Deductive Incompetence
1) Explanatory Asymmetries (Why is the flag pole 50 ft. high? Because its
shadow is 50 ft. long and the sun's elevation is 45°.)
2) Irrelevant Explanations (Why did this salt dissolve? Because I hexed
it, and all hexed salt dissolves in water.)
3) Distinct Explanation Types (Causal, Syndromic, Functional, Reductive, Moral/Legal, Motivational)
And yes, this was pretty much an attack on the traditional, symbolic AI
and Cognitive Science functionalist approach to cognition. He then spent
some time talking about a couple of the classic Neural Net (NN) application
examples - Terry Sejnowski's NETtalk (ASCII to phoneme translation) and
pattern classification/discrimination of underwater rocks from mines (in
which a NN was able to perform this discrimination as well as or slightly
better than humans). He correctly pointed out one of the characteristic
behaviors of NN's, which is to perform an optimal partitioning of some descriptive
vector space. Once so partitioned, NN's also prove to be good at vector-completion;
that is, given some subset of a complete feature vector, or even a damaged
version of some feature vector, a NN can be made to settle quickly to a
completed or corrected version of that vector. Churchland then argues that
such vector-completion is the underlying principle of cognition, in what
he refers to as a "Prototype-Activation" model of cognition. Given
this model, he then attempts to explore some of its ramifications, including
a point-by-point solution of all the problems mentioned in association with
the Deductive-Nomological model above:
1) Speed of Access - expect access time of between 1/20 sec and 1 sec
2) Nomic-Ignorance - there are no "laws"; we access prototypes
3) Deductive Incompetence - there is no deduction; activation of some prototypes
stimulates activation of other prototypes
1) Explanatory Asymmetries - etiological prototypes are temporally asymmetric
2) Irrelevant Explanations - no hexed salt prototype ever gets built
3) Distinct Explanation Types - simply correspond to distinct kinds of prototypes
(property-cluster = syndromic, etiological = causal, practical = functional,
superordinate = reductive/axiomatic, social interaction = moral/legal, motivational)
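As an illustration of the vector-completion behavior described above (my own toy example, not Churchland's): a small Hopfield-style network, trained with a Hebbian rule, will settle from a damaged copy of a stored +/-1 pattern back to the original:

```python
import numpy as np

# Toy "vector completion": store a pattern via Hebbian outer-product
# weights, then let the network settle from a corrupted version of it.

def train(patterns):
    """Hebbian outer-product weights, zero diagonal."""
    n = patterns.shape[1]
    w = sum(np.outer(p, p) for p in patterns).astype(float)
    np.fill_diagonal(w, 0.0)
    return w / n

def settle(w, v, steps=20):
    """Synchronously update +/-1 units until the state stops changing."""
    for _ in range(steps):
        nxt = np.where(w @ v >= 0, 1, -1)
        if np.array_equal(nxt, v):
            break
        v = nxt
    return v

pattern = np.array([1, -1, 1, 1, -1, -1, 1, -1])
w = train(pattern[None, :])
damaged = pattern.copy()
damaged[:2] *= -1            # corrupt two of the eight components
restored = settle(w, damaged)
```

The network relaxes to the nearest stored prototype, which is exactly the settling behavior Churchland builds his Prototype-Activation model upon.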
Churchland then attempts to make some estimate of the capacity of such a
model of the brain, and from the fairly standard estimates of approximately
10^11 neurons and 10^3 connections/neuron, plus, say, 10 distinct possible
values (weights) for a given synapse (connection), he arrives at 10^10^14
(that's 10 to the 10 to the 14 - MacWrite II doesn't have double superscripts
- or 10^100000000000000) possible weight configurations for the human brain.
He also noted that the total number of elementary particles in the universe
is estimated at about 10^87. He even went on to attempt to determine how
many of these possible weight configurations yield "significantly different
conceptual frameworks". As a "lower bound" he estimated the
number of significantly different patterns that may be stored in a "Baby
Net" of about 10^3 neurons and 10^6 connections. Since such a net can
have about 10^10^6 weight configurations, and can certainly sustain many
more than 100 significantly different "conceptual frameworks",
the ratio of the total capacity of the human brain to this baby net, times
at least 100, produces an estimate of the number of significantly different
conceptual frameworks available in the human cortex of about 10^(10^14 - 10^6 + 2)
or 10^99,999,999,000,002. Ample food for thought!
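The arithmetic is easy to check with exact integers (my check, using the estimates quoted above):

```python
# Churchland's capacity estimates, verified with exact integer arithmetic.
synapses_brain = 10**11 * 10**3          # 10^14 connections in the brain
synapses_baby = 10**3 * 10**3            # 10^6 connections in the "Baby Net"

# With ~10 weight values per synapse, configurations number 10**synapses,
# so the ratio of brain to baby-net configuration counts has exponent
# (synapses_brain - synapses_baby); times at least 100 frameworks adds 2:
exponent = synapses_brain - synapses_baby + 2
# exponent == 99_999_999_000_002, matching the figure quoted in the talk.
```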
I have hardly done justice to the range and clarity of Churchland's talk,
working from inadequate notes and memory, but hopefully the gist of his
presentation was made clear. He very effectively used a number of diagrams,
graphs, and such to yield a pleasant, intuitive feel to his arguments, and
managed to communicate some fairly profound insights into the nature of
cognition (as I see it, of course).
Ralph Linsker (IBM Thomas J. Watson Research Center) - "How
Information Theory Can Guide the Design of a Perceptual Network"
Ralph started by stating that he wanted to know physically How and physically
Why the visual system comes to be as it is. He then enumerated a few approaches
that other people have taken to develop this understanding, and why he has
rejected them:
Marr et al.'s top-down approach (AI/Engineering) - which he claims has no
surprises... you get what you model, it may or may not work, and it may
or may not correspond to reality.
Emergent - choose rules and see what happens... but it's hard to pick the right rules.
Adaptation rules + a specified goal (fitness function)... but a fitness
function chosen at too high a level may not be related to the organism.
Other approach... see if there isn't a method to ask what algorithms can
extract salient features, how those algorithms behave, and then what cell
responses one sees.
Ralph then described some of his computer simulations and his excellent
theoretical formulations leading to the development of an information-theoretic
optimal learning algorithm in Neural Networks (NN's) - actually his "Infomax
Principle" (described well in the Computer issue devoted to NN's).
In a layered network with receptive fields, a Hebb-like rule, subject to
a conservation of total strength summed at a cell, will maximize the information
transferred by an individual cell; i.e., it will maximize the variance at
the next layer. This is really the best behavior that a cell may have given
a need for cooperative computation with the other cells in the system but
without any knowledge of their activity. Ralph was able to demonstrate the
emergence of center-surround cells, orientation selective cells in locally-continuous/fractured-band
structures, and ocular-dominance stripes with sharp boundaries in his simulations,
all of these being well known features of real visual systems. In fact,
all of these cell types emerged in the presence of completely random inputs
to the network; that is, structured input was not required to achieve the
self-organization of these optimal information-processing cell types. This
startling discovery helps to explain the presence of such cell types in
newborn infants, and suggests that such cell behaviors are not so much hard-wired
as they are learned in response to a hard-wired, optimal learning algorithm.
Furthermore, Ralph's Infomax Principle, and apparently the physical principles
employed as a result of natural selection, produce input-output mappings that:
· reduce redundancy (detect & encode input correlation)
· select which signal attributes are represented in the output layer
· introduce redundancy when this mitigates the effects of noise
· induce geometric structure
· represent information-rich regions of the input more fully in the output layer
Ralph's work represents one of the most important contributions ever made
to the understanding of the form and function of the brain.
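Ralph's simulations involved full layered networks with receptive fields, but the core idea - that a Hebb rule plus a conservation constraint on total strength drives a cell toward the direction of maximum output variance - can be sketched with a single linear unit and an Oja-style normalized Hebb update (my sketch, not Ralph's actual algorithm):

```python
import numpy as np

rng = np.random.default_rng(1)

# Correlated 2-D inputs: variance is largest along the (1, 1) direction.
cov = np.array([[1.0, 0.8],
                [0.8, 1.0]])
x = rng.multivariate_normal([0.0, 0.0], cov, size=5000)

w = rng.normal(size=2)
eta = 0.01
for xi in x:
    y = w @ xi                    # linear cell output
    w += eta * y * (xi - y * w)   # Hebb term plus normalizing decay

# w settles near a unit vector along +/-(1, 1)/sqrt(2), the direction
# of maximum output variance, i.e., maximum transmitted information
# for a single cell under Gaussian assumptions.
```

The weight vector learns the input correlation structure purely from random (unstructured) samples, which is the flavor of result behind the center-surround and orientation-selective cells emerging from noise in Ralph's networks.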
Chris Langton (Los Alamos) - "Computation at the Edge of Chaos:
Phase Transitions and Emergent Computation"
In an effort to better understand "life, intelligence, the economy,
fluid dynamics" and other spheres of study, Chris attempted to define
a formal method of parameterizing the computational capability of emergent
systems. In support of the need for studying these emergent systems he contrasted
two approaches to computation:
Global - predictable, and thus programmable, but having too many states to specify explicitly.
Local - unpredictable, and thus difficult to program (which issues he clearly
hoped to address during this talk), but with a manageable number of local
states, each explicitly addressable, and a wide range of global dynamics,
flexible and robust.
He noted that in order to take advantage of the benefits of the Local style
of computation, it is necessary to embed the capacity to transmit and store
information within the local dynamics. He likened the different behavioral
regimes of Cellular Automata (CA's) in particular to the different phase-states
of matter - solid, liquid, and gas corresponding to Wolfram's "fixed
point & periodic", "extended transients", and "chaotic"
classifications, respectively. The "solid" regime is characterized
by both local and global order, minimum entropy, and the capacity to store
but not transmit information. The "gas" regime is characterized
by a lack of either global or local order, maximum entropy, and a capacity
to transmit but not store information. The intermediate "liquid"
regime is characterized by local order but no global order, and varying
degrees of entropy and information storage and transmission capacities. Chris
speculated that we will find the most interesting information processing
systems, both real and artificial, near the boundary between "solid"
and "liquid", between the fixed-point/periodic and extended transient
domains. (Wolfram hypothesized that the extended-transient class of CA's
might be capable of Universal Computation.) Chris defined a parameter, lambda,
based on the set of possible states of his CA's, that fairly successfully
characterizes their behavior. He demonstrated in theory and simulation that Mutual Information
(where two processes are neither fully correlated nor fully uncorrelated)
between two locations in a CA's domain may, with suitable normalization,
be fairly well characterized by his single parameter, lambda, and
is always maximized around lambda = 0.2 to 0.3.
This seemed to me to be a very interesting first step along the path to
being able to predict, if not the actual form, at least the character of
an emergent computation.
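Lambda itself is easy to state concretely: for a CA rule table over k states, it is the fraction of table entries that map a neighborhood to a non-quiescent state. This small sketch (mine, with illustrative parameters) builds such a rule table and measures its lambda:

```python
import random

# Langton's lambda for a 1-D cellular automaton: the fraction of
# rule-table entries mapping a neighborhood to a non-quiescent
# (nonzero) state. lambda = 0 freezes everything ("solid"); lambda
# near 1 - 1/k is maximally disordered ("gas"); the interesting
# dynamics appear at intermediate values.

def random_rule(k, neighborhood, lam, seed=0):
    """Random rule table over k states; each entry is non-quiescent
    with probability lam."""
    rng = random.Random(seed)
    size = k ** neighborhood
    return [rng.randrange(1, k) if rng.random() < lam else 0
            for _ in range(size)]

def measured_lambda(table):
    return sum(1 for s in table if s != 0) / len(table)

table = random_rule(k=8, neighborhood=3, lam=0.3)
lam_actual = measured_lambda(table)
```

Sweeping lam from 0 toward 1 while iterating such rules is the basic experiment behind his mutual-information curves peaking around lambda = 0.2 to 0.3.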
Doyne Farmer (Los Alamos) - "A Comparison of Different Methods of Adaptive Computation"
I believe Doyne actually changed the title of his talk on the fly to "Generic
Connectionism: A Rosetta Stone for Adaptive Dynamics or The Berlitz
Guide to Connectionism". He did an interesting job of compiling the
like characteristics of a wide array of fields of study, all of which fall
under the banner of emergent computations in some fashion: neural nets,
classifier systems, idiotypic nets (immune system), autocatalytic nets,
adaptive game theory, adaptive polynomial approximation, and Boolean nets.
He built up a large table that provides the source field's terms for each
of the "Generic Connectionist Jargon" terms: node, signal, connection,
weight, interaction rule, learning algorithm, and meta-dynamics.
Jay Fenton discusses this talk further, so I will not delve into it any
more deeply, except to say that it was indeed quite interesting. Though
I must confess that I have some doubts as to its ultimate usefulness, it
was certainly the most willful effort at explicit cross-fertilization at the conference.
John Holland (U. of Mich.) - "Emergent Models"
John discussed methods he has been working on for using classifiers to do
prediction, using locality and distributed information. He noted that it
is somewhat rule-based, but without a priori-supplied syntax or semantics.
He believes that his bucket-brigade system of credit assignment works well,
and has been employed successfully in a system with up to 8000 rules. John
is always fascinating to listen to and to learn from, and I think that the
most interesting new ideas in this talk were related to the introduction
of a "virtual" mode to their classifier system. In this virtual
mode, they use the bucket brigade to update a "virtual strength",
just as the real strength is updated in the "real" mode. The system
is run in the "v"-tagged mode for some time; e.g., 50 times real
time (like the human nervous system). This v-mode effectively repeats all
the look-ahead many times, thus reinforcing the earliest, most virtually-successful
rules a lot. Then the most successful virtual rule can bid the highest and
thus cause the real action to occur, for which it will be rewarded.
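The bucket brigade itself is simple to sketch: each rule in a chain pays its bid to its predecessor, and the external reward enters at the end, so credit seeps backward to the early, stage-setting rules over repeated episodes. (This toy chain and its numbers are my own illustration, not Holland's system.)

```python
# Minimal bucket-brigade credit assignment on a fixed chain of rules.
# Each rule pays a fraction of its strength (its "bid") to the rule
# that fired just before it; the environment rewards only the last
# rule. Repetition passes the credit back along the chain.

def run_episodes(n_rules=4, episodes=200, bid_fraction=0.1, reward=10.0):
    strength = [1.0] * n_rules
    for _ in range(episodes):
        for i in range(n_rules):
            bid = bid_fraction * strength[i]
            strength[i] -= bid          # rule pays its bid...
            if i > 0:
                strength[i - 1] += bid  # ...to its predecessor in the chain
        strength[-1] += reward          # environment rewards the final rule
    return strength

strengths = run_episodes()
# After enough episodes every rule in the chain, even the first,
# has been strengthened well above its starting value.
```

Holland's virtual mode amounts to running many cheap rehearsals of exactly this propagation before committing to a real action.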
Stuart Kauffman (U. of Penn.) - "Emergent Computation in Disordered
Stuart tells us that Emergent Computation (EC) is a signature of life, an
adaptation of complex systems to their worlds, and wonders just how much
of the order in life is self-organized (emergent), as opposed to selected/mutated.
He goes so far as to claim that Darwin's view is inadequate; that it ignores
EC in dealing only with mutation and selection. Stuart has done some fascinating
work in the past relating to autocatalytic polymer systems, and their possible
relation to the origin of life. He now brings his mathematical skills to
networks of boolean nodes and manages to develop some interesting characterizations
of the networks' behaviors, their relative stability, and ability to support
frozen percolation (akin to Langton's/Wolfram's extended transient behavior).
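A minimal random Boolean network of the sort Stuart studies can be sketched as follows (my illustration, not his actual experiments). Since the dynamics are deterministic over a finite state space, every trajectory must fall onto a cycle, and cycle lengths are one simple characterization of a network's behavior:

```python
import random

# Random Boolean network: n nodes, each reading k random inputs
# through a random Boolean function. Iterate the deterministic
# dynamics until a state repeats, then report the attractor's length.

def random_network(n, k, seed=0):
    rng = random.Random(seed)
    inputs = [rng.sample(range(n), k) for _ in range(n)]
    tables = [[rng.randint(0, 1) for _ in range(2 ** k)] for _ in range(n)]
    return inputs, tables

def step(state, inputs, tables):
    return tuple(
        tables[i][sum(state[j] << b for b, j in enumerate(inputs[i]))]
        for i in range(len(state)))

def cycle_length(state, inputs, tables):
    """Iterate until a state repeats; return the attractor cycle length."""
    seen = {}
    t = 0
    while state not in seen:
        seen[state] = t
        state = step(state, inputs, tables)
        t += 1
    return t - seen[state]

inputs, tables = random_network(8, 2, seed=3)
length = cycle_length((0,) * 8, inputs, tables)
```

Kauffman's characterizations concern how cycle lengths, stability, and frozen components scale with n and k; k = 2 networks are the famously orderly case.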
Stevan Harnad (Princeton U.) - "Grounding Symbols in a Nonsymbolic
Stevan was an interesting crossover speaker, coming from the world of Cognitive
Science, speaking to their vernacular, history, and sensibilities (and thus
setting off some alarms in and taking some flak from the connectionists
and neurophysiologists in the room), but ultimately offering a conclusion
that cognition is based on and should be modeled through connectionism!
I believe that he may have actually provided the appropriate terms - Iconic
and Categorical Representations (to provide grounding) and Symbolic Representation
(derived from these grounded representations) - to permit Cog. Sci. initiates
to swallow connectionism. Symbolic Processing itself is the emergent property
in his view (I agree). His approach, consistent with his background, was
one of the few top-down approaches at the conference, in contrast to a vast
majority of bottom-up, simulate it and see what happens, approaches. He
stated that he believes that Hypothesis and Analysis is the answer, not
Measurement and Synthesis. (Probably there's more than enough room for both,
and each may inform the other.)
William Hamilton (Oxford) - "Evolution of Cooperation"
Some nice theoretical formalism and simulations to show both why sexual
reproduction is favored over asexual (evolutionarily speaking), and why
cooperation between likes and even between unlikes can be selected for (even
though the genes are selfish it need not always be every organism for itself).
James Bower (CalTech) - "Brain and Parallel Computer Maps"
Jim probably best expressed the neurophysiologists' viewpoint at this conference.
He stated flatly that the brain is a parallel computer, and you have to
examine as many of the parallel components as possible to understand it.
We need measurements, and we need models, theories, and simulations. His
group is releasing "Genesis" for beta-test soon, a fairly general
purpose and powerful parallel processing simulator capable of modeling the
individual component at any level from ion channels to neurons to large
agglomerations of neurons referred to as computational maps. It runs on a
Sun. It has a nice interface, both for specifying the networks and for examining
their dynamic behaviors. Looks to be a fairly elegant and powerful simulator.
Stephen Omohundro (ICSI, Berkeley) - "Efficient Geometric Learning Algorithms"
Stephen also gave as a title for his talk, "Geometric Learning Algorithms
with Neural Network Behavior". This was an interesting talk about using
k-d trees (after Friedman, Bentley, and Finkel) for partitioning a space.
Their performance, both in performing the initial partitioning and in being
subsequently evaluated, compared quite favorably to neural networks applied
to the same task of space partitioning.
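For reference, a k-d tree is easy to sketch (my toy version): build by splitting on alternating coordinates, then answer nearest-neighbor queries by descending the tree and backtracking only when the other half-space could still hold a closer point.

```python
# Minimal k-d tree (after Friedman, Bentley, and Finkel) with
# nearest-neighbor search, using squared Euclidean distance.

def build(points, depth=0):
    if not points:
        return None
    axis = depth % len(points[0])
    points = sorted(points, key=lambda p: p[axis])
    mid = len(points) // 2
    return (points[mid], axis,
            build(points[:mid], depth + 1),
            build(points[mid + 1:], depth + 1))

def nearest(node, query, best=None):
    if node is None:
        return best
    point, axis, left, right = node
    dist = sum((p - q) ** 2 for p, q in zip(point, query))
    if best is None or dist < best[1]:
        best = (point, dist)
    diff = query[axis] - point[axis]
    near, far = (left, right) if diff < 0 else (right, left)
    best = nearest(near, query, best)
    if diff ** 2 < best[1]:          # other side may hold a closer point
        best = nearest(far, query, best)
    return best

pts = [(2, 3), (5, 4), (9, 6), (4, 7), (8, 1), (7, 2)]
tree = build(pts)
point, dist2 = nearest(tree, (9, 2))
```

The pruning test on the splitting plane is what keeps the expected query cost logarithmic, which is the source of the favorable comparison with networks evaluated over the whole space.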
George Reeke (Rockefeller U.) - "Functionalism vs. Selectionism:
Is the Brain a Computer?"
Reeke presented some of the work that has been going on under Gerry Edelman.
Unfortunately I don't think he did a very good job, as he seemed to be more
absorbed in expounding upon the veritable religion of Gerry's Neural Darwinism
theories than in presenting the significant facts of the research they have
performed. He even mis-explained John Pearson's work under Edelman, involving
comparisons between the simulated and actual somatosensory cortex of a monkey,
leaving out the most significant feature resulting from the simulation -
namely, the self-organization of computational brain maps whose neurons
specialized in either the front or the back of the finger, despite the presence
of sensory input from both front and back of the finger over the entire
region of the simulated cortex corresponding to that finger. Pearson's work
is some of the most impressive work I've ever seen correlating simulation
and real neural systems. Check out Pearson's paper in the proceedings from
the '87 Artificial Life conference at Los Alamos (from Santa Fe Press) instead
of this talk.
David Waltz (Thinking Machines, Inc.) - "On the Emergence of Goal-Seeking
David set himself the task of defining the control system for a particular,
limited robot. He enumerated the goals: 1) Learn to walk and turn, 2) approach
objects and learn "operators", 3) learn spatial environment, 4)
learn to chain operators to achieve goals, 5) learn and operate when one
leg is disabled - all "without cheating, in a real environment".
He then went through an incredibly complex list of features of the environment
that were to be explicitly coded for in the state-vector that would serve
as the robot's memory, and a long list of innate structures/behaviors that
were to be built in. So what's cheating, then? It was all speculation at this
point; nothing has been built yet, and nothing new regarding the credit
assignment problem was mentioned.
Melanie Mitchell (Indiana U.) - "Randomness, Temperature, and the
Emergence of Understanding in a Computer Model of Concepts and Analogy-Making"
Melanie reported on the work that she and Douglas Hofstadter have been doing
to build an analogy engine. They use it to predict the next letter in a
sequence of letters.
Stuart Hameroff (U. of Arizona) - "The Microtubular Computer: Biomolecular
Computing Networks within Living Cells"
Suggests that the neuron is not the appropriate level at which to abstract
primitive brain function. Instead he proposes that microtubules, the fundamental
component of the cytoskeletal structures within cells, are the appropriate
brain primitives. He suggests that they might act as digital cellular automata,
and communicate with each other via interconnecting protein structures referred
to as MAPs. He offered some sketchy physiological support for the proposed
behavior of the microtubules, no explanations for how the known functions
of the neuronal cells (ion channel flows, refractory periods, super-excitatory
periods, etc.) might derive from the microtubular computations, nor any
organizing principle (akin to Linsker's Infomax Principle, which does suggest
an optimal information processing activity is being carried on at the cellular
level utilizing a simple Hebb rule) for his networks, nor any simulations
that compare with observed phenomena (such as John Pearson/Gerry Edelman's
simulations of the somatosensory cortex of a monkey). He could, of course,
be completely and totally correct - no one really understands the true function
of the brain - but I wouldn't place much stock in his theory at this time.
Certainly the cytoskeleton is present in the cell body, but so are
quarks and gluons; their structural presence does not necessarily imply
a necessity to model the system at that level. Looking for a digital computer,
even running a CA, at the base of cognition is probably a blind alley. That
we might have to refine current models of brain function, however, is hardly
in doubt. Jay was more impressed by this talk than I, and gives it a more
favorable airing in his review of the conference.
The list of the speakers may not be exhausted, but I am. So, for additional
information see our various notes or, probably better, the conference proceedings
when they arrive.
Here are notes from Jay Fenton, who also attended the conference.
There were two presentations that I found particularly informative. The
first was by Doyne Farmer, entitled "A Comparison of Different Methods
of Adaptive Computation". Farmer shows how many models of adaptive
computation are similar and can be understood through a common framework
of "Generalized Connectionist Jargon". Each scheme contains the
notion of a node, a signal, a connection, a weight, an interaction rule,
a learning algorithm, and a scheme of meta dynamics. For example: (following
the category list just given).
Generic term         Neural Net          Classifier System      Idiotypic Net (Immune system)
------------         ----------          -----------------      -----------------------------
node                 neuron              message                polymer species
signal               activation level    intensity              antibody-antigen concentration
connection           axon/dendrite       classifier             chemical reaction of antibodies
weight               synapse strength    specificity            reaction affinity (lymphocyte)
interaction rule     sum/sigmoid         linear threshold/max   (largely unknown)
learning algorithm   Hebb/Back Prop      bucket brigade         clonal selection
meta-dynamics        plasticity          genetic algorithms     genetic algorithms
Farmer continues this list for autocatalytic nets, adaptive games, adaptive
polynomial approximation, and boolean networks. Thus one could propose "Farmer's
Thesis", which, like "Turing's Thesis", implies that all adaptive
computation schemes are brothers under the skin. This means that results
discovered for one scheme should apply to the others as well.
The other paper that most stood out in my mind was entitled: "The Microtubular
Computer: Biomolecular Computing Networks within Living Cells", by
Stuart Hameroff of the Univ. of Arizona.
Most conventional neural net theory views the neuron as a summation gate,
that adds up the activation and inhibition signals on its inputs, and "fires"
if a certain threshold is reached. Based on this assumption, the human mind
contains 10^14 gates, and the space of possible psychological states is
of the order of 10^10^14.
A closer examination of a neuron cell reveals a complex internal structure
called the cytoskeleton. The cytoskeleton is composed of protein polymers
called microtubules, which are 25 nanometers across. These microtubules are
connected to one another by bridgelike structures called MAPs. These MAPs
may well be the actual gates of a biocomputer. Hameroff presents a theory
of microtubular information processing that sees each microtubular subunit
as a cellular automaton cell in a hexagonal cylinder, and presented a simulation
showing how MAPs can affect the outcome of computations. The implications
of this are enormous. We can throw out most of the neural net theory as
being much too simple. Each neuron, rather than being a gate, is a separate
processor of great complexity. The mind therefore works much closer to the
"nanotechnological limit" than we ever thought.