Apple Advanced Technology Group
Vivarium Program

Artificial Life II Conference Report
by
Larry Yaeger

February 5th through 9th, 1990
Sweeney Center
Santa Fe, New Mexico

Sponsored by:
The Santa Fe Institute &
The Center for Nonlinear Studies, Los Alamos National Laboratory

The second conference on artificial life (AL2) was held recently in Santa Fe, New Mexico. The first AL conference (AL1) was held about two and a half years before (Sept. 1987) at Los Alamos. Having attended both, I find it interesting to see the growth in both interest and active work in this essentially interdisciplinary field. The first conference was attended by about 160 scientists and few journalists; the second by about 340 scientists and quite a number of journalists. For the most part, presentations at the first conference were deliberately pedagogical in nature, in an attempt to communicate across disciplinary lines; at this second conference many repeat presenters felt less need to lay basic groundwork and jumped directly into the depths of their latest research results. While this, perhaps, made some of the subjects a bit less accessible to new attendees, it permitted the communication of a great deal of new work, and may even have contributed to a sense of the emergence of a true AL community.

To casual readers, I apologize in advance for the length of this document... I went to this extent and level of detail in order to review the content for my own research purposes, to fix it in my memory, and to serve as a more durable form of memory for subsequent reference. If you wish an almost vicarious experience of the complete conference, then I hope you read and enjoy the entire document. If you just want to hit the highlights, you can look for the "*" marks I have placed beside just the sections that are either general overviews, especially interesting tidbits, or the very best presentations. (In all instances, reading until the next bold type will complete the marked section.) Note, however, that almost all of the presentations were more than a little interesting.

* Artificial Life is defined by Christopher Langton (organizer of AL1, co-organizer with Chuck Taylor of AL2) as "the study of synthetic systems that exhibit behaviors characteristic of natural living systems. It complements the traditional biological sciences concerned with analysis of living organisms by attempting to synthesize life-like behaviors within computers and other artificial media." Furthermore, it is "the attempt to abstract the logical form of life from its material basis", and "assumes life is more a function of the organizational form of the matter than it is of the matter itself". All AL work rests upon the fundamental assumptions that 1) it is possible to capture the essential dynamics of living components, and that 2) simulated components utilizing those dynamics will behave like real organisms.

Much of the work in AL is organized in a "bottom up" fashion, focusing on local rather than global control, simple rather than complex specifications, emergent rather than prespecified behaviors, and, as with Braitenberg's approach in Vehicles, or Watson and Crick's approach to determining the structure of DNA, concentrating on modeling and synthesis as an alternative (and adjunct) to the analysis of complex, frequently incomprehensible data. So too was the conference organized in a bottom up fashion, with presentation subjects moving from simulated physics, to chemistry, to origins of life, to evolution, to development and learning, to ecologies, to societies and cultures. The point was well made by Chris Langton, and by the work at this conference in general, that such bottom up modeling not only has an intrinsic value, but that acknowledging the various levels of organization and complexity can help guide the selection of an appropriate level at which to model the simpler rules that give rise to the complex, emergent behaviors observed at higher levels. Just as simulating fluids at a "droplet" rather than a molecular level is sufficient to produce many of the important and interesting features of even a complex, turbulent flow, so modeling societies at an individual level, or brains at a neuronal level may yield genuine insights into fundamental living processes.

It is interesting that the Vivarium program has long had as its central theme the idea of building ecologies in the computer, having under any other name been engaged in Artificial Life research for a number of years. While the purpose of the Vivarium program is frequently seen in terms of exploring user interface and programming system concepts to support the specification of agent or simulated creature behaviors, the method has been and remains the development of simulated living organisms and ecologies.

Following are reviews - some brief, some fairly extensive - along with some personal thoughts and comments, on the full set of oral presentations at the conference, plus a few other conference events. The speakers are grouped approximately according to the level of organization they addressed. A set of abstracts, a few technical papers, an attendees list with addresses, and my notes from the conference (plus the AL1 and Emergent Computation (EC) conference reports referred to later) are also available through [no one any longer] in the Los Angeles Vivarium office (xxx-xxx-xxxx).


Physics (a Meta-level applicable to all)

Tom Toffoli (MIT) - "Programmable Matter"

Described the beauty, or interestingness, of what we see as coming out of the eternal battle between "exp vs. poly"; i.e., potentially exponential growth in complexity of systems versus limited-resource-imposed polynomial solutions and implementations. He suggests that systems exhibiting limited communication neighborhoods and essentially identical function within each of their components are the elegant solution achieved by nature in the exp vs. poly battle.

He then described a series of computers designed at MIT specifically for running cellular automata (CA's), called the CAM (CA Machine). The CAM implements "programmable matter" where each cell may be thought of as representing a single atom or particle. The dimensions and geometry of the cellular array can be varied, as well as the connectivity and amount of state at each site. He claims the existing CAM-6 represents about 1/30 of a Cray XMP, and describes it as being a "scientific toy". The coming CAM-8, he claims, will equal as much as 1000 Cray XMP's, and he still describes it as being a "scientific toy", lamenting the overwhelming scaling problems in doing AL simulations at the atomic level: a factor of 10^9 or 10^10 in going from molecules to cells, 10^10 or greater in going from cells to organisms, and 10^6 or 10^7 in going from organisms to colonies.

* Chris Langton (Los Alamos) - "Life at the Edge of Chaos"

Chris gave a more mature version of the talk he gave at the Emergent Computation conference (which I reviewed in an earlier report on that conference). Basically, he has defined a parameter, lambda, which is a ratio of the number of generating states (the number of rules that cause a cell to turn "on") in a CA's state transition table (the look-up table that defines the CA's next state as a function of its current neighborhood) to the total number of states in that transition table. This parameter, which ranges from 0 to .5, encompasses the full range of behavior of a system of CA's, from a system that immediately quenches at one extreme to a system that is completely chaotic at the other. In between these extremes is a regime characterized by local coherency and the transfer of information between coherent patches, which he believes is the interesting behavioral regime. These static and propagating structures provide the basis for embedded computation.
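
To make the parameter concrete for myself, here is a minimal sketch in Python (my own illustration, not Langton's code; the binary alphabet, 3-cell neighborhood, and table layout are assumptions for illustration) of building a rule table with a target lambda and then measuring lambda as the fraction of transition-table entries that map to the "on" state:

    import random

    def random_rule_table(neighborhood_size=3, states=2, target_lambda=0.25, seed=0):
        """Build a CA transition table whose entries map to the 'on' state
        with probability target_lambda (so measured lambda ~ target_lambda)."""
        rng = random.Random(seed)
        entries = states ** neighborhood_size
        table = {}
        for index in range(entries):
            # Decode the index into a neighborhood tuple, base `states`.
            neighborhood, n = [], index
            for _ in range(neighborhood_size):
                neighborhood.append(n % states)
                n //= states
            table[tuple(neighborhood)] = 1 if rng.random() < target_lambda else 0
        return table

    def measured_lambda(table):
        """Lambda as described in the talk: generating ('on') entries over total entries."""
        return sum(1 for out in table.values() if out != 0) / len(table)

    if __name__ == "__main__":
        table = random_rule_table(target_lambda=0.24)   # near the reported critical value
        print("measured lambda =", measured_lambda(table))

Sweeping target_lambda from 0 toward 0.5 and running the resulting CA's over many random tables is the experiment Chris described; the sketch above only shows where the number itself comes from.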

He relates the range of behaviors he observes in his CA's to Wolfram's CA classes: I - Homogeneous (fixed point), II - Heterogeneous (periodic), III - Chaotic (non-periodic), and IV - Extended Transients (possible universal computation), and to physical states of matter: I - Solid (globally ordered), II - Liquid (locally, but not globally, ordered), and III - Gas (chaotic, no order). By both entropic and mutual information measures made over a large number of simulations he is able to demonstrate that there is a kind of critical point around a lambda value of about 0.24, on the boundary between solid and liquid, between type I and type II CA classes, where the type IV CA class resides, where both persistent and propagating structures coexist, that he calls the "edge of chaos".

It is only here, on the edge of chaos, that he believes living, information processing systems have been able to evolve, and he offers the interesting conjecture that evolution is the process of learning to avoid attractors. That is, systems starting out at too low a value of lambda always fall into fixed, static, periodic attractor states; systems starting out at too high a value of lambda always fall into chaotic, disordered attractor states; but when the primordial porridge is just right, an extended transient may be able to develop some local control over parameters affecting its closeness to the critical transition value of lambda and so approach, or at least stay close to, that critical value, carry and process information, and survive as a recognizable entity. Evolutionary steps that create new organisms even more capable of tracking that critical lambda, and thus of avoiding attractors, are successful; the organism becomes a better information processor, survives more effectively, and so on.

* Chemistry and the Origins of Life

A significant number of presentations fell into this category. Most concerned themselves with the origins of life, and with the formation, properties, and dynamics of autocatalytic sets; that is, closed sets of molecules that are able to chemically assist in the production of each other within a "primordial soup" of constituent parts. Such autocatalytic systems are currently felt to be the basis of primitive life, and the answer to the chicken/egg problem posed by the DNA/RNA reproduction scheme employed by most life today: for DNA to reproduce, complex proteins are needed, yet in living systems these proteins are only synthesized by DNA - which came first? A current belief is that primitive life began with an RNA-only scheme, and that RNA-based autocatalytic sets provided a platform for the development of the more reliable DNA/RNA scheme - a scenario which is lent additional credence by the recent discovery that RNA can itself act as an enzyme/catalyst. Based on simulations, graph theory, mean free energies, chemical reaction rates, and direct chemical experimentation the various researchers attempted to define qualitative and quantitative behaviors of complex chemical systems.

Steen Rasmussen (Los Alamos) - "Computational Chemistry"

Steen crafted an elaborate variant of the "Core Wars" game (utilizing the same instruction set) that imposed energy/resource constraints on instruction execution and used a couple of forms of mutation and natural selection to evolve 'fitter' instruction sequences. He was never able to evolve self-sustaining organisms from a raw chemical soup, but with a hand-tailored set of initial conditions he was able to show waves of "chemical evolution", in which different instructions tended to dominate the population. He likened these behaviors to autocatalytic reactions in real chemical soups.

* Stuart Kauffman (Santa Fe Institute) - "Origins of Order: Self-Organization and Selection in Evolution"

Stuart noted that traditional Darwinian views of natural selection tend to ignore the process of self-organization. Stuart has done considerable work (and presented formal analysis results at AL1) on the subject of autocatalytic sets of polymer catalysts, and views the Origin of Life as an expected emergent property of complex systems of polymer catalysts. That is, if you have a sufficiently rich set of catalytic polymers, you will always find some set of autocatalytic members; life is a natural, emergent property of closed loops in randomly connected graphs. He sees spontaneous order and a "selection to edge of chaos" (obviously agreeing with Langton) as the basis of life. He also foreshadowed Danny Hillis's talk in stating that biological evolution is coevolution, with evolving organisms forced to respond to deforming, coupled energy landscapes, rather than simply optimize on some static fitness/energy landscape.

He carried out a formal analysis of the expected number of attractors to be found in a system of finite state automata (FSA) which use only canalyzing Boolean functions (functions for which at least one input has a value that by itself determines the output, regardless of the other inputs), such as govern nearly every genetic coding/biochemical reaction. Suggesting that these attractors might correspond to viable cell types in a living organism based on such functions, he determines that their number scales as the square root of the number of FSA cells, and notes that the number of cell types in living organisms also scales approximately as the square root of the number of biological cells. The title of this talk is also the title of Stuart's soon-to-be-published book, from Oxford University Press.
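
As a toy illustration of the attractor-counting idea (my own sketch, not Stuart's formal analysis; the network size, two-input wiring, and simplified canalyzing functions are all assumptions), one can build a small random Boolean network and count the distinct attractors reached from random initial states:

    import random

    def random_canalyzing_net(n_nodes=16, seed=1):
        """Each node gets 2 random inputs and a canalyzing Boolean function:
        one designated value on one designated input forces the output."""
        rng = random.Random(seed)
        net = []
        for _ in range(n_nodes):
            inputs = rng.sample(range(n_nodes), 2)
            canal_in = rng.randrange(2)                         # which input canalyzes
            canal_val = rng.randrange(2)                        # the forcing value
            forced_out = rng.randrange(2)                       # output when forced
            other_table = (rng.randrange(2), rng.randrange(2))  # else, depends on other input
            net.append((inputs, canal_in, canal_val, forced_out, other_table))
        return net

    def step(state, net):
        new = []
        for inputs, ci, cv, fo, other_table in net:
            if state[inputs[ci]] == cv:
                new.append(fo)
            else:
                new.append(other_table[state[inputs[1 - ci]]])
        return tuple(new)

    def attractor_of(state, net):
        """Iterate synchronously until a state repeats; return the cycle itself."""
        seen = {}
        while state not in seen:
            seen[state] = len(seen)
            state = step(state, net)
        first = seen[state]
        return frozenset(s for s, t in seen.items() if t >= first)

    if __name__ == "__main__":
        rng = random.Random(7)
        net = random_canalyzing_net()
        attractors = set()
        for _ in range(500):                                    # sample random initial states
            init = tuple(rng.randrange(2) for _ in range(len(net)))
            attractors.add(attractor_of(init, net))
        print("distinct attractors sampled:", len(attractors))

Repeating this over many random networks of increasing size is how one would check the square-root scaling Stuart described.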

J. W. Schopf (UCLA) - "What Must be True for Life to Have Evolved on This Planet"

Basically a review of geologic time periods and the chemical composition of the universe, the earth, and living organisms (noting the similarities). One of the most interesting collections of observations was that oxygen levels were < 1% of the current levels until after about 2 Ga to 1.8 Ga (Ga = Giga annum = billions of years ago); that anaerobic photosynthesis not only does not require oxygen, but does not produce oxygen, while aerobic photosynthesis both requires and produces oxygen; and that a variety of complex chemical processes in the human body are anaerobic in early stages and aerobic in later stages.

Vladimir Kuz'min (USSR Academy of Sciences) - "Symmetry Breaking as a Possible Conducting Principle in Evolution: Example of a Chirality"

This was a fairly dense mathematical treatise on a particular form of symmetry breaking (the usually unpredictable preponderance of one form of a seemingly symmetric and equally likely pair of possible forms): chirality (or chemical/molecular handedness). Language difficulties made it additionally difficult to follow. However, I believe his main thesis was that symmetry breaking may often be seen as the method of development from simple to complex systems. He stated that life requires 1) the ability to self-replicate, and 2) handedness, and furthermore, that 2) was required for 1). He claims that one of the principal problems of biology is to explain the symmetry breaking of the prebiotic world, noting that all proteins have one handedness, all sugars the opposite handedness. He showed analytically (I think) that stable polymer replication requires chiral purity (complete single handedness), and further demonstrated that any simple "linear advantage factor" (in evolutionary fitness, I believe) could not gradually break symmetry, so catastrophic bifurcation is required. He also conjectured that such a linear advantage factor would be essentially insignificant, and thus the sign of chirality is purely accidental.

Ron Fox (Georgia Tech) - "Synthesis of Artificial Life"

Pointed out the inherent difficulties (the extreme magnitude) of digital simulations of natural chemical systems. Also noted that "analog" models are best based on actual molecules, requiring the experimenter to be a chemist. Observed that getting from the simple chemicals to the dynamic processes of life is the crux of the matter, and suggests that an analysis of energy flows may be adequate and appropriate to model evolutionary forces. He pointed out that the interesting small molecular combinations [H2O, Ca2+(aq.), H2PO4-, Na+(aq.)] have lower mean free energy than basic elements, down to -3 ΔGf° (kcal/gm). Quoting from his abstract: "Life's origin may have involved a cellular structure capable of energy driven metabolism at the level of coenzymes but without genetics or modern proteins. This structure could provide the environment in which primitive genetics and protein synthesis emerged based upon an RNA chemistry. This scenario leads to the view that evolution has diversified polymer sequences over time, rather than having selected a limited number out of a combinatoric plethora present at the start".

* Doyne Farmer (Los Alamos) - "Protolife: Emergence and Evolution of Robust Autocatalytic Sets"

Examining artificial chemical polymer sets using just ligation (condensation or end-joining) and cleavage, Doyne carried out graph theoretic analyses to predict the conditions under which autocatalysis will emerge. He was able to determine critical transition conditions to autocatalysis as a function of the radius (molecular length) of the food set, the probability of catalysis (for any monomer pair to produce a polymer), and the number of "letters" in his "chemical alphabet". Using assigned chemical reaction rates, he was able to determine pumping rates (rate of introduction of "food" chemicals - his environmental conditions) that resulted in a distribution of most of the chemical mass into autocatalytic sets (as opposed to food or background, non-autocatalytic molecular sets). He was then able to examine the robustness of these autocatalytic sets by introducing perturbations in pumping rate or alterations in the food set; some sets remained autocatalytic, some didn't. He pointed out that these systems are lifelike in that 1) information is passed on through time, 2) they grow more complex over time, and 3) they depend upon diversity.
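
The graph-theoretic flavor of this can be sketched very crudely (a toy of my own, with invented parameters, no kinetics, cleavage, or pumping rates, just the closure computation): start from a food set of short polymers, assign catalysts to ligation reactions at random with some probability, and iterate until the set of producible, internally catalyzed molecules stops growing.

    import itertools, random

    ALPHABET = "ab"        # number of "letters" in the chemical alphabet
    FOOD_RADIUS = 2        # food set: all polymers up to this length
    MAX_LEN = 6            # cap polymer length to keep the toy finite
    P_CATALYSIS = 0.05     # chance a given molecule catalyzes a given ligation

    def food_set():
        food = set()
        for length in range(1, FOOD_RADIUS + 1):
            for letters in itertools.product(ALPHABET, repeat=length):
                food.add("".join(letters))
        return food

    def grow_to_closure(food, rng):
        """A ligation a+b -> ab fires only if some molecule already present
        catalyzes it; catalysis is decided once, at random, per (molecule, reaction)."""
        molecules = set(food)
        decided, catalysts = set(), {}
        changed = True
        while changed:
            changed = False
            for a, b in itertools.product(sorted(molecules), repeat=2):
                product = a + b
                if len(product) > MAX_LEN or product in molecules:
                    continue
                cats = catalysts.setdefault((a, b), set())
                for m in sorted(molecules):
                    if (m, a, b) not in decided:
                        decided.add((m, a, b))
                        if rng.random() < P_CATALYSIS:
                            cats.add(m)
                if cats:                      # catalyzed from within the set
                    molecules.add(product)
                    changed = True
        return molecules

    if __name__ == "__main__":
        rng = random.Random(0)
        food = food_set()
        closure = grow_to_closure(food, rng)
        print(len(food), "food molecules grow to", len(closure), "molecules")

Varying FOOD_RADIUS, P_CATALYSIS, and the alphabet size is the knob-turning Doyne described; the real model adds reaction rates, cleavage, and the resource pumping on which his robustness results depend.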

Stuart Kauffman made an interesting statement during the question and answer session following Doyne's talk: if a completely general polymerase emerges, then production of all nodes of the graph becomes possible; this may be a critical threshold/development in the autocatalytic sets, which essentially would give rise to the RNA/DNA structures.

* Peter Schuster (Universitaet Wien, Austria) - "Modeling Evolutionary Optimization by RNA folding"

A really fascinating attempt to study "microscopic evolution" by simulation of RNA folding. For their study (with Walter Fontana, now of Los Alamos), the genotype consisted of strings of just 2 letters (as opposed to the 4 letter sequences of real RNA/DNA), and the phenotype consisted of the 2-dimensional folded molecular structure. The simple 1-dimensional sequence of letters is referred to as the primary form, the folded 2-dimensional shape as the secondary form, and a 3-dimensional folded shape (which their study did not address) as a tertiary form. They modeled chemical strings of length 70 with a 2 letter alphabet, represented by 0's and 1's, but using the G-C interactions (I believe) to yield the chemical bonding energetics that give rise to the secondary shape solutions.

Populations of various genotypes are then evolved using crossover and mutation, with fitness being determined from the phenotypical secondary form (not from the genotype). Their evolutionary fitness function represents a compromise between forming as many base pairs as possible and keeping stacks (bonded pair sequence lengths, I think) as small as possible. From their simulated, but fairly authentic chemical models they were able to determine that the single strand elements (unbound loops) contribute to both higher replication rates and to higher degradation (decomposition) rates. The "selective value" (the phenotypic expression of the secondary form) varies greatly and abruptly with individual gene mutations, so the selective landscape for these phenotypes is very ragged (with many local minima). There is a critical mutation rate below which evolution is possible, with "quasi-species" broadening as the mutation rate increases, and above which there is no evolution - just random drift. Because small changes in genotype can yield radically different secondary solutions, optimization proceeds through gradual changes in genotype punctuated by sharp changes in the phenotype. Currently their crossovers may occur at any point in the primary string; they believe that a more realistic model will emerge if crossover points can be confined to the edges/ends of the already selected sequences.
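
To make the genotype-to-phenotype step concrete, here is a toy folding and fitness sketch of my own (not Fontana and Schuster's thermodynamic model): the two-letter string is "folded" by a Nussinov-style dynamic program that simply maximizes complementary base pairs, and a crude fitness rewards pairing; the real work used measured stacking energetics and the replication/degradation compromise described above.

    import random
    from functools import lru_cache

    MIN_LOOP = 3   # minimum number of unpaired bases enclosed by any pair

    def max_pairs(seq):
        """Nussinov-style DP: maximum number of complementary ('0'-'1') base
        pairs in a two-letter sequence, respecting a minimum hairpin loop."""
        @lru_cache(maxsize=None)
        def best(i, j):
            if j - i <= MIN_LOOP:
                return 0
            score = best(i + 1, j)                      # base i left unpaired
            for k in range(i + MIN_LOOP + 1, j + 1):    # or i pairs with some k
                if seq[i] != seq[k]:                    # '0' pairs only with '1'
                    score = max(score, 1 + best(i + 1, k - 1) + best(k + 1, j))
            return score
        return best(0, len(seq) - 1)

    def fitness(seq):
        """Crude selective value: fraction of bases that end up paired."""
        return 2 * max_pairs(seq) / len(seq)

    if __name__ == "__main__":
        rng = random.Random(3)
        genome = "".join(rng.choice("01") for _ in range(70))
        print("pairs:", max_pairs(genome), " fitness:", round(fitness(genome), 3))

Evolving a population of such strings with ordinary crossover and mutation, and watching how sharply the fold changes under single-bit mutations, reproduces the qualitative point of the talk.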

This talk raised one of the most interesting issues for me, as I consider the genetic code and embryology that will give rise to the "critters" in my PolyWorld ecology-in-a-computer project. That is, contrary to conventional wisdom, which says that small changes in genotype should result in small changes in phenotype (supposedly in order to permit natural selection to perform a graceful, gradient descent type of optimization), Schuster and Fontana's work suggests that the norm may be for small changes in genotype to produce radical changes in phenotype. This is also observed at a macroscopic scale in nature, as evidenced by the Drosophila fruitfly, for which a single gene's mutation can result in two pairs of wings instead of one (with all the attendant reshuffling of the nervous system as well), possibly expressing some genetic ancestor's now suppressed body structure.

Gerald Joyce (Scripps Clinic) - "RNA Evolution"

Though the work in Joyce's lab seemed to be aimed more at novel chemical synthesis and novel synthesis techniques, their work clearly demonstrates some of the fundamental principles of an "RNA world" - where RNA is capable of serving as both genotype and phenotype, thus providing the basis for an RNA organism. Using off-the-shelf chemicals they have used tailored chemical reactions to provide a "selection pressure" of 10^6 to 1 in favor of enzymes with a particular trait (a tail of a specific molecular sequence), and in a closed cycle of selection, amplification (providing suitable component chemicals and catalysts to copy any and all enzymes manyfold), and mutation, they have been able to engineer specific enzymes, including a particular molecule (a mutant form of Tetrahymena ribozyme) that cleaves DNA more efficiently than the non-mutated form cleaves RNA. According to Joyce this engineered molecule is the first RNA enzyme that has been shown to specifically cleave single-stranded DNA.

Ben Zuckerman (UCLA) - "Constraints on and Prospects for Life Elsewhere in the Universe"

With a minimal definition of life (low entropy + complexity; which he pointed out did not rule out automobiles), and a set of minimum conditions to achieve such life:

1) flow of free energy
2) C+N compounds exist near the surface (or matter capable of interacting with the free energy)
3) liquid H2O near surface (or a fluid medium for chemical transport)
4) no free O2 in the atmosphere (free O2 being too strongly oxidizing/chemically reactive)
5) time

Zuckerman delivered up a fairly pessimistic assessment of the possibility of life elsewhere in our solar system (not much was said about life beyond the solar system except that he sees little hope for life evolving except on worlds similar to Earth). Venus fails due to too high pressures and temperatures; Mars due to lack of water (the H2O channels we see are > about 10^9 years old) and a highly oxidized, reactive surface; Jupiter due to high rate of atmospheric convection (traversal of any hospitable vertical extent of the atmosphere being so brief as to preclude the evolutionary time spans required for even primitive organic chemistry to develop); Saturn for the same convection problems. Uranus and Titan are more likely, though may not provide enough free energy, and reaction times may be too slow. Io's surface is completely resurfaced every 10 million years or so, yet is too cool for liquid sulphur or silicates at the surface. He also speculated against the likelihood of silicon-based life in general based on the (perhaps excessive) stability of silicon, its being less likely to make double-bonds than carbon, and the fact that the Si-Si bond is only 1/2 the strength of the C-C bond. Sigh.

* Evolution, Development, Learning, and Ecologies

The steps on the ladder of organizational complexity do tend to blur a bit, and most of the remaining presentations addressed some combination of limited behavior modeling of individual organisms, evolutionary strategies for selecting fitter populations of these individuals, and observing or attempting to design or select for the emergent behaviors of groups of individuals. A few papers addressed themselves to the ontogeny (individual physiological growth/development) of plants and other organisms. There were also a couple of talks and a panel on computer viruses, worms, etc. that actually sparked quite a bit of discussion at the conference; in order to stick to a simple chronological review of presentations these talks are lumped into this category as well.

David Jefferson (UCLA) - "The Genesys System: Evolution as a Major Theme in Artificial Life"

David (along with Chuck Taylor, conference co-organizer, and Robert Collins, who later presented more recent results) characterized his approach as "organisms as programs". The genotype of his creatures is a bit string, to which GA operators are applied; the phenotype is the program - finite state automata (FSA's) in one case, neural nets (NN's) in another. David attempted to evolve creatures that were capable of exhibiting ant-like trail-following behaviors, using FSA's in one series of experiments, NN's in the other. Genesys, their simulator, is written in C++/Paris for the Connection Machine. For the FSA critters, the genome is all bits of the state transition table; for the NN critters, it is all bits of all weights. After each generation, they reproduce a full population from the top 1% to 10% of the population in a non-elitist fashion (not retaining the best organism, unless it happened to not recombine and not mutate). They were generally using a 0.5%/bit mutation rate, though he felt this might be a bit high. Their fitness function was a numerical score based on how much of the trail the creature successfully navigated within a fixed number of time steps (200, I believe), ranging from 0 to 89. The 0th generation got a (maximum?) 58 score. With FSA's, the 200th generation produced a full-scoring organism (a 14 state FSA). With NN's, they got a full-scoring organism after 50 to 100 generations, though the average score for both was only about 1/2 the maximum. Their organisms clearly learned specific tasks relating to specific trails, and when moved to a new trail learned worse (more slowly) than a fresh set of random organisms (due to the fact that any small change in these pre-trained organisms typically resulted in reduced performance and so was selected against - this is a classic problem with trying to learn on an energy landscape with a lot of local minima).
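
A bare-bones sketch of one generation of this style of bit-string GA, as I understood the description (my own paraphrase, not the Genesys code; the genome length and the placeholder fitness function are invented):

    import random

    GENOME_BITS = 448        # e.g., all bits of an FSA transition table (size assumed)
    PARENT_FRACTION = 0.10   # top 10% reproduce; non-elitist
    MUTATION_RATE = 0.005    # 0.5% per bit

    def next_generation(population, fitness, rng):
        """population: list of bit lists.  fitness: genome -> score, e.g. how much
        of the trail the decoded FSA or neural net covers in 200 time steps."""
        ranked = sorted(population, key=fitness, reverse=True)
        parents = ranked[:max(2, int(PARENT_FRACTION * len(ranked)))]
        children = []
        while len(children) < len(population):
            a, b = rng.sample(parents, 2)
            point = rng.randrange(1, GENOME_BITS)                 # one-point crossover
            child = a[:point] + b[point:]
            child = [bit ^ (rng.random() < MUTATION_RATE) for bit in child]
            children.append(child)
        return children

    if __name__ == "__main__":
        rng = random.Random(0)
        population = [[rng.randrange(2) for _ in range(GENOME_BITS)] for _ in range(512)]
        toy_fitness = lambda g: sum(g)        # stand-in for the 0-to-89 trail score
        for _ in range(20):
            population = next_generation(population, toy_fitness, rng)
        print("best toy score:", max(map(sum, population)))

The real system differs mainly in the fitness function (running the decoded FSA or neural net on the trail) and in the scale the Connection Machine makes possible.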

* Danny Hillis (Thinking Machines Corp.) - "Simulated Evolution and the Red Queen Hypothesis"

The "Red Queen Hypothesis" of the talk's title is a reference to the red chess queen in Alice Through the Looking Glass (Lewis Carroll), who insists that Alice keep running, and when Alice says that she doesn't see why she should, as she isn't actually getting anywhere, the red queen replies that that is precisely the point... "it takes all the running you can do to stay in the same place". (And it continues something like, 'so to actually get anywhere you have to run faster and faster and faster'.) As usual, Danny's talk was one of the highlights of the conference. In support of the red queen's hypothesis, he showed and discussed the results of some evolving systems he has been working with on the Connection Machine. Genetic evolution as a simple optimization procedure on a static, equilibrium fitness landscape, whether consisting of one or many hills is really too simplistic a view. Dynamic, far from equilibrium landscapes are the norm, perhaps necessarily so - as he demonstrates in his "rampworld" and exchange-sort evolution experiments. Real organisms must continually evolve to survive; it is as if every time some species actually succeeds in becoming king of a particular hill, changes in the environment (especially including the other organisms) cause the landscape to shift, and the hill simply drops out from beneath the formerly fit species. It takes all the running you can do to stay in the same place, and you'd best evolve to run faster and faster and faster!

Danny used an example of attempting to evolve solutions to the creation of crossword puzzles to discuss various problems with simple, single species natural selection. Because of overlap in genes (words of different sections of the puzzle overlapping in his crossword example), various partial solutions are not compatible, so recombination doesn't help much, and organisms can't get out of local minima. Mutation is (as is frequently the case) also inadequate. Introducing "temperature" variations (fluctuations in the hospitability of the environment) did not help - constant, "equatorial", benign regions yield more diverse populations (as is seen in the real, biological world). The solution that does work is to introduce a coevolving parasite that specifically thrives on weak spots in the host population. As soon as any variant of the host achieves a significant population size, by having found some locally optimum solution to its fitness constraints, the parasites begin to thrive and attack and reduce the host population, permitting alternative solutions to emerge, build up a large population, and be attacked in their turn...

Danny showed animated color contour maps of population densities evolving over time for both the single-species and the dual-species simulations, which clearly illustrated the benefits of coevolution to both the average and the best individual performance of the host species. These videotapes were actually from a series of experiments that attempted to evolve the best possible ramp (horizontal sequence of unit vertical steps); Danny also talked about this experiment at the Emergent Computation conference, and it is discussed in more detail in my report on that conference. He also talked at that conference and at this one about evolving an optimum exchange sort algorithm, a classic computer science problem; without parasitism the results were unremarkable, but with parasitism he has been able to evolve the second best (shortest) solution known to date.
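
The flavor of the parasite mechanism can be sketched in miniature (my own toy, not Danny's Connection Machine experiments; the bit-matching "problem" stands in for ramps or sorting networks): hosts are scored against a sample of parasites, while the parasites are scored by how many hosts they defeat, so the test set keeps moving to wherever the hosts are weakest.

    import random

    BITS, POP, SAMPLE, THRESHOLD, MUTATION = 32, 100, 10, 24, 0.02

    def matches(host, parasite):
        return sum(h == p for h, p in zip(host, parasite))

    def evolve(pop, scores, rng):
        """Keep the better-scoring half and refill with mutated copies of survivors."""
        ranked = [g for _, g in sorted(zip(scores, pop), key=lambda pair: -pair[0])]
        survivors = ranked[:POP // 2]
        children = [[b ^ (rng.random() < MUTATION) for b in rng.choice(survivors)]
                    for _ in range(POP - len(survivors))]
        return survivors + children

    if __name__ == "__main__":
        rng = random.Random(1)
        hosts = [[rng.randrange(2) for _ in range(BITS)] for _ in range(POP)]
        parasites = [[rng.randrange(2) for _ in range(BITS)] for _ in range(POP)]
        for generation in range(200):
            host_scores, parasite_scores = [], [0] * POP
            for h in hosts:
                wins = 0
                for i in rng.sample(range(POP), SAMPLE):
                    if matches(h, parasites[i]) >= THRESHOLD:
                        wins += 1                        # host "answers" this parasite
                    else:
                        parasite_scores[i] += 1          # parasite found a weak spot
                host_scores.append(wins)
            hosts = evolve(hosts, host_scores, rng)
            parasites = evolve(parasites, parasite_scores, rng)
        print("mean host score, last generation:", sum(host_scores) / POP)

Removing the parasite update turns this back into the plain single-species selection described above.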

An interesting observation about solutions evolved in this manner is that, in addition to providing a solution, the process also provides a set of very adverse tests for that solution (the best parasites), thus improving its reliability and robustness. He noted that, "If airplanes flew based on sorting, I'd rather fly on a plane based on this algorithm [the evolved one] rather than one I drew up by hand".

In response to a question he also made an almost offhand remark - interesting for my PolyWorld considerations - about the desirability of evolving the embryology, a simple but valuable idea to provide a more open-ended, diverse system. He did however reiterate (from the EC conference) an opinion that small changes in genotype should result in small changes in phenotype, which seems to run counter to Peter Schuster's results and the Drosophila fruitfly example discussed above.

Alvy Ray Smith (Pixar) - "Simple, Non-Trivial Self-Reproducing Machines"

Alvy gave a "two page" formal mathematical proof of existence for nontrivial self-reproducing machines, as cellular automata configurations, which was based on the Recursion Theorem and relied only on computation universality. He notes that earlier proofs (3 of them) have been book length, very ad hoc, and depended upon "construction universality".

Jim Crutchfield (UC Berkeley) - "Evolutionary Mechanics: Towards a Thermodynamics of Evolution"

Believes that "computational mechanics" (mathematics of nonlinear processes, chaos theory) will provide a framework for modeling complex phenomena. From his abstract: "The computation mechanics of evolutionary systems suggests methods to quantify the temporal development of structural and behavioral sophistication. The central thermodynamic variables, the statistical complexity and dynamical entropy, measure the effective information processing in an organism and in the environment. Variational principles govern their change and suggest why and how complexity arises in natural systems." He did not actually present any of these mathematical formulations at the conference, but pointed us to a number of his technical papers.

Eugene Spafford (Purdue) - "Anatomy of Computer Viruses, Worms, and Bacteria"

Spafford differentiated between worms (programs that can run on their own), viruses (code segments that must insert themselves into other programs in order to run and reproduce), and bacteria (or rabbits, which reproduce themselves on the local machine - usually until some resource is exhausted - but do not move from machine to machine). Most of Spafford's talk was a fairly predictable elucidation of these invasive programs' taxonomy, structure, and method of action. One frightening statistic that he quoted, I think, was the existence of some 405 known viruses on PC's alone. And he claimed that all of them are still active (noting that contaminated disks are frequently filed away for years only to be finally hauled out to start a fresh round of infection). He says that there is at least one example of a pair of viruses that "mate" and reproduce offspring that are unlike either parent (nVirA & nVirB).

Russell Brand (Lawrence Livermore) - "Computer Worms and the Turing Test"

Russell gave an interesting talk (that heavily motivated audience participation) based on a recent experience he had had in tracking down and identifying a particular breach in machine security at the Lab. He listed a number of the observed characteristics of the security breach, including an initial large number of identical breaches, with no spelling errors, and simultaneous activity on two different machines, which had them fairly convinced that they were dealing with a worm. Additional observation yielded at least one example of what appeared to be a typo, which was their first indication that they might be dealing with a person or persons. For a considerable period of time they were unable to determine whether they were dealing with a worm or human(s), thus indicating the ease with which a program might pass at least a 'no questions asked' Turing test. In the end, it turned out that someone had written up this particular method of security breach, made a number of photocopies, and passed it around; hence they were, in fact, dealing with multiple people. He asked the pointed and interesting question, "What is replicating here?" And answered it: not scripts, not legends, not programs, not program fragments, but "memes" (Richard Dawkins's word for replicating ideas).

Computer Virus Panel with Harold Thimbleby, Hyman Hartman, Eugene Spafford, Russell Brand and David Jefferson

DJ asked if computer viruses are artificial life, and answered yes because they are complex, self-replicating, demonstrate variation, mutation, recombination and evolution, exhibit purposeful behavior, exist in a homeostatic (stable) relation with the environment, tolerate perturbations in the environment, have a metabolism (exchanging energy and/or information with the environment), "live" in populations, and carry information.

* HT discussed his "Liveware" HyperCard stackware that uses "friendly worms" to synchronize and maintain a database of user interface researchers in Scotland. Liveware currently operates strictly via 'sneakerNet', and functions by examining a newly inserted diskette, then through its version control rules, incorporating data from any compatible liveware stack on the new diskette into the host machine's database. (I believe that the liveware stack on the floppy disk may be configured so as to update itself as well.) Thimbleby stated that both liveware and worms/viruses are autonomous and replicate, but where worms are uncooperative agents, his liveware uses cooperative agents; that is, the behavior of his worms is under the control of the user of the liveware. Actually, I think liveware agents replicate data but not themselves and so may not actually qualify as worms of any persuasion. In any event, I think his liveware work is extremely significant and valuable, especially when extended to software agents that may be used for synchronizing and maintaining the filestores on multiple machines (he is currently working on a virus to keep several Mac filestores in sync automatically).

Consistency is guaranteed in his liveware stacks by insisting that every piece of data has a single owner, who is the only person authorized to make changes to that data, thus providing a method for unambiguous version control. The current implementation of liveware definitely depends upon friendly users, and has little if any security against willful misuse.
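
The ownership rule suggests a very simple merge procedure. Here is a hypothetical sketch of my own (not Thimbleby's HyperTalk; the per-record version counter is an assumption on my part): for each record on the inserted disk, accept it only if it is new to us, or if it carries the same owner and a later version than the copy we hold.

    def merge_stacks(local, incoming):
        """local, incoming: dicts keyed by record id; each value is a dict with
        'owner', 'version' (a counter bumped only by the owner - assumed), and
        'data'.  Like Liveware itself, this trusts friendly users: nothing here
        verifies that the incoming copy really came from the owner."""
        for key, theirs in incoming.items():
            mine = local.get(key)
            if mine is None:
                local[key] = theirs                        # record we have never seen
            elif (theirs["owner"] == mine["owner"]         # ownership never changes hands
                  and theirs["version"] > mine["version"]):
                local[key] = theirs                        # the owner's later revision wins
            # Otherwise keep our copy; edits by non-owners are simply ignored.
        return local

    if __name__ == "__main__":
        host = {"jones": {"owner": "jones", "version": 3, "data": "mail-stop 27A"}}
        floppy = {"jones": {"owner": "jones", "version": 4, "data": "mail-stop 14B"},
                  "smith": {"owner": "smith", "version": 1, "data": "ext. 4321"}}
        print(merge_stacks(host, floppy))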

He provides users of liveware with a separate set of controls over the finding, merging, and copying of data between stacks; each function may be set to operate in one of three modes: manual, autotrigger, or confirm. That is, upon insertion of a disk containing a compatible liveware stack, liveware agents on the host machine may or may not automatically carry out the data synchronization process, and may or may not ask for confirmation prior to taking any automatic actions.

Thimbleby is currently looking for a publisher for his Liveware stack and later developments as well. Jay Fenton of Farallon has expressed a possible interest. Thimbleby has spoken to someone at Apple-UK, but doesn't feel that is going to develop any further; if anyone from Apple-US reading this has any suggestions or opinions about pursuing this further, you can (and should!) contact him at (0786) 73171 or [email protected].

Even the current set of liveware tools seems interesting, and the filestore-maintenance extension (especially with specific control over, say, folder-level synchronization; carried out over networks and/or modems; and with some attention to security) seems like one of the better ideas to come along in a while. (Wouldn't it be nice if everyone could keep their own phone extension, mail-stop, department information, and so on up to date in their own local copy of the Apple Phone stack and have those changes propagate automatically to everybody else's copy? Or if that special project folder and your expense report folder on your home machine and your office machine could automatically stay in sync?) At least Thimbleby appears to have implemented a reliable solution to the version control problem (though I think that a finer granularity of ownership and version control than he currently uses in his stacks might be useful in a more general solution). I have some of his technical papers on the subject if anyone is interested.

ES likened the seriousness of the crime of perpetrating a new computer virus on the computer users community to that of someone introducing a new biological virus that kills 5% of the human population of the planet. By and large, of course, computer viruses are unquestionably a terrible thing, but I think his analogy was more than a little stretched (and his paranoia running more than a little rampant).

HH deliberately tried to offer a counterpoint to the rest of the panel's theme, and discussed an interesting, still very speculative theory that much of natural evolution's speciation may in fact have been mediated by viral infection and an attendant alteration either in the infected organism's DNA or in the process of copying it.

RB claimed that a virus is the wrong solution for any problem, and supported his thesis by offering more efficient alternative solutions for a number of possible applications of viruses.

An open discussion sought suggestions for defenses against viruses, and yielded up the fairly obvious ideas of encrypted executables and anti-virus viruses (against RB's better judgment, of course). Danny Hillis, however, offered what I think was mostly a humorous speculation that we use different operating systems on every machine; to make them practical we could evolve them; and communication could proceed via a common interface. HH suggested that the presence of viruses might force evolutionary growth of computers, though I suspect that the only evolution one will see is computers better protecting themselves against viruses, which will not necessarily be of any additional value to anyone.

Tuesday Evening Demonstrations

Video and computer demonstrations were given of self-assembling models of flagella rotors by Richard Thompson; a robot that bounces a tethered ball by Brian Yamauchi; continuous growth graftal-like computer graphics plants by Karl Sims; an AGAR-like (based on Minsky's Society of Mind) behavior modeling/ecology in the computer system by Pattie Maes; effects of gain and neighborhood influence on logistic map functions by John Corliss; microtubule self-assembly by Stuart Hameroff; and neural net modeling of the Vestibular Macula "accelerometer" in the head/visual system by David Doshay.

Norm Packard (Santa Fe Institute) - "Complexity Increase in a Simple Model for Evolution"

A fairly simple (deliberately he says, as he wants to develop an "Ising model" for life) ecology in the computer. Rediscovered that crossover is necessary to see significant evolutionary gains in fitness.

* John Holland (no current affiliation?) - "Echo: Explorations of Evolution in a Miniature World"

This was a wonderful talk by the father of genetic algorithms, concerning some totally new work (at least since the Emergent Computation conference last May) he is carrying out entirely on a Mac II in his home. Afterwards I asked him if he was being funded by Apple. He isn't. He should be! At least to the extent of keeping him in the current fastest machine.

He describes this work as being "based on a gleam in Murray Gell-Mann's eye". He has defined a grid world, with "resource fountains" scattered around it, and inhabiting organisms that take up resources from the environment until they have enough, and then they reproduce. On the face of it, then, Echo is much like all the other ecologies in a computer. However, Holland's resources come in a number of flavors (a,b,c), and are the same string elements used to define the (variable length) genetic structure of his organisms. Reproduction requires the accumulation of sufficient quantities of all of the appropriate resources in an internal reservoir, so that the genetic code can be copied letter for letter. Organisms random-walk around to gather the elements they need to reproduce.

Holland has also introduced a unique method of predation, based on dividing up the genetic specification into "offense" genes and "defense" genes (which can be and usually are different sequences of the resource components/letters), and determines the outcome of organism confrontations by cross comparing the two organisms' offense and defense "strategies" (genetic strings). A score is calculated based on the number of matches between each offense/defense pair, and the higher scorer gains a probabilistic chance of absorbing some of the loser's resources. The loser's remaining elements are returned to the environment. Each time step, each organism has a probabilistic chance of entering into an interaction with a randomly selected organism.
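
As I understood the confrontation mechanism, it could be sketched roughly as follows (my reconstruction; the exact match rule, the mapping from score margin to absorption probability, and the fate of the loser's remainder are all assumptions rather than Holland's definitions):

    import random

    def match_score(offense, defense):
        """Position-by-position matches between one organism's offense string and
        the other's defense string (both over the resource alphabet 'abc')."""
        return sum(o == d for o, d in zip(offense, defense))

    def confront(a, b, rng):
        """a, b: dicts with 'offense' and 'defense' strings and a 'reservoir' of
        resource counts.  The higher scorer gets a probabilistic chance to
        absorb resources from the loser's reservoir."""
        score_a = match_score(a["offense"], b["defense"])
        score_b = match_score(b["offense"], a["defense"])
        if score_a == score_b:
            return
        winner, loser = (a, b) if score_a > score_b else (b, a)
        margin = abs(score_a - score_b)
        p_absorb = margin / max(len(a["offense"]), len(b["offense"]))  # assumed mapping
        for resource, amount in list(loser["reservoir"].items()):
            taken = sum(rng.random() < p_absorb for _ in range(amount))
            loser["reservoir"][resource] -= taken
            winner["reservoir"][resource] = winner["reservoir"].get(resource, 0) + taken
        # In Echo as described, whatever the winner does not absorb goes back to
        # the environment; this toy simply leaves it with the loser.

    if __name__ == "__main__":
        rng = random.Random(2)
        ant = {"offense": "abba", "defense": "cab", "reservoir": {"a": 5, "b": 2}}
        fly = {"offense": "ccc", "defense": "abab", "reservoir": {"a": 1, "c": 4}}
        confront(ant, fly, rng)
        print(ant["reservoir"], fly["reservoir"])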

Without this offense/defense split, selection is simply for the shortest genome, using the most abundant elemental resources. With these organism interactions, complex evolution emerges.

His next step is to make the genome serve as a grammar that generates marker string tags (again, sequences of those same basic resource elements) for the organisms, upon which interaction and preferential reproduction will be based. When this is implemented he will have a genuine embryology (his grammar) generating a true phenotype (the particular marker tags) from the underlying genotype (element strings). He intends for the grammar to be able to evolve like catalysts and classifier systems. With such a system he hopes to be able to evolve special markings and recognition of those markings in other organisms, mimicry, and other observable phenomena of naturally evolved systems.

John ended his talk with some exhortations to researchers in this budding field to: 1) seek answers to questions posed by target disciplines (don't ignore the existing sciences); 2) try to use properties, interactions and parameters relevant to the questions one's simulations are attempting to address; and 3) use simulations in concert with appropriate mathematical theories (to provide guidelines and predictions relevant to the model). Finally he noted the need for a new "mathematics of perpetual novelty", not equilibrium, for studying complex adaptive systems such as ecologies.

Robert Rosen (Dalhousie Univ., Nova Scotia) - "What Does It Take to Make an Organism"

Attempted to relate some existing formalisms to AL. He also at times seemed to be claiming that AL was a mathematical impossibility, yet then turned around and offered "relational biology" as a new/old mathematical formalism that can replace models based on the mathematics of traditional physical sciences and have some hope of succeeding at modeling AL (I think). Claims Rashevsky started "relational biology" (incorporating the dynamical interactions with the environment in a total organism's model) after a series of failures in biological modeling based on a reductionist approach he dubbed "biomimesis". Rosen stated that "biology is not simulable" because it is a "complex" system, meaning that the fact that the "realization" (the "real" environment) is directly influenced (in a Heisenberg sense, I believe he was suggesting) by the "model" (our internal mental models of reality), as well as the model being influenced by the realization, implies a lack of computability. I think he's never heard of non-linear systems before. If you choose to interpret his thesis as a semi-formal statement of the requirement for "grounding" in the Cognitive Psychology sense it may have some merit.

Elliott Sober (affiliation?) - "Learning From Functionalism"

Sober was, in his words, the "token philosopher" at the conference. He slipped back and forth between AI and AL, stating that "AI is to psychology what AL is to biology". He noted that there is a distinction between saying a computer model can help us understand life and saying a computer model can be alive. He traced some historical developments in the history of theories about the mind-body problem and claimed that "Functionalism" can help yield insights in AL and AI (though I never was exactly sure why). He discussed the Turing Test / Imitation Game and noted the two interesting possible error conditions: Type 1 in which the subject passes the test but in fact does not think (this is the most common target for philosophical attack; e.g., Searle's Chinese story/Q&A room and Block's tree of all reasonable conversations - the problem with both these arguments being that they rest on the assumption of the actual constructability of Searle's story-to-answers book and Block's conversation tree). And Type 2 in which the subject fails the test but in fact is intelligent (consider, for example, an artificial organism that is either simply uninterested in mimicking human behavior, or is unable to pass due to a lack of shared environmental and cultural experiences with humans but is still, in fact, intelligent).

Sober simply read a prepared speech in a complete monotone, and basically made everyone glad there weren't any other philosophers speaking (though I remember how genuinely impressive I found the talk by Paul Churchland - a Philosopher of Science - at the EC conference; see that report for more info). He did offer up one good line with a bit of conviction (well, at least vocal inflection), "If a system perceives, remembers, and responds, then why ask if it can think?". But the rest of his talk was devoted to asking anyway...

Ontology of Artificial Life Panel with Peter Cariani, Steen Rasmussen, Norm Packard, Tom Toffoli, Robert Rosen and Elliott Sober

Generally one of the biggest bores at the conference, with everyone sitting around trying to outdo the greatest philosophical thinkers of history by deciding not only the nature and meaning of real life but the nature and meaning of artificial life.

However, Steen Rasmussen gave a fun and elegantly succinct "proof" of the possibility of AL and the benefits one might derive from it (shown here in an only slightly edited form which Steen approved of - in fact he kept my edited version of his handout):

(I) A universal computer is indeed universal and can emulate any process (Turing).
(II) The essence of life is a process (Von Neumann).
Accepting (I) and (II) implies the possibility of life in a computer.
(III) There exist criteria by which we are able to distinguish living from non-living things.
(IV) If somebody manages to develop life in a computer environment which satisfies (III), it follows from (II) that these life-forms are just as much alive as you and I.
(V) If such an artificial organism perceives its reality, R2, then for it R2 is just as real as our "real" reality, R1, is for us.
(VI) From (V) we conclude that R1 and R2 have the same ontological status; although R2 in a material way is embedded in R1, R2 is distinct from R1.
(VII) Given (VI) it follows that it might be possible to learn something about the fundamental properties of realities in general, and of R1 in particular, by studying the details of different R2's. An example of such a property is the physics of a reality.

* The other good bit from this panel was Tom Toffoli's description of a Turing Machine (TM) with a second head. As he noted, the common, "individual" point of view is to say that the first TM head (A) is basically a TM with minor perturbations (due to head B's occasionally writing over head A's tape marks). However, he points out, to really understand, to accurately measure what is going on you must consider the complete, dynamical system - which is not a TM. Correspondingly, a real, living organism and its environment (all elements of that environment) comprise such a system.

Wednesday Evening Demonstrations

Marek Lugowski from Indiana University showed an interesting (though completely ad hoc) tiling algorithm - some quite simple, completely local interaction rules regarding which tiles can flip with other tiles - that permits a tiled pattern to be completely randomized and then self-organize under repeated applications of the algorithmic rules.

Mike Travers gave a nice demo of an upgraded AGAR in operation (his spreading-activation-amongst-mental-agents behavior modeling system based heavily on Minsky's Society of Mind and Tinbergen-style ethology). He demonstrated a more recent version of his ant colony model that was shown at SIGGRAPH and figures prominently in his Master's thesis.

After another ecology simulator with Turing Machine-like critters called "Turmites" (I missed the presenter's name), and a proposal for another ecology simulator that was decidedly arcane and had never actually been programmed up (by someone whose name I'm not displeased to have missed), James Kalin gave a very nice and well received demonstration of SimCity, the city administration game/simulator.

Peter Todd (Stanford) - "A General Framework for the Evolution of Adaptive Simulated Creatures"

Peter and Geoffrey Miller are using a combination of NN's and GA's to provide both learning and evolution in a simulated organism. They use bit-string genomes that code for the type of connection between neurons, including no connection and connections that are Back-Prop learnable, positive-only, negative-only, or either, and, in recent versions of the simulator, unsupervised (Hebbian learnable). Their system applies GA's in a fairly brute force way to the entire network architecture, the connection strengths, types, and so on. As one might expect, it learns, but slowly.
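
A rough guess at what such a genome-to-network decoding might look like (entirely my own invention; Todd and Miller's actual bit layout and type set were not given in this much detail) is a few bits per potential connection selecting its type:

    import random

    # Hypothetical 3-bit code per potential connection.
    CONNECTION_TYPES = {
        0: "absent",
        1: "backprop learnable, either sign",
        2: "backprop learnable, positive-only",
        3: "backprop learnable, negative-only",
        4: "hebbian (unsupervised) learnable",
        5: "absent", 6: "absent", 7: "absent",   # spare codes left unused here
    }

    def decode_genome(bits, n_neurons):
        """Read 3 bits per ordered (pre, post) neuron pair and return a matrix
        of connection types for the network builder to instantiate."""
        assert len(bits) >= 3 * n_neurons * n_neurons
        matrix = [[None] * n_neurons for _ in range(n_neurons)]
        i = 0
        for pre in range(n_neurons):
            for post in range(n_neurons):
                code = 4 * bits[i] + 2 * bits[i + 1] + bits[i + 2]
                matrix[pre][post] = CONNECTION_TYPES[code]
                i += 3
        return matrix

    if __name__ == "__main__":
        rng = random.Random(0)
        n = 4
        genome = [rng.randrange(2) for _ in range(3 * n * n)]
        for row in decode_genome(genome, n):
            print(row)

The GA then operates on the raw bits, just as in the other bit-string systems above, which helps explain why the search over architecture, signs, and learnability all at once is slow.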

In their most recent experiments, the creatures roam around a grid world looking for resources that are identifiable both by a "smell" sense and a "vision" sense. Objects in the environment are equally distributed between "food" (which increases the organism's available energy 10 points and improves its fitness), and "poison" (which, of course, has the opposite effects). Though in some of their simulations, the organisms' vision system actually attempted to do pattern recognition of 2x2 cell sized objects within a total 5x5 cell visual input pattern, I believe they are simply attuned to select between one of two colors in these experiments. The sense of smell is based on simple gradients of, again, one of two odors. Food is always one color and poison the other during an individual's lifetime, but the colors may change over evolutionary time scales, and the creatures are provided with "100% accurate color transducers" (their visual input neurons are always stimulated correctly according to the actual color of the object). Smells, on the other hand, are different for food and poison, and are fixed permanently (even over evolutionary time scales), but the creatures' olfactory systems are given "noisy transducers" (there is some probability that the olfactory input neurons may receive reversed sensory input cues). The output from both the olfactory and the visual sensory systems is combined to control the organism's actual response to an object.

They deliberately tailored these sensory systems in the hopes that they would evolve unsupervised, Hebbian learning (learning that changes over the course of an individual's life, directly from experience with the environment) in the color/vision system, but essentially hard-wired connections in the olfactory system that only change on evolutionary time scales. They varied the accuracy of the smell transducer from 50% to 100% accurate, and did indeed find a range between 70% and 90% smell accuracy where their expectations for the emergence of different learning algorithms were fulfilled, and "successful" organisms were evolved that combined the two senses to give good survivability. With 100% smell accuracy they evolved successful organisms, though as one might expect, evolution produced a solution based entirely on smell. Interestingly, with 50% smell accuracy evolution never produced a successful organism! Neither a useful smell- nor vision-based solution could be found, and without the evolution of an at least partially dependable "motivational/emotional/hedonic" solution, an adaptive solution was also never able to evolve.

They made an additional interesting observation, based on earlier experiments, that evolutionary pressure is first on the output motor effectors; motor behavior was evolution's top priority (in order for species to accumulate enough resources and live long enough to propagate).

The most interesting of their results to me was the failure to evolve successful organisms at 50% smell accuracy. This suggests that advanced learning systems may require an evolved innate ensemble of behaviors to use as a bootstrap or platform for their own evolution. One of the PolyWorld goals is to explore the possibility (desirability? necessity?) of using a hand-wired, innate behavior based on an olfactory sense to guide the acquisition of learned behaviors based on a vision sense [smell was left out of the version of PolyWorld that actually got implemented].

Rob Collins (UCLA) - "ArtAnt: Evolution of Central-Place Foraging Strategies in Ant-like Colonies"

Working with David Jefferson and Chuck Taylor, Rob reported some of their most recent work based on evolving simulated ant colonies. They do their simulations on a 1000x1000 grid world broken into 16x16 regions, with 435 units of food per region, 1 colony/region, and 4096 colonies, with 8 ants/colony. The individual ants use pheromones (emitted by the environment, not the ants if I heard him correctly?), food, compass heading, nest direction, and a random number as inputs to a neural net with 1949 weights (note: real ants have 100K to 200K neurons); the outputs are the allowable actions (such as movement directions and food acquisition/disposition). I do not believe that ants from one region were permitted to move into other regions. Each generation was allowed to run for 500 time steps (about 25 min. on a CM2). Reproduction is carried out at the colony level, not the individual organism level, with the colony's chromosomes determined from Ant-0. The colonies are scored based on food acquired less the amount of energy expended on motion and on pheromones (so perhaps the ants do emit pheromones?). The top 10% are selected and mated randomly. All "sister" ants of the same colony are then different recombinations and mutations (at a 0.1%/bit mutation rate) of the two colonies' chromosome sets.

Their ants did evolve to use their food sensors, though not optimally (they frequently ignored close food). No ants ever evolved to use the pheromone trails; they hypothesized that there was too much food available, hence there was no selection pressure to evolve this capability. They want to extend their experiments to more ants, with differing amounts of available food.

David Stork (Stanford) - "Non-optimality via Preadaptation in Simple Neural Systems"

David presented some results of evolving a simple simulated neural circuit that models the crayfish tail-flip circuit. Particularly, he focused on studying the effects of preadaptation of this neural circuit to perform optimally for swimming and then adapting it based on performance of the tail-flip behavior. Real crayfish do indeed use their tail both for swimming and for this tail-flip, predator-avoidance behavior. Examination of the neural circuitry in real crayfish provides a bit of a mystery regarding certain connections whose function appears to be completely overridden by alternate, inhibitory connections between the same neurons, or along the same unique neural pathway. The reason for the existence of these useless synapses is difficult to explain unless, David believes, one invokes a dependency on historical function of the neural circuitry in question. That is, it is well known that evolution and natural selection, by their nature, must carry much of the existing baggage of an organism's physiology across generations, making changes, sometimes subtle, sometimes a bit radical, but never so radical as to make the new organism unfit to survive and propagate the changes. Evolution was only presented with a Tabula Rasa once. David's hypothesis is that a neural circuit that first evolved to swim optimally might very well exhibit non-functional connections after being subsequently evolved to tail-flip optimally.

David's neural models were fairly complex and biologically reasonable, based upon the Hodgkin-Huxley equations. His genome consisted of the various parameters to this model, and some method for expressing the neural network architecture (number of neurons and connectivity). I am not clear about exactly how the synaptic connection strengths were determined (particularly whether there was any learning during an individual's lifetime - I think not), but he mentioned that initial connection strengths were determined by similarities in some of his genome parameters (presumably with previous generation circuits), and reduced, initially, as a function of distance (from the pre-synaptic neuron, presumably). The sign of a synapse (inhibitory or excitatory) was determined by the particular pre-synaptic and post-synaptic neurotransmitters. (He was also evolving the particular neurotransmitters being employed in each neuron.)

His models were then trained for some number of generations with selection pressure solely on the circuit's ability to produce a swimming motion in the tail. These preadapted circuits were then run for a number of generations with the organism's fitness changed to depend solely upon the circuit's ability to produce the tail-flip motion. As he had hypothesized, the resulting neural circuitry did indeed exhibit useless synapses, including some in the same places as in the real network.

Someone during the Q&A session took issue with his particular technique for evolving the neurotransmitters, especially a winner-take-all enforcement of a single neurotransmitter per neuron (Dale's law - now definitely shown to be incorrect), though I doubt he could do much else given the neuronal model he was employing, or that this implementation detail in any way was responsible for the primary results David observed. I do wonder about his method of assigning connection strengths, and a lack of synaptic efficacy variation in an individual's lifetime (though the latter concern may not be an issue if this simple circuit is indeed non-learning in the real organism). The largest concern I had with his model, that I voiced at the conference, was that there was no selection pressure to reduce the circuit complexity in his model (whereas biological systems do have at least some pressure to reduce their metabolic energy expenditures). Indeed, his circuits typically had a significant number of not only useless synapses, but useless - or at least redundant - neurons. I think his basic thesis was entirely correct, but his model was incomplete in a fairly crucial area.

Kristian Lindgren (NORDITA, Denmark) - "Evolution in a Population of Mutating Strategies"

Kristian showed results from an evolving ecology of "strategies" for solving the noisy, infinitely iterated Prisoner's Dilemma (PD) game. That is, a single game consists of two opponents (the "prisoners") deciding which of two possible actions to take ("Cooperate" or "Defect"); the outcome of each of the four possible combinations of decisions is scored in such a way that, in terms of the joint payoff, mutual cooperation is rewarded the most, single-player defection (the other player cooperating) second most, and mutual defection the least. In Kristian's experiments, each PD game used the following payoff matrix, with player 1's score listed first in each pair:

                               player 2
                        Cooperate      Defect
  player 1  Cooperate     (3,3)         (0,5)
            Defect        (5,0)         (1,1)

The PD games are then repeatedly iterated, assuming repeated meetings between the same two opponents (in which case a good strategy is "Tit-For-Tat": choose whatever action the opponent took last time). Introducing noise means that, with some probability p, a player's actual performed action is the opposite of its intended action.

Kristian then defines a variable-length genetic code that determines a player's decision based on some amount of history of both players' decisions. These genetic codes are 0/1 (Defect/Cooperate) bit strings that code for a unique decision for every possible history (up to the depth permitted by the length of the genome). The historical sequence of 0's and 1's is read as a base-2 number, which gives the position in the bit string to consult, and the value of the bit in that position is the action to be taken. For example, to support a history with a depth of 3 steps, 8 bits (2^m, where m = historical depth) are required in the genome, and a play sequence where the player in question's opponent had most recently defected (0), prior to which the player had cooperated (1), prior to which the opponent had also cooperated (1), would produce bit position 110 in base 2, or 6 in base 10, so the value of the 6th bit in the player's genome decides his next intended action.
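
A minimal Python sketch of this history-indexed lookup (my reconstruction, not Kristian's code) might look like the following:

    def next_action(genome, history):
        """genome: list of 0/1 bits, length 2^m; history: the m most recent moves
        (1 = Cooperate, 0 = Defect), oldest first, read as a base-2 index."""
        index = 0
        for bit in history:
            index = index * 2 + bit
        return genome[index]              # 1 = Cooperate, 0 = Defect

    # The example from the text: history (1, 1, 0) -> 110 in base 2 -> bit 6.
    # If the low bit of the index is the opponent's most recent move, then a
    # genome whose bit i equals (i mod 2) simply copies that move (Tit-For-Tat).
    genome = [i % 2 for i in range(8)]    # depth-3 history, 2^3 = 8 bits
    print(next_action(genome, (1, 1, 0))) # bit 6 -> 0 -> Defect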

Point mutation and gene duplication rates of 2x10^-5 and 10^-5, respectively, were used. Kristian noted that even though gene duplication doubles the length of the genome, it does not change the phenotype (strategy); I believe this is because the duplicated genome initially consists of two identical halves, so although one additional bit of history now selects between the halves, every decision comes out the same until subsequent mutations differentiate the copies and the deeper history becomes meaningful. I do not recall him mentioning crossover and recombination between players, so I suspect that this simple system may have relied on mutation alone for improvement.

As Kristian showed with plots of population fractions, even this simple system yielded some very interesting population dynamics. Starting with just 2-bit (single-step history) genomes, the fraction of the total population occupied by the four possible phenotypes trades off in a complicated manner for around 1000 generations, whereupon a stable coexistence dominated principally by a Tit-For-Tat strategy (at about 60%) and a negative Tit-For-Tat strategy (at about 40%) emerged. This remained in approximate equilibrium until around 5000 steps, at which point complex dynamics recurred until, around 12,000 steps, a stable population of (0001) strategies, with a small fraction of (00010001) strategies, emerged. This subsequently became chaotic again until another stable configuration emerged at around 30,000 steps, and so on, with no permanently stable equilibrium condition appearing as long as he ran the simulations.

His point was primarily that this simple system can serve as an effective testbed for studying some aspects of complex system dynamics, as evidenced by the various emergent properties it exhibited, including the intrinsic changes of dimensionality represented by the evolution of longer genomes that were more successful than lower dimensionality players, the periods of stasis with coexistence/mutualism, and population dynamics best represented by a punctuated equilibrium.

Alvy Ray Smith (Pixar) - L-systems, etc.

Due to the absence of one of the intended speakers (Przemyslaw Prusinkiewicz), Alvy held forth in an impromptu talk on Lindenmayer grammars (L-systems), and their application to growing computer graphics plants. Some pretty pictures.

* Martin de Boer (Univ. of Utrecht, The Netherlands) - "Modeling and Simulation of the Development of Cellular Structures"

Martin showed some interesting work, carried out with Przemyslaw Prusinkiewicz and David Fracchia, that coupled some simple cell physics with two dimensional, continuous L-systems to produce some strikingly realistic ontogenic development patterns for algae growths, snail embryos, and ferns. They chose to model 2D growths for computational tractability and because the natural forms could be directly observed (rather than killed and sectioned).

Martin referred to work by Lloyd which showed that specific sites of cell division and attachment are inherited from the parent's microtubules ("inheritance by cytoplasm"!). Martin et al. developed L-system models, based on observed and measured characteristics of real organisms, that included the generation of special cell wall markers in the grammar. These markers were used to determine cell division sites. Divided cell pairs directly replaced the parent cell as their model 'aged'.

The simple cell physics is driven by an internal osmotic pressure, calculated as proportional to cell wall length, plus simple Hooke's-law springs for the cell walls. After a cell division these forces are out of balance, so the system of cellular forces is allowed to relax to a new equilibrium, then the cells divide again, and so on.
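
As a rough illustration of this relaxation scheme, here is a minimal Python sketch (my reconstruction, not their model; the pressure is applied radially from the cell centroid as a simplification, and all constants are arbitrary):

    import math

    K_SPRING = 1.0       # wall spring constant
    K_PRESSURE = 0.05    # pressure per unit of wall length
    REST_LEN = 1.0       # rest length of a wall segment
    STEP = 0.05          # relaxation step size

    def relax(verts, iterations=2000):
        """verts: list of (x, y) vertices of one cell's wall polygon."""
        n = len(verts)
        for _ in range(iterations):
            perimeter = sum(math.dist(verts[i], verts[(i + 1) % n]) for i in range(n))
            pressure = K_PRESSURE * perimeter
            cx = sum(v[0] for v in verts) / n
            cy = sum(v[1] for v in verts) / n
            new_verts = []
            for i, (x, y) in enumerate(verts):
                fx = fy = 0.0
                for j in ((i - 1) % n, (i + 1) % n):       # springs to both wall neighbors
                    dx, dy = verts[j][0] - x, verts[j][1] - y
                    d = math.hypot(dx, dy) or 1e-9
                    f = K_SPRING * (d - REST_LEN)          # Hooke's law
                    fx += f * dx / d
                    fy += f * dy / d
                ox, oy = x - cx, y - cy                    # outward pressure push
                od = math.hypot(ox, oy) or 1e-9
                fx += pressure * ox / od
                fy += pressure * oy / od
                new_verts.append((x + STEP * fx, y + STEP * fy))
            verts = new_verts
        return verts

    square = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
    print(relax(square))    # settles where spring tension balances the pressure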

Granted, the L-systems were tailored specifically to produce cell division sites that correspond to the actual organism being modeled, but the resulting simulated cellular development patterns were rather astoundingly like the growth patterns in the actual organisms, including large, global features such as overall external shape (matching a heart-shaped growth accurately in one case). Though this method is a bit of a CPU hog, it is the most interesting simulated embryology/ontogeny that I have seen, and might end up playing a role in PolyWorld.

Narendra Goel (State Univ. of New York) - "From Artificial Life and Real Life - Computer Simulation of Plant Growth"

Goel is attempting to use context-free L-systems and computer graphics to produce realistic models of plant growth. The work is currently 2D (necessarily, as it is being done on PC's). They intend to correlate parameters in their L-system specification to environmental factors, and thus be able to model and illustrate the impact of these various factors on crop yield. They currently do this by a completely prescribed sequence of changes to the L-system parameters corresponding to a particular set of environmental conditions, but they hope to develop an algorithmic model for relating environment to L-system parameters. They also intend to specify computer graphic lighting parameters that are carefully calibrated against actual spectroscopic light reflection measurements made using real plants (though this aspect of the work is not very far along as yet). He showed a surprisingly realistic-looking corn plant modeled with his system, and demonstrated the effects of different environmental conditions on its growth. Everything is currently so hand-tailored to produce this realism that it is difficult to assess how well their system will be able to satisfy their longer-term goals, but the images they showed were quite impressive.

The Unknown Researcher (?) - "?"

Unfortunately I missed the presenter's name and the title of this unscheduled talk... but he demonstrated the ability of "Iterated Function Systems" (IFS, a finite set of "contraction mappings"; i.e., functions for which |f(x2) - f(x1)| <= s|x2 - x1| for some s < 1 and all x1, x2) to produce images akin to L-system ferns, Sierpinski triangles, and other fractal geometries. He also developed a formal proof of the ability of L-systems to produce forms equivalent to those of IFS's.
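
For the curious, here is a minimal Python sketch of an IFS at work (the standard "chaos game" construction of the Sierpinski triangle, not the presenter's code):

    import random

    # Three contraction mappings, each halving the distance to one vertex of a
    # triangle; iterating a randomly chosen map converges onto the Sierpinski
    # triangle, one of the fractal forms mentioned above.
    VERTICES = [(0.0, 0.0), (1.0, 0.0), (0.5, 0.866)]

    def step(point):
        vx, vy = random.choice(VERTICES)
        x, y = point
        return ((x + vx) / 2.0, (y + vy) / 2.0)   # contraction factor 1/2

    point = (0.3, 0.3)
    points = []
    for i in range(20000):
        point = step(point)
        if i > 20:                 # skip the transient before the attractor
            points.append(point)
    # 'points' now lies (approximately) on the Sierpinski triangle; plot it to see.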

Thomas Ray (Univ. of Delaware) - "Tierra: An Artificial Life Simulator"

Tom demonstrated an AL simulator that successfully evolved assembly language instruction sequences. He gave each organism a set amount of CPU time in which to execute its instructions, which effectively corresponded to the organism's energy supply. He thought of the organisms as "cells", whose size was a function of their memory allocation block size. He referred to them as "semi-permeable" because a cell could not write into another cell's interior, but it could read and execute another cell's code. His assembly language did not support numeric operands, just registers, though he made a comment about cells being able to "create a numerical value in the CX register". He did not have time to effectively present any details about his instruction set. He did show the system running in real time, and noted the emergence of some of the classic ecological dynamics, including mutualism and punctuated equilibrium.

Bruce MacLennan (Univ. of Tenn.) - "Synthetic Ethology as an Approach to the Study of Language"

MacLennan listed the principal language issues as: How do languages emerge? What are the supporting mental states? What constitutes intentionality? How are worlds constituted by language communities? How can syntax emerge from pragmatics? He then sought some answers from Wittgenstein: "Language gains meaning from usage", Heidegger: "Concerns, expectations, and practices of the community give language meaning", and Popper: "Animal intelligence is continuous with human intelligence". I believe he was also quoting Popper when he said, "The main task of the theory of human knowledge is to see it as continuous with animal knowledge, and its discontinuity - if any - from animal knowledge."

MacLennan believes we must relate the ethological context to the physiological methods of communication. Behaviorist and ethological approaches both have shortcomings, but by combining the simplicity and control of behaviorist methods with the ecological and pragmatic validity of natural ethology he believes we can develop a "Synthetic Ethology" that may serve well in developing an understanding of communication and language. He suggests making "REAL" worlds inside the computer; that is, don't think of the computer as an abstract symbol manipulator, but as a mass-energy manipulator - creatures in such a world are real. To define communication he quoted Burghardt (I think), "Communication is the phenomenon of one organism producing a signal that, when responded to by another organism, confers some advantage on the signaler's group".

MacLennan then described a simple world he has simulated in which a number of "simorgs" (simulated organisms) all have completely private local environments which can only be read (not written), and read only by the owning simorg. In addition, there is a shared, global environment which may be both read and written into by every simorg. Thus any information about another simorg's local environment can only be obtained through communication; that is, by one simorg "emitting a sound" by placing a symbol into the global environment that is received by another simorg. A simorg's internal state is represented by a finite state automaton. I was not able to determine much about the actual nature of the local environments, nor therefore the nature of the advantage conferred by communicating with other simorgs about them, nor the complexity of the simorgs' defining FSAs, nor how learning occurred in this context.

MacLennan claims that genuine communication emerged between simorgs. He next wants to see if he can understand the content and syntax of these communications. One way in which he analyzes and documents the learning is by studying the statistical characteristics of a "Denotation" matrix. This D matrix builds a table where rows correspond to all possible states of the "local situation", and the columns correspond to the "global communication" (How? Is this the set of symbols? The set of all possible sets of symbols containable by the global environment? Something else?). During the course of the simulation he increments the appropriate entry in the D matrix whenever a successful communication occurs (How is "successful" defined? Is it simply that one simorg reads what another has written?). He states that without learning the D matrix is fairly uniform and has high entropy, and that with learning the D matrix is sparser and has lower entropy.
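
For what it is worth, here is a minimal Python sketch of one plausible reading of that entropy measure (my interpretation, given all the open questions above): rows index local-situation states, columns index global symbols, entries count co-occurrences, and the Shannon entropy of the normalized count matrix drops as situation/symbol pairings sharpen.

    import math

    def d_matrix_entropy(D):
        """Shannon entropy (bits) of the normalized Denotation matrix counts."""
        total = sum(sum(row) for row in D)
        H = 0.0
        for row in D:
            for count in row:
                if count > 0:
                    p = count / total
                    H -= p * math.log2(p)
        return H

    uniform = [[5, 5], [5, 5]]          # no learning: symbols used indiscriminately
    sparse  = [[9, 1], [1, 9]]          # learning: each situation gets its own symbol
    print(d_matrix_entropy(uniform), d_matrix_entropy(sparse))   # 2.0 vs ~1.47 bits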

Sigh. I really think there are some interesting aspects to the way this experiment was designed, but as all of the questions above indicate, its significance is difficult to assess due to an incomplete communication between the speaker and this organism.

Rob J. De Boer (Los Alamos) - "The Development of an Immune System"

Rob comments that the immune system is a complex information processing system embedded in a universe of protein patterns. He notes its ability to carry out recognition tasks in its specificity for particular antigens, its use of long term memory mechanisms to grant immunity that lasts for months and years, its use of evolutionary mutation and selection in antibody maturation to adapt its response to antigens, and its ability to classify self versus foreign that emerges from a selection process during neonatal life. He and Alan Perelson have carried out simulations that explore the selection process that in human beings results in a stable population of some 10^6 different types of lymphocytes even though more than 10^12 different types are producible by the immune system. Their simulation results agree well, at least qualitatively, with the real immune system's behavior.


Pauline Hogeweg (Univ. of Utrecht, The Netherlands) - "On the Natural History of Artificial Life"


Pauline exhorts ALife researchers to examine and learn from the Natural History of their simulations so that they may glean some of the same insights that the study of Natural History provides for natural biotic life (hereinafter dubbed "BLife"). She discussed an extremely simple rule set for modeling chimp behavior - that basically will just generate behaviors to seek out either food, or a mate, or a group at every time step - and claimed that a simulated ecology full of these simple chimp models will produce population statistics very like ethological data on chimps. She suggests that the observed ethological behaviors emerge in direct response to simple environmental parameters. (Of course, the parameters in her model were selected specifically to produce the behaviors they agree so well with, so who knows?)

Rik Belew (UCSD) - "Evolving Networks: Using the Genetic Algorithm to Design Connectionist Networks" changed to "Models of Learning and Evolution"

Rik discussed, among some other observations on the ALife research field in general, some of the work he has been doing in using GA's to evolve at least some optimal learning rate parameters for Back-Prop (BP) NN's. The GA did indeed come up with a set of parameters (learning rate of 3.0 and momentum term of 0.3) that seem counterintuitive and yet dramatically improved the NN's learning speed, at least for the problem he was attempting to solve. He also indicated that the GA+BP combination succeeded in producing dramatically more accurate network solutions than either the GA or BP on their own (which suggests he has also been using his GA's to adjust the network weights rather than just its learning rate parameters). This is principally due to the combining of the GA's principal strength of effective wide sampling (on a much more efficient basis than simple random sampling) with BP's principal strength of good local optimization.
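
As an illustration of the general idea (not Rik's system), here is a minimal Python sketch of a GA tuning Back-Prop's learning rate and momentum on a toy problem; the network size, parameter ranges, and GA settings are all arbitrary choices:

    import random
    import numpy as np

    def xor_error(lr, momentum, epochs=2000, seed=0):
        """Train a 2-3-1 sigmoid net (with bias units) on XOR; return final MSE."""
        rng = np.random.default_rng(seed)
        X = np.array([[0, 0, 1], [0, 1, 1], [1, 0, 1], [1, 1, 1]], dtype=float)
        T = np.array([[0.0], [1.0], [1.0], [0.0]])
        W1 = rng.normal(0, 1, (3, 3))            # inputs (+bias) -> 3 hidden
        W2 = rng.normal(0, 1, (4, 1))            # hidden (+bias) -> 1 output
        dW1, dW2 = np.zeros_like(W1), np.zeros_like(W2)
        sig = lambda z: 1.0 / (1.0 + np.exp(-z))
        for _ in range(epochs):
            H = np.hstack([sig(X @ W1), np.ones((4, 1))])
            Y = sig(H @ W2)
            dY = (Y - T) * Y * (1 - Y)
            dH = (dY @ W2[:3].T) * H[:, :3] * (1 - H[:, :3])
            dW2 = momentum * dW2 - lr * (H.T @ dY)    # momentum term in the update
            dW1 = momentum * dW1 - lr * (X.T @ dH)
            W2 += dW2
            W1 += dW1
        err = float(np.mean((Y - T) ** 2))
        return err if np.isfinite(err) else 1.0       # penalize diverged runs

    def evolve(pop_size=20, generations=15):
        pop = [(random.uniform(0.1, 5.0), random.uniform(0.0, 0.9))
               for _ in range(pop_size)]
        for _ in range(generations):
            pop.sort(key=lambda p: xor_error(*p))
            parents = pop[:pop_size // 4]             # keep the best quarter
            children = list(parents)
            while len(children) < pop_size:
                a, b = random.sample(parents, 2)
                lr = random.choice([a[0], b[0]]) + random.gauss(0, 0.1)
                mo = random.choice([a[1], b[1]]) + random.gauss(0, 0.05)
                children.append((max(lr, 0.01), min(max(mo, 0.0), 0.99)))
            pop = children
        return min(pop, key=lambda p: xor_error(*p))

    print(evolve())    # an evolved (learning rate, momentum) pair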

Stephanie Forrest (Los Alamos) - "Using GA's to Study the Evolution of Cooperative Behavior"

Stephanie discussed some work she has been engaged in using GA's to try to evolve arms spending strategies that will result in a stable balance of power amongst three countries. A rule that "the two weakest countries are always allied" is enforced, and the new year's arms spending is a fairly simple algebraic function of last year's spending, an "intrinsic self-armament level", and some rate constants. Fitness is determined by the absolute magnitude of the difference between the expenditures of the strongest country and the sum of the two weaker countries' expenditures. (Stephanie notes that later they would like to also minimize the total expenditures.) The results to date show a regular basin of attraction (a stable arms spending strategy) when fitness is precisely 0 (the optimum for the way their fitness is defined); however, with the slightest deviation from this optimum (a fitness of 0.002 is sufficient), the various countries' arms spending policies fluctuate radically.
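
Since the exact functional form was not given, here is a minimal Python sketch of this kind of model under an assumed Richardson-style linear update; the reaction, decay, and intrinsic-armament constants stand in for the evolvable policy parameters:

    def update_spending(spending, params):
        """spending: dict country -> last year's spending;
        params: dict country -> (k_reaction, d_decay, intrinsic_level)."""
        strongest = max(spending, key=spending.get)
        new = {}
        for c, x in spending.items():
            k, d, g = params[c]
            if c == strongest:
                threat = sum(v for n, v in spending.items() if n != c)  # the two weakest are allied
            else:
                threat = spending[strongest]
            new[c] = max(0.0, x + k * threat - d * x + g)
        return new

    def fitness(spending):
        """|strongest - sum of the two weaker|; 0 is the balance-of-power optimum."""
        values = sorted(spending.values())
        return abs(values[2] - (values[0] + values[1]))

    spending = {"A": 10.0, "B": 8.0, "C": 5.0}
    params = {c: (0.3, 0.4, 1.0) for c in spending}   # placeholder policy parameters
    for year in range(20):
        spending = update_spending(spending, params)
    print(fitness(spending))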

She has a lot of good ideas for improving the model in the future, including evolving the countries' policies on the same time scale as global interactions, representing each country by its own population of (selfish gene) bitstrings, each country deciding independently how to allocate resources and form alliances, and making fitness dependent on internal national stability and interactions with the other countries.

* David Ackley (Bellcore) - "Learning from Natural Selection in an Artificial Environment"

Thanks both to some genuine technical innovation and to a sense of humor, this was truly one of the high points of the conference. As David (and Michael Littman) point out in their abstract: "The process of natural selection is clearly a source of information about the performance of an individual organism, but -- since the signal for failure is death -- it is not immediately apparent how it could be exploited to perform learning during an individual's lifetime." Their clever solution to this enigma, which they call Evolutionary Reinforcement Learning, was to evolve a moment-to-moment evaluation function within the organism that is subsequently able to provide a moment-to-moment reinforcement signal. They used separate NN's for the organism's behavior model and for its moment-to-moment evaluation function. Then, in order to be able to effectively apply this rapid reinforcement signal and to take advantage of a reasonably well understood NN learning algorithm (Back-Prop), they invented another clever technique that they call Complementary Reinforcement BackProp (CRBP). CRBP works by using the reinforcement signal and the previous activation levels in the output layer of the network (in this case the behavior modeling network) as a probabilistic generator of a training signal for that output layer (resulting in a vector of desired activation levels which are all either at full activation or no activation). Simple BackProp is then used to update the connection strengths in the behavior modeling network.

So, the rapid evaluation function is evolved over multiple generations under selection pressure provided by the life-or-death/once-per-generation evaluation function, and the behavior model learns as best it can from whatever the current version of the rapid evaluation function is able to tell it.
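
Here is a minimal Python sketch of my reading of the CRBP target-generation step (not Ackley and Littman's code); the Back-Prop weight update itself is omitted:

    import numpy as np

    rng = np.random.default_rng(0)

    def crbp_target(output_activations, reinforcement):
        """output_activations: values in (0,1), treated as probabilities and sampled
        to an all-or-nothing action vector; if the evolved evaluation function says
        things got better (reinforcement > 0), the sampled vector itself becomes the
        training target, otherwise its complement does."""
        sampled = (rng.random(output_activations.shape) < output_activations).astype(float)
        return sampled if reinforcement > 0 else 1.0 - sampled

    # Example: with outputs (0.9, 0.2) and a negative reinforcement signal, the
    # likely sampled action (1, 0) is complemented to the target (0, 1), so
    # Back-Prop (not shown) would move the network away from what it just did.
    print(crbp_target(np.array([0.9, 0.2]), reinforcement=-1))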

David presented a wonderfully informative and funny videotape that describes their artificial life environment, called AL. Again from their abstract, their computer simulations "span four orders of magnitude in space and six orders of magnitude in time. Successful individuals may achieve lifetimes of 25,000 steps or more, and initial populations that develop long-term viability may descend through 300 generations or more before arriving at the one million step simulation limit."

I will attempt to obtain a copy of this videotape for public consumption; it's a delightful examination of some excellent work.

John Nagel (affiliation?) - "Animation, Artificial Life, and Artificial Intelligence from the Bottom"

John noted that AI has not succeeded very well with the top-down approach, and had lots of reasons to be encouraged by the AL approach. He quoted from or referred to lots of people's work, including Michael Kass's Luxo lamp animations, Mike Travers's thesis, and Terry Winograd ("The hard thing is deciding what to do in the next 15 seconds."). He recognizes that human-level AI is too lofty a short-term goal, that even a squirrel might still be too hard, and proposes a squirrel that does the right thing over periods of less than a minute. He quoted Hans Moravec's estimate of 1 gm of brain mass being equivalent to about 1000 MIPS (humans would then be 10^6 MIPS), but was taken quite severely to task for this by Maureen Gremillion of Los Alamos (who vehemently proclaimed that Moravec's theories are completely wrong - based on her experience in computer modeling of the human visual cortex). Really, this entire talk was pretty content free.

Rod Brooks (MIT) - "Real Artificial Life"

Rod showed some fun robots that he and his lab have been developing over the years. All were based on Finite State Machines. All behaviors were built using a layering technique wherein each layer represents a single, simple behavior, and modifies lower level layers through a relatively simple inhibition/excitation mechanism just prior to passing on the signals to the motor actuators. For example, level 0 is usually avoid collisions, level 1 may be to seek motion, and so on. Few if any of the robots had more than 3 seconds of memory. Among the numerous robots he discussed and showed video clips of were a robot that sought out, picked up, and returned empty Coke cans (called, of course, the "Collection Machine"), and a robot that sold candy and then used its profits to bribe people to open doors for it (called, yes, the "Confection Machine"). They have experimented with 6-legged insect-like walking robots that are able to traverse rough terrains of jumbled textbooks and so should easily survive the less educated surface of the moon. Rod noted the progression of robot masses over the last few years, from the Confection Machine which weighed in at 50 kg, to a machine called Squirt that weighs just 50 g, to some work in progress now on a 1 mm diameter robot with 20 legs. Rod speculated that robots of approximately this size might be able to be constructed that fed off of electrons, lived in the corners of TV screens, and would wipe the dust off of the screen when the set was turned off.
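
A minimal Python sketch of the layering idea (purely illustrative, not Rod's actual subsumption code) might look like this, with the higher "seek motion" layer injecting a desired heading into the level-0 collision-avoidance layer, which still gets to adjust the signal just before the motor actuators:

    def seek_motion(sensors):                        # level 1: head toward detected motion
        if sensors["motion_direction"] is not None:
            return {"turn": sensors["motion_direction"], "speed": 0.5}
        return {"turn": 0, "speed": 0.3}             # nothing seen: just wander

    def avoid_collisions(sensors, proposal):         # level 0: last stop before the motors
        if sensors["obstacle_ahead"]:
            return {"turn": proposal["turn"] + 90, "speed": 0.1}   # veer off and slow down
        return proposal

    def motor_command(sensors):
        # the higher layer's heading is injected into level 0, which keeps veto power
        return avoid_collisions(sensors, seek_motion(sensors))

    print(motor_command({"obstacle_ahead": True, "motion_direction": 30}))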


Alan Kay (Apple) - Vivarium overview


Since the Vivarium program, as pointed out earlier, has always had an essentially AL central theme, Alan gave a very appropriate overview of the program's history. He pointed out that the original concept started with Ann Marion's simulated aquarium, and noted that our goal is still to provide children with something like a simulated aquarium where they may create both the form and the behaviors of their animals, place them into the "aquarium", and see things from their own animal's point of view.

In terms of the animals' forms, the intention has been to take advantage of the conformally related body shapes shared by many fish. To approach the animals' behavior modeling, Alan referred to Grey Walter's The Living Brain and talked about the "turtles" Grey built, Elmer & Elsie. He noted that as their thinking progressed, he and Ann had to begin considering methods for modeling emotions (briefly discussing the Lorenz fluid theory of emotional discharge), and also change their thinking from a simple "kit" to a very general environment builder.

The work to this point had been going on at Atari, but it was at this point that Atari essentially folded. Alan talked briefly about some of the explorations at the MIT Media Lab, and then jumped to two years later when they discovered the Open School in Los Angeles. He described the school (300 children, 10 classrooms, 60 children per cluster, 2 teachers per cluster, clusters are 2 grade levels), and noted that there are about 200 Mac's in the school, which are in use about 2 to 2 1/2 hours per day. After a brief flirtation with VideoWorks, HyperCard was released and rapidly became the lingua franca (sic) for the entire school.

A second strand running through the current incarnation of the Vivarium program is PlayGround, the language and interface that we hope will permit children to define agent/animal behaviors in a simple, straightforward fashion. Alan noted that PlayGround had been under development for a couple of years by Jay Fenton, himself, and Ann Marion, and mentioned testing it in the Open School.

Alan briefly talked about some of the other projects within Vivarium, including the machine learning work by Larry Yaeger and Ted Kaehler, and the Koko software by Larry Yaeger, calling attention to Koko's role as our advisor on animal behavior.

Another continuing strand in the Vivarium Program may be thought of as demonstrations of concept, and Alan talked about the Evans & Sutherland CT6 Kelp Forest simulation work. He then finished with the Educom tape, which uses the Kelp Forest footage, noting the new interface organized around projects rather than applications, the lack of menus entirely, and the use of gestural input.

As usual, Alan got some very enthusiastic responses from the audience.

Mitch Resnick (MIT) - "*Logo: A Children's Environment for Exploring Self-Organizing Behavior"

Mitch discussed his implementation of an extension to Logo, called *Logo, that runs on the Connection Machine, and directly supports the modeling of simple ecologies. He enthusiastically discussed his reasons for wanting to do this: to help children experiment with and develop an understanding of self-organizing behaviors, to provide an ability to show children examples instead of trying to define terms, and by these methods to give children a deeper understanding of the concepts by the very act of their construction of these systems. Mitch called this approach to education "constructionism", and referred to Piaget's theories on building knowledge through what he called "constructivism" (In terms of one of Alan Kay's favorite quotes, "To know the world one must construct it."). Mitch referred to some of the AL work now going on as true "New Wave Science", blending analysis and synthesis.

Mitch's extensions to Logo support the programming of the motions of thousands of turtles (instead of just one), the execution of these turtles' commands in parallel, the dynamic creation and destruction of turtles, the ability for turtles to sense one another locally, the ability for turtles to sense and modify the "environment" around them, and for the environment itself to be computationally active.
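
The following is not *Logo syntax, just a small Python illustration (my own) of the capabilities listed above: many turtles stepped each time step, sensing and modifying an environment that is itself computationally active:

    import random

    SIZE = 50
    env = [[0.0] * SIZE for _ in range(SIZE)]                 # active environment
    turtles = [{"x": random.randrange(SIZE), "y": random.randrange(SIZE)}
               for _ in range(1000)]                          # thousands of turtles

    def turtle_step(t):
        t["x"] = (t["x"] + random.choice([-1, 0, 1])) % SIZE  # random walk on a torus
        t["y"] = (t["y"] + random.choice([-1, 0, 1])) % SIZE
        env[t["y"]][t["x"]] += 1.0                            # modify the local environment

    def environment_step():
        for row in env:                                       # the environment computes too
            for i in range(SIZE):
                row[i] *= 0.95                                # deposited "chemical" evaporates

    for step in range(100):
        for t in turtles:                                     # all turtles act each time step
            turtle_step(t)
        environment_step()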

With heart and mind very much in concert with the motivations of the Vivarium program, Mitch noted a number of the benefits that might accrue to the children who are allowed to experiment with such a system, including the learning of new concepts (such as diffusion and randomness), developing useful problem-solving methodologies (mixing analysis and synthesis), coming to grips with the sociology of science, and developing a new world view (that incorporates the realities of cooperation and competition, group dynamics, and other expressions of self-organization). He also noted a benefit that accrues to the developers of such systems, namely that dealing with children, and making things accessible to them, forces you to think about and respond to the core issues of your problem area.

While *Logo's actual language syntax looked a little bit arcane (such things as a preceding "@" denoting a neighborhood/environment variable, and a preceding quotation mark denoting a turtle variable), his built-in distinction between turtle and neighborhood, with supporting program-block constructs ("to turtle-step ... end" and "to neighborhood-step ... end"), seems like a useful, simplifying abstraction that will help the children in their model-building.


Panel on the Future of Artificial Life with Norm Packard, Rod Brooks, Doyne Farmer, Alan Kay, Chris Langton and Mark Pauline

NP chaired the panel and kicked it off by asking everyone to comment on what they perceived to be the future of AL in both the short term and the long term. He also wondered aloud just what were likely to be the key issues regarding artificial organisms' rights.

AK remarked that people in AL research are actually trying to do something in the same vein as Art, referring to the Greek view of Art as an imitation of life. He also noted, responding to NP's comment, that it isn't necessary to worry about legal rights for ALife forms for now... AL at this stage is still more like a puppet show; they are mirrors that help us understand life.

* CL took the visionary road, and stated that ALife is only Artificial in terms of being made by man, or the type of material used. The Life is real. This conference is a metaphor for AL, moving from physics to chemistry to simple replicators to simple organisms to complex organisms to self-organizing AL conferences. Short term he predicts simple extensions of what we have now. Medium term (100 to 1000 years) humans and ALife will grow in symbiosis. Long term, he sees a transition to other life forms (evolution does not stand still).

In response to a comment from the floor, Chris made a rather moving comment about the beginnings and the intended directions of this field: Noting that the first AL conference (organized by Chris) was held at Los Alamos National Laboratory, the birth place of nuclear weapons, which were developed in an atmosphere of 1) secrecy, 2) without consideration for the consequences and 3) with a specific (military) intent... by conscious design and deliberate contrast, the AL1 conference reversed all 3 of these axes to work in an atmosphere of 1) open accessibility to all, 2) trying to be aware of possible consequences, and 3) without any single goal, especially no military goal.

RB believes it is impossible to predict the long term future. Short term we will see infiltration of robots into day to day life the way microprocessors have done. It'll start with toys, then will come smart doors, then communicating household items, then small robots will populate the world in symbiosis with man.

MP says we're lucky that the entire AL field is essentially outside both state science and mass entertainment. This gives science a chance to re-integrate with art. He also wonders what, farther downstream, we are going to do to entertain the machines...

DF wonders if there is really going to be a useful theory of emergent complexity, or whether we will just have to add hack to hack to hack. He felt there were better demos at AL1, but that there was better theory this time. AL should be done in the most peaceful way possible. He then commented on some of the early literary references to AL: genetic engineering in Olaf Stapledon's Last and First Men (1930), Aldous Huxley's Brave New World in the '30's, J.D. Bernal's The World, the Flesh and the Devil (and he praised this particular work highly), and, of course, Mary Shelley's Frankenstein. He also made the remark that evolution seems to accelerate constantly, so we'd better get set for Mr. Toad's Wild Ride. (I wonder if this is true, or if evolution might ultimately yield to the classic logistic curve after all.)

Artificial Life/Artificial Night: An Evening of Demonstrations and Performance for the Conferees and General Public

On the evening before the final day's talks, an Artificial Art night was held that was open to both conference attendees and the local civilians. Mitch Resnick showed some LEGO-based Braitenberg Vehicle/Creature construction kits. Slides showing a wonderful series of fine art pieces, based on completely fabricated but painstakingly detailed creatures from other planets built around alternative chemical compounds and processes (referred to as the Sulphur Creatures), were presented by their designer/sculptor, Louis Bec (some of these suckers were 6 meters tall). Doctor Skitzenheimer, aka Peter Oppenheimer, gave a glimpse into an artificial menagerie, a surreal environment, and real madness. Rudy Rucker demonstrated some beautiful emergent Zhabotinsky-style reactions in his CA program for the IBM PC. Steve Strassman showed the Zeltzer/McKenna film trailer for "Grinning Evil Death". Heaven help us, Jonathan Post read some of his poetry. And Mark Pauline and two other collaborators from Survival Research Laboratories showed some videos from their macabre performance art works (these are the folks that design and build lots of malevolent mechanical automatons and then set them upon each other in the name of Art), and attempted but failed to fire a sonic cannon they had brought with them. (They did succeed in getting it fixed before the end of the conference the next day - by cannibalizing an old dishwasher for the needed solenoid - and ended the conference "with a bang". This thing shook a cloud of dust out of the ceiling, vibrated everybody's teeth, and scattered one researcher's pile of notes - at the extreme opposite end of the hall - to the winds.) A good time was had by all.

The Poster Sessions

There was one poster paper that was so impressive I just had to include it here...

* Greg Werner and Michael Dyer (UCLA) - "Evolution of Communication in Artificial Organisms"

This was simply one of the most clever experimental designs I have seen. And I actually think that Werner and Dyer have demonstrated the evolution of at least a simple form of one-way communication between artificial organisms. Their simulated world consists of a simple grid into which "female" and "male" organisms of the same species are introduced. The female organisms are immobile, but can produce a "sound". The male organisms are mobile, but are blind. The male must find the female in order to procreate. Both creatures are modeled by simple Neural Nets. The female has the male's location as one of her inputs (she can "see" him) and sound as her output. The male has the sounds produced by the female as one of his inputs (he can "hear" her) and directional movement as his output. Quoting from their abstract, "Because of the strong selection pressure in this environment to communicate, a system of communication gradually evolves in the population such that the sounds made by 'speaking' animals correspond to actions the 'listening' animals should make in order to find a mate." The female tells the male where to go.
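
To make the information flow concrete, here is a minimal Python sketch of the design (my reconstruction, not Werner and Dyer's code); in the real system the two networks' weights are evolved by a GA, whereas here they are just random:

    import numpy as np

    rng = np.random.default_rng(0)
    N_SOUNDS, N_MOVES = 4, 4                        # arbitrary signal/action alphabets

    female_W = rng.normal(size=(2, N_SOUNDS))       # (dx, dy) to male -> sound scores
    male_W = rng.normal(size=(N_SOUNDS, N_MOVES))   # heard sound (one-hot) -> move scores

    def female_speak(male_dx, male_dy):
        """The immobile female 'sees' the male's relative position and emits a sound."""
        scores = np.array([male_dx, male_dy], dtype=float) @ female_W
        return int(np.argmax(scores))

    def male_move(sound):
        """The blind male 'hears' the sound and chooses a movement direction (0..3)."""
        one_hot = np.eye(N_SOUNDS)[sound]
        return int(np.argmax(one_hot @ male_W))

    sound = female_speak(male_dx=2, male_dy=-1)     # she sees where he is
    print(male_move(sound))                         # he moves based on what he hears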

Greg mentioned that their next plans are aimed at introducing selection pressures to cause the animals to develop internal models of their environment. I hope to communicate with him and Michael further on this. (At the risk of over-emphasizing my current prejudices, frankly, it is nice to see Michael Dyer not only shed of his Schank/symbolic-AI historical trappings, but making such a significant contribution to connectionist/AL research.)

The Artificial 4H Awards

Blue-ribbons were awarded to the following artificial contestants:

Best Primordial Soup - Gerald Joyce
Bugs That Learn to Like It - David Ackley
Behaviors That Have the Longest Transients - Kristian Lindgren
Best Urban Planning - James Kalin & (programmer of SimCity)
Most Sophisticated Hardware Bugs - Rod Brooks
Most Sophisticated Software Bugs - Mike Travers
Best Circus & Most Thought Provoking Talk - Mark Pauline

* After all is said and done...

The body of work reviewed here is unique and, to my mind, amazing. In preparing this extensive review, I've seen connections between various presentations that were not obvious at first. It certainly serves as wonderful food for thought as I commence the construction of my own simulated ecology. Issues I've been pondering regarding the senses of my organisms, their form, their neural architecture, their genetic structure, environmental physics, their ability to affect as well as react to their environment, relations between innate and learned behaviors, possibilities for communication mechanisms between organisms (perhaps even between experimenter and organism)... all have been addressed to some degree in these presentations. Though Geoff Hinton's poignant list of "theories of mind" through the ages (with each age believing that at last they had finally sorted it out) teaches a valuable lesson in perspective, perception, and reality, forgive me if I take considerable pleasure in at least temporarily embracing a belief that we live in a time when the nature of mind and life itself may finally begin to yield up some of their secrets.

An e-mail distribution list for the AL community has been set up. To request inclusion in this list, send mail to [email protected]. Contributions to the complete mailing list may be sent to [email protected]. [The list was moved to UCLA for a while, but has been dormant for a long time now. - larryy 4/16/96]

Finally, quoting from Chris Langton's preface to the proceedings of the first AL conference, "Perhaps ... the most fundamental idea to emerge at the workshop was the following: Artificial systems which exhibit lifelike behaviors are worthy of investigation on their own rights, whether or not we think that the processes that they mimic have played a role in the development or mechanics of life as we know it to be. Such systems can help us expand our understanding of life as it could be. By allowing us to view the life that has evolved here on Earth in the larger context of possible life, we may begin to derive a truly general theoretical biology capable of making universal statements about life wherever it may be found and whatever it may be made of."