There is an extensive history of using computers to simulate and investigate the evolution of life, of individual genetic systems, and of population-level genetic and phenotypic systems. The motor propelling most artificial life (Alife) simulations is an algorithm that allows artificial creatures to evolve and/or adapt to their environment. The fundamental algorithms fall into two dominant categories: learning algorithms, typified by neural networks, and evolutionary algorithms, typified, for example, by genetic algorithms.
Many artificial life researchers, especially those concerned with higher-order processes such as learning and adaptation, endow their organisms with a neural net that serves as an artificial brain (see, e.g., Touretzky (ed.), Neural Information Processing Systems, vols. 1-4, Morgan Kaufmann, 1988-1991). Neural networks are learning algorithms: they may be trained, for example, to classify images into categories. A typical task is to recognize which letter a given hand-written character corresponds to.
A neural net is composed of a collection of input-output devices, called neurons, which are organized in a (highly connected) network. Normally the network is organized into layers: an input layer which receives sensory input, any number of so-called hidden layers which perform the actual computations, and an output layer which reports the results of these computations. Training a neural network involves adjusting the strengths of the connections between the neurons in the net.
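The layered organization described above can be sketched in a few lines. The following is a minimal illustration, not any particular researcher's implementation: a single forward pass through a network with one hidden layer, where the weight matrices play the role of the adjustable connection strengths between neurons (the layer sizes and random weights here are arbitrary choices for the example).

```python
import numpy as np

def sigmoid(x):
    # Smooth "activation" of a neuron: squashes any input into (0, 1).
    return 1.0 / (1.0 + np.exp(-x))

def forward(x, w_hidden, w_out):
    """One forward pass: input layer -> hidden layer -> output layer."""
    hidden = sigmoid(w_hidden @ x)   # hidden layer performs the actual computation
    return sigmoid(w_out @ hidden)   # output layer reports the results

rng = np.random.default_rng(0)
w_hidden = rng.normal(size=(4, 3))  # connection strengths: 3 inputs -> 4 hidden neurons
w_out = rng.normal(size=(2, 4))     # connection strengths: 4 hidden -> 2 outputs
y = forward(np.array([0.5, -1.0, 2.0]), w_hidden, w_out)
```

Training would consist of repeatedly adjusting `w_hidden` and `w_out` (e.g., by gradient descent on a classification error) until the outputs match the desired categories.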
The other major type of biologically inspired fundamental algorithm is the evolutionary algorithm. While learning algorithms such as neural networks are metaphorically based on learning processes in individual organisms, evolutionary algorithms are inspired by evolutionary change in populations of individuals. Relative to neural nets, evolutionary algorithms have only recently gained wide acceptance in academic and industrial circles.
Evolutionary algorithms are generally iterative; an iteration is typically referred to as a “generation”. The basic evolutionary algorithm traditionally begins with a population of randomly chosen individuals. In each generation, the individuals “compete” among themselves to solve a posed problem. Individuals which perform relatively well are more likely to “survive” to the next generation, and those surviving may be subject to small, random modifications. If the algorithm is correctly set up, and the problem is indeed one subject to solution in this manner, then as the iteration proceeds the population will contain solutions of increasing quality.
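The generation-by-generation process just described can be condensed into a short sketch. This is a deliberately minimal example (the toy problem, population size, and mutation scale are all arbitrary assumptions), showing a random start, competition via a fitness function, survival of the fitter half, and small random modifications:

```python
import random

def evolve(fitness, n_pop=30, n_gen=100, sigma=0.5, seed=1):
    """Minimal evolutionary loop: random start, survival of the fitter
    half, and small Gaussian mutations in each generation."""
    rng = random.Random(seed)
    pop = [rng.uniform(-10, 10) for _ in range(n_pop)]   # random initial population
    for _ in range(n_gen):                               # one pass = one "generation"
        pop.sort(key=fitness, reverse=True)              # individuals "compete"
        survivors = pop[: n_pop // 2]                    # the fitter half "survives"
        # Survivors persist unchanged; their offspring carry small random modifications.
        pop = survivors + [x + rng.gauss(0, sigma) for x in survivors]
    return max(pop, key=fitness)

# Toy posed problem: find the x maximizing -(x - 3)^2, i.e. x = 3.
best = evolve(lambda x: -(x - 3.0) ** 2)
```

Because the fittest survivors are retained unmodified, solution quality never degrades from one generation to the next, illustrating the “increasing quality” behavior noted above.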
The most popular evolutionary algorithm is the genetic algorithm of J. Holland (J. H. Holland, Adaptation in Natural and Artificial Systems, University of Michigan Press, 1975; reprinted by MIT Press, 1992). The genetic algorithm is widely used in practical contexts (e.g., financial forecasting, management science, etc.). It is particularly well-adapted to multivariate problems whose solution space is discontinuous (“rugged”) and poorly understood. To apply the genetic algorithm, one defines 1) a mapping from the set of parameter values into the set of (0-1) bit strings (e.g., character strings), and 2) a mapping from bit strings into the reals, the so-called fitness function.
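The two mappings can be made concrete with a small sketch. The parameter range, bit-string length, and toy fitness criterion below are illustrative assumptions, not part of any standard:

```python
def encode(value, lo=0.0, hi=10.0, n_bits=8):
    """Mapping 1: a parameter value -> a fixed-length (0-1) bit string."""
    level = round((value - lo) / (hi - lo) * (2 ** n_bits - 1))
    return format(level, f"0{n_bits}b")      # e.g. a string of '0's and '1's

def decode(bits, lo=0.0, hi=10.0):
    """Inverse of encode: recover the (quantized) parameter value."""
    return lo + int(bits, 2) / (2 ** len(bits) - 1) * (hi - lo)

def fitness(bits):
    """Mapping 2: a bit string -> a real number (here, closeness to x = 3)."""
    x = decode(bits)
    return -(x - 3.0) ** 2
```

With these two mappings in hand, the genetic algorithm itself never needs to understand the problem domain; it manipulates bit strings and consults `fitness` alone, which is why it tolerates rugged, poorly understood solution spaces.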
In most evolutionary algorithms, a set of randomly chosen bit strings constitutes the initial population. The basic genetic algorithm then repeats a cycle in which the fitness of each individual in the population is evaluated and copies of individuals are made in proportion to their fitness. The use of an “arbitrary”, random or haphazard starting population can strongly bias the evolutionary algorithm away from an efficient, accurate or concise solution to the problem at hand, particularly where the algorithm is used to model or analyze a biological history or process. Indeed, the only “force” driving the evolutionary algorithm to any solution whatsoever is a fitness determination and its associated selection pressure. While a solution may eventually be reached, because the process starts from a random (i.e., arbitrary) initial state in which the population members bear no relationship to each other, the population dynamics as the algorithm proceeds reveal little or no information reflecting the dynamics of the simulated system.
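One cycle of the basic genetic algorithm can be sketched as follows. This is a minimal illustration under assumed parameters (the four-bit population, the toy fitness counting '1' bits, and the mutation rate are all hypothetical); fitness-proportionate copying is implemented here with weighted random sampling:

```python
import random

def next_generation(pop, fitness, mutation_rate=0.01, seed=2):
    """One cycle of the basic genetic algorithm: evaluate the fitness of each
    individual, copy individuals in proportion to fitness, and subject the
    copies to rare random bit flips."""
    rng = random.Random(seed)
    weights = [fitness(ind) for ind in pop]                  # fitness evaluation
    copies = rng.choices(pop, weights=weights, k=len(pop))   # fitness-proportionate copying
    return [                                                  # occasional random modification
        "".join(b if rng.random() > mutation_rate else "10"[int(b)] for b in ind)
        for ind in copies
    ]

pop = ["0101", "1111", "0000", "1010"]
new_pop = next_generation(pop, fitness=lambda s: 1 + s.count("1"))
```

Note that nothing in this cycle constrains the initial `pop`: if it is drawn at random, the lineage relationships that emerge over successive cycles reflect only the fitness function, which is the limitation the surrounding text describes.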
In addition, evolutionary algorithms are typically relatively high-order simulations and provide population-level information. Specific genetic information, if it is present at all, typically exists as an abstract representation of an allele (often a single character) or an allele frequency. Consequently, evolutionary algorithms provide little or no information regarding events on a molecular level.
Similarly, neural nets and cellular automata take essentially artificial constructs as their starting point and use internal rules (algorithms) to approximate biological processes. As a consequence, such models generally mimic processes or metaprocesses, but again afford little or no information or insight regarding events at the molecular level.