The History of Artificial Life, Part Two


Artificial life examines the processes and evolution of life using technology such as computers, robotics, and biochemistry. Essentially, the field of artificial life uses non-biological technology to recreate or study biological phenomena. Since the 1970s, the advent of increasingly sophisticated and powerful computers has allowed the field to move ahead far more quickly than it had previously.

The 1970s and 1980s: Artificial Life at the Beginning of the Computer Age

Computers had been around long before the 1970s, but it wasn’t until that decade that the field of artificial life began to benefit from the new technology. During the early 1970s, only a few research groups were actively involved in the study of artificial life.

In 1979, biologist Christopher Langton became the first person to create a self-replicating computer “life.” To do so he used an Apple II desktop computer and a simplification of John von Neumann’s original cellular automaton. In 1982 Langton entered a graduate program under Arthur Burks, a philosophy scholar who headed the Logic of Computers Group at the University of Michigan. Burks had once worked with von Neumann, and still had an interest in artificial life.

Christopher Langton was an early pioneer who eventually helped to found an entirely new discipline. It was Langton who first described the field, and gave it its name, at a conference in the late 1980s: “Artificial life is the study of artificial systems that exhibit behavior characteristic of natural living systems. It is the quest to explain life in any of its possible manifestations, without restriction to the particular examples that have evolved on earth… Microelectronic technology and genetic engineering will soon give us the capability to create new life forms. This capacity will present humanity with the most far-reaching technical, theoretical and ethical challenges it has ever confronted.”

Also in the 1980s, the newly formed Information Mechanics Group at MIT created a computer designed solely to execute cellular automata. The machine allowed far more scientists to study a phenomenon that had previously been out of reach for many because of the cost of the necessary computing power.

With this new, more affordable technology available, a diverse array of scientists began to use cellular automata. Computer scientist Stephen Wolfram used a cellular automaton to explore the development of complex natural phenomena such as plant growth and seashell patterns, while the Unit of Theoretical Behavioral Ecology at the Free University of Brussels used theories developed by entomologist E.O. Wilson to explore the behavior of social insects.
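The cellular automata Wolfram studied are strikingly simple. In his “elementary” automata, a row of on/off cells is updated in lockstep, each cell looking only at itself and its two neighbors; one of his 256 rules, Rule 30, produces the irregular triangular patterns reminiscent of seashell pigmentation. The following is a minimal illustrative sketch (not code from any of the projects described here):

```python
def step(cells, rule=30):
    """Apply an elementary cellular-automaton rule to one row of 0/1 cells.

    Each new cell depends only on its left, centre and right neighbours;
    the edges wrap around. The 8-bit number `rule` encodes the outcome
    for each of the 8 possible neighbourhoods (Wolfram's numbering).
    """
    n = len(cells)
    new = []
    for i in range(n):
        left, centre, right = cells[(i - 1) % n], cells[i], cells[(i + 1) % n]
        index = (left << 2) | (centre << 1) | right  # neighbourhood as 0..7
        new.append((rule >> index) & 1)              # look up that bit of the rule
    return new

def run(width=31, generations=15, rule=30):
    """Start from a single live cell and collect each generation."""
    row = [0] * width
    row[width // 2] = 1
    history = [row]
    for _ in range(generations):
        row = step(row, rule)
        history.append(row)
    return history

if __name__ == "__main__":
    for row in run():
        print("".join("#" if c else "." for c in row))
```

Running it prints the characteristic Rule 30 triangle growing from a single cell; changing the `rule` argument selects any of the other 255 elementary rules.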

Artificial Life in the Twenty-First Century

The rapid growth in the power of computers has made possible increasingly complex applications for modeling artificial life.

In 2006, Peter Turchin (Department of Ecology and Evolutionary Biology, University of Connecticut) and Mikhail Burtsev began building on models of the evolution of cooperative behavior in animals, initially developed in the 1960s by the British evolutionary biologist W.D. Hamilton. In the new model, the pair used game theory and a highly complex version of John Conway’s Game of Life to shed light on the development of territoriality in animals.
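Conway’s Game of Life, the two-dimensional cellular automaton underlying such models, runs on two rules: a dead cell with exactly three live neighbors comes alive, and a live cell survives only with two or three live neighbors. A minimal sketch of the classic rules follows (the Turchin–Burtsev model itself is far more elaborate; this is only the base automaton):

```python
from collections import Counter

def life_step(live):
    """Advance one generation of Conway's Game of Life.

    Live cells are stored as a set of (row, col) pairs, so the
    grid is effectively unbounded.
    """
    # Count how many live neighbours each cell on the grid has.
    counts = Counter(
        (r + dr, c + dc)
        for (r, c) in live
        for dr in (-1, 0, 1)
        for dc in (-1, 0, 1)
        if (dr, dc) != (0, 0)
    )
    # Birth on exactly 3 neighbours; survival on 2 or 3.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

# A "blinker": three cells in a row oscillate between horizontal
# and vertical with period two.
blinker = {(1, 0), (1, 1), (1, 2)}
```

Applying `life_step` twice to the blinker returns the original pattern, the simplest example of the self-sustaining structures that make the Game of Life a fruitful testbed for studying lifelike behavior.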

Other work includes ongoing projects to create cellular models of artificial life: accurate but highly complex models of biochemical behavior. One such example is IBM’s Blue Gene, a project creating a series of supercomputers to be applied to the study of biomolecular processes such as protein folding.