
Part III: Extensions

The final part of the book contains eight chapters that push the genetic programming framework in new directions, either to allow for new classes of applications or to produce qualitative improvements in performance.

Chapter 12, ``Efficient Evolution of Machine Code for CISC Architectures using Instruction Blocks and Homologous Crossover'' by Peter Nordin, Wolfgang Banzhaf and Frank D. Francone, describes two major advances in linear machine code GP. Both are aimed at significantly extending their commercial GP system Discipulus. The first enables machine code GP to evolve code for complex instruction set computers (CISC, such as the Intel x86 chips used in most personal computers (PCs), Java and many embedded processors), whilst retaining the speed and performance advantages that machine code evolution has so far enjoyed on reduced instruction set computers (RISC) such as the Sun SPARC. The second is homologous crossover in machine code GP. By ensuring that like parts of the parent programs are exchanged, the authors produce a more productive crossover operator whose offspring are more closely related to their parents. Chapter 12 also contains comparisons between tree and machine code GP and discussions of future applications of machine code GP.
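
The chapter's implementation works directly on machine code; purely to illustrate the principle, a minimal Python sketch of homologous crossover (the block representation and all names are assumed here, not Discipulus's actual code) exchanges instruction blocks occupying the same positions in both parents, so the swapped code plays the same structural role in each offspring:

    import random

    def homologous_crossover(parent1, parent2, block_size=4):
        # Hypothetical sketch: view each linear program as fixed-size
        # instruction blocks and exchange the blocks occupying the same
        # positions in both parents, so the swapped code plays the same
        # structural role ("like" parts) in each offspring.
        n_blocks = min(len(parent1), len(parent2)) // block_size
        child1, child2 = list(parent1), list(parent2)
        if n_blocks < 1:                       # too short to cut
            return child1, child2
        first = random.randrange(n_blocks)     # first block to swap
        last = random.randrange(first, n_blocks)  # last block to swap
        lo, hi = first * block_size, (last + 1) * block_size
        child1[lo:hi], child2[lo:hi] = child2[lo:hi], child1[lo:hi]
        return child1, child2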

Modern computers typically act on 32 or 64 bits simultaneously. Chapter 13, ``Sub-machine-code Genetic Programming'' by R. Poli and W. B. Langdon, describes several ways in which GP can exploit this internal parallelism. Their technique is described and demonstrated via several Boolean problems and an optical character recognition (OCR) problem. The technique is easy to implement (pointers to publicly available code are included) and can dramatically speed up any existing GP implementation (by one to nearly two orders of magnitude).
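
For flavour, here is a minimal sketch of the idea (assuming a postfix program representation and 64-bit words; the publicly available code the chapter points to differs): one test case is packed into each bit of a machine word, so a single bitwise operation evaluates the program on all cases at once.

    MASK = (1 << 64) - 1          # assume 64-bit words

    def eval_boolean(program, inputs):
        # Evaluate a postfix Boolean program on up to 64 packed test
        # cases at once: bit i of every word belongs to test case i,
        # so each bitwise operator processes all cases in parallel.
        stack = []
        for op in program:
            if isinstance(op, int):           # input variable index
                stack.append(inputs[op])
            elif op == 'AND':
                stack.append(stack.pop() & stack.pop())
            elif op == 'OR':
                stack.append(stack.pop() | stack.pop())
            elif op == 'XOR':
                stack.append(stack.pop() ^ stack.pop())
            elif op == 'NOT':
                stack.append(~stack.pop() & MASK)
        return stack.pop()

    # 2-input XOR: all four test cases evaluated in one pass.
    x0, x1, target = 0b0101, 0b0011, 0b0110
    result = eval_boolean([0, 1, 'XOR'], [x0, x1])
    hits = bin(~(result ^ target) & 0b1111).count('1')   # 4 correct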

Astro Teller describes in Chapter 14, ``The Internal Reinforcement of Evolving Algorithms'', a new approach in which the behaviour of a parent program is used to create a credit-blame map locating the useful and not-so-useful parts within it. The credit-blame map is used to guide the location of crossover points, so that the useful parts of the program are more likely to be preserved in its offspring. In his PADO system, each program is represented as a directed graph, with similarities to artificial neural networks and to data flow machines with indexed memory, and evolves to classify its input into one of a number of classes. The new system of internal reinforcement, demonstrated within PADO on problems of classifying complex real images and sounds, is shown to improve performance.
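
A hypothetical sketch of the guidance step might look as follows (the node scores and interface are assumed here; computing the credit-blame map itself is the substance of Teller's internal reinforcement):

    import random

    def pick_crossover_point(nodes, credit):
        # Hypothetical sketch: 'credit' maps each node of the parent
        # program's graph to a usefulness score in [0, 1]. Selecting
        # crossover points in proportion to blame (1 - credit) makes
        # disrupting the useful parts less likely.
        weights = [1.0 - credit[n] for n in nodes]
        if sum(weights) == 0:                  # everything looks useful
            return random.choice(nodes)
        return random.choices(nodes, weights=weights, k=1)[0]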

Chapter 15, ``Inductive Genetic Programming with Immune Network Dynamics'' by Nikolay I. Nikolaev, Hitoshi Iba and Vanio Slavov, describes a major advance in which previously unconnected streams of GP research (Stroganoff and Immune Genetic Programming) are brought together and shown to be highly effective, for example on time-series prediction problems using small populations. Immune Genetic Programming is presented using an analogy of an immune system (GP) evolving to overcome a disease by binding to a number of disease-causing antigens (the test cases). Perhaps one of the most important aspects of the new approach is that it provides a principled mechanism for dynamically changing the fitness function (i.e. the active test cases), which improves GP learning efficiency.
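
One way such a mechanism might look (a hypothetical sketch, not the chapter's algorithm) is to re-rank the test cases each generation and keep the least-bound antigens active:

    def select_antigens(population, test_cases, n_active, solves):
        # Hypothetical sketch of dynamically choosing the active test
        # cases ("antigens"): rank cases by how few programs in the
        # current population handle them, and keep the hardest ones,
        # so the fitness function tracks what remains unsolved.
        # solves(prog, case) -> bool is an assumed interface.
        ranked = sorted(test_cases,
                        key=lambda c: sum(solves(p, c) for p in population))
        return ranked[:n_active]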

Chapter 16, ``A Self-Tuning Mechanism for Depth-Dependent Crossover'' by Takuya Ito, Hitoshi Iba and Satoshi Sato, proposes an improved crossover operator for tree-based GP. The authors note that traditional crossover tends to swap relatively small subtrees and that, while in some problems exchanging larger subtrees appears to improve performance, this is not always the case. They therefore propose a new operator which takes this problem dependency into account and learns, via a self-tuning mechanism, the best depth at which to cross over as the GP run proceeds. The mechanism rests on the assumption that if a good depth selection probability is assigned to a tree, the fitness of its offspring will improve, and the depth selection probabilities will be inherited by subsequent generations. That is, their subtree crossover operator coevolves with the GP population, and so can adapt to the problem and achieve improved results. They also note a reduction in ``bloat''.
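
A minimal sketch of the idea (representation and names assumed here, not the chapter's code) draws crossover points in two stages, first sampling a depth from the tree's own learned probabilities and then a node at that depth:

    import random

    def pick_depth_node(nodes_by_depth, depth_probs):
        # Hypothetical sketch: each tree carries a probability for every
        # depth level ('depth_probs'); a crossover node is drawn by first
        # sampling a depth, then a node at that depth. Offspring inherit
        # (and slightly perturb) the vector, so settings that yield fit
        # children spread through the population -- the self-tuning.
        depths = [d for d, nodes in nodes_by_depth.items() if nodes]
        weights = [depth_probs[d] for d in depths]
        d = random.choices(depths, weights=weights, k=1)[0]
        return random.choice(nodes_by_depth[d])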

Chapter 17, ``Genetic Recursive Regression for Modeling and Forecasting Real-World Chaotic Time Series'' by Geum Yong Lee, presents several improved techniques whereby GP can evolve non-linear models of time-varying statistics based upon a history of several GP runs. The improvements are demonstrated on a wide range of time series data, where they yield better models from small populations and therefore at low computational effort.

Chapter 18, ``Coevolutionary Fitness Switching: Learning Complex Collective Behaviors Using Genetic Programming'' by Byoung-Tak Zhang and Dong-Yeon Cho, tackles one of the important unsolved problems: how to get multiple independent programs/agents/robots to learn to co-operate to solve complex problems. They present improved learning based upon coevolution and demonstrate it on Robot Soccer games. These games are known for their difficulty, and the manual coding of winning teams is far from trivial.

Our final chapter, Chapter 19, ``Evolving Multiple Agents by GP'' by Hitoshi Iba, continues the multi-agent theme. Here Iba uses the difficult robot navigation problem to demonstrate a new multi-agent learning scheme in which the agents learn to communicate with each other. He compares this with more traditional Q-learning approaches (which scale badly); however, the chapter also describes efficient means of combining GP and Q-learning.
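
For reference, the tabular Q-learning baseline updates a value table as below (the standard textbook algorithm, not code from the chapter); the table's growth with the joint state-action space is the scaling problem referred to:

    from collections import defaultdict

    def q_update(Q, s, a, r, s_next, actions, alpha=0.1, gamma=0.9):
        # Standard tabular Q-learning update: move Q(s, a) toward the
        # reward plus the discounted value of the best next action.
        best_next = max(Q[(s_next, b)] for b in actions)
        Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])

    Q = defaultdict(float)   # one entry per (state, action) pair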


Bill Langdon
1999-02-22