Summary of Genetic Algorithms to Solve the Same Problem Again and Again
W.Langdon@cs.ucl.ac.uk   (150 lines)   2-Feb-96

Modifications:
WBL  2 February 96  Add orero:1996:gagsps reference
WBL 29 March 1995   Add Eshelman CHC reference
WBL 27 March 1995   Add Perry:popGP reference

On 4 Oct 1994 I asked the GA-List (v8n38) if anyone was aware of work
that had been done on using a GA to solve a problem which changes every
year but not very much. I suggested this might be solved by seeding the
initial GA population in each new year with some of the population from
the previous year. This document contains the edited replies I
received.

Also available via ftp://cs.ucl.ac.uk/genetic/papers/ga-seeding.txt

People Interested:
------------------
me                 W.Langdon@cs.ucl.ac.uk
Dave Schaffer      ds1@philabs.Philips.COM
Larry J. Eshelman
Inman Harvey       inmanh@cogs.sussex.ac.uk
Gregory R. Quinn   GRQ@icf.hrb.com
Paul Darwen        darwen@csadfa.cs.adfa.oz.au
Peter Hancock      p.j.b.hancock@psych.stirling.ac.uk
Dick Botteldooren  botteldo@intec.rug.ac.be
                   GAs in changing environments (GA-List v8n39)
Stephen Oman       stoman@iol.ie

References
----------
Seeing the Light: Artificial Evolution, Real Vision
I. Harvey and P. Husbands and D. Cliff
From Animals to Animats 3, Proc. of 3rd SAB
D. Cliff and P. Husbands and J.-A. Meyer and S. Wilson (eds.)
MIT Press 1994

Inman Harvey, draft thesis, Species Adaptation Genetic Algorithms
ftp: ftp.cogs.susx.ac.uk in file pub/gacode/thesis.ps.Z

@techreport(x,
  author      = {P. J. Darwen and X. Yao},
  title       = {On Evolving Robust Strategies for Iterated Prisoner's
                 Dilemma},
  institution = {Department of Computer Science, University College,
                 University of New South Wales, Australian Defence
                 Force Academy},
  year        = 1994,
  address     = {Canberra ACT 2600 Australia},
  month       = {May},
  size        = {15 pages}
)

Pruning neural nets by genetic algorithm
Artificial Neural Networks 2, Aleksander and Taylor (Eds), 1992

@InProceedings{Schultz:1990:itp,
  author    = {Alan C. Schultz and John J. Grefenstette},
  title     = {Improving Tactical Plans with Genetic Algorithms},
  booktitle = {Proceedings of IEEE Conference on Tools for Artificial
               Intelligence},
  pages     = {328--334},
  publisher = {IEEE Society Press},
  month     = nov,
  year      = 1990
}

@InProceedings{Perry:popGP,
  author    = "J. E. Perry",
  title     = "The effect of population enrichment in genetic
               programming",
  booktitle = "Proceedings of the 1994 IEEE World Congress on
               Computational Intelligence",
  year      = "1994",
  pages     = "456--461",
  publisher = "IEEE Press",
  keywords  = "genetic algorithms, genetic programming",
  size      = "6 pages",
  notes     = "GP betting on horse races! 40 runs with pop of 2000 but
    only run for 3 generations. Best of each of these loaded into a
    consolidated run with the other 1960 members of the population
    generated at random. Whole run for 50 generations. Works better
    than 3 traditional runs.",
}

%A Larry J. Eshelman
%T The CHC Adaptive Search Algorithm: How to Have Safe Search When
   Engaging in Nontraditional Genetic Recombination
%K CHC uniform crossover incest prevention population elitist selection,
   cross-generational competition, implicit parallelism, restarts
%B FOGA91
%P 265-283
%Y Workshop held Bloomington, IN JUL 15-18, 1990

@article(orero:1996:gagsps,
  author   = {S. O. Orero and M. R. Irving},
  title    = {A Genetic Algorithm for generator scheduling in power
              systems},
  journal  = {Electrical Power and Energy Systems},
  year     = 1996,
  volume   = {18},
  number   = {1},
  pages    = {19--26},
  keywords = {genetic algorithms},
  notes    = {p22: results from previous solutions mixed with randomly
    generated strings as the initial population.
    Table 3.2 shows the solutions improved and the runtime reduced.}
)

WWW pages
---------
mine     http://www.cs.ucl.ac.uk/staff/W.Langdon/
Inman's  http://www.cogs.susx.ac.uk/users/inmanh/index.html

------------------------------------------------------------
Paul Darwen:

We used a GA to search the space of strategies of the game Iterated
Prisoner's Dilemma, and seeded the initial population with copies of
the known good strategy, Tit-for-tat.

> How big a proportion of the new population was seeded in
> this way?

We tried everything from 0% to 18% in steps of 1%, then 20% to 70% in
steps of 10%. The best results were obtained at 10%. More than this
reduces the genetic diversity of the population, and the search
converges prematurely.

> What did you set the remainder of the initial population
> to? I'm intending to create it at random.

That's what we did too.

> Did you change the mutation rate?

No, but it would be interesting to try: a low mutation rate would
cause the population to fill up with perfect copies of the superior
seed individuals.

---------------------------------------------------------------------
Gregory R. Quinn

I had a two-stage scheduling problem that used seeding. The first
stage found the best makespan. That makespan was then used as a
constraint in the second phase, which optimized cost for the makespan.
In moving from phase 1 to phase 2, I seeded the population with all
the schedules that had the optimal makespan (without redundancy). I
then filled out the initial generation for phase 2 with random
individuals. I left the crossover and mutation probabilities the same.
It worked very well.

---------------------------------------------------------------------
Inman Harvey

The SAGA methodology works exactly on these principles of incremental
evolution (i.e. seeding the initial population for problem N+1 with
(part of) the final population from problem N).
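The seeding schemes described so far share one recipe: copy some
fraction of the initial population from known good individuals and
generate the rest at random. A minimal sketch in Python, assuming a
bit-string genome; the function and parameter names are illustrative,
not from any of the systems described:

```python
import random

def seeded_population(pop_size, genome_len, seeds, seed_frac=0.10,
                      rng=random):
    """Build an initial population: seed_frac of it copied from the
    given seed individuals (e.g. last year's good solutions, or
    Tit-for-tat clones), the remainder generated at random."""
    n_seeds = int(pop_size * seed_frac)  # 10% worked best for Darwen
    pop = [list(rng.choice(seeds)) for _ in range(n_seeds)]
    while len(pop) < pop_size:           # fill out with random bit-strings
        pop.append([rng.randint(0, 1) for _ in range(genome_len)])
    return pop
```

Seeding much more than this risks the premature convergence Darwen
mentions, since the seed copies dominate the gene pool from generation
zero.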
Our paper at SAB-94 discusses our use of exactly this process in
evolving control systems for robots using real (not simulated) vision
for simple navigation tasks. We start with a genetically converged
population (clones of the best of a random starting population) at a
simple problem; as success is achieved at that we increase the
difficulty of the problem, carrying over 100% of the population.
Mutation rate is kept fixed throughout (though there are arguments for
varying it).

In the above paper we use in effect 5 problems of increasing
difficulty, and achieve results with a population of 30 after about 30
generations; I have good reason to suspect that going straight from
random to the final problem would have been unachievable.

My thesis on Species Adaptation Genetic Algorithms discusses the use
of genetically converged populations for this process of incremental
evolution; it is in the process of having final corrections done, but
a not-quite-final version is ftp-able from ftp.cogs.susx.ac.uk in file
pub/gacode/thesis.ps.Z

Other papers via WWW on http://www.cogs.susx.ac.uk/users/inmanh/index.html

------------------------------------------------------------
Peter Hancock

I've also used the "gradually get harder" technique that Inman
describes, with success (e.g. Pruning neural nets by genetic
algorithm, in Artificial Neural Networks 2, Aleksander and Taylor
(Eds), 1992). I gradually added random values to the seed weights for
a net, until the GA was able to produce a net that trained from random
values. I would recommend it as a method for problems that are just
too hard to be done in one go. There is, of course, no guarantee that
there will be a route in parameter space from the easy problem to the
hard one, but from my experience it's worth a bash.

However, I'm not sure it is quite what Bill Langdon asked: his problem
is merely different, not necessarily harder.
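The incremental approach Inman and Peter describe (solve an easy
version first, then reuse the whole final population as the starting
point for each harder version) reduces to a simple loop. A sketch,
assuming a user-supplied evolve(fitness, pop) step; both names are
hypothetical, not from SAGA itself:

```python
def incremental_evolve(problems, init_pop, evolve):
    """Run a GA over a sequence of problems of increasing difficulty,
    carrying over 100% of the population from each stage to seed the
    next (incremental evolution, sketched)."""
    pop = init_pop
    for fitness in problems:        # easiest problem first
        pop = evolve(fitness, pop)  # returns that stage's final population
    return pop
```

As Peter notes, this helps only when parameter space contains a route
from the easy problem to the hard one; the loop itself offers no such
guarantee.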
------------------------------------------------------------
Larry Eshelman has long used a scheme in CHC we call soft restarts:
make popsize (50) copies of the best guy and mutate popsize-1 (49) of
them at a high rate. The "divergence rate" is somewhat problem
specific, but a rate of 35% has been good for many. If there is a
serious competing conventions problem with the representation, then
lower rates work better.

Good luck.

Dave Schaffer
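The soft restart Dave describes can be sketched as follows for a
bit-string representation; the function name and details are
illustrative, not CHC's actual code:

```python
import random

def soft_restart(best, pop_size=50, divergence_rate=0.35, rng=random):
    """CHC-style soft restart (a sketch): keep one exact copy of the
    best individual found so far and refill the rest of the population
    with copies in which each bit is flipped with probability
    divergence_rate (35% per Eshelman; lower if competing conventions
    are a problem)."""
    pop = [list(best)]               # the one unmutated copy
    for _ in range(pop_size - 1):    # popsize-1 heavily mutated copies
        pop.append([b ^ 1 if rng.random() < divergence_rate else b
                    for b in best])
    return pop
```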