________________________________________________________________________________________________
In Britain, UCL has pioneered the use of so-called "intelligent" computing techniques in business sectors ranging from banking and insurance to retail. The techniques we are working on include expert systems, rule induction, fuzzy logic, neural networks, genetic algorithms and dynamical systems theory (chaos theory). Successful applications cover asset forecasting, credit evaluation, fraud detection, portfolio optimisation, customer profiling, risk assessment, economic modelling, sales forecasting, and retail outlet location.
The research material in this chapter spans (i) design of neural network, genetic and other algorithms, (ii) programming environments and application-specific toolkits, (iii) hardware support for intelligent algorithms, and (iv) applications in banking, retail, architecture and fashion design.
Using these intelligent techniques, decision support systems can transform large amounts of quantitative data into intelligible classifications, by spotting trends and patterns. Until recently the primary means of spotting trends in business data was through statistical methods such as clustering and regression. Now intelligent techniques are in many cases outperforming traditional statistical techniques. With our help, financial institutions have started to realise the potential of these technologies, especially in areas such as financial trading and portfolio management, where even very marginal improvements in performance translate into large financial gains.
Historically, we started by building general-purpose programming environments for neural networks and genetic algorithms, and participating in the major CEC-ESPRIT funded intelligent systems research projects such as PYGMALION and GALATEA for neural networks, PAPAGENA for genetic algorithms, and HANSA for hybrid systems. These programming environments have been given to over 300 users across Europe. Building upon this programming environment research, neural networks were applied to financial forecasting. Projects included the DTI/SERC Financial Neural Networks project with the TSB and the Henley Centre for Forecasting, which produced demonstrators for gilts, currencies and insurance, and a system for predicting the FT100 share movement for Barclays BZW. Over the past two years a number of similar projects for financial institutions have been undertaken.
We are also applying neural networks to retailing applications in areas such as customer profiling. One such project is the DTI/LPAC MANTIS project with Thorn EMI Central Research Labs. and PARSYS profiling the customers of Radio Rentals. We are also doing a significant amount of work for J.S. Sainsbury.
Having successfully applied intelligent systems to financial services, we have started collaborating with the Bartlett School of Architecture and Planning to apply them to the Built Environment. Recently we were jointly awarded a DTI/JFIT project called "Intelligent Architecture" to build an integrated CAD system linking architects, planners, facilities managers, builders, health and safety officers. We see this as a spring-board to many related application domains such as refurbishment and contract pricing.
Excellent links have also been established with SIRA. Initially links were established through the SIRA-UCL Postgraduate Partnership providing joint PhD student supervision. We are building on these links, and now have an increasing number of joint project links in clothing CAD, planning and application-specific integrated circuits - 'Smart Chips'.
Finally, we place great emphasis on technology transfer to industry through what the DTI refer to as "Clubs". In conjunction with the London Business School, we have established a joint LBS-UCL NeuroForecasting club with DTI sponsorship to investigate the application of neural networks to the capital markets.
Laura Dekker, Jason Kingdon, Jose L. Ribeiro-Filho and Philip Treleaven
The ESPRIT III project PAPAGENA (PArallel environments for PArallel GENetic Algorithms) is the single largest European investment into the research, exploration and development of genetic algorithms (GAs) and parallel genetic algorithms (PGAs). The project was successfully completed this year, with UCL being given a special commendation for their rôle. The results of PAPAGENA are now being exploited by a number of other projects, including HANSA, RENEGADE and Intelligent Architecture.
Central to the project is GAME (Genetic Algorithm Manipulation Environment), a general-purpose GA and PGA programming system. The main challenge in constructing a general-purpose GA programming environment is the diversity of sequential and parallel genetic adaptive techniques that exist. To cope with the breadth of GA definition, and to offer the maximum utility in terms of hardware, GAME has been designed with a layered architecture of loosely-coupled, customisable and parallelisable components. This is made possible by four major design features:
The design provides the means for efficiently exploiting any parallel support that may exist, and offers a simple mechanism for dealing with complex algorithm definitions, such that applications as diverse as multi-start hillclimbing, parallel simulated annealing, standard GAs and artificial life can all sit easily within GAME.
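As an illustration of this component-based approach, the following minimal Python sketch (not code from GAME itself; all names, operators and parameters are illustrative) shows how selection, crossover and mutation can be supplied as interchangeable components to a generic GA engine:

import random

# Illustrative sketch: a GA "engine" that accepts interchangeable operator
# components, in the spirit of a layered, loosely-coupled design.
# All names here are hypothetical, not those used in GAME.

def tournament_selection(population, fitnesses, k=3):
    """Pick the best of k randomly chosen individuals."""
    contenders = random.sample(range(len(population)), k)
    return population[max(contenders, key=lambda i: fitnesses[i])]

def one_point_crossover(a, b):
    point = random.randint(1, len(a) - 1)
    return a[:point] + b[point:]

def bit_flip_mutation(genome, rate=0.01):
    return [1 - g if random.random() < rate else g for g in genome]

def run_ga(fitness, genome_length, pop_size=50, generations=100,
           select=tournament_selection,
           crossover=one_point_crossover,
           mutate=bit_flip_mutation):
    population = [[random.randint(0, 1) for _ in range(genome_length)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        fitnesses = [fitness(g) for g in population]
        population = [mutate(crossover(select(population, fitnesses),
                                       select(population, fitnesses)))
                      for _ in range(pop_size)]
    return max(population, key=fitness)

if __name__ == "__main__":
    # Trivial test problem: maximise the number of 1s in the genome.
    best = run_ga(fitness=sum, genome_length=32)
    print(sum(best), best)

Because the operators are passed in as ordinary parameters, the same engine can be re-used with different selection, recombination or mutation strategies, which is the spirit (though not the detail) of GAME's loosely-coupled component design.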
Demonstrations and Applications
PAPAGENA has demonstrated the diversity and potential of GA technology to solve previously intractable problems in large-scale, complex and multi-constraint environments. Three of the main application areas tackled during the PAPAGENA project are described below, each presenting different contributions to GA research and design.
Optimisation: Protein Folding
One of the most exciting and recently expanding areas for GA use is in bio-informatics, specifically in the prediction of protein structures, with the potential to open up a vast new world of drug design and medical treatment. However, the difficulty associated with this task should not be underestimated. The main reason is the number of possible conformations available to a protein - an estimated 10^100 - which is too large for direct search techniques.
Representing the protein conformation as a set of inter-atomic torsion angles, GAs are being used in conjunction with knowledge of chemo-physical and steric constraints, to generate native-like conformations of known and unknown proteins. Using a protein whose conformation is well known - for example, crambin - attempts can be made to calibrate a fitness function. At present the GA uses a fitness function which combines aspects of force-field modelling and structural constraints into a fitness vector.
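The Python fragment below is a highly simplified sketch of this representation only: a conformation is a vector of torsion angles, and fitness is a two-component vector combining a stand-in "energy" term with a stand-in steric-constraint count. The real force-field and structural terms used in PAPAGENA are far more elaborate; everything here (the number of angles, the energy proxy, the clash rule) is illustrative.

import math
import random

N_ANGLES = 40  # hypothetical number of torsion angles in the chromosome

def random_conformation():
    return [random.uniform(-math.pi, math.pi) for _ in range(N_ANGLES)]

def mutate(angles, sigma=0.1):
    """Perturb a few torsion angles by small Gaussian steps."""
    return [a + random.gauss(0.0, sigma) if random.random() < 0.1 else a
            for a in angles]

def fitness_vector(angles):
    # Stand-in "energy": reward angles near an idealised value (here 0),
    # as a crude proxy for a force-field term.
    energy = sum(1.0 - math.cos(a) for a in angles)
    # Stand-in steric constraint: penalise adjacent angles that are nearly
    # identical, mimicking clashes between neighbouring residues.
    clashes = sum(1 for a, b in zip(angles, angles[1:]) if abs(a - b) < 0.05)
    return (energy, clashes)

def better(u, v):
    """Compare fitness vectors lexicographically: fewer clashes first."""
    return (u[1], u[0]) < (v[1], v[0])

parent = random_conformation()
child = mutate(parent)
print(better(fitness_vector(child), fitness_vector(parent)))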
Genetic Programming: Formulæ for Credit Scoring
Risk assessment is a universal business problem, ranging from a bank assessing a client for a personal loan, to a marketing company determining targets for a direct mail-shot. In each case a decision has to be made, on the basis of known facts, as to the likely outcome of the investment.
The principal difficulty in forming mathematical models of these systems is the size, noisiness, inconsistency and incompleteness of the data. In such cases linear techniques tend to be difficult to use, not least because the size and relevance of the data fields is unknown. This has led to increasing interest in automated computational techniques for finding such relationships.
In the genetic programming approach, populations of candidate solutions to the problem are represented and manipulated directly as multi-levelled algebraic parse trees. For example, the formula: tan(number of credit cards)/(age + gender) could be a candidate (albeit unlikely) for a particular training set. These are then scored on the basis of how well they fit the training set, and their ability to predict a validation set. The use of a high-level, direct gene representation (rather than the traditional binary one) allows for a much more intuitive treatment and understanding of the results.
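A minimal sketch of this parse-tree representation is given below. The field names, operator set and scoring rule are hypothetical examples rather than those of the actual credit-scoring system, but the example formula from the text (tan of the number of credit cards divided by age plus gender) could be expressed as such a tree.

import math
import random

# Illustrative genetic-programming sketch: candidate scoring formulae are held
# as algebraic parse trees over named data fields.
FIELDS = ["age", "income", "num_credit_cards"]
BINARY_OPS = {"+": lambda a, b: a + b,
              "-": lambda a, b: a - b,
              "*": lambda a, b: a * b,
              "/": lambda a, b: a / b if abs(b) > 1e-9 else 1.0}
UNARY_OPS = {"tan": math.tan, "log": lambda x: math.log(abs(x) + 1e-9)}

def random_tree(depth=3):
    if depth == 0 or random.random() < 0.3:
        return random.choice(FIELDS)                     # leaf: a data field
    if random.random() < 0.3:
        return (random.choice(list(UNARY_OPS)), random_tree(depth - 1))
    return (random.choice(list(BINARY_OPS)),
            random_tree(depth - 1), random_tree(depth - 1))

def evaluate(tree, record):
    if isinstance(tree, str):
        return record[tree]
    if len(tree) == 2:
        op, arg = tree
        return UNARY_OPS[op](evaluate(arg, record))
    op, left, right = tree
    return BINARY_OPS[op](evaluate(left, record), evaluate(right, record))

def score(tree, training_set):
    """Fraction of records whose formula sign matches the known outcome."""
    hits = sum(1 for record, outcome in training_set
               if (evaluate(tree, record) > 0) == outcome)
    return hits / len(training_set)

training_set = [({"age": 34, "income": 21000, "num_credit_cards": 2}, True),
                ({"age": 19, "income": 4000, "num_credit_cards": 5}, False)]
candidate = random_tree()
print(candidate, score(candidate, training_set))

Because the trees are built from named fields and familiar operators, a scored formula can be read back directly, which is the intuitive-interpretation advantage of the high-level gene representation described above.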
Complex Modelling: Economic Modelling
Another, but less widely recognised, aspect of GAs is their capability as complex modelling tools. The locational development application uses GAs to mimic the behaviour of complex multi-agent systems, subject to a variety of economic and physical constraints. For the first time, the scale, complexity and level of detail included in such models provides a realistic means for testing economic theory and new ideas on something other than the real economy.
This form of economic modelling, whilst still in its infancy, has already been adopted by Brandenburg State in Germany as a means for modelling and understanding local labour movements, which have risen considerably since German unification.
Jose L. Ribeiro-Filho (supervised by Philip Treleaven)
The design philosophy of the Genetic Algorithms Manipulation Environment (GAME) is threefold:
Some of the basic requirements for the programming environment include:
The GAME system was designed in accordance with
these requirements. It comprises the following modules:
GAME is central to the PAPAGENA ESPRIT III project, helping with the development of complex parallel applications. The source code has been distributed to several companies and is available free from the Department of Computer Science at UCL.
Ugur Bilge and Philip Treleaven
UCL's work in the European Human and Capital Mobility program RENEGADE project focuses on the following tasks:
The first task involved an investigation of the applicability of Genetic Algorithms for solving Polynomial Regression problems. This application is also used to assess the representation and computational requirements for GAs.
A distributed parallel framework has been proposed for fast PGA simulations. A matrix-based representation in the form of a C library has been developed. This library uses TCP/IP Unix sockets to transmit matrices between the modules of a PGA, where each module runs on a separate workstation linked by a LAN.
Building on this distributed parallel framework, a game-playing application is planned in which cooperation within a group is matched by competition between groups in solving a problem.
In parallel with the RENEGADE work, a joint project is under way with a major retailer in the UK. This project involves the design and development of a Supermarket Space Management System based on Genetic Algorithms. The potential benefits of such a system to the retail business are manifold; some of these are:
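The sketch below illustrates the basic exchange in Python rather than C (the actual library is written in C): one PGA module serialises a small population matrix and sends it to a peer module over a TCP socket. The host, port, message framing and use of a background thread to stand in for the second workstation are assumptions made purely for the example; it requires Python 3.8 or later.

import json
import socket
import threading

HOST, PORT = "127.0.0.1", 5005   # illustrative address for a peer module

def send_matrix(matrix, host=HOST, port=PORT):
    """Serialise a population matrix (list of rows) and send it to a peer."""
    with socket.create_connection((host, port)) as sock:
        sock.sendall(json.dumps(matrix).encode() + b"\n")

def receive_matrix(server):
    """Accept one connection on an already-listening socket, return its matrix."""
    conn, _ = server.accept()
    with conn:
        data = b""
        while not data.endswith(b"\n"):
            chunk = conn.recv(4096)
            if not chunk:
                break
            data += chunk
    return json.loads(data)

if __name__ == "__main__":
    # Demonstrate the exchange within one process: a thread stands in for the
    # receiving workstation.
    with socket.create_server((HOST, PORT)) as server:
        result = {}
        receiver = threading.Thread(
            target=lambda: result.setdefault("m", receive_matrix(server)))
        receiver.start()
        send_matrix([[0, 1, 1, 0], [1, 1, 0, 1]])   # a toy 2 x 4 population matrix
        receiver.join()
    print(result["m"])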
The initial results confirm that the GA approach is sound, and the Genetic Algorithm Space Optimiser will go live in May 1995.
William Langdon (supervised by Mark Levene)
We are investigating Genetic Programming (GP), i.e. the automatic production of software by evolutionary techniques. Genetic Programming is an extension of the Genetic Algorithm to the evolution of program trees.
There has been only a little work on incorporating memory explicitly within Genetic Programming. However, it is anticipated that data structures will be a vital part of real-world applications. In software engineering, object-oriented techniques and languages, such as C++, have been widely adopted. One of the simpler, yet widely known and used, examples of data structures is the stack. We have demonstrated genetic programming by producing a stack. The stack problem requires five procedures (initialise, top, pop, push and empty) to be evolved simultaneously. It is almost unique for multiple separate GP procedures to be evolved simultaneously.
In Genetic Algorithms each individual is given a fitness and new ones are created from the fitter individuals. In the GP stack problem the fitness of each individual program is found by calling its constituent procedures in a test sequence and comparing the results they return with the anticipated results; no information about the program's internal behaviour, such as its use of memory, is used. A variety of different programs have been evolved which not only pass all the tests but can be shown to correctly implement a stack.
Following the success of the stack example, it was decided to investigate using genetic programming to evolve a (first-in, first-out) queue. It was anticipated that this would be straightforward; this has not proved to be the case. Initially the anticipated circular queue solution did not evolve; instead a number of novel solutions or partial solutions have been found. The principal of these are the "Caterpillar" and "Shuffler" solutions. In "Caterpillar" solutions the queue works its way across memory, enqueuing at its head and dequeuing from its tail. The code is correct, but to be general it would require an infinite memory. "Shuffler" solutions have evolved where the contents of the queue are shuffled through the memory either when enqueuing or dequeuing. For example, if enqueue writes to the top of the queue, dequeue reads from the bottom and moves the whole of the remaining queue down one memory cell. This is memory efficient but obviously wasteful of CPU effort. After various changes, queues based upon circular data structures have been evolved.
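The following Python sketch illustrates this black-box scoring scheme only. The five procedures below are hand-written stand-ins rather than evolved code (evolved individuals would be trees of primitives), and the memory size and test sequence are invented; the point is simply that fitness is computed from returned results alone.

import random

class CandidateStack:
    """Hand-written stand-in for an evolved individual: five procedures
    operating on an indexed memory."""
    def __init__(self):
        self.mem = [0] * 100
        self.sp = 0

    def initialise(self):
        self.sp = 0

    def push(self, v):
        self.mem[self.sp] = v
        self.sp += 1

    def pop(self):
        self.sp -= 1
        return self.mem[self.sp]

    def top(self):
        return self.mem[self.sp - 1]

    def empty(self):
        return self.sp == 0

def fitness(candidate, n_tests=100):
    """Score by black-box testing alone: call the procedures in a random test
    sequence and count results that match a known-correct reference stack."""
    candidate.initialise()
    reference, score = [], 0
    for _ in range(n_tests):
        if reference and random.random() < 0.5:
            if candidate.pop() == reference.pop():
                score += 1
        else:
            v = random.randint(0, 999)
            candidate.push(v)
            reference.append(v)
            if candidate.top() == v and not candidate.empty():
                score += 1
    return score

print(fitness(CandidateStack()))   # a correct candidate scores n_tests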
Figure 6.1
Circular Implementation of Queues.
We are investigating the theoretical background of our work using a number of approaches (Markov analysis, the GA schemata theorem, Population genetics, Statistical Mechanics (Entropy), Kauffman NK Fitness Landscapes and the effect of selection on fitness distributions).
William Langdon (supervised by Mark Levene)
National Grid plc. are interested in the problem of planning maintenance of equipment which forms the electricity transmission network so as to minimise costs throughout the entire network, throughout the year. There are considerable cost savings to be made by improved planning. W. B. Langdon was awarded a CASE studentship by National Grid plc.
Figure 6.2
Four Node Figure.
Initially we are starting with a small demonstration network (see figure), but intend to apply the lessons learnt to the real network, which contains 350 items with varying maintenance requirements over a 52-week plan.
The fitness function is based upon two parts: a benefit for performing maintenance and a penalty for causing line power flows to exceed their nominal capacity. In each week of the year the cheapest electricity generators are used first. Given this generation pattern and the load pattern, the electrical power flows for the entire network can be calculated. Each line in the network which exceeds its nominal capacity reduces the fitness of the plan.
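A simplified sketch of such a two-part fitness function is shown below. The network data, the "load-flow" calculation and all cost figures are stand-ins invented for the example; the real system uses National Grid's own network model and generation costs.

# Illustrative two-part plan fitness: reward maintenance performed, penalise
# overloaded lines.  Everything numeric here is hypothetical.
NOMINAL_CAPACITY = {"line_A": 100.0, "line_B": 80.0}   # MW, invented
MAINTENANCE_BENEFIT = 10.0                              # per item maintained
OVERLOAD_PENALTY = 50.0                                 # per overloaded line

def line_flows(week, items_out):
    """Stand-in for the real load-flow calculation: returns MW per line given
    the week's demand and which items are out for maintenance."""
    base = {"line_A": 60.0 + 2.0 * week, "line_B": 40.0 + week}
    # Assume flow diverts onto the remaining lines while an item is out.
    diverted = 15.0 * len(items_out)
    return {line: flow + diverted for line, flow in base.items()}

def plan_fitness(plan, n_weeks=52):
    """plan maps each maintenance item to the week in which it is taken out."""
    fitness = 0.0
    for week in range(n_weeks):
        items_out = [item for item, w in plan.items() if w == week]
        fitness += MAINTENANCE_BENEFIT * len(items_out)
        for line, flow in line_flows(week, items_out).items():
            if flow > NOMINAL_CAPACITY[line]:
                fitness -= OVERLOAD_PENALTY
    return fitness

print(plan_fitness({"transformer_1": 3, "transformer_2": 30}))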
Denise Gorse and David Romano-Critchley
Neural reinforcement learning algorithms have been shown to be very useful in situations in which, although it is possible to distinguish between 'good' and 'bad' behaviour of a learning system, it is not possible to give the system explicit instructions as to exactly how it should go about improving its performance. Such situations arise commonly in robotics applications, for example. Reinforcement learning was first defined for networks whose outputs were binary vectors, and almost all work done in this area has so far assumed the target outputs of the network are constrained in this way. This means that such a binary-output network would not in general be able to learn to predict a time series, or to output a control signal (such as the angular position of a robot arm) which naturally takes on continuous values.
We have been developing an extended form of reinforcement training which is able to learn continuous-valued functions. This involves adapting both the mean and variance of the network parameters during training. Adaptation of the variance is done by changing the lengths of the spike trains which carry the pulse-coded representations of the network variables. The new algorithm has been applied very successfully to a variety of benchmark problems such as the balancing of an inverted pendulum and the prediction of sunspot numbers - in this latter time series prediction problem, the reinforcement-trained system displayed less than half of the residual variance of a backpropagation-trained conventional system with a comparable number of parameters. We believe that this improvement in prediction performance is due to the stochastic reinforcement algorithm's ability to escape from local minima, which are a noted problem for backpropagation networks. The technique is hardware-realisable using probabilistic RAM (pRAM) technology, and the pRAM group at King's College plan to develop a chip to implement the method.
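The fragment below sketches the general idea of continuous-valued stochastic reinforcement learning: a Gaussian output whose mean and exploration width are both adapted from a scalar reward. It is a generic illustration in the style of reinforcement-trained stochastic units, not the pRAM pulse-coded algorithm itself, and the task, learning rates and reward shaping are arbitrary choices for the example.

import random

def target(x):
    return 0.3 * x + 0.2          # toy continuous function to be learned

# A single unit with a linear mean (weights for features [1, x]) and an
# adaptive exploration width sigma.
weights = [0.0, 0.0]
sigma = 0.5
eta, baseline = 0.05, 0.0

for step in range(20000):
    x = random.uniform(0.0, 1.0)
    mean = weights[0] + weights[1] * x
    y = random.gauss(mean, sigma)                  # stochastic continuous output
    reward = 1.0 - min(1.0, abs(y - target(x)))    # closer output, higher reward
    # Reinforcement update: move the mean towards outputs that earned more
    # than the running reward baseline, scaled by the exploration noise.
    delta = (reward - baseline) * (y - mean) / max(sigma, 1e-3)
    weights[0] += eta * delta
    weights[1] += eta * delta * x
    baseline += 0.01 * (reward - baseline)
    sigma = max(0.05, 0.5 * (1.0 - baseline))      # shrink exploration as reward rises

print("learned weights:", weights, "sigma:", sigma)

Note that only a scalar reward is ever fed back; the unit is never told the correct output, which is the defining feature of reinforcement (as opposed to supervised) training described above.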
Adrian Shepherd (supervised by Denise Gorse)
Multi-layer perceptrons (MLPs) are conventionally trained using supervised learning algorithms based on the well-known first-order descent method, backpropagation. Such training algorithms have proved inefficient when applied to many practical problems; convergence rates are often slow, and the MLP can get stuck in regions distant from the desired global minimum (owing to the presence of local minima or shallow gradients in the network error-surface). Moreover, performance is highly dependent on the setting of one or more user-defined parameters.
Recent research has shown that neural implementations of classical optimisation techniques are faster than backpropagation and capable of producing much lower network errors. This research compares the performance of methods for general optimisation (conjugate gradient and quasi-Newton methods) with the Levenberg-Marquardt method for nonlinear least squares. The latter is shown to be particularly effective in benchmark tests. For large-scale problems, where the number of function evaluations performed each training epoch is an important factor, a novel hybrid line-search algorithm has been developed which performs a single network evaluation per iteration in the best case, and safeguarded quadratic interpolation in the worst case.
An important observation in the course of the above research is that the improvement in training speed with second-order methods appears to be at the cost of a greater tendency to get trapped in local minima. One way to deal with this problem would be to combine second-order methods with stochastic techniques (such as simulated annealing, genetic algorithms, and on-line training) to produce fast global training algorithms. Good results were achieved with a simple two-stage training process consisting of on-line backpropagation followed (when close to the global minimum) by a second-order technique. However, there are difficulties in combining second-order and stochastic techniques in a more general way.
Finally, a novel global training strategy, based on homotopic (rather than stochastic) principles, is presented - Expanded Range Adaptation (ERA). ERA is an iterative process by which the training targets are progressively expanded from their mean target values in a series of steps. As it requires modification to the pattern targets alone, ERA can be implemented with any training algorithm. With sufficiently small initial ranges, ERA has proved highly effective at avoiding local minima in benchmark tests, at the cost of only a small increase in training time.
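Since ERA touches only the pattern targets, it can be written as a thin wrapper around any trainer. The sketch below shows such a wrapper in Python with a trivial stand-in trainer (a single linear unit trained by gradient descent, used purely to make the wrapper runnable); the number of stages, the linear expansion schedule and all data are illustrative rather than those used in the benchmark tests.

def era_train(train_step, inputs, targets, n_stages=5, epochs_per_stage=200):
    """Expand targets from their mean in stages; any trainer can be plugged in."""
    mean_t = sum(targets) / len(targets)
    for stage in range(1, n_stages + 1):
        alpha = stage / n_stages                 # range expands 1/n, 2/n, ..., 1
        staged = [mean_t + alpha * (t - mean_t) for t in targets]
        for _ in range(epochs_per_stage):
            train_step(inputs, staged)           # any training algorithm here

# Stand-in "network": y = w*x + b trained by plain gradient descent.
w, b = 0.0, 0.0
def train_step(xs, ts, lr=0.05):
    global w, b
    for x, t in zip(xs, ts):
        err = (w * x + b) - t
        w -= lr * err * x
        b -= lr * err

xs = [0.0, 0.25, 0.5, 0.75, 1.0]
ts = [0.1, 0.3, 0.5, 0.7, 0.9]                   # toy pattern targets
era_train(train_step, xs, ts)
print(round(w, 2), round(b, 2))                  # approaches w = 0.8, b = 0.1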
Konrad Feldman, Suran Goonatilake and Philip Treleaven
The Neuroforecasting Club is part of the Department of Trade & Industry Neural Computing Technology Transfer Programme. The club aims to establish an awareness of neural network technology amongst merchant banks whose needs for efficient asset management are of a similar type. The management consortium for the Neuroforecasting Club is a collaboration between University College London and London Business School.
The Neuroforecasting Club applies intelligent systems to:
As part of the project UCL has developed special-purpose neural network and genetic algorithm simulation environments for financial modelling. We have developed a novel genetic rule induction system for discovering financial trading rules that is now being used by several Club members. Current UCL research focuses on the use of genetic rule induction for extracting knowledge from trading decisions made by an expert financial trader. Extensive comparisons between this approach and a range of statistical methods, including discriminant analysis, have also been made.
Meyer Nigri, Tony Wicks and Philip Treleaven
The SMART CHIPS project is working towards the next generation of "smart chips". These smart chips embody intelligent techniques, such as fuzzy logic, neural networks and genetic algorithms, and are increasingly appearing in automotive electronics, office system products and even consumer goods.
The project aims to offer straightforward tools for the implementation of intelligent hardware systems using Application Specific Integrated Circuits (ASICs). These tools comprise a software programming environment and a silicon compiler, which offer the user a unified set of comprehensive solutions for exploiting intelligent techniques, both at software and hardware levels.
Figure 6.3
The SMART CHIPS Environment.
High level software simulations of intelligent techniques on workstations have been used extensively, but remain slow and unsuited for use in embedded applications. Hardware implementation can be achieved through the use of microcontrollers and microprocessors which are well suited for low to medium performance requirements, where a degree of flexibility might be required. However, for high performance applications and those requiring high volume, where unit costs must be minimised, ASIC implementation methods represent the best solution.
The SMART CHIPS project aims to bridge this gap through silicon compilation techniques. This allows users to accurately model their intelligent system at a high level, with a fast, automated process for actual VLSI ASIC realisation. Such a tool allows rapid prototyping and testing of a system for a specific intelligent task and produces a cost effective ASIC implementation.
Programming Environment
The SMART CHIPS programming environment provides the necessary components to allow a target application to be implemented through software simulation, hardware emulation (in standard microcontrollers and microprocessors), and hardware acceleration (through high-performance ASICs). High level software simulation allows rapid prototyping and testing of an application that can then be mapped to either a software or hardware solution.
The environment extends VML (Virtual Machine Language), developed as part of the Galatea Esprit project, to allow accurate high-level simulation of intelligent techniques. The language will be used as the input to the silicon compiler and allows straightforward definition of intelligent techniques. The language can be used to accurately model any proposed intelligent method. This modelling, at an early definition stage, is crucial since it allows rapid modifications to be made within the framework of a fast design and test cycle. Furthermore, it reduces the need for testing at a later stage.
Silicon Compilation
The SMART CHIPS silicon compiler overcomes the problems found in general purpose silicon compilers by directly targeting a library of hardware components, which are optimised for the execution of intelligent techniques, allowing optimal ASIC realisations.
The silicon compiler will take the high level VML definition and automatically translate it to an ASIC architecture defined in VHDL. Low level synthesis tools can then be used to map these VHDL models to actual chip layouts. The silicon compilation process consists of three fundamental steps: compilation and optimisation of the VML code, translation of this code to a graph based representation which is more suitable for hardware synthesis, and finally the creation of the data path and processing structure of the VHDL smart chip model.
Target Architectures
A number of specialised architectures are presently being developed for fuzzy logic and neural network applications, offering a range of area/speed performance to meet user requirements. UCL's generic neuron, a well proven architecture for area efficient neural network implementation, is being further extended to enhance its performance. For applications requiring very high performance, systolic and bit-serial architectures are being investigated. For fuzzy logic applications two novel architectures are being developed: a generic fuzzy processor, similar in concept to the generic neuron, and a feed-forward fuzzy processor, for very high speed applications. Hybrid architectures are also under investigation that utilise multiple intelligent techniques, including genetic algorithms. By targeting a range of potential architectures the SMART CHIPS approach becomes easily extendable and capable of adapting to demands that may be required by other emergent intelligent techniques.
Sukhdev Khebbal, Danny Shamhong and Philip Treleaven
The HANSA (Heterogeneous Application geNerator Standard Architecture) project is attempting to incorporate object-oriented integration techniques into a standardised application generation framework. The project aims to promote standardisation between European system houses by producing an object-oriented cross-platform framework that will allow developers to rapidly generate applications by configuring industry standard tools, such as databases and spreadsheets, with novel artificial intelligence techniques such as expert systems, neural networks and genetic algorithms (see figure below).
Figure 6.4
The HANSA Framework building upon
Industry Standards.
The HANSA project has adopted the object-oriented philosophy via the C++ language and object-oriented techniques such as the document-oriented interface and an interapplication communication protocol. This communication protocol is inspired by, and compatible with, Microsoft's OLE (Object Linking and Embedding). The HANSA project uses these object-oriented techniques to develop domain-specific application generators, a toolkit of application-specific and industry standard tools, and a generic framework for their combination (see Figure 6.5). The choice of standardising on an OLE-style protocol is a direct result of the desire to achieve rapid prototyping of applications, and to allow the use and integration of industry standard packages.
The availability of HANSA application generators will simplify the integration and configuration of tools within a specific domain and decrease the time to market of applications, while increasing the quality of application packages and facilitating their future maintenance and development.
The HANSA framework and software toolkit operate on PCs running MS-Windows 3.1 or Windows NT, and on workstations running UNIX/X-Windows. For UNIX/X-Windows systems, HANSA has developed an OLE-like interface following closely the philosophy of the "Object Management Architecture" (from the Object Management Group), and has built the framework on top of the services supplied by the "Object Request Broker". The common OLE-like interface facilitates the easy porting of application code from one platform to the other. An important contribution of HANSA is the implementation of an OLE-style protocol on the UNIX platform.
The HANSA project is coordinated by Thorn EMI CRL, with University College London as associated partner. The European partners include: Brainware, Intelligent Financial Systems Gmbh (IFS) and J&J Financial Consultants from Germany; MIMETICS and PROMIND from France; and O.GROUP Srl. from Italy. The domain specific application generators are being constructed by the partners in the following four business areas:
Figure 6.5
Typical HANSA Application Generation.
Sukhdev Khebbal (supervised by Philip Treleaven)
Fuelled by the need to find solutions to real-world complex problems, there is growing commercial interest in the integration of conventional symbolic techniques (such as expert systems) and the newer adaptive processing techniques (such as neural networks).
With close examination, it is evident that both the symbolic and adaptive approaches have strengths and weaknesses, and that these techniques should be viewed not as competing models but as complementary ones. By integrating these two fundamentally different approaches we can avoid many of the weaknesses inherent in each methodology, whilst capitalising on their individual strengths.
In support of this hybrid strategy, there is an emerging realisation that most complex real world problems are difficult to solve by either symbolic or adaptive processing on their own, but require the synergy of these complementary approaches.
This research into Intelligent Hybrid Systems has been investigating:
To explore the practical aspects of hybrid systems, an object-oriented environment has been constructed that investigates the interface, functionality and architectural requirements for integrating techniques such as Neural Networks and Expert Systems. This hybrid environment adopts an object-oriented approach to implement the core mechanisms for communication and interfacing between techniques.
The environment is being used to solve a real-world Cargo Consignment problem for British Airways. Adopting a hybrid approach to solving complex problems offers greater flexibility and power than solutions that utilise one processing paradigm.
Suran Goonatilake and John Campbell
We are investigating the use of Intelligent Hybrid Systems in assisting complex financial decision making. It is observed that there is no single intelligent technique that is appropriate for all decision support tasks. Each intelligent technique has different properties making it suitable for particular tasks over others. Hence we have developed a hybrid system approach for financial decision making, combining Genetic Algorithms, Fuzzy Systems, Expert Systems and Neural Networks.
This system has a range of properties making it suited for most complex decision support tasks. These are:
This integration between the different modules is made possible by the use of a common knowledge communication framework. By using production rules with the same syntactic structure as the common knowledge representation scheme, the ability to transfer knowledge between the genetic algorithm, expert and fuzzy systems is demonstrated. The production rule format helps users to understand the reasoning processes and also allows them to change the existing knowledge and to add new knowledge.
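The Python fragment below sketches what such a shared production-rule format might look like. The field names, relations, confidence attribute and the example trading rule are hypothetical illustrations and are not the system's actual syntax; the point is that one rule structure can be produced by the genetic algorithm, read by the user, and passed to the expert or fuzzy modules.

from dataclasses import dataclass

@dataclass
class Condition:
    field: str       # e.g. a market indicator (hypothetical)
    relation: str    # "<", ">", or "is" for a fuzzy/categorical term
    value: object

@dataclass
class Rule:
    conditions: list
    conclusion: str
    confidence: float   # could be set by the GA, tuned by the fuzzy module

def fires(rule, record):
    """True if every condition of the rule holds for the given data record."""
    tests = {"<": lambda a, b: a < b,
             ">": lambda a, b: a > b,
             "is": lambda a, b: a == b}
    return all(tests[c.relation](record[c.field], c.value)
               for c in rule.conditions)

# A rule in this common format, readable and editable by the user.
rule = Rule(conditions=[Condition("five_day_trend", ">", 0.0),
                        Condition("volatility", "is", "low")],
            conclusion="buy",
            confidence=0.7)
print(fires(rule, {"five_day_trend": 0.4, "volatility": "low"}))   # True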
The system has been demonstrated in the complex task of making foreign exchange trading decisions. The system has been evaluated by making simulated trades between the British Pound and the German Deutschmark. The results are very encouraging and illustrate the advantages of the approach.
The advantages of decision combination, in which trading decisions are made through a process of 'voting' by the different modules, have been demonstrated. The effectiveness of human traders in performing the same trading task has also been evaluated and compared with the performance of the hybrid system.
Zahid M. Shafi (supervised by Philip Treleaven, with Bill Simmonds from SIRA Ltd.)
We are investigating the application of artificial intelligence to scheduling and optimisation problems. The following techniques will be considered: Genetic Algorithms, Neural Networks, Multi-Agent Architectures and Chaos Theory. The initial research will concentrate on identifying a taxonomy of scheduling and optimisation problems. Further work will then identify the most appropriate technology for a particular class of problem; the solution may be a particular technique or a hybrid of techniques.
Specific demonstration systems will be developed with a view to creating a scheduling/optimisation software workbench. This will eventually allow the rapid development of scheduling or optimisation problem solvers. The commercial application areas include manufacturing production schedules, general logistics and the aviation industry. This work is being done as part of the DTI's Postgraduate Training Partnership initiative with SIRA Ltd.
Laura Dekker, Meyer Nigri and Philip Treleaven
Intelligent Architecture is a DTI/EPSRC-funded project aimed at addressing a major area of research and technology shortfall in the multi-disciplinary built environment sector. Central to the project is the development of an object-oriented 3D modelling environment, providing:
Designers, producers and managers of built environments face considerable problems in reconciling the sometimes conflicting constraints of multiple interacting systems. Too often getting things right in one place produces a problem in another domain; too often the laborious process of redrawing and re-inputting data to different analysis software means that promising and innovative design paths may be left unexplored. The Intelligent Architecture 'workbench' provides an environment for modelling, analysis and experimentation, through the full project lifecycle from inception and financing, through design, construction and project management to space planning, facilities management and urban management.
The project, in association with the Bartlett School of Architecture, Planning, Building and Environmental Engineering, brings together a consortium from the built environment sector and developers of 3D rendering software and GIS systems.
The Intelligent Architecture Workbench Design
The strategy behind the workbench design is to give a flexible and extensible architecture, so that the workbench need provide only the core functionality from which application-specific problem-solving tools can be built.
3D and abstract objects in the modelled world have an internal state defined by attributes, constraints defined by rule sets, and behaviour expressed as a high-level-language 'script'. Scripts are triggered by events, so that objects can interact, for example to simulate the effects of lighting, heat, air-flow, pedestrian movement and other processes within a built world.
The intelligent toolkit includes neural networks, genetic algorithms and other evolutionary techniques, together with a rule-based and fuzzy logic system. Inside the workbench itself, the intelligent tools are used, for example, to optimise image rendering performance, and to add intelligence to the user interface.
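The following Python fragment sketches this object model in miniature: an object with attributes, a constraint expressed as a rule, and a script triggered by an event. The workbench's actual object model and scripting language are not reproduced here; the room object, the heat-gain event and the comfort rule are invented for illustration.

class WorldObject:
    def __init__(self, name, **attributes):
        self.name = name
        self.attributes = attributes
        self.constraints = []      # callables returning True if satisfied
        self.scripts = {}          # event name -> callable(obj, event_data)

    def on(self, event, script):
        self.scripts[event] = script

    def handle(self, event, data):
        if event in self.scripts:
            self.scripts[event](self, data)
        broken = [c.__name__ for c in self.constraints if not c(self)]
        if broken:
            print(f"{self.name}: constraint(s) violated: {broken}")

# A hypothetical room whose temperature responds to a "heat_gain" event and
# is constrained by a simple comfort rule.
def comfortable_temperature(obj):
    return obj.attributes["temperature"] <= 26.0

room = WorldObject("meeting_room", temperature=21.0, area_m2=30.0)
room.constraints.append(comfortable_temperature)
room.on("heat_gain", lambda obj, watts:
        obj.attributes.update(temperature=obj.attributes["temperature"]
                              + watts / 1000.0))

room.handle("heat_gain", 2000.0)   # temperature rises to 23.0, rule holds
room.handle("heat_gain", 6000.0)   # temperature rises to 29.0, rule violated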
Dynamic links to other applications allow the workbench to make use of their facilities, by 'plugging-in' both existing and future software tools. This approach to integration is seen as crucial to the acceptance of the workbench, and allows its functionality to extend beyond currently anticipated needs.
Figure 6.6
A Conceptual View of the Intelligent
Architecture Workbench.
The generic nature of the workbench gives scope for its use in other fields that centre on 3D modelling and visualisation, as seemingly disparate as molecular modelling or textile and garment design.
Solving Real Problems from the Built Environment
The workbench is being validated through the development of prototype end-user applications for strategic urban design, and building and facilities management. These make use of intelligent techniques for modelling, analysis, optimisation and pattern recognition.
Among these applications, neural networks and fuzzy logic systems are being used for classification of 3D objects, converting raw geometric data to knowledge that can feed in to analytical processes. Since the design of the classification system is generic, it can be used in a number of ways. Initially, using a spell checker metaphor, the classifier will help users to filter information about the 3D model. Objects are then classified in distinct categories, holding specific information about their properties. This information is then used to activate analysis tools that operate only on specific data, thus reducing communication between the workbench and analysis tools. The classifier will be further expanded to aid users in several laborious tasks found in 3D modelling, such as intelligent editing and selective visualisation.
Genetic algorithms and other evolutionary techniques are being used, in conjunction with a knowledge-base, for scheduling of construction tasks and for spatial planning. The knowledge-base holds data and rules about materials, regulations, the construction processes, and so on, so that spatial and temporal problem constraints can be defined. By providing the capability rapidly to test and optimise qualitatively different approaches, this decision support tool can also help to determine the robustness of any given workplan - an important feature when dealing with a real and unpredictable world.
Laura Dekker (supervised by Philip Treleaven)
One of the primary concerns in architecture is the use of space, and architects have their own language for understanding, modelling and referencing it. Using cases from the built environment as a focus, this work examines the nature of spatial problems, constraint-based systems and optimisation techniques.
Looking at spatial problems as a class, and drawing on past and current approaches to solving them, the aim is to draw out an understanding of their common fundamental characteristics. This confirms that the representation used to describe 3D objects and space is crucial to the successful solution of spatial problems. This work investigates the best minimal representations for defining 3D objects and their relationships in continuous and discretised space, in the context of any particular type of analysis or manipulation. Context is important, in that it determines the appropriate view of objects, so that a stock dealing floor layout optimiser needs to take a different view of the same building than, say, an energy consumption analysis tool. It is vital then to have a polymorphic, extensible representation to deal with this.
As a more concrete product of this work, a set of spatial optimisation tools, aimed at tackling realistically complex problems, is being designed and developed, to integrate with the modelled 3D world and its constituent objects. Genetic algorithms are being used in their traditional rôle as optimisers, together with novel dynamic hill-climbers and other stochastic intelligent techniques. Early investigation has shown the need for integration with a rule-based system for even the smallest problems. Such a hybridisation then allows constraints to be imposed on objects, as well as the application of specific problem-solving heuristics. Though inevitably reducing the generality of the optimisation systems, the use of such localised information capitalises on explicitly and implicitly held knowledge.
Much of this work is integral to the DTI/EPSRC-funded Intelligent Architecture project. This project offers exposure to a wide set of real problem cases, each demonstrating different aspects of spatial use, as a focus for the exploration of spatial representations and optimisation techniques.
Surayya Khan (supervised by Philip Treleaven with Bill Simmonds from SIRA Ltd)
The Department of Computer Science at UCL, SIRA Ltd and Concept II Research Ltd are currently collaborating on a number of projects to apply intelligent systems to the design and manufacture of fashion and textiles.
Present CAD/CAM systems are well suited to certain aspects of the clothing and textile industry, but there are some shortcomings that present opportunities for new developments. The aim of the collaborative projects is to make the most of existing technology, whilst directing current work in industry and research towards enhancing existing software with new developments. It is expected that novel intelligent techniques (such as expert systems, neural networks and genetic algorithms) will be applied for the rapid prototyping of ideas in the context of technical knowledge and constraints. Using such techniques, more efficient use will be made of the mass of information available in the design and manufacturing processes.
Work on two projects involving collaboration between SIRA, UCL and Concept II Research is already in progress; both projects are described below.
Current work involves investigating the use of Artificial Intelligence techniques, in one or more software tools, to solve specific problems that arise in the design and manufacture of clothing and textiles. The results from the development of such tools offer insight into how design and manufacturing knowledge is best represented within a knowledge base which can be readily accessed by various other software tools. Such analysis could form the basis for an integrated environment for the whole industry. The research involves the application of genetic algorithms, expert systems and possibly neural networks to solve the lay-planning problem in garment manufacture. Lay-planning involves fitting garment pieces onto cloth, in the most compact configuration, to minimise wastage between pieces. Also included in the research is the development of a knowledge-based design library with an intelligent GUI. SIRA Ltd and Concept II Research Ltd are collaborating on the lay-planning project, with Courtaulds possibly joining the design library project.
The collaboration between UCL and SIRA Ltd., an independent research centre, has existed for two years and is known as the SIRA/UCL Postgraduate Centre. This is one of the five PTP partnerships created by the DTI to help close the gap between industry and research and to promote technology transfer. Concept II Research Ltd is a UK IT company and is the country's leading supplier of clothing and textile CAD systems to colleges and companies, both in the UK and abroad.
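As a simplified illustration of the lay-planning problem described above, the GA sketch below packs rectangular stand-ins for garment pieces onto a strip of cloth, with the chromosome encoding only the packing order. Real garment pieces are irregular shapes and the real objective is more sophisticated; the cloth width, piece dimensions and GA settings are all invented for the example.

import random

CLOTH_WIDTH = 150                      # cm, hypothetical
PIECES = [(60, 40), (35, 70), (80, 30), (45, 45), (30, 100), (55, 25)]  # w, h

def used_length(order):
    """Greedy shelf packing: place pieces in the given order onto shelves
    across the cloth width, and return the total cloth length used."""
    shelves = []                       # each shelf is [used_width, shelf_height]
    for idx in order:
        w, h = PIECES[idx]
        for shelf in shelves:
            if shelf[0] + w <= CLOTH_WIDTH:
                shelf[0] += w
                shelf[1] = max(shelf[1], h)
                break
        else:
            shelves.append([w, h])
    return sum(height for _, height in shelves)

def order_crossover(a, b):
    """Keep a slice of parent a, fill the rest in parent b's order."""
    i, j = sorted(random.sample(range(len(a)), 2))
    middle = a[i:j]
    rest = [g for g in b if g not in middle]
    return rest[:i] + middle + rest[i:]

def ga_layplan(pop_size=30, generations=200):
    population = [random.sample(range(len(PIECES)), len(PIECES))
                  for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=used_length)          # shorter lay is fitter
        parents = population[:pop_size // 2]
        children = [order_crossover(random.choice(parents),
                                    random.choice(parents))
                    for _ in range(pop_size - len(parents))]
        population = parents + children
    return min(population, key=used_length)

best = ga_layplan()
print(best, used_length(best), "cm of cloth used")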
Raghbir Sandhu (supervised by Philip Treleaven with Andrew Newland from KPMG Peat Marwick)
The aim of this work is to investigate Intelligent Spatial Decision Support Systems (ISDSS). We define these as being systems designed primarily for analysis, modelling and decision support with spatial (geographical) data. It is clear that current Geographical Information Systems (GIS) are extremely limited in this regard and thus fall considerably short of user requirements.
Current research indicates that intelligent systems techniques (neural networks, fuzzy logic, expert systems, genetic algorithms and non-linear modelling) are of great value for spatial problem-solving, greatly enhancing the tools available for planners faced with complex spatial problems. Such tools are capable of performing sophisticated modelling, analysis, pattern matching and optimisation.
A further consideration is the large quantities of spatial data which are being collected and stored in GIS. Effective storage and exploration of large spatial data sets is a clear difficulty facing existing GIS. One of the contributing factors is the almost universal reliance on relational database technology, which is inefficient for representing data with a high semantic content.
An object-oriented architecture is being developed as a basis for ISDSS. This will be used to implement a framework for ISDSS development. A simple ISDSS has already been built for KPMG to explore the spatial development of High Technology Industry, which is known to exhibit unique locational factors and growth patterns. This uses a self-organising, non-linear model for prediction and a genetic algorithm for model optimisation. The application has been built for the Apple Macintosh using C++. Further application areas include retail, distribution, banking, insurance, energy, healthcare and construction.