An important and challenging data mining application in marketing is to learn models for predicting potential customers who contribute large profits to a company under resource constraints. In this paper, we first formulate this learning problem as a constrained optimization problem and then convert it into an unconstrained Multi-objective Optimization Problem (MOP). A parallel Multi-Objective Evolutionary Algorithm (MOEA) running on consumer-level graphics hardware is used to handle the MOP. We perform experiments on a real-life direct marketing problem to compare the proposed method with the parallel Hybrid Genetic Algorithm, the DMAX approach, and a sequential MOEA. The proposed method is observed to be much more effective and efficient than the other approaches.
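The constrained-to-multi-objective conversion described above can be illustrated with a toy sketch: treat the resource-constraint violation as a second objective alongside profit, and keep only the Pareto-nondominated candidate selections. The `profit` and `violation` functions and the bit-vector customer encoding below are illustrative assumptions, not the paper's actual model.

```python
import random

def profit(x):
    # Hypothetical first objective: total profit of the selected customers
    # (here each selected customer contributes one unit).
    return sum(x)

def violation(x, budget=5):
    # Hypothetical second objective: how far the selection exceeds the
    # resource budget; zero means the original constraint is satisfied.
    return max(0, sum(x) - budget)

def dominates(a, b):
    """a Pareto-dominates b if a is no worse in both objectives
    (maximise profit, minimise violation) and strictly better in one."""
    pa, pb = profit(a), profit(b)
    va, vb = violation(a), violation(b)
    return pa >= pb and va <= vb and (pa > pb or va < vb)

random.seed(0)
# Random population of 8-customer selection vectors
pop = [[random.randint(0, 1) for _ in range(8)] for _ in range(20)]
# The non-dominated front trades profit against constraint violation
pareto = [x for x in pop if not any(dominates(y, x) for y in pop)]
```

An MOEA would evolve such a population toward this front rather than penalising infeasible solutions outright.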
In recent years the potential and programmability of Graphics Processing Units (GPUs) have raised noteworthy interest in the research community for applications that demand high computational power. In particular, financial applications containing thousands of high-dimensional samples often use machine learning techniques such as neural networks. One of their main limitations is that the learning phase can be extremely time-consuming due to the long training times required, which constitutes a hard bottleneck for their use in practice. Their implementation in graphics hardware is therefore highly desirable as a way to speed up the training process. In this paper we present a bankruptcy prediction model based on a parallel implementation of the Multiple BackPropagation (MBP) algorithm, tested on a real data set of French companies (healthy and bankrupt). Results from running the MBP algorithm in a sequential CPU version and in a parallel GPU implementation show that the latter substantially reduces computational cost while yielding very competitive performance.
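As a rough illustration of the training process being accelerated, here is plain backpropagation on a toy AND task in pure Python; this is a generic sketch, not the MBP variant or its GPU kernels, and the network size, learning rate, and task are arbitrary choices.

```python
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

random.seed(1)
# 2 inputs -> 2 hidden sigmoid units -> 1 sigmoid output
W1 = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(2)]
b1 = [0.0, 0.0]
W2 = [random.uniform(-1, 1) for _ in range(2)]
b2 = 0.0

data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]  # logical AND
lr = 0.5

for _ in range(10000):
    for x, t in data:
        # Forward pass
        h = [sigmoid(W1[j][0] * x[0] + W1[j][1] * x[1] + b1[j]) for j in range(2)]
        y = sigmoid(W2[0] * h[0] + W2[1] * h[1] + b2)
        # Backward pass: squared-error gradients through the sigmoids
        dy = (y - t) * y * (1 - y)
        dh = [dy * W2[j] * h[j] * (1 - h[j]) for j in range(2)]
        # Gradient-descent weight updates
        for j in range(2):
            W2[j] -= lr * dy * h[j]
            b1[j] -= lr * dh[j]
            for i in range(2):
                W1[j][i] -= lr * dh[j] * x[i]
        b2 -= lr * dy

def predict(x):
    h = [sigmoid(W1[j][0] * x[0] + W1[j][1] * x[1] + b1[j]) for j in range(2)]
    return sigmoid(W2[0] * h[0] + W2[1] * h[1] + b2)
```

The inner loops over samples and weights are exactly the data-parallel work that a GPU implementation distributes across threads.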
We present a successful design for high-performance, low-resource-consumption hardware for Support Vector Classification and Support Vector Regression. The system has been implemented on a low-cost FPGA device and exploits the advantages of parallel processing to compute the feed-forward phase in Support Vector Machines. In this paper we show that the same hardware can be used for both classification and regression problems, and we report satisfactory results on an image recognition problem using SV multiclass classification and on a function estimation problem using SV regression.
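The feed-forward phase the hardware computes can be sketched as follows. The support vectors, coefficients, and kernel parameter below are hand-picked for illustration, not taken from a trained machine:

```python
import math

# Hypothetical "trained" model: two support vectors stored with their
# combined coefficients alpha_i * y_i (hand-chosen, not learned)
support_vectors = [([1.0, 1.0], +1.0), ([-1.0, -1.0], -1.0)]
bias = 0.0
GAMMA = 0.5

def rbf(a, b):
    """Gaussian (RBF) kernel: K(a, b) = exp(-gamma * ||a - b||^2)."""
    d2 = sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return math.exp(-GAMMA * d2)

def decision(x):
    """Feed-forward phase: f(x) = sum_i alpha_i * y_i * K(sv_i, x) + b.
    The sign of f(x) gives the predicted class; for regression the raw
    value of f(x) would be the estimate."""
    return sum(coef * rbf(sv, x) for sv, coef in support_vectors) + bias
```

Each kernel evaluation is independent, which is why the sum parallelises naturally onto FPGA processing elements and why the same datapath serves both classification and regression.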
There is significant interest in the research community in developing large-scale, high-performance implementations of neuromorphic models. These have the potential to provide significantly stronger information processing capabilities than current computing algorithms. We present the implementation of five neuromorphic models on a 50 TeraFLOPS, 336-node PlayStation 3 cluster at the Air Force Research Laboratory. The five models examined span two classes of neuromorphic algorithms: hierarchical Bayesian and spiking neural networks. Our results indicate that the models scale well on this cluster and can emulate between 10^8 and 10^10 neurons. Our study indicates that a cluster of PlayStation 3s can provide an economical, yet powerful, platform for simulating large-scale neuromorphic models.
This paper reports on the migration of the molecular docking application "Autodock" to NVIDIA CUDA. Autodock is a drug discovery tool that uses a Genetic Algorithm to find the optimal docking position of a ligand to a protein, so speeding it up greatly benefits the drug discovery process. In this paper, we show how significant speedup of Autodock can be achieved using NVIDIA CUDA and describe the strategy of porting the Genetic Algorithm to CUDA. Three different parallel design alternatives are discussed. The resulting implementation achieves a ~50x speedup on the fitness function evaluation and a 10x to 47x speedup on the core genetic algorithm.
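The key source of parallelism, evaluating every individual's fitness at once, can be sketched on a CPU with a thread pool standing in for CUDA blocks. The toy OneMax fitness below is an illustrative stand-in for Autodock's docking-energy evaluation, and all sizes and parameters are arbitrary assumptions:

```python
import random
from concurrent.futures import ThreadPoolExecutor

def fitness(bits):
    # Toy stand-in for an expensive docking-energy evaluation:
    # count of 1-bits (OneMax), to be maximised.
    return sum(bits)

random.seed(2)
POP, LEN, GENS = 30, 16, 40
pop = [[random.randint(0, 1) for _ in range(LEN)] for _ in range(POP)]

with ThreadPoolExecutor(max_workers=8) as pool:
    for _ in range(GENS):
        # Evaluate the whole population in parallel, one task per
        # individual -- the step a GPU port accelerates most.
        scores = list(pool.map(fitness, pop))
        ranked = [ind for _, ind in
                  sorted(zip(scores, pop), key=lambda p: -p[0])]
        parents = ranked[:POP // 2]          # truncation selection
        children = []
        while len(children) < POP - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, LEN)
            child = a[:cut] + b[cut:]        # one-point crossover
            child[random.randrange(LEN)] ^= 1  # point mutation
            children.append(child)
        pop = parents + children

best = max(pop, key=fitness)
```

When the fitness function dominates the runtime, as in docking, parallelising this one loop accounts for most of the achievable speedup.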
Metaheuristics are used for solving optimization problems since they are able to compute near-optimal solutions in reasonable time. However, solving large instances may pose a challenge even for these techniques. For this reason, parallelizing metaheuristics is an interesting alternative for decreasing the execution time and providing a different search pattern. In recent years, GPUs have evolved at a breathtaking pace: originally specific-purpose devices, within a few years they became general-purpose shared-memory multiprocessors. Nowadays, these devices are a powerful, low-cost platform for implementing parallel algorithms. In this paper, we present a preliminary version of PUGACE, a cellular Evolutionary Algorithm framework implemented on GPU. PUGACE was designed with the goal of providing a tool for easily developing this kind of algorithm. Experimental results on the Quadratic Assignment Problem are presented to show the potential of the proposed framework.
Rather than attempting to evolve a complete program from scratch, we demonstrate genetic interface programming (GIP) by automatically generating a parallel CUDA kernel with identical functionality to existing highly optimised ancient sequential C code (gzip). Generic GPGPU nVidia kernel C++ code is converted into a BNF grammar. Strongly typed genetic programming uses the BNF to generate compilable and executable graphics card kernels. Their fitness is given by running the population on a GPU with randomised subsets of training data, itself derived from gzip's SIR test suite. Back-to-back validation uses the original code as a test oracle.
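Grammar-driven program generation of this kind can be sketched in miniature: the tiny BNF below (a hypothetical arithmetic grammar, not the paper's CUDA kernel grammar) is randomly sampled to produce compilable expressions, which are then scored against training cases.

```python
import random

# A tiny BNF-style grammar over one variable x; every derivable
# string is a valid (compilable) Python expression.
GRAMMAR = {
    "<expr>": [["<expr>", "<op>", "<expr>"], ["x"], ["1"], ["2"]],
    "<op>": [["+"], ["-"], ["*"]],
}

def derive(symbol, rng, depth=0):
    """Expand a non-terminal by randomly chosen productions.
    Beyond the depth limit, only single-symbol (terminal-leaning)
    productions are allowed, so derivation always terminates."""
    if symbol not in GRAMMAR:
        return symbol  # terminal: emit as-is
    rules = GRAMMAR[symbol]
    if depth > 3:
        rules = [r for r in rules if len(r) == 1]
    rule = rng.choice(rules)
    return "".join(derive(s, rng, depth + 1) for s in rule)

rng = random.Random(4)
pop = [derive("<expr>", rng) for _ in range(50)]

def error(expr):
    # Fitness: squared error against a target function (here x**2)
    # over a small set of training cases.
    return sum((eval(expr, {"x": v}) - v * v) ** 2 for v in range(-3, 4))

best = min(pop, key=error)
```

Because every sampled string is syntactically valid by construction, fitness evaluation never wastes time on uncompilable candidates, the same property the BNF guarantees for the generated CUDA kernels.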
Over recent years, interest in hybrid metaheuristics has risen considerably in the field of optimization. Combinations of methods such as evolutionary algorithms and local search have yielded very powerful search algorithms. However, due to their complexity, the computational time of the search remains exorbitant when large problem instances are to be solved. The use of GPU-based parallel computing is therefore required as a complementary way to speed up the search. This paper presents a new methodology for efficiently and effectively designing and implementing hybrid evolutionary algorithms on GPU accelerators. The methodology enables efficient mappings of the explored search space onto the GPU memory hierarchy. The experimental results show that the approach is very efficient, especially for large problem instances.
This paper proposes an evolutionary algorithm for solving Quadratic Assignment Problems (QAPs) with parallel independent runs using GPU computation, and gives a statistical analysis of how speedup can be attained with this model. With the proposed model, we achieve GPU computation performance that is nearly proportional to the number of equipped multiprocessors (MPs) in the GPUs, and we explain these computational results through statistical analysis. Compared to CPU computation, GPU computation shows average speedups of 4.4x with a single GPU and 7.9x with two GPUs.
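The parallel-independent-run model can be sketched sequentially: the same algorithm is launched with different seeds (one run per multiprocessor in the paper's setting) and the best result across runs is kept. The toy bit-flip hill climber below is an illustrative stand-in for the paper's QAP solver:

```python
import random

def run_search(seed, gens=30):
    """One independent run on a toy 12-bit OneMax problem, standing in
    for one QAP run assigned to one GPU multiprocessor."""
    rng = random.Random(seed)
    best = [rng.randint(0, 1) for _ in range(12)]
    for _ in range(gens):
        cand = best[:]
        cand[rng.randrange(12)] ^= 1   # flip one bit
        if sum(cand) >= sum(best):     # keep if no worse
            best = cand
    return best

# Parallel independent runs: identical algorithm, different seeds.
# On a GPU each call would execute concurrently on its own MP.
results = [run_search(seed) for seed in range(8)]
overall = max(results, key=sum)
```

Since the runs share nothing, the model needs no inter-processor communication, which is why its speedup scales almost linearly with the number of MPs.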
W.B.Langdon 8 May 2010 (last update 5 June 2013)