Cédric Archambeau
Principal Applied Scientist, Amazon Web Services, Berlin, Germany (cedrica#amazon,com)
Fellow, Robust Machine Learning Program, European Laboratory for Learning and Intelligent Systems, Berlin Unit
Associate Member, Department of Statistics, University of Oxford, United Kingdom
Formerly Honorary Senior Research Associate, Centre for Computational Statistics and Machine Learning, University College London, United Kingdom
[Google Scholar] [Semantic Scholar] [DBLP] [LinkedIn]
Follow me on Twitter: @cedapprox | Follow me on Mastodon: @cedapprox@sigmoid.social

**About me**

My research interests lie in probabilistic machine learning and Bayesian decision making. Machine learning has a large overlap with statistics, plays a central role in data science, and is fuelling the AI revolution we are experiencing today. My recent work focusses on automated machine learning and trustworthy AI.

I joined Amazon, Berlin, as an Applied Science Manager in October 2013 to develop zero-parameter machine learning algorithms. I am now a Principal Applied Scientist at Amazon Web Services, where I oversee the product-related science powering Amazon SageMaker and lead long-term science initiatives in the areas of automated machine learning, continual learning, model understanding, and differential privacy. Prior to that, I worked for Xerox Research Centre Europe (now Naver Labs Europe), where I led the Machine Learning group. My team conducted applied research in machine learning, computational statistics and mechanism design, with applications in customer care, transportation and governmental services.

Since 2017, I have been an associate member of the Department of Statistics at the University of Oxford. I was elected Fellow in the Robust Machine Learning Program of the European Laboratory for Learning and Intelligent Systems in 2018. I serve as an Action Editor of the Transactions on Machine Learning Research, a new venue for the dissemination of machine learning research intended to complement the Journal of Machine Learning Research. I was the Tutorials Chair for ECML-PKDD '09 and the Industry Track Chair for ECML-PKDD '12.

I received my Electrical Engineering degree and my PhD in Applied Sciences from UCLouvain in 2001 and 2005, respectively. As a member of the Machine Learning Group and the Crypto Group, I worked on the European projects OPTIVIP, in which I developed neural networks embedded in a visual prosthesis for the blind, and SCARD, in which I demonstrated weaknesses of cryptographic hardware against machine learning-based side-channel attacks that exploit electromagnetic radiation. Next, I did a post-doc with John Shawe-Taylor at University College London and collaborated closely with Manfred Opper on problems in data assimilation and dynamical systems. I was also an active participant in the PASCAL European network of excellence. Until December 2015, I held an Honorary Senior Research Associate position in the Centre for Computational Statistics and Machine Learning.

**Open source libraries:**

Syne Tune, a library for asynchronous **hyperparameter and neural architecture optimization**. Our goal is to make machine learning more reproducible by covering a broad range of optimisers, offering multi-fidelity and multi-objective algorithms, and making it easy to run experiments on the cloud. We just released a new version with better documentation!
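
The multi-fidelity algorithms mentioned above allocate small budgets to many configurations and larger budgets only to the survivors. Below is a minimal sketch of that idea (successive halving) in plain Python; it is not the Syne Tune API, and the configuration space, `evaluate` signature, and `toy_loss` objective are all hypothetical:

```python
import random

def successive_halving(configs, evaluate, min_budget=1, eta=3):
    """Toy successive halving: evaluate all configs at a small budget,
    keep the best 1/eta fraction, and repeat with eta times the budget."""
    budget = min_budget
    while len(configs) > 1:
        # Lower score is better; rank every surviving config at this budget.
        scored = sorted(configs, key=lambda c: evaluate(c, budget))
        configs = scored[: max(1, len(scored) // eta)]
        budget *= eta
    return configs[0]

# Hypothetical objective: configs are ranked by |lr - 0.1|; the division
# by budget only mimics the loss shrinking as training runs longer.
def toy_loss(config, budget):
    return abs(config["lr"] - 0.1) / budget

random.seed(0)
candidates = [{"lr": 10 ** random.uniform(-4, 0)} for _ in range(27)]
best = successive_halving(candidates, toy_loss)
print(best)
```

With 27 candidates and eta=3 the schedule runs three rounds (27, then 9, then 3, then 1 survivor), so most of the compute is spent on the few promising configurations.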

Renate, a **continual learning** library to automatically retrain and retune deep neural networks!

Fortuna, a library for **uncertainty quantification** to help deploy deep learning more responsibly and safely!
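
A common way to quantify predictive uncertainty is to train an ensemble and use the spread of its predictions. Below is a minimal, self-contained sketch of that idea in plain Python; it is not the Fortuna API, and the `fit_noisy_line` helper and toy data are purely illustrative:

```python
import random
import statistics

# Hypothetical helper: a least-squares line fit, perturbed with Gaussian
# noise to mimic the randomness of training one ensemble member.
def fit_noisy_line(points, noise=0.1, rng=random):
    n = len(points)
    mx = sum(x for x, _ in points) / n
    my = sum(y for _, y in points) / n
    slope = sum((x - mx) * (y - my) for x, y in points) / sum(
        (x - mx) ** 2 for x, _ in points
    )
    intercept = my - slope * mx
    return slope + rng.gauss(0, noise), intercept + rng.gauss(0, noise)

def predict_with_uncertainty(models, x):
    # The ensemble mean is the prediction; the spread is the uncertainty.
    preds = [a * x + b for a, b in models]
    return statistics.mean(preds), statistics.stdev(preds)

random.seed(0)
data = [(x, 2.0 * x + 1.0) for x in range(10)]        # noiseless line y = 2x + 1
ensemble = [fit_noisy_line(data) for _ in range(20)]  # 20 perturbed fits
mean, std = predict_with_uncertainty(ensemble, 5.0)
```

The standard deviation grows for inputs far from the training data, which is exactly the signal a deployed model can use to defer, abstain, or flag a prediction for review.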

**Lectures and selected presentations**

Machine Learning module of the StatML Centre for Doctoral Training, Oxford, 2022: Algorithms for Automated Hyperparameter and Neural Architecture Optimization.

Department of Statistics, University of Oxford, 2022: Open (Practical) Problems in Machine Learning Automation.

Machine Learning module of the StatML Centre for Doctoral Training, Oxford, 2021: Algorithms for Automated Hyperparameter and Neural Architecture Optimization and Variational Inference.

Mini-symposium on Bayesian Methods in Science and Engineering at the SIAM Conference on Computational Science and Engineering: Bayesian Optimization by Density-Ratio Estimation. Virtual, 2021.

CVPR 2020 tutorial "From HPO to NAS: Automated Deep Learning": Automated HP and Architecture Tuning. (recording)

Computational Statistics and Machine Learning Seminars, Oxford, 2019: Learning Representations to Accelerate Hyperparameter Tuning.

Machine Learning module of the OxWaSP Centre for Doctoral Training, Oxford, 2019: Bayesian Optimisation and Variational Inference.

DALI 2018 workshop on Goals and Principles of Representation Learning, Lanzarote, 2018: Learning Representations for Hyperparameter Transfer Learning.

Congrès MATh.en.Jeans, Potsdam, 2018: Statistical Learning and its Application in Industry (L'Apprentissage Statistique et son Application en Industrie).

Machine Learning module of the OxWaSP Centre for Doctoral Training, Oxford, 2018: Bayesian Optimisation and Variational Inference.

NeurIPS workshop on Advances in Approximate Bayesian Inference (AABI), Long Beach, 2017: Approximate Bayesian Inference in Industry: Two Applications at Amazon.

Machine Learning Tutorial at Imperial College, London, 2017: Bayesian Optimisation.

Data Science Summer School (DS3), Paris, 2017: Tutorial on Bayesian Optimisation; Amazon: A Playground for Machine Learning.

Machine Learning Summer School (MLSS 2016, Arequipa): Bayesian Optimisation.

Peyresq Summer School in Signal and Image Processing '16: Classification and Clustering.

Engineering in Computer Science '12 at ENSIMAG: Statistical Principles and Methods.

MSc in Machine Learning '11 (Applied Machine Learning) at UCL: Machine Learning at Xerox -- From statistical machine translation to large-scale image search.

Tutorial on Probabilistic Graphical Models at PASCAL Bootcamp 2010: videolecture (2 parts).

MSc in Intelligent Systems '08 at UCL: Advanced Topics in Machine Learning.

CSML'07 reading group on Stochastic Differential Equations.

**Workshops and seminars**

ELLIS AutoML seminars. This is a virtual seminar series; **everyone is welcome to join**!

Gaussian Process Approximations (GPA) workshop, Berlin, Germany, 2017.

NeurIPS workshop on Learning Semantics, Montreal, Canada, 2014.

NeurIPS workshop on Choice Models and Preference Learning, Granada, Spain, 2011.

Workshop on Automated Knowledge Base Construction, Grenoble, France, 2010.

PASCAL2 workshop on Approximate Inference in Stochastic Processes and Dynamical Systems, Cumberland Lodge, United Kingdom, 2008.

NeurIPS workshop on Dynamical Systems, Stochastic Processes and Bayesian Inference, Whistler, Canada, 2006.

**Service to the community**

I am Action Editor of the Transactions on Machine Learning Research. I also served as the Tutorials Chair of ECML-PKDD '09 and the Industry Track Chair for ECML-PKDD '12. I was a reserve member of the High-level Expert Group on Artificial Intelligence for the European Commission.

Conference area chair: NeurIPS '11, NeurIPS '13, AISTATS '14, AISTATS '15, ICML '15, NeurIPS '17, ICML '18, NeurIPS '18, IJCAI '19, NeurIPS '19, AISTATS '20, IJCAI '20, NeurIPS '20, ICML '21, ICLR '21, NeurIPS '21, NeurIPS '22, ICLR '22, AISTATS '23 and ICML '23.

Journal reviewer: Journal of Machine Learning Research, Neural Networks, IEEE Transactions on Pattern Analysis and Machine Intelligence, IEEE Transactions on Neural Networks and Learning Systems, IEEE Transactions on Signal Processing, IEEE Transactions on Image Processing, NeuroImage, Neurocomputing, and Pattern Recognition.

PASHA: Efficient HPO and NAS with Progressive Resource Allocation.

O. Bohdal, L. Balles, M. Wistuba, B. Ermis, C. Archambeau, G. Zappella

International Conference on Learning Representations (ICLR), 2023.

Hyperparameter Optimization.

A. Klein, M. Seeger, C. Archambeau

In Dive Into Deep Learning, vol. 2 (Chapter 19), 2022.

Differentially private gradient boosting on linear learners for tabular data analysis.

S. Rho, C. Archambeau, S. Aydore, B. Ermis, M. Kearns, A. Roth, S. Tang, Y.-X. Wang, S. Wu

NeurIPS workshop on Trustworthy and Socially Responsible Machine Learning, 2022.

Memory Efficient Continual Learning with Transformers.

B. Ermis, G. Zappella, M. Wistuba, A. Rawal, C. Archambeau

Annual Conference on Advances in Neural Information Processing Systems (NeurIPS), 2022.

Private Synthetic Data for Multitask Learning and Marginal Queries.

G. Vietri, C. Archambeau, S. Aydore, W. Brown, M. Kearns, A. Roth, A. Siva, S. Tang, S. Wu

Annual Conference on Advances in Neural Information Processing Systems (NeurIPS), 2022.

Uncertainty Calibration in Bayesian Neural Networks via Distance-Aware Priors.

G. Detommaso, A. Gasparin, A. Wilson, C. Archambeau

Technical report, 2022.

Automatic Termination for Hyperparameter Optimization.

A. Makarova, H. Shen, V. Perrone, A. Klein, J. B. Faddoul, A. Krause, M. Seeger, C. Archambeau

Conference on Automated Machine Learning (Main Track), 2022.

Syne Tune: A Library for Large Scale Hyperparameter Tuning and Reproducible Research. [GitHub]

D. Salinas, M. Seeger, A. Klein, V. Perrone, M. Wistuba, C. Archambeau

Conference on Automated Machine Learning (Main Track), 2022.

PASHA: Efficient HPO with Progressive Resource Allocation. [Code]

O. Bohdal, L. Balles, B. Ermis, C. Archambeau, G. Zappella

Conference on Automated Machine Learning (Late-Breaking Workshop Track), 2022.

Gradient-Matching Coresets for Rehearsal-Based Continual Learning.

L. Balles, G. Zappella, C. Archambeau

Technical report, 2022.

Continual Learning with Transformers for Image Classification.

B. Ermis, G. Zappella, M. Wistuba, A. Rawal, C. Archambeau

CVPR workshop on Continual Learning in Computer Vision, 2022.

Memory-efficient Continual Learning for Neural Text Classification.

B. Ermis, G. Zappella, M. Wistuba, C. Archambeau

Technical report, 2022.

Diverse Counterfactual Explanations for Anomaly Detection in Time Series.

D. Sulem, M. Donini, M. B. Zafar, F. X. Aubet, J. Gasthaus, T. Januschowski, S. Das, K. Kenthapadi, C. Archambeau

Technical report, 2021.

Gradient-matching Coresets for Continual Learning.

L. Balles, G. Zappella, C. Archambeau

NeurIPS workshop on Distribution Shifts: Connecting Methods and Applications, 2021.

Meta-Forecasting by Combining Global Deep Representations with Local Adaptation.

R. Grazzi, V. Flunkert, D. Salinas, T. Januschowski, M. Seeger, C. Archambeau

Technical report, 2021.

Multi-objective Asynchronous Successive Halving.

R. Schmucker, M. Donini, M. B. Zafar, D. Salinas, C. Archambeau

Technical report, 2021.

On the Lack of Robustness of Deep Neural Text Classifiers.

M. B. Zafar, M. Donini, D. Slack, C. Archambeau, S. Das, K. Kenthapadi

Annual Meeting of the Association for Computational Linguistics (ACL), 2021. Findings.

Towards Robust Episodic Meta-Learning.

B. Ermis, G. Zappella, C. Archambeau

Uncertainty in Artificial Intelligence (UAI), 2021.

BORE: Bayesian Optimization by Density-Ratio Estimation.

L. Tiao, A. Klein, M. Seeger, E. Bonilla, C. Archambeau, F. Ramos

International Conference on Machine Learning (ICML), 2021.

Dynamic Pruning of a Neural Network via Gradient Signal-to-Noise Ratio.

J. Siems, A. Klein, C. Archambeau, M. Mahsereci

ICML AutoML workshop, 2021.

A Resource-efficient Method for Repeated HPO and NAS Problems.

G. Zappella, D. Salinas, C. Archambeau

ICML AutoML workshop, 2021.

A Multi-objective Perspective on Jointly Tuning Hardware and Hyperparameters.

D. Salinas, V. Perrone, O. Cruchant, C. Archambeau

ICLR NAS workshop, 2021.

Overfitting in Bayesian Optimization: an empirical study and early-stopping solution.

A. Makarova, H. Shen, V. Perrone, A. Klein, J. B. Faddoul, A. Krause, M. Seeger, C. Archambeau

ICLR NAS workshop, 2021.

Amazon SageMaker Automatic Model Tuning: Black-box Optimization at Scale.

V. Perrone, H. Shen, A. Zolic, I. Shcherbatyi, A. Ahmed, T. Bansal, M. Donini, F. Winkelmolen, R. Jenatton, J. B. Faddoul, B. Pogorzelska, M. Miladinovic, K. Kenthapadi, M. Seeger, C. Archambeau

ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD), 2021. Industry track.

Fair Bayesian Optimization.

V. Perrone, M. Donini, B. Zafar, K. Kenthapadi, C. Archambeau

AAAI/ACM Conference on Artificial Intelligence, Ethics, and Society (AIES), 2021.

Hyperparameter Transfer Learning with Adaptive Complexity.

S. Horvath, A. Klein, C. Archambeau

International Conference on Artificial Intelligence and Statistics (AISTATS), 2021.

Bayesian Optimization by Density Ratio Estimation.

L. Tiao, A. Klein, C. Archambeau, E. Bonilla, F. Ramos, M. Seeger

NeurIPS workshop on Meta-learning, December 2020. (selected for oral presentation)

Pareto-efficient Acquisition Functions for Cost-Aware Bayesian Optimization.

G. Guinet, V. Perrone, C. Archambeau

NeurIPS workshop on Meta-learning, 2020.

Multi-Objective Multi-Fidelity Hyperparameter Optimization with application to Fairness.

R. Schmucker, M. Donini, V. Perrone, B. Zafar, C. Archambeau

NeurIPS workshop on Meta-learning, 2020.

Model-based Asynchronous Hyperparameter and Neural Architecture Search.

L. C. Tiao, A. Klein, T. Lienart, C. Archambeau, M. Seeger

Technical report, 2020.

LEEP: A New Measure to Evaluate Transferability of Learned Representations.

C. V. Nguyen, T. Hassner, M. Seeger, C. Archambeau

International Conference on Machine Learning (ICML), 2020.

Bayesian Optimization with Fairness Constraints.

V. Perrone, M. Donini, K. Kenthapadi, C. Archambeau

ICML workshop on AutoML, 2020.

Cost-aware Bayesian Optimization.

E. Hans Lee, V. Perrone, C. Archambeau, M. Seeger

ICML workshop on AutoML, 2020.

Constrained Bayesian Optimization with Max-Value Entropy Search.

V. Perrone, I. Shcherbatyi, R. Jenatton, C. Archambeau, M. Seeger

NeurIPS workshop on Meta-learning, 2019.

Learning Search Spaces for Bayesian Optimization: Another View of Hyperparameter Transfer Learning.

V. Perrone, H. Shen, M. Seeger, C. Archambeau, R. Jenatton

Annual Conference on Advances in Neural Information Processing Systems (NeurIPS), 2019.

Scalable Hyperparameter Transfer Learning.

V. Perrone, R. Jenatton, M. Seeger, C. Archambeau

Annual Conference on Advances in Neural Information Processing Systems (NeurIPS), 2018.

A Simple Transfer Learning Extension of Hyperband.

L. Valkov, R. Jenatton, F. Winkelmolen, C. Archambeau

NeurIPS workshop on Meta-Learning, 2018.

Multiple Adaptive Bayesian Linear Regression for Scalable Bayesian Optimization with Warm Start.

V. Perrone, R. Jenatton, M. Seeger, C. Archambeau

NeurIPS workshop on Meta-Learning, 2017.

An interpretable latent variable model for attribute applicability in the Amazon catalogue.

T. Rukat, D. Lange, C. Archambeau

NeurIPS Symposium on Interpretable Machine Learning, 2017.

Bayesian Optimization with Tree-structured Dependencies.

R. Jenatton, C. Archambeau, J. Gonzalez, M. Seeger

International Conference on Machine Learning (ICML), 2017.

Online Optimization and Regret Guarantees for Non-additive Long-term Constraints.

R. Jenatton, J. Huang, D. Csiba, C. Archambeau

Technical report, 2016.

Online Dual Decomposition for Performance and Delivery-based Distributed Ad Allocation.

J. Huang, R. Jenatton, C. Archambeau

Proceedings of the ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD), pp. 117-126, 2016.

Adaptive Algorithms for Online Convex Optimization with Long-term Constraints.

R. Jenatton, J. Huang, C. Archambeau

International Conference on Machine Learning (ICML), 2016.

Incremental Variational Inference applied to Latent Dirichlet Allocation. [slides]

C. Archambeau, B. Ermis

NeurIPS workshop on Advances in Approximate Bayesian Inference, 2015.

One-Pass Ranking Models for Low-Latency Product Recommendations.

A. Freno, M. Saveski, R. Jenatton, C. Archambeau

Proceedings of the ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD), pp. 1789-1798, 2015.

Incremental Variational Inference for Latent Dirichlet Allocation.

C. Archambeau, B. Ermis

Technical report, 2015.

Online Inference for Relation Extraction with a Reduced Feature Set.

M. Rabinovich, C. Archambeau

Technical report, 2015.

Latent IBP compound Dirichlet Allocation.

C. Archambeau, B. Lakshminarayanan, G. Bouchard

IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI), 37(2):321-333, 2015.

Overlapping Trace Norms in Multi-View Learning.

B. Behmardi, C. Archambeau, G. Bouchard

Technical report, April 2014.

Towards Crowd-based Customer Service: A Mixed-Initiative Tool for Managing Q&A Sites.

T. Piccardi, G. Convertino, M. Zancanaro, J. Wang, C. Archambeau

Proceedings of the ACM Conference on Human Factors in Computing Systems (CHI), pp. 2725-2734, 2014.

Log-linear Language Models based on Structured Sparsity.

A. Nelakanti, C. Archambeau, J. Mairal, F. Bach, G. Bouchard

Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 233-243, 2013.

Bringing Representativeness into Social Media Monitoring and Analysis.

M. Kaschesky, P. Sobkowicz, J. M. Hernández-Lobato, G. Bouchard, C. Archambeau, N. Scharioth, R. Manchin, A. Gschwend, R. Riedl

46th Hawaii International Conference on System Sciences (HICSS), pp. 2003-2012, 2013.

Error Prediction with Partial Feedback.

W. Darling, C. Archambeau, S. Mirkin, G. Bouchard

In H. Blockeel, K. Kersting, S. Nijssen, F. Zelezny (Eds.), European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases (ECML-PKDD), Lecture Notes in Computer Science (LNCS), 8189:80-94, 2013.

Connecting Comments and Tags: Improved Modeling of Social Tagging Systems.

D. Yin, S. Guo, B. Davison, C. Archambeau, G. Bouchard

In S. Leonardi, A. Panconesi, P. Ferragina, A. Gionis (Eds.), 6th ACM Conference on Web Search and Data Mining (WSDM), pp. 547-556, 2013.

Plackett-Luce regression: a new Bayesian model for polychotomous data

C. Archambeau, F. Caron

In N. de Freitas, K. P. Murphy (Eds.), Uncertainty in Artificial Intelligence (UAI) 28, pp. 84-92, 2012.

Variational Markov chain Monte Carlo for Bayesian smoothing of non-linear diffusions

Y. Shen, D. Cornford, M. Opper, C. Archambeau

Computational Statistics 27:1, 149-176, 2012.

Latent IBP compound Dirichlet allocation

C. Archambeau, B. Lakshminarayanan, G. Bouchard

NeurIPS 24 workshop on Bayesian nonparametrics: Hope or Hype?, 2011.

Sparse Bayesian multi-task learning

C. Archambeau, S. Guo, O. Zoeter

In J. Shawe-Taylor, R. S. Zemel, P. L. Bartlett, F. C. N. Pereira, K. Q. Weinberger (Eds.), Neural Information Processing Systems (NeurIPS) 24, pp. 1755-1763, 2011.

Robust Bayesian Matrix Factorisation

B. Lakshminarayanan, G. Bouchard, C. Archambeau

Artificial Intelligence and Statistics (AISTATS) 14. JMLR Workshop and Conference Proceedings 15:425-433, 2011.

Approximate Inference for continuous-time Markov processes

C. Archambeau, M. Opper

In D. Barber, A. T. Cemgil, and S. Chiappa, Inference and Learning in Dynamic Models. Cambridge University Press, 2011.

The Sequence Memoizer

F. Wood, J. Gasthaus, C. Archambeau, L. James, Y. W. Teh

Communications of the ACM, 54(2):91-98, 2011.

Mail2Wiki: low-cost sharing and early curation from email to wikis.

B. V. Hanrahan, G. Bouchard, G. Convertino, T. Weksteen, N. Kong, C. Archambeau, E. H. Chi

In M. Foth, J. Kjeldskov, J. Paay (Eds.), Proceedings of the International Conference on Communities and Technologies (C&T) 5, pp. 98-107, 2011.

Mail2Wiki: posting and curating Wiki content from email [demo]

B. V. Hanrahan, T. Weksteen, N. Kong, G. Convertino, G. Bouchard, C. Archambeau, E. H. Chi

In P. Pu, M. J. Pazzani, E. Andre, D. Riecken (Eds.), Proceedings of the International Conference on Intelligent User Interfaces (IUI), pp 441-442, 2011.

Multiple Gaussian process models [videolecture]

C. Archambeau, F. Bach

NeurIPS 23 workshop on New Directions in Multiple Kernel Learning, 2010. [arXiv]

A Comparison of Variational and Markov Chain Monte Carlo Methods for Inference in Partially Observed Stochastic Dynamic Systems

Y. Shen, C. Archambeau, D. Cornford, M. Opper, J. Shawe-Taylor, R. Barillec

Journal of Signal Processing Systems, 61(1):51-59, 2010.

Stochastic Memoizer for Sequence Data

F. Wood, C. Archambeau, J. Gasthaus, L. James, Y. W. Teh

In L. Bottou and M. Littman, Proceedings of the 26th International Conference on Machine Learning (ICML), Montreal (Quebec), Canada, June 14-18, 2009, pp. 1129-1136. ACM.

The Variational Gaussian Approximation Revisited

M. Opper, C. Archambeau

Neural Computation 21(3):786-792, 2009.

Sparse Probabilistic Projections

C. Archambeau, F. Bach

In D. Koller, D. Schuurmans, Y. Bengio, and L. Bottou (Eds.), Neural Information Processing Systems (NeurIPS) 21, pp.17-24, 2009. The MIT Press.

Prediction of hot spot residues at protein-protein interfaces by combining machine learning and energy-based methods

S. Lise, C. Archambeau, M. Pontil, D. Jones

BMC Bioinformatics, 10: 365-382, 2009.

Switching Regulatory Models of Cellular Stress Response

G. Sanguinetti, A. Ruttor, M. Opper, C. Archambeau

Bioinformatics, 25(10): 1280-1286, 2009. Oxford University Press.

Mixtures of Robust Probabilistic Principal Component Analyzers

C. Archambeau, N. Delannay, M. Verleysen

Neurocomputing, 71(7-9):1274-1282, 2008. Elsevier.

Improving the robustness to outliers of mixtures of probabilistic PCAs

N. Delannay, C. Archambeau, M. Verleysen

In T. Washio, et al. (Eds.), Pacific-Asia Conference on Knowledge Discovery and Data Mining (PAKDD) 12, Lecture Notes in Artificial Intelligence (LNAI) 5012:527-535, 2008. Springer.

Variational Inference for Diffusion Processes

C. Archambeau, M. Opper, Y. Shen, D. Cornford, J. Shawe-Taylor

In J. Platt, D. Koller, Y. Singer and S. Roweis (Eds.), Neural Information Processing Systems (NeurIPS) 20, pp. 17-24, 2008. The MIT Press.

Using Subspace-Based Template Attacks to Compare and Combine Power and Electromagnetic Information Leakages

F.-X. Standaert, C. Archambeau

In E. Oswald and P. Rohatgi (Eds.), 10th International Workshop on Cryptographic Hardware and Embedded Systems (CHES), Washington, DC, USA, 10-13 August, 2008. Lecture Notes in Computer Science vol. 5154, pp. 411-425. Springer.

Evaluation of Variational and Markov Chain Monte Carlo Methods for Inference in Partially Observed Stochastic Dynamic Systems

Y. Shen, C. Archambeau, D. Cornford, M. Opper, J. Shawe-Taylor, R. Barillec

Proceedings of the 17th IEEE workshop on Machine Learning for Signal Processing (MLSP), Thessaloniki, Greece, 27-28 August, 2007, pp. 306-311.

Gaussian Process Approximations of Stochastic Differential Equations

C. Archambeau, D. Cornford, M. Opper, J. Shawe-Taylor

Journal of Machine Learning Research Workshop and Conference Proceedings, 1:1-16, 2007.

Mixtures of Robust Probabilistic Principal Component Analyzers

C. Archambeau, N. Delannay, M. Verleysen

Proceedings of the 15th European Symposium on Artificial Neural Networks (ESANN), Bruges, Belgium, April 25-27, 2007, pp. 229-234. D-side.

Robust Bayesian Clustering

C. Archambeau and M. Verleysen

Neural Networks, 20:129-138, 2007. Elsevier.

Automatic Adjustment of Discriminant Adaptive Nearest Neighbor

N. Delannay, C. Archambeau, M. Verleysen

In Y.Y. Tang, P. Wang, G. Lorette and D.S. Yeung (Eds.), Proceedings of the 18th International Conference on Pattern Recognition (ICPR), Hong Kong, P.R.C., 20-24 August, 2006, vol. 2, pp. 525-555. IEEE Computer Society.

Robust Probabilistic Projections

C. Archambeau, N. Delannay, M. Verleysen

In W. W. Cohen and A. Moore (Eds.), Proceedings of the 23rd International Conference on Machine Learning (ICML), Pittsburgh (PA), U.S.A., 25-29 June, 2006, pp. 33-40. ACM.

There are a couple of typos in the Appendix; a corrected version is available.

Template Attacks in Principal Subspaces

C. Archambeau, E. Peeters, F.-X. Standaert, J.-J. Quisquater

In L. Goubin and M. Matsui (Eds.), 8th International Workshop on Cryptographic Hardware and Embedded Systems (CHES), Yokohama, Japan, 10-13 October, 2006. Lecture Notes in Computer Science vol. 4249, pp. 1-14. Springer.

Towards Security Limits of Side-Channel Attacks

F.-X. Standaert, E. Peeters, C. Archambeau, J.-J. Quisquater

In L. Goubin and M. Matsui (Eds.), 8th International Workshop on Cryptographic Hardware and Embedded Systems (CHES), Yokohama, Japan, 10-13 October, 2006. Lecture Notes in Computer Science vol. 4249, pp. 30-45. Springer.

Manifold Constrained Finite Gaussian Mixtures

C. Archambeau and M. Verleysen

In J. Cabestany, A. Prieto and F. Sandoval Hernández (Eds.), Computational Intelligence and Bioinspired Systems - 8th International Work-Conference on Artificial Neural Networks (IWANN), Vilanova i la Geltrú (Barcelona), Spain, June 8-10, 2005. Lecture Notes in Computer Science, vol. 3512, pp.820-828. Springer.

Local Vector-based Models for Sense Discrimination

M.-C. de Marneffe, C. Archambeau, P. Dupont, M. Verleysen

In H. Bunt, J. Geertzen and E. Thijsse (Eds.), Proceedings of the 6th International Workshop on Computational Semantics (IWCS), Tilburg, the Netherlands, January 12-14, 2005, pp. 163-174.

Supervised Nonparametric Information Theoretic Classification

C. Archambeau, T. Butz, V. Popovici, M. Verleysen, J.-P. Thiran

In J. Kittler, M. Petrou and M. Nixon (Eds.), Proceedings of the 17th International Conference on Pattern Recognition (ICPR), Cambridge, U.K., August 23-26, 2004, vol. 3, pp. 414-417. IEEE Computer Society.

Flexible and Robust Bayesian Classification by Finite Mixture Models

C. Archambeau, F. Vrins, M. Verleysen

Proceedings of the 12th European Symposium on Artificial Neural Networks (ESANN), Bruges, Belgium, April 28-30, 2004, pp. 75-80. D-side.

Towards a Local Separation Performances Estimator using Common ICA Contrast Functions?

F. Vrins, C. Archambeau, M. Verleysen

Proceedings of the 12th European Symposium on Artificial Neural Networks (ESANN), Bruges, Belgium, April 28-30, 2004, pp. 211-216. D-side.

Entropy Minima and Distribution Structural Modifications in Blind Separation of Multi-modal Sources

F. Vrins, C. Archambeau, M. Verleysen

In R. Fisher, R. Preuss and U. von Toussaint, Proceedings of the 24th International Workshop on Bayesian Inference and Maximum Entropy Methods in Science and Engineering (MaxEnt), IPP Garching bei München, Germany, July 25-30, 2004, pp. 589-596. American Institute of Physics (AIP).

Prediction of Visual Perceptions with Artificial Neural Networks in a Visual Prosthesis for the Blind

C. Archambeau, J. Delbeke, C. Veraart, M. Verleysen

Artificial Intelligence in Medicine, 32(3):183-194, 2004. Elsevier.

On Convergence Problems of the EM Algorithm for Finite Gaussian Mixtures

C. Archambeau, J.A. Lee, M. Verleysen

Proceedings of the 11th European Symposium on Artificial Neural Networks (ESANN), Bruges, Belgium, April 23-25, 2003, pp. 99-106. D-side.

Locally Linear Embedding versus Isotop

J.A. Lee, C. Archambeau, M. Verleysen

Proceedings of the 11th European Symposium on Artificial Neural Networks (ESANN), Bruges, Belgium, April 23-25, 2003, pp. 527-534.

Classification of Visual Sensations Generated Electrically in the Visual Field of the Blind

C. Archambeau, J. Delbeke, M. Verleysen

In D. D. Feng and E. R. Carson (Eds.), Proceedings of the 5th IFAC Symposium on Modelling and Control in Biomedical Systems, Melbourne, Australia, August 21-23, 2003, pp. 223-228. Elsevier.

Width Optimization of the Gaussian Kernels in Radial Basis Function Networks

N. Benoudjit, C. Archambeau, A. Lendasse, J.A. Lee, M. Verleysen

Proceedings of the 10th European Symposium on Artificial Neural Networks (ESANN), Bruges, Belgium, April 24-26, 2002, pp. 425-432.

Phosphene Evaluation in a Visual Prosthesis with Artificial Neural Networks

C. Archambeau, A. Lendasse, C. Trullemans, C. Veraart, J. Delbeke, M. Verleysen

Proceedings of the 1st European Symposium on Intelligent Technologies, Hybrid Systems and their implementation on Smart Adaptive Systems (EUNITE), Puerto de la Cruz (Tenerife), Spain, December 13-14, 2001, pp. 509-515.

Also published in G. D. Dounias and D. A. Linkens (Eds.), Adaptive Systems and Hybrid Computational Intelligence in Medicine, 2001, pp. 116-122. University of the Aegean.

**Thesis**

Probabilistic Models in Noisy Environments - And their Application to a Visual Prosthesis for the Blind

C. Archambeau

Doctoral dissertation, Université catholique de Louvain, Louvain-la-Neuve, Belgium, September 2005.

**WARNING!** Material on this web site is presented to ensure timely dissemination of technical work. Copyright and all rights therein are retained by the authors or by other copyright holders, notwithstanding that they have offered their works here electronically. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. These works may not be reposted without the explicit permission of the copyright holder. Copyright holders who claim that material available above does not comply with copyright terms and constraints are invited to contact the author by e-mail and ask him to remove the links to the specific manuscripts.