About me

Currently Director of Artificial Intelligence at Helsing, I was a Principal Applied Scientist at Amazon and the Chief Scientist for Amazon SageMaker until June 2023. Since 2017, I have been an associate member of the Department of Statistics at the University of Oxford. I was elected Fellow in the Robust Machine Learning Program of the European Laboratory for Learning and Intelligent Systems (ELLIS) in 2018. I am an Action Editor of the Transactions on Machine Learning Research, a reviewer for the Journal of Machine Learning Research, and regularly serve as Area Chair of top-tier machine learning conferences such as NeurIPS, ICML, and ICLR.

I received my Electrical Engineering degree and my PhD in Applied Sciences from UCLouvain in 2001 and 2005, respectively, where I was a member of the Machine Learning Group and the Crypto Group. I worked on the European projects OPTIVIP, in which I developed neural networks embedded in a visual prosthesis for the blind, and SCARD, in which I demonstrated weaknesses of cryptographic hardware against machine learning-based side-channel attacks exploiting electromagnetic radiation. I then did a postdoc with John Shawe-Taylor at University College London and collaborated closely with Manfred Opper at TU Berlin on problems in data assimilation and approximate Bayesian inference. I was also an active participant in the PASCAL European network of excellence. Until December 2015, I held an Honorary Senior Research Associate position in the Centre for Computational Statistics and Machine Learning.

I joined Xerox Research Centre Europe (now Naver Labs Europe) in October 2009, where I led the Machine Learning group. I conducted applied research in machine learning, natural language understanding, and mechanism design, with applications in customer care, transportation, and governmental services.

I joined Amazon in Berlin in October 2013 to develop zero-parameter machine learning algorithms. As a Principal Applied Scientist at Amazon Web Services (AWS), I oversaw the product-related science powering Amazon SageMaker and led long-term science initiatives in automated machine learning, continual learning, and responsible AI. My work at AWS laid the foundations of Amazon SageMaker Automatic Model Tuning, Amazon SageMaker Autopilot, Amazon SageMaker Clarify, and AWS Clean Rooms.

Open source libraries

Syne Tune, a library for asynchronous hyperparameter and neural architecture optimization. Our goal is to make machine learning more reproducible by covering a broad range of optimisers, offering multi-fidelity and multi-objective algorithms, and making it easy to run experiments on the cloud. We just released a new version with better documentation! (A minimal usage sketch follows below.)

Renate, a continual learning library to automatically retrain and retune deep neural networks!

Fortuna, a library for uncertainty quantification to help deploy deep learning more responsibly and safely!
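The sketch below illustrates how an asynchronous, multi-fidelity tuning run can be launched with Syne Tune. It assumes a hypothetical training script train.py that accepts the hyperparameters below as command-line arguments and reports a validation loss once per epoch; the script name, metric name, and search space are placeholders, and the exact API may differ between Syne Tune versions.

# Sketch: asynchronous, multi-fidelity hyperparameter tuning with Syne Tune.
# The training script "train.py" and the metric "val_loss" are hypothetical.
from syne_tune import Tuner, StoppingCriterion
from syne_tune.backend import LocalBackend
from syne_tune.config_space import loguniform, randint
from syne_tune.optimizer.baselines import ASHA

# Search space; max_epochs is passed through unchanged and used as the budget.
config_space = {
    "learning_rate": loguniform(1e-5, 1e-1),
    "batch_size": randint(16, 256),
    "max_epochs": 20,
}

tuner = Tuner(
    trial_backend=LocalBackend(entry_point="train.py"),  # run trials locally
    scheduler=ASHA(
        config_space,
        metric="val_loss",          # metric reported by train.py
        mode="min",
        resource_attr="epoch",      # fidelity reported alongside the metric
        max_resource_attr="max_epochs",
    ),
    stop_criterion=StoppingCriterion(max_wallclock_time=1800),  # 30 minutes
    n_workers=4,  # trials evaluated asynchronously in parallel
)
tuner.run()

Inside train.py, metrics would be reported with syne_tune.Reporter, for example report(epoch=epoch, val_loss=loss) at the end of each epoch; swapping LocalBackend for a cloud backend such as the SageMaker backend moves the same experiment to remote instances.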
Lectures and selected presentations

Machine Learning module of the StatML Centre for Doctoral Training, Oxford, 2024: Bayesian Optimization and A Primer on Foundation Models.
ELLIS Robust ML workshop, Helsinki, 2024: Explaining Probabilistic Models with Distributional Values.
Machine Learning module of the StatML Centre for Doctoral Training, Oxford, 2022: Algorithms for Automated Hyperparameter and Neural Architecture Optimization.
Department of Statistics, University of Oxford, 2022: Open (Practical) Problems in Machine Learning Automation.
Machine Learning module of the StatML Centre for Doctoral Training, Oxford, 2021: Algorithms for Automated Hyperparameter and Neural Architecture Optimization and Variational Inference.
Mini-symposium on Bayesian Methods in Science and Engineering at the SIAM Conference on Computational Science and Engineering, virtual, 2021: Bayesian Optimization by Density-Ratio Estimation.
CVPR 2020 tutorial From HPO to NAS: Automated Deep Learning: Automated HP and Architecture Tuning (recording).
Computational Statistics and Machine Learning Seminars, Oxford, 2019: Learning Representations to Accelerate Hyperparameter Tuning.
Machine Learning module of the OxWaSP Centre for Doctoral Training, Oxford, 2019: Bayesian Optimisation and Variational Inference.
DALI 2018 workshop on Goals and Principles of Representation Learning, Lanzarote: Learning Representations for Hyperparameter Transfer Learning.
Congrès MATh.en.Jeans, Potsdam, 2018: L'Apprentissage Statistique et son Application en Industrie (Statistical Learning and its Application in Industry).
Machine Learning module of the OxWaSP Centre for Doctoral Training, Oxford, 2018: Bayesian Optimisation and Variational Inference.
NeurIPS workshop on Advances in Approximate Bayesian Inference (AABI), Long Beach, 2017: Approximate Bayesian Inference in Industry: Two Applications at Amazon.
Machine Learning Tutorial at Imperial College London, 2017: Bayesian Optimisation.
Data Science Summer School (DS3), Paris, 2017: Tutorial on Bayesian Optimisation; Amazon: A Playground for Machine Learning.
Machine Learning Summer School (MLSS 2016, Arequipa): Bayesian Optimisation.
Peyresq Summer School in Signal and Image Processing '16: Classification and Clustering.
Engineering in Computer Science '12 at ENSIMAG: Statistical Principles and Methods.
MSc in Machine Learning '11 (Applied Machine Learning) at UCL: Machine Learning at Xerox -- From statistical machine translation to large-scale image search.
Tutorial on Probabilistic Graphical Models at PASCAL Bootcamp 2010: videolecture (2 parts).
MSc in Intelligent Systems '08 at UCL: Advanced Topics in Machine Learning.
CSML'07 reading group on Stochastic Differential Equations.

Workshops and seminars

ELLIS AutoML seminars. This is a virtual seminar series; everyone is welcome to join!
Gaussian Process Approximations (GPA) workshop, Berlin, Germany, 2017.
NeurIPS workshop on Learning Semantics, Montreal, Canada, 2014.
NeurIPS workshop on Choice Models and Preference Learning, Granada, Spain, 2011.
Workshop on Automated Knowledge Base Construction, Grenoble, France, 2010.
PASCAL2 workshop on Approximate Inference in Stochastic Processes and Dynamical Systems, Cumberland Lodge, United Kingdom, 2008.
NeurIPS workshop on Dynamical Systems, Stochastic Processes and Bayesian Inference, Whistler, Canada, 2006.

Service to the community

I am an Action Editor of the Transactions on Machine Learning Research, a new venue for the dissemination of machine learning research that is intended to complement the Journal of Machine Learning Research.
I also served as the Tutorials Chair of ECML-PKDD '09 and the Industry Track Chair of ECML-PKDD '12. I was a reserve member of the High-Level Expert Group on Artificial Intelligence for the European Commission.

Conference area chair: NeurIPS '11, NeurIPS '13, AISTATS '14, AISTATS '15, ICML '15, NeurIPS '17, ICML '18, NeurIPS '18, IJCAI '19, NeurIPS '19, AISTATS '20, IJCAI '20, NeurIPS '20, ICML '21, ICLR '21, NeurIPS '21, NeurIPS '22, ICLR '22, AISTATS '23, ICML '23, NeurIPS '23, ICLR '24, and ICML '24.

Journal reviewer: Journal of Machine Learning Research, Neural Networks, IEEE Transactions on Pattern Analysis and Machine Intelligence, IEEE Transactions on Neural Networks and Learning Systems, IEEE Transactions on Signal Processing, IEEE Transactions on Image Processing, NeuroImage, Neurocomputing, and Pattern Recognition.
Thesis

Probabilistic Models in Noisy Environments - And their Application to a Visual Prosthesis for the Blind.
C. Archambeau. Doctoral dissertation, Université catholique de Louvain, Louvain-la-Neuve, Belgium, September 2005.
WARNING! Material on this web site is presented to ensure timely dissemination of technical work. Copyright and all rights therein are retained by authors or by other copyright holders, notwithstanding that they have offered their works here electronically. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. These works may not be reposted without the explicit permission of the copyright holder. Copyright holders claiming that the material available above is not in accordance with copyright terms and constraints are invited to contact the author by e-mail and ask him to remove the links to specific manuscripts.