Stéphanie ALLASSONIÈRE
School of Medicine, Université Paris Descartes
Stéphanie received her PhD in applied mathematics in 2007 under the joint supervision of Alain Trouvé and Laurent Younes. After a postdoctoral position at the Center for Imaging Science at JHU in Baltimore, she joined the Centre de mathématiques appliquées of École Polytechnique in 2008 as an assistant professor (professeure chargée de cours), and was then appointed professor in 2016 at the Faculty of Medicine of Université Paris Descartes. Her research focuses on the statistical analysis of medical data in order to extract behaviors that characterize populations, classify patients, and provide tools supporting early diagnosis and patient care.
Talk: EM-type algorithms: power and versatility
Pierre ALQUIER
ENSAE ParisTech
Talk: Generalization bounds for online variational inference (based on joint works with James Ridgway, Badr-Eddine Chérief-Abdellatif and Mohammad Emtiyaz Khan)
Bayesian inference provides an attractive online-learning framework to analyze sequential data and offers nice generalization guarantees. Unfortunately, exact Bayesian inference is rarely feasible in practice and approximation methods, such as variational approximations, are usually employed; it is not clear, however, whether such approximations preserve these generalization guarantees. In this talk I will show that this is indeed the case for an online, tempered variational inference method, by deriving a new generalization bound which relies on the convexity of the variational objective. I will argue that the result should hold more generally, and present empirical results in support of this.
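As a rough sketch of the kind of update analyzed in this line of work (our notation, not necessarily that of the talk): given a variational family $\mathcal{F}$, a learning rate $\eta > 0$ and the loss $\ell_t$ revealed at round $t$, tempered online variational inference selects
$$
q_{t+1} \;=\; \operatorname*{arg\,min}_{q \in \mathcal{F}} \; \eta\,\mathbb{E}_{\theta \sim q}\!\left[\ell_t(\theta)\right] \;+\; \mathrm{KL}\!\left(q \,\middle\|\, q_t\right),
$$
and the generalization analysis can exploit the fact that this objective is convex in $q$, since the expected loss is linear in $q$ and the Kullback–Leibler divergence is convex in its first argument.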
Chloé-Agathe AZENCOTT
Mines ParisTech, Institut Curie
Chloé-Agathe Azencott is an assistant professor at the Centre for Computational Biology (CBIO) of MINES ParisTech and Institut Curie (Paris, France). She earned her PhD in computer science at the University of California, Irvine (USA) in 2010, working at the Institute for Genomics and Bioinformatics. She then spent three years as a postdoctoral researcher in the Machine Learning and Computational Biology group of the Max Planck Institutes in Tübingen (Germany) before joining CBIO.
Her research revolves around the development and application of machine learning methods for biomedical research, with a particular interest in feature selection and the integration of structured information.
Chloé-Agathe Azencott is also the co-founder of the Parisian branch of Women in Machine Learning and Data Science. And yes, she’s Robert’s niece.
Talk: Variable selection in high-dimensional data for precision medicine
Gérard BEN AROUS
Professor of Mathematics, Courant Institute of Mathematical Sciences; Global Network Professor, NYU Shanghai; Associate Provost for Quantitative Disciplines, NYU Shanghai
Gérard Ben Arous, a specialist of probability theory and its applications, has been Professor of Mathematics at NYU’s Courant Institute since 2002 and served as its Director and NYU’s Vice Provost for Science and Engineering Development from 2011 to 2016. A native of France, Professor Ben Arous studied Mathematics at École Normale Supérieure and earned his PhD from the University of Paris VII (1981) under Robert Azencott. He has been a Professor at the University of Paris-Sud (Orsay), at École Normale Supérieure, and at the Swiss Federal Institute of Technology (EPFL) in Lausanne, where he held the Chair of Stochastic Modeling. He headed the department of Mathematics at Orsay and the departments of Mathematics and Computer Science at École Normale Supérieure. He also founded the Bernoulli Center, a Mathematics Research Institute, at EPFL.
Professor Ben Arous is a member of the American Academy of Arts and Sciences, a Fellow of the Institute of Mathematical Statistics and an elected member of the International Statistical Institute. He has received various international distinctions, including a senior Lady Davis Fellowship (Israel), the Rollo Davidson Prize (University of Cambridge) and the Montyon Prize (French Academy of Sciences), and is a “Chevalier des Palmes Académiques” for his work promoting French culture in New York.
He works on probability theory (stochastic analysis, large deviations, random media and random matrices) and its connections with other domains of mathematics (partial differential equations, dynamical systems), with physics (statistical mechanics of disordered media) and, more recently, with industrial applications such as Data Science. He is mainly interested in the time evolution of complex systems and in the universal aspects of their long-time behavior. He has trained 35 younger colleagues, 20 PhD students and 15 postdocs, who are now working in academia or industry across the world, from New York and Paris to Caltech, Boston, Lyon, Santiago, Geneva, Montreal, Berlin and Vienna.
Talk: In the footsteps of Robert Azencott, from spin glasses to Data Science
Lucien BIRGÉ
Professor emeritus at Sorbonne-Université
— 1956 – 1967: Primary and secondary studies in Montceau-les-Mines (71)
— 1970 – 1974: Student at ENS
— 1974 – 1981: Assistant at Université Paris VII and research in Statistics under the supervision of Robert Azencott and Didier Dacunha-Castelle
— 1980: Defense of a Doctorat d’État
— 1981 – 1990: Professor at Université Paris X-Nanterre
— 1990 – 2014: Professor at Université Pierre et Marie Curie
— Since 2015: Professor emeritus at Sorbonne-Université
Talk: Some attempts toward the construction of a “universal” estimator
It has been known for a long time that the Maximum Likelihood Method, although often considered as universal, is actually not. While it performs quite well under sufficiently strong assumptions, the maximum likelihood estimator may simply not exist, may be inconsistent, and can behave terribly under some very small deviations from the assumed model: it is not robust. An attempt to replace it by a more universal and robust method dates back to Le Cam (1973) for i.i.d. variables, with an extension to independent non-i.i.d. ones in 1975. My doctoral thesis was devoted to the generalization of his results in various directions, in particular robustness, and I also provided some conditions for the resulting estimator to be minimax, relating the minimax risk to some notion of dimension. After some years of work with Pascal Massart on model selection and its applications to adaptation, I introduced in 2006 a new version of my initial method which included both robustness and model selection and which I called T-estimators, since they are derived from tests.
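A classical illustration of the non-existence phenomenon mentioned above (our example, not necessarily the one discussed in the talk) is the two-component Gaussian mixture with densities
$$
p_{\mu,\sigma}(x) \;=\; \tfrac12\,\varphi(x) \;+\; \tfrac{1}{2\sigma}\,\varphi\!\left(\tfrac{x-\mu}{\sigma}\right),
\qquad \varphi(x) = \tfrac{1}{\sqrt{2\pi}}\,e^{-x^2/2}.
$$
Taking $\mu = x_1$ and letting $\sigma \to 0$ makes the likelihood of any sample $x_1,\dots,x_n$ diverge, so the maximum likelihood estimator simply does not exist, even though the model is perfectly smooth.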
Major progress in the construction of such tests was made by Yannick Baraud (2011), and since then Yannick and I have used this new idea to build and study a new family of estimators that we call ρ-estimators, which can be viewed as robust versions of Maximum Likelihood estimators. I intend to present some ideas underlying their construction.
Gilles BLANCHARD
Potsdam University, IHES
Gilles Blanchard is a statistician, professor at the Institute of Mathematics of the University of Potsdam, Germany, and currently a visitor at IHES. After studying at the École Normale Supérieure and obtaining his PhD at Université Paris-Nord, he became a researcher at CNRS and, in 2002, moved to Berlin, where he was a researcher first at the Fraunhofer Institute and then at the Weierstrass Institute. His research focuses on the mathematical properties of machine learning methods and lies at the interface between statistics and theoretical computer science.
Talk: Construction of tight wavelet-like frames on graphs (joint work with Franziska Göbel and Ulrike von Luxburg)
We construct a frame (redundant dictionary) for the space of real-valued functions defined on a neighborhood graph constructed from data points. This frame is adapted to the underlying geometrical structure (e.g. the points belong to an unknown low-dimensional manifold), has finitely many elements, and these elements are localized in frequency as well as in space. This construction follows the ideas of Hammond et al. (2011), with the key point that we construct a tight (or Parseval) frame. We demonstrate the usefulness of this representation for denoising.
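As a brief sketch of why such spectral constructions yield a Parseval frame (a standard argument, in our notation and not necessarily that of the talk): let $L$ be the Laplacian of the neighborhood graph and let the filters $g_j : \mathbb{R}_+ \to \mathbb{R}$ satisfy $\sum_j g_j(\lambda)^2 = 1$ on the spectrum of $L$. Then the dictionary $\{g_j(L)\,\delta_i\}_{j,i}$, with one element per filter $j$ and vertex $i$, satisfies, for every signal $f$,
$$
\sum_{j}\sum_{i} \big|\langle f,\, g_j(L)\,\delta_i\rangle\big|^2
\;=\; \sum_j \|g_j(L) f\|^2
\;=\; f^{\top}\Big(\sum_j g_j(L)^2\Big) f
\;=\; \|f\|^2,
$$
which is exactly the tight (Parseval) frame condition; the remaining work lies in choosing the $g_j$ so that each frame element is also localized in frequency and on the graph.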
Olivier CATONI
CREST, Laboratoire de Statistiques
At the end of his PhD under the supervision of Robert Azencott, Olivier Catoni joined the CNRS in 1989 to work on stochastic optimization and some related subjects in statistical physics. After a while he got involved in statistical learning theory and worked specifically on PAC-Bayesian generalization bounds and heavy-tailed data, making connections with information theory and statistical mechanics. In the last few years, he has become interested in statistical corpus linguistics and its possible application to the analysis of other types of signals.
Talk: Statistical Syntax Analysis (joint work with Thomas Mainguy and Gautier Appert)
In this talk we will describe statistical models, based on conditional independence assumptions, that provide a form of syntax analysis. We will show that these models can be applied not only to corpus linguistics but also to the analysis of statistical samples of digital signals.
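As one standard illustration of a conditional-independence syntax model (a textbook example, not necessarily the model of the talk), a probabilistic context-free grammar assumes that, given the label of a node, the subtree below it is generated independently of the rest of the tree, so that a parse tree $T$ factorizes as
$$
p(T) \;=\; \prod_{(A \to \beta)\,\in\, T} p(\beta \mid A),
$$
the product running over the rewriting rules used in $T$. The models discussed in the talk rest on conditional-independence assumptions of this general flavour, while being estimated directly from statistical samples of text or of digital signals.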
Stuart GEMAN
Brown University
Stuart Geman graduated from the University of Michigan with highest honors in physics. He received a master’s degree in neurophysiology from Dartmouth College, and the Ph.D. degree in Mathematics from the Massachusetts Institute of Technology. He is currently the James Manning Professor of Applied Mathematics at Brown University. He is a Fellow of the Institute of Mathematical Statistics, a Fellow of the American Mathematical Society, and a member of the U.S. National Academy of Sciences.
Talk: On the computational challenges of Natural and Artificial Intelligence
Jean-Michel MOREL
CMLA, ENS Paris-Saclay
Jean-Michel Morel received his PhD degree in applied mathematics from Université Pierre et Marie Curie, Paris, France, in 1980. He started his career in 1979 as an assistant professor in Marseille Luminy, then moved in 1984 to Université Paris-Dauphine, where he was promoted to professor in 1992. He has been Professor of Applied Mathematics at the École Normale Supérieure Paris-Saclay since 1997. His research is focused on the mathematical analysis of image processing. He is a laureate of the Grand Prix Inria-Académie des Sciences, of the Longuet-Higgins prize, and of the CNRS médaille de l’innovation.
Talk: An enigma of perception: anomaly detection (joint work with Axel Davy, Mauricio Delbracio, and Thibaud Ehret)
Jean-Philippe VERT
Google Brain, Mines ParisTech
After a PhD in mathematics at ENS Paris on statistical models for natural language processing, JP Vert has worked at the interface of machine learning and biology, holding research positions at Kyoto University, École des Mines de Paris, Institut Curie, UC Berkeley, ENS Paris and Google Brain. His main research interest is the development of statistical models and machine learning approaches for genomic data, with applications in systems biology and cancer precision medicine.
Talk: Machine learning and genomics