French-German Machine Learning Symposium

May 10th & 11th 2021, Munich, Germany (Virtual)

About the Symposium

The French-German Machine Learning Symposium aims to strengthen interactions and inspire collaborations between the two countries. We invited some of the leading ML researchers from France and Germany to this two-day symposium to give a glimpse into their research and to engage in discussions on the future of machine learning and on how to strengthen research collaborations in ML between France and Germany.

Due to the coronavirus pandemic, the symposium will be held virtually (via Zoom). All scientific talks will be streamed publicly on YouTube, and questions from the chat will be taken after the talks, time permitting. The panel discussions on the future of machine learning will also be streamed on YouTube, with possible interaction from the audience.

We want to thank all the panelists and attendees for making this great symposium possible! We are planning to repeat the event next year. All talks remain accessible on the YouTube channels in case you missed one or want to rewatch everything.

YouTube Streams

We will be streaming to two channels on both days. Questions from both streams are monitored and may be passed on to the speakers.

Program

Time Slot

Monday, 10th

Tuesday, 11th

9:00 - 9:10

Introduction

Welcome statement by Markus Söder, Minister President, Free State of Bavaria
Introduction (Jean Ponce & Daniel Cremers)

9:10 - 9:30

Francis Bach

On the convergence of gradient descent for neural networks

Many supervised learning methods are naturally cast as optimization problems. For prediction models which are linear in their parameters, this often leads to convex problems for which many guarantees exist. Models which are non-linear in their parameters such as neural networks lead to non-convex optimization problems for which guarantees are harder to obtain, but many empirical successes are reported. In this talk, I will present recent results on two-layer neural networks where the number of hidden neurons tends to infinity, and show how qualitative convergence guarantees may be derived. (Joint work with Lénaïc Chizat).
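
The setting of the talk can be illustrated with a toy experiment. The sketch below is not Bach and Chizat's analysis, just a minimal pure-Python illustration of the object they study: a two-layer ReLU network in the mean-field parameterisation (output averaged over hidden neurons), trained by full-batch gradient descent on a 1-D regression task. All data and hyperparameters are made up for illustration.

```python
import math, random

random.seed(0)

def relu(z):
    return z if z > 0 else 0.0

# Toy 1-D regression data (made up for illustration)
xs = [i / 10 - 1.0 for i in range(21)]
ys = [math.sin(3 * x) for x in xs]
n = len(xs)

m = 50  # number of hidden neurons; the theory studies m -> infinity
w = [random.gauss(0, 1) for _ in range(m)]
b = [random.gauss(0, 1) for _ in range(m)]
a = [random.gauss(0, 1) for _ in range(m)]

def predict(x):
    # mean-field parameterisation: output is an average over neurons
    return sum(a[j] * relu(w[j] * x + b[j]) for j in range(m)) / m

def loss():
    return sum((predict(x) - y) ** 2 for x, y in zip(xs, ys)) / n

initial = loss()
lr = 10.0
for _ in range(300):
    ga, gw, gb = [0.0] * m, [0.0] * m, [0.0] * m
    for x, y in zip(xs, ys):
        err = 2 * (predict(x) - y) / n  # d(loss)/d(prediction)
        for j in range(m):
            pre = w[j] * x + b[j]
            if pre > 0:  # ReLU active; subgradient is zero otherwise
                ga[j] += err * pre / m
                gw[j] += err * a[j] * x / m
                gb[j] += err * a[j] / m
    for j in range(m):
        a[j] -= lr * ga[j]
        w[j] -= lr * gw[j]
        b[j] -= lr * gb[j]

final = loss()
print(f"loss: {initial:.3f} -> {final:.3f}")
```

Despite the non-convexity noted in the abstract, gradient descent steadily decreases the loss on this toy problem; the mean-field analysis explains why such behaviour becomes predictable as the width grows.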
Biography Francis Bach is a researcher at Inria, where since 2011 he has led the machine learning team, which is part of the Computer Science department at École Normale Supérieure. He graduated from École Polytechnique in 1997 and completed his Ph.D. in Computer Science at U.C. Berkeley in 2005, working with Professor Michael Jordan. He spent two years in the Mathematical Morphology group at École des Mines de Paris, then joined the computer vision project-team at Inria/École Normale Supérieure from 2007 to 2010. Francis Bach is primarily interested in machine learning, and especially in sparse methods, kernel-based learning, large-scale optimization, computer vision and signal processing. He obtained a Starting Grant in 2009 and a Consolidator Grant in 2016 from the European Research Council, and received the Inria young researcher prize in 2012, the ICML test-of-time award in 2014 and 2019, the Lagrange prize in continuous optimization in 2018, and the Jean-Jacques Moreau prize in 2019. He was elected to the French Academy of Sciences in 2020. In 2015, he was program co-chair of the International Conference on Machine Learning (ICML), and general chair in 2018; he is now co-editor-in-chief of the Journal of Machine Learning Research.

Nicolas Mansard

Learning to walk: Optimizing trajectories and policies for real robots and dynamic tasks

Robotics offers challenging problems for the field of artificial intelligence. Among the various open problems in robotics, my favorite is legged locomotion, both for the variety of open questions it raises and for how representative these problems are of many other robots. Legged locomotion implies autonomy, dynamics, mobility, accuracy and speed, dimensionality and controllability. The current solution to this challenge, beautifully demonstrated by Boston Dynamics this winter, relies on tailored reduced models and trajectory optimization of the so-called centroidal dynamics. While we proudly contributed to this approach, we now believe that the next level is to produce similar movements, or better, without relying on ad-hoc simplifications. We will explain how careful modeling, numerical optimization and predictive control enable us to optimize the whole-body dynamics, accounting in real time for the 30 robot motors and various balance or collision constraints. While the optimal control problem to be solved in real time is nonconvex, we relied on a memory of motion, trained off-line on a large database of planned movements, to guide the numerical algorithm to the optimum. The resulting control corresponds to the optimal policy, yet it is optimized in real time from a gross approximation by rolling out the robot's predicted movements. Our next step is to understand how reinforcement learning (RL) algorithms can be made more accurate and faster to converge, in order to obtain a fair closed-form approximation of the optimal policy that matches the accuracy level required by our legged machines. To that end, we are investigating second-order RL algorithms, exploiting the derivatives of the model and locally optimal roll-outs.
Biography Nicolas Mansard is /directeur de recherche/ (senior researcher) in the Gepetto team at LAAS-CNRS, Toulouse, France. He is one of the lead members of the team, which counts 8 permanent researchers and about 25 post-docs and PhD students. Gepetto is recognized for its expertise in legged robotics and has developed a humanoid research platform comprising 3 full-size humanoid robots and a family of smaller quadrupeds. Nicolas Mansard coordinates the EU project "Memory of Motion" and holds the chair "Artificial and Natural Movement" of the Toulouse AI lab ANITI. He received the CNRS Bronze Medal and the award for the best computer-science project of the French research agency ANR. Before that, he was a visiting researcher at the University of Washington, a post-doctoral researcher at AIST Japan and Stanford University, and received his PhD from the University of Rennes in 2006.

9:30 - 9:50

Philipp Hennig

Probabilistic Uncertainty in Computation

As ML computations increasingly become embedded within larger chains of computation, and data-subsampling drastically affects computational precision, tracking and controlling computational errors across the program flow becomes a key functionality of ML solutions. I will present some recent results showcasing the ability of probabilistic numerical algorithms to address this issue in a mathematically clean, and computationally lightweight fashion.
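
To make the idea of probabilistic uncertainty in computation concrete, here is a minimal Bayesian-quadrature sketch, the textbook example of a probabilistic numerical method (not code from the talk; the integrand, kernel and length-scale are made up). A Gaussian-process prior on the integrand turns a definite integral into a Gaussian random variable whose posterior mean is a weighted sum of function evaluations; the posterior variance (omitted here) is exactly the kind of tracked computational error the abstract refers to.

```python
import math

def k(x, y, ell=0.3):
    """Squared-exponential covariance."""
    return math.exp(-(x - y) ** 2 / (2 * ell ** 2))

def kernel_integral(a, ell=0.3):
    """Closed form of the kernel mean  int_0^1 k(x, a) dx."""
    s = math.sqrt(2) * ell
    return ell * math.sqrt(math.pi / 2) * (math.erf((1 - a) / s) + math.erf(a / s))

def solve(A, b):
    """Gaussian elimination with partial pivoting (small systems only)."""
    n = len(A)
    M = [row[:] + [bv] for row, bv in zip(A, b)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            M[r] = [v - f * w for v, w in zip(M[r], M[c])]
    x = [0.0] * n
    for c in reversed(range(n)):
        x[c] = (M[c][n] - sum(M[c][j] * x[j] for j in range(c + 1, n))) / M[c][c]
    return x

def f(x):
    return math.sin(3 * x)

nodes = [i / 7 for i in range(8)]  # 8 evaluation points in [0, 1]
K = [[k(u, v) + (1e-9 if u == v else 0.0) for v in nodes] for u in nodes]
weights = solve(K, [kernel_integral(u) for u in nodes])
# Posterior mean of the integral: a weighted sum of the f evaluations
estimate = sum(wt * f(u) for wt, u in zip(weights, nodes))
truth = (1 - math.cos(3)) / 3
print(estimate, truth)
```

With only 8 evaluations the posterior mean already matches the true integral closely; shrinking the posterior variance by choosing where to evaluate next is what turns the quadrature rule into a learning agent.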
Biography Philipp Hennig holds the Chair for the Methods of Machine Learning at the University of Tübingen and is an adjunct research scientist at the MPI for Intelligent Systems. Since his PhD with David MacKay, he has been interested in the connection between computation and inference. His work has helped establish the notion of probabilistic numerics: the description of computational algorithms as machine learning agents. Hennig is the deputy speaker of the Cyber Valley Initiative of the State of Baden-Württemberg and a director of the ELLIS program for Theory, Algorithms and Computations.

Michael Moeller

Deep Learning vs. Energy Minimization Methods for Inverse Problems

Many practical applications require inferring a desired quantity from measurements that contain only implicit information about it, commonly resulting in ill-posed inverse reconstruction problems. While classical approaches formulate their solution as the argument that minimizes a suitable cost function, recent works excel on image reconstruction benchmarks using deep learning. This talk will discuss advantages and drawbacks of both approaches, exemplify recent advances at their confluence, and highlight future challenges in learnable energy minimization methods and bi-level learning.
Biography Michael Moeller received his diploma and PhD degrees in applied mathematics from the University of Muenster, Germany, in 2009 and 2012 respectively, both in collaboration with the University of California, Los Angeles (UCLA), where he spent two years as a visiting researcher. After working from 2012 to 2014 for Arnold and Richter Cinetechnik (ARRI), one of the world's leading manufacturers of professional motion picture cameras, he joined the Computer Vision group at the Technical University of Munich as a postdoc. Since 2016 he has been a professor at the University of Siegen, and since 2018 head of the Chair for Computer Vision there. He has received several honors, including best paper honorable mention awards at CVPR 2016 and GCPR 2020.

9:50 - 10:10

Coffee Break

10:10 - 10:30

Bernd Bischl

AutoML - Current Challenges and Underexplored Issues

Over the last 10 years, AutoML has become a standard tool for automatically configuring complex ML systems, shielding users from manual trial-and-error decisions regarding model selection, hyperparameters, preprocessing and feature extraction. Although many practical tools are already freely available, many challenges remain, and this talk will briefly introduce and discuss the most relevant ones from our perspective. These include: a) recent benchmarking results and the state of available data; b) the need for multi-criteria evaluation and optimization to cope with complex application scenarios; c) measuring and optimizing for interpretability and trustworthiness.
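
As a toy illustration of the configuration-search loop at the core of AutoML tools (a hypothetical example, not from the talk): random search over the hyperparameters of a k-nearest-neighbour regressor, scored on a held-out validation set. Real AutoML systems search far larger spaces (model family, preprocessing, feature extraction) with smarter strategies such as Bayesian optimization.

```python
import math, random

random.seed(0)

# Toy dataset: noisy samples of a 1-D function (made up for illustration)
X = [random.uniform(0, 1) for _ in range(200)]
Y = [math.sin(6 * x) + random.gauss(0, 0.2) for x in X]
train = list(zip(X[:150], Y[:150]))
valid = list(zip(X[150:], Y[150:]))

def knn_predict(x, data, k, weighted):
    """k-NN regression, optionally with inverse-distance weighting."""
    neighbours = sorted(data, key=lambda p: abs(p[0] - x))[:k]
    if weighted:
        ws = [1.0 / (abs(px - x) + 1e-9) for px, _ in neighbours]
        return sum(w * py for w, (_, py) in zip(ws, neighbours)) / sum(ws)
    return sum(py for _, py in neighbours) / k

def valid_error(k, weighted):
    """Mean squared error on the held-out validation split."""
    return sum((knn_predict(x, train, k, weighted) - y) ** 2
               for x, y in valid) / len(valid)

# Random search over the configuration space (k, weighting scheme)
best = None
for _ in range(30):
    cfg = (random.randint(1, 50), random.choice([True, False]))
    err = valid_error(*cfg)
    if best is None or err < best[0]:
        best = (err, cfg)

print("best validation MSE %.3f with config %s" % (best[0], best[1]))
```

Even this crude loop already illustrates point b) of the abstract: a single validation score hides trade-offs (speed, interpretability, robustness) that a multi-criteria search would have to expose.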
Biography Bernd Bischl holds the chair of "Statistical Learning and Data Science" at the Department of Statistics of the Ludwig-Maximilians-University Munich and is a co-director of the Munich Center for Machine Learning (MCML), one of Germany's national competence centers for ML. He studied Computer Science, Artificial Intelligence and Data Sciences in Hamburg, Edinburgh and Dortmund, and obtained his Ph.D. from Dortmund Technical University in 2013 with a thesis on "Model and Algorithm Selection in Statistical Learning and Optimization". His research interests include AutoML, model selection, interpretable ML, and the development of statistical software. He is an active developer of several R packages, leads the "mlr" (Machine Learning in R) engineering group, and is a co-founder of the science platform "OpenML" for open and reproducible ML. Furthermore, he leads the Munich branch of the Fraunhofer ADA Lovelace Center for Analytics, Data & Applications, a new type of research infrastructure to support businesses in Bavaria, especially in the SME sector.

Klaus Müller

Machine Learning meets Quantum Physics

I provide a brief introduction to machine learning as an enabling technology for quantum physics. Interestingly, novel insights into physics and chemistry can be drawn from the trained machine learning models.
Biography Klaus-Robert Müller studied physics at the Technische Universität Karlsruhe, Germany, from 1984 to 1989, and received the Ph.D. degree in computer science from the Technische Universität Karlsruhe in 1992. He has been a Professor of computer science at TU Berlin since 2006, and since 2012 he has also been a distinguished Professor at Korea University. In 2020 and 2021, he is on sabbatical leave from TU Berlin, with the Brain Team, Google Research, Berlin, Germany. He is also directing and co-directing the Berlin Machine Learning Center and the Berlin Big Data Center, respectively. Dr. Müller was elected a member of the German National Academy of Sciences Leopoldina in 2012, of the Berlin-Brandenburg Academy of Sciences in 2017, and an External Scientific Member of the Max Planck Society in 2017. In 2019 and 2020, he was named a Highly Cited Researcher in the cross-disciplinary area. Among other honors, he was awarded the Olympus Prize for Pattern Recognition in 1999, the SEL Alcatel Communication Award in 2006, the Science Prize of Berlin by the Governing Mayor of Berlin in 2014, the Vodafone Innovations Award in 2017, and the 2020 Best Paper Award of the journal Pattern Recognition.

10:30 - 10:50

Cordelia Schmid

Do you see what I see? Large-scale learning from multimodal videos

In this talk we present recent progress on large-scale learning of multimodal video representations. We start by presenting VideoBERT, a joint model for video and language that repurposes the BERT model for multimodal data. This model achieves state-of-the-art results on zero-shot prediction and video captioning. Next we show how to extend learning from instruction videos to general movies based on cross-modal supervision. We use movie screenplays to learn speech-to-action classifiers and use these classifiers to mine video clips from thousands of hours of movies. We demonstrate performance comparable to or better than that of fully supervised approaches for action classification. Next we present an approach for video question answering that relies on training from instruction videos and cross-modal supervision with a textual question-answering module. We show state-of-the-art results for video question answering without any supervision (zero-shot VQA) and demonstrate that our approach obtains competitive results when pre-training and then fine-tuning on video question answering datasets. We conclude the talk by presenting a recent video representation that is fully transformer-based. Our Video Vision Transformer (ViViT) is shown to outperform the state of the art on video classification. Furthermore, it is flexible and allows for efficiency/accuracy trade-offs via several different architectures.
Biography Cordelia Schmid holds an M.S. degree in Computer Science from the University of Karlsruhe and a Doctorate, also in Computer Science, from the Institut National Polytechnique de Grenoble (INPG). Her doctoral thesis received the best thesis award from INPG in 1996. Dr. Schmid was a post-doctoral research assistant in the Robotics Research Group of Oxford University in 1996--1997. Since 1997 she has held a permanent research position at Inria, where she is a research director. Dr. Schmid has been an Associate Editor for IEEE PAMI (2001--2005) and for IJCV (2004--2012), editor-in-chief for IJCV (2013--2019), a program chair of IEEE CVPR 2005 and ECCV 2012, as well as a general chair of IEEE CVPR 2015, ECCV 2020 and ICCV 2023. In 2006, 2014 and 2016, she was awarded the Longuet-Higgins prize for fundamental contributions in computer vision that have withstood the test of time. She is an IEEE Fellow. She was awarded an ERC Advanced Grant in 2013, the Humboldt Research Award in 2015 and the Inria & French Academy of Science Grand Prix in 2016. She was elected to the German National Academy of Sciences, Leopoldina, in 2017. In 2018 she received the Koenderink prize for fundamental contributions in computer vision that have withstood the test of time. She was awarded the Royal Society Milner Award in 2020. Since 2018 she has held a joint appointment with Google Research.

Jean-Bernard Lasserre

The Christoffel-Darboux kernel as a tool in data analysis

While the Christoffel-Darboux (CD) kernel is well known in approximation theory and the theory of orthogonal polynomials, its striking properties seem to have been largely ignored in the context of data analysis, one main reason being that data analysis involves measures supported on finitely many points (the data set). In this talk we briefly introduce the CD kernel and some of its properties, and argue that it can become a simple and easy-to-use tool in data analysis, e.g. for problems such as outlier detection, manifold learning and density estimation.
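
To make the outlier-detection claim concrete, here is a small self-contained 1-D sketch (not from the talk; a standard construction around the empirical Christoffel function, with made-up data and an arbitrary degree choice): build the empirical moment matrix of monomials, invert it, and evaluate the Christoffel function, which takes much smaller values at points far from the bulk of the data.

```python
import random

random.seed(1)

def monomials(x, d):
    """Monomial basis 1, x, ..., x^d evaluated at x."""
    return [x ** k for k in range(d + 1)]

def invert(M):
    """Gauss-Jordan inverse of a small square matrix, with pivoting."""
    n = len(M)
    A = [row[:] + [float(i == j) for j in range(n)] for i, row in enumerate(M)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(A[r][c]))
        A[c], A[p] = A[p], A[c]
        piv = A[c][c]
        A[c] = [v / piv for v in A[c]]
        for r in range(n):
            if r != c and A[r][c]:
                A[r] = [v - A[r][c] * w for v, w in zip(A[r], A[c])]
    return [row[n:] for row in A]

def christoffel(data, d):
    """Return the empirical Christoffel function of a 1-D data set."""
    n, s = len(data), d + 1
    M = [[0.0] * s for _ in range(s)]  # empirical moment matrix
    for x in data:
        v = monomials(x, d)
        for i in range(s):
            for j in range(s):
                M[i][j] += v[i] * v[j] / n
    Minv = invert(M)
    def Lam(x):
        v = monomials(x, d)
        q = sum(v[i] * Minv[i][j] * v[j] for i in range(s) for j in range(s))
        return 1.0 / q
    return Lam

data = [random.uniform(-1, 1) for _ in range(200)] + [4.0]  # one outlier
Lam = christoffel(data, d=4)
print(Lam(0.0), Lam(4.0))  # the outlier scores far lower
```

The score needs no density model and no tuning beyond the polynomial degree, which is part of the appeal sketched in the abstract.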
Biography Jean-Bernard Lasserre is a CNRS Directeur de Recherche emeritus and a member of the Institute of Mathematics of the University of Toulouse, and he holds the "Polynomial Optimization" chair at the Artificial and Natural Intelligence Toulouse Institute (ANITI). He is a recipient of the 2015 John von Neumann Theory Prize and Khachiyan Prize, and of the 2009 Lagrange Prize in continuous optimization, a SIAM Fellow, a 2019 Simons Research Professor at CRM, and a laureate of an ERC Advanced Grant. Co-author of articles and books in applied mathematics, Markov control processes, Markov chains, probability, operations research, and optimization, he is particularly interested in tools from algebraic geometry (e.g. positivity certificates) for optimization in a broad sense, and in some machine learning applications.

10:50 - 11:10

Hinrich Schuetze

Humans Learn From Task Descriptions and So Should Our Models

In many types of human learning, task descriptions are a central ingredient. They are usually accompanied by a few examples, but there is very little human learning that is based on examples only. In contrast, the typical learning setup for NLP tasks lacks task descriptions and is supervised with hundreds or thousands of examples. This is even true for so-called few-shot learning, a term often applied to scenarios with tens of thousands of "shots". Inspired by the GPT models, which also exploit task descriptions, we introduce Pattern-Exploiting Training (PET). PET reformulates task descriptions as cloze questions that can be effectively processed by pretrained language models. In contrast to GPT, PET combines task descriptions with supervised learning. We show that PET learns well from as few as ten training examples and outperforms GPT-3 on GLUE even though it has 99.9% fewer parameters.
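
To illustrate the cloze reformulation at the heart of PET, here is a hypothetical pattern-verbalizer pair for binary sentiment classification. The pattern and verbalizer follow the structure described in the abstract, but the `toy_lm` scorer below is a stand-in invented for this sketch; in PET the scores for the [MASK] position come from a pretrained masked language model, and the model is then fine-tuned on the labeled examples.

```python
def pattern(review: str) -> str:
    """Reformulate the input as a cloze question for a masked LM."""
    return f'{review} All in all, it was [MASK].'

# Verbalizer: map each label to a word the LM can fill into the blank
VERBALIZER = {'positive': 'great', 'negative': 'terrible'}

def classify(review, mask_word_scores):
    """mask_word_scores: callable mapping a cloze string to a dict of
    {token: score} for the [MASK] position. In PET this would come
    from a pretrained masked language model."""
    cloze = pattern(review)
    scores = mask_word_scores(cloze)
    return max(VERBALIZER, key=lambda lab: scores.get(VERBALIZER[lab], 0.0))

def toy_lm(cloze):
    """Toy stand-in for the masked LM, just for illustration."""
    text = cloze.lower()
    return {'great': text.count('love') + text.count('enjoyed'),
            'terrible': text.count('boring') + text.count('awful')}

print(classify('I enjoyed every minute.', toy_lm))   # -> positive
print(classify('An awful, boring film.', toy_lm))    # -> negative
```

The point of the reformulation is that the task description ("All in all, it was ...") does work that would otherwise require many labeled examples.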
Biography Hinrich Schuetze (PhD 1995, Stanford University) is Professor for Computational Linguistics and director of the Center for Information and Language Processing at the University of Munich (LMU Munich). Before moving to Munich in 2013, he taught at the University of Stuttgart. He worked on natural language processing and information retrieval technology at Xerox PARC, at several Silicon Valley startups, and at Google between 1995 and 2004 and in 2008/9. He is a coauthor of Foundations of Statistical Natural Language Processing (with Chris Manning) and Introduction to Information Retrieval (with Chris Manning and Prabhakar Raghavan). His h-index is 55 (https://scholar.google.com/citations?user=qIL9dWUAAAAJ). He was awarded a European Research Council Advanced Grant in 2017. Hinrich serves as an action editor for TACL and was the president of the Association for Computational Linguistics in 2020.

Matthias Hein

Out-of-distribution aware training for robust representations

Neural networks don't know when they don't know, but making them know when they don't know yields more robust representations. This talk gives an overview of recent work, including out-of-distribution aware semi-supervised learning.
Biography Matthias Hein is the Bosch Endowed Professor of Machine Learning at the University of Tübingen. His main research interests are making machine learning systems robust, safe and explainable, and providing theoretical foundations for machine learning, in particular deep learning. He serves regularly as an area chair for ICML, NeurIPS and AISTATS, and was an action editor for the Journal of Machine Learning Research (JMLR) from 2013 to 2018. He is an ELLIS Fellow and has been awarded the German Pattern Recognition Award, an ERC Starting Grant, and several best paper awards (NeurIPS, COLT, ALT).

11:10 - 11:30

Coffee Break

11:30 - 11:50

Chloe-Agathe Azencott

Feature selection in high-dimensional data, with applications to genome-wide association studies

Many problems in genomics require the ability to identify relevant features in data sets containing orders of magnitude more features than samples. One such example is genome-wide association studies (GWAS), in which hundreds of thousands of single-nucleotide polymorphisms are measured for orders of magnitude fewer samples. In this talk, I will present methods based on structured regularization and post-selection inference to address the statistical and computational issues that arise in this context.
Biography Chloé-Agathe Azencott is an assistant professor at the Centre for Computational Biology (CBIO) of MINES ParisTech and Institut Curie (Paris, France). She earned her PhD in computer science at the University of California, Irvine (USA) in 2010, working at the Institute for Genomics and Bioinformatics. She then spent 3 years as a postdoctoral researcher in the Machine Learning and Computational Biology group of the Max Planck Institutes in Tübingen (Germany) before joining CBIO. She has held a PrAIrie Springboard Chair since 2019. Her research revolves around the development and application of machine learning methods for biomedical research, with particular interest in feature selection and the integration of structured information. Chloé-Agathe Azencott is also the co-founder of the Parisian branch of Women in Machine Learning and Data Science.

Thomas Brox

Fostering Generalization in Single-view 3D Reconstruction

Single-view reconstruction was popularized by deep learning. After the initial excitement about the possibility of retrieving good-looking depth maps and even full 3D object shapes from only a single view, doubts have recently appeared about how deep learning manages the task and how sustainable the approach is when applied outside academic benchmarks. Especially for 3D shape reconstruction, performance is very poor when deviating too much from the training distribution, and even providing ground-truth depth maps as input cannot fix the problem. In this talk, I will argue that this is an issue of the training procedure, which focuses entirely on global shape priors, whereas nothing in the procedure fosters recombination of local structures in the data. I will also present a simple way to fix this and demonstrate that it has dramatic effects on generalization.
Biography Thomas Brox received his Ph.D. in computer science from Saarland University, Germany, in 2005. Since 2010, he has been heading the Computer Vision Group at the University of Freiburg, and since 2020 he has also been an Amazon Scholar in the Tübingen lablet. Thomas Brox is interested in visual representation learning, video analysis, and learned 3D representations. In particular, he explores ways to train deep networks with less data and less human supervision, which is today the bottleneck for most application domains. He is also investigating the robustness of learned models to data variation and more natural ways of learning. Works from his lab, such as the U-Net and the FlowNet, have become quite popular. For his work on optical flow estimation, he received the Longuet-Higgins Best Paper Award and the Koenderink Prize for Fundamental Contributions in Computer Vision. He was also awarded an ERC Starting Grant.

11:50 - 12:10

Nicholas Ayache

AI for Healthcare, hopes and challenges

TBA
Biography Nicholas Ayache is a Research Director (classe exceptionnelle) at Inria, head of the Epione research project-team dedicated to e-patients for e-medicine, and Scientific Director of the new AI institute 3IA Côte d'Azur. He is a member of the French Academy of Sciences and of the Academy of Surgery. His current research focuses on AI methods to improve diagnosis, prognosis and therapy from medical images and from the clinical, biological, behavioral and environmental data available on the patient. N. Ayache was an invited scientist at MIT, Harvard, and Brigham and Women's Hospital (Boston) in 2007. He held a Professor's Chair at the Collège de France in 2014, where he introduced a new course on the "Personalized Digital Patient". He served as Chief Scientific Officer (CSO) of the Institut Hospitalo-Universitaire (IHU) of Strasbourg (2012-2015). N. Ayache graduated from École des Mines (Saint-Étienne) in 1980 and obtained his PhD and Thèse d'État (Habilitation) from the University of Paris-Sud (Orsay, France). He is the author or co-author of more than 400 scientific publications that have received 44,000+ citations (h-index 104+) according to Google Scholar. He co-founded seven start-up companies in image processing, computer vision, and biomedical imaging. N. Ayache is co-founder and co-editor-in-chief of the Elsevier journal Medical Image Analysis (IF = 11.148 in 2020). He received the International Steven Hoogendijk Award in 2020, the Grand Prize of the City of Nice in 2019, the Grand Prize Inria - Académie des sciences in 2014, a European Research Council (ERC) advanced grant (2012-2017), the Microsoft Prize for Research in Europe (Royal Society & Academy of Sciences) in 2008, and the EADS Foundation Prize in Information Sciences in 2006. N. Ayache is a Fellow of the American Institute for Medical and Biological Engineering (AIMBE), of the European Alliance of Medical and Biological Engineering and Science (EAMBES), and of the Medical Image Computing and Computer Assisted Intervention (MICCAI) Society, from which he received the "Enduring Impact Award" in 2013.

Julien Mairal

Lucas-Kanade reloaded: super-resolution from raw image bursts

This presentation addresses the problem of reconstructing a high-resolution image from multiple lower-resolution snapshots captured from slightly different viewpoints in space and time. Key challenges for solving this super-resolution problem include (i) aligning the input pictures with sub-pixel accuracy, (ii) handling raw (noisy) images for maximal faithfulness to native camera data, and (iii) designing/learning an image prior (regularizer) well suited to the task. We address these three challenges with a hybrid algorithm building on the insight that aliasing is an ally in this setting, with parameters that can be learned end to end, while retaining the interpretability of classical approaches to inverse problems. The effectiveness of our approach is demonstrated on synthetic and real image bursts, setting a new state of the art on several benchmarks and delivering excellent qualitative results on real raw bursts captured by smartphones and prosumer cameras.
This is a joint work with Bruno Lecouat and Jean Ponce.
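
The "aliasing is an ally" insight can be illustrated in a heavily simplified 1-D setting (an entirely hypothetical sketch, not the authors' algorithm): if each low-resolution frame in a burst samples the scene at a different sub-pixel offset and the alignment is known, interleaving the aligned samples ("shift-and-add") recovers the high-resolution grid exactly in the noise-free case.

```python
import math

K = 4                                          # super-resolution factor
hi = [math.sin(0.17 * t) for t in range(64)]   # unknown high-res signal

# Burst of K low-res frames; frame o samples at sub-pixel offset o
# (in the real problem the offsets must be estimated to sub-pixel accuracy)
burst = {o: hi[o::K] for o in range(K)}

# Shift-and-add reconstruction: place each frame's samples back at
# its offset position on the high-res grid
recon = [0.0] * len(hi)
for o, frame in burst.items():
    for i, v in enumerate(frame):
        recon[o + K * i] = v

err = max(abs(a - b) for a, b in zip(hi, recon))
print(err)  # 0.0 -- exact in this idealised noise-free setting
```

The talk's actual setting is far harder: offsets are unknown and non-integer, the frames are noisy raw data, and the prior is learned, which is why a hybrid learned/classical algorithm is needed.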
Biography Julien Mairal is a research scientist at Inria. He received a graduate degree from École Polytechnique, France, in 2005, and a Ph.D. from École Normale Supérieure, Cachan, France, in 2010. He spent two years as a post-doctoral researcher in the statistics department of UC Berkeley before joining Inria in 2012. His research interests include machine learning, computer vision, mathematical optimization, and statistical image and signal processing. In 2013, he received the Cor Baayen prize, awarded every year by ERCIM to a promising young researcher in computer science and applied mathematics. In 2016, he received a Starting Grant from the European Research Council (ERC), and he received the IEEE PAMI young researcher award in 2017 and the ICML test-of-time award in 2019.

12:10 - 12:30

Charles Bouveyron

Statistical Learning on Interaction Data for Public Health

This talk focuses on the problem of statistical learning with relational data in public health. We will in particular discuss the cases of learning from networks and from high-dimensional data. Two model-based approaches will be briefly described and illustrated on real-world problems: the analysis of the COVID-19 publication network and the study of a large-scale pharmacovigilance data set.
Biography Charles Bouveyron is Full Professor of Statistics at Université Côte d'Azur and the director of the Institut 3IA Côte d'Azur, one of the four French national institutes for Artificial Intelligence. He is the head of the Maasai research team, a joint team between Inria and Université Côte d'Azur that gathers mathematicians and computer scientists to propose innovative models and algorithms for Artificial Intelligence. His research interests include high-dimensional statistical learning, adaptive learning, statistical network analysis, and learning from functional or complex data, with applications in medicine, image analysis and the digital humanities. He has published extensively on these topics and is the author of the book "Model-based Clustering and Classification for Data Science" (Cambridge University Press, 2019). He is the founding organizer of the StatLearn series of workshops. Previously, he worked at Université de Paris (full Professor, 2013-2017), Université Paris 1 Panthéon-Sorbonne (Associate Professor, 2007-2013) and Acadia University (postdoctoral researcher, 2006-2007). He received his Ph.D. in 2006 from Université Grenoble 1 (France) for his work on high-dimensional classification.

Daniel Cremers

Visual SLAM in the Age of Self-Supervised Learning

While neural networks have swept the field of computer vision and replaced classical methods in most areas of image analysis and beyond, extending their power to the domain of camera-based 3D reconstruction remains an important open challenge. In my talk, I will advocate hybrid methods which integrate deep network predictions to boost the performance of direct visual SLAM approaches. The resulting method allows us to track a single camera with a precision that is on par with state-of-the-art stereo-inertial odometry methods.
Biography Daniel Cremers studied physics and mathematics in Heidelberg, Indiana and New York. After his PhD in Computer Science (2002), he spent three years at UCLA and in Princeton. In 2005 he became an associate professor in Bonn, and since 2009 he has held the Chair of Computer Vision and Artificial Intelligence at TU Munich. He obtained five ERC grants (including Starting, Consolidator and Advanced grants) and received numerous awards, including the Gottfried Wilhelm Leibniz Award (2016), the biggest award in German academia. He is co-director of the Munich Center for Machine Learning, of ELLIS Munich and of the Munich Data Science Institute, and a member of the Dagstuhl Scientific Directorate and of the Bavarian Academy of Sciences and Humanities.

12:30 - 13:30

Lunch Break

13:30 - 13:50

Laurent Besacier

Self Supervised Representation Learning for Pre-training Speech Systems

Self-supervised learning from huge amounts of unlabeled data has been successfully explored for image processing and natural language processing. Since 2019, recent works have also investigated self-supervised representation learning from speech, which has notably been successful in improving performance on downstream tasks such as speech recognition. These recent works suggest that it is possible to reduce dependence on labeled data for building speech systems through acoustic representation learning. In this talk I will present an overview of these recent approaches to self-supervised learning from speech and show my own investigations into using them for spoken language processing tasks for which the size of the training data is limited.
Biography Laurent Besacier has been a principal scientist at Naver Labs Europe since January 2021, where he leads the Natural Language Processing (NLP) group. Before that, he was a full professor at the University Grenoble Alpes (UGA) from 2009, where he led the GETALP group (natural language and speech processing) for 8 years. Laurent is still affiliated with UGA. His main research expertise and interests lie in natural language processing, automatic speech recognition, machine translation, under-resourced languages, machine-assisted language documentation, and the evaluation of NLP systems. Laurent is also involved in MIAI (the Grenoble AI institute), where he holds a chair simply entitled 'AI & Language'.

Eric Moulines

Invertible-flow non equilibrium sampling

Simultaneously sampling from a complex distribution with an intractable normalizing constant and approximating expectations under this distribution is a challenging problem. We introduce a novel scheme, inspired by the non-equilibrium sampling approach introduced by Rotskoff and Vanden-Eijnden (2019), called Invertible Flow Non Equilibrium Sampling (InFine). InFine departs from classical Sequential Monte Carlo (SMC) and Markov chain Monte Carlo (MCMC) approaches. It constructs unbiased estimators of expectations, and in particular of normalizing constants, by combining the orbits of a deterministic transform started from random initializations. When this transform is chosen as an appropriate integrator of a conformal Hamiltonian system, these orbits are optimization paths. InFine is also naturally suited to designing new MCMC sampling schemes by selecting samples on the optimization paths. Additionally, InFine can be used to construct an Evidence Lower Bound (ELBO), leading to a new class of Variational AutoEncoders (VAEs).
Biography Eric Moulines received the Engineering degree from Ecole Polytechnique, Paris, France, in 1984, and the Ph.D. degree in electrical engineering from Ecole Nationale Supérieure des Télécommunications in 1990. In 1990, he joined the Signal and Image Processing department at Télécom ParisTech, where he became a full professor in 1996. In 2015, he joined the Applied Mathematics Center of Ecole Polytechnique, where he is currently a professor in statistics. His areas of expertise include computational statistics (Monte Carlo simulation, stochastic approximation), statistical machine learning, statistical signal processing, and time-series analysis. His current research topics cover high-dimensional Monte Carlo sampling and stochastic optimization with applications to uncertainty quantification and generative models (variational autoencoders, generative adversarial networks). He has published more than 120 papers in leading journals in signal processing, computational statistics, and applied probability. Eric Moulines is an EURASIP Fellow and was elected an IMS Fellow in 2016. He was the recipient of the 2010 Silver Medal of the Centre National de la Recherche Scientifique and the 2011 Orange Prize of the French Academy of Sciences, and he received the Technical Achievement Award from EURASIP in 2020. He was elected to the French Academy of Sciences in 2017.

13:50 - 14:10

Gérard Biau

Some Theoretical Insights into Wasserstein GANs

SHOW ABSTRACT


Generative Adversarial Networks (GANs) have been successful in producing outstanding results in areas as diverse as image, video, and text generation. Building on these successes, a large number of empirical studies have validated the benefits of the cousin approach called Wasserstein GANs (WGANs), which brings stabilization to the training process. In the present contribution, we add a new stone to the edifice by proposing some theoretical advances on the properties of WGANs. First, we properly define the architecture of WGANs in the context of integral probability metrics parameterized by neural networks and highlight some of their basic mathematical features. We stress in particular the interesting optimization properties arising from the use of a parametric 1-Lipschitz discriminator. Then, in a statistically driven approach, we study the convergence of empirical WGANs as the sample size tends to infinity, and clarify the adversarial effects of the generator and the discriminator by underlining some trade-off properties. These features are finally illustrated with experiments using both synthetic and real-world datasets.
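The integral-probability-metric (IPM) view of the critic can be made concrete in one dimension, where the supremum over 1-Lipschitz functions has a closed form via order statistics. The following NumPy sketch illustrates that viewpoint; it is a toy example, not code from the contribution:

```python
import numpy as np

def ipm_objective(f, real, fake):
    """WGAN critic objective E[f(real)] - E[f(fake)] for one candidate
    critic f. The Wasserstein-1 distance is the supremum of this quantity
    over all 1-Lipschitz functions f."""
    return np.mean(f(real)) - np.mean(f(fake))

def w1_empirical_1d(real, fake):
    """For equal-size 1-D samples, the supremum is attained in closed form:
    the mean absolute difference between sorted samples (order statistics)."""
    return np.mean(np.abs(np.sort(real) - np.sort(fake)))

# Any single 1-Lipschitz critic, e.g. the identity, lower-bounds the supremum:
rng = np.random.default_rng(1)
real = rng.normal(loc=2.0, size=500)
fake = rng.normal(loc=0.0, size=500)
w1 = w1_empirical_1d(real, fake)
lower_bound = ipm_objective(lambda x: x, real, fake)
```

Training the WGAN critic amounts to searching a parametric 1-Lipschitz family for the function that pushes this lower bound as close to the supremum as possible.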
Biography Gérard Biau is a full professor at the Probability, Statistics, and Modeling Laboratory (LPSM) of Sorbonne University, Paris. His research mainly focuses on developing new methodologies and rigorous mathematical theory in statistical learning and artificial intelligence, whilst trying to find connections between statistics and algorithms. He was a member of the Institut Universitaire de France from 2012 to 2017 and served from 2015 to 2018 as the president of the French Statistical Society. In 2018, he was awarded the Michel Monpetit - Inria prize by the French Academy of Sciences. He is currently director of the Sorbonne Center for Artificial Intelligence (SCAI).

Laura Leal-Taixé

Learning Intra-Batch Connections for Deep Metric Learning

SHOW ABSTRACT


The goal of metric learning is to learn a function that maps samples to a lower-dimensional space where similar samples lie closer than dissimilar ones. In the case of deep metric learning, the mapping is performed by training a neural network. Most approaches rely on losses that only take the relations between pairs or triplets of samples into account, which either belong to the same class or to two different classes. However, these approaches do not explore the embedding space in its entirety. To this end, we propose an approach based on message passing networks that takes into account all the relations in a mini-batch. We refine embedding vectors by exchanging messages among all samples in a given batch allowing the training process to be aware of the overall structure. Since not all samples are equally important to predict a decision boundary, we use dot-product self-attention during message passing to allow samples to weight the importance of each neighbor accordingly. We achieve state-of-the-art results on clustering and image retrieval on the CUB-200-2011, Cars196, Stanford Online Products, and In-Shop Clothes datasets.
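The message-passing refinement described above can be sketched in a few lines. The following is a minimal NumPy illustration; the weight matrices, the single attention head, and the residual update are simplifying assumptions, not the paper's exact architecture:

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)  # subtract max for numerical stability
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def attention_message_passing(E, Wq, Wk, Wv):
    """One round of dot-product self-attention over a mini-batch of embeddings
    E of shape (n, d): every sample attends to every sample in the batch and
    weights the messages of its neighbors accordingly."""
    Q, K, V = E @ Wq, E @ Wk, E @ Wv
    A = softmax(Q @ K.T / np.sqrt(K.shape[1]))  # (n, n) attention weights
    return E + A @ V                            # residual embedding refinement
```

Each row of `A` sums to one, so every embedding is refined by a convex combination of messages from the whole mini-batch rather than from a single pair or triplet.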
Biography Prof. Dr. Laura Leal-Taixé is a tenure-track professor (W2) at the Technical University of Munich, leading the Dynamic Vision and Learning group. Before that, she spent two years as a postdoctoral researcher at ETH Zurich, Switzerland, and a year as a senior postdoctoral researcher in the Computer Vision Group at the Technical University of Munich. She obtained her PhD from Leibniz University Hannover in Germany, spending a year as a visiting scholar at the University of Michigan, Ann Arbor, USA. She pursued her B.Sc. and M.Sc. in Telecommunications Engineering at the Technical University of Catalonia (UPC) in her native city of Barcelona. She went to Boston, USA, to do her Master's thesis at Northeastern University with a fellowship from the Vodafone Foundation. She is a recipient of the Sofja Kovalevskaja Award of 1.65 million euros for her project socialMaps, and of a Google Faculty Award.

14:10 - 14:30

Katharina Morik

Trustworthy Machine Learning — The Care Label Approach

SHOW ABSTRACT


Trustworthy AI has become an important issue. Explainable and fair AI have already matured, and research on robustness is flourishing. Whereas most papers address developers or application engineers, at ML2R we offer care labels that are easy to understand at a glance. They move beyond a description of the methods by exploiting theoretical results about them where possible. Moreover, the certification suite generates systematic tests that check a given implementation on a certain hardware platform, justifying the particular labels. The first two instances have been developed: one for probabilistic graphical models, and one for ResNet-18 and MobileNetV3 applied to ImageNet.
Biography Katharina Morik received her doctorate from the University of Hamburg in 1981 and her habilitation from the TU Berlin in 1988. In 1991, she established the chair of Artificial Intelligence at the TU Dortmund. She is a pioneer in bringing machine learning and computing architectures together so that machine learning models may be executed or even trained on resource-restricted devices. In 2011, she acquired the Collaborative Research Center SFB 876 "Providing Information by Resource-Constrained Data Analysis", consisting of 12 projects and a graduate school. She is a spokesperson of the Competence Center for Machine Learning Rhine-Ruhr (ML2R) and coordinator of the German competence centers for AI. She was a founding member and Program Chair of the conference series IEEE International Conference on Data Mining (ICDM) and is a member of the steering committee of ECML PKDD. Together with Volker Markl, Katharina Morik heads the working group "Technological Pioneers" of the platform "Learning Systems and Data Science" of the BMBF. Prof. Morik is a member of the Academy of Technical Sciences and of the North Rhine-Westphalian Academy of Sciences and Arts. She was elected a Fellow of the German Informatics Society (GI e.V.) in 2019.

Volker Markl

Data Management meets Machine Learning: Scalable Data Processing for Machine Learning Processes

SHOW ABSTRACT


Machine learning (ML) models deployed in production environments are commonly embedded in an iterative process involving numerous steps, including source selection, information extraction, information integration, model building, and model application. In this presentation, we will take a closer look at this process from the perspective of the data management community, which has provided numerous tools and technologies to improve both the effectiveness and efficiency of this iterative process for traditional relational data analysis. In particular, we will discuss the differences and the potential for applying these data management techniques to both ML and data science-oriented computer programs. Leveraging declarative languages in conjunction with automatic distribution, parallelization, and hardware adaptation helps to reduce the complexity of the specification and improve human latency. Furthermore, exploiting scalable, parallel, and distributed algorithms jointly with modern hardware improves both the efficiency of the deployment and technical latency. Lastly, we will present a concrete example of an open-source data stream processing system based on our research, and highlight the current research agenda of BIFOLD at the intersection of Big Data and Machine Learning.
Biography Volker Markl is a German Professor of Computer Science. He leads the Chair of Database Systems and Information Management at TU Berlin and the Intelligent Analytics for Massive Data Research Department at DFKI. In addition, he is Director of the Berlin Institute for the Foundations of Learning and Data (BIFOLD). He is a database systems researcher, conducting research at the intersection of distributed systems, scalable data processing, and machine learning. Volker led the Stratosphere project, which resulted in the creation of Apache Flink. Volker has received numerous honors and prestigious awards, including best paper awards at ACM SIGMOD, VLDB, and ICDE. In 2014, he was elected one of Germany‘s leading “Digital Minds“ (Digitale Köpfe) by the German Informatics Society. He was elected an ACM Fellow for his contributions to query optimization, scalable data processing, and data programmability. He is currently President of the VLDB Endowment, and serves as advisor to academic institutions, governmental organizations, and technology companies. Volker holds eighteen patents and has been co-founder and mentor to several startups.

14:30 - 14:50

Coffee Break

14:50 - 15:10

Yann LeCun

The Rise of Self-Supervised Learning

SHOW ABSTRACT


One of the hottest areas of machine learning in recent times has been Self-Supervised Learning (SSL). In SSL, a learning machine captures the dependencies between variables, and learns representations of the data without requiring human-provided labels. SSL pre-training has revolutionized NLP and is making very fast progress in speech and image recognition. SSL may enable machines to learn predictive models of the world through observation, and to learn representations of the perceptual world from massive amounts of uncurated data, thereby reducing the number of labeled samples needed to learn a downstream task. I will review some of the most recent progress and promising avenues for SSL.
Biography Yann LeCun is VP & Chief AI Scientist at Facebook and Silver Professor at NYU, affiliated with the Courant Institute of Mathematical Sciences & the Center for Data Science. He was the founding Director of Facebook AI Research and of the NYU Center for Data Science. He received an Engineering Diploma from ESIEE (Paris) and a PhD from Sorbonne Université. After a postdoc in Toronto, he joined AT&T Bell Labs in 1988, and AT&T Labs in 1996 as Head of Image Processing Research. He joined NYU as a professor in 2003 and Facebook in 2013. His interests include AI, machine learning, computer perception, robotics, and computational neuroscience. He is the recipient of the 2018 ACM Turing Award (with Geoffrey Hinton and Yoshua Bengio) for "conceptual and engineering breakthroughs that have made deep neural networks a critical component of computing", a member of the National Academy of Engineering, and a Chevalier de la Légion d’Honneur.

Bernhard Schölkopf

Causal Learning

SHOW ABSTRACT


The talk will discuss the relationship between statistical and causal learning, and some thoughts on how to address shortcomings of statistical learning by taking into account causal structures. It will also discuss the problem of ‘disentanglement’ from the causal point of view.
Biography Bernhard Schölkopf's scientific interests are in machine learning and causal inference. He has applied his methods to a number of different fields, ranging from biomedical problems to computational photography and astronomy. Bernhard conducted research at AT&T Bell Labs, at GMD FIRST in Berlin, and at Microsoft Research Cambridge, UK, before becoming a Max Planck director in 2001. He is a member of the German Academy of Sciences (Leopoldina), has (co-)received the J.K. Aggarwal Prize of the International Association for Pattern Recognition, the Academy Prize of the Berlin-Brandenburg Academy of Sciences and Humanities, the Royal Society Milner Award, the Leibniz Award, the Körber European Science Prize, and the BBVA Foundation Frontiers of Knowledge Award, and is an Amazon Distinguished Scholar. He is a Fellow of the ACM and of the CIFAR Program "Learning in Machines and Brains", and holds a Professorship at ETH Zurich. Bernhard co-founded the series of Machine Learning Summer Schools, and currently acts as co-editor-in-chief for the Journal of Machine Learning Research, an early development in open access and today the field's flagship journal.

15:10 - 15:30

Panel Discussion

The Future of Machine Learning
(live streamed)

Jean Ponce

Learning to deblur sharp images

SHOW ABSTRACT


I will address in this presentation the problem of non-blind deblurring and demosaicking of noisy raw images. The proposed approach adapts to raw photographs a learning-based approach to RGB image deblurring by introducing a new interpretable module that jointly demosaicks and deblurs them. The model is trained on RGB images converted into raw ones using a realistic invertible camera pipeline, and its effectiveness is demonstrated on several benchmarks. The proposed algorithm is also used to remove a camera’s inherent blur (its color-dependent point-spread function) in real images, in essence deblurring sharp images. Joint work with Thomas Eboli (Inria) and Jian Sun (Xi'an Jiaotong University).
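For contrast with the learned approach described above, classical non-blind deblurring with a known kernel can be written as a regularized inverse (Wiener-style) filter in the Fourier domain. This sketch assumes a circular convolution model and an ad hoc `snr` constant, and is not the interpretable learned module of the talk:

```python
import numpy as np

def wiener_deconvolve(blurred, kernel, snr=1e-3):
    """Classical non-blind deblurring with a known blur kernel under a
    circular convolution model. A learned approach replaces this
    hand-crafted inverse filter with trained modules."""
    H = np.fft.fft2(kernel, s=blurred.shape)     # kernel transfer function
    G = np.conj(H) / (np.abs(H) ** 2 + snr)      # regularized inverse filter
    return np.real(np.fft.ifft2(np.fft.fft2(blurred) * G))

# Blur a synthetic image with a known kernel, then invert the blur:
rng = np.random.default_rng(0)
img = rng.random((16, 16))
kernel = np.array([[0.6, 0.2], [0.2, 0.0]])      # illustrative kernel
H = np.fft.fft2(kernel, s=img.shape)
blurred = np.real(np.fft.ifft2(np.fft.fft2(img) * H))
restored = wiener_deconvolve(blurred, kernel)
```

The `snr` term keeps the division stable where the kernel's transfer function is small, at the cost of slightly attenuating those frequencies.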
Biography Jean Ponce is a Research Director at Inria and a Visiting Researcher at the NYU Center for Data Science, on leave from Ecole Normale Superieure (ENS) / PSL Research University, where he is a Professor, and served as Director of the Computer Science Department from 2011 to 2017. Dr. Ponce graduated in Mathematics from ENS Cachan in 1982. He received his "Doctorat de Troisieme cycle" (PhD degree) and his "Doctorat d'Etat" (Habilitation degree) in Computer Science in 1983 and 1988 from the University of Paris-Sud, Orsay. Before joining ENS and Inria, Jean Ponce held positions at MIT, Stanford, and the University of Illinois at Urbana-Champaign, where he was a Full Professor until 2005. His research spans Computer Vision, Machine Learning, and Robotics. He is the author of "Computer Vision: A Modern Approach", a textbook translated into Chinese, Japanese, and Russian, and is (slowly) writing a new textbook, "Geometric Foundations of Computer Vision". Jean Ponce is the Scientific Director of the Paris PRAIRIE Interdisciplinary AI Research Institute. He is the Sr. Editor-in-Chief of the International Journal of Computer Vision and has served on the editorial boards of Computer Vision and Image Understanding, Foundations and Trends in Computer Graphics and Vision, the SIAM Journal on Imaging Sciences, and the IEEE Transactions on Robotics and Automation. He will be General Chair for the International Conference on Computer Vision in 2023, and has served as both Program Chair and General Chair for the IEEE Conference on Computer Vision and Pattern Recognition, and General Chair for the European Conference on Computer Vision. Jean Ponce is an IEEE Fellow, an ELLIS Fellow, and a former Sr. Member of the Institut Universitaire de France. He is the recipient of two US patents, an ERC Advanced Grant, the 2016 and 2020 IEEE CVPR Longuet-Higgins Prizes, and the 2019 ICML Test-of-Time Award.

15:30 - 15:50

Kristian Kersting

Neuro-symbolic Concept Learning

SHOW ABSTRACT


Machine learning models may show Clever-Hans-like moments when solving a task by learning the “wrong” thing, e.g. making use of confounding factors within a data set. Unfortunately, it is not easy to find out whether, say, a deep neural network is making Clever-Hans-type mistakes, because they are not reflected in standard performance measures such as precision and recall. While visual explanations go beyond precision and recall, they are also just reminiscent of a child who points towards something but cannot articulate why it is relevant. In contrast, symbols may allow one to access deep networks at the concept level and, in turn, to fix Clever-Hans behavior at a more general level. I will demonstrate that this is not insurmountable by presenting novel object-based deep networks for 2D/3D scene interpretation of images. This talk is based on joint work with Adam Kosiorek, Patrick Schramowski, Wolfgang Stammer, and Karl Stelzner.
Biography Kristian Kersting is a Full Professor (W3) at the CS Department of the TU Darmstadt, Germany. He is the head of the AI and Machine Learning (AIML) lab, a member of the Centre for Cognitive Science, a faculty member of the ELLIS Unit Darmstadt, and the founding co-director of the Hessian Center for Artificial Intelligence (hessian.ai). After receiving his Ph.D. from the University of Freiburg in 2006, he was with MIT, Fraunhofer IAIS, the University of Bonn, and TU Dortmund University. His main research interests are statistical relational artificial intelligence (AI) as well as deep (probabilistic) programming and learning. Kristian has published over 180 peer-reviewed technical papers, co-authored a Morgan & Claypool book on Statistical Relational AI, and co-edited an MIT Press book on Probabilistic Lifted Inference. Kristian is a Fellow of EurAI and ELLIS as well as a key supporter of CLAIRE. He received the inaugural German AI Award 2019, several best paper awards, and the EurAI Dissertation Award 2006.

15:50 - 16:10

Coffee Break

16:10 - 17:00

Panel Discussion

French-German Research Collaborations
(not live streamed)

Organizers

Jean Ponce

Inria

Daniel Cremers

Technical University of Munich

Program & Website Chair

Technical Support

Zorah Lähner

University of Siegen

Lukas Koestler

Technical University of Munich

Supported by