LIG keynote (grande conférence du LIG)
When | Jun 01, 2017, from 02:00 to 03:00
---|---
Where | UGA Campus - Amphithéâtre du bâtiment IMAG
Abstract: Induction, the act of drawing general conclusions from specific observations, is a ubiquitous phenomenon in many cognitive processes. It is also the core mechanism in most machine learning algorithms. Nowadays, machine learning makes newspaper headlines almost daily, achieving seemingly superhuman feats that, for some commentators, foreshadow a future in which robots will be smarter than humans. Yet Hume argued, in his Treatise of Human Nature (1739), that there is no absolute ground on which to base and justify induction. What has happened since then? Has the science of machine learning finally solved the problem of induction?
In this talk, I will discuss several inductive schemes that have been invented in machine learning to allow inductive leaps. For each of them, I will explain what the theory guarantees and examine on what bases and assumptions these guarantees rest. At the same time, I will try to lay out the genealogy of these theoretical perspectives in order to anticipate possible future developments.
The outline of the presentation will be as follows:
- What is induction? How ubiquitous is it, and what are some of its limitations?
- The no-free-lunch theorem and its consequences (a formal statement is sketched after this list)
- Inductive principles in the history of machine learning, and, for each of them: what types of inductive leaps have been the focus of attention? How has performance been measured? And what were the underlying theories?
- The Empirical Risk Minimization principle and its descendants (see the formulation sketched after this list)
- Do new learning frameworks, such as online learning and transfer learning, call for new inductive principles? And on what type of theoretical framework could they be based?
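
For readers who want a concrete anchor for the no-free-lunch theorem mentioned above, here is a minimal sketch in the spirit of Wolpert's 1996 formulation; the notation (algorithms A and B, target function f, training set d, off-training-set error) is standard learning-theory shorthand, not terms from this announcement.

```latex
% No-free-lunch theorem, informal sketch (after Wolpert, 1996):
% averaged uniformly over all possible target functions f,
% any two learning algorithms A and B have the same expected
% off-training-set error E_ots, whatever the training set d:
\sum_{f} \mathbb{E}\bigl[E_{\mathrm{ots}} \mid f, d, A\bigr]
  \;=\;
\sum_{f} \mathbb{E}\bigl[E_{\mathrm{ots}} \mid f, d, B\bigr]
```

In other words, without assumptions that restrict the set of plausible targets, no inductive leap can be justified on average, which is why each inductive principle discussed in the talk carries its own bias.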
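
Similarly, the Empirical Risk Minimization principle from the outline admits a short standard formulation; the symbols below (hypothesis class H, loss l, distribution D) are conventional notation, not definitions taken from the announcement.

```latex
% Empirical Risk Minimization, standard formulation:
% given an i.i.d. sample S = {(x_i, y_i)}_{i=1}^{n} drawn from D,
% ERM returns the hypothesis minimizing the empirical risk
\hat{h} \;=\; \operatorname*{arg\,min}_{h \in \mathcal{H}} \hat{R}_S(h),
\qquad
\hat{R}_S(h) \;=\; \frac{1}{n} \sum_{i=1}^{n} \ell\bigl(h(x_i), y_i\bigr),
% used as a proxy for the true risk
R(h) \;=\; \mathbb{E}_{(x,y) \sim D}\bigl[\ell(h(x), y)\bigr].
```

Classical guarantees (e.g. VC-type bounds) control the gap between empirical and true risk and rest on the stationarity (i.i.d.) assumption, which is precisely what settings such as online and transfer learning relax.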
Bio: Antoine Cornuéjols is Professor of Computer Science at AgroParisTech and head of the LINK (Learning and INtegration of Knowledge) team of the UMR518 MIA-Paris. He has been thinking about and working on artificial intelligence and machine learning since his doctoral studies at UCLA and Orsay University. He is specifically interested in online learning, transfer learning, and collaborative learning, settings where classical machine learning approaches, based on the assumption of a stationary environment, must yield to new principles. He is co-author of two books: one on machine learning concepts and algorithms (in French, two editions) and one on Phase Transitions in Machine Learning. He has published numerous research articles in major journals and conferences and regularly serves on program committees for leading conferences and journals.