Open source content from the Hi! PARIS Summer School 2022 👩‍🏫


Hi! PARIS Summer School 2022

📖 About

Hi! PARIS held its second summer school on July 4-7, 2022. Hi! PARIS is the interdisciplinary center for Data Analytics and Artificial Intelligence for Science, Business and Society. Founded by HEC Paris and Institut Polytechnique de Paris (IP Paris), and joined in 2021 by Inria, the center sets a standard of excellence for high-level research projects, educational programs, and business applications.

The Hi! PARIS Summer School 2022 on AI & Data for Science, Business and Society covered a wide range of topics in Artificial Intelligence and Data Science from a variety of perspectives. It offered courses ranging from an introduction to Deep Reinforcement Learning to Intelligent Risk Management, Image Recognition using Deep Learning, Optimal Transport for Machine Learning, and Supervised Learning on multivariate brain signals.

The summer school was aimed at PhD-track students, final-year students heading for a PhD, current PhD students, academics, and research engineers who want to expand their knowledge in these areas. It was held at Telecom Paris, in Palaiseau.

The full program is available here. Additional information is available at Hi! PARIS Summer School.

📚 Attended tutorials

Rémi Flamary (Ecole Polytechnique)
This tutorial aims at presenting the mathematical theory of optimal transport (OT) and providing a global view of the potential applications of this theory in machine learning, signal and image processing, and biomedical data processing. The first part of the tutorial presents the theory of optimal transport and its optimization problems through the original formulation of Monge and the Kantorovitch formulation in the primal and the dual. The algorithms used to solve these problems are discussed, and the problem is illustrated on simple examples. The tutorial also introduces the OT-based Wasserstein distance and Wasserstein barycenters, which are fundamental tools for processing histogram data. It then presents recent developments in regularized OT that bring efficient solvers and more robust solutions. The second part of the tutorial presents numerous recent applications of OT in machine learning, signal processing, and biomedical imaging. It shows how the mapping inherent to optimal transport can be used to perform domain adaptation and transfer learning, and it discusses the use of OT on empirical datasets, with applications in generative adversarial networks, unsupervised learning, and the processing of structured data such as graphs.
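
To make the Kantorovitch problem and its entropic regularization concrete, here is a minimal sketch using the POT library (https://pythonot.github.io). POT is not named in the abstract, and the toy point clouds and regularization strength below are arbitrary assumptions:

```python
# Minimal sketch (assumes `pip install pot`): exact vs. regularized discrete OT.
import numpy as np
import ot  # POT: Python Optimal Transport

rng = np.random.default_rng(0)
xs = rng.normal(0.0, 1.0, size=(50, 2))   # source samples
xt = rng.normal(3.0, 1.0, size=(60, 2))   # target samples
a = np.full(50, 1 / 50)                   # uniform source weights
b = np.full(60, 1 / 60)                   # uniform target weights

M = ot.dist(xs, xt)                       # pairwise squared Euclidean costs

# Exact OT: the Kantorovitch problem solved as a linear program.
G_exact = ot.emd(a, b, M)

# Entropic regularization: Sinkhorn iterations, faster and smoother.
G_sink = ot.sinkhorn(a, b, M, reg=1e-1)

# Transport cost under each plan (squared 2-Wasserstein for the exact one).
print("exact cost:   ", np.sum(G_exact * M))
print("sinkhorn cost:", np.sum(G_sink * M))
```
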
Alexandre Gramfort (INRIA)
Understanding how the brain works in healthy and pathological conditions is considered one of the major challenges of the 21st century. After the first electroencephalography (EEG) measurements in 1929, the 1990s saw the birth of modern functional brain imaging, with the first functional MRI (fMRI) and full-head magnetoencephalography (MEG) systems. Today, new tech companies are developing consumer-grade devices for at-home recordings of neural activity. By noninvasively offering unique insights into the living brain, these technologies have started to revolutionize both clinical and cognitive neuroscience. The availability of such devices, made possible by pioneering breakthroughs in physics and engineering, now poses major computational and statistical challenges in which machine learning plays a major role. In this course we discover, hands-on, the types of data one can collect to record the living brain. We then discuss state-of-the-art supervised machine learning approaches for EEG signals in the clinical context of sleep stage classification, as well as brain-computer interfaces. The ML techniques explored are based on deep learning and on Riemannian geometry, which has proven very powerful for classifying EEG data. We do so with MNE-Python (https://mne.tools), which has become a reference tool for processing MEG/EEG/sEEG/ECoG data in Python, as well as the scikit-learn library (https://scikit-learn.org). For the deep learning part we use the Braindecode package (https://braindecode.org), based on PyTorch. The teaching is hands-on, using Jupyter notebooks and public datasets. This tutorial is thus a unique opportunity to see what ML can offer beyond standard applications like computer vision, speech, or NLP.
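
As a rough sketch of such a supervised pipeline (MNE-Python and scikit-learn are named in the abstract, but the sample dataset, event types, and logistic-regression classifier below are illustrative assumptions, not the course notebooks):

```python
# Illustrative sketch: epoch a recording with MNE-Python, classify with scikit-learn.
import mne
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# MNE's sample audio/visual dataset (downloaded on first use).
data_path = mne.datasets.sample.data_path()
raw = mne.io.read_raw_fif(data_path / "MEG" / "sample" / "sample_audvis_raw.fif")
events = mne.find_events(raw)

# Epoch around two stimulus types: shape (n_epochs, n_channels, n_times).
epochs = mne.Epochs(raw, events, event_id=dict(auditory=1, visual=3),
                    tmin=-0.2, tmax=0.5, preload=True)
X = epochs.get_data().reshape(len(epochs), -1)   # flatten channels x times
y = epochs.events[:, 2]                          # class labels

# Plain scikit-learn pipeline with 5-fold cross-validation.
clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
print(cross_val_score(clf, X, y, cv=5).mean())
```
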
Mitali Banerjee (HEC Paris)
This 3-hour module offers a hands-on introduction to deep-learning-based image recognition tools. Participants gain familiarity with preparing and importing images into software (Python) and with applying one of the foundational deep learning architectures to classify the images and create vector representations. We discuss different applications of the output of deep learning tools to extract managerial and scientific insights. In particular, the course discusses applications of these tools to creating large-scale measures of constructs that have otherwise proven elusive to measure or susceptible to measurement bias.
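
As an illustrative sketch of this workflow (the abstract does not name a framework; PyTorch/torchvision, the ResNet-50 architecture, and the file name below are assumptions), a pretrained CNN can both classify an image and produce a vector representation:

```python
# Sketch: classify an image and extract an embedding with a pretrained ResNet-50.
import torch
from torchvision import models, transforms
from PIL import Image

# Standard ImageNet preprocessing.
preprocess = transforms.Compose([
    transforms.Resize(256), transforms.CenterCrop(224), transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
model.eval()

img = Image.open("photo.jpg").convert("RGB")   # hypothetical input file
batch = preprocess(img).unsqueeze(0)           # (1, 3, 224, 224)

with torch.no_grad():
    logits = model(batch)                      # 1000 ImageNet class scores
    # Vector representation: same network minus its final classification layer.
    backbone = torch.nn.Sequential(*list(model.children())[:-1])
    embedding = backbone(batch).flatten(1)     # (1, 2048) feature vector

print(logits.argmax(dim=1), embedding.shape)
```
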
Corentin Tallec (DeepMind)
Be it on Atari games, Go, Chess, StarCraft II or Dota, Deep Reinforcement Learning (DRL) has opened up Reinforcement Learning to a variety of large-scale applications. While it may formally appear to be a straightforward extension of reinforcement learning to deep-learning-based function approximators, DRL often involves more than simply plugging the newest deep learning architecture into the best theoretical reinforcement learning method. In this tutorial, we journey through the recent history of DRL, from the now seminal Neural Fitted Q-iteration to the most popular Deep Q-Network (DQN). Alongside the lecture, the practical session revolves around implementing and testing DRL algorithms in JAX and Haiku on simple environments.
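
Below is a minimal sketch of the core DQN update in JAX and Haiku (both named in the abstract); the network size, optimizer, and batch layout are arbitrary assumptions, and the replay buffer and environment loop are omitted:

```python
# Minimal DQN update sketch in JAX + Haiku (assumed shapes; no replay buffer).
import jax
import jax.numpy as jnp
import haiku as hk
import optax

N_ACTIONS, OBS_DIM = 4, 8

def q_fn(obs):
    # Small MLP mapping an observation to one Q-value per action.
    return hk.nets.MLP([64, 64, N_ACTIONS])(obs)

q = hk.without_apply_rng(hk.transform(q_fn))
params = q.init(jax.random.PRNGKey(0), jnp.zeros((1, OBS_DIM)))
target_params = params  # periodically synced copy, as in DQN

def td_loss(params, target_params, obs, act, rew, next_obs, done, gamma=0.99):
    # Q-value of the action actually taken in each transition.
    q_taken = jnp.take_along_axis(q.apply(params, obs), act[:, None], 1)[:, 0]
    # Bootstrapped target from the frozen target network.
    next_max = q.apply(target_params, next_obs).max(axis=1)
    target = rew + gamma * (1.0 - done) * next_max
    return jnp.mean((q_taken - jax.lax.stop_gradient(target)) ** 2)

opt = optax.adam(1e-3)
opt_state = opt.init(params)

@jax.jit
def update(params, opt_state, target_params, batch):
    loss, grads = jax.value_and_grad(td_loss)(params, target_params, *batch)
    updates, opt_state = opt.update(grads, opt_state)
    params = optax.apply_updates(params, updates)
    return params, opt_state, loss
```
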
Geoffroy Peeters (IP PARIS – Telecom Paris)
As in many fields, deep neural networks have allowed important advances in the processing of audio signals. In this tutorial, we review the specificities of these signals, elements of audio signal processing (as used in the traditional machine learning approach), and how deep neural networks (in particular convolutional ones) can be used to perform feature learning, either without prior knowledge (1D-conv, TCN) or using prior knowledge (source/filter, auto-regressive, HCQT, SincNet, DDSP). We then review the dominant DL architectures, meta-architectures, and training paradigms (classification, metric learning, supervised, unsupervised, self-supervised, semi-supervised) used in audio. We exemplify the use of these for some key applications in music and environmental sound processing: sound event detection, localization, auto-tagging, source separation, and generation.
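
A toy sketch of a spectrogram front end feeding a small convolutional classifier (the abstract prescribes no toolkit; torchaudio and the layer sizes below are assumptions):

```python
# Toy sketch: log-mel spectrogram front end + small 2D CNN classifier.
import torch
import torch.nn as nn
import torchaudio

SAMPLE_RATE, N_MELS, N_CLASSES = 16000, 64, 10

# Fixed (non-learned) front end: waveform -> log-mel spectrogram.
melspec = torchaudio.transforms.MelSpectrogram(sample_rate=SAMPLE_RATE, n_mels=N_MELS)
to_db = torchaudio.transforms.AmplitudeToDB()

# Small convolutional classifier treating the spectrogram as an image.
classifier = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(32, N_CLASSES),
)

waveform = torch.randn(1, SAMPLE_RATE)            # 1 second of fake audio
features = to_db(melspec(waveform)).unsqueeze(0)  # (batch, 1, n_mels, frames)
logits = classifier(features)                     # (batch, n_classes)
```
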
Krikamol Muandet (Max Planck Institute for Intelligent Systems)
Data-driven decision-making tools have become increasingly prevalent in society today, with applications in critical areas like health care, economics, education, and the justice system. To ensure reliable decisions, it is essential that the models learn from data the genuine correlations (i.e., causal relationships) between the outcomes and the decision variables. This tutorial gives an introduction to the causal inference problem from a machine learning perspective, including causal discovery, treatment effect estimation, instrumental variables (IV), and proxy variables. It then reviews recent developments in how machine learning (ML) based methods, especially modern kernel methods, can be leveraged to tackle some of these problems.
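
To make the instrumental variable (IV) idea concrete, here is an illustrative simulation (my sketch, not tutorial material): a hidden confounder biases ordinary least squares, while two-stage least squares recovers the causal effect.

```python
# Illustrative sketch: instrumental variables via two-stage least squares (2SLS)
# on simulated data with a hidden confounder. All coefficients are arbitrary.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 10_000
u = rng.normal(size=n)                        # hidden confounder
z = rng.normal(size=n)                        # instrument: affects x, not y directly
x = 0.8 * z + u + rng.normal(size=n)          # treatment, confounded by u
y = 2.0 * x + 3.0 * u + rng.normal(size=n)    # true causal effect of x is 2.0

# Naive OLS of y on x is biased upward by the confounder u.
ols = LinearRegression().fit(x[:, None], y)

# 2SLS: regress x on z, then regress y on the predicted (exogenous) part of x.
x_hat = LinearRegression().fit(z[:, None], x).predict(z[:, None])
iv = LinearRegression().fit(x_hat[:, None], y)

print(f"OLS estimate:  {ols.coef_[0]:.2f}  (biased)")
print(f"2SLS estimate: {iv.coef_[0]:.2f}  (close to the true 2.0)")
```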