Riccardo De Santi
Decision Making Under Uncertainty: Foundations and Applications in Scientific Discovery.
I am a PhD student in Machine Learning at the ETH AI Center. I am lucky to be advised by Andreas Krause (Learning and Adaptive Systems group), Niao He (Optimization and Decision Intelligence group), and Kjell Jorner (Digital Chemistry Laboratory). I am also part of the Institute of Machine Learning at ETH. During my Bachelor's, I started my research journey in Reinforcement Learning with Prof. Marcello Restelli at Politecnico di Milano. Afterwards, during my MS in Machine Learning and Theoretical CS at ETH, I was a visiting researcher at the University of Oxford and Imperial College London, both times under the supervision of Profs. Michael Bronstein and Marcello Restelli. My research was recently recognized with an Outstanding Paper Award at ICML 2022.
research interests
I am broadly interested in the foundations of algorithmic decision-making and its applications in automated scientific discovery. This spans a wide spectrum of areas, including:
- Decision Making Under Uncertainty (Reinforcement/Active Learning, Bayesian Optimization, Bandits)
- Optimal Experimental Design
- Submodular and Non-Convex Optimization
- Causality and Geometric Machine Learning
I strive to design reliable algorithms with guarantees that lead to a theoretical understanding of the underlying problems and are relevant for real-world applications, mostly in digital chemistry, including molecular design, drug discovery, and experimental design over chemical spaces.
If you are an MS student wishing to work with me, feel free to contact me here.
Contacts: Google Scholar | Twitter | LinkedIn | Github | rdesanti [at] ethz [dot] ch
news
Jun 1, 2024 | Global Reinforcement Learning: Beyond Linear and Convex Rewards via Submodular Semi-gradient Methods and Geometric Active Exploration in Markov Decision Processes: the Benefit of Abstraction both accepted at ICML 2024!
Jan 17, 2024 | Exploiting Causal Graph Priors with Posterior Sampling for Reinforcement Learning accepted at ICLR 2024!
Nov 23, 2023 | My TEDx talk Beyond the Limits of the Mind: Scientific Discovery Reimagined is now available online!
Nov 22, 2023 | On December 1st I will officially start my PhD at the ETH AI Center, advised by Andreas Krause, Niao He, and Kjell Jorner.
Oct 27, 2023 | Exploiting Causal Graph Priors with Posterior Sampling for Reinforcement Learning accepted at the Causal Representation Learning Workshop at NeurIPS 2023!
selected publications
- AAAI | Provably efficient causal model-based reinforcement learning for systematic generalization. Proceedings of the AAAI Conference on Artificial Intelligence (AAAI), 2023. Also presented at the Workshop on Spurious Correlations, Invariance, and Stability at ICML 2022 and the A Causal View on Dynamical Systems Workshop at NeurIPS 2022.