Riccardo De Santi

ETH AI Center Ph.D. student. Exploration for Out-of-Distribution Discovery: from Theory to Molecules.


Office: Caltech, ANB 328

Pasadena, California, USA

Currently, I am at the California Institute of Technology (Caltech), visiting Yisong Yue’s group and the lab of Nobel laureate Frances H. Arnold, working to close the loop between generative exploration and wet-lab chemical discovery. I am a Ph.D. student at ETH Zurich, advised by Andreas Krause, Niao He, and Kjell Jorner, and supported by the ETH AI Center and NCCR Catalysis.

My research focuses on developing generative algorithms for discovery beyond the data, bridging flow and diffusion modeling, decision-making under uncertainty, and optimization to enable new-to-nature discovery. More broadly, I aim to contribute to the foundations of a science of generative discovery: principled methods that allow generative models to explore complex spaces beyond their pre-trained priors.

This research program builds on my earlier work on the foundations of exploration in RL, including a paper with Mirco Mutti and Marcello Restelli that received an Outstanding Paper Award at ICML 2022, and research visits to Michael Bronstein’s groups at the University of Oxford and Imperial College London, working on geometric and causal inductive biases for exploration.

I serve as a mentor for LeadTheFuture, aiming to inspire young minds. In my spare time, I try to understand life as intelligent computation through biochemistry, and enjoy surfing and being in nature.

Feel free to reach out if you wish to collaborate, exchange ideas, or seek thesis supervision.

Contacts:   rdesanti@ethz.ch   |   Google Scholar   |   Twitter   |   LinkedIn   |   Github

news

Apr 30, 2026 Three works on Flows for Discovery accepted at ICML 2026: (1) A Unified Density Operator View of Flow Control and Merging, (2) Constrained Flow Optimization via Sequential Fine-Tuning for Molecular Design, and (3) Efficient Tail-Aware Generative Optimization via Flow Model Fine-Tuning.
Apr 15, 2026 A Unified Density Operator View of Flow Control and Merging accepted with Oral presentation at the Workshop on Real-World Constrained and Preference-Aligned Flow and Diffusion-Based Models at ICLR 2026.
Jan 26, 2026 Three works on Flows for Discovery accepted at ICLR 2026: (1) Verifier-Constrained Flow Expansion for Discovery Beyond the Data, (2) Riemannian Optimization via Pre-trained Diffusion Models, and (3) Value Matching: Scalable and Gradient-Free Reward-Guided Flow Adaptation.
Oct 15, 2025 Co-designed with Andreas Krause the new Diffusion Generative Models module for ETH Zurich’s Probabilistic AI course (~500 students).
Oct 6, 2025 Constrained Flow Optimization via Sequential Fine-Tuning for Molecular Design accepted as an Oral at the Frontiers in Probabilistic Inference Workshop at NeurIPS 2025.
Sep 18, 2025 Flow Density Control: Generative Optimization Beyond Entropy-Regularized Fine-Tuning accepted as a Spotlight at NeurIPS 2025.
Jul 8, 2025 Flow Density Control: Generative Optimization Beyond Entropy-Regularized Fine-Tuning accepted as an Oral at the Workshop on Generative AI and Biology at ICML 2025.

selected publications

  1. ICML · Oral Presentation
    A Unified Density Operator View of Flow Control and Merging
    Riccardo De Santi, Malte Franke, Ya-Ping Hsieh, and 1 more author
    International Conference on Machine Learning (ICML), 2026
    Oral at Real-World Constrained and Preference-Aligned Flow and Diffusion-Based Models Workshop at ICLR 2026
  2. ICML · Oral Presentation
    Constrained Molecular Generation via Sequential Flow Model Fine-Tuning
    Sven Gutjahr*, Riccardo De Santi*, Luca Schaufelberger*, and 2 more authors
    International Conference on Machine Learning (ICML), 2026
    Oral at Frontiers in Probabilistic Inference Workshop at NeurIPS 2025
  3. ICLR
    Verifier-Constrained Flow Expansion for Discovery Beyond the Data
    Riccardo De Santi*, Kimon Protopapas*, Ya-Ping Hsieh, and 1 more author
    International Conference on Learning Representations (ICLR), 2026
  4. NeurIPS · Spotlight and Oral Presentation
    Flow Density Control: Generative Optimization Beyond Entropy-Regularized Fine-Tuning
    Riccardo De Santi, Marin Vlastelica, Ya-Ping Hsieh, and 3 more authors
    Advances in Neural Information Processing Systems (NeurIPS), 2025
    Spotlight at NeurIPS 2025 and Oral at Workshop on Generative AI and Biology at ICML 2025
  5. ICML
    Provable Maximum Entropy Manifold Exploration via Diffusion Models
    Riccardo De Santi*, Marin Vlastelica*, Ya-Ping Hsieh, and 3 more authors
    International Conference on Machine Learning (ICML), 2025
  6. ICML · Outstanding Paper
    The Importance of Non-Markovianity in Maximum State Entropy Exploration
    Mirco Mutti*, Riccardo De Santi*, and Marcello Restelli
    International Conference on Machine Learning (ICML), 2022
    Outstanding Paper Award at ICML 2022