Claudio Mayrink Verdun

Harvard University - John A. Paulson School of Engineering and Applied Sciences.


150 Western Ave, Allston, MA 02134

Hi! I am Claudio. Thanks for visiting my website and for your time.

I am a mathematician working on the mathematics of AI and machine learning at Harvard’s School of Engineering and Applied Sciences under the mentorship of Flavio Calmon. My research focuses on trustworthy machine learning, exploring concepts such as fairness and arbitrariness, and on mechanistic interpretability techniques for large generative models. In particular, I try to understand how parsimonious models, and sparsity in particular, can deepen our understanding of machine learning. Over the years, I’ve harnessed techniques from optimization, statistics, and signal processing to advance both the theory and practice of artificial intelligence. My work includes developing provably fast and scalable methods for machine learning models and creating uncertainty quantification techniques for high-dimensional problems involving large datasets. I’m also passionate about applying these techniques to practical domains such as medical imaging (particularly MRI) and AI for education.

I had the privilege of completing my Ph.D. in mathematics under the guidance of Felix Krahmer in the Optimization and Data Analysis group at the Technical University of Munich, while concurrently affiliated with the Information Theory group led by Holger Boche.

news

Oct 10, 2024 Our paper on measuring bias in text-to-image models with a proportional representation metric has been accepted at the NeurIPS 2024 Workshop on Algorithmic Fairness through the lens of Metrics and Evaluation.
Sep 26, 2024 Our paper on uncertainty quantification for high-dimensional learning has been selected as a spotlight at NeurIPS 2024.
Sep 26, 2024 Our paper on multi-group proportional representation has been accepted at NeurIPS 2024.
Sep 26, 2024 Our paper on mechanistic interpretability was one of 5 papers accepted as oral contributions (top 3.6%) at the ICML Mechanistic Interpretability Workshop 2024 and has also been accepted at NeurIPS 2024!
Jul 21, 2024 Our paper on the implicit bias of gradient descent for overparametrized models is out on arXiv.

selected publications

  1. Non-Asymptotic Uncertainty Quantification in High-Dimensional Learning
    Frederik Hoppe, Claudio Mayrink Verdun, Hannah Laus, Felix Krahmer, and 1 more author
    NeurIPS (Spotlight, top 3%), 2024
  2. Measuring progress in dictionary learning for language model interpretability with board game models
    Adam Karvonen, Benjamin Wright, Can Rager, Rico Angell, and 5 more authors
    NeurIPS, 2024
  3. Multi-Group Proportional Representation
    Alex Oesterling, Claudio Mayrink Verdun, Carol Xuan Long, Alex Glynn, and 4 more authors
    NeurIPS, 2024
  4. High-Dimensional Confidence Regions in Sparse MRI
    Frederik Hoppe, Felix Krahmer, Claudio Mayrink Verdun, Marion I. Menzel, and 1 more author
    In ICASSP (Best Student Paper Award), 2023
  5. A scalable second order method for ill-conditioned matrix completion from few samples
    Christian Kümmerle, and Claudio Mayrink Verdun
    In ICML, 2021
  6. Group testing for SARS-CoV-2 allows for up to 10-fold efficiency increase across realistic scenarios and testing strategies
    Claudio Mayrink Verdun, Tim Fuchs, Pavol Harar, Dennis Elbrächter, and 5 more authors
    Frontiers in Public Health (highlighted by David Donoho at https://www.youtube.com/watch?v=VOzl-RC4IIs), 2021
  7. Iteratively reweighted least squares for basis pursuit with global linear convergence rate
    Christian Kümmerle, Claudio Mayrink Verdun, and Dominik Stöger
    NeurIPS (Spotlight, top 3%), 2021