Claudio Mayrink Verdun
Harvard University - John A. Paulson School of Engineering and Applied Sciences.
150 Western Ave, Allston, MA 02134
Hi! I am Claudio. Thanks for visiting my website and for your time.
I am a mathematician working on the mathematics of AI and machine learning at Harvard’s School of Engineering and Applied Sciences under the mentorship of Flavio Calmon. My research focuses on trustworthy machine learning, exploring concepts such as fairness and arbitrariness, as well as mechanistic interpretability techniques for large generative models. In particular, I study how parsimonious models, especially sparsity, can deepen our understanding of machine learning. Over the years, I’ve harnessed optimization, statistics, and signal processing techniques to advance both the theory and practice of artificial intelligence. My work includes developing provably fast and scalable methods for machine learning models and creating uncertainty quantification techniques for high-dimensional problems involving large datasets. I’m also passionate about applying these techniques to practical domains, such as medical imaging (particularly MRI) and AI for education.
I had the privilege of completing my Ph.D. in mathematics under the guidance of Felix Krahmer within the Optimization and Data Analysis group, while concurrently affiliated with the Information Theory group led by Holger Boche, at the Technical University of Munich.
news
Oct 10, 2024 | Our paper on measuring bias in text-to-image models with a proportional representation metric has been accepted at the NeurIPS 2024 Workshop on Algorithmic Fairness through the Lens of Metrics and Evaluation. |
Sep 26, 2024 | Our paper on uncertainty quantification for high-dimensional learning has been selected as a spotlight at NeurIPS 2024. |
Sep 26, 2024 | Our paper on multi-group proportional representation has been accepted at NeurIPS 2024. |
Sep 26, 2024 | Our paper on mechanistic interpretability was one of the five papers accepted as oral contributions (top 3.6%) at the ICML 2024 Mechanistic Interpretability Workshop, and it has also been accepted at NeurIPS 2024! |
Jul 21, 2024 | Our paper on the implicit bias of gradient descent on overparametrized models is out on arXiv. |