About
Hello! I am a PhD student in the CSE department at UC San Diego, working in the machine learning and theory groups. I am very fortunate to be advised by Professor Mikhail Belkin. Before my PhD, I completed an MS in computer science at Columbia, where I conducted research under the excellent mentorship of Professor Alexandr Andoni.
I was previously a Student Researcher at Google DeepMind working on feature learning. I was also an ML Research intern at Goldman Sachs.
I am supported by the ARCS Foundation Fellowship.
Broadly, I am excited about developing theory-driven machine learning and algorithmic methods. I am especially interested in:
1. Feature learning
2. Deep learning theory
3. Data-dependent kernel machines
Feel free to email me: dbeaglehole {at} ucsd {dot} edu
(* denotes equal contribution).
Pre-prints
- Emergence in non-neural models: grokking modular arithmetic via average gradient outer product
Neil Mallinar, Daniel Beaglehole, Libin Zhu, Adityanarayanan Radhakrishnan, Parthe Pandit, Mikhail Belkin
- Mechanism of feature learning in convolutional neural networks
Daniel Beaglehole*, Adityanarayanan Radhakrishnan*, Parthe Pandit, Mikhail Belkin
- Mechanism of feature learning in deep fully connected networks and kernel machines that recursively learn features
Adityanarayanan Radhakrishnan*, Daniel Beaglehole*, Parthe Pandit, Mikhail Belkin
- Fast, optimal, and dynamic electoral campaign budgeting by a generalized Colonel Blotto game
Thomas Valles, Daniel Beaglehole
Publications
- Mechanism for feature learning in neural networks and backpropagation-free machine learning models
Adityanarayanan Radhakrishnan*, Daniel Beaglehole*, Parthe Pandit, Mikhail Belkin
Science
- Average gradient outer product as a mechanism for deep neural collapse
Daniel Beaglehole*, Peter Súkeník*, Marco Mondelli, Mikhail Belkin
Conference on Neural Information Processing Systems (NeurIPS 2024)
- Feature learning as alignment: a structural property of gradient descent in non-linear neural networks
Daniel Beaglehole, Ioannis Mitliagkas, Atish Agarwala
Workshop on High-dimensional Learning Dynamics (HiLD) @ ICML 2024
- On the Inconsistency of Kernel Ridgeless Regression in Fixed Dimensions
Daniel Beaglehole, Mikhail Belkin, Parthe Pandit
SIAM Journal on Mathematics of Data Science (SIMODS),
Conference on the Mathematical Theory of Deep Neural Networks (DeepMath 2022)
- Sampling Equilibria: Fast No-Regret Learning in Structured Games
Daniel Beaglehole*, Max Hopkins*, Daniel Kane*, Sihan Liu*, Shachar Lovett*
Symposium on Discrete Algorithms (SODA 2023)
An earlier version of this work appeared as:
An Efficient Approximation Algorithm for the Colonel Blotto Game
Daniel Beaglehole
- Learning to Hash Robustly, Guaranteed
Alexandr Andoni*, Daniel Beaglehole*
International Conference on Machine Learning (ICML 2022)
(presented by Prof. Andoni at the NeurIPS’21 ANN competition)
Presentations
- Google Brain: (“Feature learning in neural networks and kernel machines that recursively learn features”, 3/2023)
- Yale University: Inference, Information, and Decision Systems Group, (“Feature learning in neural networks and kernel machines that recursively learn features”, 3/2023)
- UCSD: Theory Seminar, (“Learning to Hash Robustly, Guaranteed”, 10/2021)
- Goldman Sachs: Summer internship final presentation, (“Predictive Clustering Time Series for Finance”, 8/2021)
- Goldman Sachs: Data Science and Machine Learning paper club, (“Learning to Hash Robustly, Guaranteed”, 7/2021)