About

Hello! I am a PhD student in the CSE department at UC San Diego, in the machine learning and theory groups. I am very fortunate to be advised by Professor Mikhail Belkin. Prior to my PhD, I completed an MS in computer science at Columbia, where I conducted research under the excellent mentorship of Professor Alexandr Andoni.

I was previously a Student Researcher at Google DeepMind, working on feature learning, and an ML research intern at Goldman Sachs.

I am supported by the ARCS Foundation Fellowship.

Broadly, I am excited about developing theory-driven machine learning methods and algorithms. I am especially interested in:
1. Feature learning
2. Deep learning theory
3. Data-dependent kernel machines

Feel free to email me: dbeaglehole {at} ucsd {dot} edu

(* denotes equal contribution).

Pre-prints

  1. Emergence in non-neural models: grokking modular arithmetic via average gradient outer product
    Neil Mallinar, Daniel Beaglehole, Libin Zhu, Adityanarayanan Radhakrishnan, Parthe Pandit, Mikhail Belkin
  2. Mechanism of feature learning in convolutional neural networks
    Daniel Beaglehole*, Adityanarayanan Radhakrishnan*, Parthe Pandit, Mikhail Belkin
  3. Mechanism of feature learning in deep fully connected networks and kernel machines that recursively learn features
    Adityanarayanan Radhakrishnan*, Daniel Beaglehole*, Parthe Pandit, Mikhail Belkin
  4. Fast, optimal, and dynamic electoral campaign budgeting by a generalized Colonel Blotto game
    Thomas Valles, Daniel Beaglehole

Publications

  1. Mechanism for feature learning in neural networks and backpropagation-free machine learning models
    Adityanarayanan Radhakrishnan*, Daniel Beaglehole*, Parthe Pandit, Mikhail Belkin
    Science
  2. Average gradient outer product as a mechanism for deep neural collapse
    Daniel Beaglehole*, Peter Súkeník*, Marco Mondelli, Mikhail Belkin
    Conference on Neural Information Processing Systems (NeurIPS 2024)
  3. Feature learning as alignment: a structural property of gradient descent in non-linear neural networks
    Daniel Beaglehole, Ioannis Mitliagkas, Atish Agarwala
    Workshop on High-dimensional Learning Dynamics (HiLD) @ ICML 2024
  4. On the Inconsistency of Kernel Ridgeless Regression in Fixed Dimensions
    Daniel Beaglehole, Mikhail Belkin, Parthe Pandit
    SIAM Journal on Mathematics of Data Science (SIMODS),
    Conference on the Mathematical Theory of Deep Neural Networks (DeepMath 2022)
  5. Sampling Equilibria: Fast No-Regret Learning in Structured Games
    Daniel Beaglehole*, Max Hopkins*, Daniel Kane*, Sihan Liu*, Shachar Lovett*
    Symposium on Discrete Algorithms (SODA 2023)
  6. Learning to Hash Robustly, Guaranteed
    Alexandr Andoni*, Daniel Beaglehole*
    International Conference on Machine Learning (ICML 2022)
    (presented by Prof. Andoni at the NeurIPS '21 ANN competition)

Presentations

  1. Google Brain (“Feature learning in neural networks and kernel machines that recursively learn features”, 03/2023)
  2. Yale University, Inference, Information, and Decision Systems Group (“Feature learning in neural networks and kernel machines that recursively learn features”, 03/2023)
  3. UCSD Theory Seminar (“Learning to Hash Robustly, Guaranteed”, 10/2021)
  4. Goldman Sachs, Summer internship final presentation (“Predictive Clustering Time Series for Finance”, 08/2021)
  5. Goldman Sachs, Data Science and Machine Learning paper club (“Learning to Hash Robustly, Guaranteed”, 07/2021)