About

Hello! I am a third-year PhD student in the CSE department at UC San Diego, in the machine learning and theory groups. I am very fortunate to be advised by Professor Mikhail Belkin. Prior to my PhD, I completed an MS in computer science at Columbia, where I conducted research under the excellent mentorship of Professor Alexandr Andoni.

I am currently a Student Researcher at Google Brain, where I work on feature learning. Previously, I was an ML Research intern at Goldman Sachs.

I am supported by the ARCS Foundation Fellowship.

Broadly, I am excited about developing theory-driven machine learning and algorithmic methods. I am especially interested in:
1. Feature learning
2. Deep learning theory
3. Data-dependent kernels

Feel free to email me: dbeaglehole {at} ucsd {dot} edu

(* denotes equal contribution).

Pre-prints

  1. Average gradient outer product as a mechanism for deep neural collapse
    Daniel Beaglehole*, Peter Súkeník*, Marco Mondelli, Mikhail Belkin
  2. Gradient descent induces alignment between weights and the empirical NTK for deep non-linear networks
    Daniel Beaglehole, Ioannis Mitliagkas, Atish Agarwala
  3. Mechanism of feature learning in convolutional neural networks
    Daniel Beaglehole*, Adityanarayanan Radhakrishnan*, Parthe Pandit, Mikhail Belkin
  4. Mechanism of feature learning in deep fully connected networks and kernel machines that recursively learn features
    Adityanarayanan Radhakrishnan*, Daniel Beaglehole*, Parthe Pandit, Mikhail Belkin

Publications

  1. Mechanism for feature learning in neural networks and backpropagation-free machine learning models
    Adityanarayanan Radhakrishnan*, Daniel Beaglehole*, Parthe Pandit, Mikhail Belkin
    Science
  2. On the Inconsistency of Kernel Ridgeless Regression in Fixed Dimensions
    Daniel Beaglehole, Mikhail Belkin, Parthe Pandit
    SIAM Journal on Mathematics of Data Science (SIMODS),
    Conference on the Mathematical Theory of Deep Neural Networks (DeepMath 2022)
  3. Sampling Equilibria: Fast No-Regret Learning in Structured Games
    Daniel Beaglehole*, Max Hopkins*, Daniel Kane*, Sihan Liu*, Shachar Lovett*
    Symposium on Discrete Algorithms (SODA 2023)
  4. Learning to Hash Robustly, Guaranteed
    Alexandr Andoni*, Daniel Beaglehole*
    International Conference on Machine Learning (ICML 2022)
    (presented by Prof. Andoni at the NeurIPS’21 ANN competition)

Presentations

  1. Google Brain (“Feature learning in neural networks and kernel machines that recursively learn features”, 03/2023)
  2. Yale University, Inference, Information, and Decision Systems Group (“Feature learning in neural networks and kernel machines that recursively learn features”, 03/2023)
  3. UCSD Theory Seminar (“Learning to Hash Robustly, Guaranteed”, 10/2021)
  4. Goldman Sachs, summer internship final presentation (“Predictive Clustering Time Series for Finance”, 08/2021)
  5. Goldman Sachs, Data Science and Machine Learning paper club (“Learning to Hash Robustly, Guaranteed”, 07/2021)