Research
Broadly speaking, my research interests revolve around Machine Learning, Artificial Intelligence,
Statistics, and Theoretical Computer Science. Some of the research directions I am currently
focusing on are:
- Differentially Private Machine Learning
- Statistically/Computationally Efficient Distribution Learning
- Learning in the Presence of Adversarial Perturbations or Data Poisoning
- Unsupervised Domain Alignment and Learning under Distribution Shift
- Modern Generalization Bounds for Supervised Learning
Highlighted Publications [Full List]
- Agnostic Private Density Estimation for GMMs via List Global Stability [paper]
  Mohammad Afzali, Hassan Ashtiani, Christopher Liaw
  Preprint

- Sample-Efficient Private Learning of Mixtures of Gaussians [paper]
  Hassan Ashtiani, Mahbod Majid, Shyam Narayanan
  NeurIPS 2024 (Spotlight)

- Sample-Optimal Locally Private Hypothesis Selection and the Provable Benefits of Interactivity [paper]
  Alireza F. Pour, Hassan Ashtiani, Shahab Asoodeh
  COLT 2024

- On the Role of Noise in the Sample Complexity of Learning Recurrent Neural Networks: Exponential Gaps for Long Sequences [paper]
  Alireza Fathollah Pour, Hassan Ashtiani
  NeurIPS 2023

- Polynomial Time and Private Learning of Unbounded Gaussian Mixture Models [paper]
  Jamil Arbas, Hassan Ashtiani, Christopher Liaw
  ICML 2023

- Adversarially Robust Learning with Tolerance [paper]
  Hassan Ashtiani, Vinayak Pathak, Ruth Urner
  ALT 2023

- Private and Polynomial Time Algorithms for Learning Gaussians and Beyond [paper]
  Hassan Ashtiani, Christopher Liaw
  COLT 2022

- Near-Optimal Sample Complexity Bounds for Robust Learning of Gaussian Mixtures via Compression Schemes [paper], [arXiv]
  Hassan Ashtiani, Shai Ben-David, Nick Harvey, Chris Liaw, Abbas Mehrabian, Yaniv Plan
  Journal of the ACM, 2020

- Nearly Tight Sample Complexity Bounds for Learning Mixtures of Gaussians via Sample Compression Schemes [paper]
  Hassan Ashtiani, Shai Ben-David, Nick Harvey, Chris Liaw, Abbas Mehrabian, Yaniv Plan
  NeurIPS (NIPS) 2018, Oral Presentation (Best Paper Award)