Research
Broadly speaking, my research interests lie in Machine Learning, Artificial Intelligence,
Statistics, and Theoretical Computer Science. Some of the research directions I am currently
focusing on are:
- Differentially Private Machine Learning
- Statistically/Computationally Efficient Distribution Learning
- Learning in the Presence of Adversarial Perturbations or Data Poisoning
- Unsupervised Domain Alignment and Learning under Distribution Shift
- Modern Generalization Bounds for Supervised Learning
Highlighted Publications [Full List]
- Simplifying Adversarially Robust PAC Learning with Tolerance
[paper]
Hassan Ashtiani, Vinayak Pathak, Ruth Urner
COLT 2025
- Agnostic Private Density Estimation for GMMs via List Global Stability
[paper]
Mohammad Afzali, Hassan Ashtiani, Christopher Liaw
ALT 2025
- Sample-Efficient Private Learning of Mixtures of Gaussians
[paper]
Hassan Ashtiani, Mahbod Majid, Shyam Narayanan
NeurIPS 2024 (Spotlight)
- Sample-Optimal Locally Private Hypothesis Selection and the Provable Benefits of Interactivity
[paper]
Alireza F. Pour, Hassan Ashtiani, Shahab Asoodeh
COLT 2024
- On the Role of Noise in the Sample Complexity of Learning Recurrent Neural Networks: Exponential Gaps for Long Sequences
[paper]
Alireza Fathollah Pour, Hassan Ashtiani
NeurIPS 2023
- Nearly Tight Sample Complexity Bounds for Learning Mixtures of Gaussians via Sample Compression Schemes
[paper]
Hassan Ashtiani, Shai Ben-David, Nick Harvey, Chris Liaw, Abbas Mehrabian, Yaniv Plan
NeurIPS (NIPS) 2018, Oral Presentation (Best Paper Award)