Research
Broadly speaking, my research interests revolve around Machine Learning, Artificial Intelligence,
Statistics, and Theoretical Computer Science.
I am interested in formulating new and emerging learning scenarios
(including various forms of unsupervised learning), and in either providing provably efficient methods for
solving them or establishing their inherent limitations.
Some of the directions that I am currently focusing on are:
- Statistically/Computationally Efficient Distribution Learning
- Differentially Private Distribution Learning
- Domain Adaptation and Robust Learning Under Distribution Shift
- Learning under Adversarial Attacks
Highlighted Publications [Full List]
- Adversarially Robust Learning with Tolerance
[paper]
Hassan Ashtiani, Vinayak Pathak, Ruth Urner
Preprint
- Private and polynomial time algorithms for learning Gaussians and beyond
[paper]
Hassan Ashtiani, Christopher Liaw
Preprint
- Privately Learning Mixtures of Axis-Aligned Gaussians
[paper]
Ishaq Aden-Ali, Hassan Ashtiani, Christopher Liaw
NeurIPS 2021
- On the Sample Complexity of Privately Learning Unbounded High-Dimensional Gaussians
[paper]
Ishaq Aden-Ali, Hassan Ashtiani, Gautam Kamath
ALT 2021
- Near-optimal Sample Complexity Bounds for Robust Learning of Gaussian Mixtures via Compression Schemes
[paper], [arXiv]
Hassan Ashtiani, Shai Ben-David, Nick Harvey, Chris Liaw, Abbas Mehrabian, Yaniv Plan
Journal of the ACM, 2020
- Black-box Certification and Learning under Adversarial Perturbations
[paper]
Hassan Ashtiani, Vinayak Pathak, Ruth Urner
ICML 2020
- Nearly tight sample complexity bounds for learning mixtures of Gaussians via sample compression schemes
[paper]
Hassan Ashtiani, Shai Ben-David, Nick Harvey, Chris Liaw, Abbas Mehrabian, Yaniv Plan
NeurIPS (NIPS) 2018, Oral Presentation (Best Paper Award)
- Clustering with Same-Cluster Queries
[paper, slides, video]
Hassan Ashtiani, Shrinu Kushagra, Shai Ben-David
NIPS 2016, Oral Presentation