Shahab Asoodeh


Assistant Professor
Department of Computing and Software
McMaster University

I am an Assistant Professor of computer science in the Department of Computing and Software at McMaster University and a Faculty Affiliate at the Vector Institute. Before joining McMaster in August 2021, I was a postdoctoral fellow in the School of Engineering and Applied Sciences at Harvard University from 2019 to 2021 and in the Knowledge Lab at the University of Chicago from 2017 to 2019. I received my Ph.D. in Applied Mathematics from Queen's University in 2017. My main research interests are information theory, inference, and statistics, with applications to privacy, fairness, machine learning, and communications engineering.

Recent Announcements

  • July 2024: I'll attend ISIT 2024 in Athens, Greece.

  • June 2024: Our paper on the sample complexity of locally private hypothesis selection was selected for an oral presentation at TPDP 2024. Congrats to Alireza!

  • May 2024: Three papers at TPDP 2024. Check out the program here.

  • February 2024: One paper at COLT 2024 with Alireza F. Pour and Hassan Ashtiani. If you're interested in locally private hypothesis selection, check it out here.

  • January 2024: Four papers at ISIT 2024. Here is one of them with a brilliant graduate student in my group, Hrad Ghoukasian.

  • January 2024: We will organize the 2024 North American School of Information Theory from July 28th to August 2nd at the University of Ottawa. See the program and a great list of speakers here.

  • December 2023: Our paper on the contraction properties of LDP mechanisms was published in IEEE Journal on Selected Areas in Information Theory (JSAIT). [ieee] [arXiv]

  • August 2023: Four papers accepted to TPDP 2023.

  • July 2023: I'll give a talk in a contributed session at the XVI Latin American Congress of Probability and Mathematical Statistics (CLAPEM), São Paulo, Brazil.

  • June 2023: I'll attend ISIT 2023 in Taipei.

  • May 2023: New paper posted to arXiv on the privacy analysis of the hidden-state DP-SGD algorithm.

  • May 2023: New paper posted to arXiv on the cardinality bound of information bottleneck representations.

  • April 2023: Our paper on the saddle-point accountant for differential privacy was accepted at ICML 2023. Here is my talk at Google on this work.

  • April 2023: Four papers were accepted to ISIT 2023.

  • October 2022: One paper accepted to NeurIPS 2022 (selected for Oral Presentation). In this work, we proposed an efficient algorithm for correcting bias in probabilistic classifiers and evaluated it at scale on a new open dataset with multiple classes, multiple intersectional protected groups, and over 1M samples. Check it out here.

  • October 2022: A new paper on local differential privacy posted to arXiv. (see here)

  • October 2022: Invited talk at Google on the saddle-point accountant for differential privacy. (slides), (talk)

  • September 2022: Together with Lele Wang (UBC), I am organizing a virtual reading group on “Foundations of Differential Privacy” open to all graduate students. We meet every Wednesday from 5:30 to 7pm ET. Join us if you're interested! You can find more details (such as the Zoom meeting ID and the list of papers) here.

  • August 2022: Two papers on differential privacy were posted to arXiv. (see here and here)

  • July 2022: My recent work on fairness in multi-class prediction was posted to arXiv. (see here)

  • June 2022: Flavio Calmon, Mario Diaz, Haewon Jeong, and I gave a tutorial at the IEEE International Symposium on Information Theory in June 2022 (see here for more details). Our tutorial is on Information-Theoretic Tools for Responsible Machine Learning, and its slides can be accessed here.

  • June 2022: Talk at the 17th Canadian Workshop on Information Theory (CWIT) on recent work, “Distribution Simulation Under Local Differential Privacy”. (see here for the short version)

  • April 2022: I was awarded a Natural Sciences and Engineering Research Council of Canada (NSERC) Discovery Grant and Launch Supplement.

  • August 2021: I recently started working with the Statistics & Privacy Team at Meta as an Academic Collaborator. The main focus is on the design and analysis of optimal differentially private machine learning algorithms.

I am looking for Ph.D. students interested in trustworthy machine learning (e.g., privacy, algorithmic fairness, interpretability). Ideal candidates have a strong background in mathematics and statistics, and a passion for information theory and theoretical machine learning.

Interested candidates should contact me with a CV and all transcripts. Like many other faculty members, I receive a large volume of email and, as such, cannot respond to all inquiries.

Contact

Office: Information Technology Building, Room 212,
1280 Main St W,
Hamilton, ON L8S 1C7
Email: asoodehs[@]mcmaster.ca