Hamed Hassani
Princeton University
• Visiting Scholar, Departments of Electrical Engineering and Mathematics
(November 2015 - December 2015)
DISTINCTIONS • IEEE Information Theory Society James L. Massey Research and Teaching
Award for Young Scholars, 2023
• IEEE Communications Society & Information Theory Society Joint Paper Award,
2023
• IEEE Information Theory Society Distinguished Lecturer, 2022-23
• NSF CAREER Award, 2020
• Intel Rising Star Faculty Award, 2020
• AFOSR Young Investigator Award, 2019
• Simons Institute (UC Berkeley) Research Fellowship Award, 2016
• IEEE Jack Keil Wolf ISIT Student Paper Award, 2015 (Senior Author)
• IEEE Information Theory Society Thomas M. Cover Dissertation Award, 2014
• IEEE Jack Keil Wolf ISIT Student Paper Award Finalist, 2010
PUBLICATIONS
Book
(B1) Gábor Braun, Alejandro Carderera, Cyrille W. Combettes, Hamed Hassani,
Amin Karbasi, Aryan Mokhtari, Sebastian Pokutta, Conditional Gradient Meth-
ods, 2022.
Conference Papers
(C1) Donghwan Lee, Behrad Moniri, Xinmeng Huang, Edgar Dobriban, Hamed
Hassani, Demystifying Disagreement-on-the-Line in High Dimensions, Interna-
tional Conference on Machine Learning (ICML), 2023.
(C2) Alex Shevchenko, Kevin Kögler, Hamed Hassani, Marco Mondelli, Fundamental
Limits of Two-layer Autoencoders, and Achieving Them with Gradient Meth-
ods, International Conference on Machine Learning (ICML), 2023.
Oral Presentation
(C3) Alexander Robey, Fabian Latorre, George Pappas, Hamed Hassani, Volkan
Cevher, Adversarial Training Should Be Cast as a Non-Zero-Sum Game, ICML
Workshop on Frontiers in Adversarial Learning (AdvML), 2023.
Best Paper Award
(C4) Aritra Mitra, Hamed Hassani, George Pappas, Linear Stochastic Bandits over
a Bit-Constrained Channel, Learning for Dynamics and Control Conference
(L4DC), 2023.
Oral Presentation
(C5) Payam Delgosha, Hamed Hassani, Ramtin Pedarsani, Generalization Proper-
ties of Adversarial Training for ℓ0 -Bounded Adversarial Attacks, Information
Theory Workshop (ITW), 2023.
(C6) Zebang Shen, Jiayuan Ye, Anmin Kang, Hamed Hassani, Reza Shokri, Share
Your Representation Only: Guaranteed Improvement of the Privacy-Utility
Tradeoff in Federated Learning, International Conference on Learning Repre-
sentations (ICLR), 2023.
(C7) Eric Lei, Hamed Hassani, Shirin Saeedi Bidokhti, Neural Estimation of the
Rate-Distortion Function for Massive Datasets, IEEE International Symposium
on Information Theory (ISIT), 2022.
(C8) Mark Beliaev, Payam Delgosha, Hamed Hassani, Ramtin Pedarsani, Efficient
and Robust Classification Under Sparse Attacks, IEEE International Sympo-
sium on Information Theory (ISIT), 2022.
(C9) Aritra Mitra, Arman Adibi, George J Pappas, Hamed Hassani, Collabora-
tive Linear Bandits with Adversarial Agents: Near-Optimal Regret Bounds,
NeurIPS, 2022.
(C10) Xinmeng Huang, Donghwan Lee, Edgar Dobriban, Hamed Hassani, Collabora-
tive Learning of Distributions under Heterogeneity and Communication Con-
straints, NeurIPS, 2022.
(C11) Cian Eastwood, Alexander Robey, Shashank Singh, Julius von Kügelgen, Hamed
Hassani, George J Pappas, Bernhard Schölkopf, Probable Domain Generaliza-
tion via Quantile Risk Minimization, NeurIPS, 2022.
(C12) Liam Collins, Hamed Hassani, Aryan Mokhtari, Sanjay Shakkottai, FedAvg
with Fine Tuning: Local Updates Lead to Representation Learning, NeurIPS,
2022.
(C13) Alexander Robey, Luiz Chamon, George Pappas, Hamed Hassani, Probabilistically
Robust Learning: Balancing Average- and Worst-case Performance, In-
ternational Conference on Machine Learning (ICML), 2022.
(C14) Zebang Shen, Zhenfu Wang, Satyen Kale, Alejandro Ribeiro, Amin Karbasi,
Hamed Hassani, Self-Consistency of the Fokker-Planck Equation, Conference
on Learning Theory (COLT), 2022.
(C15) Arman Adibi, Aritra Mitra, George J Pappas, Hamed Hassani, Distributed
Statistical Min-Max Learning in the Presence of Byzantine Agents, IEEE Con-
ference on Decision and Control (CDC), 2022.
(C16) Anton Xue, Lars Lindemann, Alexander Robey, Hamed Hassani, George J Pap-
pas, Rajeev Alur, Chordal Sparsity for Lipschitz Constant Estimation of Deep
Neural Networks, IEEE Conference on Decision and Control (CDC), 2022.
(C17) Bruce D Lee, Thomas TCK Zhang, Hamed Hassani, Nikolai Matni, Performance-
Robustness Tradeoffs in Adversarially Robust Linear-Quadratic Control, IEEE
Conference on Decision and Control (CDC), 2022.
(C18) Vahid Jamali, Mohammad Fereydounian, Hessam Mahdavifar, Hamed Hassani,
Low-Complexity Decoding of a Class of Reed-Muller Subcodes for Low-Capacity
Channels, IEEE International Conference on Communications (ICC), 2022.
(C19) A. Adibi, A. Mokhtari, H. Hassani, Minimax Optimization: The Case of Convex-
Submodular, Conference on Artificial Intelligence and Statistics (AISTATS),
2022. Oral Presentation
(C20) Z. Shen, H. Hassani, S. Kale, A. Karbasi, “Federated Functional Gradient
Boosting”, Conference on Artificial Intelligence and Statistics (AISTATS), 2022.
(C21) A. Zhou, F. Tajwar, A. Robey, T. Knowles, G. Pappas, H. Hassani, C. Finn, “Do
Deep Networks Transfer Invariances Across Classes?”, International Conference
on Learning Representations (ICLR), 2022.
(C22) Alexander Robey, George Pappas, Hamed Hassani, “Model-Based Domain Gen-
eralization”, Neural Information Processing Systems (NeurIPS), 2021.
(C23) A. Robey, L. Chamon, G. Pappas, H. Hassani, A. Ribeiro, “Adversarial robust-
ness with semi-infinite constrained learning”, Neural Information Processing
Systems (NeurIPS), 2021.
(C24) A. Mitra, R. Jaafar, G. Pappas, H. Hassani, “Achieving Linear Convergence
in Federated Learning under Objective and Systems Heterogeneity”, Neural
Information Processing Systems (NeurIPS), 2021.
(C25) E. Lei, H. Hassani, S. Saeedi Bidokhti, “Out-of-Distribution Robustness in
Deep Learning Compression”, International Conference on Machine Learning
(ICML) – ITML Workshop, 2021.
(C26) L. Collins, H. Hassani, A. Mokhtari, S. Shakkottai, Exploiting Shared Repre-
sentations for Personalized Federated Learning, International Conference on
Machine Learning (ICML), 2021.
(C27) Xingran Chen, Konstantinos Gatsis, Hamed Hassani, Shirin Saeedi Bidokhti,
Age of information in random access channels, IEEE International Symposium
on Information Theory (ISIT), 2020.
(C28) Z. Shen, Z. Wang, A. Ribeiro, H. Hassani, Sinkhorn Natural Gradient for Gen-
erative Models, Neural Information Processing Systems (NeurIPS), 2020.
Spotlight Presentation
(C29) A. Adibi, A. Mokhtari, H. Hassani, Submodular Meta-Learning, Neural Infor-
mation Processing Systems (NeurIPS), 2020.
(C30) Z. Shen, Z. Wang, A. Ribeiro, H. Hassani, Sinkhorn Barycenter via Functional
Gradient Descent, Neural Information Processing Systems (NeurIPS), 2020.
(C31) A. Javanmard, M. Soltanolkotabi, H. Hassani, “Precise Tradeoffs in Adversarial
Training for Linear Regression”, Conference on Learning Theory (COLT), 2020.
(C32) H. Taheri, A. Mokhtari, H. Hassani, R. Pedarsani, “Quantized Decentralized
Stochastic Learning over Directed Graphs”, International Conference on Ma-
chine Learning (ICML), 2020.
(C33) M. Zhang, L. Chen, A. Mokhtari, H. Hassani, A. Karbasi, “Quantized Frank-
Wolfe: Communication-Efficient Distributed Optimization”, Conference on Ar-
tificial Intelligence and Statistics (AISTATS), 2020.
(C34) A. Reisizadeh, A. Mokhtari, H. Hassani, A. Jadbabaie, R. Pedarsani, “Fed-
PAQ: A Communication-Efficient Federated Learning Method with Periodic
Averaging and Quantization”, Conference on Artificial Intelligence and Statis-
tics (AISTATS), 2020.
(C35) M. Zhang, Z. Shen, A. Mokhtari, H. Hassani, A. Karbasi, “One Sample Stochas-
tic Frank-Wolfe”, Conference on Artificial Intelligence and Statistics (AIS-
TATS), 2020.
(C36) M. Zhang, L. Chen, H. Hassani, A. Karbasi, “Black Box Submodular Maximiza-
tion: Discrete and Continuous Settings”, Conference on Artificial Intelligence
and Statistics (AISTATS), 2020.
(C37) Mohammad Fereydounian, Xingran Chen, Hamed Hassani, Shirin Saeedi Bidokhti,
Non-asymptotic Coded Slotted ALOHA, IEEE International Symposium on
Information Theory (ISIT), 2019.
(C38) H. Hassani, A. Karbasi, A. Mokhtari, Z. Shen, Stochastic Conditional Gradi-
ent++, Neural Information Processing Systems (NeurIPS), 2019.
(C39) M. Fazlyab, A. Robey, H. Hassani, M. Morari, G. Pappas, “Efficient and Ac-
curate Estimation of Lipschitz Constants for Deep Neural Networks”, Neural
Information Processing Systems (NeurIPS), 2019. Spotlight Presentation
(C40) A. Reisizadeh, H. Taheri, A. Mokhtari, H. Hassani, R. Pedarsani, “Robust and
Communication-Efficient Collaborative Learning”, Neural Information Process-
ing Systems (NeurIPS), 2019.
(C41) M. Zhang, L. Chen, H. Hassani, A. Karbasi, Online Continuous Submodular
Maximization: From Full-Information to Bandit Feedback, Neural Information
Processing Systems (NeurIPS), 2019.
(C42) H. Jeong, B. Schlotfeldt, H. Hassani, M. Morari, D. Lee, G. Pappas, “Learn-
ing Q-network for Active Information Acquisition”, IEEE/RSJ International
Conference on Intelligent Robots and Systems (IROS), 2019.
(C43) Z. Shen, H. Hassani, C. Mao, H. Qian, A. Ribeiro, “Hessian Aided Policy
Gradient”, International Conference on Machine Learning (ICML), 2019.
(C44) Y. Balaji, H. Hassani, R. Chellappa, S. Feizi, “Entropic GANs meet VAEs: A
Statistical Approach to Compute Sample Likelihoods in GANs”, International
Conference on Machine Learning (ICML), 2019.
(C45) M. Fereydounian, V. Jamali, H. Hassani, H. Mahdavifar, “Channel Coding at
Low Capacity”, IEEE Information Theory Workshop (ITW), 2019.
(C46) A. Gotovos, H. Hassani, S. Jegelka, A. Krause, “Discrete Sampling Using Semi-
gradient based Product Mixtures”, Uncertainty in Artificial Intelligence (UAI),
2018. Oral Presentation
(C47) A. Mokhtari, H. Hassani, A. Karbasi, “Decentralized Submodular Maximiza-
tion: Bridging Discrete and Continuous Settings”, International Conference on
Machine Learning (ICML), 2018.
(C48) L. Chen, C. Harshaw, H. Hassani, A. Karbasi, “Projection-Free Online Opti-
mization with Stochastic Gradient: From Convexity to Submodularity”, Inter-
national Conference on Machine Learning (ICML), 2018.
(C49) A. Fazeli, H. Hassani, M. Mondelli, A. Vardy, “Binary Linear Codes with Op-
timal Scaling and Quasi-Linear Complexity”, IEEE Information Theory Work-
shop (ITW) 2018. Invited Paper
(C50) H. Hassani, S. Kudekar, O. Ordentlich, Y. Polyanskiy, R. Urbanke, “Almost
Optimal Scaling of Reed-Muller Codes on BEC and BSC Channels”, IEEE
International Symposium on Information Theory (ISIT), 2018.
(C51) M. Mondelli, S. H. Hassani, and R. Urbanke, “A New Coding Paradigm for
the Primitive Relay Channel”, IEEE International Symposium on Information
Theory (ISIT), 2018.
(C52) A. Reisizadeh, A. Mokhtari, H. Hassani, R. Pedarsani, “Quantized Decen-
tralized Consensus Optimization”, IEEE Conference on Decision and Control
(CDC) 2018.
(C53) L. Chen, H. Hassani, A. Karbasi, “Online Continuous Submodular Maximiza-
tion”, Conference on Artificial Intelligence and Statistics (AISTATS), 2018.
Oral Presentation
(C54) A. Mokhtari, H. Hassani, A. Karbasi, “Conditional Gradient Method for Stochas-
tic Submodular Maximization: Closing the Gap”, Conference on Artificial In-
telligence and Statistics (AISTATS), 2018.
(C55) A. Singla, H. Hassani, A. Krause, “Learning to Interact with Learning Agents”,
Association for the Advancement of Artificial Intelligence (AAAI) Conference,
2018.
(C56) H. Hassani, M. Soltanolkotabi, A. Karbasi, “Gradient Methods for Submodular
Maximization”, Neural Information Processing Systems (NIPS) 2017.
(C57) M. R. Karimi, M. Lucic, H. Hassani, A. Krause, “Stochastic Submodular Max-
imization: The Case for Coverage Functions”, Neural Information Processing
Systems (NIPS) 2017.
(C58) S. A. Hashemi, M. Mondelli, H. Hassani, R. Urbanke, W. J. Gross, “Partitioned
List Decoding of Polar Codes: Analysis and Improvement of Finite Length
Performance”, IEEE Global Communications Conference (GLOBECOM), 2017.
(C59) D. Achlioptas, H. Hassani, W. Liu, R. Urbanke, “Time-invariant LDPC convo-
lutional codes”, IEEE International Symposium on Information Theory (ISIT),
2017.
(C60) M. Mondelli, H. Hassani, R. Urbanke, “Construction of Polar Codes with Sub-
linear Complexity”, IEEE International Symposium on Information Theory
(ISIT), 2017.
(C61) O. Bachem, M. Lucic, H. Hassani, A. Krause, “Uniform Deviation Bounds for
Unbounded Loss Functions like k-Means”, International Conference on Machine
Learning (ICML), 2017.
(C62) L. Chen, H. Hassani, A. Karbasi, “Near-Optimal Active Learning of Halfspaces
via Query Synthesis in the Noisy Setting”, Conference on Artificial Intelligence
(AAAI), 2017.
(C63) Y. Chen, H. Hassani, A. Krause, “Near-optimal Bayesian Active Learning with
Correlated and Noisy Tests”, Artificial Intelligence and Statistics Conference
(AISTATS), 2017. Oral Presentation
(C64) O. Bachem, M. Lucic, H. Hassani, A. Krause, “Fast and Provably Good Seedings
for K-Means”, Neural Information Processing Systems (NIPS), 2016.
Oral Presentation.
(C65) O. Bachem, M. Lucic, H. Hassani, A. Krause, “Approximate K-Means++ in
Sublinear Time”, Conference on Artificial Intelligence (AAAI), 2016.
(C66) D. Achlioptas, H. Hassani, N. Macris, R. Urbanke, “Bounds for Random Con-
straint Satisfaction Problems via Spatial Coupling”, ACM-SIAM Symposium
on Discrete Algorithms (SODA), 2016.
(C67) A. Gotovos, H. Hassani, A. Krause, “Sampling From Probabilistic Submodular
Models”, Neural Information Processing Systems (NIPS), 2015.
Oral Presentation.
(C68) Y. Chen, H. Hassani, A. Karbasi, A. Krause, “Sequential Information Maxi-
mization: When is Greedy Near-optimal?”, Conference on Learning Theory
(COLT), 2015.
(C69) M. Mondelli, H. Hassani, R. Urbanke, “Unified Scaling of Polar Codes: Error
Exponent, Scaling Exponent, Moderate Deviations, and Error Floors”, IEEE
International Symposium on Information Theory (ISIT), 2015. IEEE Jack
Keil Wolf ISIT Student Paper Award.
(C70) J. M. Renes, D. Sutter, S. Hamed Hassani, “Alignment of Polarized Sets”, IEEE
International Symposium on Information Theory (ISIT), 2015.
(C71) M. Mondelli, H. Hassani, R. Urbanke, “How to Achieve the Capacity of Asym-
metric Channels”, Allerton Conference on Communication, Control, and
Computing, 2014.
(C72) M. Mondelli, H. Hassani, Igal Sason, R. Urbanke, “Achieving Marton’s Region
for Broadcast Channels Using Polar Codes”, IEEE International Symposium
on Information Theory (ISIT), 2014.
(C73) M. Mondelli, H. Hassani, R. Urbanke, “From Polar to Reed-Muller Codes:
a Technique to Improve the Finite-Length Performance”, IEEE International
Symposium on Information Theory (ISIT), 2014.
(C74) H. Hassani, R. Urbanke, “Universal Polar Codes”, IEEE International Sympo-
sium on Information Theory (ISIT), 2014.
(C75) M. Mondelli, H. Hassani, R. Urbanke, “Scaling Exponent of List Decoders with
Applications to Polar Codes”, IEEE Information Theory Workshop (ITW),
2013.
(C76) W. Liu, H. Hassani, R. Urbanke, “The Least Degraded and the Least Upgraded
Channel with respect to a Channel Family”, IEEE Information Theory Work-
shop (ITW), 2013.
(C77) H. Hassani, N. Macris, R. Urbanke, “The Space of Solutions of Coupled XOR-
SAT Formulae”, IEEE International Symposium on Information Theory (ISIT),
2013.
(C78) H. Hassani, R. Urbanke, “Polar Codes: Robustness of the Successive Cancel-
lation Decoder with Respect to Quantization”, IEEE International Symposium
on Information Theory (ISIT), 2012.
(C79) Ali Goli, S. Hamed Hassani, Rüdiger Urbanke, “Universal Bounds on the Scal-
ing Behavior of Polar Codes”, IEEE International Symposium on Information
Theory, 2012.
(C80) R. Pedarsani, H. Hassani, I. Tal, E. Telatar, “On the Construction of Polar
Codes”, IEEE International Symposium on Information Theory (ISIT), 2011.
(C81) H. Hassani, N. Macris, R. Mori, “Near-Concavity of the Growth Rate for Cou-
pled LDPC Chains”, IEEE International Symposium on Information Theory
(ISIT), 2011.
(C82) H. Hassani, N. Macris, R. Urbanke, “Coupled Graphical Models and Their
Thresholds”, IEEE Information Theory Workshop (ITW), 2010.
(C83) H. Hassani, R. Urbanke, “On the Scaling of Polar Codes: I. The Behavior of
Polarized Channels”, IEEE International Symposium on Information Theory
(ISIT), 2010.
(C84) H. Hassani, K. Alishahi, R. Urbanke, “On the Scaling of Polar Codes: II.
The Behavior of Unpolarized Channels”, IEEE International Symposium on
Information Theory (ISIT), 2010. IEEE Jack Keil Wolf ISIT Student
Paper Award Finalist
(C85) H. Hassani, S. B. Korada, R. Urbanke, “The Compound Capacity of Polar
Codes”, Allerton Conference on Communication, Control, and Computing, 2009.
Journal Papers
(J1) Eric Lei, Hamed Hassani, Shirin Saeedi Bidokhti, Neural Estimation of the
Rate-Distortion Function for Massive Datasets, IEEE Journal on Selected Areas
in Information Theory, 2023.
(J2) Hamed Hassani, Adel Javanmard, The curse of overparametrization in adver-
sarial training: Precise analysis of robust generalization for random features
regression, Submitted to the Annals of Statistics, 2022.
(J3) Payam Delgosha, Hamed Hassani, Ramtin Pedarsani, Robust Classification Un-
der ℓ0 Attack for the Gaussian Mixture Model, SIAM Journal on Mathematics
of Data Science, 2022.
(J4) Donghwan Lee, Xinmeng Huang, Hamed Hassani, Edgar Dobriban, T-Cal: An
optimal test for the calibration of predictive models, Submitted to the Journal
of Machine Learning Research (JMLR), 2022.
(J5) Xingran Chen, Konstantinos Gatsis, Hamed Hassani, Shirin Saeedi Bidokhti,
Age of information in random access channels, IEEE Transactions on Infor-
mation Theory, 2022. IEEE Communications Society & Information
Theory Society Joint Paper Award
(J6) Edgar Dobriban, Hamed Hassani, David Hong, Alexander Robey, Provable
Tradeoffs for Adversarially Robust Classification, IEEE Transactions on In-
formation Theory, 2022.
(J7) H. Hassani, A. Karbasi, A. Mokhtari, Z. Shen “Stochastic Conditional Gradi-
ent++: (Non-)Convex Minimization and Continuous Submodular Maximiza-
tion”, SIAM Journal on Optimization, 2020.
(J8) A. Fazeli, H. Hassani, M. Mondelli, A. Vardy, “Binary Linear Codes with Op-
timal Scaling: Polar Codes with Large Kernels”, IEEE Transactions on Infor-
mation Theory, 2020.
(J9) A. Mokhtari, H. Hassani, A. Karbasi, “Stochastic Conditional Gradient Meth-
ods: From Convex Minimization to Submodular Maximization”, Journal of
Machine Learning Research, 2020.
(J10) K. Gatsis, H. Hassani, G. J. Pappas, “Latency-Reliability Tradeoffs for State
Estimation”, IEEE Transactions on Automatic Control, 2020.
(J11) M. Mondelli, H. Hassani, R. Urbanke, “A New Coding Paradigm for the Prim-
itive Relay Channel”, Entropy, 2019.
(J12) A. Reisizadeh, A. Mokhtari, H. Hassani, R. Pedarsani, “Quantized Decentral-
ized Consensus Optimization”, IEEE Transactions on Signal Processing, 2019.
(J13) M. Mondelli, H. Hassani, R. Urbanke, “How to Achieve the Capacity of Asym-
metric Channels”, IEEE Trans. on Information Theory, 2018.
(J14) S. A. Hashemi, M. Mondelli, H. Hassani, R. Urbanke, W. J. Gross, “Decoder
Partitioning: Towards Practical List Decoding of Polar Codes”, IEEE Trans.
on Communications, 2018.
(J15) Y. Chen, H. Hassani, A. Krause, “Near-optimal Bayesian Active Learning with
Correlated and Noisy Tests”, Electronic Journal of Statistics, 2017.
(J16) E. Kazemi, H. Hassani, M. Grossglauser, H. Pezeshgi-Modarres, “PROPER:
Global Protein-Protein Interaction Network Alignment with Percolation”, BMC
Bioinformatics, 2016.
(J17) M. Mondelli, H. Hassani, R. Urbanke, “Unified Scaling of Polar Codes: Error
Exponent, Scaling Exponent, Moderate Deviations, and Error Floors”, IEEE
Trans. on Information Theory, 2016.
(J18) E. Kazemi, H. Hassani, M. Grossglauser, “Growing a Graph Matching from a
Handful of Seeds”, Proceedings of Very Large Databases Endowment (VLDB),
2015.
(J19) J. M. Renes, D. Sutter, H. Hassani, “Alignment of Polarized Sets”, IEEE Jour-
nal on Selected Areas in Communications: Recent Advances In Capacity Ap-
proaching Codes, 2015.
(J20) M. Mondelli, H. Hassani, Igal Sason, R. Urbanke, “Achieving Marton’s Re-
gion for Broadcast Channels Using Polar Codes”, IEEE Trans. on Information
Theory, 2015.
(J21) M. Mondelli, H. Hassani, R. Urbanke, “Scaling Exponent of List Decoders with
Applications to Polar Codes”, IEEE Trans. on Information Theory, 2015.
(J22) H. Hassani, K. Alishahi, R. Urbanke, “Finite-length Scaling of Polar Codes”,
IEEE Trans. on Information Theory, 2014.
(J23) M. Mondelli, H. Hassani, R. Urbanke, “From Polar to Reed-Muller Codes: a
Technique to Improve the Finite-Length Performance”, IEEE Trans. on Com-
munications, 2014.
(J24) H. Hassani, R. Mori, T. Tanaka, R. Urbanke, “Rate-Dependent Analysis of the
Asymptotic Behaviour of Channel Polarization”, IEEE Trans. on Information
Theory, 2013.
(J25) H. Hassani, N. Macris, R. Urbanke, “Threshold Saturation in Spatially Coupled
Constraint Satisfaction Problems”, Journal of Statistical Mechanics-Theory and
Experiment, 2012.
(J26) H. Hassani, N. Macris, R. Urbanke, “Chain of Mean Field Models”, Journal of
Statistical Mechanics-Theory and Experiment, 2012.
SERVICE Area chair of major conferences in the fields of Machine Learning, Information The-
ory, and Communications: NeurIPS, ICML, COLT, ICLR, ISIT, ITW, ICC. Regular
participant/reviewer in NSF panels. Member of IEEE and AAAI. Member of the
editorial board of the Journal of Machine Learning Research (JMLR).
RESEARCH • TRIPODS Phase II: “EnCORE: Institute for Emerging CORE Methods in Data
SUPPORT Science”
National Science Foundation
Role: Lead PI at Penn (the institute is joint among Penn, UCLA, UCSD
(lead), and UT Austin). Dates: 2022-2027
Amount: $600,000 of $10,000,000
• NSF AI Institute: “AI Institute for Learning-Enabled Optimization at Scale
(TILOS)”
National Science Foundation
Role: Co-Investigator in the Foundations Team (the institute is joint among
MIT, Penn, UCSD (lead), UT Austin, and Yale). Dates: 2021-2026
Amount: One PhD Student
• NSF CAREER Award: “CAREER: Submodular Optimization in Complex En-
vironments: Theory, Algorithms, and Applications”
National Science Foundation
Role: solo-PI. Dates: 2020-2025
Amount: $400,000
• Intel Rising Star Faculty Award
Intel Research Office
Role: solo-PI.
Amount: $50,000
• AFOSR Young Investigator Award: “Data Acquisition in Dynamic Environ-
ments: A Submodular Perspective”
US Air Force Office of Scientific Research (AFOSR)
Role: solo-PI. Dates: 2020-2023
Amount: $450,000
• HDR TRIPODS Phase I: “Penn Institute for Foundations of Data Science”
National Science Foundation
Role: Co-Investigator (PI: S. Agarwal, other co-PIs: S. Khanna, A. Roth, W.
Su). Dates: 2019-2022
Amount: $300,000 of $1,500,000
• NSF CIF Small: “Collaborative Research: Communications in Ultra-Low-Rate
Regime: Fundamental Limits, Code Constructions, and Applications”
National Science Foundation
Role: PI (Joint with H. Mahdavifar from U. Michigan). Dates: 2019-2022
Amount: $250,000 of $500,000
• NSF CPS Medium: “Rethinking Communication and Control for Low-Latency,
High Reliability IoT Devices”
National Science Foundation
Role: Co-Investigator (PI: G. Pappas, other co-PI: A. Ribeiro). Dates: 2018-
2021
Amount: $330,000 of $1,000,000
• NSF CISE Research Initiation Initiative (NSF-CRII): “Low-Complexity Coding
at Optimal Length”
National Science Foundation
Role: solo-PI. Dates: 2018-2020
Amount: $175,000
TEACHING & Courses Taught at Penn
ADVISING • Deep Learning: A Hands-on Introduction (Fall 2023)
• Statistics for Data Science (Spring 2019, Fall 2019-23 – More than 250 Partici-
pants)
• Mathematics of High Dimensional Data with Applications in Machine Learning
(Spring 2021)
• Data Mining: Learning from Massive Data Sets (Fall 2017, 2018, 2019, 2020)
• Modern Convex Optimization (Spring 2018)
Courses Taught at ETH Zurich
• Information Theory (Spring 2015, 2016), (Course Satisfaction Score: 4.5/5)
Courses Taught at EPFL
• Advanced Digital Communication (Fall 2016), (Course Satisfaction Score: 4.5/5)
Current Ph.D. Students
• Mohammad Fereydounian (Double-major B.Sc. in Mathematics and EE from
Sharif University, Iran), Sept. 2017-May 2023 (expected)
• Alexander Robey, (Double-major BA in Engineering and Mathematics, Swarth-
more College, USA), Sept. 2018-present
• Eric Lei (B. Sc. in Electrical and Computer Engineering, Cornell U.), Sept
2020-present
• Xinmeng Huang (B. Sc. in Mathematics, U. Science and Tech., China), Sept
2020-present
• Donghwan Lee (B. Sc. in Mathematics, Seoul National U., South Korea), Sept
2020-present
• Behrad Moniri (B. Sc. in EE from Sharif University) 2021-present
• Sima Noorani (B. Sc. in ECE from Drexel University) 2022-present
• Shayan Kiyani (Double-major B. Sc. in EE and Mathematics from Sharif Uni-
versity) 2022-present
• Mahdi Sabbaghi (Double-major B.Sc. in EE and Physics from Sharif Univer-
sity) 2022-present
Graduated PhD Students
• Arman Adibi (Double-major B.Sc. in Math and EE from Isfahan University of
Technology, Iran), Sept. 2018-August 2023. Next: Post-doc at Princeton EE.
• Jorge Barreras (Double-major B.Sc. in Mathematics and Economics from Uni-
versidad de los Andes - Colombia), Sept 2017-Nov 2022. Next: Post-doc in
Duncan Watts's Group at Penn.
Post-Doctoral Scholars
• Aritra Mitra (Next: Assistant Prof at North Carolina State University)
• Zebang Shen (Next: Research Scientist at ETH Zurich)