Looking for full-time positions as a research scientist or postdoc!
I am a fifth-year Ph.D. student at Tulane University, working with Prof. Jihun Hamm. My current research focuses on understanding the robustness of machine learning under distribution shifts. I also work on developing methods for solving large-scale bilevel optimization problems that arise in popular machine learning applications such as hyperparameter optimization, few-shot learning, importance learning, and training-data poisoning.
Previously, I completed my Master's in Computer Science at The Ohio State University under the guidance of Prof. Jihun Hamm and Prof. Mikhail Belkin, where I worked on active learning and semi-supervised learning problems.
Research Interests
To deepen the understanding of the robustness of machine learning models to different types of distribution shifts, such as shifts induced by corrupted data and shifts induced by adversarial attacks.
Publications
- “On the Fly Neural Style Smoothing for Risk-Averse Domain Generalization”.
Code
Akshay Mehra, Yunbei Zhang, Bhavya Kailkhura, and Jihun Hamm.
Winter Conference on Applications of Computer Vision (WACV) 2024.
- “A Spectral View of Randomized Smoothing under Common Corruptions: Benchmarking and Improving Certified Robustness”.
Jiachen Sun, Akshay Mehra, Bhavya Kailkhura, Pin-Yu Chen, Dan Hendrycks, Jihun Hamm, and Z. Morley Mao.
European Conference on Computer Vision (ECCV) 2022.
- “Understanding the Limits of Unsupervised Domain Adaptation via Data Poisoning”.
Code, Poster
Akshay Mehra, Bhavya Kailkhura, Pin-Yu Chen, and Jihun Hamm.
Neural Information Processing Systems (NeurIPS) 2021.
- “How Robust are Randomized Smoothing based Defenses to Data Poisoning?”.
Code, Poster
Akshay Mehra, Bhavya Kailkhura, Pin-Yu Chen, and Jihun Hamm.
Computer Vision and Pattern Recognition (CVPR) 2021.
- “Penalty Method for Inversion-Free Deep Bilevel Optimization”.
Code, Poster
Akshay Mehra and Jihun Hamm.
Asian Conference on Machine Learning (ACML) 2021.
Preprints
- “Analysis of Task Transferability in Large Pre-trained Classifiers”.
Code
Akshay Mehra, Yunbei Zhang, and Jihun Hamm.
- “Understanding the Robustness of Multi-Exit Models under Common Corruptions”.
Akshay Mehra, Skyler Seto, Navdeep Jaitly, and Barry-John Theobald.
- “Do Domain Generalization Methods Generalize Well?”.
Code, Poster
Akshay Mehra, Bhavya Kailkhura, Pin-Yu Chen, and Jihun Hamm.
Workshop on Machine Learning Safety at Neural Information Processing Systems (NeurIPS) 2022.
- “Machine vs Machine: Minimax-Optimal Defense Against Adversarial Examples”.
Jihun Hamm and Akshay Mehra.