I am passionate about making AI systems interpretable—ensuring they can explain their decisions as they perceive and interact with the world. This drive for transparency has shaped my research journey from computer vision, where I developed attribution methods revealing what neural networks see, through neuroscience-inspired debugging tools for understanding distribution shifts, to my current work designing biological sequences with generative models. By focusing on interpretability at each stage, I aim to bridge human and machine intelligence, creating AI that is both robust in the wild and transparent enough to be a trusted partner in scientific discovery.
Deep Generative Modeling; Explainable, Trustworthy, and Robust Artificial Intelligence
I am currently a computational postdoctoral fellow in the Koo Lab at the Simons Center for Quantitative Biology, Cold Spring Harbor Laboratory. I work closely with Dr. Peter Koo on developing interpretable deep learning models for genomic sequence data, aiming to uncover regulatory logic, predict functional elements, and provide biological insights directly from DNA sequences. I strive to bridge advanced AI methods with fundamental genomic discovery.
Previously, I was a postdoctoral associate in the Sinha Lab for Developmental Research in the Department of Brain and Cognitive Sciences at the Massachusetts Institute of Technology, working with Dr. Xavier Boix and Dr. Pawan Sinha on explainability under distribution shift.
I completed my Ph.D. in the Department of Computer Science & Engineering at the Indian Institute of Technology Hyderabad, under the guidance of Dr. Vineeth N Balasubramanian. My doctoral work aimed to make DNNs usable in diverse, real-world situations through enhanced explanations. This was reflected in problems I explored such as improving model robustness against unseen noise as well as adversarial and attributional attacks, and investigating self-explaining neural networks.
Winner of the IKDD Best Doctoral Dissertation in Data Science Award 2022.
Runner-up for the NASSCOM AI Gamechanger Award 2022 in the AI Research (DL Algorithms and Architecture) category.
Awarded CVPR 2022 travel grant to present at the conference.
Accepted into the WACV 2021 Doctoral Consortium as a participant.
Awarded ICML 2019 travel grant to present at the conference.
Selected to join IBM India Research Lab, Bangalore, as a research intern on AI explainability and active learning, May-Aug 2019.
Accepted to join the Machine Learning Summer School as a participant at Universidad Autónoma de Madrid, Spain, Aug-Sep 2018.
Selected for the Sakura Science Plan award to intern at the University of Tokyo, Jun-Jul 2017.
Presented our work "Understanding DNA Discrete Diffusion for Engineering Regulatory DNA Sequences" in the Workshop on AI for Nucleic Acids at ICLR 2025.
Presented our work "Designing DNA With Tunable Regulatory Activity Using Discrete Diffusion" as a long oral at MLCB 2024. This work was also featured in the Workshop on AI for New Drug Modalities at NeurIPS 2024.
Started working as a computational postdoctoral fellow at Cold Spring Harbor Laboratory.
Presented my ongoing work on explainability under distribution shift at Fujitsu Limited in Japan.
Started working as a postdoctoral associate at MIT.
Successfully defended my Ph.D. dissertation.
Explainability for new biological discoveries.
Explainability under distribution shift.
Modeling self-explaining neural networks with meaningful concepts as building blocks.
Exploratory work in causal inference and application of causality in machine learning.
Rational Deep Machines: Towards explainable, trustworthy, and robust deep learning systems.
Source camera identification model: Classifier learning, role of learning curves and their interpretation.
A. Sarkar, Z. Tang, C. Zhao and PK Koo, "Designing DNA With Tunable Regulatory Activity Using Discrete Diffusion", Long oral at Machine Learning in Computational Biology (MLCB 2024). Preprint code
A. Sarkar, M. Groth, I. Mason, T. Sasaki and X. Boix, "Deephys: Deep Electrophysiology, Debugging Neural Networks under Distribution Shifts", Preprint code
A. Sarkar, D. Vijaykeerthy, A. Sarkar and VN Balasubramanian, "A Framework for Learning Ante-hoc Explainable Models via Concepts", IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2022). paper code
A. Sarkar, A. Sarkar and VN Balasubramanian, "Leveraging Test-Time Consensus Prediction for Robustness against Unseen Noise", IEEE Winter Conference on Applications of Computer Vision (WACV 2022). paper
A. Sarkar, A. Sarkar, S. Gali and VN Balasubramanian, "Adversarial Robustness without Adversarial Training: A Teacher-Guided Curriculum Learning Approach", Conference on Neural Information Processing Systems (NeurIPS 2021). paper code
A. Sarkar, A. Sarkar and VN Balasubramanian, "Enhanced Regularizers for Attributional Robustness", Association for the Advancement of Artificial Intelligence (AAAI 2021). paper code
A. Chattopadhyay, P. Manupriya, A. Sarkar and VN Balasubramanian, "Neural Network Attributions: A Causal Perspective", International Conference on Machine Learning (ICML 2019). paper code project website
A. Sarkar, A. Chattopadhyay, P. Howlader and VN Balasubramanian, "Grad-CAM++: Generalized Gradient-based Visual Explanations for Deep Convolutional Networks", IEEE Winter Conference on Applications of Computer Vision (WACV 2018). paper code
V.U. Sameer, A. Sarkar and R. Naskar, "Source camera identification model: Classifier learning, role of learning curves and their interpretation", IEEE International Conference on Wireless Communications, Signal Processing and Networking (WiSPNET 2017). paper
A. Sarkar, Y. Kang, N. Somia and P. Koo, "Understanding DNA Discrete Diffusion for Engineering Regulatory DNA Sequences", International Conference on Learning Representations (ICLR 2025), Workshop on AI for Nucleic Acids.
A. Sarkar, Z. Tang, C. Zhao and P. Koo, "Designing DNA With Tunable Regulatory Activity Using Discrete Diffusion", Neural Information Processing Systems (NeurIPS 2024), Workshop on AI for New Drug Modalities.
A. Sarkar, A. Sarkar and VN Balasubramanian, "Leveraging Test-Time Consensus Prediction for Robustness against Unseen Noise", International Conference on Computer Vision (ICCV 2021), Workshop on "Adversarial Robustness In The Real World".
Koo Lab, Koch Building, CSHL Campus