I am passionate about building systems that can perceive and act in the wild, much as a human does, while providing explanations for their actions. Towards this goal, my research bridges the gap between human and machine intelligence by drawing inspiration from neuroscience to better understand learning machines, improve the current state of interpretability, and address the lack of robustness outside the training distribution.
Explainable, Trustworthy and Robust Artificial Intelligence and Causal Inference
I am currently affiliated with the Koo Lab at the Simons Center for Quantitative Biology, Cold Spring Harbor Laboratory. As a computational postdoctoral fellow, I work closely with Dr. Peter Koo on explaining models trained on different biological data modalities, e.g., interpreting model predictions on genomic sequences or identifying regions important for pancreatic cancer detection in CT scans.
Previously, I was a postdoctoral associate with the Sinha Lab for Developmental Research in the Department of Brain and Cognitive Sciences at the Massachusetts Institute of Technology, working with Dr. Xavier Boix on explainability under distribution shift.
I completed my Ph.D. in the Department of Computer Science & Engineering at the Indian Institute of Technology Hyderabad, under the guidance of Dr. Vineeth N Balasubramanian. My Ph.D. work aimed to make DNNs more usable through improved explanations so they can operate reliably in diverse situations; this was reflected in problems such as enhancing model robustness against unseen noise, defending against adversarial as well as attributional attacks, and investigating self-explaining neural networks.
Winner of the IKDD Best Doctoral Dissertation in Data Science Award.
Runner-up for the NASSCOM AI Gamechanger Award 2022 in the AI Research (DL Algorithms and Architecture) category.
Awarded CVPR 2022 travel grant to present at the conference.
Accepted as a participant in the WACV 2021 Doctoral Consortium.
Awarded ICML 2019 travel grant to present at the conference.
Selected to join IBM India Research Lab, Bangalore as a research intern working on AI explainability and active learning, May-Aug 2019.
Accepted as a participant in the Machine Learning Summer School at Universidad Autónoma de Madrid, Spain, Aug-Sep 2018.
Selected for the Sakura Science Plan award to intern at the University of Tokyo, Jun-Jul 2017.
Started working as a computational postdoctoral fellow at Cold Spring Harbor Laboratory.
Presented my ongoing work on explainability under distribution shift at Fujitsu Limited in Japan.
Started working as a postdoctoral associate at MIT.
Successfully defended my Ph.D. dissertation. Please find my thesis here.
Explainability for new biological discoveries.
Explainability under distribution shift.
Modeling self-explaining neural networks with meaningful concepts as building blocks.
Exploratory work in causal inference and applications of causality in machine learning.
Rational Deep Machines: Towards explainable, trustworthy and robust deep learning systems.
Source camera identification model: Classifier learning, role of learning curves and their interpretation.
A. Sarkar, Z. Tang, C. Zhao and PK Koo, "Designing DNA With Tunable Regulatory Activity Using Discrete Diffusion", Preprint code
A. Sarkar, M. Groth, I. Mason, T. Sasaki and X. Boix, "Deephys: Deep Electrophysiology. Debugging Neural Networks under Distribution Shifts", Preprint code
A. Sarkar, D. Vijaykeerthy, A. Sarkar and VN Balasubramanian, "A Framework for Learning Ante-hoc Explainable Models via Concepts", IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2022). paper code
A. Sarkar, A. Sarkar and VN Balasubramanian, "Leveraging Test-Time Consensus Prediction for Robustness against Unseen Noise", IEEE Winter Conference on Applications of Computer Vision (WACV 2022). paper
A. Sarkar, A. Sarkar, S. Gali and VN Balasubramanian, "Adversarial Robustness without Adversarial Training: A Teacher-Guided Curriculum Learning Approach", Conference on Neural Information Processing Systems (NeurIPS 2021). paper code
A. Sarkar, A. Sarkar and VN Balasubramanian, "Enhanced Regularizers for Attributional Robustness", Association for the Advancement of Artificial Intelligence (AAAI 2021). paper code
A. Chattopadhyay, P. Manupriya, A. Sarkar and VN Balasubramanian, "Neural Network Attributions: A Causal Perspective", International Conference on Machine Learning (ICML 2019). paper code project website
A. Sarkar, A. Chattopadhyay, P. Howlader and VN Balasubramanian, "Grad-CAM++: Generalized Gradient-based Visual Explanations for Deep Convolutional Networks", IEEE Winter Conference on Applications of Computer Vision (WACV 2018). paper code
V.U. Sameer, A. Sarkar and R. Naskar, "Source camera identification model: Classifier learning, role of learning curves and their interpretation", IEEE International Conference on Wireless Communications, Signal Processing and Networking (WiSPNET 2017). paper
A. Sarkar, A. Sarkar and VN Balasubramanian, "Leveraging Test-Time Consensus Prediction for Robustness against Unseen Noise", International Conference on Computer Vision (ICCV 2021), Workshop on "Adversarial Robustness In The Real World". paper
Deephys: Deep Electrophysiology. Project website
Koo Lab, Koch Building, CSHL Campus