Welcome to my website

Hello, I am Anirban Sarkar, currently a Computational Postdoctoral Fellow at Cold Spring Harbor Laboratory.

I am passionate about building systems that can perceive the world and act in the wild much as a human does, while providing explanations for their actions. Towards this goal, my research focuses on bridging the gap between human and machine intelligence: drawing inspiration from neuroscience to understand learning machines, improve the current state of interpretability, and address the lack of robustness outside the training distribution.

Research Interests

Explainable, Trustworthy and Robust Artificial Intelligence and Causal Inference


About Me

I am currently affiliated with the Koo Lab at the Simons Center for Quantitative Biology, Cold Spring Harbor Laboratory. As a Computational Postdoctoral Fellow, I work closely with Dr. Peter Koo on explaining different biological data modalities, e.g., interpreting model predictions on genomic sequences or identifying regions important for pancreatic cancer detection in CT scans.

Previously, I was a Postdoctoral Associate with the Sinha Lab for Developmental Research in the Department of Brain and Cognitive Sciences at the Massachusetts Institute of Technology, working with Dr. Xavier Boix on explainability under distribution shift.

I completed my Ph.D. in the Department of Computer Science & Engineering at the Indian Institute of Technology Hyderabad, under the guidance of Dr. Vineeth N Balasubramanian. My Ph.D. work aimed to make DNNs more usable in diverse situations through enhanced explanations, exploring problems such as model robustness against unseen noise, adversarial and attributional attacks, and self-explaining neural networks.

News

  • Winner of the IKDD Best Doctoral Dissertation in Data Science Award.

  • Runner-up for the NASSCOM AI Gamechanger Award 2022 in the AI Research (DL Algorithms and Architecture) category.

  • Awarded CVPR 2022 travel grant to present at the conference.

  • Accepted into the WACV 2021 Doctoral Consortium as a participant.

  • Awarded ICML 2019 travel grant to present at the conference.

  • Selected to join IBM India Research Lab, Bangalore, as a research intern working on AI explainability and active learning, May-Aug 2019.

  • Accepted as a participant in the Machine Learning Summer School at Universidad Autónoma de Madrid, Spain, Aug-Sep 2018.

  • Selected for the Sakura Science Plan award to intern at the University of Tokyo, Jun-Jul 2017.

Updates

Apr, 2023

Started working as a Computational Postdoctoral Fellow at Cold Spring Harbor Laboratory.

Oct, 2022

Presented my ongoing work on explainability under distribution shift at Fujitsu Limited in Japan.

Apr, 2022

Started working as a Postdoctoral Associate at MIT.

Mar, 2022

Successfully defended my Ph.D. dissertation. Please find my thesis here.

My Experiences

Computational Postdoctoral Fellow, CSHL 2023 Apr - Present

Explainability for new biological discoveries.

Postdoctoral Associate, MIT 2022 Apr - 2023 Apr

Explainability under distribution shift.

Research Intern, IBM India Research Lab, Bangalore 2019 May - 2019 Aug

Self-explaining neural networks with meaningful concepts as building blocks.

Research Intern, University of Tokyo, Japan 2017 Jun - 2017 Jul

Exploratory work in causal inference and application of causality in machine learning.

My Education

Ph.D. in Computer Science, IIT Hyderabad, India 2016 Aug - 2022 Mar

Rational Deep Machines: Towards explainable trustworthy and robust deep learning systems.

Master of Technology, NIT Rourkela, India 2014 Aug - 2016 Jul

Source camera identification model: Classifier learning, role of learning curves and their interpretation.

Publications

  • A. Sarkar, M. Groth, I. Mason, T. Sasaki and X. Boix, "Deephys: Deep Electrophysiology, Debugging Neural Networks under Distribution Shifts", Preprint code

  • A. Sarkar, D. Vijaykeerthy, A. Sarkar and VN Balasubramanian, "A Framework for Learning Ante-hoc Explainable Models via Concepts", IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2022). paper code

  • A. Sarkar, A. Sarkar and VN Balasubramanian, "Leveraging Test-Time Consensus Prediction for Robustness against Unseen Noise", IEEE Winter Conference on Applications of Computer Vision (WACV 2022). paper

  • A. Sarkar, A. Sarkar, S. Gali and VN Balasubramanian, "Adversarial Robustness without Adversarial Training: A Teacher-Guided Curriculum Learning Approach", Conference on Neural Information Processing Systems (NeurIPS 2021). paper code

  • A. Sarkar, A. Sarkar and VN Balasubramanian, "Enhanced Regularizers for Attributional Robustness", Association for the Advancement of Artificial Intelligence (AAAI 2021). paper code

  • A. Chattopadhyay, P. Manupriya, A. Sarkar and VN Balasubramanian, "Neural Network Attributions: A Causal Perspective", International Conference on Machine Learning (ICML 2019). paper code project website

  • A. Sarkar, A. Chattopadhyay, P. Howlader and VN Balasubramanian, "Grad-CAM++: Generalized Gradient-based Visual Explanations for Deep Convolutional Networks", IEEE Winter Conference on Applications of Computer Vision (WACV 2018). paper code

  • V.U. Sameer, A. Sarkar and R. Naskar, "Source camera identification model: Classifier learning, role of learning curves and their interpretation", IEEE International Conference on Wireless Communications, Signal Processing and Networking (WiSPNET 2017). paper

Workshop

  • A. Sarkar, A. Sarkar and VN Balasubramanian, "Leveraging Test-Time Consensus Prediction for Robustness against Unseen Noise", International Conference on Computer Vision (ICCV 2021), Workshop on "Adversarial Robustness In The Real World". paper

Software

Say hello!

Visit my office

Koo Lab, Koch Building, CSHL Campus