I'm advised by Yezhou Yang and Chitta Baral at ASU. I also work with the Machine Intelligence group at Lawrence Livermore National Laboratory, and the Adaptive Systems and Interaction group at Microsoft Research. I received my MS in ECE from Carnegie Mellon University, where I worked with Aswin Sankaranarayanan.
My mission is to conduct research that improves the robustness and reliability of AI systems. My contributions towards this goal lie at the wonderful intersection of machine learning, computer vision, and natural language processing. My domain expertise is "semantic vision", i.e. computer vision tasks that seek to assign "meaning" to what we see -- this includes "designative" tasks such as image classification, as well as "communicative" tasks involving both vision and language, such as visual question answering, visual reasoning, and image captioning.
The main focus of my Ph.D. is robust visual understanding, addressing problems such as domain shift/out-of-distribution generalization, linguistic robustness (logical, semantic), and visual robustness (corruptions, geometric transformations, attribute-level shift).
I am on the academic job market!!!
Research Statement
Send me an email if you wanna chat!
Collaboration/Mentorship Opportunities: If you're a PhD student interested in collaborating with me on robust machine learning in Vision/NLP/V+L (domain generalization, adversarial attack/defense etc.), or other related topics, please send me an email (if you're at ASU, we can discuss it over coffee). I'm always happy to dish out advice and share my experiences w.r.t. admissions to Ph.D. programs in CS/EE/CE (help me help you by sending me a list of specific questions over email).
Oct 2022 Recognized as Top Reviewer for NeurIPS 2022 (top ~10%)
Jun 2022 Presented my work at the Doctoral Consortium at CVPR 2022
Jun 2022 Organized the 1st Workshop on Open-Domain Retrieval Under Multi-Modal Settings (O-DRUM) at CVPR 2022
Apr 2022 Recognized as Highlighted Reviewer for ICLR 2022 (top ~8%)
Mar 2022 Invited talks on "Reliable Semantic Vision" at Rochester Institute of Technology, SUNY Binghamton, Indiana University, UMBC, and Case Western Reserve University
We report the surprising finding that, although recent state-of-the-art text-to-image (T2I) models produce high-quality images, they are severely limited in their ability to generate multiple objects or to respect specified spatial relations such as left/right/above/below. We introduce VISOR, a metric that quantifies spatial reasoning performance and can be used off-the-shelf with any text-to-image model. We also construct and release SR2D, a dataset of sentences that describe spatial relationships (left/right/above/below) between pairs of commonly occurring objects.
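For intuition, here is a minimal, hypothetical sketch of a VISOR-style check, assuming access to an object detector run on the generated image; the function names and the centroid-based rule are illustrative, not the official implementation.

```python
# Hypothetical VISOR-style check: given object detections for a generated image,
# verify that both objects are present and that their centroids satisfy the
# spatial relation stated in the prompt. Names here are illustrative only.
from typing import Dict, Tuple

Box = Tuple[float, float, float, float]  # (x_min, y_min, x_max, y_max)

def centroid(box: Box) -> Tuple[float, float]:
    x0, y0, x1, y1 = box
    return ((x0 + x1) / 2.0, (y0 + y1) / 2.0)

def relation_holds(box_a: Box, box_b: Box, relation: str) -> bool:
    """True if object A is <relation> object B, judged by centroid positions."""
    (xa, ya), (xb, yb) = centroid(box_a), centroid(box_b)
    if relation == "left of":
        return xa < xb
    if relation == "right of":
        return xa > xb
    if relation == "above":
        return ya < yb  # image coordinates: y grows downward
    if relation == "below":
        return ya > yb
    raise ValueError(f"unknown relation: {relation}")

def visor_score(detections: Dict[str, Box], obj_a: str, obj_b: str, relation: str) -> float:
    """1.0 if both objects are detected and the relation is correct, else 0.0."""
    if obj_a not in detections or obj_b not in detections:
        return 0.0  # object presence fails, so the spatial score is 0
    return float(relation_holds(detections[obj_a], detections[obj_b], relation))

# Example: prompt "a dog to the left of a chair"
dets = {"dog": (10, 40, 60, 90), "chair": (120, 30, 180, 100)}
print(visor_score(dets, "dog", "chair", "left of"))  # -> 1.0
```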
ALT discovers diverse and adversarial transformations using an image-to-image neural network with learnable weights. ALT improves state-of-the-art performance on three single-domain generalization benchmarks and significantly outperforms pixel-wise adversarial training and standard data augmentation techniques.
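A simplified PyTorch sketch of this kind of min-max training is shown below; the tiny classifier, transformation network, step sizes, and number of inner steps are illustrative placeholders rather than the configuration used in the paper.

```python
# Sketch of adversarially learned transformations: an image-to-image network is
# updated to maximize the classifier's loss, and the classifier is trained on
# both clean and transformed images. All modules are toy placeholders.
import torch
import torch.nn as nn
import torch.nn.functional as F

classifier = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))

# Image-to-image transformation module with learnable weights.
transform_net = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 3, 3, padding=1), nn.Sigmoid(),
)

opt_cls = torch.optim.SGD(classifier.parameters(), lr=1e-2)
opt_trf = torch.optim.SGD(transform_net.parameters(), lr=1e-1)

def training_step(x, y, inner_steps=3):
    # Inner loop: update the transformation net to *maximize* classifier loss,
    # producing adversarial views of the clean batch.
    for _ in range(inner_steps):
        opt_trf.zero_grad()
        adv_loss = -F.cross_entropy(classifier(transform_net(x)), y)
        adv_loss.backward()
        opt_trf.step()
    # Outer step: train the classifier on clean and transformed images.
    opt_cls.zero_grad()
    loss = F.cross_entropy(classifier(x), y) + \
           F.cross_entropy(classifier(transform_net(x).detach()), y)
    loss.backward()
    opt_cls.step()
    return loss.item()

x, y = torch.rand(8, 3, 32, 32), torch.randint(0, 10, (8,))
print(training_step(x, y))
```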
Although the imaging pipeline cannot capture many physical properties of objects (e.g., mass and coefficient of friction), these properties can be estimated by utilizing cues introduced by collisions. We introduce CRIPP-VQA, a new dataset for reasoning about the implicit physical properties of objects from videos. The dataset contains videos of objects in motion, annotated with hypothetical/counterfactual questions about the effect of actions (removing/adding/replacing objects) and questions about planning (performing actions to reach a goal).
In this paper, we introduce a benchmark for covariate shift detection (CSD) that builds upon and complements previous work on domain generalization. We find that existing novelty detection methods designed for OOD benchmarks perform worse than simple confidence-based methods on our CSD benchmark. We propose Domain Interpolation Sensitivity (DIS), based on the simple hypothesis that interpolation between the test input and randomly sampled inputs from the training domain offers sufficient information to distinguish between the training domain and unseen domains under covariate shift.
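As a rough illustration of the idea (not the paper's exact recipe), the sketch below mixes a test image with randomly sampled training images and measures how much the model's predictive distribution shifts; the mixing coefficients, KL-based divergence, and toy model are assumptions made for the example.

```python
# Hedged sketch of an interpolation-sensitivity score: higher means predictions
# move more under interpolation, which we take as evidence of covariate shift.
import torch
import torch.nn.functional as F

def dis_score(model, x_test, train_batch, alphas=(0.25, 0.5, 0.75)):
    model.eval()
    with torch.no_grad():
        p_ref = F.softmax(model(x_test.unsqueeze(0)), dim=-1)
        divergences = []
        for alpha in alphas:
            for x_train in train_batch:
                # Interpolate the test input with a training-domain input.
                x_mix = alpha * x_test + (1 - alpha) * x_train
                p_mix = F.softmax(model(x_mix.unsqueeze(0)), dim=-1)
                divergences.append(F.kl_div(p_mix.log(), p_ref, reduction="batchmean"))
        return torch.stack(divergences).mean().item()

# Example usage with a toy model and random tensors standing in for images.
model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 10))
x_test = torch.rand(3, 32, 32)
train_batch = torch.rand(16, 3, 32, 32)
print(dis_score(model, x_test, train_batch))
```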
We propose SDRO, a distributionally robust optimization method that operates on linguistic transformations of sentence inputs; SISP, a suite of semantics-inverting (SI) and semantics-preserving (SP) linguistic transformations; and an ensembling technique for vision-and-language inference.
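To make the worst-case training idea concrete, here is a heavily simplified sketch; the negate/paraphrase transforms and the ToyVLModel are hypothetical stand-ins for the SISP pipeline and a real vision-and-language model.

```python
# Minimal sketch: generate SISP-style variants of a sentence, then optimize the
# worst-case (highest-loss) variant in the group. Transforms and model are toys.
import torch
import torch.nn.functional as F

def negate(sentence: str) -> str:        # semantics-inverting (SI) placeholder
    return "it is not true that " + sentence

def paraphrase(sentence: str) -> str:    # semantics-preserving (SP) placeholder
    return "one can see that " + sentence

class ToyVLModel(torch.nn.Module):
    """Stand-in for a vision-and-language entailment model (2-way output)."""
    def __init__(self):
        super().__init__()
        self.head = torch.nn.Linear(3 * 32 * 32, 2)
    def forward(self, image, sentence):
        # A real model would jointly encode the image and the sentence.
        return self.head(image.flatten(1))

def sdro_loss(model, image, sentence, label):
    # SI transforms flip the entailment label; SP transforms preserve it.
    variants = [(sentence, label),
                (paraphrase(sentence), label),
                (negate(sentence), 1 - label)]
    losses = [F.cross_entropy(model(image, s), torch.tensor([l]))
              for s, l in variants]
    return torch.stack(losses).max()  # train on the worst-case variant

model = ToyVLModel()
image = torch.rand(1, 3, 32, 32)
print(sdro_loss(model, image, "a man is riding a horse", label=1).item())
```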
In this work, we conduct a comprehensive study of common data modification strategies and evaluate not only their in-domain and OOD performance, but also their adversarial robustness (AR). This work serves as an empirical study towards understanding the relationship between generalizing to unseen domains and defending against adversarial perturbations.
We present a debiased dataset for the Person-Centric Visual Grounding (PCVG) task. The original dataset contains annotation biases: for instance, in many cases the first name in the sentence corresponds to the largest bounding box, or the sequence of names in the sentence follows an exact left-to-right order in the image. The debiased dataset offers the PCVG task a more practical baseline for reliable benchmarking and future improvements.
We seek to improve information retrieval (IR) with neural retrievers (NR) in the biomedical domain, using a three-pronged approach: (1) a template-based question generation method; (2) two novel pre-training tasks that are closely aligned with the downstream task of information retrieval; and (3) the "Poly-DPR" model, which encodes each context into multiple context vectors.
Training VQA models with two additional objectives, object centroid estimation and relative position estimation, leads to improved performance on spatial reasoning questions (in GQA) in fully supervised and few-shot settings, as well as improved O.O.D. generalization.
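A rough sketch of how such auxiliary objectives could be combined with the main VQA loss is shown below; the head shapes, loss weights, and tensor sizes are illustrative assumptions, not the paper's exact setup.

```python
# Illustrative multi-task loss: VQA answer classification plus two auxiliary
# spatial objectives (centroid regression and relative-position classification).
import torch
import torch.nn.functional as F

def multitask_loss(answer_logits, answer_target,
                   centroid_pred, centroid_target,
                   relpos_logits, relpos_target,
                   w_centroid=0.5, w_relpos=0.5):
    # Main VQA answer classification loss.
    loss_vqa = F.cross_entropy(answer_logits, answer_target)
    # Auxiliary 1: regress normalized (x, y) centroids of referenced objects.
    loss_centroid = F.mse_loss(centroid_pred, centroid_target)
    # Auxiliary 2: classify the relative position between object pairs
    # (e.g., left / right / above / below).
    loss_relpos = F.cross_entropy(relpos_logits, relpos_target)
    return loss_vqa + w_centroid * loss_centroid + w_relpos * loss_relpos

# Toy tensors standing in for model outputs and labels.
loss = multitask_loss(
    answer_logits=torch.randn(4, 3000), answer_target=torch.randint(0, 3000, (4,)),
    centroid_pred=torch.rand(4, 2), centroid_target=torch.rand(4, 2),
    relpos_logits=torch.randn(4, 4), relpos_target=torch.randint(0, 4, (4,)),
)
print(loss.item())
```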
We show that models can be trained without any human-annotated Q-A pairs, using only images and their associated text captions. Our experiments suggest gains on a benchmark with shifted priors (VQA-CP) over baselines that use full supervision from human-authored QA data.
Scene completion from sparse and incomplete label maps. 'Halluci-Net' is a two-stage method that captures object co-occurrence relationships to produce dense label maps from incomplete label maps and object boundaries, for use in image synthesis.
An unsupervised reading comprehension method that operates directly on a single test passage. Synthetic QA pairs are generated from the passage, and models are trained on these pairs. When a new human-authored test question appears, these models infer answers better than previous unsupervised methods.
An adversarial training approach that learns to generate new samples so as to maximize the classifier's exposure to the attribute space. We study robustness to semantic shifts beyond L-p norm perturbations, on three types of naturally occurring perturbations: object-related shifts, geometric transformations, and common image corruptions.
MUTANT is a training paradigm that exposes VQA models to perceptually similar yet semantically distinct mutations of the input image or question. We use a pairwise consistency loss between answers to original and mutant inputs as a regularizer, along with an answer-embedding NCE loss. MUTANT establishes a new state of the art (+10%) on the VQA-CP challenge for generalization under changing priors.
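Below is a hedged sketch of one plausible form of such a pairwise consistency regularizer; this is a simplified reading, and the paper's exact formulation (including the answer-embedding NCE loss) differs in detail.

```python
# Simplified pairwise consistency term: encourage the model to be similarly
# confident in the ground-truth answers of an original/mutant pair.
import torch
import torch.nn.functional as F

def pairwise_consistency(logits_orig, logits_mut, ans_orig, ans_mut):
    """logits_*: (B, num_answers); ans_*: (B,) ground-truth answer indices."""
    p_orig = F.softmax(logits_orig, dim=-1)
    p_mut = F.softmax(logits_mut, dim=-1)
    # Score assigned to each sample's own ground-truth answer.
    s_orig = p_orig.gather(1, ans_orig.unsqueeze(1)).squeeze(1)
    s_mut = p_mut.gather(1, ans_mut.unsqueeze(1)).squeeze(1)
    # Penalize inconsistent confidence across the paired samples.
    return (s_orig - s_mut).abs().mean()

def mutant_style_loss(logits_orig, logits_mut, ans_orig, ans_mut, lam=0.5):
    ce = F.cross_entropy(logits_orig, ans_orig) + F.cross_entropy(logits_mut, ans_mut)
    return ce + lam * pairwise_consistency(logits_orig, logits_mut, ans_orig, ans_mut)

# Toy example with random tensors standing in for model outputs.
B, A = 4, 100
loss = mutant_style_loss(torch.randn(B, A), torch.randn(B, A),
                         torch.randint(0, A, (B,)), torch.randint(0, A, (B,)))
print(loss.item())
```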
Actions in videos are inherently linked to latent social and commonsense aspects. We present the first work on generating commonsense captions directly from videos, describing the latent intentions, attributes, and effects of the humans in them. Additionally, we explore open-ended video-based commonsense question answering (V2C-QA) as a way to enrich our captions.
VQA models struggle with negation, antonyms, conjunction, and disjunction! With our novel modules and datasets, we demonstrate the capability to answer logically composed questions while retaining performance on standard VQA data.
Given two images (source and target) with different object configurations, what is the sequence of steps needed to re-arrange the source to match the target? For this reasoning task, we propose a modular approach that combines a visual encoder with an event sequencer/planner and exhibits inductive generalization.