Posts by Collection
portfolio
publications
Evaluating Vision Language Models in Detecting Learning Engagement
With the advancement of both computer vision and natural language processing, there is growing interest in incorporating Vision Language Models (VLMs) into the classroom to empower students and educators. Despite the VLMs' sophisticated abilities in context-aware emotion recognition, their effectiveness in detecting classroom-specific emotions, e.g., engagement, distraction, and absent-mindedness, remains underexplored. As such, this paper investigates the capabilities of two state-of-the-art VLMs in this domain through an empirical study, focusing on two research questions: (1) Is learning engagement detection more challenging for VLMs than conventional emotion detection? (2) What are the key difficulties VLMs face in learning engagement detection tasks? To address these questions, we perform a series of evaluation experiments using a classroom behavior detection dataset and an emotion recognition dataset. We conclude that VLMs that perform well on basic emotion recognition struggle with in-context engagement detection, due to the nuanced and context-dependent nature of the task. Specifically, experiments show that VLMs have difficulty distinguishing engaged from distracted classroom behavior, e.g., reading versus bowing the head. This suggests that VLMs still have significant room for improvement in engagement analysis. The issue could potentially be addressed by incorporating more classroom-specific training data or commonsense reasoning frameworks.
Download here
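For readers who want to reproduce this style of evaluation, here is a minimal sketch of the protocol the abstract describes: prompt a VLM with classroom images, constrain it to an engagement label set, and tabulate a gold-vs-predicted confusion count. The dataset layout, prompt wording, label set, and the query_vlm() wrapper are illustrative assumptions, not the paper's actual code.

```python
# Minimal sketch of a VLM engagement-detection evaluation loop.
# Everything here (labels, prompt, query_vlm) is a hypothetical stand-in.
from collections import Counter

ENGAGEMENT_LABELS = ["engaged", "distracted", "absent-minded"]

def query_vlm(image_path: str, prompt: str) -> str:
    """Hypothetical wrapper around a VLM client (e.g., a LLaVA- or
    GPT-4V-style model); returns the model's free-text answer."""
    raise NotImplementedError("plug in your VLM client here")

def classify(image_path: str) -> str:
    prompt = (
        "You see a student in a classroom. Classify their state as one of: "
        + ", ".join(ENGAGEMENT_LABELS) + ". Answer with a single word."
    )
    answer = query_vlm(image_path, prompt).strip().lower()
    # Fall back to 'unknown' if the model answers outside the label set.
    return answer if answer in ENGAGEMENT_LABELS else "unknown"

def confusion(samples: list[tuple[str, str]]) -> Counter:
    """samples: (image_path, gold_label) pairs; returns (gold, pred) counts.
    A high ('engaged', 'distracted') count would reproduce the
    reading-versus-bowing-the-head confusion the paper reports."""
    return Counter((gold, classify(path)) for path, gold in samples)
```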
Contrastive Learning-Based Spectral Knowledge Distillation for Multi-Modality and Missing Modality Scenarios in Semantic Segmentation
Improving the performance of semantic segmentation models using multispectral information is crucial, especially in low-light and adverse conditions. Existing multi-modal fusion techniques either learn cross-modality features to generate a fused image or engage in knowledge distillation, but they treat multi-modal and missing-modality scenarios as distinct problems, which is not optimal for multi-sensor models. To address this, a novel multi-modal fusion approach called CSK-Net is proposed, which uses a contrastive learning-based spectral knowledge distillation technique along with an automatic mixed feature exchange mechanism for semantic segmentation in optical (EO) and infrared (IR) images. The distillation scheme extracts detailed textures from the optical images and distills them into the optical branch of CSK-Net. The model encoder consists of shared convolution weights with separate batch norm (BN) layers for both modalities, to capture the multi-spectral information from different modalities of the same objects. A novel Gated Spectral Unit (GSU) and a mixed feature exchange strategy are proposed to increase the correlation of modality-shared information and decrease the modality-specific information during the distillation process. Comprehensive experiments show that CSK-Net surpasses state-of-the-art models in multi-modal tasks and under missing modalities when exclusively utilizing IR data for inference, across three public benchmarking datasets. For missing-modality scenarios, the performance increase is achieved without additional computational costs compared to the baseline segmentation models.
Download here
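The shared-weights, per-modality-normalization idea in the abstract is concrete enough to sketch. Below is a minimal PyTorch block with convolution weights shared across EO and IR inputs and a separate BatchNorm per modality; the layer sizes and the two-key modality interface are illustrative assumptions, not the published CSK-Net code.

```python
# A minimal PyTorch sketch of shared conv weights with per-modality BN.
import torch
import torch.nn as nn

class SharedConvDualBN(nn.Module):
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        # One set of filters sees both spectra (shared weights).
        self.conv = nn.Conv2d(in_ch, out_ch, 3, padding=1, bias=False)
        # Normalization statistics stay modality-specific.
        self.bn = nn.ModuleDict({
            "eo": nn.BatchNorm2d(out_ch),  # optical statistics
            "ir": nn.BatchNorm2d(out_ch),  # infrared statistics
        })
        self.act = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor, modality: str) -> torch.Tensor:
        # Shared structure is captured by the common filters, while
        # per-sensor distribution shifts are absorbed by the chosen BN.
        return self.act(self.bn[modality](self.conv(x)))

# Usage: run EO and IR frames of the same scene through identical filters.
block = SharedConvDualBN(3, 64)
eo = torch.randn(2, 3, 128, 160)  # optical batch
ir = torch.randn(2, 3, 128, 160)  # infrared batch (3-channel for illustration)
f_eo, f_ir = block(eo, "eo"), block(ir, "ir")
```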
SKD-Net: Spectral-based Knowledge Distillation in Low-Light Thermal Imagery for Robotic Perception (Accepted at ICRA'24)
Will be updated soon
Download here
social
National Service Scheme
Responsible for teaching high school and senior secondary school students. Organized various social events such as blood donation camps and awareness rallies.
Unnat Bharat Abhiyan, IIT Roorkee
Initiative Leader
Student Mentorship Program
Mentored seven first-year students from the Mechanical and Industrial Engineering branch as part of this program.