Biomedical Signal Based Predictive System

2024 - Present

Automated Pediatric Lung Sound Analysis

A stethoscope is one of the most common tools in medicine, but using one to diagnose a child requires years of specialized training. To make this expertise more accessible, we developed an AI system that can automatically detect signs of respiratory distress in pediatric patients. Our method works by converting lung sound recordings into visual representations and then using a sophisticated neural network to scan for patterns associated with disease. Unlike previous attempts that often missed subtle cues, our model uses a "global-local" perspective to ensure no detail is overlooked.
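To make the first step concrete, the sketch below turns a raw lung sound recording into a log-scaled time-frequency "image" of the kind a neural network can scan for disease-related patterns. The 4 kHz sampling rate, window sizes, and function names are illustrative assumptions, not our deployed pipeline.

```python
import numpy as np
from scipy.signal import spectrogram

def lung_sound_to_image(audio, sample_rate=4000):
    """Convert a 1-D lung sound recording into a 2-D time-frequency image.

    Returns a log-magnitude spectrogram suitable as input to an
    image-style neural network classifier.
    """
    freqs, times, power = spectrogram(
        audio,
        fs=sample_rate,
        nperseg=256,   # ~64 ms analysis window at 4 kHz
        noverlap=128,  # 50% overlap between consecutive windows
    )
    # Log compression tames the large dynamic range of breath sounds.
    return np.log(power + 1e-10)

# One second of synthetic audio stands in for a real recording.
audio = np.random.randn(4000)
image = lung_sound_to_image(audio)
print(image.shape)  # (frequency bins, time frames)
```

The resulting 2-D array can then be fed to a classifier that attends to both the whole image (global context) and local patches, in the spirit of the "global-local" perspective described above.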

The results show that this approach is notably more accurate than previous state-of-the-art systems. This tool is designed to empower doctors and nurses by providing a fast and data-driven way to screen for pneumonia and other lung infections. Our goal is to integrate this technology into digital stethoscopes to help save lives in regions where pediatric specialists are not always available.


2017 - 2025

Cardiovascular Screening with Heart Sounds

Cardiovascular Screening with Heart Sounds Diagram

Heart sounds offer a simple, low-cost, and non-invasive window into cardiovascular health, but accurately interpreting them often depends on specialized clinical expertise and can be affected by noise, device quality, and recording conditions. In our lab, we work to make cardiac screening more accessible by developing data-driven methods that automatically analyze heart sound recordings and detect signs of disease. Our research spans both the creation of high-quality heart sound datasets and the design of robust machine learning systems that can operate reliably in real-world settings. We have built curated, clinically meaningful collections of heart sound recordings that capture multiple types of valvular and other cardiac abnormalities, creating a strong foundation for training and evaluating intelligent screening tools.

On the modeling side, we develop deep learning approaches that learn directly from heart sound signals and their time-frequency representations, allowing systems to identify subtle acoustic patterns linked to abnormal cardiac function. A major focus of our work is robustness: we study how background noise, low-cost stethoscopes, and variability across sensors and environments affect performance, and we design methods that remain accurate despite these challenges. This includes learnable front-end filtering, feature representations tailored for noisy heart sounds, and ensemble strategies that combine complementary models for improved reliability. Overall, our lab aims to bridge traditional cardiac auscultation with modern artificial intelligence, enabling scalable, affordable, and dependable cardiovascular screening tools. Our long-term goal is to support early detection of heart disease, especially in low-resource and underserved communities where access to expert cardiac evaluation may be limited.
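As a minimal illustration of the ensemble idea mentioned above, the sketch below averages the abnormality probabilities produced by several complementary models into one screening score. The models here are stand-in arrays of hypothetical probabilities, not our trained networks, and the 0.5 decision threshold is an assumption for the example.

```python
import numpy as np

def ensemble_predict(prob_outputs, weights=None):
    """Combine per-model abnormality probabilities by weighted averaging.

    prob_outputs: list of arrays, one per model, each of shape
    (n_recordings,), giving the predicted probability that a heart
    sound recording is abnormal.
    """
    probs = np.stack(prob_outputs)  # (n_models, n_recordings)
    if weights is None:
        weights = np.ones(len(prob_outputs)) / len(prob_outputs)
    weights = np.asarray(weights, dtype=float)
    return weights @ probs          # weighted average per recording

# Three hypothetical models scoring four heart sound recordings.
model_a = np.array([0.9, 0.2, 0.6, 0.1])  # e.g. trained on raw waveforms
model_b = np.array([0.8, 0.3, 0.7, 0.2])  # e.g. trained on spectrograms
model_c = np.array([0.7, 0.1, 0.5, 0.3])  # e.g. a noise-robust variant
scores = ensemble_predict([model_a, model_b, model_c])
labels = scores >= 0.5                     # screening decision threshold
print(scores, labels)
```

Averaging complementary models tends to cancel their individual errors, which is one reason ensembles help reliability under noisy, variable recording conditions.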

Publications


2025

Real Time Lung Sound Denoising

Real Time Lung Sound Denoising Diagram

Respiratory diseases such as asthma, chronic obstructive pulmonary disease (COPD), and bronchitis are often assessed through lung auscultation using stethoscopes. While digital stethoscopes enable recording and analysis of lung sounds, these recordings frequently contain environmental noise from speech, patient movement, and clinical equipment, which can obscure important acoustic patterns and complicate diagnosis.

Our lab is developing AI-driven methods to improve the reliability of lung sound analysis by enhancing the quality of recorded respiratory signals. Our research focuses on deep learning techniques that learn to separate clinically meaningful lung sounds from background noise while preserving subtle acoustic features associated with respiratory abnormalities. Using neural architectures capable of capturing both short-term and long-range sound patterns, we aim to transform noisy auscultation recordings into clearer signals for automated analysis.
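To illustrate what denoising aims to achieve, the sketch below uses classical spectral gating as a simple stand-in for the learned denoisers described above: it estimates a per-frequency noise floor from a segment assumed to contain noise only, then attenuates spectrogram bins below that floor. The sampling rate, window size, and the assumption of a leading noise-only segment are all illustrative, not part of our actual method.

```python
import numpy as np
from scipy.signal import stft, istft

def spectral_gate_denoise(noisy, sample_rate=4000, noise_seconds=0.5):
    """Suppress stationary background noise in a lung sound recording.

    A classical baseline: estimate the noise floor per frequency from
    the leading noise-only frames, then subtract it from the magnitude
    spectrogram before reconstructing the waveform.
    """
    freqs, times, Z = stft(noisy, fs=sample_rate, nperseg=256)
    mag, phase = np.abs(Z), np.angle(Z)

    # Noise floor: mean magnitude over the leading noise-only frames.
    n_noise_frames = max(1, int(noise_seconds * sample_rate / 128))
    noise_floor = mag[:, :n_noise_frames].mean(axis=1, keepdims=True)

    # Keep only energy above the estimated floor (magnitude subtraction).
    cleaned_mag = np.maximum(mag - noise_floor, 0.0)
    _, cleaned = istft(cleaned_mag * np.exp(1j * phase),
                       fs=sample_rate, nperseg=256)
    return cleaned

# Synthetic example: a 150 Hz tone (standing in for a breath sound)
# buried in noise, with a noise-only first half second.
rng = np.random.default_rng(0)
t = np.arange(8000) / 4000
clean = np.sin(2 * np.pi * 150 * t)
clean[:2000] = 0.0
noisy = clean + 0.3 * rng.standard_normal(8000)
denoised = spectral_gate_denoise(noisy)
```

Unlike this fixed rule, a learned denoiser can adapt to non-stationary interference such as speech and handling noise, which is why our work centers on deep architectures rather than baselines of this kind.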

View Publication

2019

End-to-end Sleep Staging with Raw Single Channel EEG

Model Diagram for Automated EEG Sleep Staging

Sleep plays a vital role in overall health, yet identifying sleep stages from electroencephalogram (EEG) recordings typically requires specialists to manually review long recordings, a process that is both time-consuming and expertise-intensive. In our lab, we explore AI-driven methods to automate sleep stage analysis using EEG signals.

Our research focuses on developing deep learning approaches that learn meaningful temporal patterns directly from raw brain signals, enabling reliable sleep stage identification without relying on handcrafted signal features. By leveraging architectures designed to capture complex dynamics in physiological time-series data, we aim to improve the efficiency and consistency of sleep analysis.
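Sleep stages are conventionally scored per 30-second epoch, so an end-to-end model receives one epoch of raw samples at a time and predicts one stage label for it. The sketch below shows only this standard epoching step that feeds such a model; the 100 Hz sampling rate and the use of random data are illustrative assumptions.

```python
import numpy as np

def segment_into_epochs(eeg, sample_rate=100, epoch_seconds=30):
    """Split a raw single-channel EEG recording into fixed-length epochs.

    Each row of the result is one epoch of raw samples, ready to be fed
    to a model that predicts one sleep stage per epoch. Trailing samples
    that do not fill a complete epoch are dropped.
    """
    samples_per_epoch = sample_rate * epoch_seconds
    n_epochs = len(eeg) // samples_per_epoch
    return eeg[: n_epochs * samples_per_epoch].reshape(
        n_epochs, samples_per_epoch
    )

# Ten minutes of synthetic EEG at a hypothetical 100 Hz sampling rate.
eeg = np.random.randn(10 * 60 * 100)
epochs = segment_into_epochs(eeg)
print(epochs.shape)  # (20, 3000): 20 epochs of 3000 raw samples each
```

Because the model consumes these raw epochs directly, no handcrafted spectral or statistical features need to be computed in between.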

View Publication