AI-Driven Real-Time Facial Health Monitoring System
Keywords:
Facial Health Monitoring System, Remote Photoplethysmography (rPPG), Heart Rate Estimation, Stress Detection, Emotion Recognition, Computer Vision

Abstract
This study presents an automated facial health monitoring system that analyses real-time video to assess critical human health indicators, including heart rate, stress level, and emotional state. The system combines computer vision, physiological signal analysis, and lightweight machine-learning models to perform a complete health assessment without physical contact. A custom SignalExtractor module processes incoming video frames, detecting subtle variations in skin-pixel intensity to construct remote photoplethysmography (rPPG) signals for heart-rate estimation. A pre-trained deep-learning model, emotion_detection_model.h5, classifies facial expressions into discrete emotion categories. A StressLevelEstimator then fuses these emotional and physiological cues into an overall stress score. The complete pipeline is deployed as an interactive web-based platform with secure user login, automatic video uploads, and continuously updated results. Tests on sample video recordings show that estimation remains reliable under normal lighting and moderate facial motion. The proposed system offers a practical, contact-free alternative to wearable monitoring devices.
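The rPPG heart-rate step described above can be illustrated with a minimal sketch. This is not the paper's SignalExtractor implementation; it assumes the face region's per-frame mean green-channel intensity has already been extracted (green typically carries the strongest pulse signal), and the function name `estimate_heart_rate_bpm` is illustrative. The dominant frequency in the physiologically plausible band is taken as the pulse rate:

```python
# Hedged sketch: heart-rate estimation from an rPPG intensity trace.
# Input is a 1-D series of mean green-channel intensities of the face
# region, sampled at `fps` frames per second (an assumption; the paper's
# actual pipeline may differ).
import numpy as np

def estimate_heart_rate_bpm(green_means, fps):
    """Return the dominant pulse frequency, in beats per minute."""
    signal = np.asarray(green_means, dtype=float)
    signal = signal - signal.mean()              # remove the DC offset
    # Power spectrum of the real-valued signal
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    power = np.abs(np.fft.rfft(signal)) ** 2
    # Restrict to a plausible heart-rate band: 0.7-4.0 Hz (42-240 bpm)
    band = (freqs >= 0.7) & (freqs <= 4.0)
    peak_freq = freqs[band][np.argmax(power[band])]
    return peak_freq * 60.0                      # Hz -> beats per minute
```

A longer frame window improves frequency resolution (resolution is fps/N Hz), which is one reason such systems buffer several seconds of video before reporting a reading.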
License

This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.