Multimodal Emotion Modelling for Personalized Recommendation Systems
Keywords:
Multimodal emotion modelling, personalised recommendation systems, affective computing, emotion-aware personalisation, generative recommendation

Abstract
Traditional personalised recommendation systems rely on historical user behaviour and static preference modelling, ignoring the role of emotional states in users' decision-making and engagement. This paper explores the integration of multimodal emotion modelling into a personalised recommendation system to enable emotionally adaptive interaction. The proposed system infers user affect from textual, speech, and visual emotion cues and feeds the results into a generative recommendation pipeline. A confidence-aware fusion mechanism increases robustness under changing input conditions, while emotion-conditioned content generation supports contextually consistent recommendations. The findings show improved emotional coherence, interaction adaptability, and perceived relevance compared with emotion-agnostic methods. This research offers system-level insights into how affect-aware personalisation can be applied in practice and how multimodal emotion modelling can support emotionally intelligent recommender systems.
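The abstract does not specify the confidence-aware fusion mechanism in detail; the sketch below illustrates one plausible scheme, in which each modality's emotion distribution is weighted by an assumed per-modality confidence score. All names, emotion classes, and weight values here are illustrative assumptions, not the authors' actual method.

```python
import numpy as np

def fuse_emotions(modality_probs, confidences):
    """Confidence-weighted fusion of per-modality emotion distributions.

    modality_probs: dict mapping modality name -> probability vector
                    over a shared set of emotion classes.
    confidences:    dict mapping modality name -> scalar in [0, 1];
                    low-confidence (e.g. noisy or missing) modalities
                    contribute proportionally less to the fused estimate.
    """
    total = sum(confidences[m] for m in modality_probs)
    if total == 0:
        # No usable modality: fall back to a uniform distribution.
        n = len(next(iter(modality_probs.values())))
        return np.full(n, 1.0 / n)
    fused = sum(confidences[m] * np.asarray(modality_probs[m], dtype=float)
                for m in modality_probs)
    return fused / total

# Hypothetical example over emotion classes [happy, sad, neutral]:
probs = {
    "text":   [0.7, 0.1, 0.2],
    "speech": [0.5, 0.2, 0.3],
    "visual": [0.2, 0.5, 0.3],
}
conf = {"text": 0.9, "speech": 0.6, "visual": 0.3}
fused = fuse_emotions(probs, conf)  # a valid probability vector
```

With these illustrative weights, the high-confidence text modality dominates, so the fused estimate leans towards "happy" even though the visual cue disagrees; this is the robustness-under-degraded-input behaviour the abstract attributes to confidence-aware fusion.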
License

This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.