Tasneem, F., Chakraborty, S., & Chy, A. N. (2023). An Early Synthesis of Deep Neural Networks to Identify Multimodal Informative Disaster Tweets. In Jaziar Radianti, Ioannis Dokas, Nicolas Lalone, & Deepak Khazanchi (Eds.), Proceedings of the 20th International ISCRAM Conference (pp. 428–438). Omaha, USA: University of Nebraska at Omaha.
Abstract: Twitter is a valuable channel for communication during disasters. It helps raise situational awareness and enables rapid disaster-response actions to alleviate suffering. However, the noisy nature of Twitter makes it difficult to distinguish relevant information from heterogeneous content. Extracting informative tweets is therefore an essential task for crisis intervention. Analyzing only the text or the image content of a tweet often misses insights that could be helpful during disasters. In this paper, we propose a multimodal framework to address the challenge of identifying informative crisis-related tweets containing both text and images. Our approach incorporates an early-fusion strategy combining BERT-LSTM and ResNet50 networks, which learns effectively from the joint representation of texts and images. Experiments and evaluation on the benchmark CrisisMMD dataset show that our fusion method surpasses the baseline by 7% and demonstrates its advantage over unimodal systems.
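To illustrate the early-fusion idea the abstract describes, here is a minimal sketch in which a text feature vector (as a BERT-LSTM encoder would produce) and an image feature vector (as ResNet50 pooling would produce) are concatenated into one joint representation before classification. The feature dimensions, random stand-in features, and the toy one-layer classifier are all assumptions for illustration; they are not the authors' actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed feature dimensions: 768 is typical for a BERT sentence embedding,
# 2048 for ResNet50's pooled output; the paper's exact sizes may differ.
TEXT_DIM, IMAGE_DIM = 768, 2048

def early_fusion(text_feat, image_feat):
    """Early fusion: concatenate the modality features into a single joint
    vector BEFORE any classification layer sees either modality alone."""
    return np.concatenate([text_feat, image_feat], axis=-1)

# Stand-in features for one tweet (real ones would come from the encoders).
text_feat = rng.standard_normal(TEXT_DIM)
image_feat = rng.standard_normal(IMAGE_DIM)

joint = early_fusion(text_feat, image_feat)

# Toy single dense layer + sigmoid standing in for the
# "informative vs. not informative" classification head.
W = rng.standard_normal(TEXT_DIM + IMAGE_DIM) * 0.01
prob_informative = 1.0 / (1.0 + np.exp(-(joint @ W)))

print(joint.shape)  # the joint vector has length TEXT_DIM + IMAGE_DIM
```

The key design point of early fusion is that the downstream classifier can learn cross-modal interactions directly, whereas late fusion would combine per-modality predictions only at the decision stage.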