1.1
1
xml
info:srw/schema/1/mods-v3.2
Using the Image-Text Relationship to Improve Multimodal Disaster Tweet Classification
Tiberiu Sosea
author
Iustin Sirbu
author
Cornelia Caragea
author
Doina Caragea
author
Traian Rebedea
author
2021
Virginia Tech
Blacksburg, VA (USA)
English
In this paper, we show that the text-image relationship of disaster tweets can be used to improve the classification of tweets from emergency situations. To this end, we introduce DisRel, a dataset of 4,600 multimodal tweets collected during the disasters that hit the USA in 2017 and manually annotated with coherence image-text relationships, such as Similar and Complementary. We explore multiple models for detecting these relationships and perform a comprehensive analysis of their robustness. Based on these models, we build a simple feature augmentation approach that leverages the text-image relationship. We evaluate our methods on two tasks in CrisisMMD, Humanitarian Categories and Damage Assessment, and observe an increase in the performance of the relationship-aware methods.
Multimodal disaster tweet classification
Image-text coherence relationship prediction
ViLBERT
tsosea2@uic.edu
exported from refbase (http://idl.iscram.org/show.php?record=2365), last updated on Tue, 13 Jul 2021 18:37:55 +0200
text
http://idl.iscram.org/files/tiberiusosea/2021/2365_TiberiuSosea_etal2021.pdf
TiberiuSosea_etal2021
ISCRAM 2021 Conference Proceedings – 18th International Conference on Information Systems for Crisis Response and Management
ISCRAM 2021
Anouck Adrot
editor
Rob Grace
editor
Kathleen Moore
editor
Christopher W. Zobel
editor
18th International Conference on Information Systems for Crisis Response and Management
2021
Virginia Tech
Blacksburg, VA (USA)
conference publication
691
704
978-1-949373-61-5
1