Hafiz Budi Firmansyah, Jesus Cerquides, & Jose Luis Fernandez-Marquez. (2022). Ensemble Learning for the Classification of Social Media Data in Disaster Response. In Rob Grace, & Hossein Baharmand (Eds.), ISCRAM 2022 Conference Proceedings – 19th International Conference on Information Systems for Crisis Response and Management (pp. 710–718). Tarbes, France.
Abstract: Social media generates large amounts of almost real-time data which has proven valuable in disaster response, especially for providing information within the first 48 hours after a disaster occurs. However, this potential is poorly exploited in operational environments due to the challenges of curating social media data. This work builds on the latest research on automatic classification of social media content, proposing the use of ensemble learning to help classify social media images for disaster response. Ensemble methods use multiple learning algorithms to obtain better predictive performance than could be obtained from any of the constituent learning algorithms alone. Experimental results show that ensemble learning is a valuable technology for the analysis of social media images for disaster response, and could potentially ease the integration of social media data within an operational environment.
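The ensemble idea the abstract describes can be sketched as a simple majority vote over the predictions of several classifiers; the labels and per-model outputs below are illustrative only and are not taken from the paper:

```python
from collections import Counter

def majority_vote(predictions):
    """Combine per-model label predictions for one item by majority vote."""
    return Counter(predictions).most_common(1)[0][0]

# Hypothetical per-model predictions for three social media images
# (labels such as "damage" / "no_damage" are illustrative only).
model_a = ["damage", "no_damage", "damage"]
model_b = ["damage", "damage", "damage"]
model_c = ["no_damage", "damage", "damage"]

ensemble = [majority_vote(p) for p in zip(model_a, model_b, model_c)]
print(ensemble)  # ['damage', 'damage', 'damage']
```

Voting is only one ensembling strategy; averaging class probabilities or stacking a meta-learner on top of the base models are common alternatives.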
|
|
Xukun Li, Doina Caragea, Cornelia Caragea, Muhammad Imran, & Ferda Ofli. (2019). Identifying Disaster Damage Images Using a Domain Adaptation Approach. In Z. Franco, J. J. González, & J. H. Canós (Eds.), Proceedings of the 16th International Conference on Information Systems for Crisis Response And Management. Valencia, Spain: ISCRAM.
Abstract: Approaches for effectively filtering useful situational awareness information posted by eyewitnesses of disasters, in real time, are greatly needed. While many studies have focused on filtering textual information, the research on filtering disaster images is more limited. In particular, there are no studies on the applicability of domain adaptation to filter images from an emergent target disaster when no labeled data is available for the target disaster. To fill this gap, we propose to apply a domain adaptation approach, called domain adversarial neural networks (DANN), to the task of identifying images that show damage. The DANN approach has VGG-19 as its backbone and uses adversarial training to find a transformation that makes the source and target data indistinguishable. Experimental results on several pairs of disasters suggest that the DANN model generally gives similar or better results compared to the VGG-19 model fine-tuned on the source labeled data.
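The adversarial training the abstract mentions hinges on a gradient reversal layer: the forward pass is the identity, but the gradient coming back from the domain classifier is negated so the feature extractor learns domain-indistinguishable features. A minimal plain-Python sketch of just that mechanism (function names and numbers are illustrative; the paper's model is a full neural network, not shown here):

```python
def grad_reverse_forward(x):
    # Identity in the forward pass: features flow unchanged
    # to the domain classifier.
    return x

def grad_reverse_backward(grad, lam=1.0):
    # Negate (and scale by lam) the gradient flowing back from the
    # domain classifier, so the feature extractor is pushed to make
    # source and target features indistinguishable.
    return [-lam * g for g in grad]

g = [0.5, -0.2]
print(grad_reverse_backward(g))  # [-0.5, 0.2]
```

In a framework such as PyTorch this is typically implemented as a custom autograd function sitting between the feature extractor and the domain classifier, while the label classifier receives the features directly.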