Giulio Palomba, Alessandro Farasin, & Claudio Rossi. (2020). Sentinel-1 Flood Delineation with Supervised Machine Learning. In Amanda Hughes, Fiona McNeill, & Christopher W. Zobel (Eds.), ISCRAM 2020 Conference Proceedings – 17th International Conference on Information Systems for Crisis Response and Management (pp. 1072–1083). Blacksburg, VA (USA): Virginia Tech.
Abstract: Floods are among the major natural hazards in terms of affected people and economic damage. Increasing and often uncontrolled urban sprawl, together with the effects of climate change, will make future floods more frequent and more damaging. Accurate flood mapping is of paramount importance for updating hazard and risk maps and for planning prevention measures. In this paper, we propose a supervised machine learning approach for flood delineation from satellite data. We train and evaluate the proposed algorithm using Sentinel-1 acquisitions and certified flood delineation maps produced by the Copernicus Emergency Management Service across different geographical regions in Europe, achieving improved performance compared with previously proposed supervised machine learning approaches for flood mapping.
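As a rough illustration of this kind of pipeline, the sketch below trains a generic per-pixel supervised classifier on dual-polarised Sentinel-1 backscatter against a rasterised Copernicus EMS flood mask. It is only a minimal example under assumed inputs: the file names, the VV/VH feature set, and the random-forest model are illustrative choices, not the authors' actual method.

    # Minimal per-pixel supervised flood classifier on Sentinel-1 backscatter.
    # File names, features, and model choice are assumptions for illustration.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import f1_score
    from sklearn.model_selection import train_test_split

    # Hypothetical co-registered inputs of shape (H, W): VV and VH backscatter
    # in dB, plus a binary flood mask rasterised from a CEMS delineation map.
    vv = np.load("s1_vv_db.npy")
    vh = np.load("s1_vh_db.npy")
    mask = np.load("cems_flood_mask.npy")

    # Per-pixel features: VV, VH, and their difference in dB (i.e. the ratio).
    features = np.stack([vv, vh, vv - vh], axis=-1).reshape(-1, 3)
    labels = mask.reshape(-1)

    X_train, X_test, y_train, y_test = train_test_split(
        features, labels, test_size=0.3, random_state=0, stratify=labels)
    clf = RandomForestClassifier(n_estimators=100, n_jobs=-1, random_state=0)
    clf.fit(X_train, y_train)
    print("F1 on held-out pixels:", f1_score(y_test, clf.predict(X_test)))

In practice, evaluating on spatially disjoint events or regions, as the paper does across different European areas, gives a more honest estimate than a random pixel split.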
Alessandro Farasin, Luca Colomba, Giulio Palomba, & Giovanni Nini. (2020). Supervised Burned Areas Delineation by Means of Sentinel-2 Imagery and Convolutional Neural Networks. In Amanda Hughes, Fiona McNeill, & Christopher W. Zobel (Eds.), ISCRAM 2020 Conference Proceedings – 17th International Conference on Information Systems for Crisis Response and Management (pp. 1060–1071). Blacksburg, VA (USA): Virginia Tech.
Abstract: Wildfire events increasingly threaten our lands, cities, and lives. To counter this phenomenon and limit its damage, governments around the globe are seeking appropriate counter-measures, identifying prevention and monitoring as two key factors for reducing the impact of wildfires worldwide. In this work, we propose two deep convolutional neural networks to automatically detect and delineate burned areas from satellite acquisitions, assessing their performance at scale using validated maps of burned areas from historical wildfires. We demonstrate that the proposed networks substantially improve burned area delineation accuracy over conventional methods.
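Purely as an assumed baseline, and not the two networks proposed in the paper, the sketch below shows how a small encoder-decoder CNN can map a multispectral Sentinel-2 patch to per-pixel burned-area logits; the band count, patch size, and layer widths are illustrative assumptions.

    # Tiny encoder-decoder segmentation CNN for burned-area delineation.
    # Architecture, band count, and patch size are assumptions, not the paper's nets.
    import torch
    import torch.nn as nn

    class TinySegNet(nn.Module):
        def __init__(self, in_bands: int = 12):
            super().__init__()
            self.encoder = nn.Sequential(
                nn.Conv2d(in_bands, 32, 3, padding=1), nn.ReLU(),
                nn.MaxPool2d(2),
                nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            )
            self.decoder = nn.Sequential(
                nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
                nn.Conv2d(64, 32, 3, padding=1), nn.ReLU(),
                nn.Conv2d(32, 1, 1),  # one burned/unburned logit per pixel
            )

        def forward(self, x):
            return self.decoder(self.encoder(x))

    # Example: one 128x128 patch with 12 spectral bands -> per-pixel logits.
    model = TinySegNet()
    patch = torch.randn(1, 12, 128, 128)
    logits = model(patch)  # shape (1, 1, 128, 128)
    loss = nn.BCEWithLogitsLoss()(logits, torch.zeros_like(logits))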
Xukun Li, & Doina Caragea. (2020). Improving Disaster-related Tweet Classification with a Multimodal Approach. In Amanda Hughes, Fiona McNeill, & Christopher W. Zobel (Eds.), ISCRAM 2020 Conference Proceedings – 17th International Conference on Information Systems for Crisis Response and Management (pp. 893–902). Blacksburg, VA (USA): Virginia Tech.
Abstract: Social media data analysis is important for disaster management. Many prior studies have focused on classifying a tweet based on either its text or its images independently, even when the tweet contains both. Under the assumption that text and images may carry complementary information, it is of interest to construct classifiers that make use of both modalities of a tweet. Towards this goal, we propose a multimodal classification model that aggregates text and image information. Our study aims to provide insights into the benefits obtained by combining text and images, and to understand which modality is more informative for disaster tweet classification. Experimental results show that both text and image classification can be improved by the multimodal approach.
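A minimal sketch of feature-level fusion, assuming pre-computed text and image embeddings (for example, pooled outputs of a sentence encoder and an image CNN); the embedding sizes and aggregation by concatenation are assumptions, not necessarily the aggregation used in the paper.

    # Feature-level fusion of text and image embeddings for tweet classification.
    # Embedding dimensions and the concatenation strategy are assumed.
    import torch
    import torch.nn as nn

    class FusionClassifier(nn.Module):
        def __init__(self, text_dim=768, image_dim=2048, n_classes=2):
            super().__init__()
            self.head = nn.Sequential(
                nn.Linear(text_dim + image_dim, 256), nn.ReLU(), nn.Dropout(0.3),
                nn.Linear(256, n_classes),
            )

        def forward(self, text_emb, image_emb):
            # Concatenate the two modality embeddings and classify jointly.
            return self.head(torch.cat([text_emb, image_emb], dim=-1))

    model = FusionClassifier()
    text_emb = torch.randn(4, 768)    # e.g. pooled text features for 4 tweets
    image_emb = torch.randn(4, 2048)  # e.g. CNN features for attached images
    logits = model(text_emb, image_emb)  # shape (4, 2)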
Jeremy Diaz, Lise St. Denis, Maxwell B. Joseph, Kylen Solvik, & Jennifer K. Balch. (2020). Classifying Twitter Users for Disaster Response: A Highly Multimodal or Simple Approach? In Amanda Hughes, Fiona McNeill, & Christopher W. Zobel (Eds.), ISCRAM 2020 Conference Proceedings – 17th International Conference on Information Systems for Crisis Response and Management (pp. 774–789). Blacksburg, VA (USA): Virginia Tech.
Abstract: We report on the development of a classifier to identify Twitter users contributing first-hand information during a disaster. Identifying such users helps social media monitoring teams surface critical information that might otherwise slip through the cracks. A parallel study (St. Denis et al., 2020) demonstrates that Twitter user filtering creates an information-rich stream of content, but the best way to approach this task remains unexplored. A user's profile contains many different “modalities” of data, including numbers, text, and images. To integrate these different data types, we constructed a multimodal neural network that combines the loss functions of all modalities, and we compared the results to many individual unimodal models and a decision-level fusion approach. Analysis of the results suggests that unimodal models acting on Twitter users' recent tweets are sufficient for accurate classification. We demonstrate promising classification of Twitter users for crisis response with methods that are (1) easy to implement and (2) quick to both optimize and run at inference time.
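For context, the decision-level fusion baseline mentioned above can be sketched as a simple (optionally weighted) average of per-modality class probabilities; the modality names and the numbers below are hypothetical.

    # Decision-level (late) fusion: average class probabilities from unimodal models.
    # Modalities and probabilities shown here are placeholders, not the paper's data.
    import numpy as np

    def decision_level_fusion(prob_list, weights=None):
        """Average per-modality class probabilities, optionally weighted."""
        probs = np.stack(prob_list, axis=0)  # (n_modalities, n_users, n_classes)
        weights = np.ones(len(prob_list)) if weights is None else np.asarray(weights)
        weights = weights / weights.sum()
        return np.tensordot(weights, probs, axes=1)  # (n_users, n_classes)

    # Hypothetical unimodal outputs for 3 users (recent tweets, profile text, profile image).
    p_tweets = np.array([[0.9, 0.1], [0.3, 0.7], [0.6, 0.4]])
    p_profile = np.array([[0.8, 0.2], [0.4, 0.6], [0.5, 0.5]])
    p_image = np.array([[0.7, 0.3], [0.5, 0.5], [0.4, 0.6]])
    fused = decision_level_fusion([p_tweets, p_profile, p_image])
    prediction = fused.argmax(axis=1)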
Ferda Ofli, Firoj Alam, & Muhammad Imran. (2020). Analysis of Social Media Data using Multimodal Deep Learning for Disaster Response. In Amanda Hughes, Fiona McNeill, & Christopher W. Zobel (Eds.), ISCRAM 2020 Conference Proceedings – 17th International Conference on Information Systems for Crisis Response and Management (pp. 802–811). Blacksburg, VA (USA): Virginia Tech.
Abstract: Multimedia content in social media platforms provides significant information during disaster events. The types of information shared include reports of injured or deceased people, infrastructure damage, and missing or found people, among others. Although many studies have shown the usefulness of both text and image content for disaster response purposes, past research has mostly focused on analyzing only the text modality. In this paper, we propose to use both text and image modalities of social media data to learn a joint representation using state-of-the-art deep learning techniques. Specifically, we utilize convolutional neural networks to define a multimodal deep learning architecture with a modality-agnostic shared representation. Extensive experiments on real-world disaster datasets show that the proposed multimodal architecture yields better performance than models trained using a single modality (e.g., either text or image).
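A minimal sketch of a modality-agnostic shared representation, under assumed embedding sizes: each modality is projected into a common space and a single head classifies from that space, whether one or both modalities are present. This is an illustration of the general idea, not the authors' architecture.

    # Project each modality into a shared space; one head serves all modalities.
    # Dimensions, fusion by averaging, and class count are assumptions.
    import torch
    import torch.nn as nn

    class SharedRepresentationNet(nn.Module):
        def __init__(self, text_dim=768, image_dim=2048, shared_dim=512, n_classes=3):
            super().__init__()
            self.text_proj = nn.Sequential(nn.Linear(text_dim, shared_dim), nn.ReLU())
            self.image_proj = nn.Sequential(nn.Linear(image_dim, shared_dim), nn.ReLU())
            # The classification head only ever sees shared_dim features,
            # regardless of which modality produced them.
            self.head = nn.Linear(shared_dim, n_classes)

        def forward(self, text_emb=None, image_emb=None):
            parts = []
            if text_emb is not None:
                parts.append(self.text_proj(text_emb))
            if image_emb is not None:
                parts.append(self.image_proj(image_emb))
            shared = torch.stack(parts, dim=0).mean(dim=0)  # fuse in shared space
            return self.head(shared)

    model = SharedRepresentationNet()
    logits_both = model(torch.randn(2, 768), torch.randn(2, 2048))
    logits_text_only = model(text_emb=torch.randn(2, 768))  # same head, one modality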