Alan Aipe, Asif Ekbal, Mukuntha NS, & Sadao Kurohashi. (2018). Linguistic Feature Assisted Deep Learning Approach towards Multi-label Classification of Crisis Related Tweets. In Kees Boersma, & Brian Tomaszewski (Eds.), ISCRAM 2018 Conference Proceedings – 15th International Conference on Information Systems for Crisis Response and Management (pp. 705–717). Rochester, NY (USA): Rochester Institute of Technology.
Abstract: Micro-blogging sites like Twitter have, over the last decade, evolved into proactive communication channels during mass convergence and emergency events, especially in crisis-stricken scenarios. Extracting multiple levels of information from the overwhelming amount of social media data generated during such situations remains a great challenge for disaster-affected communities and professional emergency responders. These valuable data, segregated into different informative categories, can be leveraged by government agencies, humanitarian communities, and citizens alike to bring about faster response in areas of need. In this paper, we address the above scenario by developing a deep Convolutional Neural Network (CNN) for multi-label classification of crisis-related tweets. We augment the deep CNN with several linguistic features extracted from tweets and investigate their usage in classification. Evaluation on a benchmark dataset shows that our proposed approach attains state-of-the-art performance.
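As a rough illustration of the approach this abstract describes (not the authors' code), hand-crafted linguistic cues can be extracted from a tweet and combined with learned features, while multi-label classification assigns every category whose sigmoid score clears a threshold. The specific feature names and the threshold below are assumptions for the sketch.

```python
import math
import re

def linguistic_features(tweet: str) -> list:
    """Illustrative linguistic cues extracted from a raw tweet."""
    tokens = tweet.split()
    return [
        sum(t.startswith("#") for t in tokens),          # hashtag count
        sum(t.startswith("@") for t in tokens),          # mention count
        1.0 if re.search(r"https?://", tweet) else 0.0,  # contains a URL
        float(len(tokens)),                              # length in tokens
    ]

def sigmoid(x: float) -> float:
    return 1.0 / (1.0 + math.exp(-x))

def multilabel_predict(scores, labels, threshold=0.5):
    """Multi-label decision: each crisis category gets an independent
    sigmoid score; every category above the threshold is assigned."""
    return [lab for s, lab in zip(scores, labels) if sigmoid(s) >= threshold]
```

In a real model the per-category scores would come from the CNN's output layer; here they are passed in directly so the decision rule is visible in isolation.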
Alessandro Farasin, Luca Colomba, Giulio Palomba, & Giovanni Nini. (2020). Supervised Burned Areas Delineation by Means of Sentinel-2 Imagery and Convolutional Neural Networks. In Amanda Hughes, Fiona McNeill, & Christopher W. Zobel (Eds.), ISCRAM 2020 Conference Proceedings – 17th International Conference on Information Systems for Crisis Response and Management (pp. 1060–1071). Blacksburg, VA (USA): Virginia Tech.
Abstract: Wildfire events are increasingly threatening our lands, cities, and lives. To counter this phenomenon and limit its damage, governments around the globe are trying to find proper counter-measures, identifying prevention and monitoring as two key factors to reduce the impact of wildfires worldwide. In this work, we propose two deep convolutional neural networks to automatically detect and delineate burned areas from satellite acquisitions, assessing their performance at scale using validated maps of burned areas from historical wildfires. We demonstrate that the proposed networks substantially improve burned area delineation accuracy over conventional methods.
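Burned-area delineation of the kind described above is typically scored per pixel; a standard metric is intersection-over-union (IoU) between the predicted mask and the validated burned-area map. A minimal sketch with binary masks as nested lists (the paper's exact evaluation protocol may differ):

```python
def iou(pred, truth):
    """Intersection-over-union between two binary pixel masks of equal
    shape, given as nested lists of 0/1 values."""
    inter = union = 0
    for prow, trow in zip(pred, truth):
        for p, t in zip(prow, trow):
            inter += p and t   # pixel burned in both masks
            union += p or t    # pixel burned in at least one mask
    # Two all-zero masks agree perfectly by convention.
    return inter / union if union else 1.0
```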
Dipak Singh, Shayan Shams, Joohyun Kim, Seung-jong Park, & Seungwon Yang. (2020). Fighting for Information Credibility: An End-to-End Framework to Identify Fake News during Natural Disasters. In Amanda Hughes, Fiona McNeill, & Christopher W. Zobel (Eds.), ISCRAM 2020 Conference Proceedings – 17th International Conference on Information Systems for Crisis Response and Management (pp. 90–99). Blacksburg, VA (USA): Virginia Tech.
Abstract: Fast-spreading fake news has become an epidemic in the post-truth world of politics, the stock market, or even during natural disasters. A large amount of unverified information may reach a vast audience quickly via social media. The effect of misinformation (false) and disinformation (deliberately false) is more severe during the critical time of natural disasters such as flooding, hurricanes, or earthquakes. This can lead to disruptions in rescue missions and recovery activities, costing human lives and delaying the time needed for affected communities to return to normal. In this paper, we designed a comprehensive framework which is capable of developing a training set and training a deep learning model for detecting fake news events occurring during disasters. Our proposed framework includes infrastructure to collect Twitter posts which spread false information. In our model implementation, we utilized the Transfer Learning scheme to transfer knowledge gained from a large and general fake news dataset to relatively smaller fake news events occurring during disasters as a means of overcoming the limited size of our training dataset. Our detection model was able to achieve an accuracy of 91.47% and an F1 score of 90.89 when trained with the first 28 hours of Twitter data. Our vision for this study is to help emergency managers during disaster response with our framework so that they may perform their rescue and recovery actions effectively and efficiently without being distracted by false information.
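The transfer-learning scheme mentioned in the abstract can be caricatured as follows (a sketch under assumptions, not the authors' implementation): a model pretrained on a large general fake-news corpus donates its feature extractor, which is kept frozen, and only a small classification head is re-fitted on the scarce disaster-specific examples.

```python
def transfer(pretrained, disaster_data, fit_head):
    """Reuse a pretrained encoder (frozen) and train only a new head
    on the small disaster-specific dataset of (input, label) pairs."""
    encoder = pretrained["encoder"]              # reused as-is, not retrained
    features = [encoder(x) for x, _ in disaster_data]
    labels = [y for _, y in disaster_data]
    head = fit_head(features, labels)            # only this part is fitted
    return lambda x: head(encoder(x))            # full transferred model
```

In practice the encoder would be the convolutional or transformer backbone of the general-domain model and `fit_head` a gradient-based optimizer; here both are pluggable callables so the division of labour is explicit.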
Ferda Ofli, Firoj Alam, & Muhammad Imran. (2020). Analysis of Social Media Data using Multimodal Deep Learning for Disaster Response. In Amanda Hughes, Fiona McNeill, & Christopher W. Zobel (Eds.), ISCRAM 2020 Conference Proceedings – 17th International Conference on Information Systems for Crisis Response and Management (pp. 802–811). Blacksburg, VA (USA): Virginia Tech.
Abstract: Multimedia content in social media platforms provides significant information during disaster events. The types of information shared include reports of injured or deceased people, infrastructure damage, and missing or found people, among others. Although many studies have shown the usefulness of both text and image content for disaster response purposes, the research has been mostly focused on analyzing only the text modality in the past. In this paper, we propose to use both text and image modalities of social media data to learn a joint representation using state-of-the-art deep learning techniques. Specifically, we utilize convolutional neural networks to define a multimodal deep learning architecture with a modality-agnostic shared representation. Extensive experiments on real-world disaster datasets show that the proposed multimodal architecture yields better performance than models trained using a single modality (e.g., either text or image).
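The modality-agnostic shared representation described above can be sketched in miniature (an illustration, not the authors' architecture): each modality is first mapped to a feature vector, the vectors are concatenated, and a shared layer produces the joint representation. The dimensions and the omission of biases are simplifying assumptions.

```python
def fuse(text_vec, image_vec, shared_weights):
    """Concatenate per-modality feature vectors and apply one shared
    linear layer (one weight row per output dimension, no bias)."""
    joint = text_vec + image_vec  # list concatenation = feature concat
    return [sum(w * x for w, x in zip(row, joint)) for row in shared_weights]
```

In the full model, `text_vec` and `image_vec` would come from a text CNN and an image CNN respectively, and a classifier head would sit on top of the fused vector.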
Giulio Palomba, Alessandro Farasin, & Claudio Rossi. (2020). Sentinel-1 Flood Delineation with Supervised Machine Learning. In Amanda Hughes, Fiona McNeill, & Christopher W. Zobel (Eds.), ISCRAM 2020 Conference Proceedings – 17th International Conference on Information Systems for Crisis Response and Management (pp. 1072–1083). Blacksburg, VA (USA): Virginia Tech.
Abstract: Floods are one of the major natural hazards in terms of affected people and economic damage. Increasing and often uncontrolled urban sprawl, together with climate change effects, will make future floods more frequent and more impactful. Accurate flood mapping is of paramount importance in order to update hazard and risk maps and to plan prevention measures. In this paper, we propose the use of a supervised machine learning approach for flood delineation from satellite data. We train and evaluate the proposed algorithm using Sentinel-1 acquisitions and certified flood delineation maps produced by the Copernicus Emergency Management Service across different geographical regions in Europe, achieving improved performance over previously proposed supervised machine learning approaches for flood mapping.
Gkika, I., Pattas, D., Konstantoudakis, K., & Zarpalas, D. (2023). Object detection and augmented reality annotations for increased situational awareness in light smoke conditions. In Jaziar Radianti, Ioannis Dokas, Nicolas Lalone, & Deepak Khazanchi (Eds.), Proceedings of the 20th International ISCRAM Conference (pp. 231–241). Omaha, USA: University of Nebraska at Omaha.
Abstract: Innovative technologies powered by Computer Vision algorithms can aid first responders, increasing their situational awareness. However, adverse conditions, such as smoke, can reduce the efficacy of such algorithms by degrading the input images. This paper presents a pipeline of image de-smoking, object detection, and augmented reality display that aims to enhance situational awareness in smoky conditions. A novel smoke-reducing deep learning algorithm is applied as a preprocessing step, before state-of-the-art object detection. The detected objects and persons are highlighted in the user’s augmented reality display. The proposed method is shown to increase detection accuracy and confidence. Testing in realistic environments provides an initial evaluation of the method, both in terms of image processing and of usefulness to first responders.
Grégoire Burel, & Harith Alani. (2018). Crisis Event Extraction Service (CREES) – Automatic Detection and Classification of Crisis-related Content on Social Media. In Kees Boersma, & Brian Tomaszewski (Eds.), ISCRAM 2018 Conference Proceedings – 15th International Conference on Information Systems for Crisis Response and Management (pp. 597–608). Rochester, NY (USA): Rochester Institute of Technology.
Abstract: Social media posts tend to provide valuable reports during crises. However, this information can be hidden in large amounts of unrelated documents. Providing tools that automatically identify relevant posts, event types (e.g., hurricane, floods, etc.) and information categories (e.g., reports on affected individuals, donations and volunteering, etc.) in social media posts is vital for their efficient handling and consumption. We introduce the Crisis Event Extraction Service (CREES), an open-source web API that automatically classifies posts during crisis situations. The API provides annotations for crisis-related documents, event types and information categories through an easily deployable and accessible web API that can be integrated into multiple platforms and tools. The annotation service is backed by Convolutional Neural Networks (CNNs) and validated against traditional machine learning models. Results show that the CNN-based API can be relied upon when dealing with specific crises, with the added benefits associated with the use of word embeddings.
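A service like the one described would wrap its three classifiers behind an API that returns one annotation object per post. The payload below is a hypothetical sketch; the field names are assumptions, not the real CREES schema.

```python
def annotate(post, related_clf, event_clf, info_clf):
    """Run the three classifiers on one post and assemble the kind of
    annotation record a crisis-annotation API might return as JSON."""
    return {
        "text": post,
        "crisis_related": related_clf(post),  # is the post crisis-related?
        "event_type": event_clf(post),        # e.g. "hurricane", "flood"
        "info_category": info_clf(post),      # e.g. "donations", "caution"
    }
```

In the deployed service each `*_clf` would be a trained CNN; here they are plain callables so the API shape can be shown without a model.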
Jeremy Diaz, Lise St. Denis, Maxwell B. Joseph, Kylen Solvik, & Jennifer K. Balch. (2020). Classifying Twitter Users for Disaster Response: A Highly Multimodal or Simple Approach? In Amanda Hughes, Fiona McNeill, & Christopher W. Zobel (Eds.), ISCRAM 2020 Conference Proceedings – 17th International Conference on Information Systems for Crisis Response and Management (pp. 774–789). Blacksburg, VA (USA): Virginia Tech.
Abstract: We report on the development of a classifier to identify Twitter users contributing first-hand information during a disaster. Identifying such users helps social media monitoring teams identify critical information that might otherwise slip through the cracks. A parallel study (St. Denis et al., 2020) demonstrates that Twitter user filtering creates an information-rich stream of content, but the best way to approach this task is unexplored. A user's profile contains many different “modalities” of data, including numbers, text, and images. To integrate these different data types, we constructed a multimodal neural network that combines the loss function of all modalities, and we compared the results to many individual unimodal models and a decision-level fusion approach. Analysis of the results suggests that unimodal models acting on Twitter users' recent tweets are sufficient for accurate classification. We demonstrate promising classification of Twitter users for crisis response with methods that are (1) easy to implement and (2) quick to both optimize and infer.
Long, Z., McCreadie, R., & Imran, M. (2023). CrisisViT: A Robust Vision Transformer for Crisis Image Classification. In Jaziar Radianti, Ioannis Dokas, Nicolas Lalone, & Deepak Khazanchi (Eds.), Proceedings of the 20th International ISCRAM Conference (pp. 309–319). Omaha, USA: University of Nebraska at Omaha.
Abstract: In times of emergency, crisis response agencies need to quickly and accurately assess the situation on the ground in order to deploy relevant services and resources. However, authorities often have to make decisions based on limited information, as data on affected regions can be scarce until local response services can provide first-hand reports. Fortunately, the widespread availability of smartphones with high-quality cameras has made citizen journalism through social media a valuable source of information for crisis responders. However, analyzing the large volume of images posted by citizens requires more time and effort than is typically available. To address this issue, this paper proposes the use of state-of-the-art deep neural models for automatic image classification/tagging, specifically by adapting transformer-based architectures for crisis image classification (CrisisViT). We leverage the new Incidents1M crisis image dataset to develop a range of new transformer-based image classification models. Through experimentation over the standard Crisis image benchmark dataset, we demonstrate that the CrisisViT models significantly outperform previous approaches in emergency type, image relevance, humanitarian category, and damage severity classification. Additionally, we show that the new Incidents1M dataset can further augment the CrisisViT models resulting in an additional 1.25% absolute accuracy gain.
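Transformer-based image models like the ones adapted here operate on image patches rather than whole images: the input is cut into fixed-size patches, each flattened into a token before linear projection and self-attention. A minimal patching sketch for a single-channel H x W image stored as nested lists (the real models use multi-channel tensors and a learned projection):

```python
def to_patches(image, patch):
    """Split a 2-D image (nested lists) into non-overlapping patch x patch
    squares, each flattened row-major into one token vector."""
    h, w = len(image), len(image[0])
    patches = []
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            patches.append([image[i + di][j + dj]
                            for di in range(patch) for dj in range(patch)])
    return patches
```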
Mirko Zaffaroni, & Claudio Rossi. (2020). Water Segmentation with Deep Learning Models for Flood Detection and Monitoring. In Amanda Hughes, Fiona McNeill, & Christopher W. Zobel (Eds.), ISCRAM 2020 Conference Proceedings – 17th International Conference on Information Systems for Crisis Response and Management (pp. 66–74). Blacksburg, VA (USA): Virginia Tech.
Abstract: Flooding is a natural hazard that causes many deaths every year, and the number of flood events is increasing worldwide because of climate change effects. Detecting and monitoring floods is of paramount importance in order to reduce their impact, both in terms of affected people and economic losses. Automated image analysis techniques capable of extracting the amount of water in a picture can be used to create novel services aimed at detecting floods from fixed surveillance cameras, drones, and crowdsourced in-field observations, as well as to extract meaningful data from social media streams. In this work we compare the accuracy and prediction performance of recent Deep Learning algorithms for the pixel-wise water segmentation task. Moreover, we release a new dataset that enhances well-known benchmark datasets used for multi-class segmentation with specific flood-related images taken from drones, in-field observations, and social media.
Nilani Algiriyage, Rangana Sampath, Raj Prasanna, Kristin Stock, Emma Hudson-Doyle, & David Johnston. (2021). Identifying Disaster-related Tweets: A Large-Scale Detection Model Comparison. In Anouck Adrot, Rob Grace, Kathleen Moore, & Christopher W. Zobel (Eds.), ISCRAM 2021 Conference Proceedings – 18th International Conference on Information Systems for Crisis Response and Management (pp. 731–743). Blacksburg, VA (USA): Virginia Tech.
Abstract: Social media applications such as Twitter and Facebook are fast becoming a key instrument for gaining situational awareness (understanding the bigger picture of the situation) during disasters. This provides multiple opportunities to gather relevant information in a timely manner to improve disaster response. In recent years, identifying crisis-related social media posts has been treated as an automatic classification task using machine learning (ML) or deep learning (DL) techniques. However, such supervised learning algorithms require labelled training data in the early hours of a crisis. Recently, multiple manually labelled, open-source disaster-related Twitter datasets have been released. In this work, we create a large dataset of 186,718 tweets by combining a number of such datasets and evaluate the performance of multiple ML and DL algorithms in classifying disaster-related tweets in three settings, namely "in-disaster", "out-disaster" and "cross-disaster". Our results show that the Bidirectional LSTM model with Word2Vec embeddings performs well for the tweet classification task in all three settings. We also make the preprocessing steps and trained weights available for future research.
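The "cross-disaster" setting described above trains on tweets from all events except one and tests on the held-out event. A sketch of that leave-one-disaster-out split over (tweet, event) pairs; the data layout is an assumption for illustration:

```python
def cross_disaster_splits(dataset):
    """Yield (held_out_event, train_tweets, test_tweets) triples, one per
    event: train on every other disaster, test on the held-out one."""
    events = sorted({event for _, event in dataset})
    for held_out in events:
        train = [t for t, e in dataset if e != held_out]
        test = [t for t, e in dataset if e == held_out]
        yield held_out, train, test
```

The "in-disaster" setting would instead split within a single event, and "out-disaster" would fix one train/test partition across event types.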
Xiaojing Guo, Xinzhi Wang, Luyao Kou, & Hui Zhang. (2021). A Question Answering System Applied to Disasters. In Anouck Adrot, Rob Grace, Kathleen Moore, & Christopher W. Zobel (Eds.), ISCRAM 2021 Conference Proceedings – 18th International Conference on Information Systems for Crisis Response and Management (pp. 2–16). Blacksburg, VA (USA): Virginia Tech.
Abstract: In emergency management, identifying disaster information accurately and promptly from numerous documents such as news articles, announcements, and reports is important for decision makers to accomplish their missions efficiently. This paper studies the application of a question answering system, which automatically locates answers in documents through natural language processing, to improve the efficiency and accuracy of disaster knowledge extraction. First, an improved question answering model was constructed based on the advantages of existing neural network models. Second, an English question answering dataset pertinent to disasters and a Chinese question answering dataset were constructed. Finally, the improved neural network model was trained on these datasets and tested by calculating the F1 and EM scores, which indicated that higher question answering accuracy was achieved. The improved system has a deeper understanding of semantic information and can be used to construct a disaster knowledge graph.
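The F1 and EM scores mentioned above are the standard extractive-QA metrics: EM checks exact (normalized) string match between predicted and gold answers, while F1 measures token overlap. A minimal sketch of both, using only lowercasing as normalization:

```python
def exact_match(pred: str, gold: str) -> int:
    """1 if the answers match exactly after trivial normalization."""
    return int(pred.strip().lower() == gold.strip().lower())

def token_f1(pred: str, gold: str) -> float:
    """Harmonic mean of token-level precision and recall."""
    p, g = pred.lower().split(), gold.lower().split()
    common = sum(min(p.count(t), g.count(t)) for t in set(p))
    if common == 0:
        return 0.0
    precision, recall = common / len(p), common / len(g)
    return 2 * precision * recall / (precision + recall)
```

Full evaluation scripts typically also strip punctuation and articles before comparing; that normalization is omitted here for brevity.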
Xukun Li, & Doina Caragea. (2020). Improving Disaster-related Tweet Classification with a Multimodal Approach. In Amanda Hughes, Fiona McNeill, & Christopher W. Zobel (Eds.), ISCRAM 2020 Conference Proceedings – 17th International Conference on Information Systems for Crisis Response and Management (pp. 893–902). Blacksburg, VA (USA): Virginia Tech.
Abstract: Social media data analysis is important for disaster management. Many prior studies have focused on classifying a tweet based on its text or on its images independently, even when the tweet contains both text and images. Under the assumption that text and images may contain complementary information, it is of interest to construct classifiers that make use of both modalities of the tweet. Towards this goal, we propose a multimodal classification model which aggregates text and image information. Our study aims to provide insights into the benefits obtained by combining text and images, and to understand which modality is more informative with respect to disaster tweet classification. Experimental results show that both text and image classification can be improved by the multimodal approach.
Yingjie Li, Seoyeon Park, Cornelia Caragea, Doina Caragea, & Andrea Tapia. (2019). Sympathy Detection in Disaster Twitter Data. In Z. Franco, J. J. González, & J. H. Canós (Eds.), Proceedings of the 16th International Conference on Information Systems for Crisis Response And Management. Valencia, Spain: ISCRAM.
Abstract: Nowadays, micro-blogging sites such as Twitter have become powerful tools for communicating with others in various situations. Especially in disaster events, these sites can be the best platforms for seeking or providing social support, of which informational support and emotional support are the most important types. Sympathy, a sub-type of emotional support, is an expression of one's compassion or sorrow for a difficult situation that another person is facing. Providing sympathy to people affected by a disaster can help change people's emotional states from negative to positive emotions, and hence, help them feel better. Moreover, detecting sympathy content on Twitter can potentially be used for finding candidate donors, since the emotion "sympathy" is closely related to people who may be willing to donate. Thus, in this paper, as a starting point, we focus on detecting sympathy-related tweets. We address this task using Convolutional Neural Networks (CNNs) with refined word embeddings. Specifically, we propose a refined word embedding technique in terms of various pre-trained word vector models and show the strong performance of CNNs that use these refined embeddings in the sympathy tweet classification task. We also report experimental results showing that the CNNs with the refined word embeddings outperform not only traditional machine learning techniques, such as Naïve Bayes, Support Vector Machines and AdaBoost with conventional feature sets such as bags of words, but also Long Short-Term Memory Networks.
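The paper's exact refinement procedure is its own; as a loudly labelled assumption, the sketch below shows just one simple way several pre-trained word-vector models can be combined per word: average the (equal-length) vectors a word has in each model that contains it.

```python
def combine_embeddings(word, models):
    """Average a word's vectors across the pre-trained models (dicts of
    word -> vector) that contain it; None if no model knows the word."""
    vecs = [m[word] for m in models if word in m]
    if not vecs:
        return None
    dim = len(vecs[0])
    return [sum(v[i] for v in vecs) / len(vecs) for i in range(dim)]
```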
Zijun Long, & Richard McCreadie. (2021). Automated Crisis Content Categorization for COVID-19 Tweet Streams. In Anouck Adrot, Rob Grace, Kathleen Moore, & Christopher W. Zobel (Eds.), ISCRAM 2021 Conference Proceedings – 18th International Conference on Information Systems for Crisis Response and Management (pp. 667–678). Blacksburg, VA (USA): Virginia Tech.
Abstract: Social media platforms, like Twitter, are increasingly used by billions of people internationally to share information. As such, these platforms contain vast volumes of real-time multimedia content about the world, which could be invaluable for a range of tasks such as incident tracking, damage estimation during disasters, insurance risk estimation, and more. By mining this real-time data, there are substantial economic benefits, as well as opportunities to save lives. Currently, the COVID-19 pandemic is attacking societies at an unprecedented speed and scale, forming an important use-case for social media analysis. However, the amount of information during such crisis events is vast, and information normally exists in unstructured and multiple formats, making manual analysis very time consuming. Hence, in this paper, we examine how to extract valuable information from tweets related to COVID-19 automatically. For 12 geographical locations, we experiment with supervised approaches for labelling tweets into 7 crisis categories, as well as investigate automatic priority estimation, using both classical and deep learning approaches. Through evaluation using the TREC-IS 2020 COVID-19 datasets, we demonstrate that effective automatic labelling for this task is possible, with an average of 61% F1 performance across crisis categories, while also analysing key factors that affect model performance and model generalizability across locations.
Zijun Long, & Richard McCreadie. (2022). Is Multi-Modal Data Key for Crisis Content Categorization on Social Media? In Rob Grace, & Hossein Baharmand (Eds.), ISCRAM 2022 Conference Proceedings – 19th International Conference on Information Systems for Crisis Response and Management (pp. 1068–1080). Tarbes, France.
Abstract: The user base of social media platforms, like Twitter, has grown dramatically around the world over the last decade. As people post everything they experience on social media, large volumes of valuable multimedia content are being recorded online, which can be analysed to help with a range of tasks; here we focus specifically on crisis response. The majority of prior works in this space focus on using machine learning to categorize single-modality content (e.g. the text of posts, or images shared), with few works jointly utilizing multiple modalities. Hence, in this paper, we examine to what extent integrating multiple modalities is important for crisis content categorization. In particular, we design a pipeline for multi-modal learning that fuses textual and visual inputs, leverages both, and then classifies that content based on the specified task. Through evaluation using the CrisisMMD dataset, we demonstrate that effective automatic labelling for this task is possible, with an average of 88.31% F1 performance across two significant tasks (relevance and humanitarian category classification), while also analysing cases where unimodal and multi-modal models succeed and fail.