Wang, R., & Li, N. (2023). Revealing social disparities under natural disasters using large-scale mobility data: A dynamic accessibility perspective. In Jaziar Radianti, Ioannis Dokas, Nicolas Lalone, & Deepak Khazanchi (Eds.), Proceedings of the 20th International ISCRAM Conference (pp. 797–807). Omaha, USA: University of Nebraska at Omaha.
Abstract: Accessibility is an essential indicator for measuring the functions and equity of urban services, and it can be harnessed to provide insights into social disparities in urban residents’ interactions with urban services. In this study, we attempt to measure urban residents’ patterns of accessibility to urban services during natural disasters using an improved gravity-model method. First, by analyzing human digital-trace data from the Wilmington metropolitan area over three months, we assessed residents’ accessibility to grocery stores and restaurants before, during, and after Hurricane Florence, and captured the diverse trends in residents’ responses to the hurricane. We then identified and statistically tested the social disparities in residents’ accessibility behaviors in response to the hurricane. The findings may offer city planners and policymakers new insights for evaluating the equity of resource accessibility, allocating resources among different communities, and improving community resilience against natural disasters.
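The study above rests on a gravity-style accessibility measure; as a rough illustration only (not the paper’s improved model), the minimal Python sketch below scores one home location against a set of nearby services, with the supply values, distances, and decay parameter all assumed placeholders.

    # Minimal sketch of a generic gravity-model accessibility score: the
    # accessibility of a home location i is the sum over services j of the
    # service "attractiveness" S_j discounted by a distance-decay function.
    # The paper's improved gravity model and its mobility-data inputs are not
    # reproduced here; supply, distances, and beta are illustrative only.
    import math

    def gravity_accessibility(supply, distances, beta=0.1):
        """supply[j]    : attractiveness of service j (e.g., visit capacity)
           distances[j] : travel cost from the home location to service j
           beta         : distance-decay parameter (assumed exponential decay)"""
        return sum(s * math.exp(-beta * d) for s, d in zip(supply, distances))

    # Example: three grocery stores at 2, 5, and 12 km from a census block group.
    print(gravity_accessibility(supply=[30, 45, 80], distances=[2.0, 5.0, 12.0]))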
|
|
Gaëtan Caillaut, Cécile Gracianne, Nathalie Abadie, Guillaume Touya, & Samuel Auclair. (2022). Automated Construction of a French Entity Linking Dataset to Geolocate Social Network Posts in the Context of Natural Disasters. In Rob Grace, & Hossein Baharmand (Eds.), ISCRAM 2022 Conference Proceedings – 19th International Conference on Information Systems for Crisis Response and Management (pp. 654–663). Tarbes, France.
Abstract: During natural disasters, automatic information extraction from Twitter posts is a valuable way to get a better overview of the situation in the field. This information has to be geolocated to support effective actions, but for the vast majority of tweets, spatial information has to be extracted from the text content itself. Despite the remarkable advances in the Natural Language Processing field, this task is still challenging for current state-of-the-art models because they are not necessarily trained on Twitter data and because high-quality annotated data are still lacking for low-resource languages. This research in progress addresses this gap by describing an analytic pipeline able to automatically extract geolocatable entities from texts and annotate them by aligning them with the entities present in Wikipedia/Wikidata resources. As preliminary results, we present a new dataset for Entity Linking on French texts and discuss research perspectives for improving on current state-of-the-art models for this task.
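As a rough illustration of the alignment step sketched in the abstract, the minimal Python example below links extracted place-name mentions to Wikidata identifiers via a label lookup; the hard-coded gazetteer, the mentions, and the QIDs are illustrative stand-ins for the NER model and the full Wikipedia/Wikidata resources used in the actual pipeline.

    # Minimal sketch of the entity-alignment idea: link place-name mentions
    # extracted from a (French) post to Wikidata identifiers by label lookup.
    # The gazetteer and the QIDs below are illustrative placeholders; the real
    # pipeline extracts mentions with an NER model and aligns them against
    # Wikipedia/Wikidata resources rather than a hard-coded dictionary.
    from typing import Optional

    GAZETTEER = {              # label (lower-cased) -> Wikidata QID (illustrative)
        "paris": "Q90",
        "toulouse": "Q7880",
    }

    def link_mention(mention: str) -> Optional[str]:
        """Return the Wikidata QID for a mention, or None if it is unknown."""
        return GAZETTEER.get(mention.strip().lower())

    mentions = ["Toulouse", "Paris", "rue inconnue"]
    print({m: link_mention(m) for m in mentions})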
|
|
Valerio Lorini, Javier Rando, Diego Saez-Trumper, & Carlos Castillo. (2020). Uneven Coverage of Natural Disasters in Wikipedia: The Case of Floods. In Amanda Hughes, Fiona McNeill, & Christopher W. Zobel (Eds.), ISCRAM 2020 Conference Proceedings – 17th International Conference on Information Systems for Crisis Response and Management (pp. 688–703). Blacksburg, VA (USA): Virginia Tech.
Abstract: The use of non-authoritative data for disaster management provides timely information that might not be available through other means. Wikipedia, a collaboratively produced encyclopedia, includes in-depth information about many natural disasters, and its editors are particularly good at adding information in real time as a crisis unfolds. In this study, we focus on the most comprehensive version of Wikipedia, the English one. Wikipedia offers good coverage of disasters, particularly those with a large number of fatalities. However, by performing automatic content analysis at a global scale, we also show how the coverage of floods in Wikipedia is skewed towards rich, English-speaking countries, in particular the US and Canada. We also note that coverage of floods in the lowest-income countries is substantially lower than coverage of floods in middle-income countries. These results have implications for analysts and systems using Wikipedia as an information source about disasters.
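As a rough illustration of the kind of aggregation behind such a coverage analysis (not the paper’s actual data or metrics), the pandas sketch below compares coverage rates and article lengths across country income groups; the inline records and column names are made up for illustration.

    # Minimal sketch: compare how much Wikipedia coverage flood events receive
    # across country income groups. All records below are synthetic examples;
    # the study works from a global catalogue of flood events and article metrics.
    import pandas as pd

    events = pd.DataFrame([
        {"country": "A", "income_group": "high",   "covered": True,  "article_length": 5400},
        {"country": "B", "income_group": "middle", "covered": True,  "article_length": 1200},
        {"country": "C", "income_group": "low",    "covered": False, "article_length": 0},
        {"country": "D", "income_group": "low",    "covered": True,  "article_length": 300},
    ])

    summary = events.groupby("income_group").agg(
        coverage_rate=("covered", "mean"),
        mean_article_length=("article_length", "mean"),
    )
    print(summary)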
|
|
Ferda Ofli, Firoj Alam, & Muhammad Imran. (2020). Analysis of Social Media Data using Multimodal Deep Learning for Disaster Response. In Amanda Hughes, Fiona McNeill, & Christopher W. Zobel (Eds.), ISCRAM 2020 Conference Proceedings – 17th International Conference on Information Systems for Crisis Response and Management (pp. 802–811). Blacksburg, VA (USA): Virginia Tech.
Abstract: Multimedia content on social media platforms provides significant information during disaster events. The types of information shared include reports of injured or deceased people, infrastructure damage, and missing or found people, among others. Although many studies have shown the usefulness of both text and image content for disaster response purposes, past research has mostly focused on analyzing only the text modality. In this paper, we propose to use both the text and image modalities of social media data to learn a joint representation using state-of-the-art deep learning techniques. Specifically, we utilize convolutional neural networks to define a multimodal deep learning architecture with a modality-agnostic shared representation. Extensive experiments on real-world disaster datasets show that the proposed multimodal architecture yields better performance than models trained using a single modality (e.g., either text or image).
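As a rough illustration of the multimodal idea described above (not the paper’s actual architecture or CNN backbones), the PyTorch sketch below encodes an image with a tiny CNN and a text bag-of-words vector with a linear layer, fuses the two into a shared representation, and classifies from it; all layer sizes, the vocabulary size, and the class count are assumed placeholders.

    # Minimal sketch of a two-branch multimodal classifier: image and text are
    # encoded separately, projected into a shared space, and classified by a
    # modality-agnostic head. Purely illustrative, not the paper's model.
    import torch
    import torch.nn as nn

    class MultimodalClassifier(nn.Module):
        def __init__(self, vocab_size=5000, num_classes=4, shared_dim=128):
            super().__init__()
            self.image_encoder = nn.Sequential(      # tiny CNN stand-in
                nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                nn.Linear(16, shared_dim),
            )
            self.text_encoder = nn.Sequential(       # bag-of-words stand-in
                nn.Linear(vocab_size, shared_dim), nn.ReLU(),
            )
            self.classifier = nn.Sequential(         # modality-agnostic head
                nn.Linear(2 * shared_dim, shared_dim), nn.ReLU(),
                nn.Linear(shared_dim, num_classes),
            )

        def forward(self, image, text_bow):
            joint = torch.cat([self.image_encoder(image), self.text_encoder(text_bow)], dim=1)
            return self.classifier(joint)

    model = MultimodalClassifier()
    logits = model(torch.randn(2, 3, 64, 64), torch.randn(2, 5000))
    print(logits.shape)  # torch.Size([2, 4])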
|
|
Dipak Singh, Shayan Shams, Joohyun Kim, Seung-jong Park, & Seungwon Yang. (2020). Fighting for Information Credibility: An End-to-End Framework to Identify Fake News during Natural Disasters. In Amanda Hughes, Fiona McNeill, & Christopher W. Zobel (Eds.), ISCRAM 2020 Conference Proceedings – 17th International Conference on Information Systems for Crisis Response and Management (pp. 90–99). Blacksburg, VA (USA): Virginia Tech.
Abstract: Fast-spreading fake news has become an epidemic in the post-truth world, whether in politics, in the stock market, or during natural disasters. A large amount of unverified information may reach a vast audience quickly via social media. The effect of misinformation (false information) and disinformation (deliberately false information) is more severe during the critical period of natural disasters such as floods, hurricanes, or earthquakes. It can lead to disruptions in rescue missions and recovery activities, costing human lives and delaying the time needed for affected communities to return to normal. In this paper, we designed a comprehensive framework that develops a training set and trains a deep learning model for detecting fake news events occurring during disasters. Our proposed framework includes infrastructure to collect Twitter posts that spread false information. In our model implementation, we utilized a transfer learning scheme to transfer knowledge gained from a large, general fake news dataset to the relatively smaller set of fake news events occurring during disasters, as a means of overcoming the limited size of our training dataset. Our detection model achieved an accuracy of 91.47% and an F1 score of 90.89 when trained with the first 28 hours of Twitter data. Our vision for this study is to help emergency managers during disaster response with our framework, so that they may perform their rescue and recovery actions effectively and efficiently without being distracted by false information.
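As a rough illustration of the transfer-learning scheme mentioned above (not the paper’s actual framework), the PyTorch sketch below freezes an encoder standing in for one pretrained on a large general fake-news corpus and fine-tunes only a small classification head on synthetic stand-in data for the smaller disaster-specific set.

    # Minimal sketch of the transfer-learning idea: keep the pretrained encoder
    # fixed and fine-tune only the classification head on the small target set.
    # The architecture and the data are illustrative placeholders.
    import torch
    import torch.nn as nn

    encoder = nn.Sequential(nn.Linear(768, 256), nn.ReLU())  # stand-in for a pretrained encoder
    head = nn.Linear(256, 2)                                 # fake vs. genuine

    for p in encoder.parameters():                           # freeze pretrained weights
        p.requires_grad = False

    model = nn.Sequential(encoder, head)
    optimizer = torch.optim.Adam(head.parameters(), lr=1e-3) # update only the head
    loss_fn = nn.CrossEntropyLoss()

    # Tiny synthetic "disaster tweets": 32 feature vectors with binary labels.
    x, y = torch.randn(32, 768), torch.randint(0, 2, (32,))
    for _ in range(5):                                       # a few fine-tuning steps
        optimizer.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        optimizer.step()
    print(loss.item())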
|
|