Matti Wiegmann, Jens Kersten, Friederike Klan, Martin Potthast, & Benno Stein. (2020). Analysis of Detection Models for Disaster-Related Tweets. In Amanda Hughes, Fiona McNeill, & Christopher W. Zobel (Eds.), ISCRAM 2020 Conference Proceedings – 17th International Conference on Information Systems for Crisis Response and Management (pp. 872–880). Blacksburg, VA (USA): Virginia Tech.
Abstract: Social media is perceived as a rich resource for disaster management and relief efforts, but the high class imbalance between disaster-related and non-disaster-related messages challenges reliable detection. We analyze and compare the effectiveness of three state-of-the-art machine learning models for detecting disaster-related tweets. In this regard we introduce the Disaster Tweet Corpus 2020, an extended compilation of existing resources, which comprises a total of 123,166 tweets from 46 disasters covering 9 disaster types. Our findings from a large series of experiments include: detection models work equally well over a broad range of disaster types when trained for the respective type; a domain transfer across disaster types leads to unacceptable performance drops; and, similarly, type-agnostic classification models behave more robustly, but at a lower effectiveness level. Altogether, the average misclassification rate of 3.8% on performance-optimized detection models indicates effective classification knowledge but comes at the price of insufficient generalizability.
|
|
Leon Derczynski, Kenny Meesters, Kalina Bontcheva, & Diana Maynard. (2018). Helping Crisis Responders Find the Informative Needle in the Tweet Haystack. In Kees Boersma, & Brian Tomaszewski (Eds.), ISCRAM 2018 Conference Proceedings – 15th International Conference on Information Systems for Crisis Response and Management (pp. 649–662). Rochester, NY (USA): Rochester Institute of Technology.
Abstract: Crisis responders are increasingly using social media, data and other digital sources of information to build a situational understanding of a crisis situation in order to design an effective response. However with the increased availability of such data, the challenge of identifying relevant information from it also increases. This paper presents a successful automatic approach to handling this problem. Messages are filtered for informativeness based on a definition of the concept drawn from prior research and crisis response experts. Informative messages are tagged for actionable data – for example, people in need, threats to rescue efforts, changes in environment, and so on. In all, eight categories of actionability are identified. The two components – informativeness and actionability classification – are packaged together as an openly-available tool called Emina (Emergent Informativeness and Actionability).
|
|
Jens Kersten, Anna Kruspe, Matti Wiegmann, & Friederike Klan. (2019). Robust filtering of crisis-related tweets. In Z. Franco, J. J. González, & J. H. Canós (Eds.), Proceedings of the 16th International Conference on Information Systems for Crisis Response and Management. Valencia, Spain: ISCRAM.
Abstract: Social media enables fast information exchange and status reporting during crises. Filtering is usually required to identify the small fraction of social media stream data related to events. Since deep learning has recently been shown to be a reliable approach for filtering and analyzing Twitter messages, a Convolutional Neural Network is examined for filtering crisis-related tweets in this work. The goal is to understand how to obtain accurate and robust filtering models and how model accuracies tend to behave in the case of new events. In contrast to other works, the application to real data streams is also investigated. Motivated by the observation that machine learning model accuracies depend highly on the data used, a new comprehensive and balanced compilation of existing data sets is proposed. Experimental results with this data set provide valuable insights. Preliminary results from filtering a data stream recorded during Hurricane Florence in September 2018 confirm our results.
|
|
Sara Barozzi, Jose Luis Fernandez Marquez, Amudha Ravi Shankar, & Barbara Pernici. (2019). Filtering images extracted from social media in the response phase of emergency events. In Z. Franco, J. J. González, & J. H. Canós (Eds.), Proceedings of the 16th International Conference on Information Systems for Crisis Response and Management. Valencia, Spain: ISCRAM.
Abstract: The use of social media to support emergency operators in the first hours of the response phase can improve the quality of the information available and awareness of ongoing emergency events. Social media contain both textual and visual information, in the form of pictures and videos. The problem with using social media posts as a source of information during emergencies lies in the difficulty of selecting the relevant information among a very large amount of irrelevant information. In particular, we focus on the extraction of images relevant to an event for rapid mapping purposes. In this paper, a set of possible filters is proposed and analyzed with the goal of selecting useful images from posts and of evaluating how precision and recall are impacted. The filtering techniques, which include both automated and crowdsourced steps, aim to provide both emergency responders and rapid mapping operators with better quality posts and easily manageable data volumes. The impact of the filters on precision and recall in extracting relevant images is discussed in two different case studies.
|
|
Florian Vandecasteele, Krishna Kumar, Kenzo Milleville, & Steven Verstockt. (2019). Video Summarization and Video Highlight Selection Tools to Facilitate Fire Incident Management. In Z. Franco, J. J. González, & J. H. Canós (Eds.), Proceedings of the 16th International Conference on Information Systems for Crisis Response and Management. Valencia, Spain: ISCRAM.
Abstract: This paper reports on the added value of combining different types of sensor data and geographic information for fire incident management. A survey was launched within the Belgian fire community to explore the need for, and the use of, new types of sensor data during a fire incident. This evaluation revealed that people are visually oriented and that video footage and images are of great value for gaining insight into a particular problem. However, due to the limited time available (i.e., fast decisions need to be taken) and the large number of cameras, it is not feasible to analyze all video footage sequentially. To solve this problem we propose a video summarization mechanism and a video highlight selection tool based on automatically generated image and video tags.
|
|