Cody Buntain, Richard McCreadie, & Ian Soboroff. (2022). Incident Streams 2021 Off the Deep End: Deeper Annotations and Evaluations in Twitter. In Rob Grace, & Hossein Baharmand (Eds.), ISCRAM 2022 Conference Proceedings – 19th International Conference on Information Systems for Crisis Response and Management (pp. 584–604). Tarbes, France.
Abstract: This paper summarizes the final year of the four-year Text REtrieval Conference Incident Streams track (TREC-IS), which has produced a large dataset comprising 136,263 annotated tweets spanning 98 crisis events. The goals of this final year were twofold: 1) to add new categories for assessing messages, with a focus on characterizing the audience, author, and images associated with these messages, and 2) to enlarge the TREC-IS dataset with new events, with an emphasis on deeper pools for sampling. Beyond these two goals, TREC-IS has nearly doubled the number of annotated messages per event for the 26 crises introduced in 2021 and has released a new parallel dataset of 312,546 images associated with crisis content, with 7,297 tweets having annotations about their embedded images. Our analyses of this new crisis data yield new insights about the context of a tweet; e.g., messages intended for a local audience and those that contain images of weather forecasts and infographics have higher-than-average priority assessments but are relatively rare. Tweets containing images, however, have higher perceived priorities than tweets without images. Moving to deeper pools, while tending to lower classification performance, does not generally impact performance rankings or alter distributions of information types. We end this paper with a discussion of these datasets, analyses, and their implications, and how they contribute both new data and insights to the broader crisis informatics community.
Zijun Long, & Richard McCreadie. (2022). Is Multi-Modal Data Key for Crisis Content Categorization on Social Media? In Rob Grace, & Hossein Baharmand (Eds.), ISCRAM 2022 Conference Proceedings – 19th International Conference on Information Systems for Crisis Response and Management (pp. 1068–1080). Tarbes, France.
Abstract: The user-base of social media platforms, like Twitter, has grown dramatically around the world over the last decade. As people post everything they experience on social media, large volumes of valuable multimedia content are being recorded online, which can be analysed to support a range of tasks. Here we focus specifically on crisis response. The majority of prior works in this space use machine learning to categorize single-modality content (e.g. the text of posts, or shared images), with few works jointly utilizing multiple modalities. Hence, in this paper, we examine to what extent integrating multiple modalities is important for crisis content categorization. In particular, we design a multi-modal learning pipeline that fuses textual and visual inputs and classifies the combined content according to the specified task. Through evaluation on the CrisisMMD dataset, we demonstrate that effective automatic labelling for this task is possible, with an average F1 of 88.31% across two significant tasks (relevance and humanitarian category classification), while also analysing cases where unimodal and multi-modal models succeed and fail.
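Editor's note: the abstract does not specify the fusion architecture. A minimal sketch of one plausible text-image fusion classifier is given below; the BERT text encoder, ResNet-50 image encoder, and label count are assumptions for illustration, not the authors' implementation.

# Hypothetical two-branch fusion classifier for CrisisMMD-style tasks:
# pooled text and image features are concatenated before a linear head.
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer
from torchvision import models

class TextImageFusionClassifier(nn.Module):
    def __init__(self, num_classes: int, text_model: str = "bert-base-uncased"):
        super().__init__()
        self.text_encoder = AutoModel.from_pretrained(text_model)           # 768-d [CLS] features
        resnet = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
        self.image_encoder = nn.Sequential(*list(resnet.children())[:-1])   # 2048-d pooled features
        self.classifier = nn.Sequential(
            nn.Linear(768 + 2048, 512),
            nn.ReLU(),
            nn.Dropout(0.1),
            nn.Linear(512, num_classes),
        )

    def forward(self, input_ids, attention_mask, images):
        text_feat = self.text_encoder(
            input_ids=input_ids, attention_mask=attention_mask
        ).last_hidden_state[:, 0]                      # [CLS] token embedding
        img_feat = self.image_encoder(images).flatten(1)
        return self.classifier(torch.cat([text_feat, img_feat], dim=1))

# Usage: classify one tweet-image pair (8 classes is an illustrative count).
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = TextImageFusionClassifier(num_classes=8)
enc = tokenizer(["flood waters rising near the bridge"], return_tensors="pt",
                padding=True, truncation=True)
logits = model(enc["input_ids"], enc["attention_mask"], torch.randn(1, 3, 224, 224))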
Tiberiu Sosea, Iustin Sirbu, Cornelia Caragea, Doina Caragea, & Traian Rebedea. (2021). Using the Image-Text Relationship to Improve Multimodal Disaster Tweet Classification. In Anouck Adrot, Rob Grace, Kathleen Moore, & Christopher W. Zobel (Eds.), ISCRAM 2021 Conference Proceedings – 18th International Conference on Information Systems for Crisis Response and Management (pp. 691–704). Blacksburg, VA (USA): Virginia Tech.
Abstract: In this paper, we show that the text-image relationship of disaster tweets can be used to improve the classification of tweets from emergency situations. To this end, we introduce DisRel, a dataset containing 4,600 multimodal tweets, collected during the disasters that hit the USA in 2017 and manually annotated with image-text coherence relationships, such as Similar and Complementary. We explore multiple models to detect these relationships and perform a comprehensive analysis of their robustness. Based on these models, we build a simple feature augmentation approach that can leverage the text-image relationship. We test our methods on two tasks in CrisisMMD: Humanitarian Categories and Damage Assessment, and observe an increase in the performance of the relationship-aware methods.
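Editor's note: as a rough illustration of the feature-augmentation idea (not the authors' implementation), a predicted image-text relationship can be one-hot encoded and concatenated onto fused tweet features before the task head; the relation set, feature dimension, and class count below are assumptions.

# Hypothetical relationship-aware feature augmentation head.
import torch
import torch.nn as nn

RELATIONS = ["similar", "complementary"]   # illustrative relation labels from DisRel

class RelationAugmentedHead(nn.Module):
    def __init__(self, feature_dim: int, num_classes: int):
        super().__init__()
        self.head = nn.Linear(feature_dim + len(RELATIONS), num_classes)

    def forward(self, fused_features: torch.Tensor, relation_ids: torch.Tensor):
        # Append the predicted relation as a one-hot feature, then classify.
        rel_onehot = nn.functional.one_hot(relation_ids, len(RELATIONS)).float()
        return self.head(torch.cat([fused_features, rel_onehot], dim=1))

# Usage: 512-d fused tweet features plus relations predicted by a separate model.
head = RelationAugmentedHead(feature_dim=512, num_classes=5)   # 5 classes is illustrative
logits = head(torch.randn(4, 512), torch.tensor([0, 1, 1, 0]))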
Zhenke Yang, & Leon J.M. Rothkrantz. (2007). Emotion sensing for context sensitive interpretation of crisis reports. In K. Nieuwenhuis, P. Burghardt, & B. Van de Walle (Eds.), Intelligent Human Computer Systems for Crisis Response and Management, ISCRAM 2007 Academic Proceedings Papers (pp. 507–514). Delft: Information Systems for Crisis Response and Management, ISCRAM.
Abstract: The emotional qualities of a report play an important role in the evaluation of eyewitness reports in crisis centers. Human operators in the crisis center can use the amount of anxiety and stress detected in a spoken report to rapidly estimate the possible impact and urgency of a report and the appropriate response to the reporter. This paper presents ongoing work in automated multi-modal emotion sensing of crisis reports aimed at reducing the cognitive load on human operators. Our approach is based on the work procedures adopted by the crisis response center of the Rijnmond environmental agency (DCMR) and assumes a spoken dialogue between a reporter and a crisis control center. We use an emotion model based on conceptual graphs that is continually evaluated as the dialogue progresses. We show how the model can be applied to interpret crisis reports in a fictional toxic gas dispersion scenario.
Markku T. Häkkinen, & Helen T. Sullivan. (2007). Effective communication of warnings and critical information: Application of accessible design methods to auditory warnings. In K. Nieuwenhuis, P. Burghardt, & B. Van de Walle (Eds.), Intelligent Human Computer Systems for Crisis Response and Management, ISCRAM 2007 Academic Proceedings Papers (pp. 167–171). Delft: Information Systems for Crisis Response and Management, ISCRAM.
Abstract: When a system initiates an auditory warning or alert, detection and correct identification of the information by the human recipient can be influenced by a variety of factors. Examples from aviation and public warning demonstrate instances where messages are ignored, not understood or misinterpreted. The reasons why messages may fail can stem from the design of the message itself, environmental conditions, and sensory or cognitive impairments. Based upon experience from several contexts and from the development of assistive technology for people with disabilities, promising design approaches are being explored in research on warning system design. The importance of multimodal warnings, selection of speech type, and internationalization are discussed.