Cody Buntain, Richard McCreadie, & Ian Soboroff. (2022). Incident Streams 2021 Off the Deep End: Deeper Annotations and Evaluations in Twitter. In Rob Grace, & Hossein Baharmand (Eds.), ISCRAM 2022 Conference Proceedings – 19th International Conference on Information Systems for Crisis Response and Management (pp. 584–604). Tarbes, France.
Abstract: This paper summarizes the final year of the four-year Text REtrieval Conference Incident Streams track (TREC-IS), which has produced a large dataset comprising 136,263 annotated tweets, spanning 98 crisis events. Goals of this final year were twofold: 1) to add new categories for assessing messages, with a focus on characterizing the audience, author, and images associated with these messages, and 2) to enlarge the TREC-IS dataset with new events, with an emphasis on deeper pools for sampling. Beyond these two goals, TREC-IS has nearly doubled the number of annotated messages per event for the 26 crises introduced in 2021 and has released a new parallel dataset of 312,546 images associated with crisis content – with 7,297 tweets having annotations about their embedded images. Our analyses of this new crisis data yield new insights about the context of a tweet; e.g., messages intended for a local audience and those that contain images of weather forecasts and infographics have higher-than-average assessments of priority but are relatively rare. Tweets containing images, however, have higher perceived priorities than tweets without images. Moving to deeper pools, while tending to lower classification performance, generally does not impact performance rankings or alter distributions of information types. We end this paper with a discussion of these datasets, analyses, their implications, and how they contribute both new data and insights to the broader crisis informatics community.
|
Siska Fitrianie, & Leon J.M. Rothkrantz. (2009). Computed ontology-based situation awareness of multi-user observations. In J. Landgren, & S. Jul (Eds.), ISCRAM 2009 – 6th International Conference on Information Systems for Crisis Response and Management: Boundary Spanning Initiatives and New Perspectives. Gothenburg: Information Systems for Crisis Response and Management, ISCRAM.
Abstract: In recent years, we have developed a framework of human-computer interaction that offers recognition of various communication modalities including speech, lip movement, facial expression, handwriting/drawing, gesture, text and visual symbols. The framework allows the rapid construction of a multimodal, multi-device, and multi-user communication system within crisis management. This paper reports the approaches used in the multi-user information integration (input fusion) and multimodal presentation (output fission) modules, which can be used in isolation, but also as part of the framework. The latter is able to specify and produce context-sensitive and user-tailored output combining language, speech, visual language and graphics. These modules provide a communication channel between the system and users with different communication devices. By employing an ontology, the system's view of the world is constructed from multi-user observations, and appropriate multimodal responses are generated.
|
Siska Fitrianie, Ronald Poppe, Trung H. Bui, Alin Gavril Chitu, Dragos Datcu, Ramón Dor, et al. (2007). A multimodal human-computer interaction framework for research into crisis management. In B. Van de Walle, P. Burghardt, & K. Nieuwenhuis (Eds.), Intelligent Human Computer Systems for Crisis Response and Management, ISCRAM 2007 Academic Proceedings Papers (pp. 149–158). Delft: Information Systems for Crisis Response and Management, ISCRAM.
Abstract: Unreliable communication networks, chaotic environments and stressful conditions can make communication during crisis events difficult. The current practice in crisis management can be improved by introducing ICT systems into the process. However, much experimentation is needed to determine where and how ICT can aid. Therefore, we propose a framework in which predefined modules can be connected in an ad hoc fashion. Such a framework allows for rapid development and evaluation of such ICT systems. The framework offers recognition of various communication modalities including speech, lip movement, facial expression, handwriting and drawing, body gesture, text and visual symbols. It provides mechanisms to fuse these modalities into a context-dependent interpretation of the current situation and to generate appropriate multimodal information responses. The proposed toolbox can be used as part of a disaster and rescue simulation. We propose evaluation methods, and focus on the technological aspects of our framework.
|
Markku T. Häkkinen, & Helen T. Sullivan. (2007). Effective communication of warnings and critical information: Application of accessible design methods to auditory warnings. In B. Van de Walle, P. Burghardt, & K. Nieuwenhuis (Eds.), Intelligent Human Computer Systems for Crisis Response and Management, ISCRAM 2007 Academic Proceedings Papers (pp. 167–171). Delft: Information Systems for Crisis Response and Management, ISCRAM.
Abstract: When a system initiates an auditory warning or alert, detection and correct identification of the information by the human recipient can be influenced by a variety of factors. Examples from aviation and public warning demonstrate instances where messages are ignored, not understood, or misinterpreted. The reasons why messages fail can stem from the design of the message itself, environmental conditions, and sensory or cognitive impairments. Drawing on experience from several contexts and from the development of assistive technology for people with disabilities, research on warning system design is exploring promising design approaches. The importance of multimodal warnings, selection of speech type, and internationalization are discussed.
|
Chris Murphy, Doug Phair, & Courtney Aquilina. (2005). A prototype multi-modal decision support architecture. In B. Van de Walle, & B. Carlé (Eds.), Proceedings of ISCRAM 2005 – 2nd International Conference on Information Systems for Crisis Response and Management (pp. 135–137). Brussels: Royal Flemish Academy of Belgium.
Abstract: This paper presents the design of a decision support tool for crisis response applications. We propose a system to replace emergency contact calling trees with a multi-modal personnel contact architecture. This architecture consists of a centralized notification framework using existing enterprise e-mail, Web site, instant messaging, and voice over IP (VOIP) infrastructure. Response and audit data is collected and stored for analysis, and can be reviewed using a variety of methods in real time. Details of our prototype implementation are discussed. Specifically, we address multi-modal communication techniques and their benefits, enterprise deployment challenges, and opportunities for further research.
|
Tiberiu Sosea, Iustin Sirbu, Cornelia Caragea, Doina Caragea, & Traian Rebedea. (2021). Using the Image-Text Relationship to Improve Multimodal Disaster Tweet Classification. In Anouck Adrot, Rob Grace, Kathleen Moore, & Christopher W. Zobel (Eds.), ISCRAM 2021 Conference Proceedings – 18th International Conference on Information Systems for Crisis Response and Management (pp. 691–704). Blacksburg, VA (USA): Virginia Tech.
Abstract: In this paper, we show that the text-image relationship of disaster tweets can be used to improve the classification of tweets from emergency situations. To this end, we introduce DisRel, a dataset which contains 4,600 multimodal tweets, collected during the disasters that hit the USA in 2017, and manually annotated with image-text coherence relationships, such as Similar and Complementary. We explore multiple models to detect these relationships and perform a comprehensive analysis of the robustness of these methods. Based on these models, we build a simple feature augmentation approach that can leverage the text-image relationship. We test our methods on two tasks in CrisisMMD: Humanitarian Categories and Damage Assessment, and observe an increase in the performance of the relationship-aware methods.
|
Zhenke Yang, & Leon J.M. Rothkrantz. (2007). Emotion sensing for context sensitive interpretation of crisis reports. In B. Van de Walle, P. Burghardt, & K. Nieuwenhuis (Eds.), Intelligent Human Computer Systems for Crisis Response and Management, ISCRAM 2007 Academic Proceedings Papers (pp. 507–514). Delft: Information Systems for Crisis Response and Management, ISCRAM.
Abstract: The emotional qualities of a report play an important role in the evaluation of eyewitness reports in crisis centers. Human operators in the crisis center can use the amount of anxiety and stress detected in a spoken report to rapidly estimate the possible impact and urgency of a report and the appropriate response to the reporter. This paper presents ongoing work in automated multi-modal emotion sensing of crisis reports in order to reduce the cognitive load on human operators. Our approach is based on the work procedures adopted by the crisis response center of the Rijnmond Environmental Agency (DCMR) and assumes a spoken dialogue between a reporter and a crisis control center. We use an emotion model based on conceptual graphs that is continually evaluated as the dialogue continues. We show how the model can be applied to interpret crisis reports in a fictional toxic gas dispersion scenario.
|
Zijun Long, & Richard McCreadie. (2022). Is Multi-Modal Data Key for Crisis Content Categorization on Social Media? In Rob Grace, & Hossein Baharmand (Eds.), ISCRAM 2022 Conference Proceedings – 19th International Conference on Information Systems for Crisis Response and Management (pp. 1068–1080). Tarbes, France.
Abstract: The user base of social media platforms, like Twitter, has grown dramatically around the world over the last decade. As people post everything they experience on social media, large volumes of valuable multimedia content are being recorded online, which can be analysed to help with a range of tasks. Here we specifically focus on crisis response. The majority of prior works in this space focus on using machine learning to categorize single-modality content (e.g. text of the posts, or images shared), with few works jointly utilizing multiple modalities. Hence, in this paper, we examine to what extent integrating multiple modalities is important for crisis content categorization. In particular, we design a pipeline for multi-modal learning that fuses textual and visual inputs, leverages both, and then classifies that content based on the specified task. Through evaluation using the CrisisMMD dataset, we demonstrate that effective automatic labelling for this task is possible, with an average of 88.31% F1 performance across two significant tasks (relevance and humanitarian category classification), while also analysing cases where unimodal and multi-modal models succeed and fail.
|