Author Cody Buntain; Richard McCreadie; Ian Soboroff
Title Incident Streams 2021 Off the Deep End: Deeper Annotations and Evaluations in Twitter Type Conference Article
Year 2022 Publication ISCRAM 2022 Conference Proceedings – 19th International Conference on Information Systems for Crisis Response and Management Abbreviated Journal ISCRAM 2022
Volume Issue Pages 584-604
Keywords Emergency Management; Crisis Informatics; Twitter; Categorization; Prioritization; Multi-Modal; Public Safety; PSCR; TREC
Abstract This paper summarizes the final year of the four-year Text REtrieval Conference Incident Streams track (TREC-IS), which has produced a large dataset comprising 136,263 annotated tweets, spanning 98 crisis events. The goals of this final year were twofold: 1) to add new categories for assessing messages, with a focus on characterizing the audience, author, and images associated with these messages, and 2) to enlarge the TREC-IS dataset with new events, with an emphasis on deeper pools for sampling. Beyond these two goals, TREC-IS has nearly doubled the number of annotated messages per event for the 26 crises introduced in 2021 and has released a new parallel dataset of 312,546 images associated with crisis content – with 7,297 tweets having annotations about their embedded images. Our analyses of this new crisis data yield new insights about the context of a tweet; e.g., messages intended for a local audience and those that contain images of weather forecasts and infographics have higher-than-average assessments of priority but are relatively rare. Tweets containing images, however, have higher perceived priorities than tweets without images. Moving to deeper pools, while tending to lower classification performance, does not generally impact performance rankings or alter distributions of information-types. We end this paper with a discussion of these datasets, analyses, their implications, and how they contribute both new data and insights to the broader crisis informatics community.
Address University of Maryland, College Park (UMD); University of Glasgow; National Institute of Standards and Technology (NIST)
Corporate Author Thesis
Publisher Place of Publication Tarbes, France Editor Rob Grace; Hossein Baharmand
Language English Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN 2411-3387 ISBN 978-82-8427-099-9 Medium
Track Social Media for Crisis Management Expedition Conference
Notes Approved no
Call Number ISCRAM @ idladmin @ Serial 2441
 

 
Author Siska Fitrianie; Leon J.M. Rothkrantz
Title Computed ontology-based situation awareness of multi-user observations Type Conference Article
Year 2009 Publication ISCRAM 2009 – 6th International Conference on Information Systems for Crisis Response and Management: Boundary Spanning Initiatives and New Perspectives Abbreviated Journal ISCRAM 2009
Volume Issue Pages
Keywords Character recognition; Face recognition; Gesture recognition; Information systems; Speech recognition; Communication device; Communication modalities; Information integration; Multi-modal; Multi-user; Multiuser communication; Rapid construction; Situation awareness; Visual languages
Abstract In recent years, we have developed a framework of human-computer interaction that offers recognition of various communication modalities including speech, lip movement, facial expression, handwriting/drawing, gesture, text and visual symbols. The framework allows the rapid construction of a multimodal, multi-device, and multi-user communication system within crisis management. This paper reports the approaches used in the multi-user information integration (input fusion) and multimodal presentation (output fission) modules, which can be used in isolation, but also as part of the framework. The latter is able to specify and produce context-sensitive and user-tailored output combining language, speech, visual language and graphics. These modules provide a communication channel between the system and users with different communication devices. Through the employment of an ontology, the system's view of the world is constructed from multi-user observations, and appropriate multimodal responses are generated.
Address Man-Machine Interaction Group, Delft University of Technology, Netherlands; Netherlands Defence Academy, Delft University of Technology, Netherlands
Corporate Author Thesis
Publisher Information Systems for Crisis Response and Management, ISCRAM Place of Publication Gothenburg Editor J. Landgren; S. Jul
Language English Summary Language English Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN 2411-3387 ISBN 9789163347153 Medium
Track Open Track Expedition Conference 6th International ISCRAM Conference on Information Systems for Crisis Response and Management
Notes Approved no
Call Number Serial 498
 

 
Author Siska Fitrianie; Ronald Poppe; Trung H. Bui; Alin Gavril Chitu; Dragos Datcu; Ramón Dor; Denis Hofs; Pascal Wiggers; Don J.M. Willems; Mannes Poel; Leon J.M. Rothkrantz; Louis G. Vuurpijl; Job Zwiers
Title A multimodal human-computer interaction framework for research into crisis management Type Conference Article
Year 2007 Publication Intelligent Human Computer Systems for Crisis Response and Management, ISCRAM 2007 Academic Proceedings Papers Abbreviated Journal ISCRAM 2007
Volume Issue Pages 149-158
Keywords Character recognition; Communication systems; Disasters; Human computer interaction; Speech recognition; Communication modalities; Evaluation methods; Facial Expressions; Multi-modal information; Multimodal human computer interaction; Multimodal system; Rescue simulation; Technological aspects; Face recognition
Abstract Unreliable communication networks, chaotic environments and stressful conditions can make communication during crisis events difficult. The current practice in crisis management can be improved by introducing ICT systems into the process. However, much experimentation is needed to determine where and how ICT can aid. Therefore, we propose a framework in which predefined modules can be connected in an ad hoc fashion. Such a framework allows for rapid development and evaluation of such ICT systems. The framework offers recognition of various communication modalities including speech, lip movement, facial expression, handwriting and drawing, body gesture, text and visual symbols. It provides mechanisms to fuse these modalities into a context-dependent interpretation of the current situation and to generate appropriate multimodal information responses. The proposed toolbox can be used as part of a disaster and rescue simulation. We propose evaluation methods, and focus on the technological aspects of our framework.
Address Man-Machine Interaction Group, Delft University of Technology, Netherlands; Human Media Interaction Group, University of Twente, Netherlands; Nijmegen Institute for Cognition and Information, Radboud University Nijmegen, Netherlands
Corporate Author Thesis
Publisher Information Systems for Crisis Response and Management, ISCRAM Place of Publication Delft Editor B. Van de Walle; P. Burghardt; K. Nieuwenhuis
Language English Summary Language English Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN 2411-3387 ISBN 9789054874171; 9789090218717 Medium
Track HCIS Expedition Conference 4th International ISCRAM Conference on Information Systems for Crisis Response and Management
Notes Approved no
Call Number Serial 497
 

 
Author Markku T. Häkkinen; Helen T. Sullivan
Title Effective communication of warnings and critical information: Application of accessible design methods to auditory warnings Type Conference Article
Year 2007 Publication Intelligent Human Computer Systems for Crisis Response and Management, ISCRAM 2007 Academic Proceedings Papers Abbreviated Journal ISCRAM 2007
Volume Issue Pages 167-171
Keywords Computer science; Computers; Accessibility; Assistive technology; Auditory display; Effective communication; Environmental conditions; Multi-Modal Displays; People with disabilities; Warning; Speech synthesis
Abstract When a system initiates an auditory warning or alert, detection and correct identification of the information by the human recipient can be influenced by a variety of factors. Examples from aviation and public warning demonstrate instances where messages are ignored, not understood or misinterpreted. The reasons why messages may fail can stem from the design of the message itself, environmental conditions, and sensory or cognitive impairments. Based upon experience from several contexts and from the development of assistive technology for people with disabilities, promising design approaches are being explored in research on warning system design. The importance of multimodal warnings, selection of speech type, and internationalization are discussed.
Address Department of Computer Science and Information Systems, Agora Human Technologies Center, University of Jyväskylä, Jyväskylä, Finland; Department of Psychology, Rider University, Lawrenceville, NJ, United States
Corporate Author Thesis
Publisher Information Systems for Crisis Response and Management, ISCRAM Place of Publication Delft Editor B. Van de Walle; P. Burghardt; K. Nieuwenhuis
Language English Summary Language English Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN 2411-3387 ISBN 9789054874171; 9789090218717 Medium
Track HCIS Expedition Conference 4th International ISCRAM Conference on Information Systems for Crisis Response and Management
Notes Approved no
Call Number Serial 558
 

 
Author Chris Murphy; Doug Phair; Courtney Aquilina
Title A prototype multi-modal decision support architecture Type Conference Article
Year 2005 Publication Proceedings of ISCRAM 2005 – 2nd International Conference on Information Systems for Crisis Response and Management Abbreviated Journal ISCRAM 2005
Volume Issue Pages 135-137
Keywords Decision support systems; Information systems; Internet telephony; Crisis response; Decision support tools; Decision supports; Instant messaging; Multi-modal; Multimodal communications; Prototype implementations; Voice over IP; Network architecture
Abstract This paper presents the design of a decision support tool for crisis response applications. We propose a system to replace emergency contact calling trees with a multi-modal personnel contact architecture. This architecture consists of a centralized notification framework using existing enterprise e-mail, Web site, instant messaging, and voice over IP (VOIP) infrastructure. Response and audit data is collected and stored for analysis, and can be reviewed using a variety of methods in real time. Details of our prototype implementation are discussed. Specifically, we address multi-modal communication techniques and their benefits, enterprise deployment challenges, and opportunities for further research.
Address MITRE Corporation, United States
Corporate Author Thesis
Publisher Royal Flemish Academy of Belgium Place of Publication Brussels Editor B. Van de Walle; B. Carle
Language English Summary Language English Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN 2411-3387 ISBN 9076971099 Medium
Track POSTER SESSION Expedition Conference 2nd International ISCRAM Conference on Information Systems for Crisis Response and Management
Notes Approved no
Call Number Serial 799
 

 
Author Tiberiu Sosea; Iustin Sirbu; Cornelia Caragea; Doina Caragea; Traian Rebedea
Title Using the Image-Text Relationship to Improve Multimodal Disaster Tweet Classification Type Conference Article
Year 2021 Publication ISCRAM 2021 Conference Proceedings – 18th International Conference on Information Systems for Crisis Response and Management Abbreviated Journal ISCRAM 2021
Volume Issue Pages 691-704
Keywords Multi-modal disaster tweet classification; Image-text coherence relationship prediction; ViLBERT
Abstract In this paper, we show that the text-image relationship of disaster tweets can be used to improve the classification of tweets from emergency situations. To this end, we introduce DisRel, a dataset which contains 4,600 multimodal tweets, collected during the disasters that hit the USA in 2017 and manually annotated with image-text coherence relationships, such as Similar and Complementary. We explore multiple models to detect these relationships and perform a comprehensive analysis of the robustness of these methods. Based on these models, we build a simple feature augmentation approach that can leverage the text-image relationship. We test our methods on two tasks in CrisisMMD: Humanitarian Categories and Damage Assessment, and observe an increase in the performance of the relationship-aware methods.
Address University of Illinois at Chicago; University Politehnica of Bucharest; University of Illinois at Chicago; Kansas State University; University Politehnica of Bucharest
Corporate Author Thesis
Publisher Virginia Tech Place of Publication Blacksburg, VA (USA) Editor Anouck Adrot; Rob Grace; Kathleen Moore; Christopher W. Zobel
Language English Summary Language English Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN 2411-3387 ISBN 978-1-949373-61-5 Medium
Track Social Media for Disaster Response and Resilience Expedition Conference 18th International Conference on Information Systems for Crisis Response and Management
Notes tsosea2@uic.edu Approved no
Call Number ISCRAM @ idladmin @ Serial 2365
 

 
Author Zhenke Yang; Leon J.M. Rothkrantz
Title Emotion sensing for context sensitive interpretation of crisis reports Type Conference Article
Year 2007 Publication Intelligent Human Computer Systems for Crisis Response and Management, ISCRAM 2007 Academic Proceedings Papers Abbreviated Journal ISCRAM 2007
Volume Issue Pages 507-514
Keywords Computer science; Computers; Fusion reactions; Context sensitive; Emergent interpretation; Emotion; Emotion modeling; Emotional quality; Environmental agency; Multi-modal; Scenario scripts; Quality control
Abstract The emotional qualities of a report play an important role in the evaluation of eyewitness reports in crisis centers. Human operators in the crisis center can use the amount of anxiety and stress detected in a spoken report to rapidly estimate the possible impact and urgency of a report and the appropriate response to the reporter. This paper presents ongoing work in automated multi-modal emotion sensing of crisis reports in order to reduce the cognitive load on human operators. Our approach is based on the work procedures adopted by the crisis response center of the Rijnmond environmental agency (DCMR) and assumes a spoken dialogue between a reporter and a crisis control center. We use an emotion model based on conceptual graphs that is continually evaluated as the dialogue continues. We show how the model can be applied to interpret crisis reports in a fictional toxic gas dispersion scenario.
Address Man-Machine-Interaction Group, Faculty of Electrical Engineering, Mathematics and Computer Science, Delft University of Technology, Mekelweg 4, 2628CD Delft, Netherlands
Corporate Author Thesis
Publisher Information Systems for Crisis Response and Management, ISCRAM Place of Publication Delft Editor B. Van de Walle; P. Burghardt; K. Nieuwenhuis
Language English Summary Language English Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN 2411-3387 ISBN 9789054874171; 9789090218717 Medium
Track EMOT Expedition Conference 4th International ISCRAM Conference on Information Systems for Crisis Response and Management
Notes Approved no
Call Number Serial 1123
 

 
Author Zijun Long; Richard McCreadie
Title Is Multi-Modal Data Key for Crisis Content Categorization on Social Media? Type Conference Article
Year 2022 Publication ISCRAM 2022 Conference Proceedings – 19th International Conference on Information Systems for Crisis Response and Management Abbreviated Journal ISCRAM 2022
Volume Issue Pages 1068-1080
Keywords Social Media Classification; Multi-modal Learning; Crisis Management; Deep Learning; BERT; Supervised Learning
Abstract The user base of social media platforms, like Twitter, has grown dramatically around the world over the last decade. As people post everything they experience on social media, large volumes of valuable multimedia content are being recorded online, which can be analysed to help with a range of tasks. Here we specifically focus on crisis response. The majority of prior works in this space focus on using machine learning to categorize single-modality content (e.g. the text of posts, or the images shared), with few works jointly utilizing multiple modalities. Hence, in this paper, we examine to what extent integrating multiple modalities is important for crisis content categorization. In particular, we design a pipeline for multi-modal learning that fuses textual and visual inputs, leverages both, and then classifies that content based on the specified task. Through evaluation using the CrisisMMD dataset, we demonstrate that effective automatic labelling for this task is possible, with an average of 88.31% F1 performance across two significant tasks (relevance and humanitarian category classification), while also analysing cases where unimodal and multi-modal models succeed and fail.
Address University of Glasgow; University of Glasgow
Corporate Author Thesis
Publisher Place of Publication Tarbes, France Editor Rob Grace; Hossein Baharmand
Language English Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN 2411-3387 ISBN 978-82-8427-099-9 Medium
Track Social Media for Crisis Management Expedition Conference
Notes Approved no
Call Number ISCRAM @ idladmin @ Serial 2472