Daniel Stein, Barbara Krausz, Jobst Löffler, Robin Marterer, Rolf Bardeli, Jochen Schwenninger, et al. (2012). Enriching an intelligent resource management system with automatic event recognition. In Z. Franco, J. Ristvej, & L. Rothkrantz (Eds.), ISCRAM 2012 Conference Proceedings – 9th International Conference on Information Systems for Crisis Response and Management. Vancouver, BC: Simon Fraser University.
Abstract: Event recognition systems have high potential to support crisis management and emergency response. Given the vast number of possible input channels, automatic processing of raw data is crucial. In this paper, we describe several components integrated into an overall intelligent resource management system, namely abnormal event detection in audio and video material, as well as automatic speech recognition within a public safety network. We elaborate on the challenges expected from real-life data and the solutions that we applied. The overall system, based on an Event-Driven Service-Oriented Architecture, has been implemented and partly integrated into the end users' infrastructures. The system has been running continuously for almost two years, collecting data for research purposes. © 2012 ISCRAM.
Keywords: Data handling; Information services; Information systems; Natural resources management; Resource allocation; Service oriented architecture (SOA); Abnormal event detections; Automatic speech recognition; Event recognition; IRM; TETRA channel; Management information systems
Siska Fitrianie, & Leon J. M. Rothkrantz. (2009). Computed ontology-based situation awareness of multi-user observations. In S. Jul & J. Landgren (Eds.), ISCRAM 2009 – 6th International Conference on Information Systems for Crisis Response and Management: Boundary Spanning Initiatives and New Perspectives. Gothenburg: Information Systems for Crisis Response and Management, ISCRAM.
Abstract: In recent years, we have developed a framework of human-computer interaction that offers recognition of various communication modalities, including speech, lip movement, facial expression, handwriting/drawing, gesture, text, and visual symbols. The framework allows the rapid construction of a multimodal, multi-device, and multi-user communication system for crisis management. This paper reports the approaches used in the multi-user information integration (input fusion) and multimodal presentation (output fission) modules, which can be used in isolation, but also as part of the framework. The latter is able to specify and produce context-sensitive and user-tailored output combining language, speech, visual language, and graphics. These modules provide a communication channel between the system and users with different communication devices. By employing an ontology, the system constructs its view of the world from multi-user observations and generates appropriate multimodal responses.
Keywords: Character recognition; Face recognition; Gesture recognition; Information systems; Speech recognition; Communication device; Communication modalities; Information integration; Multi-modal; Multi-user; Multiuser communication; Rapid construction; Situation awareness; Visual languages
Track: Open Track
Siska Fitrianie, Ronald Poppe, Trung H. Bui, Alin Gavril Chitu, Dragos Datcu, Ramón Dor, et al. (2007). A multimodal human-computer interaction framework for research into crisis management. In K. Nieuwenhuis, P. Burghardt, & B. Van de Walle (Eds.), Intelligent Human Computer Systems for Crisis Response and Management, ISCRAM 2007 Academic Proceedings Papers (pp. 149–158). Delft: Information Systems for Crisis Response and Management, ISCRAM.
Abstract: Unreliable communication networks, chaotic environments, and stressful conditions can make communication during crisis events difficult. The current practice in crisis management can be improved by introducing ICT systems into the process. However, much experimentation is needed to determine where and how ICT can aid. Therefore, we propose a framework in which predefined modules can be connected in an ad hoc fashion. Such a framework allows for rapid development and evaluation of ICT systems. The framework offers recognition of various communication modalities, including speech, lip movement, facial expression, handwriting and drawing, body gesture, text, and visual symbols. It provides mechanisms to fuse these modalities into a context-dependent interpretation of the current situation and to generate appropriate multimodal information responses. The proposed toolbox can be used as part of a disaster and rescue simulation. We propose evaluation methods, and focus on the technological aspects of our framework.
Keywords: Character recognition; Communication systems; Disasters; Human computer interaction; Speech recognition; Communication modalities; Evaluation methods; Facial Expressions; Multi-modal information; Multimodal human computer interaction; Multimodal system; Rescue simulation; Technological aspects; Face recognition