Siska Fitrianie, & Leon J. M. Rothkrantz. (2009). Computed ontology-based situation awareness of multi-user observations. In J. Landgren & S. Jul (Eds.), ISCRAM 2009 – 6th International Conference on Information Systems for Crisis Response and Management: Boundary Spanning Initiatives and New Perspectives. Gothenburg: Information Systems for Crisis Response and Management, ISCRAM.
Abstract: In recent years, we have developed a framework of human-computer interaction that offers recognition of various communication modalities, including speech, lip movement, facial expression, handwriting/drawing, gesture, text and visual symbols. The framework allows the rapid construction of a multimodal, multi-device, and multi-user communication system within crisis management. This paper reports the approaches used in the multi-user information integration (input fusion) and multimodal presentation (output fission) modules, which can be used in isolation, but also as part of the framework. The latter is able to specify and produce context-sensitive and user-tailored output combining language, speech, visual-language and graphics. These modules provide a communication channel between the system and users with different communication devices. By employing an ontology, the system's view of the world is constructed from multi-user observations, and appropriate multimodal responses are generated.
|
Siska Fitrianie, Ronald Poppe, Trung H. Bui, Alin Gavril Chitu, Dragos Datcu, Ramón Dor, et al. (2007). A multimodal human-computer interaction framework for research into crisis management. In K. Nieuwenhuis, P. Burghardt, & B. Van de Walle (Eds.), Intelligent Human Computer Systems for Crisis Response and Management, ISCRAM 2007 Academic Proceedings Papers (pp. 149–158). Delft: Information Systems for Crisis Response and Management, ISCRAM.
Abstract: Unreliable communication networks, chaotic environments and stressful conditions can make communication during crisis events difficult. The current practice in crisis management can be improved by introducing ICT systems into the process. However, much experimentation is needed to determine where and how ICT can aid. Therefore, we propose a framework in which predefined modules can be connected in an ad hoc fashion. Such a framework allows for rapid development and evaluation of such ICT systems. The framework offers recognition of various communication modalities, including speech, lip movement, facial expression, handwriting and drawing, body gesture, text and visual symbols. It provides mechanisms to fuse these modalities into a context-dependent interpretation of the current situation and to generate appropriate multimodal information responses. The proposed toolbox can be used as part of a disaster and rescue simulation. We propose evaluation methods and focus on the technological aspects of our framework.
|