|
Peter Berggren, Björn J.E. Johansson, Nicoletta Baroutsi, Isabelle Turcotte, & Sébastien Tremblay. (2014). Assessing team focused behaviors in emergency response teams using the shared priorities measure. In S.R. Hiltz, M.S. Pfaff, L. Plotnick, & P.C. Shih (Eds.), ISCRAM 2014 Conference Proceedings – 11th International Conference on Information Systems for Crisis Response and Management (pp. 130–134). University Park, PA: The Pennsylvania State University.
Abstract: The purpose of this work-in-progress paper is to report on the method development of the Shared Priorities measure to include content analysis, as a way of gaining a deeper understanding of teamwork in crisis/emergency response. An experiment is reported in which the performance of six trained teams is compared with the performance of six non-trained teams. The experiment was performed using an emergency response microworld simulation with a forest fire scenario. Dependent measures were simulation performance, the Crew Awareness Rating Scale (CARS), and content analysis. Trained teams performed better and scored higher on measures of team behaviors.
|
|
|
Nicoletta Baroutsi. (2016). Observing Sensemaking in C2: Performance Assessments in Multi-Organizational Crisis Response. In A. Tapia, P. Antunes, V.A. Bañuls, K. Moore, & J. Porto (Eds.), ISCRAM 2016 Conference Proceedings – 13th International Conference on Information Systems for Crisis Response and Management. Rio de Janeiro, Brazil: Federal University of Rio de Janeiro.
Abstract: A crisis can involve multiple organizations during high-pressure events, and it is up to the Command & Control (C2) unit to provide direction and coordination for the response (Brehmer, 2006). Hard as this problem is, there is still no "one solution": dissimilar organizations with very different methods seem to be able to master it. This paper presents the initial development of a new evaluation method for C2 in the context of multi-organizational crisis response. The data were collected at an emergency water exercise series conducted in several cities in Sweden. Each exercise involves multiple agencies and organizations, with up to 76 participants from 15 unique organizations/units. The analysis is brief, but shows that Sensemaking can be observed as it unfolds and that generic behavioral patterns can be found. The existence of generic and observable behavior patterns suggests the possibility of assessing, and perhaps even quantifying, Sensemaking performance in C2.
|
|
|
Nicoletta Baroutsi. (2018). A Practitioners Guide for C2 Evaluations: Quantitative Measurements of Performance and Effectiveness. In Kees Boersma, & Brian Tomaszewski (Eds.), ISCRAM 2018 Conference Proceedings – 15th International Conference on Information Systems for Crisis Response and Management (pp. 170–189). Rochester, NY (USA): Rochester Institute of Technology.
Abstract: Quantitative evaluations are valuable in striving for improvement and assuring quality. However, the field of Command & Control (C2) evaluation is hard to navigate, and it is difficult to find the correct measurement for a specific situation. A comprehensive Scoping Study was made concerning measurements of C2 performance and effectiveness. The lack of an existing appropriate framework for discussing C2 evaluations led to the development of the Crisis Response Management (CRM) Matrix, an analysis tool that assigns measurements to categories, each category displaying unique strengths, weaknesses, and trends. The analysis yielded results too rich for a single article; thus, this is the first of two articles covering the results. In this article, the Practitioners Guide focuses on results valuable for someone interested in evaluating C2. Each evaluation has specific requirements that, for best results, ought to be reflected in the chosen measurement.
|
|