Dragos Datcu, & Leon J.M. Rothkrantz. (2007). The use of active appearance model for facial expression recognition in crisis environments. In B. Van de Walle, P. Burghardt, & K. Nieuwenhuis (Eds.), Intelligent Human Computer Systems for Crisis Response and Management, ISCRAM 2007 Academic Proceedings Papers (pp. 515–524). Delft: Information Systems for Crisis Response and Management, ISCRAM.
Abstract: In the past, crisis events were reported by local witnesses, who would make phone calls to the emergency services and describe by speech what they observed at the crisis site. Recent improvements in the area of human-computer interfaces make possible the development of context-aware systems for crisis management that support people in escaping a crisis even before external help is available on site. Apart from collecting people's reports on the crisis, these systems are assumed to automatically extract useful clues during typical human-computer interaction sessions. The novelty of the current research resides in the attempt to apply computer vision techniques to perform an automatic evaluation of facial expressions during human-computer interaction sessions with a crisis management system. The current paper details an approach for an automatic facial expression recognition module that may be included in crisis-oriented applications. The algorithm uses an Active Appearance Model for facial shape extraction and an SVM classifier for Action Unit detection and facial expression recognition.
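The final stage of the pipeline described above, mapping detected Action Units (AUs) to an expression label, can be sketched as follows. This is an illustrative stand-in only: the paper trains an SVM on Active Appearance Model shape features, whereas the snippet below uses a simple set-overlap rule, and the AU prototypes are common FACS conventions rather than values taken from the paper.

```python
# Illustrative sketch: a set-overlap rule over FACS Action Units (AUs)
# stands in for the paper's trained SVM. Prototypes are assumed, common
# FACS conventions, not values from the paper.

# Prototypical expressions as sets of FACS Action Unit numbers (assumed).
EXPRESSION_PROTOTYPES = {
    "happiness": {6, 12},          # cheek raiser + lip corner puller
    "surprise":  {1, 2, 5, 26},    # brow raisers + upper lid raiser + jaw drop
    "anger":     {4, 5, 7, 23},    # brow lowerer + lid/lip tighteners
    "sadness":   {1, 4, 15},       # inner brow raiser + lip corner depressor
}

def classify_expression(detected_aus):
    """Return the expression whose AU prototype best matches the detected
    AUs (Jaccard overlap) -- a stand-in for the paper's SVM stage."""
    def jaccard(proto):
        return len(proto & detected_aus) / len(proto | detected_aus)
    return max(EXPRESSION_PROTOTYPES, key=lambda e: jaccard(EXPRESSION_PROTOTYPES[e]))
```

A real system would replace the overlap rule with the trained classifier, but the interface (AU set in, expression label out) stays the same.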
|
Haiyan Hao, & Yan Wang. (2020). Hurricane Damage Assessment with Multi-, Crowd-Sourced Image Data: A Case Study of Hurricane Irma in the City of Miami. In Amanda Hughes, Fiona McNeill, & Christopher W. Zobel (Eds.), ISCRAM 2020 Conference Proceedings – 17th International Conference on Information Systems for Crisis Response and Management (pp. 825–837). Blacksburg, VA (USA): Virginia Tech.
Abstract: The massive crowdsourced data generated on social networking platforms (e.g. Twitter and Flickr) provide free, real-time data for damage assessment (DA) even during catastrophes. Recent studies leveraging crowdsourced data for DA have mainly focused on analyzing textual formats. Crowdsourced images can provide rich and objective information about damage conditions; however, they are rarely researched for DA purposes. Their highly varied content and loosely defined damage forms make crowdsourced images difficult to process and analyze. To address this problem, we propose a data-driven DA method based on multi-, crowd-sourced images, which includes five machine learning classifiers organized in a hierarchical structure. The method is validated with a case study investigating the damage inflicted on the City of Miami by Hurricane Irma. The outcome is then compared with a metric derived from NFIP insurance claims data. The proposed method offers a resource for rapid DA that supplements conventional DA methods.
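The hierarchical organization of classifiers can be pictured as a coarse-to-fine routing pipeline. The sketch below is a hypothetical illustration of that structure: the stage names, rule-based stand-ins, and dict-based image representation are all assumptions, not the paper's five trained models.

```python
# Hypothetical sketch of hierarchical image triage. Each stage is a
# rule-based stand-in for one of the paper's trained classifiers; an
# image is represented as a dict of already-extracted attributes.

def classify_damage(img):
    """Route an image through coarse-to-fine stages; return the label path."""
    if not img.get("disaster_related", False):   # stage 1: relevance filter
        return ["irrelevant"]
    if not img.get("shows_damage", False):       # stage 2: damage presence
        return ["relevant", "no_damage"]
    # stage 3: damage type (further classifiers would hang off each branch)
    damage_type = img.get("damage_type", "unknown")
    return ["relevant", "damage", damage_type]
```

For example, `classify_damage({"disaster_related": True, "shows_damage": True, "damage_type": "flood"})` returns `['relevant', 'damage', 'flood']`; the hierarchy lets each classifier specialize on the narrower problem its parent routes to it.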
|
Nilani Algiriyage, Raj Prasanna, Kristin Stock, Emma Hudson-Doyle, David Johnston, Minura Punchihewa, et al. (2021). Towards Real-time Traffic Flow Estimation using YOLO and SORT from Surveillance Video Footage. In Anouck Adrot, Rob Grace, Kathleen Moore, & Christopher W. Zobel (Eds.), ISCRAM 2021 Conference Proceedings – 18th International Conference on Information Systems for Crisis Response and Management (pp. 40–48). Blacksburg, VA (USA): Virginia Tech.
Abstract: Traffic emergencies and the resulting delays have a significant impact on the economy and society. Traffic flow estimation is one of the early steps in urban planning and managing traffic infrastructure. Traditionally, traffic flow rates were commonly measured using underground inductive loops, pneumatic road tubes, and temporary manual counts. However, these approaches cannot be used over large areas due to high costs, road surface degradation, and implementation difficulties. Recent advances in computer vision techniques, in combination with freely available closed-circuit television (CCTV) datasets, have provided opportunities for vehicle detection and classification. This study addresses the problem of estimating traffic flow using low-quality video data from a surveillance camera. To this end, we trained the YOLOv4 algorithm on five object classes (car, truck, van, bike, and bus). We also introduce an algorithm that counts vehicles using the SORT tracker by movement direction, such as ``northbound'' and ``southbound'', to obtain traffic flow rates. The experimental results for CCTV footage in Christchurch, New Zealand show the effectiveness of the proposed approach. In future research, we expect to train on larger and more diverse datasets that cover various weather and lighting conditions.
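The direction-aware counting step on top of the tracker can be sketched as follows. This is a minimal sketch under stated assumptions: tracks are given as per-vehicle lists of `(frame, cx, cy)` centroids (as a SORT-style tracker could produce), a virtual horizontal counting line is an assumed parameter, and image y grows downward, so decreasing y is treated as "northbound". None of these specifics are taken from the paper.

```python
# Sketch of direction-aware vehicle counting on top of tracker output.
# Assumed input: one list of (frame, cx, cy) centroids per tracked vehicle.
# A track is counted once, the first time it crosses the virtual line.

def count_directions(tracks, line_y):
    """Count tracks crossing a horizontal line at y=line_y, by direction.
    Image y grows downward, so y decreasing across the line = northbound."""
    counts = {"northbound": 0, "southbound": 0}
    for track in tracks:
        ys = [cy for _frame, _cx, cy in track]
        for prev, cur in zip(ys, ys[1:]):
            if prev >= line_y > cur:        # moved up across the line
                counts["northbound"] += 1
                break
            if prev < line_y <= cur:        # moved down across the line
                counts["southbound"] += 1
                break
    return counts
```

Dividing these counts by the duration of the footage then yields per-direction flow rates; per-class rates follow by keeping the detector's class label alongside each track.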
|
Leon J.M. Rothkrantz, & Zhenke Yang. (2009). Crowd control by multiple cameras. In J. Landgren, & S. Jul (Eds.), ISCRAM 2009 – 6th International Conference on Information Systems for Crisis Response and Management: Boundary Spanning Initiatives and New Perspectives. Gothenburg: Information Systems for Crisis Response and Management, ISCRAM.
Abstract: One of the goals of the crowd control project at Delft University of Technology is to detect and track people during a crisis event, classify their behavior, and assess what is happening. The assumption is that the crisis area is observed by multiple cameras (fixed or mobile). The cameras sense the environment and extract features such as the amount of motion. These features are the input to a Bayesian network with nodes corresponding to situations such as a terrorist attack, a fire, or an explosion. Given the probabilities of the observed features, the likelihood of the possible situations can be computed by reasoning. A prototype was tested in a train compartment and its surroundings. Forty scenarios, performed by actors, were recorded, and from the recordings the conditional probabilities were computed. The scenarios were designed as scripts, which proved to be a good methodology. The models, experiments and results are presented in the paper.
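The reasoning step described above can be sketched as a toy posterior computation over candidate situations. All numbers below are invented for illustration (the paper estimates its conditional probabilities from the forty acted scenarios), and the naive-Bayes independence assumption is a simplification of a full Bayesian network.

```python
# Toy posterior over crisis situations, in the spirit of the paper's
# Bayesian network. Priors and likelihoods are invented for illustration.

PRIOR = {"fire": 0.2, "explosion": 0.1, "normal": 0.7}

LIKELIHOOD = {  # P(feature observed | situation), assumed values
    "fire":      {"smoke": 0.90, "high_motion": 0.6},
    "explosion": {"smoke": 0.70, "high_motion": 0.9},
    "normal":    {"smoke": 0.05, "high_motion": 0.2},
}

def posterior(observed_features):
    """P(situation | features) via Bayes' rule, assuming the features are
    conditionally independent given the situation (naive Bayes)."""
    scores = {}
    for situation, prior in PRIOR.items():
        p = prior
        for feature in observed_features:
            p *= LIKELIHOOD[situation][feature]
        scores[situation] = p
    total = sum(scores.values())
    return {s: p / total for s, p in scores.items()}
```

With these invented numbers, observing both smoke and a high amount of motion shifts the posterior mass toward the crisis situations and away from "normal"; a real network would add intermediate nodes between raw camera features and situation hypotheses.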
|