Dat T. Nguyen, Firoj Alam, Ferda Ofli, & Muhammad Imran. (2017). Automatic Image Filtering on Social Networks Using Deep Learning and Perceptual Hashing During Crises. In Tina Comes, Frédérick Bénaben, Chihab Hanachi, Matthieu Lauras, & Aurélie Montarnal (Eds.), Proceedings of the 14th International Conference on Information Systems for Crisis Response and Management (pp. 499–511). Albi, France: ISCRAM.
Abstract: The extensive use of social media platforms, especially during disasters, creates unique opportunities for humanitarian organizations to gain situational awareness and launch relief operations accordingly. In addition to textual content, people post overwhelming amounts of imagery on social networks within minutes of a disaster striking. Studies point to the importance of this online imagery content for emergency response. Despite recent advances in the computer vision field, automatic processing of crisis-related social media imagery remains a challenging task, because a majority of this content is redundant or irrelevant. In this paper, we present an image processing pipeline that comprises de-duplication and relevancy filtering mechanisms to collect and filter social media image content in real-time during a crisis event. Results obtained from extensive experiments on real-world crisis datasets demonstrate the significance of the proposed pipeline for optimal utilization of both human and machine computing resources.
Tom De Groeve, & Patrick Riva. (2009). Early flood detection and mapping for humanitarian response. In J. Landgren & S. Jul (Eds.), ISCRAM 2009 – 6th International Conference on Information Systems for Crisis Response and Management: Boundary Spanning Initiatives and New Perspectives. Gothenburg: Information Systems for Crisis Response and Management, ISCRAM.
Abstract: Space-based river monitoring can provide a systematic, timely and impartial way to detect floods of humanitarian concern. This paper presents a new processing method for such data, resulting in daily flood magnitude time series for any arbitrary observation point on Earth, with lag times as short as 4 hours. Compared with previous work, this method uses image processing techniques and reduces the time to obtain a 6-year time series for an observation site from months to minutes, with more accurate results and global coverage. This results in a daily update of major floods in the world, with an objective measure for their magnitude, useful for early humanitarian response. Because of its full coverage, the grid-based technique also allows the automatic creation of low-resolution flood maps only hours after the satellite passes, independent of cloud coverage.
Firoj Alam, Ferda Ofli, Muhammad Imran, & Michael Aupetit. (2018). A Twitter Tale of Three Hurricanes: Harvey, Irma, and Maria. In Kees Boersma, & Brian Tomaszewski (Eds.), ISCRAM 2018 Conference Proceedings – 15th International Conference on Information Systems for Crisis Response and Management (pp. 553–572). Rochester, NY (USA): Rochester Institute of Technology.
Abstract: People increasingly use microblogging platforms such as Twitter during natural disasters and emergencies. Research studies have revealed the usefulness of the data available on Twitter for several disaster response tasks. However, making sense of social media data is a challenging task for several reasons, such as the limitations of available tools for analyzing high-volume, high-velocity data streams. This work presents an extensive multidimensional analysis of textual and multimedia content from millions of tweets shared on Twitter during the three disaster events. Specifically, we employ various Artificial Intelligence techniques from the Natural Language Processing and Computer Vision fields, which exploit different machine learning algorithms to process the data generated during the disaster events. Our study reveals the distributions of various types of useful information that can inform crisis managers and responders as well as facilitate the development of future automated systems for disaster management.
Gkika, I., Pattas, D., Konstantoudakis, K., & Zarpalas, D. (2023). Object detection and augmented reality annotations for increased situational awareness in light smoke conditions. In Jaziar Radianti, Ioannis Dokas, Nicolas Lalone, & Deepak Khazanchi (Eds.), Proceedings of the 20th International ISCRAM Conference (pp. 231–241). Omaha, USA: University of Nebraska at Omaha.
Abstract: Innovative technologies powered by Computer Vision algorithms can aid first responders, increasing their situational awareness. However, adverse conditions, such as smoke, can reduce the efficacy of such algorithms by degrading the input images. This paper presents a pipeline of image de-smoking, object detection, and augmented reality display that aims to enhance situational awareness in smoky conditions. A novel smoke-reducing deep learning algorithm is applied as a preprocessing step, before state-of-the-art object detection. The detected objects and persons are highlighted in the user’s augmented reality display. The proposed method is shown to increase detection accuracy and confidence. Testing in realistic environments provides an initial evaluation of the method, both in terms of image processing and of usefulness to first responders.
Muhammad Imran, Firoj Alam, Umair Qazi, Steve Peterson, & Ferda Ofli. (2020). Rapid Damage Assessment Using Social Media Images by Combining Human and Machine Intelligence. In Amanda Hughes, Fiona McNeill, & Christopher W. Zobel (Eds.), ISCRAM 2020 Conference Proceedings – 17th International Conference on Information Systems for Crisis Response and Management (pp. 761–773). Blacksburg, VA (USA): Virginia Tech.
Abstract: Rapid damage assessment is one of the core tasks that response organizations perform at the onset of a disaster to understand the scale of damage to infrastructure such as roads, bridges, and buildings. This work analyzes the usefulness of social media imagery content for performing rapid damage assessment during a real-world disaster. An automatic image processing system, which was activated in collaboration with a volunteer response organization, processed ~280K images to understand the extent of damage caused by the disaster. The system achieved an accuracy of 76%, computed from the feedback of domain experts who analyzed ~29K system-processed images during the disaster. An extensive error analysis reveals several insights and challenges faced by the system, which are vital for the research community to advance this line of research.