|
Zahra Ashktorab, Christopher Brown, Manojit Nandi, & Aron Culotta. (2014). Tweedr: Mining Twitter to inform disaster response. In S. R. Hiltz, M. S. Pfaff, L. Plotnick, & P. C. Shih (Eds.), ISCRAM 2014 Conference Proceedings – 11th International Conference on Information Systems for Crisis Response and Management (pp. 354–358). University Park, PA: The Pennsylvania State University.
Abstract: In this paper, we introduce Tweedr, a Twitter-mining tool that extracts actionable information for disaster relief workers during natural disasters. The Tweedr pipeline consists of three main parts: classification, clustering and extraction. In the classification phase, we use a variety of classification methods (sLDA, SVM, and logistic regression) to identify tweets reporting damage or casualties. In the clustering phase, we use filters to merge tweets that are similar to one another; and finally, in the extraction phase, we extract tokens and phrases that report specific information about different classes of infrastructure damage, damage types, and casualties. We empirically validate our approach with tweets collected from 12 different crises in the United States since 2006.
|
|
|
Basanta Chaulagain, Aman Shakya, Bhuwan Bhatt, Dip Kiran Pradhan Newar, Sanjeeb Prasad Panday, & Rom Kant Pandey. (2019). Casualty Information Extraction and Analysis from News. In Z. Franco, J. J. González, & J. H. Canós (Eds.), Proceedings of the 16th International Conference on Information Systems for Crisis Response And Management. Valencia, Spain: ISCRAM.
Abstract: During unforeseen situations of crisis such as disasters and accidents, we usually have to rely on local news reports for the latest updates on casualties. The information in such feeds is in unstructured text format; however, structured data is required for analysis and visualization. This paper presents a system for automatic extraction and visualization of casualty information from news articles. A prototype online system has been implemented and tested with local news feeds of road accidents. The system extracts information regarding the number of deaths, injuries, date, location, and vehicles involved using techniques such as Named Entity Recognition, Semantic Role Labeling, and regular expressions. The entities were manually annotated and compared with the results obtained from the system. Initial results are promising, with good accuracy overall. Moreover, the system maintains an online database of casualties and provides information visualization and filtering interfaces for analysis.
|
|
|
J. A. dela Cruz, I. Hendrickx, & M. Larson. (2023). Towards XAI for Information Extraction on Online Media Data for Disaster Risk Management. In Jaziar Radianti, Ioannis Dokas, Nicolas Lalone, & Deepak Khazanchi (Eds.), Proceedings of the 20th International ISCRAM Conference (pp. 478–486). Omaha, USA: University of Nebraska at Omaha.
Abstract: Disaster risk management practitioners have the responsibility to make decisions at every phase of the disaster risk management cycle: mitigation, preparedness, response, and recovery. The decisions they make affect human life. In this paper, we consider the current state of the use of AI in information extraction (IE) for disaster risk management (DRM), which makes it possible to leverage disaster information in social media. We consolidate the challenges and concerns of using AI for DRM into three main areas: limitations of DRM data, limitations of AI modeling, and DRM domain-specific concerns, i.e., bias, privacy and security, transparency and accountability, and hype and inflated expectations. Then, we present a systematic discussion of how explainable AI (XAI) can address the challenges and concerns of using AI for IE in DRM.
|
|
|
Fedor Vitiugin, & Carlos Castillo. (2019). Comparison of Social Media in English and Russian During Emergencies and Mass Convergence Events. In Z. Franco, J. J. González, & J. H. Canós (Eds.), Proceedings of the 16th International Conference on Information Systems for Crisis Response And Management. Valencia, Spain: ISCRAM.
Abstract: Twitter is used for spreading information during crisis events. In this paper, we first retrieve event-related information posted in English and Russian during six disasters and sports events that received wide media coverage in both languages, using an adaptive information filtering method for automating the collection of about 100 000 messages. We then compare the contents of these messages in terms of 17 informational and linguistic features using a difference-in-differences approach. Our results suggest that posts in each language are focused on different types of information. For instance, almost 50% of the popular people mentioned in these messages appear exclusively in either the English messages or the Russian messages, but not both. Our results also suggest differences in the adoption of platform mechanics during crises between Russian-speaking and English-speaking users. This has important implications for data collection during crises, which is almost always focused on a single language.
|
|
|
Rajesh M. Hegde, B.S. Manoj, Bhaskar D. Rao, & Ramesh R. Rao. (2006). Emotion detection from speech signals and its applications in supporting enhanced QoS in emergency response. In B. Van de Walle, & M. Turoff (Eds.), Proceedings of ISCRAM 2006 – 3rd International Conference on Information Systems for Crisis Response and Management (pp. 82–91). Newark, NJ: Royal Flemish Academy of Belgium.
Abstract: Networking in the event of disasters requires new hybrid wireless architectures such as Wireless Mesh Networks (WMNs). Provisioning Quality of Service (QoS) in such networks, which are quickly deployed during emergencies, demands radical solutions. In this paper, we provide a new QoS approach for voice calls over wireless mesh networks during emergency situations. According to our scheme, the contention and back-off parameters are modified based on the emotion content in the voice streams. This paper also looks at methods for detecting emotion from an incoming voice call using the speech signal. The issues of interest in such situations are whether the caller is in a state of extreme panic, moderate panic, or a normal state of behavior. The communication network behavior should be modified to provide differentiated QoS for calls based on the degree of emotion. We use several features extracted from the speech signal, such as the range of pitch variation, the energy in the critical bark band, the range of the first three formant variations, and the speaking rate, among others, to discriminate between the three emotional states. At the back end, Gaussian mixture modeling is used to model the three emotional states of the speaker. Since a large number of features increases the computational complexity and time, a feature selection technique based on the Bhattacharyya distance is employed to select the set of features that gives maximum discrimination between the classes. This set of features is used to simulate an emotion recognition system. The results indicate a promising emotion detection rate for the three emotions. We also present early results on detecting the emotion content in speech and using it in the MAC-layer differentiated QoS provisioning scheme. Our scheme provides an end-to-end delay performance improvement for panicked calls of as much as 60% compared to normal calls.
|
|
|
Jingxian Wang, Lida Huang, Guofeng Su, Tao Chen, Chunhui Liu, & Xiaomeng Wang. (2021). UAV and GIS Based Real-time Display System for Forest Fire. In Anouck Adrot, Rob Grace, Kathleen Moore, & Christopher W. Zobel (Eds.), ISCRAM 2021 Conference Proceedings – 18th International Conference on Information Systems for Crisis Response and Management (pp. 527–535). Blacksburg, VA (USA): Virginia Tech.
Abstract: When a forest fire occurs, the commander cannot obtain information in time, and directing the rescue is like groping in the dark. To solve this problem, this research establishes a real-time forest fire display system based on UAV and GIS. The UAV is equipped with visible-light and thermal-imaging cameras to transmit forest fire scenes back in real time. Based on GIS, the system extracts the boundary of the fire field through image processing and 3D modeling technology and displays various kinds of forest fire information on the screen. We conducted several experiments to test the accuracy and the reliability of the system. The results show that accuracy, reliability, and real-time capability can be guaranteed in small-scale forest fires.
|
|
|
Kiran Zahra, Rahul Deb Das, Frank O. Ostermann, & Ross S. Purves. (2022). Towards an Automated Information Extraction Model from Twitter Threads during Disasters. In Rob Grace, & Hossein Baharmand (Eds.), ISCRAM 2022 Conference Proceedings – 19th International Conference on Information Systems for Crisis Response and Management (pp. 637–653). Tarbes, France.
Abstract: Social media plays a vital role as a communication source during large-scale disasters. The unstructured and informal nature of such short individual posts makes it difficult to extract useful information, often due to a lack of additional context. The potential of social media threads (sequences of posts) has not been explored as a source of adding context and more information to the initiating post. In this research, we explored Twitter threads as an information source and developed an information extraction model capable of extracting relevant information from threads posted during disasters. We used a crowdsourcing platform to determine whether a thread adds more information to the initial tweet and categorized the disaster-related information present in these threads into six themes: event reporting, location, time, intensity, casualty and damage reports, and help calls. For these themes, we created the respective thematic lexicons from WordNet. Moreover, we developed and compared four information extraction models trained on GloVe, word2vec, bag-of-words, and thematic bag-of-words representations to extract and summarize the most critical information from the threads. Our results reveal that 70 percent of all threads add information to the initiating post for various disaster-related themes. Furthermore, the thematic bag-of-words information extraction model outperforms the other models at preserving the highest number of disaster-related themes.
|
|
|
Nasik Muhammad Nafi, Avishek Bose, Sarthak Khanal, Doina Caragea, & William H. Hsu. (2020). Abstractive Text Summarization of Disaster-Related Documents. In Amanda Hughes, Fiona McNeill, & Christopher W. Zobel (Eds.), ISCRAM 2020 Conference Proceedings – 17th International Conference on Information Systems for Crisis Response and Management (pp. 881–892). Blacksburg, VA (USA): Virginia Tech.
Abstract: Abstractive summarization is intended to capture key information from the full text of documents. In the application domain of disaster and crisis event reporting, key information includes disaster effects, cause, and severity. While some research on information extraction in the disaster domain has focused on keyphrase extraction from short disaster-related texts such as tweets, there is hardly any work that attempts abstractive summarization of long disaster-related documents. Following the recent success of Reinforcement Learning (RL) in other domains, we leverage an RL-based state-of-the-art approach to abstractive summarization to summarize disaster-related documents. RL enables an agent to find an optimal policy by maximizing some reward. We design a novel hybrid reward metric for the disaster domain by combining Vector Similarity and Lexicon Matching (VecLex) to maximize the relevance of the abstract to the source document while focusing on disaster-related keywords. We evaluate the model on a disaster-related subset of the CNN/Daily Mail dataset consisting of 104,913 documents. The results show that our approach produces more informative summaries and achieves higher VecLex scores compared to the baseline.
|
|
|
Samiha El Hamali, N. Nouali-Taboudjnent, & Omar Nouali. (2011). Knowledge extraction by Internet monitoring to enhance crisis management. In M. A. Santos, L. Sousa, & E. Portela (Eds.), 8th International Conference on Information Systems for Crisis Response and Management: From Early-Warning Systems to Preparedness and Training, ISCRAM 2011. Lisbon: Information Systems for Crisis Response and Management, ISCRAM.
Abstract: This paper presents our work on developing a system for Internet monitoring and knowledge extraction from different web documents that contain information about disasters. The system is based on an ontology of the disaster domain for the knowledge extraction, and it presents all the information extracted according to the kind of disaster defined in the ontology. The system disseminates the extracted information (as a synthesis of the web documents) to users after filtering based on their profiles. The profile of a user is updated automatically by interactively taking into account their feedback.
|
|
|
Sara Barozzi, Jose Luis Fernandez Marquez, Amudha Ravi Shankar, & Barbara Pernici. (2019). Filtering images extracted from social media in the response phase of emergency events. In Z. Franco, J. J. González, & J. H. Canós (Eds.), Proceedings of the 16th International Conference on Information Systems for Crisis Response And Management. Valencia, Spain: ISCRAM.
Abstract: The use of social media to support emergency operators in the first hours of the response phase can improve the quality of the information available and awareness of ongoing emergency events. Social media contain both textual and visual information, in the form of pictures and videos. The problem related to the use of social media posts as a source of information during emergencies lies in the difficulty of selecting the relevant information among a very large amount of irrelevant information. In particular, we focus on the extraction of images relevant to an event for rapid mapping purposes. In this paper, a set of possible filters is proposed and analyzed with the goal of selecting useful images from posts and of evaluating how precision and recall are impacted. Filtering techniques, which include both automated and crowdsourced steps, have the goal of providing better-quality posts and easily manageable data volumes to both emergency responders and rapid mapping operators. The impact of the filters on precision and recall in extracting relevant images is discussed in the paper in two different case studies.
|
|
|
Schreiber. (2007). Automatic generation of sensor queries in a WSN for environmental monitoring. In B. Van de Walle, P. Burghardt, & K. Nieuwenhuis (Eds.), Intelligent Human Computer Systems for Crisis Response and Management, ISCRAM 2007 Academic Proceedings Papers (pp. 245–254). Delft: Information Systems for Crisis Response and Management, ISCRAM.
Abstract: The design of a WSN for environmental data monitoring is a largely ad-hoc human process. In this paper, we propose the automatic generation of queries for sensor data extraction, based on the collection of a number of parameters concerning the physical phenomenon to be controlled, the relevant physical variables, the types of sensors to be deployed and their allocation, the data collection frequencies, and other features.
|
|
|
Min Song, & Peishih Chang. (2008). Automatic extraction of abbreviation for emergency management websites. In F. Fiedrich, & B. Van de Walle (Eds.), Proceedings of ISCRAM 2008 – 5th International Conference on Information Systems for Crisis Response and Management (pp. 93–100). Washington, DC: Information Systems for Crisis Response and Management, ISCRAM.
Abstract: In this paper we present a novel approach to reduce information proliferation and support better information structure by automatically extracting abbreviations for emergency management websites. 5.7 gigabytes of web data from 624 emergency-management-related websites were collected, and a list of acronyms was automatically generated by the proposed system (AbbrevExtractor). As the first attempt to apply abbreviation extraction to this field, this work is expected to provide comprehensive and timely information for emergency management communities in emergency preparedness, training, and education. Future work is likely to involve more data collection and intelligent text analysis for dynamically maintaining and updating the list of acronyms and abbreviations.
|
|
|
Teun Terpstra, Richard Stronkman, Arnout De Vries, & Geerte L. Paradies. (2012). Towards a realtime Twitter analysis during crises for operational crisis management. In L. Rothkrantz, J. Ristvej, & Z. Franco (Eds.), ISCRAM 2012 Conference Proceedings – 9th International Conference on Information Systems for Crisis Response and Management. Vancouver, BC: Simon Fraser University.
Abstract: Today's crises attract great attention on social media, from local and distant citizens as well as from news media. This study investigates the possibilities of real-time and automated analysis of Twitter messages during crises. The analysis was performed by applying an information extraction tool to nearly 97,000 tweets that were published shortly before, during and after a storm hit the Pukkelpop 2011 festival in Belgium. As soon as the storm hit the festival, tweet activity increased exponentially, peaking at 576 tweets per minute. The extraction tool enabled analyzing tweets through predefined (geo)graphical displays, message content filters (damage, casualties) and tweet type filters (e.g., retweets). Important topics that emerged were 'early warning tweets', 'rumors' and the 'self-organization of disaster relief' on Twitter. Results indicate that automated filtering of information provides valuable information for operational response and crisis communication. Steps for further research are discussed.
|
|
|
Don J.M. Willems, & Louis Vuurpijl. (2007). Designing interactive maps for crisis management. In B. Van de Walle, P. Burghardt, & K. Nieuwenhuis (Eds.), Intelligent Human Computer Systems for Crisis Response and Management, ISCRAM 2007 Academic Proceedings Papers (pp. 159–166). Delft: Information Systems for Crisis Response and Management, ISCRAM.
Abstract: This paper describes the design, implementation, and evaluation of pen input recognition systems that are suited for so-called interactive maps. Such systems provide the possibility to enter handwriting, drawings, sketches and other modes of pen input. Typically, interactive maps are used to annotate objects or mark situations that are depicted on the display of video walls, handhelds, PDAs, or tablet PCs. Our research explores the possibility of employing interactive maps for crisis management systems, which require robust and effective communication of, e.g., the location of objects, the kind of incidents, or the indication of route alternatives. The design process described here is a mix of “best practices” for building perceptive systems, combining research in pattern recognition, human factors, and human-computer interaction. Using this approach, comprising data collection and annotation, feature extraction, and the design of domain-specific recognition technology, a decrease in error rates is achieved from 9.3% to 4.0%.
|
|
|
Yohann Chasseray, Anne-Marie Barthe-Delanoë, Stéphane Négny, & Jean-Marc Le Lann. (2021). Automated unsupervised ontology population system applied to crisis management domain. In Anouck Adrot, Rob Grace, Kathleen Moore, & Christopher W. Zobel (Eds.), ISCRAM 2021 Conference Proceedings – 18th International Conference on Information Systems for Crisis Response and Management (pp. 968–981). Blacksburg, VA (USA): Virginia Tech.
Abstract: As crises are complex systems, providing an accurate response to an ongoing crisis is not possible without ensuring situational awareness. Ongoing work on knowledge management and ontologies provides relevant and machine-readable structures for situational awareness and context understanding. Many metamodels that can be derived into ontologies, supporting the collection and organization of crucial information for Decision Support Systems, have been designed and are now used in specific cases. The next challenge in crisis management is to provide tools that can automate the population of these metamodels/ontologies. The aim of this paper is to present a strategy to extract concept-instance relations in order to feed crisis management ontologies. The presented system is based on a previously proposed generic metamodel for information extraction and is applied in this paper to three different case studies representing three different crises, namely the Ebola health crisis, the Fukushima nuclear crisis, and the Hurricane Katrina natural disaster.
|
|