Cheng Wang, Benjamin Bowes, Arash Tavakoli, Stephen Adams, Jonathan Goodall, & Peter Beling. (2020). Smart Stormwater Control Systems: A Reinforcement Learning Approach. In Amanda Hughes, Fiona McNeill, & Christopher W. Zobel (Eds.), ISCRAM 2020 Conference Proceedings – 17th International Conference on Information Systems for Crisis Response and Management (pp. 2–13). Blacksburg, VA (USA): Virginia Tech.
Abstract: Flooding poses a significant and growing risk for many urban areas. Stormwater systems are typically used to control flooding, but are traditionally passive (i.e., they have no controllable components). However, if stormwater systems are retrofitted with valves and pumps, policies for controlling them in real time could be implemented to enhance system performance over a wider range of conditions than the systems were originally designed for. In this paper, we propose an autonomous, reinforcement learning (RL)-based stormwater control system that aims to minimize flooding during storms. With this approach, an optimal control policy can be learned by letting an RL agent interact with the system in response to received reward signals. In comparison with a set of static control rules, RL shows superior performance on a wide range of artificial storm events. This demonstrates RL's ability to learn control actions through observation and interaction, a key benefit for dynamic and ever-changing urban areas.
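The learning loop the abstract describes (an agent choosing valve actions and receiving reward signals that penalize flooding) can be illustrated with a toy tabular Q-learning sketch. The discretized pond levels, the inflow distribution, the flooding penalty, and all constants below are illustrative assumptions, not the authors' model:

```python
# Minimal, hypothetical Q-learning sketch of RL-based stormwater valve control.
# States are coarse pond-level bins; actions open (1) or close (0) a valve;
# a negative reward penalizes reaching the flooding level.
import random

random.seed(0)
LEVELS = 5                      # pond level bins; LEVELS - 1 means flooding
ACTIONS = [0, 1]                # 0 = valve closed, 1 = valve open
Q = [[0.0, 0.0] for _ in range(LEVELS)]

def step(level, action, inflow):
    """Toy dynamics: open valve releases one unit; clamp level to valid bins."""
    outflow = 1 if action == 1 else 0
    nxt = max(0, min(LEVELS - 1, level + inflow - outflow))
    reward = -10.0 if nxt == LEVELS - 1 else 0.0   # flooding penalty
    return nxt, reward

alpha, gamma, eps = 0.5, 0.9, 0.1                  # learning rate, discount, exploration
for episode in range(500):
    level = 0
    for t in range(20):
        # epsilon-greedy action selection
        if random.random() < eps:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda x: Q[level][x])
        inflow = random.choice([0, 1, 1])          # stochastic storm inflow
        nxt, r = step(level, a, inflow)
        # standard Q-learning update
        Q[level][a] += alpha * (r + gamma * max(Q[nxt]) - Q[level][a])
        level = nxt
```

After training, the learned values favor opening the valve when the pond is nearly full, since the closed-valve action leads toward the flooding penalty.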
Nasik Muhammad Nafi, Avishek Bose, Sarthak Khanal, Doina Caragea, & William H. Hsu. (2020). Abstractive Text Summarization of Disaster-Related Documents. In Amanda Hughes, Fiona McNeill, & Christopher W. Zobel (Eds.), ISCRAM 2020 Conference Proceedings – 17th International Conference on Information Systems for Crisis Response and Management (pp. 881–892). Blacksburg, VA (USA): Virginia Tech.
Abstract: Abstractive summarization is intended to capture key information from the full text of documents. In the application domain of disaster and crisis event reporting, key information includes disaster effects, cause, and severity. While some research on information extraction in the disaster domain has focused on keyphrase extraction from short disaster-related texts like tweets, there is hardly any work that attempts abstractive summarization of long disaster-related documents. Following the recent success of Reinforcement Learning (RL) in other domains, we leverage an RL-based state-of-the-art approach in abstractive summarization to summarize disaster-related documents. RL enables an agent to find an optimal policy by maximizing some reward. We design a novel hybrid reward metric for the disaster domain by combining Vector Similarity and Lexicon Matching (VecLex) to maximize the relevance of the abstract to the source document while focusing on disaster-related keywords. We evaluate the model on a disaster-related subset of the CNN/Daily Mail dataset consisting of 104,913 documents. The results show that our approach produces more informative summaries and achieves higher VecLex scores compared to the baseline.
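The hybrid reward described here (vector similarity between summary and source, combined with coverage of disaster-related lexicon terms) can be sketched as follows. The equal 0.5/0.5 weighting, the averaged-embedding cosine formulation, and the lexicon-coverage fraction are illustrative assumptions, not the authors' exact VecLex definition:

```python
# Hypothetical sketch of a VecLex-style hybrid reward: a weighted sum of
# (a) cosine similarity between summary and source vectors, and
# (b) the fraction of disaster-lexicon terms the summary covers.
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors (0.0 if either is zero)."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def veclex_reward(summary_vec, source_vec, summary_tokens, lexicon, w=0.5):
    """Hybrid reward: w * vector similarity + (1 - w) * lexicon coverage."""
    vec_score = cosine(summary_vec, source_vec)           # semantic relevance
    hits = sum(1 for term in lexicon if term in summary_tokens)
    lex_score = hits / len(lexicon) if lexicon else 0.0   # keyword coverage
    return w * vec_score + (1 - w) * lex_score
```

For example, a summary whose vector matches the source exactly (cosine 1.0) but covers only one of two lexicon terms (coverage 0.5) would score 0.5 * 1.0 + 0.5 * 0.5 = 0.75 under these assumed weights.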