|Title||Identifying Disaster Damage Images Using a Domain Adaptation Approach||Type||Conference Article|
|Abstract||Approaches for effectively filtering, in real time, useful situational awareness information posted by
eyewitnesses of disasters are greatly needed. While many studies have focused on filtering textual information,
research on filtering disaster images is more limited. In particular, there are no studies on the applicability of
domain adaptation for filtering images from an emergent target disaster when no labeled data is available for that
disaster. To fill this gap, we propose to apply a domain adaptation approach, called domain adversarial neural networks
(DANN), to the task of identifying images that show damage. The DANN approach has VGG-19 as its backbone
and uses adversarial training to find a transformation that makes the source and target data indistinguishable.
Experimental results on several pairs of disasters suggest that the DANN model generally yields results similar to
or better than those of a VGG-19 model fine-tuned on the source labeled data.|
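The adversarial training mentioned in the abstract hinges on DANN's gradient reversal layer (GRL): it is the identity in the forward pass, but flips (and scales) the gradient in the backward pass, so the feature extractor learns representations that confuse the domain classifier. A minimal sketch of that mechanism, with an illustrative toy example rather than the authors' implementation:

```python
import numpy as np

# Sketch of DANN's gradient reversal layer (GRL). The lambda coefficient
# and toy values below are illustrative assumptions, not from the paper.

def grl_forward(x):
    """Forward pass: the GRL is the identity, features pass through unchanged."""
    return x

def grl_backward(grad, lam=1.0):
    """Backward pass: the gradient from the domain classifier is negated
    and scaled by lambda, pushing the feature extractor to make source
    and target features indistinguishable."""
    return -lam * grad

# Toy check: features are unchanged going forward, gradients come back reversed.
features = np.array([0.5, -1.2, 3.0])
out = grl_forward(features)

upstream_grad = np.array([0.1, 0.2, -0.3])
reversed_grad = grl_backward(upstream_grad, lam=0.5)
```

In a full DANN model this layer sits between the shared feature extractor (here, VGG-19) and the domain classifier head, while the damage classification head receives the features directly.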
|Address||Department of Computer Science, Kansas State University, United States of America;Department of Computer Science, University of Illinois at Chicago, United States of America;Qatar Computing Research Institute, Hamad Bin Khalifa University, Qatar|
|Publisher||ISCRAM||Place of Publication||Valencia, Spain||Editor||Franco, Z.; González, J.J.; Canós, J.H.|
|Language||English||Summary Language||English||Original Title|
|Series Editor||Series Title||Abbreviated Series Title|
|Series Volume||Series Issue||Edition|
|Track||Expedition||Conference||16th International Conference on Information Systems for Crisis Response and Management (ISCRAM 2019)|