Carolyn Huston, Jennifer Davis, Petra Kuhnert, & Andrew Bolt. (2023). Creating Trusted Extensions to Existing Software Tools in Bushfire Consequence Estimation. In V. L. Thomas J. Huggins (Ed.), Proceedings of the ISCRAM Asia Pacific Conference 2022 (pp. 25–34). Palmerston North, New Zealand: Massey University.
Abstract: Bushfire modelling has advanced with wildfire simulators such as Spark and Phoenix Rapidfire, which can generate plausible fire dynamics and simulations that decision-makers can easily explore. With extreme weather impacting Australian landscapes through droughts and heatwaves, it is becoming more important to make decisions rapidly from fire simulations. An element of this decision-making process is trust, whereby the decision-maker feels empowered to make decisions from models of complex systems like fire. We propose a framework for decision-making that makes use of a fire emulator, a surrogate version of Spark, to facilitate faster exploration of wildfire predictions and their uncertainties under a changing climate. We discuss the advantages and next steps of an emulator model using the mechanisms and conditions framework, a powerful vocabulary and design framework that builds in trust, allowing users of a technology to understand and accept the features of a system.
|
Linda Plotnick, Starr Roxanne Hiltz, Sukeshini Grandhi, & Julie Dugdale. (2018). Real or Fake? User Behavior and Attitudes Related to Determining the Veracity of Social Media Posts. In Kristin Stock, & Deborah Bunker (Eds.), Proceedings of ISCRAM Asia Pacific 2018: Innovating for Resilience – 1st International Conference on Information Systems for Crisis Response and Management Asia Pacific (pp. 439–449). Albany, Auckland, New Zealand: Massey University.
Abstract: Citizens and Emergency Managers need to be able to distinguish “fake” (untrue) news posts from real news posts on social media during disasters. This paper is based on an online survey conducted in 2018 that produced 341 responses from invitations distributed via email and through Facebook. It explores to what extent and how citizens generally assess whether postings are “true” or “fake,” and describes indicators of the trustworthiness of content that users would like. The mean response on a semantic differential scale measuring how frequently users attempt to verify the trustworthiness of news (a scale from 1-never to 5-always) was 3.37. The message characteristics citizens use most frequently are grammar and the trustworthiness of the sender. Most respondents would find an indicator of trustworthiness helpful, with the most popular choice being a colored graphic. Limitations and implications for assessments of trustworthiness during disasters are discussed.
|