New Papers on AI Use and Risk
Collaborating with other teams is one of the great things about our ANU Justice and Technoscience Lab. Two recent papers on the use of and risks associated with artificial intelligence (AI) capture that collaborative spirit. The first is a research note in the Australian Journal of Public Administration, written with colleagues in the UNSW Public Service Research Group; the second is a paper accepted to the 2026 ACM Conference on Fairness, Accountability, and Transparency (FAccT) in Montréal, Canada. Here are the details of each:
Navigating Complexity: A Relational Perspective on Generative AI Adoption in Government
Authors: Shibaab Rahman, James Connor, Helen Dickinson, Kate Henne, and Vanessa McDermott
Abstract: This research note presents preliminary findings from exploratory research examining how Australian public servants understand and use generative AI (GenAI) in government work. Drawing on 37 interviews across 22 agencies, we highlight the importance of a relational view of GenAI adoption, that is, the co-creation of meanings that emerges from the relations between users, technologies, and organisational contexts. Based on our empirical work, we offer three propositions as an initial research agenda: GenAI’s equivocal nature triggers fragmented sensemaking processes across agencies; GenAI’s affordances become entangled with bureaucratic institutional norms that require negotiation; and GenAI adoption necessitates reconstituting public sector legitimacy. These findings challenge narratives of straightforward technological integration, demonstrating that GenAI adoption is a fundamental renegotiation of core aspects of public sector work.
The 2025 OpenAI Preparedness Framework Does Not Guarantee Any AI Risk Mitigation Practices: A Proof-of-Concept for Affordance Analyses of AI Safety Policies
Authors: Sam Coggins, Alexander K. Saeri, Katherine A. Daniell, Kathryn Henne, Lorenn P. Ruster, Jessie Liu, and Jenny L. Davis
Abstract: Prominent AI companies are producing risk management frameworks as a type of voluntary self-regulation. As interventions, these policies purport to establish risk thresholds and safety procedures for the development and deployment of highly capable AI. Understanding which AI risks are covered and what actions are allowed, refused, demanded, encouraged, or discouraged by these risk management policies is a foundational step in assessing how they might govern the development and deployment of AI systems in practice. To investigate how such policies are operationalised, we introduce a transferrable AI policy analysis method based on the Mechanisms & Conditions model of affordances (M&C) and the MIT AI Risk Repository. We illustrate the utility of this method by applying it to OpenAI’s “Preparedness Framework Version 2” (April 2025). We find that OpenAI’s safety policy requests evaluation of a small minority of AI risks, encourages deployment of systems with “Medium” capabilities for unintentionally enabling “severe harm” (which OpenAI defines as >1000 deaths or >$100B in damages), and allows OpenAI’s CEO to deploy even more dangerous capabilities. These findings suggest that effective mitigation of AI risks requires more robust governance interventions beyond current industry self-regulation—of which there are well-established models in other domains. In addition, we illustrate how our affordance analysis provides a replicable method for evaluating what AI policies permit versus what they claim. Applied broadly, our AI policy analysis method will help clarify what is needed to mitigate AI-enabled harms, as well as what is needed to facilitate trustworthy and socially beneficial AI systems.
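For readers curious about what an affordance coding exercise of this kind might look like in practice, here is a minimal, hypothetical sketch in Python. It is not the authors' actual instrument: it simply assumes that policy clauses are hand-coded with one of the Mechanisms & Conditions affordance categories (request, demand, encourage, discourage, refuse, allow) and a risk domain label in the spirit of the MIT AI Risk Repository, and then tallies which domains a policy addresses and how. The clause texts, domain labels, and counts below are illustrative only.

```python
from collections import Counter
from dataclasses import dataclass

# Affordance categories from the Mechanisms & Conditions (M&C) model.
MECHANISMS = {"request", "demand", "encourage", "discourage", "refuse", "allow"}

@dataclass
class CodedClause:
    """A policy clause, hand-coded with an M&C mechanism and a risk domain."""
    clause: str       # excerpt or paraphrase of the policy text (illustrative here)
    mechanism: str    # one of MECHANISMS
    risk_domain: str  # e.g. a domain label in the style of the MIT AI Risk Repository

    def __post_init__(self):
        if self.mechanism not in MECHANISMS:
            raise ValueError(f"Unknown mechanism: {self.mechanism}")

# Hypothetical hand-coded clauses; NOT quotations from the Preparedness Framework.
coded = [
    CodedClause("Run capability evaluations before external deployment",
                "request", "Dangerous capabilities"),
    CodedClause("Deployment of Medium-capability systems may proceed",
                "allow", "Severe harm thresholds"),
    CodedClause("Leadership may approve deployment of higher-risk systems",
                "allow", "Governance and oversight"),
]

def coverage_summary(clauses):
    """Count how often each (risk domain, mechanism) pair appears in the coding."""
    return Counter((c.risk_domain, c.mechanism) for c in clauses)

if __name__ == "__main__":
    for (domain, mechanism), n in sorted(coverage_summary(coded).items()):
        print(f"{domain:30s} {mechanism:12s} {n}")
```

A tally like this makes it easy to spot risk domains for which a policy never demands or refuses anything, which is the kind of gap the paper's affordance analysis is designed to surface.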