As AI continues to evolve, its application in defence raises critical ethical questions. Ensuring the responsible integration of AI technologies within the Australian Defence Force (ADF) demands strict adherence to ethical standards. How, then, can policymakers and public administrators ensure that the deployment of these technologies aligns with the ADF's strategic objectives and ethical standards, as well as international law? Further, what are the potential impacts of AI on decision-making processes and the dynamics of warfare?
In this first Work with Purpose episode of 2024, host David Pembroke is joined by two leading Australian experts in AI ethics and military applications. Professor Toni Erskine from the ANU Coral Bell School explores the moral agency of AI-enabled military tools and what they mean for future use-of-force decisions. Adjunct Professor Kate Conroy from the Centre for Robotics, School of Electrical Engineering and Robotics at Queensland University of Technology, examines the ethical challenges of AI deployment in military and civilian contexts. The conversation covers the complex environment of AI in defence, shedding light on autonomy in weapon systems and emphasising the importance of balancing technological progress with ethical responsibility. Kate Conroy speaks in her personal capacity; her views are her own.
Trigger warning: This podcast discusses topics related to war and warfare, which some listeners might find distressing. If you need someone to speak with, do not hesitate to contact Beyond Blue’s 24-hour support via 1300 22 4636.
Discussed in this episode:
- Ethical use of AI in the Australian Defence Force
- Challenges of integrating AI into military strategies
- How AI changes modern warfare
- The balance between technological advancement and ethical responsibility
- The relationship between humans and AI in the context of defence
- The future of AI in defence decision-making
Show notes:
- A method for ethical AI in Defence | Defence Science & Technology Group
- Australia’s System of Control and applications for Autonomous Weapons Systems | Australian Government
- Bad, mad and cooked: Moral responsibility for civilian harm in human-AI military teams | Dr Kate Conroy née Devitt
- How might AI affect the trustworthiness of public service delivery | Department of the Prime Minister and Cabinet
- Ethical use of AI in the workplace – AI WHS Scorecard | NSW Government
- Systems of Control | UNODA
- AI, automated systems, and future use-of-force decision making: Anticipating effects | Professor Toni Erskine