The Monthly Roundup with Dr Gatra Priyandita
ASPI Cyber, Technology & Security senior analyst Gatra Priyandita on AI norms for national security & the responsible use of AI + his top picks from the Digest this month.
This is a special edition of ASPI's Daily Cyber & Tech Digest, a newsletter that focuses on the topics we work on, including cybersecurity, critical technologies, foreign interference & disinformation. Sign up for it here.
Welcome to the third edition of The Daily Cyber & Tech Digest Monthly Update! Each month, an ASPI expert will share their top news picks and provide their own take on one key story. This month, Dr Gatra Priyandita, senior analyst at ASPI CTS, shares his take.
AI norms for national security and responsible state behaviour in cyberspace
One of the big news items this October was the decision by the Biden administration to impose guardrails and normative guidelines on US national security agencies’ use of artificial intelligence (AI). This new framework prohibits a series of actions, including AI applications that would violate civil rights protected under the US Constitution or that could automate the deployment of nuclear weapons. The framework is foundational because it provides clarity and establishes ethical boundaries for the use of AI in national security, ensuring that decision-making remains human-led.
The irresponsible use of AI and other emerging technologies continues to pose serious questions about what risks are ‘acceptable’ (for instance, consider Israel’s problematic use of AI to identify Hamas targets). In recent years, governments have intensified international cooperation to define normative constraints on the national security community’s use of AI. The EU released its guidelines on the military use of AI in 2021, and in 2023 Australia joined 46 other countries in endorsing a US-initiated political declaration on responsible military use of AI and autonomy. Over time, other states are likely to follow the US lead and impose similar domestic measures.
These discussions are part of a broader debate about responsible technology use worldwide. Closer to home, October marked a milestone in Southeast Asia. Singapore announced ASEAN’s release of its Norms Implementation Checklist aimed at operationalising the UN’s eleven norms of responsible state behaviour in cyberspace. While some norms—like refraining from attacking another state’s critical infrastructure during peacetime—may seem straightforward, states have significant latitude in interpreting their commitments. The checklist offers clearer guidance, helping officials understand the recommended steps to demonstrate adherence.
Those of us at ASPI focused on international security in cyberspace have been tracking how states work together to encourage responsible behaviour and prevent the misuse of technology. As I argued last week, the ASEAN norms checklist is just a starting point. The real challenge is convincing officials to self-regulate their use of cyber capabilities and emerging technologies consistently. This must be accompanied by global efforts to promote restraint, especially in countering states that misuse their technological advantage. After all, there is less incentive to restrain oneself when others are not doing the same.
Cyber diplomats face a dual challenge: encouraging adherence to the UN norms while also demonstrating responsible behaviour themselves. Increasingly, states are responding to irresponsible behaviour by ‘naming and shaming’—just weeks ago, for instance, China’s cyber diplomat for the first time indirectly accused the US of conducting cyberattacks against China.
As these dynamics evolve, the pursuit of a more secure cyberspace demands both a robust commitment to established norms and genuine transparency in state behaviour.
My must-reads
Fragmentation or Like-Mindedness: Rethinking Responsible Behavior in the Age of Multilateralism (CSIS)
This piece by the formidable Jim Lewis examines the future of cyber governance in an age of strategic competition. He calls for liberal democracies, the primary advocates of responsible state behaviour in cyberspace, to strategise in the face of declining support for past agreements, such as the UN’s eleven norms, and the principles that undergird the UN Charter.
Responsible AI principles in an ‘apolitical’ industry (Binding Hook)
States are not the only actors that need to behave responsibly in cyberspace – industry does too. In this article, Cat Easdon explores the challenges that companies face when developing AI principles, especially when they choose not to engage with societal or political issues.
Not specific to cyber norms, but this new report by the United Nations Office on Drugs and Crime covers the most pressing cybersecurity issue in Southeast Asia: cyber scams. The report highlights how emerging technologies, such as AI, are intensifying these scam campaigns. The cyber-diplomatic angle: countries and civil society organisations need to band together to combat this threat, but not everyone is playing ball.