US acknowledges AI is accelerating the war with Iran
Plus, OpenClaw adoption races ahead of security guardrails in China
Welcome to the latest edition of ASPI’s Cyber & Tech Digest.
Each week, ASPI curates and contextualises the most important developments in cyber, technology, and geopolitics — highlighting what matters and why.
This edition covers the period: 7 March 2026 to 13 March 2026.
Follow the Australian Strategic Policy Institute on Bluesky, LinkedIn, and X.
A quick note to readers:
We’re working to make the Digest as comprehensive and readable as possible, though this edition is on the longer side.
If you’d prefer shorter updates more frequently (for example, two or three editions across the week), we’d like to hear that. As outlined in our recent format update, we’ll soon introduce a Substack pledge option so readers can signal whether they’d support a more frequent release schedule. If there’s sufficient interest, we would look at moving to up to three editions per week, delivering key developments faster and in shorter bursts.
We also want to be transparent about our workflow: we use AI tools to assist with research and drafting, but every edition is reviewed, edited and curated by our team. When the Digest includes our own analysis or commentary, it is written by us.
Feedback is always welcome. You can reply directly to this email, leave a comment on the post, or contact us at aspicts@substack.com.
— The ASPI Cyber, Technology & Security Program
What We’re Tracking
The US acknowledges AI's role in accelerating the war with Iran
What happened: Admiral Brad Cooper, head of US Central Command, told Al Jazeera that US forces are using “advanced AI tools” in the war with Iran, while insisting that humans still make final decisions on what to strike. The Wall Street Journal reported that the US and Israel are using AI to accelerate intelligence analysis, target identification, mission planning, logistics and battle-damage assessment.
That speed is now extending beyond the battlefield. The Guardian reported that Iranian drones struck Amazon Web Services data centres in the UAE and Bahrain. Iranian state media claimed the strikes were intended to test whether those facilities were supporting enemy military or intelligence activity.
The conflict is also spilling into corporate networks and online information flows. The Wall Street Journal reported that a suspected Iran-linked cyberattack disrupted Stryker’s systems, while Rolling Stone reported that AI-generated and AI-edited war imagery is spreading widely online.
Why we’re tracking this: The novelty is not that AI has entered war. It is that officials are now openly presenting it as part of operational tempo, while outside reporting suggests it is shaping how targets are found, prioritised and assessed.
That makes the “human in the loop” claim harder to evaluate. The Wall Street Journal says the US and Israel have declined to explain exactly how they are using these systems. The sources leave open how much meaningful review remains once AI has already structured the options.
What people are saying:
Admiral Brad Cooper told Al Jazeera that AI helps leaders “make smarter decisions faster,” but that humans still make final strike decisions.
David Leslie of Queen Mary University of London told The Guardian that AI is compressing planning time and narrowing the window for human evaluation.
Rumman Chowdhury told Rolling Stone that deepfakes have reached a level of realism that most people cannot readily distinguish from authentic material.
My view: The key issue is not whether a human still signs off at the end of the chain. It is whether AI has already shaped the intelligence, priorities and options that make up that decision. If so, official claims about human oversight may describe a formal safeguard more than a substantive one. That does not make AI in war unusual or exceptional, but it does raise a question no one outside the military can currently answer: what does accountability look like when the chain of decisions leading to a strike is shaped by systems that cannot be independently audited?
— Stephan Robin, CTS
China embraces OpenClaw as Beijing tightens security controls
What happened: OpenClaw — an open-source AI agent created by Austrian developer Peter Steinberger that can take over a user's computer and autonomously complete tasks like sending emails, booking flights and drafting reports — has moved quickly from a developer tool to a commercial and consumer craze in China. As MIT Technology Review reports, early adopters are selling installation, tutoring and hardware services, while The Information describes founders racing to build new products on top of it. The trend has its own slang — "raising a lobster," after the tool's logo — and has spread well beyond the developer community to office workers, retirees and students (SCMP). OpenAI hired Steinberger in February.
That momentum has been amplified from above. Tencent, Alibaba, ByteDance, Baidu, MiniMax and Zhipu have launched hosting, deployment or OpenClaw-like services — with Tencent alone rolling out at least three products including WorkBuddy, QClaw and a free in-person setup event in Shenzhen that drew nearly a thousand people (Bloomberg; SCMP). Local governments in Shenzhen, Wuxi, Hefei and Suzhou have proposed subsidies of up to 10 million yuan for projects built around the tool, along with free computing credits, office space and accommodation for “one-person companies” (Reuters; SCMP).
Central authorities, meanwhile, have sharpened their response. The Ministry of Industry and Information Technology issued an early warning about misconfigured deployments (Reuters), followed by CNCERT publishing a second, more detailed warning flagging prompt injection attacks, accidental deletion of files through misinterpreted commands, and malicious plugins capable of stealing credentials (SCMP). Some state agencies, banks and state-owned firms have now been told to restrict or report installations, with the ban extending to personal devices on corporate networks and, in at least one case, to military families (Bloomberg). Universities have begun issuing outright prohibitions (SCMP), and the China Academy of Information and Communications Technology has launched a standards initiative for “Claw” agent products focused on transparent execution and manageable permissions (SCMP).
Why we’re tracking this: Local promotion, corporate opportunism and central risk control are colliding over a tool whose safe and durable uses are not yet clear. China’s AI ecosystem can localise and commercialise an open-source product in days. The tool’s unusually broad system access makes the security stakes harder to wave away.
What people are saying:
Jiang Yunhui told MIT Technology Review the agent is still a proof of concept, not yet likely to be transformative for average users.
Alfred Wu told SCMP that the split between local enthusiasm and central caution is predictable: local governments chase growth, the centre worries about data breaches.
Anna Wu of Van Eck Associates told Bloomberg that claims about one-person firms reshaping the workforce are premature without stronger usage data.
My view: China has generally been more willing than most governments to regulate AI risks early. But OpenClaw shows what happens when that instinct collides with intense competitive pressure. Local governments are offering millions in subsidies for the same tool that central agencies are banning in sensitive sectors, and the frenzy has moved so fast that the security warnings have barely kept pace. OpenClaw requires unusually broad system access and the threat surface is not hypothetical: users have already reported deleted emails, unauthorised purchases and credential exposure. Beijing's restrictions so far only reach state agencies, banks and the military. What happens to the millions of ordinary users and private firms still running the tool with no enforceable guardrails at all?
— Fergus Ryan, CTS
What We’re Watching
A weekly scan of notable developments we’re tracking across technology, policy, and geopolitics.
⸻
🚀 Strategic competition
Anthropic sued to block the Pentagon’s supply-chain-risk designation, saying the restriction followed its refusal to remove guardrails against the use of Claude for domestic surveillance and autonomous weapons. U.S. Defense Department officials later said there was little chance of reviving the deal and that existing projects using Anthropic models would move to alternative vendors over six months. Pentagon CTO Emil Michael said Claude’s built-in policy framework could influence defence systems. Michael has also been leading a wider push to bring more commercial AI suppliers into military work.
Anthropic told a federal court the dispute had already put hundreds of millions of dollars in public-sector revenue at risk as customers paused or renegotiated contracts and investors grew uncertain. More than 30 employees from OpenAI and Google DeepMind filed a court statement backing Anthropic after the designation. Another report on the filing said Jeff Dean and other researchers argued the move could chill debate over safe military uses of frontier AI.
Microsoft urged a federal judge to temporarily block the Pentagon’s restriction, warning it could disrupt existing Defense Department systems used by military contractors. Earlier in the week, Microsoft said Claude would remain available to non-defence customers through products including Microsoft 365, GitHub Copilot and AI Foundry. Google, Microsoft and Amazon also told customers Anthropic models would stay available for non-defence uses, with defence deployments expected to wind down over six months.
The Pentagon appointed computer scientist Gavin Kliger as Chief Data Officer, putting him in charge of coordinating the department’s AI work and liaising with major U.S. model companies. Sam Altman, meanwhile, said governments should remain more powerful than technology companies as OpenAI moved into the classified department work Anthropic had declined. At OpenAI, robotics and hardware leader Caitlin Kalinowski resigned over concerns about domestic surveillance and autonomous weapons tied to the company’s Pentagon agreement.
Michael Dell said technology companies cannot dictate how governments use their products once sold as the dispute over model guardrails widened beyond Anthropic. In Noahpinion, Noah Smith argued AI systems and autonomous agents should increasingly be treated like weapons because of their potential use in cyberattacks, biological attacks and other large-scale destruction.
Ukraine opened battlefield datasets to allied governments and companies so AI systems can train on continuously updated combat data, including millions of annotated drone images. Ukrainian forces also described widening use of armed uncrewed ground vehicles for attacks, resupply, evacuations and other frontline tasks, with most systems still operated by humans. In the war with Iran, the U.S. and Israel have been using AI to accelerate intelligence analysis, target identification and mission planning.
Amazon Web Services data centres in the UAE and Bahrain were struck by Iranian drones, disrupting services for millions and drawing attention to the role of commercial cloud infrastructure in conflict. A separate report said the war was imperilling more than $300 billion in planned Gulf AI spending across data-centre and semiconductor projects. Saudi Arabia, Qatar and the UAE, meanwhile, have been backing overland fibre routes to Europe to reduce dependence on Red Sea and Egyptian submarine-cable chokepoints.
GPS interference has been spreading beyond the Russia-Ukraine front into places such as the Strait of Hormuz and northern Europe, disrupting aviation, shipping and military operations. Engineers are now working on combinations of inertial, magnetic and visual-navigation systems as alternatives to GPS in denied environments.
⸻
🧠 AI models, agents & compute
Anthropic said it will open an office in Sydney as its fourth Asia-Pacific location after Tokyo, Bengaluru and Seoul, expanding support for enterprise, startup and research customers across Australia and New Zealand. The company also said it will initially use existing Australian data-centre space while considering longer-term local infrastructure, with expected customers including Canva, Quantium and Commonwealth Bank.
Anthropic launched the Anthropic Institute, combining internal teams studying AI safety, economics, governance and human interaction with advanced systems. Cofounder Jack Clark is moving from public policy to head of public benefit to lead the new organisation.
Nscale raised $2 billion at a $14.6 billion valuation as it expanded AI data-centre projects across Britain, Iceland, Norway, Portugal, Texas and Southeast Asia. Later in the week, an investigation said several UK AI infrastructure announcements tied to Nscale and CoreWeave rested on inflated or unclear commitments, and reporters found a proposed Nscale site near London undeveloped. Oracle, meanwhile, faced fresh scrutiny over debt-backed data-centre expansion after OpenAI reportedly cooled on a planned Stargate buildout in Abilene, Texas because the facility may rely on chips that could be outdated by launch.
Startup AMI, founded by former Meta chief AI scientist Yann LeCun, raised more than $1 billion at a $3.5 billion valuation just weeks after launch. In another account, the Paris-based company said it is building world-model systems with persistent memory and planning for manufacturing, robotics and biomedical uses.
Cursor passed $2 billion in annualised revenue while shifting to a war-time strategy centred on its own coding models and agent systems. OpenAI, meanwhile, has been reorganising teams to catch up with Anthropic’s Claude Code after Codex lost ground when the company focused on ChatGPT and multimodal work.
Simile and clients including CVS Health and Gallup have been testing AI-built digital twins for market research, using interview data, behavioural signals and purchasing information to simulate consumer responses. In Harvard Business Review, Boston Consulting Group executives said heavy oversight of AI tools and multi-agent systems can increase mental fatigue, errors and intent to quit.
China has been building state-backed robot training farms in places such as Wuhan to gather sensor, video and annotation data for humanoid-robot models. Gestala also raised $21.6 million to develop non-invasive ultrasound brain-computer interfaces, with plans for a first prototype and a manufacturing facility in China by the end of the year.
⸻
🛡 Cyber posture
In The Strategist, ASPI’s Gatra Priyandita argued Australia and Taiwan should share cyber-threat information and focus on digital dependencies, high-risk components and public-private coordination. In another piece, Priyandita argued AI is reshaping economic cyber-espionage by making training data, model architectures and fine-tuning methods high-value targets.
Finland’s Security and Intelligence Service warned that Russia and China continue extensive cyber espionage against government systems, technology firms and research institutions. Google’s Threat Intelligence Group later said 2025 set a record for enterprise zero-day exploitation, with China-linked groups especially active against routers, switches and other edge devices, and with commercial surveillance vendors accounting for more attributed activity than traditional state espionage groups.
U.S. investigators suspect China-linked hackers breached an internal FBI network holding metadata tied to domestic surveillance orders. A separate investigation found that a foreign hacker accessed Epstein-investigation files during a 2023 intrusion into an FBI server in New York, after a server in the bureau’s Child Exploitation Forensic Lab was left exposed while handling digital evidence.
Transport for London said a 2024 attack by Scattered Spider exposed personal data tied to roughly 10 million people and caused about £39 million in damage. Stryker later dealt with a major cyberattack claimed by Handala, disrupting laptops and phones across its network as the company worked with Microsoft on restoration. Separately, an eight-country operation disrupted the SocksEscort residential proxy network, seizing infrastructure and freezing cryptocurrency tied to fraud, ransomware and business email compromise.
Researchers said the Coruna iPhone hacking toolkit was likely built by L3Harris for Western intelligence agencies before leaking to Russian and later Chinese-linked actors. In The Strategist, ASPI’s James Corera and Jason Van der Schyff said NATO has approved configured commercial iPhones and iPads for information up to NATO Restricted, while arguing that carrier networks, cloud services and cryptographic key management still sit inside the security boundary.
President Donald Trump signed an executive order directing agencies to strengthen action against cybercrime, including reviews of operational, technical, diplomatic and regulatory tools and proposals to return seized funds to victims. The administration also released a six-pillar cybersecurity strategy focused on offensive operations, critical infrastructure, supply chains, emerging technologies and cyber workforce measures.
In The Strategist, Van der Schyff and Corera argue that for Chinese-made electric buses the central security issue lies in software updates, remote access and data flows rather than country of manufacture alone. In another piece, Corera and Van der Schyff argued Australia’s offshore wind, batteries and distributed energy rollout should be assessed as a software-defined technology stack rather than as isolated projects.
⸻
⚖️ Platform accountability
Meta removed several overseas-run Facebook pages that used AI-generated images and fabricated stories about Pauline Hanson and Gina Rinehart after questions about foreign-run pages targeting Australian audiences. Meta’s Oversight Board later said the company’s deepfake labelling system was inadequate, recommending stronger detection, clearer penalties and wider use of C2PA credentials during crises.
YouTube expanded its likeness-detection pilot to politicians, candidates and journalists, letting verified users scan uploads for facial matches and request removal through the platform’s privacy process. The rollout extends a system already used to identify deepfake impersonations on the service.
Meta disabled more than 150,000 accounts tied to Southeast Asian scam centres in an operation involving Thailand and partners including the U.S., the UK, Australia and Japan. The company also rolled out new scam warnings across Facebook, WhatsApp and Messenger, including alerts for suspicious friend requests, device-linking attempts and chats that match common fraud patterns.
CNN and the Center for Countering Digital Hate said tests of 10 major AI chatbots found eight provided guidance that could help teenage users plan violent attacks. Companies said safeguards had since improved, while former safety engineers said competitive pressure had slowed stronger protections.
⸻
🧒 Online harms & child safety
Indonesia announced a ban on social-media accounts for children under 16, covering platforms including YouTube, TikTok, Facebook, Instagram, Threads, X, Bigo Live and Roblox. A separate scan listed governments in Europe and Asia moving toward similar restrictions after Australia’s 2025 under-16 social-media limits.
Australia’s new online-safety codes took effect for R18+ games, certain websites and generative AI services, requiring age-assurance systems for violent, sexual and self-harm material. A poll of Australian teenagers later found about 70% still used social media despite the under-16 ban, often by changing account ages, using parents’ IDs or bypassing facial checks. Snapchat also refused to remove a 14-year-old’s account until a parent supplied identification, after the account listed the user as 25.
UK lawmakers rejected an amendment for an outright under-16 social-media ban and instead backed powers allowing ministers to restrict access to services or features. Days later, Ofcom and the Information Commissioner’s Office told major platforms to toughen age checks for under-13s, citing widespread reliance on self-declared ages.
WhatsApp began rolling out parent-linked accounts for children under 13 that limit the app to messaging and calls and remove features including Meta AI, Channels and Status. A second report said parents can manage contacts, groups and activity alerts through QR-linked, PIN-protected settings, while chats remain end-to-end encrypted.
U.S. state laws have been pushing platforms to verify the age of all users, often through selfies, facial scans or government IDs handled by third-party identity vendors. Privacy advocates said the systems could concentrate sensitive identity data and widen surveillance, while companies said they were needed to enforce child-safety rules.
⸻
🏛️ Government, procurement & public sector tech
The General Services Administration has been drafting AI procurement rules that would require suppliers to give the government an irrevocable licence to use models for any lawful purpose. In the U.S. Senate, staff have now been cleared to use ChatGPT, Gemini and Copilot for official work, subject to limits on sensitive or classified data.
The U.S. State Department swapped Anthropic’s Claude Sonnet 4.5 out of StateChat for OpenAI’s GPT-4.1 after a presidential directive ordered agencies to remove Anthropic technology. The change also rolled the chatbot’s training-data cutoff back to May 2024 from the June 2025 dataset used under Claude.
Employees from the Department of Government Efficiency used ChatGPT to identify National Endowment for the Humanities grants linked to diversity, equity and inclusion, producing a list of 1,477 projects that led to more than $100 million in cancellations. The cuts have since triggered lawsuits alleging unconstitutional discrimination and improper political control of the agency.
The U.S. Department of Transportation opened a pilot programme for early eVTOL operations as soon as June, ahead of full FAA certification. FCC chair Brendan Carr also criticised Amazon for challenging SpaceX’s satellite plans while Project Kuiper lags in launches, after regulators approved thousands more second-generation Starlink satellites.
Basel-Stadt suspended its electronic voting pilot after three USB keys failed to decrypt 2,048 ballots, prompting an external investigation and criminal proceedings. In Australia, funding for the proposed AI Accelerator CRC will not start flowing until 2028, despite the programme featuring in the government’s National AI Plan.
⸻
💰 Tech business & markets
Atlassian said it will cut about 10% of its workforce to fund AI and enterprise-sales investment. Another report said the restructuring also includes a chief technology officer change and falls heavily on software-engineering and R&D roles. Block chief executive Jack Dorsey said his company’s roughly 40% workforce reduction is part of rebuilding it around an AI-driven organisational model.
Darktrace named Ed Jennings its third permanent chief executive in 18 months as owner Thoma Bravo pushed a U.S. expansion and a planned $200 million investment there.
⸻
🧑⚖️ Courts, enforcement & regulation
Kalshi and Polymarket have been courting U.S. students with campus marketing, influencers and referral payments, drawing scrutiny over insider information, manipulation and problem gambling. For sports contracts, Polymarket signed up Palantir and TWG AI to flag suspicious trading and screen participants against banned-betting lists.
Leading the Future spent more than $1.3 million attacking New York assemblymember Alex Bores after he backed the RAISE Act for large AI companies. Across federal races, candidates also have been signalling support for AI and crypto industries to attract money from sector-backed super PACs, with the two sectors together positioned to spend nearly $250 million in the 2026 elections.
The U.S. Justice Department opened an investigation into Iran’s use of Binance to evade sanctions, interviewing people tied to transactions routed through a Hong Kong payments company and wallets linked to the Islamic Revolutionary Guard Corps and Houthi militants. Binance said the accounts were shut after its internal review.
⸻
🌏 Global policy
🇦🇺 Australia
The Australian Energy Market Commission proposed new grid rules for large data centres, requiring sites above 30MW to stay connected during voltage or frequency disturbances instead of dropping load during faults. Andrew Hastie also said Australia should use coal, gas, uranium and its landmass to attract AI data-centre and robotics investment, arguing the country could position itself as a secure location for infrastructure after strikes on Gulf cloud sites.
A leaked Business Council of Australia draft proposed redefining copyright law so AI companies could train on copyrighted material without paying creators by treating computational analysis as outside copyright’s scope. The Albanese government later reiterated it had ruled out weakening copyright protections for AI training, and the proposal did not appear in the council’s final submission.
🇨🇦 Canada
Canada reversed an earlier decision that would have forced TikTok’s local subsidiary to shut down, allowing the company to keep operating under new data-access controls, security gateways and independent third-party monitoring. The government said the decision followed a fresh national-security review.
🇷🇺 Russia
Authorities in Moscow kept mobile internet restrictions in place and appeared to test expanded controls over Russia’s internet infrastructure during Ukrainian drone threats, with outages affecting major carriers, public Wi-Fi, businesses and app-based services. Reports indicated government-approved platforms remained available during the disruptions.
That’s all for this week. For more timely analysis and commentary, check out The Strategist and ASPI’s Stop the World podcast — or our other Substack newsletters: