X’s pro-China takedown highlights a bigger transparency gap
Plus, France raids X’s offices over Grok, while SpaceX plots orbital AI data centres
Welcome to the latest edition of ASPI’s Cyber & Tech Digest.
Each week, ASPI curates and contextualises the most important developments in cyber, technology, and geopolitics — highlighting what matters and why.
This edition covers the period: 2 February 2026 to 6 February 2026.
Follow the Australian Strategic Policy Institute on Bluesky, LinkedIn, and X.
What We’re Tracking
X suspends alleged pro-China AI influence network
What happened: Reporting by Crikey detailed findings from Clemson University’s Media Forensics Hub, which identified 130 accounts on X posing as ordinary users in Australia, the United States, and the Philippines. Researchers said the accounts were pushing People’s Republic of China–aligned narratives and appeared to rely on AI-generated text.
The Australian cluster consisted of 27 hijacked accounts. According to Crikey, these profiles mixed posts about local community life with pro-China messaging. In November, accounts across all three countries coordinated to amplify allegations against Philippines President Ferdinand Marcos Jr, drawing on content linked to fugitive ex-congressman Zaldy Co.
A day after the initial report, Crikey reported that X had suspended the network following its inquiries, including all 27 Australian accounts.
Why we’re tracking this: The researchers told Crikey the network shared technical and behavioural similarities with CyberCX’s Green Cicada operation identified in 2024, pointing to continuity in tactics rather than a one-off experiment.
The episode sits within a much larger enforcement picture elsewhere. In January 2026, the Google Threat Analysis Group published its Q4 2025 bulletin, reporting the termination of more than 10,000 PRC-linked YouTube channels in a single quarter as part of a long-running investigation into coordinated inauthentic behaviour tied to China.
What people are saying:
Ella Murray of Clemson University’s Media Forensics Hub told Crikey the network functioned as “an AI-powered troll farm” adopting local personas and intervening in Western political conversations “at a frightening scale.”
X did not respond to Crikey’s questions about the network or the suspensions.
My view: This is careful, credible work by Clemson University’s Media Forensics Hub, and effective reporting by Crikey, with a clear outcome in the suspension of 27 stolen Australian accounts. But the scale matters. Compared with the more than 10,000 PRC-linked YouTube channels the Google Threat Analysis Group says it removed in a single quarter of late 2025, this operation is small. That contrast sharpens the problem on X, where executives such as Nikita Bier have made claims about interference involving millions of accounts without publishing a methodology note or sharing datasets for independent verification. As I wrote in The Strategist this week, if platforms are going to assert state-level interference at that magnitude, credibility depends on evidence that external researchers can scrutinise.
— Fergus Ryan, CTS
What We’re Watching
A weekly scan of notable developments we’re tracking across technology, policy, and geopolitics.
⸻
🧭 Geopolitics and compute
SpaceX acquired xAI in an all-stock deal that reportedly sets up a “mega IPO” and fuses launch capacity with AI compute ambitions. SpaceX also sought FCC approval for solar-powered orbital “AI data centers,” pitching satellites as a lower-impact alternative to terrestrial buildouts. Ukraine meanwhile said Starlink terminals used by Russian forces were deactivated, disrupting communications and assault operations.
India is offering tax holidays through 2047 to lure global AI and cloud workloads into Indian data centres, tying incentives to offshore revenue and data-centre operating structures. Alphabet is also plotting a major expansion in Bangalore as US visa restrictions make hiring harder. Big Tech meanwhile has poured tens of billions of dollars into India’s data centres and AI infrastructure as incentives and policy settings pull investment onshore.
New South Wales launched Australia’s first parliamentary inquiry into data centres, examining whether rapid approvals underestimated energy and water impacts. Rural communities in the United States are also trying to block or slow large AI data-centre projects over land, power, water and local costs.
Apple faces margin pressure as the AI infrastructure boom tightens supply and raises component prices for chips and memory used in iPhones.
⸻
🏛️ State power
Abu Dhabi–backed entities bought a secret 49% stake in World Liberty Financial, a Trump-family crypto venture, as Sheikh Tahnoon bin Zayed Al Nahyan reportedly lobbied the US for advanced AI chips.
China pushes AI firms to move fast while complying with expanding rules on information control, algorithm disclosures and platform gatekeeping.
Capgemini will sell its US government unit after scrutiny of its work with ICE, citing limits on its ability to oversee classified contracts.
Britain will work with Microsoft and academics to build deepfake detection tools and standards as non-consensual synthetic content spreads.
Egypt moves to ban Roblox as part of a child-safety push that would involve regulators and telecoms authorities.
⸻
⚖️ Regulation and courts
The US Senate unveils the bipartisan SCAM Act, aiming to force platforms to verify advertisers and curb fraudulent ads.
French prosecutors raid X’s Paris offices as investigations widen into its algorithms and AI-generated sexualised images linked to Grok, and summon Elon Musk and Linda Yaccarino for April hearings. UK authorities also open a fresh investigation into Grok.
Australia’s eSafety Commissioner says global scrutiny of Elon Musk’s Grok hit a “tipping point” after the French raid, as regulators coordinate probes into sexualised deepfake content. xAI’s Grok can still generate sexualised images despite announced safeguards, raising legal and financial risk as investigations spread.
US states led by Colorado appeal remedies in the Google Search antitrust case, arguing the court’s restrictions are too weak.
The Administrative Review Tribunal partially reverses privacy regulator findings against Bunnings, backing facial recognition as a response to serious retail crime, but still faults the retailer’s customer notice and transparency failures.
⸻
📱 Platforms and speech
Snapchat blocks more than 415,000 Australian accounts it says belong to under-16s, while warning age-assurance systems still have “significant gaps.” Snapchat also says eSafety rejected its proposal to restrict under-16s to messaging and calling, because the regulator can only assess services “as they currently exist.”
TikTok restores US service after cold weather disrupted power at an Oracle-operated data centre, an outage that broke posting, discovery and real-time engagement counts.
Indonesia lifts its ban on Grok after X Corp provided conditional assurances on misuse prevention, with re-blocking threatened for future violations.
Meta lets illegal offshore crypto-gambling promotions by an Australian influencer remain online despite user reports and warnings about ACMA penalties.
⸻
🧠 AI and safety
Civitai still hosts older deepfake “bounties” and related files targeting real women, despite announcing a deepfake ban in May 2025.
Grok can still generate sexually explicit images of real men through creative prompting despite safeguards.
Researchers find potentially disempowering chatbot responses in roughly 1 in 50–70 conversations, measuring risk potential rather than confirmed harm.
Anthropic wrestles internally with balancing speed and safety as it scales its products, transparency practices and fundraising. Anthropic also updates Claude Opus 4.6 to handle more complex financial research, pushing beyond coding into higher-value enterprise work.
OpenAI runs a controlled ChatGPT ads beta with a reported $200,000 minimum advertiser commitment. Anthropic meanwhile debuts a Super Bowl ad pitching Claude as ad-free and taking aim at OpenAI’s advertising move.
Scientists use the EU’s Destination Earth “digital twin” to sharpen climate forecasts with AI and supercomputers, as the programme enters a new phase in June.
Researchers warn AI conferences are being hit by a flood of low-quality “slop” papers and reviews, pushing organisers to tighten submission rules.
Beijing enlists AI to scale Traditional Chinese Medicine across diagnostics, prescriptions and research, while experts caution diagnosis remains context-dependent.
The NSW Government releases its 2026–2028 Cyber Security Strategy, adding a new assurance framework and reinforcing 24-hour incident reporting.
The eSafety Commissioner reports major tech firms still have significant gaps in detecting and preventing online child sexual abuse, particularly in live video services.
Charges for image-based abuse are rising in Australia as legal gaps persist, highlighting how technology-enabled harms are evolving.
⸻
💸 Money and power
Investors wipe roughly $300 billion off software and data stocks as new AI tools intensify fears that core services will be automated.
Elon Musk considers mergers and a potential SpaceX IPO to finance the capital demands of AI expansion as xAI reportedly burns around $1 billion a month.
Tim Cook vows to lobby US lawmakers on immigration as Apple relies on global talent pipelines.
OpenAI shifts resources toward ChatGPT, prompting senior staff exits as the organisation transitions toward a more commercially driven operating model.
Justice Department records show the “Epstein files” map extensive contacts between Jeffrey Epstein and tech leaders, without establishing criminal wrongdoing by those named. New files also reveal Epstein invested in or sought access to startups like Coinbase and Jawbone after his 2008 conviction.
Newly released emails show Epstein advising publicist Masha Bucher as she built Day One Ventures and moved into venture capital.