## Australia Bans Government Use of Chinese AI Chatbot DeepSeek Over Data Security Concerns

*6 February 2025*

**Sydney, Australia –** The Australian government has banned the use of DeepSeek, a Chinese-developed open-source AI chatbot, by government agencies, citing concerns over data security and privacy. The decision, supported by several experts from the University of Sydney's School of Computer Science and Business School, highlights growing anxieties surrounding large language models (LLMs) and the risks associated with cross-border data transfer.

Dr. Jonathan Kummerfeld, an AI and natural language processing expert, praised the ban, stating that there is currently no control over how data provided to DeepSeek is used. While acknowledging the potential scientific benefits of the underlying technology, he argued that Australia can develop its own systems without taking on those risks.

Dr. Suranga Seneviratne, a privacy and cybersecurity expert, likened the situation to the earlier concerns surrounding TikTok, noting the risks inherent in LLMs: data privacy breaches, the potential for backdoors, and the possibility of inaccurate outputs ("hallucinations"). The open-source nature of DeepSeek, while offering transparency in its code, also complicates enforcement of a complete ban, since anyone can host their own instance.

Professor Kai Riemer, Professor of Information Technology and Organisation at the University of Sydney Business School, stressed that the ban is a matter of prudent data security, regardless of the chatbot's origin, and that government data should remain within secure Australian systems. He also compared DeepSeek and similar AI products to MP3s: they did not invent music, but they transformed its accessibility.

Professor Uri Gal, focusing on the organizational and ethical aspects of digital technologies, highlighted the sensitive nature of government information and the potential exposure of confidential data through DeepSeek’s extensive data collection practices. He also underscored the broader public risks associated with generative AIs, including misinformation, biases, and privacy breaches. The ban, he explained, serves as a preemptive measure to protect national security and public trust.

Dr. Armin Chitizadeh, an AI ethics expert, added concerns about the race to develop AI potentially leading to shortcuts in data protection and the tendency of users to blindly trust AI-generated content, even when inaccurate. He also pointed out the ability of AI to draw significant conclusions from seemingly insignificant data, raising concerns about the control and potential misuse of this inferred information.

The ban underscores the growing need for robust data protection safeguards and a cautious approach to the adoption of AI technologies, particularly those involving cross-border data flows and potentially sensitive government information.
