OpenAI Discovers Evidence Of AI-Powered Chinese Surveillance Tool Tracking Real-Time Discussion


OpenAI recently announced that it had discovered evidence of a Chinese security operation developing an AI-powered surveillance tool designed to track real-time discussion of anti-China sentiment on social media platforms in Western countries. According to the company’s researchers, they were able to detect this campaign—dubbed Peer Review—because one of the developers working on the tool used OpenAI’s technology to troubleshoot parts of its underlying computer code, The New York Times reported.

Ben Nimmo, a principal investigator at OpenAI, described this as the first instance where the company had uncovered an AI-driven surveillance tool of this nature.


“Threat actors sometimes give us a glimpse of what they are doing in other parts of the internet because of the way they use our AI models,” Nimmo said.

Concerns about the misuse of AI for surveillance, hacking, disinformation, and other harmful activities have been on the rise. While experts like Nimmo acknowledge that AI can facilitate such operations, they also emphasize that the same technology can be leveraged to detect and counter these threats. According to The New York Times, Nimmo’s team suspects that the Chinese surveillance tool is built on Llama, an AI model developed by Meta that was open-sourced and made accessible to developers worldwide.

OpenAI Reveals Another Chinese-Led Operation

In a comprehensive report detailing the exploitation of AI for deceptive and malicious activities, OpenAI also revealed another Chinese-led operation known as Sponsored Discontent. This campaign reportedly utilized OpenAI’s technology to generate English-language posts that targeted Chinese dissidents. Additionally, the same group is said to have used the company’s AI tools to translate articles into Spanish before circulating them in Latin America. These articles reportedly contained criticisms of U.S. politics and society.

In a separate finding, OpenAI researchers identified another campaign, believed to be operating from Cambodia, that employed AI-generated content to enhance a scam known as “pig butchering.” According to the report, AI-generated social media comments were used to lure men into fraudulent investment schemes through online interactions.
