OpenAI has detected and shut down covert operations that misused ChatGPT to deceive the public. State-supported groups were found to be exploiting the AI for manipulation. With millions of users worldwide, ChatGPT has become a powerful tool for generating content at scale. OpenAI announced that organized groups had been abusing the model for deceptive purposes and that it has put an end to these operations.
OpenAI Dismantles Operations Misusing ChatGPT for Deception
After examining data from the past two years, OpenAI identified groups that were using ChatGPT deceptively. The company dismantled five state-supported covert operations originating from Russia, China, Iran, and Israel. These operations used AI models such as GPT-3.5 and GPT-4 to produce misleading content. While the true identities and intentions of these organizations were not disclosed, their goal was reported to be manipulating public opinion and influencing political outcomes.
According to the company’s statement, a Russian operation named “Doppelganger” created fake news headlines, turned them into social media posts, and prepared multilingual comments intended to undermine support for Ukraine. Another Russian group targeted Ukraine, Moldova, the United States, and the Baltic states.
The Chinese network known as “Spamouflage,” notorious for its deceptive activities on Facebook and Instagram, was expanding its operations using ChatGPT. The network used the AI model to conduct research and create multilingual content for social platforms. OpenAI emphasized that its investigative team is working to cut off access and funding to these organizations, collaborating with tech companies, NGOs, and governments in the process.
OpenAI’s proactive measures highlight the ongoing battle against misinformation and the importance of maintaining the integrity of AI systems like ChatGPT.