OpenAI released its first report on how its artificial intelligence tools are being used in covert influence operations on Thursday. The report revealed that the company has disrupted disinformation campaigns originating from Russia, China, Israel, and Iran.
Bad actors used the company’s generative AI models to create and post propaganda content on social media platforms and to translate that content into various languages. However, none of these campaigns gained traction or reached a large audience.
With generative AI becoming increasingly prevalent, researchers and lawmakers are growing concerned about its potential to spread misinformation online. Companies like OpenAI, the maker of ChatGPT, are taking steps to address these concerns and regulate the use of their technology.
OpenAI’s detailed 39-page report documents how the company’s software has been used for propaganda. The report states that OpenAI researchers identified and banned accounts linked to covert influence operations involving state and private actors in the past three months.
In Russia, operations used OpenAI models to create and disseminate content critical of the United States, Ukraine, and the Baltic states. A Chinese influence operation generated multilingual text posted on Twitter and Medium, while an Iranian operation produced articles attacking the US and Israel in English and French.
A network of fake social media accounts run by an Israeli company also spread content, including posts denouncing US student protests as anti-Semitic. OpenAI removed known disinformation spreaders from its platform and continues to monitor and report on covert influence activities.
The report underscores how generative AI is being folded into disinformation campaigns to speed up content production, although AI is not the sole tool used for propaganda. Bad actors are leveraging AI to streamline the creation and dissemination of misinformation, raising concerns about its reach and influence.
Over the past year, bad actors have exploited generative AI to influence public opinion and elections globally. The use of AI tools has lowered barriers to creating disinformation campaigns, prompting companies like OpenAI to enforce stricter policies.
OpenAI intends to release regular reports on covert influence activities and enforce policies to combat misinformation on its platform.
Source: www.theguardian.com