
US foreign adversaries use ChatGPT with other AI models in cyber operations: Report

Malicious actors linked to U.S. foreign adversaries have been using advanced artificial intelligence (AI) models to carry out a range of cyber operations, according to a recent report by OpenAI. The report revealed that actors tied to China and Russia have been using OpenAI’s ChatGPT in combination with other AI models, such as China’s DeepSeek, to conduct phishing campaigns and covert influence operations.

The use of AI in cyber operations is not a new phenomenon, but the report sheds light on the increasing sophistication of these malicious actors. With the rapid advancements in AI technology, these actors are able to carry out their operations with greater efficiency and effectiveness, posing a significant threat to national security and the global economy.

OpenAI’s ChatGPT is an AI model that generates human-like text in response to a given prompt. It has gained popularity in recent years thanks to its ability to mimic human conversation and produce coherent, relevant responses. This also makes it a powerful tool for cyber operations, as it can be used to deceive and manipulate unsuspecting individuals.

The report found that actors linked to China and Russia have been using ChatGPT in conjunction with other AI models to conduct phishing campaigns. These campaigns involve sending fraudulent emails or messages to individuals, tricking them into revealing sensitive information or downloading malicious software. By using ChatGPT, these actors are able to create convincing and personalized messages, increasing the chances of success in their phishing attempts.

The report also uncovered the use of ChatGPT in covert influence operations. These operations involve spreading false information or propaganda with the aim of influencing public opinion or disrupting democratic processes. With ChatGPT, these actors can generate large volumes of content quickly, making it harder for authorities to detect and counter their actions.

The report also highlighted the use of China’s DeepSeek alongside OpenAI’s ChatGPT. DeepSeek is a large language model developed by the Chinese AI company of the same name. By pairing DeepSeek’s capabilities with ChatGPT’s text generation abilities, these actors are able to conduct more sophisticated and targeted cyber operations.

The use of AI models in cyber operations by malicious actors is a cause for concern, as it presents new challenges for cybersecurity experts and policymakers. Traditional methods of detecting and preventing cyber attacks may not be effective against AI-driven attacks, which calls for a more proactive and collaborative approach to addressing the issue.

OpenAI’s report also raised questions about the responsibility of AI developers in ensuring the ethical use of their technology. While AI has the potential to bring about significant benefits, it also has the potential to be misused for malicious purposes. As AI technology continues to advance, it is crucial for developers to consider the potential implications of their creations and take necessary precautions to prevent their misuse.

The report by OpenAI serves as a wake-up call for governments, organizations, and individuals to be more vigilant in protecting themselves against AI-driven cyber attacks. It also highlights the need for increased investment in AI research and development to stay ahead of these malicious actors.

In conclusion, the use of AI models, particularly ChatGPT, by malicious actors linked to U.S. foreign adversaries is a concerning development in the world of cybersecurity. The report by OpenAI sheds light on the increasing sophistication of these actors and the threats they pose. It is imperative for all stakeholders to work together to address this issue and ensure the responsible use of AI technology.
