
North Korean Hackers ChatGPT Phishing: Deepfake ID Used in Cyber Espionage


North Korean hackers have used ChatGPT in a phishing campaign, generating a fake South Korean military ID to deceive victims. The Kimsuky group, linked to Pyongyang, was behind the attack, which shows how generative AI misuse is expanding beyond text generation. The incident underscores a worrying trend: AI-assisted cyber attacks are becoming more sophisticated, relying on deepfakes, forged documents, and well-planned social engineering.

TL;DR

Researchers in South Korea uncovered that the Kimsuky group used ChatGPT to create a forged deepfake ID for a fake military ID phishing campaign. Through prompt engineering, the attackers bypassed the model's restrictions and produced documents resembling official IDs. This fits a wider pattern of North Korean hackers misusing AI tools, including Claude and ChatGPT, for phishing, infiltration, and recruitment, tactics that support both espionage and sanctions evasion.

How the Forgery Was Carried Out

The campaign was exposed by Genians, a South Korean cybersecurity company. They discovered phishing emails carrying malicious links that appeared to come from a .mil.kr domain, mimicking the South Korean military. Attached was a forged deepfake ID, created through ChatGPT.

Normally, the model blocks attempts to generate government IDs. But the hackers demonstrated how prompt engineering could bypass those restrictions. To confirm, Genians researchers tried similar prompts. At first, ChatGPT refused, since reproducing government IDs is illegal in South Korea; after tweaking the wording, however, they were able to generate ID-like visuals, an example of how attackers probe and work around a model's limits.

Who They Targeted

The phishing attempt wasn’t random. Targets included South Korean journalists, researchers, and human rights activists focused on North Korea. By sending messages from a spoofed .mil.kr address and attaching a forged military card, attackers increased the credibility of the email.

This was not mass spam, but a precise cyber espionage campaign designed to trick trusted voices who shape narratives about North Korea. Exactly how many people were affected isn’t yet clear, but the precision indicates this was an intelligence-driven operation.

From Emails to Deepfake Documents

Phishing has always been about exploiting trust. In the past, crude emails with spelling mistakes were the giveaway. AI-powered phishing changes that: attackers can generate polished, culturally relevant emails and back them with deepfake documents that look official.

Deepfakes add a new dimension to hacking campaigns. Instead of stealing IDs, attackers can generate customized ones on demand. A fake military ID phishing attempt isn't just about malware delivery; it's about eroding the confidence people place in anything that looks "official."

Kimsuky’s Expanding Mission

The Kimsuky group, active since at least 2012, is known for targeting ministries, defense contractors, think tanks, and journalists. Their operations combine malware distribution with intelligence collection.

The U.S. Department of Homeland Security stated in a 2020 advisory that Kimsuky “is most likely tasked by the North Korean regime with a global intelligence-gathering mission.” This indicates that the group's activities are not limited to South Korea but are part of a worldwide espionage agenda aligned with Pyongyang's strategic goals.

AI Across the Attack Chain

According to Genians director Mun Chong-hyun, attackers are now using AI throughout the hacking process: planning scenarios, writing malware, building tools, and impersonating recruiters. The forged deepfake ID is just one element of this broader use of AI tools exploited in cyber espionage.

Beyond ChatGPT: A Pattern of AI Misuse

The Genians discovery in July fits into a larger pattern. In August, Anthropic revealed that North Korean hackers using its Claude AI tool had posed as developers and been hired by U.S. Fortune 500 companies. Claude helped them build fake identities, pass coding assessments, and even deliver technical work once inside, a tactic that provided both revenue and insider access.

Earlier, in February, OpenAI announced it had banned North Korea-linked accounts that were generating fraudulent résumés, cover letters, and social media posts. These were part of recruitment schemes meant to trick outsiders into unknowingly aiding North Korea's projects.

Together, these cases show an evolving playbook of AI-enabled cybercrime: AI-generated résumés used in fraud, forged IDs, and impersonation tactics.

The Larger Context

U.S. officials argue that North Korean hackers use cyber operations for two parallel purposes: gathering intelligence and raising funds. Cryptocurrency theft, phishing, and AI-assisted hacking schemes are all tools for sanctions evasion. The money supports Pyongyang's nuclear weapons development, while the espionage supports its strategic intelligence-gathering.

The integration of AI — from ChatGPT phishing to Claude-powered fake developer profiles — adds scale and efficiency to these efforts. It’s a warning that AI deepfakes in hacking are here to stay.

Why Defenses Need to Adapt

Spotting phishing by broken English is no longer enough. Clean, fluent emails backed by deepfake documents demand new defenses. Organizations must verify identities through trusted systems rather than relying on what lands in inboxes; one small example of that mindset is sketched below.
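
As a minimal illustration, and not something from the Genians report, the Python sketch below uses the third-party dnspython package (pip install dnspython) to check whether a sender's domain publishes SPF and DMARC records, two of the signals mail gateways use to catch spoofed senders. The domain name is a placeholder, not infrastructure from the reported campaign.

```python
# Sketch: look up SPF and DMARC policies for a sender domain.
# Assumes dnspython (pip install dnspython); "example.com" is a placeholder.
import dns.resolver

def txt_records(name: str) -> list[str]:
    """Return all TXT records for a DNS name, or an empty list."""
    try:
        answers = dns.resolver.resolve(name, "TXT")
        return [b"".join(rdata.strings).decode() for rdata in answers]
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer,
            dns.resolver.NoNameservers):
        return []

def check_sender_domain(domain: str) -> None:
    # SPF lives in a TXT record on the domain itself...
    spf = [r for r in txt_records(domain) if r.startswith("v=spf1")]
    # ...while DMARC lives in a TXT record on the _dmarc subdomain.
    dmarc = [r for r in txt_records(f"_dmarc.{domain}")
             if r.startswith("v=DMARC1")]
    print(f"{domain}: SPF {'found' if spf else 'MISSING'}, "
          f"DMARC {'found' if dmarc else 'MISSING'}")

if __name__ == "__main__":
    check_sender_domain("example.com")  # placeholder domain
```

A domain with no SPF or DMARC policy (or a permissive p=none DMARC policy) is easier to spoof outright, and even strict policies don't stop lookalike domains, so checks like this are a signal to weigh, not proof of authenticity.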

Security awareness training should now include examples of AI in cybercrime campaigns, helping employees recognize risks even when emails look authentic. Technical defenses must evolve too — from stronger email gateways to detection tools that flag AI-generated text and visuals.

Frequently Asked Questions (FAQs)

  1. Who is the Kimsuky hacking group?
    Kimsuky is a North Korean hacking unit active since at least 2012. The DHS has described the group as tasked with global intelligence gathering. It runs cyber espionage campaigns against governments, defense contractors, journalists, and researchers.
  2. How did they use ChatGPT in this case?
    The Kimsuky group manipulated ChatGPT prompts to generate a deepfake military ID. The forged document added credibility to the phishing emails, making recipients more likely to click malicious links.
  3. Who were the targets of this phishing campaign?
    Targets included journalists, researchers, and activists in South Korea. Emails even came from a spoofed .mil.kr address, imitating the military.
  4. Have North Korean hackers used AI tools before?
    Yes. In July, they used ChatGPT for the fake military ID phishing campaign. In August, Anthropic reported that they had misused Claude in job-infiltration schemes. And back in February, OpenAI banned North Korea-linked accounts used for fraudulent recruiting.
  5. Why is this development concerning?
    Because AI-powered phishing attacks with deepfake IDs are harder to spot. The case proves that using ChatGPT to create fake IDs is practical, not merely theoretical, raising risks for governments and companies alike.
  6. How does this tie into North Korea’s larger goals?
    Cybercrime funds sanctions evasion and nuclear programs. From crypto theft to AI-generated résumés used in fraud, every tactic supports Pyongyang’s strategy of survival and power projection.

Closing Thoughts

This case is one of the clearest examples yet of AI misuse in a cybercrime campaign. The Kimsuky group is evolving, moving from crude phishing to advanced attacks that combine ChatGPT-forged documents, Claude-assisted infiltration, and deepfake-backed lures.

For defenders, the lesson is simple: treat every attachment, résumé, or ID with skepticism. The rise of AI deepfakes in hacking means appearances are no longer proof of authenticity. In the age of AI, verify everything.

Author

  • Maya Pillai is a technology writer with over 20 years of experience. She specializes in cybersecurity, focusing on ransomware, endpoint protection, and online threats, making complex issues easy to understand for businesses and individuals.
