Google reveals the first AI-generated zero-day vulnerability: hackers aim to bypass 2FA for large-scale exploitation

ChainNewsAbmedia

Google Threat Intelligence Group (GTIG) revealed on May 11 its first in-the-wild case of zero-day exploitation "assisted by an AI model": a hacker group planned to launch "large-scale exploitation" against a widely used open-source web-based system administration tool, with the goal of bypassing that tool's two-factor authentication (2FA) login mechanism. Per a CNBC report, Google coordinated with the tool's vendor to patch the vulnerability before the attack went live.

The incident itself: how zero-day vulnerabilities were “manufactured” by AI

After analyzing the Python exploit script left behind by the hackers, GTIG concluded with "high confidence" that the script had been generated with help from an AI model. The determination was based on multiple LLM-typical tells found in the script:

A large number of tutorial-style docstrings and comments (unlike the usually concise code style of real hackers)

Hallucinated CVSS scores (a common fabrication behavior of AI models)

A structured, textbook-like Python programming style with a detailed help/usage menu

Clean template traces, such as an `_C` ANSI color class, "typical" of LLM training data

The vulnerability itself is a "high-level semantic logic flaw" originating from a hard-coded trust assumption. Google described this as exactly the type of vulnerability that LLMs are best at uncovering in code analysis. The real attack path: after obtaining a victim's legitimate account credentials, the hackers could use the flaw to bypass 2FA directly.
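Google has not published the flaw's details, but a "hard-coded trust assumption" in a 2FA flow often looks like the following minimal sketch: the server skips the second factor whenever a request appears to come from a "trusted" source, and that appearance can be forged. All names and values here are illustrative assumptions, not the real tool's code:

```python
# Hypothetical sketch of a hard-coded trust assumption in a 2FA flow.
# The real vulnerability's details were not disclosed; this only
# illustrates the general flaw class Google describes.
from typing import Optional

TRUSTED_SUBNET = "10.0.0."  # hard-coded "internal network" assumption

def check_password(username: str, password: str) -> bool:
    # Stand-in credential check for the sketch.
    return (username, password) == ("alice", "hunter2")

def login(username: str, password: str, otp: Optional[str],
          client_ip: str, valid_otp: str = "123456") -> bool:
    if not check_password(username, password):
        return False
    # FLAW: requests that *claim* an internal IP skip 2FA entirely.
    # If client_ip is derived from a spoofable source (e.g. an
    # X-Forwarded-For header), an attacker holding stolen credentials
    # bypasses the second factor without ever supplying an OTP.
    if client_ip.startswith(TRUSTED_SUBNET):
        return True
    return otp == valid_otp
```

The semantic nature of the bug (the code runs correctly; the trust model is wrong) is why it evades pattern-based scanners yet is the kind of thing an LLM reading the logic can surface.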

Google’s response: silent patching with the vendor; the attack was not formally launched

Google disclosed neither the name of the targeted open-source system administration tool nor the AI model vendor involved. After discovering the exploit script, GTIG worked with the tool's maintainers through a responsible-disclosure process and silently patched the vulnerability; Google assessed that this handling likely disrupted the operation before the hacker group could formally launch large-scale exploitation.

Google also did not explicitly name the attackers, describing them only as "cybercrime threat actors" without saying whether they have ties to nation-state actors.

Industry significance: AI x cybersecurity enters a new stage

Per media observation, this is the first instance Google has publicly documented of an AI model being weaponized in the wild for vulnerability discovery and exploit-code generation. Over the past six months, the market has debated whether "AI hacker capabilities" are being exaggerated, with both sides making arguments: proponents say open-source LLMs plus a dedicated dataset are enough to help find vulnerabilities, while skeptics counter that exploit code written by LLMs usually cannot work in real environments.

GTIG's assessment provides a concrete data point: LLMs can not only find vulnerabilities but also write "operational" code sufficient to enable large-scale exploitation. Cybersecurity researcher Ryan Dewhurst commented: "AI has accelerated vulnerability discovery, reducing the effort needed to identify, validate, and weaponize flaws."

Events that can be tracked next include: whether Google will continue to publicly release more AI-assisted hacker cases, whether other cybersecurity vendors (Microsoft Defender, CrowdStrike, Mandiant, etc.) will make similar observations, and whether LLM vendors (OpenAI, Anthropic, Google’s own) will build stricter detection mechanisms for vulnerability-analysis-type requests.

