AI package supply chain hit by dual attacks: Mistral's package and a fake OpenAI model both compromised

ChainNews ABMedia

Two major supply-chain attacks on the AI developer tool ecosystem were reported on the same day, May 12: (1) Microsoft Threat Intelligence revealed that Mistral AI's PyPI package had been injected with malicious code; (2) a fake OpenAI model project on Hugging Face climbed to #1 on the trending chart and, within 18 hours, drew 244k downloads while stealing large numbers of account credentials. According to a Decrypt report, both incidents expose how vulnerable the AI developer ecosystem is to supply-chain infiltration.


Mistral AI package case: a two-stage attack disguised under the Hugging Face Transformers name

Microsoft Threat Intelligence disclosed on X on May 12 that malicious code had been injected into Mistral AI's package on PyPI (the Python package index):

Affected scope: mistralai PyPI package v2.4.6

Trigger method: executes automatically when the package is imported on a Linux system (see the sketch after this list)

Second-stage payload: downloads transformers.pyz from a remote server and runs it in the background

Naming trap: transformers.pyz deliberately imitates the name of the popular Hugging Face Transformers library

Actual function: steals developers' login credentials and access tokens; on machines whose IP addresses fall within Israeli or Iranian ranges, it also randomly deletes files
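To make the trigger concrete: Python runs everything at the top level of a package's __init__.py the moment the package is imported, which is why simply importing a poisoned release is enough. The sketch below is a benign illustration of that mechanism, not the actual malware; the package name example_pkg and the print stand-in are assumptions.

```python
# example_pkg/__init__.py -- hypothetical package name, benign illustration
import platform

def _second_stage():
    # In the reported attack, this step downloaded transformers.pyz from a
    # remote server and ran it in the background; here it only prints.
    print("second-stage payload would be fetched and executed here")

# Module-level code runs as a side effect of `import example_pkg` --
# the same import-time hook the poisoned release relied on.
if platform.system() == "Linux":
    _second_stage()
```

This is also why "just importing it to have a look" is never safe with an untrusted package.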

On May 13, Mistral confirmed the supply-chain attack, but emphasized that “Mistral infrastructure was not compromised, and the attack originated from an affected developer device.” The attack was attributed to the broader Shai-Hulud malware family (active since September 2025, targeting npm and PyPI open-source package supply chains).

Fake OpenAI Hugging Face case: a six-stage infostealer written in Rust

Meanwhile, on the AI model platform Hugging Face, a fake model project named “Open-OSS/privacy-filter” appeared, intentionally mimicking OpenAI’s Privacy Filter model released in April:

Cumulative downloads: 244k within 18 hours

Cumulative likes: 667 (657 of them suspected to have been inflated by bot accounts)

Trending rank: at one point climbed to #1 on the Hugging Face trending chart

Trigger instruction: the project tells users to run _start.bat (Windows) or loader.py (Linux/Mac); a defensive pre-run check is sketched after this list

Actual behavior: a six-stage infostealer written in Rust that harvests the following data:

Chrome/Firefox browser passwords and cookies

Discord tokens

Cryptocurrency wallet mnemonic phrases

SSH and FTP credentials

Screenshots of all screens
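A model repository that only ships weights has little reason to include batch files or standalone loader scripts, so a quick local inspection before following any "run this file" instruction is cheap insurance. The following is a minimal sketch, not HiddenLayer's detection method; the directory path and suffix list are assumptions to adapt.

```python
# Flag script-like files in a locally downloaded model folder before running anything.
from pathlib import Path

# Suffixes a weights-only model repo normally has no reason to ship.
SUSPICIOUS_SUFFIXES = {".bat", ".cmd", ".ps1", ".sh", ".py", ".pyz", ".exe", ".dll"}

def flag_executables(model_dir: str) -> list[Path]:
    """Return every file under model_dir with a script- or binary-like suffix."""
    return [
        p for p in Path(model_dir).rglob("*")
        if p.is_file() and p.suffix.lower() in SUSPICIOUS_SUFFIXES
    ]

if __name__ == "__main__":
    for path in flag_executables("./downloaded-model"):  # assumed local path
        print(f"review before use: {path}")
```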

The model project was uncovered by AI security company HiddenLayer, and Hugging Face has taken it down. In parallel, HiddenLayer also identified seven similar malicious model projects, some impersonating other popular AI models such as Qwen3 and DeepSeek.

Industry significance: the AI supply chain becomes a new attack surface

ChainNews' observation: three AI-related supply-chain incidents exposed this week (the Mistral PyPI package, the fake OpenAI model on Hugging Face, and the AI-manufactured zero-day exploit case disclosed by Google on May 11) show that the AI developer ecosystem has become a priority attack surface for hackers.

Common patterns across the three cases:

Attackers pose as legitimate AI tool vendors (PyPI packages, Hugging Face models, AI-manufactured exploit programs)

They target Web3 and AI developers, a group holding high-privilege tokens, crypto wallets, and cloud accounts

Theft and laundering paths spread quickly: the 244k downloads within 18 hours in the Hugging Face case show how fast the impact expands

Review mechanisms on large platforms (PyPI, Hugging Face) remain insufficient to promptly identify fake projects

For crypto and Web3 developers, these incidents reinforce the "social engineering + 6-month dwell time" threat described in CertiK's report released the same week, which found that North Korean hackers stole $2.06 billion in 2025. By 2026, attackers no longer need to hack exchanges directly; poisoning the open-source packages developers rely on is enough to indirectly obtain the corresponding keys and funds.

Practical defenses for individual developers: verify signatures and the publisher before installing packages; run newly downloaded AI models in a dedicated virtual machine; rotate exchange API keys regularly; and never store crypto wallet mnemonic phrases on network-connected devices. At the team level, establish an SBOM (software bill of materials) and a supply-chain signing workflow.
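For the "verify before installing" habit, pip already supports hash-pinned installs via `pip install --require-hashes -r requirements.txt`. The sketch below shows the same check done by hand, assuming the wheel was fetched with `pip download` and the expected SHA-256 came from a trusted channel; the file name and hash arguments are placeholders.

```python
# verify_wheel.py -- compare a downloaded wheel's SHA-256 against a trusted value.
# Usage: python verify_wheel.py <path-to-wheel> <expected-sha256-hex>
import hashlib
import sys

def sha256_of(path: str) -> str:
    """Stream the file in chunks so large wheels don't need to fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

if __name__ == "__main__":
    wheel_path, expected = sys.argv[1], sys.argv[2].lower()
    if sha256_of(wheel_path) != expected:
        sys.exit("hash mismatch: do not install this package")
    print("hash OK")
```

Only install the wheel (for example with `pip install ./<wheel>`) after the check passes.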

Developments worth tracking next include: the results of Mistral's investigation into the compromised developer device; whether Hugging Face introduces stricter review for its trending chart; and follow-up information on the other malicious model projects revealed by HiddenLayer (including fake versions of Qwen3 and DeepSeek).

This article, "Two supply-chain attacks against AI packages: both Mistral and the fake OpenAI model were infiltrated," first appeared in 鏈新聞 ABMedia (ChainNews).
