Description

NVIDIA has rolled out a critical security update for its open-source Megatron-LM framework after two serious vulnerabilities, tracked as CVE-2025-23264 and CVE-2025-23265, were uncovered. Attackers can exploit the flaws to run malicious code on systems using Megatron-LM versions earlier than 0.12.0.

Megatron-LM is widely used for training large transformer-based neural networks and supports enterprise AI, high-performance computing, and research workloads. Both vulnerabilities stem from inadequate input validation in a Python component of the framework. An attacker who supplies a specially crafted malicious file can trigger code injection, leading to remote code execution, privilege escalation, unauthorized data access, or data tampering. According to NVIDIA's security advisory, the flaws can be exploited with minimal effort and without user interaction, which heightens the risk for automated model loading and dynamic pipeline setups, both common practices in contemporary AI operations. Each vulnerability carries a CVSS v3.1 score of 7.8, rating it "High" severity.

NVIDIA's security team promptly addressed CVE-2025-23264 and CVE-2025-23265 in Megatron-LM version 0.12.1, and users and organizations are urged to update without delay to reduce exposure. All earlier releases, including those on other branches, are vulnerable. Researchers Yu Rong and Hao Fan responsibly disclosed the flaws and have been credited by NVIDIA for their work.

As AI adoption grows, the security of core frameworks such as Megatron-LM is more important than ever. Organizations using Megatron-LM should make patching a top priority to safeguard their AI infrastructure and sensitive data.
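Because the advisory singles out automated model loading and dynamic pipelines as the highest-risk scenarios, one practical first step is to confirm at startup or in CI that the environment is already on the patched release before any model files are ingested. The snippet below is a minimal sketch, not an official NVIDIA tool: it assumes Megatron-LM is installed as the megatron-core package from PyPI, and the helper names and the place where the check runs are illustrative choices to adapt to your own deployment.

```python
# Minimal sketch: refuse to proceed unless the installed Megatron-LM
# (assumed here to be the "megatron-core" PyPI package) is at or above
# 0.12.1, the release that contains the CVE-2025-23264/23265 fixes.
from importlib.metadata import version, PackageNotFoundError

PATCHED = (0, 12, 1)  # first release with the fixes, per NVIDIA's advisory


def parse_version(text: str) -> tuple:
    """Parse a simple 'X.Y.Z' version string into a comparable tuple of ints."""
    parts = []
    for piece in text.split(".")[:3]:
        digits = "".join(ch for ch in piece if ch.isdigit())
        parts.append(int(digits) if digits else 0)
    return tuple(parts)


def megatron_is_patched(package: str = "megatron-core") -> bool:
    """Return True only if the installed package meets the patched version."""
    try:
        installed = parse_version(version(package))
    except PackageNotFoundError:
        # Not installed at all: treat as unpatched so the pipeline stops early.
        return False
    return installed >= PATCHED


if __name__ == "__main__":
    if not megatron_is_patched():
        raise SystemExit(
            "Megatron-LM older than 0.12.1 (or not found); "
            "upgrade before loading model files from untrusted sources."
        )
    print("Megatron-LM >= 0.12.1 detected; CVE-2025-23264/23265 fixes present.")
```

Run as a CI gate or at service startup, a check like this makes it harder for an unpatched environment to silently ingest externally supplied model files, which is the exploitation path the advisory describes.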