Description

NVIDIA identified and addressed high-severity security vulnerabilities within its Merlin AI framework, specifically affecting the NVTabular and Transformers4Rec components. The flaws arise from unsafe deserialization of untrusted data in Linux-based environments used for machine learning pipelines and recommender systems. An attacker who supplies a specially crafted serialized object to a vulnerable component could achieve remote code execution, denial of service, data corruption, or unauthorized information access. Given Merlin's widespread use in AI training and production workflows, these vulnerabilities present a significant risk to organizations that rely on automated data processing and model training at scale.

The root cause lies in insufficient validation and control during deserialization. NVTabular's Workflow and Transformers4Rec's Trainer components reconstruct objects without adequately restricting the classes or data structures being loaded, so a malicious payload can execute code during deserialization. Such weaknesses are common in systems that prioritize flexibility and performance over strict input validation, particularly in AI pipelines that ingest large volumes of external or semi-trusted artifacts such as model checkpoints, workflow states, or training inputs.

Organizations using NVIDIA Merlin should immediately update NVTabular and Transformers4Rec to patched versions that include the relevant security fixes. Deserialization of untrusted data should be avoided wherever possible, or replaced with safer serialization formats and strict allow-listing mechanisms. Access to AI pipelines must be restricted to trusted sources, and execution environments should follow least-privilege principles.
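To make the root cause concrete, here is a minimal, self-contained sketch of why deserializing untrusted data is dangerous. This uses plain Python `pickle` rather than Merlin's actual code paths: any class can define `__reduce__` to dictate what callable runs when its serialized form is loaded, so loading attacker-controlled bytes hands the attacker code execution.

```python
import pickle

class Payload:
    """Hypothetical attacker-crafted object (illustration only)."""
    def __reduce__(self):
        # On unpickling, pickle calls eval("21 * 2") and returns the
        # result as the "deserialized object". A real attacker would
        # substitute something like os.system with a shell command.
        return (eval, ("21 * 2",))

blob = pickle.dumps(Payload())   # bytes an attacker could ship in a
                                 # workflow file or model checkpoint
result = pickle.loads(blob)      # arbitrary code runs here
print(result)                    # → 42: proof the payload executed
```

The point is that `pickle.loads` never returns the original object; it returns whatever the embedded callable produces, and that callable runs with the full privileges of the loading process.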
Additionally, regular security reviews of ML workflows, monitoring for anomalous behavior, and isolating training infrastructure from external networks can significantly reduce exploitation risk.
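The strict allow-listing mentioned above can be sketched as follows, assuming pickle-based artifacts. Python's `pickle.Unpickler` lets you override `find_class` so that only explicitly approved classes can be reconstructed; everything else is rejected before any code runs. The `ALLOWED` set and `safe_loads` helper below are illustrative names, not part of any Merlin API.

```python
import io
import pickle

# Only (module, class) pairs listed here may be reconstructed.
# Populate this with the specific types your artifacts legitimately
# contain; keep it as small as possible.
ALLOWED = {
    ("builtins", "list"),
    ("builtins", "dict"),
    ("builtins", "set"),
}

class AllowListUnpickler(pickle.Unpickler):
    """Unpickler that refuses any class outside the allow-list."""
    def find_class(self, module, name):
        if (module, name) in ALLOWED:
            return super().find_class(module, name)
        raise pickle.UnpicklingError(
            f"blocked deserialization of {module}.{name}")

def safe_loads(data: bytes):
    """Drop-in replacement for pickle.loads with class allow-listing."""
    return AllowListUnpickler(io.BytesIO(data)).load()
```

With this in place, `safe_loads(pickle.dumps([1, 2, 3]))` succeeds, while a payload that references `os.system` or any other unlisted callable raises `UnpicklingError` instead of executing. Where the data model allows it, a non-executable format such as JSON removes the risk entirely.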