The popular Python Pickle serialization format, which is commonly used for distributing AI models, gives attackers ways to inject malicious code that is executed on a victim's computer the moment a model is loaded.
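Pickle is dangerous because deserialization is programmable: any object can define `__reduce__`, and `pickle.loads()` will invoke whatever callable that method returns. A minimal, deliberately harmless sketch of the well-known technique (the class name and the echoed command are illustrative, not taken from any real payload):

```python
import os
import pickle

class MaliciousPayload:
    # __reduce__ tells pickle how to "reconstruct" this object:
    # return a callable plus its arguments. pickle.loads() calls
    # that callable, so reconstruction becomes code execution.
    def __reduce__(self):
        return (os.system, ("echo code ran during unpickling",))

blob = pickle.dumps(MaliciousPayload())
pickle.loads(blob)  # runs os.system(...) as a side effect of loading
```

A real payload would replace the echo with a reverse shell or a downloader; nothing in the file format itself distinguishes a harmless demo from a weaponized model.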
AI frameworks are prone to automatic Python deserialization via pickle that can lead to remote code execution; Meta's large language model (LLM) framework, Llama, is among those affected.
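On the consumer side, the practical mitigation is to refuse raw unpickling of untrusted checkpoints. As a sketch, assuming a PyTorch checkpoint at a hypothetical path: `torch.load()` has accepted `weights_only=True` since PyTorch 1.13 (and it is the default from 2.6 on), which swaps in a restricted unpickler:

```python
import torch

CKPT_PATH = "downloaded_model.pt"  # hypothetical path for illustration

# weights_only=True restricts the unpickler to tensors and an allowlist
# of primitive types; a crafted __reduce__ payload raises an
# UnpicklingError instead of executing.
state_dict = torch.load(CKPT_PATH, map_location="cpu", weights_only=True)
```

Formats with no executable component at all, such as safetensors, sidestep the problem entirely.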
At least 100 malicious AI/ML models were found on the Hugging Face platform, some of which can execute code on the victim's machine, giving attackers a persistent backdoor.
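Hunting for such payloads does not require executing them: the pickle opcode stream can be inspected statically. Below is a rough sketch (not the scanner Hugging Face or any vendor actually uses) that flags opcodes capable of importing modules or invoking callables, using only the standard library; the file name in the usage line is hypothetical:

```python
import pickletools

# GLOBAL / STACK_GLOBAL / INST resolve importable names; REDUCE calls a
# callable already on the stack. Together these are what an RCE payload
# needs, though benign pickles of class instances also use GLOBAL, so
# findings require triage rather than automatic rejection.
SUSPICIOUS_OPCODES = {"GLOBAL", "STACK_GLOBAL", "INST", "REDUCE"}

def flag_suspicious(path: str) -> list[str]:
    """Walk a pickle stream without executing it and report risky opcodes."""
    with open(path, "rb") as f:
        data = f.read()
    findings = []
    for opcode, arg, pos in pickletools.genops(data):
        if opcode.name in SUSPICIOUS_OPCODES:
            findings.append(f"offset {pos}: {opcode.name} {arg!r}")
    return findings

for line in flag_suspicious("downloaded_model.pkl"):  # hypothetical file
    print(line)
```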