vllm
A high-throughput and memory-efficient inference and serving engine for LLMs
Critical Security Advisories for vllm in pypi
- Critical, 6 months ago: vLLM Allows Remote Code Execution via PyNcclPipe Communication Service (pypi / vllm)
- Critical, 7 months ago: CVE-2025-24357 Malicious model remote code execution fix bypass with PyTorch < 2.6.0 (pypi / vllm)
- Critical, 8 months ago: vLLM deserialization vulnerability in vllm.distributed.GroupCoordinator.recv_object (pypi / vllm)
- Critical, 8 months ago: vLLM allows Remote Code Execution by Pickle Deserialization via AsyncEngineRPCServer() RPC server entrypoints (pypi / vllm)
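The titles above all point to unsafe deserialization paths: three explicitly involve pickle or deserialization of data received over vLLM's internal channels, and CVE-2025-24357 concerns loading a malicious model checkpoint. As a minimal, hedged illustration only (not vLLM's actual loading code), the sketch below shows the general torch.load mitigation relevant to the checkpoint case: PyTorch versions before 2.6.0 default weights_only to False, so callers must pass it explicitly to refuse arbitrary pickled objects. The function name and path handling here are hypothetical.

```python
# Sketch of the defensive torch.load pattern; assumes PyTorch >= 1.13,
# where the weights_only argument is available.
import torch

def load_checkpoint_safely(path: str):
    # weights_only=True restricts unpickling to tensors and primitive
    # containers, rejecting arbitrary objects whose __reduce__ hooks
    # could execute code. PyTorch < 2.6.0 defaults this to False, so it
    # must be set explicitly when loading untrusted checkpoints.
    return torch.load(path, map_location="cpu", weights_only=True)
```

weights_only loading only addresses the checkpoint-style issue; the RPC and collective-communication advisories listed above were resolved in vLLM itself, so upgrading to a patched release is the actual remediation.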