
vllm
pypi · A high-throughput and memory-efficient inference and serving engine for LLMs

Security Advisories for vllm in pypi
High · about 1 month ago · vLLM has a remote code execution vulnerability in the tool call parser for Qwen3-Coder
Moderate · 4 months ago · vLLM has a Weakness in MultiModalHasher Image Hashing Implementation
Low · 4 months ago · Potential Timing Side-Channel Vulnerability in vLLM’s Chunk-Based Prefix Caching
Moderate · 4 months ago · vLLM has a Regular Expression Denial of Service (ReDoS, Exponential Complexity) Vulnerability in `pythonic_tool_parser.py`
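The ReDoS advisory above concerns a regular expression with exponential worst-case backtracking. As an illustration of the failure mode only (this is a textbook nested-quantifier pattern, not the actual expression from `pythonic_tool_parser.py`):

```python
import re
import time

# Classic catastrophic-backtracking shape: the nested quantifiers in
# (a+)+ let the engine try exponentially many ways to partition the
# input before concluding there is no match.
evil = re.compile(r"^(a+)+$")

def failed_match_time(n: int) -> float:
    """Time how long the regex takes to *fail* on n 'a's plus a bad tail."""
    text = "a" * n + "!"
    start = time.perf_counter()
    assert evil.match(text) is None
    return time.perf_counter() - start

# Each extra 'a' roughly doubles the work on CPython's backtracking
# engine, so a short attacker-controlled string can pin a CPU core.
```

In practice such patterns are fixed by removing the nested quantifier (here, `^a+$` is equivalent) or by bounding the length of untrusted input before matching.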
Critical · 5 months ago · vLLM Allows Remote Code Execution via PyNcclPipe Communication Service
High · 5 months ago · Remote Code Execution Vulnerability in vLLM Multi-Node Cluster Configuration
Moderate · 5 months ago · phi4mm: Quadratic Time Complexity in Input Token Processing leads to denial of service
Critical · 5 months ago · CVE-2025-24357: malicious model remote code execution fix bypass with PyTorch < 2.6.0
Critical · 7 months ago · vLLM deserialization vulnerability in vllm.distributed.GroupCoordinator.recv_object
Critical · 7 months ago · vLLM allows Remote Code Execution by Pickle Deserialization via AsyncEngineRPCServer() RPC server entrypoints
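The two Critical deserialization advisories above share one root cause: calling `pickle.loads` on attacker-controlled bytes runs attacker-chosen callables, because the pickle protocol's `__reduce__` hook lets any serialized object name a function to invoke at load time. A minimal sketch with a deliberately harmless payload (illustrative only, not the actual vLLM exploit path):

```python
import pickle

HITS = []

def record(msg: str) -> str:
    # Stand-in for os.system or any other code an attacker would run.
    HITS.append(msg)
    return msg

class Payload:
    def __reduce__(self):
        # Tells the unpickler: call record("...") while loading this object.
        return (record, ("executed during pickle.loads",))

blob = pickle.dumps(Payload())

# The "victim" merely deserializes the bytes -- record() runs anyway.
obj = pickle.loads(blob)
```

This is why RPC endpoints should never unpickle bytes from unauthenticated peers; data-only formats such as JSON or msgpack carry values, not code.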
Low · 8 months ago · vLLM uses Python 3.12 built-in hash() which leads to predictable hash collisions in prefix cache
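The last advisory turns on the fact that Python's built-in `hash()` was never meant for adversarial inputs: str/bytes hashing is salted per process via `PYTHONHASHSEED` (and identical across workers if that seed is fixed or leaked), while integer hashing is not salted at all, so colliding cache keys can be computed offline. A sketch of the seed-independent alternative, a cryptographic digest over the token ids (an assumed key scheme for illustration, not necessarily vLLM's actual fix):

```python
import hashlib

def stable_prefix_key(token_ids: list[int]) -> str:
    """Prefix-cache key that depends only on content, not on PYTHONHASHSEED.

    hash() collisions are cheap to find -- in CPython, hash(-1) == hash(-2)
    -- whereas SHA-256 collisions are cryptographically infeasible.
    """
    h = hashlib.sha256()
    for t in token_ids:
        # Fixed-width little-endian encoding keeps the digest unambiguous.
        h.update(t.to_bytes(8, "little", signed=True))
    return h.hexdigest()
```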