vllm
A high-throughput and memory-efficient inference and serving engine for LLMs
Security advisories for the vllm package on PyPI
| Severity | Published | Advisory |
| --- | --- | --- |
| Moderate | 13 days ago | vLLM vulnerable to DoS via large Chat Completion or Tokenization requests with specially crafted `chat_template_kwargs` |
| High | 13 days ago | vLLM vulnerable to DoS with incorrect shape of multimodal embedding inputs |
| High | about 2 months ago | vLLM is vulnerable to Server-Side Request Forgery (SSRF) through `MediaConnector` class |
| Moderate | about 2 months ago | vLLM: Resource-Exhaustion (DoS) through Malicious Jinja Template in OpenAI-Compatible Server |
| High | 3 months ago | vLLM has remote code execution vulnerability in the tool call parser for Qwen3-Coder |
| Moderate | 6 months ago | vLLM has a Weakness in MultiModalHasher Image Hashing Implementation |
| Low | 6 months ago | Potential Timing Side-Channel Vulnerability in vLLM’s Chunk-Based Prefix Caching |
| Moderate | 6 months ago | vLLM has a Regular Expression Denial of Service (ReDoS, Exponential Complexity) Vulnerability in `pythonic_tool_parser.py` |
| Critical | 7 months ago | vLLM Allows Remote Code Execution via PyNcclPipe Communication Service |
| High | 7 months ago | Remote Code Execution Vulnerability in vLLM Multi-Node Cluster Configuration |
| Moderate | 7 months ago | phi4mm: Quadratic Time Complexity in Input Token Processing leads to denial of service |
| Critical | 7 months ago | CVE-2025-24357 Malicious model remote code execution fix bypass with PyTorch < 2.6.0 |
| Critical | 9 months ago | vLLM deserialization vulnerability in `vllm.distributed.GroupCoordinator.recv_object` |
| Critical | 9 months ago | vLLM allows Remote Code Execution by Pickle Deserialization via `AsyncEngineRPCServer()` RPC server entrypoints |
| Low | 10 months ago | vLLM uses Python 3.12 built-in `hash()` which leads to predictable hash collisions in prefix cache |
| High | 10 months ago | vllm: Malicious model to RCE by `torch.load` in `hf_model_weights_iterator` |
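These advisories are also published in machine-readable form through the OSV.dev database, so a deployment can be checked programmatically. The sketch below is a minimal example, not part of vLLM itself: it assumes network access, the third-party `requests` library, and the public OSV query endpoint; the helper name `vllm_advisories` and the pinned version in the example are hypothetical placeholders.

```python
"""Minimal sketch: query OSV.dev for advisories affecting a given vllm version."""
import requests

# Public OSV API endpoint for single-package queries.
OSV_QUERY_URL = "https://api.osv.dev/v1/query"


def vllm_advisories(version: str) -> list[dict]:
    """Return OSV advisory records that list the given vllm release as affected."""
    payload = {
        "version": version,
        "package": {"name": "vllm", "ecosystem": "PyPI"},
    }
    resp = requests.post(OSV_QUERY_URL, json=payload, timeout=10)
    resp.raise_for_status()
    # OSV returns an empty object when no advisories match.
    return resp.json().get("vulns", [])


if __name__ == "__main__":
    # Hypothetical pinned version; replace with the version actually deployed.
    for vuln in vllm_advisories("0.8.0"):
        print(vuln["id"], "-", vuln.get("summary", "(no summary)"))
```

For auditing an entire Python environment rather than a single package, the PyPA `pip-audit` tool consumes the same advisory data and reports every affected dependency it finds.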