vllm
A high-throughput and memory-efficient inference and serving engine for LLMs
Low-severity security advisories for vllm in pypi
Low · 6 months ago · pypi · vllm
Potential Timing Side-Channel Vulnerability in vLLM’s Chunk-Based Prefix Caching
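The side channel named in this advisory is the latency gap between a prefix-cache hit and a miss: a hit skips prefill work, so a co-tenant who times responses can infer whether another user's prompt shares a cached prefix. A minimal, self-contained sketch of that timing oracle, using a toy cache, a hypothetical 4-token block size, and a sleep standing in for real prefill compute (not vLLM's actual code):

```python
import time

# Toy shared prefix cache: a hit skips the expensive prefill step, so
# per-request latency reveals whether a given prefix is already cached.
prefix_cache: set[tuple[int, ...]] = set()

BLOCK_SIZE = 4  # hypothetical cache-block size, chosen for the demo

def prefill(tokens: tuple[int, ...]) -> None:
    block = tokens[:BLOCK_SIZE]
    if block in prefix_cache:
        return                       # cache hit: returns almost instantly
    time.sleep(0.05)                 # stand-in for the real prefill compute
    prefix_cache.add(block)

def measure(tokens: tuple[int, ...]) -> float:
    start = time.perf_counter()
    prefill(tokens)
    return time.perf_counter() - start

victim = (101, 2023, 318, 257, 42)   # hypothetical victim token IDs
prefill(victim)                      # the victim's request warms the cache

# The attacker probes candidate prefixes and compares latencies:
# a fast response means the probed prefix matches a cached block.
print(f"shared prefix: {measure((101, 2023, 318, 257, 99)):.4f}s")  # hit, fast
print(f"other prefix:  {measure((7, 7, 7, 7, 7)):.4f}s")            # miss, slow
```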
Low · 10 months ago · pypi · vllm
vLLM uses Python 3.12 built-in hash() which leads to predictable hash collisions in prefix cache
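The predictability problem behind this advisory: Python's built-in hash() over tuples of integer token IDs is deterministic across processes (PYTHONHASHSEED only randomizes str/bytes, and Python 3.12 additionally made hash(None) a fixed constant), so an attacker can precompute colliding prefix-cache keys. A sketch contrasting that with a per-process keyed SHA-256 over the serialized block, illustrating the general mitigation rather than vLLM's exact fix:

```python
import hashlib
import pickle
import secrets

# Built-in hash() on a tuple of ints ignores PYTHONHASHSEED, so every
# process computes the same value and collisions can be crafted offline.
token_block = (101, 2023, 318, 257)   # hypothetical token IDs
print(hash(token_block))              # identical in every Python process

# Hardened alternative: a per-process random key fed into SHA-256 over a
# canonical serialization, so cache keys cannot be precomputed by others.
_SEED = secrets.token_bytes(32)

def block_hash(prev_hash: bytes, token_ids: tuple[int, ...]) -> bytes:
    """Chain block hashes so each key commits to the entire prefix."""
    h = hashlib.sha256(_SEED)
    h.update(prev_hash)
    h.update(pickle.dumps(token_ids, protocol=5))
    return h.digest()

root = b"\x00" * 32
h1 = block_hash(root, (101, 2023, 318, 257))
h2 = block_hash(h1, (1030, 50256, 11, 339))
print(h1.hex())
print(h2.hex())
```

Chaining each block's hash into the next, as in the sketch, makes every cache key commit to the entire preceding prefix, so a collision on one block cannot be extended into a cross-user cache hit.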