A high-throughput and memory-efficient inference and serving engine for LLMs
[email protected] is safe to use (health: 76/100)
vLLM Deserialization of Untrusted Data vulnerability
vLLM allows Remote Code Execution by Pickle Deserialization via AsyncEngineRPCServer() RPC server entrypoints
vLLM deserialization vulnerability in vllm.distributed.GroupCoordinator.recv_object
vLLM Denial of Service via the best_of parameter
Get this data programmatically — free, no authentication required:
curl https://depscope.dev/api/check/pypi/vllm

Last updated: 2026-04-03T04:05:52.513885Z
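The curl command above can also be wrapped in a small script. This is a minimal sketch in Python: the URL pattern is taken from the curl example, but the response fields used here (`package`, `health`) are assumptions about the JSON shape, not a documented schema.

```python
import json
from urllib.parse import quote

BASE = "https://depscope.dev/api/check"


def check_url(ecosystem: str, package: str) -> str:
    """Build the DepScope check URL for a package (no auth required)."""
    return f"{BASE}/{quote(ecosystem)}/{quote(package)}"


def summarize(payload: str) -> str:
    """Summarize a response, assuming 'package' and 'health' fields exist."""
    data = json.loads(payload)
    return f"{data['package']}: health {data['health']}/100"


if __name__ == "__main__":
    url = check_url("pypi", "vllm")
    print(url)  # fetch with e.g. urllib.request.urlopen(url)
    # Hypothetical sample payload, for illustration only:
    sample = '{"package": "vllm", "health": 76}'
    print(summarize(sample))
```

In practice you would pipe the fetched body into `summarize`; the sample payload here only illustrates the assumed shape.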
Data from DepScope — Package Intelligence for AI Agents