Production-ready LLM compression/quantization toolkit with hardware-accelerated inference support for both CPU and GPU via HF, vLLM, and SGLang.
[email protected] is safe to use (health: 80/100)
No known vulnerabilities in the latest version.
Get this data programmatically — free, no authentication required:
curl https://depscope.dev/api/check/pypi/gptqmodel

Last updated: 2026-04-03T00:52:28.782567Z
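The same endpoint can be consumed from Python. The sketch below fetches the report with the standard library; the `health` and `vulnerabilities` field names in the helper are a hypothetical schema for illustration, not confirmed fields of the DepScope response.

```python
import json
from urllib.request import urlopen

DEPSCOPE_URL = "https://depscope.dev/api/check/pypi/gptqmodel"

def fetch_report(url: str = DEPSCOPE_URL) -> dict:
    # No authentication required; the endpoint returns JSON.
    with urlopen(url, timeout=10) as resp:
        return json.load(resp)

def is_safe(report: dict) -> bool:
    # Hypothetical field names -- adjust to the actual response schema.
    return report.get("health", 0) >= 70 and not report.get("vulnerabilities")

# Example against an inline sample payload (hypothetical schema):
sample = {"health": 80, "vulnerabilities": []}
print(is_safe(sample))  # -> True
```

A health score of 80/100 with an empty vulnerability list, as reported above, would pass this check.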
Data from DepScope — Package Intelligence for AI Agents