Automatic micro-batching for HTTP LLM calls and local PyTorch inference, backed by a Rust core.
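To illustrate the general idea behind automatic micro-batching (this is a generic sketch with hypothetical names, not the `llm-autobatch` API): callers submit single items, and a background worker groups items that arrive within a short window into one batched call, resolving each caller's future with its own result.

```python
import threading
import queue
from concurrent.futures import Future

class MicroBatcher:
    """Hypothetical micro-batcher: groups concurrent single-item calls."""

    def __init__(self, batch_fn, max_batch=8, max_wait=0.01):
        self.batch_fn = batch_fn      # processes a list of inputs in one call
        self.max_batch = max_batch    # flush when this many items are queued
        self.max_wait = max_wait      # or when no new item arrives in time
        self.q = queue.Queue()
        threading.Thread(target=self._loop, daemon=True).start()

    def submit(self, item):
        """Enqueue one item; returns a Future for its individual result."""
        fut = Future()
        self.q.put((item, fut))
        return fut

    def _loop(self):
        while True:
            # Block until the first item arrives, then gather more items
            # until the batch is full or the wait window expires.
            batch = [self.q.get()]
            while len(batch) < self.max_batch:
                try:
                    batch.append(self.q.get(timeout=self.max_wait))
                except queue.Empty:
                    break
            results = self.batch_fn([item for item, _ in batch])
            for (_, fut), result in zip(batch, results):
                fut.set_result(result)

# Toy batch function standing in for a batched model/HTTP call.
batcher = MicroBatcher(lambda xs: [x * 2 for x in xs])
futures = [batcher.submit(i) for i in range(5)]
print([f.result() for f in futures])  # → [0, 2, 4, 6, 8]
```

In practice the batched function would be a single HTTP request carrying all prompts, or one padded forward pass through a local PyTorch model; the batching window trades a little latency for much higher throughput.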
llm-autobatch: low health (56/100); consider alternatives
Get this data programmatically — free, no authentication.
curl https://depscope.dev/api/check/pypi/llm-autobatch

Last updated · 2026-02-10T15:13:28.376015Z