litellm known bugs


21 known bugs in litellm, with affected versions, fixes and workarounds. Sourced from upstream issue trackers.

Known bugs
Each entry lists severity, affected versions, fixed-in version, title, description, status, and source advisory.
Severity: high | Affected: 1.80.5 | Fixed in: 1.83.7
LiteLLM: Server-Side Template Injection in /prompts/test endpoint
### Impact The `POST /prompts/test` endpoint accepted user-supplied prompt templates and rendered them without sandboxing. A crafted template could run arbitrary code inside the LiteLLM Proxy process. The endpoint only checks that the caller presents a valid proxy API key, so any authenticated user could reach it. Depending on how the proxy is deployed, this could expose secrets in the process environment (such as provider API keys or database credentials) and allow commands to be run on the host. Proxy deployments running an affected version are in scope. ### Patches The issue is fixed in **`1.83.7-stable`**. The fix switches the prompt template renderer to a sandboxed environment that blocks the attributes this attack relies on. LiteLLM recommends upgrading to `1.83.7-stable` or later. ### Workarounds If upgrading is not immediately possible: 1. Block `POST /prompts/test` at your reverse proxy or API gateway. 2. Review and rotate API keys that should not have access to prompt management routes.
Status: fixed | Source: osv:GHSA-xqmj-j6mv-4862
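The advisory above fixes the issue by switching to a sandboxed template renderer. As a dependency-free illustration of the underlying principle (this is a sketch, not LiteLLM's actual patch), Python's `string.Template` performs pure text substitution and cannot evaluate expressions or reach object attributes, so SSTI payloads stay inert:

```python
from string import Template

def render_prompt(template_text: str, **params) -> str:
    # Template only substitutes $name placeholders; payloads such as
    # {{ ''.__class__.__mro__ }} remain inert literal text instead of
    # being evaluated, so there is no code-execution surface.
    return Template(template_text).safe_substitute(**params)
```

Jinja-style rendering of untrusted templates needs a sandboxed environment precisely because, unlike plain substitution, it evaluates attribute lookups and method calls.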
Severity: high | Affected: any | Fixed in: 1.53.1.dev1
LiteLLM Vulnerable to Denial of Service (DoS)
A vulnerability in BerriAI/litellm, as of commit 26c03c9, allows unauthenticated users to cause a Denial of Service (DoS) by exploiting the use of ast.literal_eval to parse user input. This function is not safe and is prone to DoS attacks, which can crash the litellm Python server.
Status: fixed | Source: osv:GHSA-gw2q-qw9j-rgv7
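`ast.literal_eval` blocks code execution but not resource exhaustion: very large or deeply nested literals can consume excessive CPU/memory or crash the interpreter. A minimal defensive sketch (the size cap and function name are illustrative assumptions, not LiteLLM's fix) rejects oversized input before parsing:

```python
import ast

MAX_LITERAL_LEN = 4096  # assumption: cap chosen for illustration

def safe_parse_literal(text: str):
    """Parse a Python literal, rejecting oversized input up front."""
    if len(text) > MAX_LITERAL_LEN:
        raise ValueError("literal too large")
    return ast.literal_eval(text)
```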
Severity: high | Affected: any | Fixed in: 1.44.12
LiteLLM Reveals Portion of API Key via a Logging File
In berriai/litellm before version 1.44.12, the `litellm/litellm_core_utils/litellm_logging.py` file contains a vulnerability where the API key masking code only masks the first 5 characters of the key. This results in the leakage of almost the entire API key in the logs, exposing a significant amount of the secret key. The issue affects version v1.44.9.
Status: fixed | Source: osv:GHSA-g5pg-73fc-hjwq
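The bug above masked only the first five characters, leaving the rest of the key readable in logs. Correct masking does the opposite: hide everything except a short suffix for identification. A hypothetical sketch (not LiteLLM's actual masking code):

```python
def mask_api_key(key: str, visible: int = 4) -> str:
    """Mask a secret, showing only the last `visible` characters."""
    if len(key) <= visible:
        return "*" * len(key)
    return "*" * (len(key) - visible) + key[-visible:]
```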
Severity: high | Affected: any | Fixed in: 1.44.8
LiteLLM Server-Side Request Forgery (SSRF) vulnerability
A Server-Side Request Forgery (SSRF) vulnerability exists in berriai/litellm version 1.38.10. This vulnerability allows users to specify the `api_base` parameter when making requests to `POST /chat/completions`, causing the application to send the request to the domain specified by `api_base`. This request includes the OpenAI API key. A malicious user can set the `api_base` to their own domain and intercept the OpenAI API key, leading to unauthorized access and potential misuse of the API key.
Status: fixed | Source: osv:GHSA-g26j-5385-hhw3
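The usual mitigation for this class of SSRF is to validate a user-supplied base URL against an allowlist before attaching credentials to the outgoing request. A minimal sketch, assuming a single-host allowlist (the allowlist contents and function name are illustrative, not LiteLLM's actual fix):

```python
from urllib.parse import urlparse

# Assumption: illustrative allowlist of upstream API hosts.
ALLOWED_API_HOSTS = {"api.openai.com"}

def validate_api_base(api_base: str) -> str:
    """Reject api_base values pointing at non-allowlisted hosts."""
    parsed = urlparse(api_base)
    if parsed.scheme != "https":
        raise ValueError("api_base must use https")
    if parsed.hostname not in ALLOWED_API_HOSTS:
        raise ValueError(f"api_base host not allowed: {parsed.hostname}")
    return api_base
```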
Severity: high | Affected: any | Fixed in: 1.61.15
LiteLLM Has an Improper Authorization Vulnerability
An improper authorization vulnerability exists in the main-latest version of BerriAI/litellm. When a user with the role 'internal_user_viewer' logs into the application, they are provided with an overly privileged API key. This key can be used to access all the admin functionality of the application, including endpoints such as '/users/list' and '/users/get_users'. This vulnerability allows for privilege escalation within the application, enabling any account to become a PROXY ADMIN.
Status: fixed | Source: osv:GHSA-fjcf-3j3r-78rp
Severity: high | Affected: any | Fixed in: 1.56.2
LiteLLM Vulnerable to Denial of Service (DoS) via Crafted HTTP Request
A Denial of Service (DoS) vulnerability exists in berriai/litellm version v1.44.5. This vulnerability can be exploited by appending characters, such as dashes (-), to the end of a multipart boundary in an HTTP request. The server continuously processes each character, leading to excessive resource consumption and rendering the service unavailable. The issue is unauthenticated and does not require any user interaction, impacting all users of the service.
Status: fixed | Source: osv:GHSA-fh2c-86xm-pm2x
Severity: high | Affected: any | Fixed in: none
LiteLLM Has a Leakage of Langfuse API Keys
In berriai/litellm version v1.52.1, an issue in proxy_server.py causes the leakage of Langfuse API keys when an error occurs while parsing team settings. This vulnerability exposes sensitive information, including langfuse_secret and langfuse_public_key, which can provide full access to the Langfuse project storing all requests.
Status: open | Source: osv:GHSA-879v-fggm-vxw2
Severity: high | Affected: any | Fixed in: none
litellm passes untrusted data to `eval` function without sanitization
A remote code execution (RCE) vulnerability exists in the berriai/litellm project due to improper control of the generation of code when using the `eval` function unsafely in the `litellm.get_secret()` method. Specifically, when the server utilizes Google KMS, untrusted data is passed to the `eval` function without any sanitization. Attackers can exploit this vulnerability by injecting malicious values into environment variables through the `/config/update` endpoint, which allows for the update of settings in `proxy_server_config.yaml`.
Status: open | Source: osv:GHSA-7ggm-4rjg-594w
Severity: high | Affected: any | Fixed in: 1.83.0
LiteLLM: Password hash exposure and pass-the-hash authentication bypass
### Impact Three issues combine into a full authentication bypass chain: 1. Weak hashing: User passwords are stored as unsalted SHA-256 hashes, making them vulnerable to rainbow table attacks and trivially identifying users with identical passwords. 2. Hash exposure: Multiple API endpoints (/user/info, /user/update, /spend/users) return the password hash field in responses to any authenticated user regardless of role. Plaintext passwords could also potentially be exposed in certain scenarios. 3. Pass-the-hash: The /v2/login endpoint accepts the raw SHA-256 hash as a valid password without re-hashing, allowing direct login with a stolen hash. An already authenticated user can retrieve another user's password hash from the API and use it to log in as that user. This enables full privilege escalation in three HTTP requests. ### Patches Fixed in v1.83.0. Passwords are now hashed with scrypt (random 16-byte salt, n=16384, r=8, p=1). Password hashes are stripped from all API responses. Existing SHA-256 hashes are transparently migrated on next login.
Status: fixed | Source: osv:GHSA-69x8-hrgq-fjj8
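The patched scheme described above (scrypt with a random 16-byte salt, n=16384, r=8, p=1) maps directly onto the standard library. A minimal sketch using `hashlib.scrypt` with the advisory's parameters (the storage layout of salt plus digest is an assumption for illustration):

```python
import hashlib
import hmac
import os

# Parameters taken from the advisory: scrypt, random 16-byte salt,
# n=16384, r=8, p=1. Stored value is salt (16 bytes) || digest (64 bytes).
def hash_password(password: str, salt=None) -> bytes:
    salt = salt if salt is not None else os.urandom(16)
    digest = hashlib.scrypt(password.encode(), salt=salt, n=16384, r=8, p=1)
    return salt + digest

def verify_password(password: str, stored: bytes) -> bool:
    salt, digest = stored[:16], stored[16:]
    candidate = hashlib.scrypt(password.encode(), salt=salt, n=16384, r=8, p=1)
    # Constant-time comparison avoids timing side channels.
    return hmac.compare_digest(candidate, digest)
```

Because each hash gets a fresh random salt, identical passwords no longer produce identical hashes, and a stolen hash cannot be replayed as a login credential.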
Severity: high | Affected: any | Fixed in: 1.83.0
LiteLLM: Privilege escalation via unrestricted proxy configuration endpoint
### Impact The `/config/update` endpoint does not enforce admin role authorization. A user who is already authenticated into the platform can use this endpoint to do the following: - Modify proxy configuration and environment variables - Register custom pass-through endpoint handlers pointing to attacker-controlled Python code, achieving remote code execution - Read arbitrary server files by setting UI_LOGO_PATH and fetching via /get_image - Take over other privileged accounts by overwriting UI_USERNAME and UI_PASSWORD environment variables ### Patches Fixed in v1.83.0. The endpoint now requires the `proxy_admin` role. ### Workarounds Restrict API key distribution. There is no configuration-level workaround.
Status: fixed | Source: osv:GHSA-53mr-6c8q-9789
Severity: high | Affected: 1.40.3.dev2 | Fixed in: none
LiteLLM Vulnerable to Remote Code Execution (RCE)
BerriAI/litellm version 1.40.12 contains a vulnerability that allows remote code execution. The issue exists in the handling of the 'post_call_rules' configuration, where a callback function can be added. The provided value is split at the final '.' mark, with the last part considered the function name and the remaining part appended with the '.py' extension and imported. This allows an attacker to set a system method, such as 'os.system', as a callback, enabling the execution of arbitrary commands when a chat response is processed.
Status: open | Source: osv:GHSA-53gh-p8jc-7rg8
Severity: high | Affected: any | Fixed in: 1.35.36
Arbitrary file deletion in litellm
BerriAI's litellm, in its latest version, is vulnerable to arbitrary file deletion due to improper input validation on the `/audio/transcriptions` endpoint. An attacker can exploit this vulnerability by sending a specially crafted request that includes a file path to the server, which then deletes the specified file without proper authorization or validation. This vulnerability is present in the code where `os.remove(file.filename)` is used to delete a file, allowing any user to delete critical files on the server such as SSH keys, SQLite databases, or configuration files.
Status: fixed | Source: osv:GHSA-3xr8-qfvj-9p9j
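Deleting a file named by user input requires confining the resolved path to a known directory; otherwise traversal input like `../../etc/passwd` escapes it. A minimal sketch of that validation, under the assumption of a dedicated upload directory (function name and layout are illustrative, not LiteLLM's actual patch):

```python
import os

def safe_remove(upload_dir: str, filename: str) -> None:
    """Delete a file only if it resolves inside upload_dir."""
    root = os.path.realpath(upload_dir)
    # realpath resolves '..' segments and symlinks before the check.
    target = os.path.realpath(os.path.join(root, filename))
    if os.path.commonpath([target, root]) != root:
        raise ValueError("path escapes upload directory")
    os.remove(target)
```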
Severity: medium | Affected: 1.82.7 | Fixed in: none
Two litellm versions published containing credential harvesting malware
After an API token exposure from an exploited Trivy dependency, two new releases of `litellm` were uploaded to PyPI containing automatically activated malware, harvesting sensitive credentials and files and exfiltrating them to a remote API. The malicious code runs when any module from the package is imported and scans the file system and environment variables, collecting all kinds of sensitive data, including but not limited to private SSH keys, credentials for Git and Docker repositories, dotenv files, and tokens for Kubernetes service accounts, databases, and LDAP configuration. Also exfiltrated are multiple shell history files and cryptowallet keys. The malware actively attempts to obtain cloud access tokens from metadata servers and retrieve secrets stored in AWS Secrets Manager. All collected data are sent to the domain models.litellm[.]cloud. Furthermore, the code includes a persistence mechanism: it configures a SystemD service unit masqueraded as "System Telemetry Service" on the host it runs on, and in a Kubernetes environment it also creates a new pod. The persistence script then contacts hxxps://checkmarx[.]zone/raw for further instructions. Anyone who has installed and run the project should assume that any credentials available to the litellm environment may have been exposed, and revoke/rotate them accordingly. The affected environment should be isolated and carefully reviewed for any unexpected modifications and network traffic.
Status: open | Source: osv:PYSEC-2026-2
Severity: medium | Affected: any | Fixed in: none
Malicious code in litellm (PyPI)
## Source: google-open-source-security LiteLLM was compromised through a Trivy security scan in a GitHub workflow. Attackers uploaded malicious versions of LiteLLM to PyPI. The malicious code would exfiltrate sensitive secrets to an attacker-controlled domain. ## Source: ossf-package-analysis The OpenSSF Package Analysis project identified 'litellm' @ 1.82.8 (pypi) as malicious because the package executes one or more commands associated with malicious behavior.
Status: open | Source: osv:MAL-2026-2144
Severity: medium | Affected: any | Fixed in: 1.40.15
litellm vulnerable to improper access control in team management
berriai/litellm version 1.34.34 is vulnerable to improper access control in its team management functionality. This vulnerability allows attackers to perform unauthorized actions such as creating, updating, viewing, deleting, blocking, and unblocking any teams, as well as adding or deleting any member to or from any teams. The vulnerability stems from insufficient access control checks in various team management endpoints, enabling attackers to exploit these functionalities without proper authorization.
Status: fixed | Source: osv:GHSA-qqcv-vg9f-5rr3
Severity: medium | Affected: any | Fixed in: 1.40.0
SQL injection in litellm
An SQL Injection vulnerability exists in the berriai/litellm repository, specifically within the `/global/spend/logs` endpoint. The vulnerability arises due to improper neutralization of special elements used in an SQL command. The affected code constructs an SQL query by concatenating an unvalidated `api_key` parameter directly into the query, making it susceptible to SQL Injection if the `api_key` contains malicious data. This issue affects the latest version of the repository. Successful exploitation of this vulnerability could lead to unauthorized access, data manipulation, exposure of confidential information, and denial of service (DoS).
Status: fixed | Source: osv:GHSA-h6m6-jj8v-94jj
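The standard remedy for string-concatenated SQL is placeholder binding, where the driver treats the parameter strictly as data. A minimal sketch using `sqlite3` (the table layout and function are illustrative stand-ins for the spend-logs query, not LiteLLM's actual code, which uses Prisma):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE spend_logs (api_key TEXT, spend REAL)")
conn.execute("INSERT INTO spend_logs VALUES (?, ?)", ("key-123", 1.5))

def spend_for_key(conn, api_key):
    # Placeholder binding: the driver escapes api_key, so input like
    # "x' OR '1'='1" is treated as a literal value, never as SQL.
    cur = conn.execute(
        "SELECT SUM(spend) FROM spend_logs WHERE api_key = ?", (api_key,)
    )
    return cur.fetchone()[0]
```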
Severity: medium | Affected: any | Fixed in: none
SQL injection in litellm
A blind SQL injection vulnerability exists in the berriai/litellm application, specifically within the '/team/update' process. The vulnerability arises due to the improper handling of the 'user_id' parameter in the raw SQL query used for deleting users. An attacker can exploit this vulnerability by injecting malicious SQL commands through the 'user_id' parameter, leading to potential unauthorized access to sensitive information such as API keys, user information, and tokens stored in the database. The affected version is 1.27.14.
Status: open | Source: osv:GHSA-8j42-pcfm-3467
Severity: critical | Affected: any | Fixed in: 1.83.0
LiteLLM: Authentication bypass via OIDC userinfo cache key collision
### Impact When JWT authentication is enabled (`enable_jwt_auth: true`), the OIDC userinfo cache uses `token[:20]` as the cache key. JWT headers produced by the same signing algorithm generate identical first 20 characters. This configuration option is not enabled by default. **Most instances are not affected.** An unauthenticated attacker can craft a token whose first 20 characters match a legitimate user's cached token. On cache hit, the attacker inherits the legitimate user's identity and permissions. This affects deployments with JWT/OIDC authentication enabled. ### Patches Fixed in v1.83.0. The cache key now uses the full hash of the JWT token. ### Workarounds Disable OIDC userinfo caching by setting the cache TTL to 0, or disable JWT authentication entirely.
Status: fixed | Source: osv:GHSA-jjhc-v7c2-5hh6
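The fix described above, keying the cache on a full digest of the token rather than a truncated prefix, can be sketched in a few lines (the function name is a hypothetical stand-in for LiteLLM's internal cache-key helper):

```python
import hashlib

def userinfo_cache_key(token: str) -> str:
    """Derive a cache key from the full token digest, not a prefix.

    The bug used token[:20]; JWTs signed with the same algorithm share
    an identical base64 header prefix, so distinct tokens collided.
    """
    return hashlib.sha256(token.encode()).hexdigest()
```

Two tokens that agree on their first 20 characters but differ anywhere afterwards now map to different cache entries.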
Severity: critical | Affected: any | Fixed in: 1.40.16
litellm vulnerable to remote code execution based on using eval unsafely
BerriAI/litellm version v1.35.8 contains a vulnerability where an attacker can achieve remote code execution. The vulnerability exists in the `add_deployment` function, which decodes and decrypts environment variables from base64 and assigns them to `os.environ`. An attacker can exploit this by sending a malicious payload to the `/config/update` endpoint, which is then processed and executed by the server when the `get_secret` function is triggered. This requires the server to use Google KMS and a database to store a model.
Status: fixed | Source: osv:GHSA-gppg-gqw8-wh9g
Severity: critical | Affected: 1.82.7 | Fixed in: none
Two LiteLLM versions published containing credential harvesting malware
After an API token exposure from an exploited Trivy dependency, two new releases of `litellm` were uploaded to PyPI containing automatically activated malware, harvesting sensitive credentials and files and exfiltrating them to a remote API. Anyone who has installed and run the project should assume that any credentials available to the litellm environment may have been exposed, and revoke/rotate them accordingly.
Status: open | Source: osv:GHSA-5mg7-485q-xm76
Severity: critical | Affected: any | Fixed in: 1.34.42
LiteLLM has Server-Side Template Injection vulnerability in /completions endpoint
BerriAI/litellm is vulnerable to Server-Side Template Injection (SSTI) via the `/completions` endpoint. The vulnerability arises from the `hf_chat_template` method processing the `chat_template` parameter from the `tokenizer_config.json` file through the Jinja template engine without proper sanitization. Attackers can exploit this by crafting malicious `tokenizer_config.json` files that execute arbitrary code on the server.
Status: fixed | Source: osv:GHSA-46cm-pfwv-cgf8
API access

Get this data programmatically: free, no authentication.

curl https://depscope.dev/api/bugs/pypi/litellm