langchain known bugs
36 known bugs in the langchain PyPI package, with affected versions, fixes, and workarounds. Sourced from upstream issue trackers.
Known bugs
| Severity | Affected | Fixed in | Title | Status | Source |
|---|---|---|---|---|---|
| high | any | 0.0.247 | langchain SQL Injection vulnerability SQL injection vulnerability in langchain allows a remote attacker to obtain sensitive information via the SQLDatabaseChain component. | fixed | osv:GHSA-7q94-qpjr-xpgm |
| high | any | 0.0.329 | Langchain Server-Side Request Forgery vulnerability In Langchain before 0.0.329, prompt injection allows an attacker to force the service to retrieve data from an arbitrary URL, essentially providing SSRF and potentially injecting content into downstream tasks. | fixed | osv:GHSA-6h8p-4hx9-w66c |
| high | any | 0.0.317 | LangChain Server Side Request Forgery vulnerability LangChain before 0.0.317 allows SSRF via `document_loaders/recursive_url_loader.py` because crawling can proceed from an external server to an internal server. | fixed | osv:GHSA-655w-fm8m-m478 |
| medium | any | 0.1.11 | PYSEC-2024-43: advisory LangChain through 0.1.10 allows ../ directory traversal by an actor who is able to control the final part of the path parameter in a load_chain call. This bypasses the intended behavior of loading configurations only from the hwchase17/langchain-hub GitHub repository. The outcome can be disclosure of an API key for a large language model online service, or remote code execution. | fixed | osv:PYSEC-2024-43 |
| medium | any | 73c42306745b0831aa6fe7fe4eeb70d2c2d87a82 | PYSEC-2024-118: advisory A Denial-of-Service (DoS) vulnerability exists in the `SitemapLoader` class of the `langchain-ai/langchain` repository, affecting all versions. The `parse_sitemap` method, responsible for parsing sitemaps and extracting URLs, lacks a mechanism to prevent infinite recursion when a sitemap URL refers to the current sitemap itself. This oversight allows for the possibility of an infinite loop, leading to a crash by exceeding the maximum recursion depth in Python. This vulnerability can be exploited to occupy server socket/port resources and crash the Python process, impacting the availability of services relying on this functionality. | fixed | osv:PYSEC-2024-118 |
| medium | any | c2a3021bb0c5f54649d380b42a0684ca5778c255 | PYSEC-2024-115: advisory A vulnerability in the GraphCypherQAChain class of langchain-ai/langchain-community version 0.2.5 allows for SQL injection through prompt injection. This vulnerability can lead to unauthorized data manipulation, data exfiltration, denial of service (DoS) by deleting all data, breaches in multi-tenant security environments, and data integrity issues. Attackers can create, update, or delete nodes and relationships without proper authorization, extract sensitive data, disrupt services, access data across different tenants, and compromise the integrity of the database. | fixed | osv:PYSEC-2024-115 |
| medium | any | 0.0.247 | PYSEC-2023-98: advisory An issue in langchain v.0.0.199 allows an attacker to execute arbitrary code via the PALChain in the python exec method. | fixed | osv:PYSEC-2023-98 |
| medium | any | 0.0.247 | PYSEC-2023-92: advisory Langchain 0.0.171 is vulnerable to Arbitrary code execution in load_prompt. | fixed | osv:PYSEC-2023-92 |
| medium | any | 0.0.225 | PYSEC-2023-91: advisory Langchain 0.0.171 is vulnerable to Arbitrary Code Execution. | fixed | osv:PYSEC-2023-91 |
| medium | any | 9ecb7240a480720ec9d739b3877a52f76098a2b8 | PYSEC-2023-205: advisory LangChain before 0.0.317 allows SSRF via document_loaders/recursive_url_loader.py because crawling can proceed from an external server to an internal server. | fixed | osv:PYSEC-2023-205 |
| medium | any | 0.0.132 | PYSEC-2023-18: advisory In LangChain through 0.0.131, the LLMMathChain chain allows prompt injection attacks that can execute arbitrary code via the Python exec method. | fixed | osv:PYSEC-2023-18 |
| medium | any | 0.0.308 | PYSEC-2023-162: advisory An issue in langchain-ai LangChain v0.0.245 allows a remote attacker to execute arbitrary code via the evaluate function in the numexpr library. | fixed | osv:PYSEC-2023-162 |
| medium | any | 0.0.171 | PYSEC-2023-151: advisory An issue in langchain v0.0.171 allows a remote attacker to execute arbitrary code via a crafted JSON file passed to the load_prompt parameter. | fixed | osv:PYSEC-2023-151 |
| medium | any | 0.0.233 | PYSEC-2023-147: advisory An issue in langchain langchain-ai v.0.0.232 and before allows a remote attacker to execute arbitrary code via a crafted script to the PythonAstREPLTool._run component. | fixed | osv:PYSEC-2023-147 |
| medium | any | 0.0.195 | PYSEC-2023-146: advisory An issue in Harrison Chase langchain v.0.0.194 and before allows a remote attacker to execute arbitrary code via the from_math_prompt and from_colored_object_prompt functions. | fixed | osv:PYSEC-2023-146 |
| medium | any | 0.0.247 | PYSEC-2023-145: advisory An issue in LangChain v.0.0.231 allows a remote attacker to execute arbitrary code via the prompt parameter. | fixed | osv:PYSEC-2023-145 |
| medium | any | 0.0.236 | PYSEC-2023-138: advisory An issue in Harrison Chase langchain v.0.0.194 allows an attacker to execute arbitrary code via the python exec calls in the PALChain, affected functions include from_math_prompt and from_colored_object_prompt. | fixed | osv:PYSEC-2023-138 |
| medium | any | 0.0.247 | PYSEC-2023-110: advisory SQL injection vulnerability in langchain v.0.0.64 allows a remote attacker to obtain sensitive information via the SQLDatabaseChain component. | fixed | osv:PYSEC-2023-110 |
| medium | any | 0.0.247 | PYSEC-2023-109: advisory An issue in langchain v.0.0.64 allows a remote attacker to execute arbitrary code via the PALChain parameter in the Python exec method. | fixed | osv:PYSEC-2023-109 |
| medium | any | 0.0.353 | langchain vulnerable to path traversal langchain-ai/langchain is vulnerable to path traversal due to improper limitation of a pathname to a restricted directory ('Path Traversal') in its LocalFileStore functionality. An attacker can leverage this vulnerability to read or write files anywhere on the filesystem, potentially leading to information disclosure or remote code execution. The issue lies in the handling of file paths in the mset and mget methods, where user-supplied input is not adequately sanitized, allowing directory traversal sequences to reach unintended directories. | fixed | osv:GHSA-rgp8-pm28-3759 |
| medium | any | 0.2.5 | Denial of service in langchain-community Denial of service in `SitemapLoader` Document Loader in the `langchain-community` package, affecting versions below 0.2.5. The `parse_sitemap` method, responsible for parsing sitemaps and extracting URLs, lacks a mechanism to prevent infinite recursion when a sitemap URL refers to the current sitemap itself. This oversight allows for the possibility of an infinite loop, leading to a crash by exceeding the maximum recursion depth in Python. This vulnerability can be exploited to occupy server socket/port resources and crash the Python process, impacting the availability of services relying on this functionality. | fixed | osv:GHSA-3hjh-jh2h-vrg6 |
| low | any | 0.1.0 | langchain Server-Side Request Forgery vulnerability With a crawler configured as `RecursiveUrlLoader(url=url, max_depth=2, extractor=lambda x: Soup(x, "html.parser").text)`, an attacker in control of the contents of the crawled site (e.g. `https://example.com`) could place a malicious HTML file there with links like "https://example.completely.different/my_file.html", and the crawler would proceed to download that file as well, even though `prevent_outside=True`. See https://github.com/langchain-ai/langchain/blob/bf0b3cc0b5ade1fb95a5b1b6fa260e99064c2e22/libs/community/langchain_community/document_loaders/recursive_url_loader.py#L51-L51. Resolved in https://github.com/langchain-ai/langchain/pull/15559 | fixed | osv:GHSA-h9j7-5xvc-qhg5 |
| low | any | 0.0.339 | LangChain directory traversal vulnerability LangChain through 0.1.10 allows ../ directory traversal by an actor who is able to control the final part of the path parameter in a load_chain call. This bypasses the intended behavior of loading configurations only from the hwchase17/langchain-hub GitHub repository. The outcome can be disclosure of an API key for a large language model online service, or remote code execution. | fixed | osv:GHSA-h59x-p739-982c |
| low | 0.2.0 | 0.2.19 | Langchain SQL Injection vulnerability A vulnerability in the GraphCypherQAChain class of langchain-ai/langchain version 0.2.5 allows for SQL injection through prompt injection. This vulnerability can lead to unauthorized data manipulation, data exfiltration, denial of service (DoS) by deleting all data, breaches in multi-tenant security environments, and data integrity issues. Attackers can create, update, or delete nodes and relationships without proper authorization, extract sensitive data, disrupt services, access data across different tenants, and compromise the integrity of the database. | fixed | osv:GHSA-45pg-36p6-83v9 |
| critical | any | 0.0.225 | Langchain OS Command Injection vulnerability Langchain before v0.0.225 was discovered to contain a remote code execution (RCE) vulnerability in the component JiraAPIWrapper (aka the JIRA API wrapper). This vulnerability allows attackers to execute arbitrary code via crafted input. As noted in the "releases/tag" reference, a fix is available. | fixed | osv:GHSA-x32c-59v5-h7fg |
| critical | any | 0.0.325 | LangChain vulnerable to arbitrary code execution An issue in langchain langchain-ai before version 0.0.325 allows a remote attacker to execute arbitrary code via a crafted script to the PythonAstREPLTool._run component. | fixed | osv:GHSA-prgp-w7vf-ch62 |
| critical | any | 0.0.236 | langchain Code Injection vulnerability An issue in Harrison Chase langchain allows an attacker to execute arbitrary code via `PALChain.from_math_prompt(llm).run` in the Python `exec` method. | fixed | osv:GHSA-gwqq-6vq7-5j86 |
| critical | any | none | LangChain vulnerable to code injection In LangChain through 0.0.131, the `LLMMathChain` chain allows prompt injection attacks that can execute arbitrary code via the Python `exec()` method. | open | osv:GHSA-fprp-p869-w6q2 |
| critical | any | 0.0.247 | LangChain vulnerable to arbitrary code execution An issue in LangChain prior to v.0.0.247 allows a remote attacker to execute arbitrary code via the prompt parameter. | fixed | osv:GHSA-fj32-q626-pjjc |
| critical | any | 0.0.308 | Langchain vulnerable to arbitrary code execution via the evaluate function in the numexpr library An issue in langchain-ai LangChain v0.0.245 allows a remote attacker to execute arbitrary code via the evaluate function in the numexpr library. Patches: released in v0.0.308; the numexpr dependency is optional for langchain. | fixed | osv:GHSA-f73w-4m7g-ch9x |
| critical | any | 0.0.236 | LangChain vulnerable to arbitrary code execution An issue in Harrison Chase langchain before version 0.0.236 allows a remote attacker to execute arbitrary code via the `from_math_prompt` and `from_colored_object_prompt` functions. | fixed | osv:GHSA-92j5-3459-qgp4 |
| critical | any | 0.0.247 | Langchain SQL Injection vulnerability In Langchain before 0.0.247, prompt injection allows execution of arbitrary code against the SQL service provided by the chain. | fixed | osv:GHSA-8h5w-f6q9-wg35 |
| critical | any | 0.0.312 | langchain vulnerable to arbitrary code execution An issue in langchain v0.0.171 allows a remote attacker to execute arbitrary code via a crafted JSON file passed to the `load_prompt` parameter. This is related to `__subclasses__` or a template. | fixed | osv:GHSA-7gfq-f96f-g85j |
| critical | any | 0.0.247 | Langchain vulnerable to arbitrary code execution Langchain 0.0.171 is vulnerable to Arbitrary code execution in `load_prompt`. | fixed | osv:GHSA-6643-h7h5-x9wh |
| critical | any | 0.0.236 | langchain vulnerable to arbitrary code execution An issue in langchain allows a remote attacker to execute arbitrary code via the PALChain parameter in the Python exec method. | fixed | osv:GHSA-57fc-8q82-gfp3 |
| critical | any | 0.0.247 | langchain arbitrary code execution vulnerability An issue in langchain allows an attacker to execute arbitrary code via the PALChain in the python exec method. | fixed | osv:GHSA-2qmj-7962-cjq8 |
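Most of the fixes in the table land in a specific release, so the practical question for a deployment is whether the installed version is at or above each "Fixed in" version. A minimal sketch of that check, assuming plain dotted release strings (it does not handle PEP 440 pre-releases or the commit-hash entries, and the `FIXED_IN` list is only a sample of the release-based values above):

```python
def parse_version(v: str) -> tuple:
    # Parse a simple dotted version like "0.0.247" into a comparable tuple.
    return tuple(int(part) for part in v.split("."))

def is_affected(installed: str, fixed_in: str) -> bool:
    # A bug with a "Fixed in" release still affects any older install.
    return parse_version(installed) < parse_version(fixed_in)

# A sample of release-based "Fixed in" values from the table above;
# rows whose fix is identified by a commit hash are skipped here.
FIXED_IN = [
    "0.0.225", "0.0.236", "0.0.247", "0.0.308", "0.0.312",
    "0.0.317", "0.0.325", "0.0.329", "0.0.353", "0.1.11",
]

def open_advisories(installed: str) -> int:
    # Count table entries whose fix postdates the installed release.
    return sum(is_affected(installed, f) for f in FIXED_IN)
```

For real-world auditing, a tool such as `pip-audit`, which consumes the same OSV data, is the more robust option.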
API access
Get this data programmatically: free, no authentication.
```shell
curl https://depscope.dev/api/bugs/pypi/langchain
```
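The same endpoint can be consumed from Python with the standard library. A minimal sketch, assuming the API returns a JSON array of bug objects with a `severity` field (the exact response shape is not documented here):

```python
import json
from urllib.request import urlopen

API_URL = "https://depscope.dev/api/bugs/pypi/langchain"

def fetch_bugs(url: str = API_URL) -> list:
    # Fetch the advisory list; the JSON array shape is an assumption.
    with urlopen(url) as resp:
        return json.loads(resp.read().decode("utf-8"))

def count_by_severity(bugs: list) -> dict:
    # Tally bugs per severity bucket (critical/high/medium/low).
    counts: dict = {}
    for bug in bugs:
        sev = bug.get("severity", "unknown")
        counts[sev] = counts.get(sev, 0) + 1
    return counts
```

Usage would be `count_by_severity(fetch_bugs())`; the field name `severity` is inferred from the table's columns, not from published API documentation.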