Building applications with LLMs through composability
[email protected] is safe to use (health: 81/100)
Known advisories:
- LangChain vulnerable to arbitrary code execution via the evaluate function in the numexpr library (fixed in 2.8.5)
- LangChain vulnerable to code injection
Get this data programmatically — free, no authentication required:
curl https://depscope.dev/api/check/pypi/langchain

Last updated: 2026-04-03T14:26:02.557339Z
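The same endpoint can be called from any HTTP client. A minimal Python sketch, assuming the API returns JSON; the response field names (`health`, `advisories`) and the `summarize` helper are hypothetical illustrations, not documented by this page:

```python
from urllib.parse import quote

# Base URL taken from the curl example above.
API_BASE = "https://depscope.dev/api/check"

def check_url(ecosystem: str, package: str) -> str:
    """Build the DepScope check URL for a package in a given ecosystem."""
    return f"{API_BASE}/{quote(ecosystem)}/{quote(package)}"

def summarize(payload: dict) -> str:
    """Summarize a decoded JSON payload.

    The keys 'health' and 'advisories' are assumptions about the
    response shape, not confirmed by the API documentation.
    """
    health = payload.get("health")
    advisories = payload.get("advisories", [])
    return f"health={health}, advisories={len(advisories)}"

if __name__ == "__main__":
    url = check_url("pypi", "langchain")
    print(url)
    # To actually fetch (network access required):
    # import json, urllib.request
    # payload = json.load(urllib.request.urlopen(url))
    # print(summarize(payload))
```

The fetch itself is left commented out so the sketch runs without network access; swap in any HTTP client you prefer.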
Data from DepScope — Package Intelligence for AI Agents