Building applications with LLMs through composability
[email protected] is safe to use (health: 65/100)
langchain_experimental is vulnerable to arbitrary code execution via PALChain's use of Python's exec
LangChain Experimental Eval Injection vulnerability
Get this data programmatically — free, no authentication required:
curl https://depscope.dev/api/check/pypi/langchain-experimental
Last updated: 2025-12-11T05:30:47.234163Z
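If you want to act on the API response in code rather than eyeball the curl output, a minimal sketch follows. The JSON shape used here (fields `package`, `health`, `vulnerabilities`) is an assumption for illustration; consult the actual DepScope response for the real schema.

```python
import json

# Hypothetical shape of a DepScope /api/check response.
# The field names below are assumptions, not the documented schema.
SAMPLE_REPORT = json.loads("""
{
  "package": "langchain-experimental",
  "ecosystem": "pypi",
  "health": 65,
  "vulnerabilities": [
    {"title": "Arbitrary code execution via PALChain (python exec)"},
    {"title": "Eval Injection vulnerability"}
  ]
}
""")

def summarize(report, min_health=70):
    """Return a one-line verdict from a DepScope-style report dict."""
    vulns = report.get("vulnerabilities", [])
    # Flag the package for review if the health score is low
    # or any advisories are present.
    healthy = report.get("health", 0) >= min_health and not vulns
    status = "ok" if healthy else "review"
    return (f"{report['package']}: {status} "
            f"(health {report['health']}, {len(vulns)} advisories)")

print(summarize(SAMPLE_REPORT))
# → langchain-experimental: review (health 65, 2 advisories)
```

In a real pipeline you would replace `SAMPLE_REPORT` with the parsed body of the curl request above.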
Data from DepScope — Package Intelligence for AI Agents