
iml

cran · v0.11.4

Interpretable Machine Learning. Interpretability methods to analyze the behavior and predictions of any machine learning model. Implemented methods are: Feature importance described by Fisher et al. (2018) <doi:10.48550/arxiv.1801.01489>, accumulated local effects plots described by Apley (2018) <doi:10.48550/arxiv.1612.08468>, …

License: MIT + file LICENSE · 0 versions · 1 maintainer · 8 deps · 1,907 weekly downloads
giuseppec/iml · Health: 44/100 · safe to use

iml@0.11.4 is safe to use (health: 44/100)

Health breakdown (each component scored toward a 0–100 total):
maintenance · 5/25
popularity · 6/20
security · 25/25
maturity · 6/15
community · 2/15
Vulnerabilities: 0 (none known)


Dependencies (8)
checkmate, data.table, Formula, future, future.apply, ggplot2, Metrics, R6
API access

Get this data programmatically — free, no authentication.

curl https://depscope.dev/api/check/cran/iml
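The endpoint above follows a simple path scheme, `/api/check/{ecosystem}/{name}`. A minimal sketch of building that URL for any package (the helper name is hypothetical; only the path pattern comes from the curl example above):

```python
# Build a DepScope check URL for an ecosystem/package pair.
# Path scheme /api/check/{ecosystem}/{name} is taken from the curl
# example above; the helper itself is an illustrative assumption.
def depscope_check_url(ecosystem: str, name: str) -> str:
    return f"https://depscope.dev/api/check/{ecosystem}/{name}"

print(depscope_check_url("cran", "iml"))
# https://depscope.dev/api/check/cran/iml
```

Fetching the result is then a plain unauthenticated GET, exactly as the curl line shows.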

First published · 2025-02-24 13:13:05 (UTC)

Last updated · 2025-02-24 11:50:02 (UTC)

DepScope

Package intelligence for AI agents. 19 ecosystems.

© 2026 Cuttalo srl, Italy · VAT IT03242390734