Local Interpretable Model-Agnostic Explanations. When building complex models, it is often difficult to explain why the model should be trusted. While global measures such as accuracy are useful, they cannot explain why a model made a specific prediction. 'lime' (a port of the 'lime' 'Python' package) is a method for explaining the outcome of black-box models by fitting a local, interpretable model around the point in question and perturbations of that point.
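For illustration, a minimal sketch of the typical workflow with the R package: build an explainer from training data and a fitted model, then explain individual predictions. The choice of caret's train() and the iris data is an assumption made for the example, not part of this listing.

# A minimal sketch, assuming the 'lime' and 'caret' packages are installed;
# the model and data are illustrative only.
library(caret)
library(lime)

model <- train(Species ~ ., data = iris, method = "rf")  # fit any supported model
explainer <- lime(iris[, -5], model)                     # build the explainer from training data
explanation <- explain(iris[1:2, -5], explainer,         # explain two observations
                       n_labels = 1, n_features = 3)
plot_features(explanation)                               # visualise the local explanation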
The package name resembles 'lme4' (a possible typosquat). Verify that 'lime' is the package you intend before installing.
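One way to verify before installing is to inspect the package's entry in the repository index; the CRAN mirror URL below is an assumption, substitute your own.

# A minimal check, assuming a standard CRAN mirror; confirms the exact
# package name and current version before calling install.packages("lime").
db <- available.packages(repos = "https://cloud.r-project.org")
db["lime", c("Package", "Version", "Repository")]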
Get this data programmatically — free, no authentication.
curl https://depscope.dev/api/check/cran/lime
First published · 2025-12-11 17:01:49
Last updated · 2025-12-11 15:00:02 UTC
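A short sketch of consuming the endpoint from R; that the API returns JSON is an assumption, as the response format and fields are not documented on this page.

# A minimal sketch, assuming a JSON response; jsonlite parses it
# into an R list/data frame for inspection.
library(jsonlite)
check <- fromJSON("https://depscope.dev/api/check/cran/lime")
str(check)  # inspect whatever fields the API returns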