Understand how LIME and SHAP explain AI decisions. Adjust applicant features, see predictions change, and compare explanation methods side by side.
Approximates complex models locally
How it works: LIME perturbs the input around the instance being explained, observes how the black-box prediction changes, and fits a simple linear model (weighted by each sample's proximity to the original input) to explain that specific, local decision. Think of it as putting a magnifying glass on a single prediction to understand what drove it.
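To make that loop concrete (perturb, predict, weight by proximity, fit a weighted linear surrogate), here is a minimal sketch using only NumPy and scikit-learn rather than the `lime` package. The function name `lime_explain`, its defaults, and the scaling choices are illustrative assumptions, not the library's API:

```python
import numpy as np
from sklearn.linear_model import Ridge

def lime_explain(model_predict, instance, training_data,
                 n_samples=1000, kernel_width=None, rng=None):
    """Minimal LIME-style explanation of a single prediction.

    model_predict : maps an (n, d) array to a 1-D array of scores
                    (e.g. probability of loan approval).
    instance      : the d feature values being explained.
    training_data : (m, d) array, used only to scale the perturbations.
    """
    rng = np.random.default_rng(rng)
    scale = training_data.std(axis=0) + 1e-12
    if kernel_width is None:
        kernel_width = 0.75 * np.sqrt(instance.size)
    # 1. Perturb the input around the instance being explained.
    perturbed = instance + rng.normal(size=(n_samples, instance.size)) * scale
    # 2. Observe how the black-box prediction changes.
    preds = model_predict(perturbed)
    # 3. Weight each perturbed sample by its proximity to the original instance.
    dist = np.linalg.norm((perturbed - instance) / scale, axis=1)
    weights = np.exp(-dist ** 2 / kernel_width ** 2)
    # 4. Fit a simple weighted linear surrogate; its coefficients are the explanation.
    surrogate = Ridge(alpha=1.0).fit(perturbed, preds, sample_weight=weights)
    return surrogate.coef_  # one local weight per feature (sign = direction)
```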
Based on cooperative game theory
How it works: SHAP assigns each feature a Shapley value for a prediction: its average marginal contribution across all possible combinations of the other features. This tells you how much each feature (e.g., age, income) pushed the decision up or down, providing both magnitude and direction, and the values sum to the gap between the prediction and a baseline.
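For a handful of features, those marginal contributions can be computed by brute force. The sketch below is not the `shap` library's API: `shapley_values` is an illustrative function that fills in "absent" features with background means as a rough stand-in for marginalizing them out, and its exponential loop is exactly why practical tools approximate the calculation.

```python
import numpy as np
from itertools import combinations
from math import comb

def shapley_values(model_predict, instance, background):
    """Exact Shapley values for one prediction (cost grows exponentially with d)."""
    d = instance.size
    baseline = background.mean(axis=0)

    def value(subset):
        # Model output when only the features in `subset` take the instance's values.
        x = baseline.copy()
        idx = list(subset)
        x[idx] = instance[idx]
        return float(model_predict(x.reshape(1, -1))[0])

    phi = np.zeros(d)
    for i in range(d):
        others = [j for j in range(d) if j != i]
        for k in range(d):
            for subset in combinations(others, k):
                weight = 1.0 / (d * comb(d - 1, k))
                # Marginal contribution of feature i to this coalition.
                phi[i] += weight * (value(subset + (i,)) - value(subset))
    return phi  # sums to prediction(instance) - prediction(baseline)
```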
Adjust the applicant's features below to see how the model's prediction changes, then explore how LIME and SHAP explain the decision differently.
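If you want to reproduce that behaviour outside the demo, a stand-in model works just as well. The snippet below trains a hypothetical loan-approval classifier on synthetic data (the feature names, ranges, and labeling rule are all made up) and shows the predicted approval probability shifting as one feature is adjusted:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

# Hypothetical loan-approval model trained on synthetic data.
rng = np.random.default_rng(0)
age    = rng.integers(21, 70, 500)
income = rng.normal(55_000, 20_000, 500)
score  = rng.normal(650, 80, 500)
X = np.column_stack([age, income, score]).astype(float)
y = (income / 1_000 + score / 10 + rng.normal(0, 5, 500) > 120).astype(int)
model = GradientBoostingClassifier().fit(X, y)

def predict_fn(a):
    return model.predict_proba(a)[:, 1]  # P(approve)

applicant = np.array([35.0, 48_000.0, 610.0])  # age, income, credit score
print("P(approve):", predict_fn(applicant.reshape(1, -1))[0])

applicant[2] = 700.0  # raise the credit score and re-score
print("P(approve):", predict_fn(applicant.reshape(1, -1))[0])
```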
| Aspect | LIME | SHAP |
|---|---|---|
| Approach | Local surrogate model (perturbation-based) | Game-theoretic (Shapley values) |
| Scope | Local only (per-prediction) | Both local and global |
| Consistency | Can vary between runs (stochastic) | Deterministic, unique solution |
| Additivity | No — importances are relative weights | Yes — values sum to prediction |
| Speed | Fast (fewer perturbations) | Slower for exact values (exponential in features); TreeSHAP is fast for tree models |
| Model-agnostic | Yes | Yes (KernelSHAP) / No (TreeSHAP) |
| Best for | Quick local explanations, debugging | Rigorous analysis, regulatory compliance |
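To compare the two methods side by side in code, the sketch functions above can be run on the same hypothetical applicant and model:

```python
# Reusing the stand-in model, applicant, and the two sketch functions from above.
lime_weights = lime_explain(predict_fn, applicant, X, rng=0)
shap_vals    = shapley_values(predict_fn, applicant, X)

for name, lw, sv in zip(["age", "income", "credit score"], lime_weights, shap_vals):
    print(f"{name:>12}  LIME weight: {lw:+.2e}   SHAP value: {sv:+.3f}")
```

Keep in mind the units differ: LIME's weights are local slopes (change in score per unit change in the raw feature), while the SHAP values are in the model's output units and sum to the prediction minus the baseline.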