Explainable AI (XAI) Explorer

Understand how LIME and SHAP explain AI decisions. Adjust applicant features, see predictions change, and compare explanation methods side by side.

LIME: Local Interpretable Model-agnostic Explanations

Approximates complex models locally

How it works: LIME perturbs the input data, observes changes in the prediction, and fits a simple linear model to explain that specific, local decision. Think of it as putting a magnifying glass on a single prediction to understand what drove it.
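The loop described above (perturb, query the black box, proximity-weight, fit a simple surrogate) can be sketched in a few lines. Everything here is illustrative: the two-feature loan model, its weights, and the noise scales are invented for the example, not taken from any real system.

```python
import math
import random

def model(income_k, credit):                  # hypothetical black box being explained
    z = 0.03 * income_k + 0.01 * credit - 9.0
    return 1 / (1 + math.exp(-z))             # approval probability

def lime_sketch(x, n_samples=4000, sigma=(10.0, 30.0), seed=0):
    rng = random.Random(seed)
    # 1. Perturb the instance and query the black box on each sample.
    noise = [[rng.gauss(0, s) for s in sigma] for _ in range(n_samples)]
    preds = [model(x[0] + e[0], x[1] + e[1]) for e in noise]
    # 2. Weight samples by proximity to the original instance.
    wts = [math.exp(-sum((e[j] / sigma[j]) ** 2 for j in range(2)) / 2)
           for e in noise]
    # 3. Fit a simple weighted linear surrogate. Per-feature slopes are a
    #    valid fit here because the perturbations are independent.
    W = sum(wts)
    p_bar = sum(w * p for w, p in zip(wts, preds)) / W
    slopes = []
    for j in range(2):
        num = sum(w * e[j] * (p - p_bar) for w, e, p in zip(wts, noise, preds))
        den = sum(w * e[j] ** 2 for w, e in zip(wts, noise))
        slopes.append(num / den)
    return {"income": slopes[0], "credit_score": slopes[1]}

attrib = lime_sketch((75, 700))
# Both slopes come out positive: locally, higher income and higher
# credit score each push this prediction toward approval.
```

The slopes are the "bars" a LIME plot would show for this one applicant; rerunning with a different seed gives slightly different values, which is the stochasticity discussed later.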

SHAP: SHapley Additive exPlanations

Based on cooperative game theory

How it works: SHAP assigns each feature a unified importance value for a prediction. It averages each feature's marginal contribution over every possible coalition of the other features, so it quantifies exactly how much each input (e.g., age, income) drove the decision, providing both the magnitude and direction of each feature's influence.
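For a small feature set, the marginal-contribution averaging can be computed exactly by enumerating all coalitions. The sketch below is a minimal exact-Shapley implementation; the three-feature loan model, its weights, and the background (baseline) applicant are all hypothetical.

```python
import math
from itertools import combinations

def shapley_values(predict, x, background):
    """Exact Shapley values: phi[i] sums |S|!(n-|S|-1)!/n! times the
    marginal contribution v(S ∪ {i}) - v(S) over all coalitions S not
    containing i, where v(S) evaluates the model with features in S
    taken from x and the rest from the background instance."""
    n = len(x)
    def v(S):
        return predict([x[i] if i in S else background[i] for i in range(n)])
    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for size in range(n):
            for S in combinations(others, size):
                w = (math.factorial(size) * math.factorial(n - size - 1)
                     / math.factorial(n))
                phi[i] += w * (v(set(S) | {i}) - v(set(S)))
    return phi

def loan_model(feats):  # hypothetical: [income_k, credit_score, dti_pct]
    z = 0.02 * feats[0] + 0.012 * feats[1] - 0.04 * feats[2] - 8.1
    return 1 / (1 + math.exp(-z))

x, bg = [75, 700, 30], [55, 650, 35]
phi = shapley_values(loan_model, x, bg)
# Efficiency property: loan_model(bg) + sum(phi) equals loan_model(x).
```

Note the cost: the loop visits every subset of features, which is why exact Shapley values are exponential in the number of features and libraries rely on approximations such as KernelSHAP or model-specific shortcuts like TreeSHAP.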

Loan Approval Model Simulator

Adjust the applicant's features below to see how the model's prediction changes, then explore how LIME and SHAP explain the decision differently.

[Interactive simulator: six adjustable applicant features (example values: 35, $75K, 700, 30%, 5, 8) with outputs for Decision, Approval Score, and Confidence, evaluated against an approval Threshold of 0.50.]
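A scorer like the simulator's could be a simple logistic model over the six inputs. The sketch below is only a plausible stand-in: the feature labels (age, income, credit score, debt-to-income, years employed, open accounts) and every weight are assumptions, not the demo's actual model.

```python
import math

def approval_score(age, income_k, credit, dti_pct, years_employed, accounts):
    # Hypothetical logistic scorer; all weights and offsets are invented.
    z = (0.01 * (age - 35)
         + 0.02 * (income_k - 60)
         + 0.012 * (credit - 650)
         - 0.05 * (dti_pct - 35)        # higher debt-to-income hurts
         + 0.08 * (years_employed - 3)
         + 0.03 * (accounts - 5))
    return 1 / (1 + math.exp(-z))       # approval probability in (0, 1)

THRESHOLD = 0.50
score = approval_score(35, 75, 700, 30, 5, 8)
decision = "APPROVED" if score >= THRESHOLD else "DENIED"
# For these example inputs the score lands around 0.80, above the
# 0.50 threshold, so the decision is APPROVED.
```

Nudging any one argument (say, raising `dti_pct`) and re-evaluating shows how the score and decision respond, which is exactly what the sliders do.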

LIME Local Explanation

LIME creates local perturbations around this specific applicant, observes how the model responds, and fits a simple interpretable model. The bars show which features pushed the prediction towards approval (green, right) or denial (red, left) for this specific case. Values may vary slightly between runs due to the stochastic perturbation process.

SHAP Shapley Value Explanation

SHAP computes each feature's marginal contribution using Shapley values from game theory. The bars show the exact contribution of each feature to the final prediction score (positive = towards approval, negative = towards denial). SHAP values are additive: the base value plus all SHAP values equals the prediction score.
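The additivity claim can be verified directly. With only two features, each Shapley value is the average of just two marginal contributions, and base value plus both values reproduces the prediction exactly. The model, its weights, and the background applicant below are hypothetical.

```python
import math

def f(income_k, credit):  # hypothetical two-feature loan model
    z = 0.02 * income_k + 0.012 * credit - 9.3
    return 1 / (1 + math.exp(-z))

bg = (55.0, 650.0)   # background (baseline) applicant
x  = (75.0, 700.0)   # applicant being explained
base = f(*bg)        # base value: prediction for the background

# Two features -> each Shapley value averages two marginal contributions,
# one for each order in which the feature can join the coalition.
phi_income = 0.5 * ((f(x[0], bg[1]) - f(*bg)) + (f(*x) - f(bg[0], x[1])))
phi_credit = 0.5 * ((f(bg[0], x[1]) - f(*bg)) + (f(*x) - f(x[0], bg[1])))

# Additivity: base + phi_income + phi_credit == f(*x), to machine precision.
print(base + phi_income + phi_credit, f(*x))
```

This identity (the efficiency property) is what makes a SHAP bar chart a complete decomposition of the score rather than a set of relative weights.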

LIME vs SHAP — Side by Side

LIME Feature Importance

SHAP Feature Importance

| Aspect | LIME | SHAP |
|---|---|---|
| Approach | Local surrogate model (perturbation-based) | Game-theoretic (Shapley values) |
| Scope | Local only (per-prediction) | Both local and global |
| Consistency | Can vary between runs (stochastic) | Deterministic, unique solution |
| Additivity | No; importances are relative weights | Yes; values sum to the prediction |
| Speed | Fast (fewer perturbations) | Slower (exact computation is exponential in features) |
| Model-agnostic | Yes | Yes (KernelSHAP) / No (TreeSHAP) |
| Best for | Quick local explanations, debugging | Rigorous analysis, regulatory compliance |
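The consistency row can be demonstrated in a few lines: a perturbation-based (LIME-style) attribution changes with the random seed, while exact Shapley values are a deterministic function of the model and inputs. The two-feature model and all numbers are hypothetical.

```python
import math
import random

def f(income_k, credit):  # hypothetical loan model
    z = 0.02 * income_k + 0.012 * credit - 9.3
    return 1 / (1 + math.exp(-z))

def lime_style(x, seed, n=2000, sigma=(10.0, 30.0)):
    """Perturbation-based local slopes; result depends on the seed."""
    rng = random.Random(seed)
    noise = [[rng.gauss(0, s) for s in sigma] for _ in range(n)]
    preds = [f(x[0] + e[0], x[1] + e[1]) for e in noise]
    m = sum(preds) / n
    return tuple(sum(e[j] * (p - m) for e, p in zip(noise, preds))
                 / (n * sigma[j] ** 2) for j in range(2))

def shapley(x, bg):
    """Exact two-feature Shapley values; no randomness involved."""
    phi0 = 0.5 * ((f(x[0], bg[1]) - f(*bg)) + (f(*x) - f(bg[0], x[1])))
    phi1 = 0.5 * ((f(bg[0], x[1]) - f(*bg)) + (f(*x) - f(x[0], bg[1])))
    return (phi0, phi1)

x, bg = (75.0, 700.0), (55.0, 650.0)
print(lime_style(x, seed=1) == lime_style(x, seed=2))  # False: stochastic
print(shapley(x, bg) == shapley(x, bg))                # True: deterministic
```

Both methods agree on direction here (both features push toward approval); what differs is repeatability, which is why the table recommends SHAP where auditability matters.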