monad.ui.interpretability.interpret

monad.ui.interpretability.interpret(predictions_path, output_path, checkpoint_path, device, prediction_date=datetime.datetime.now(tz=timezone.utc), limit_batches=None, target_index=None, classification_resample=False, recommended_value=None, group_size=500, **kwargs)
Computes interpretability results for a model.
from pathlib import Path
from datetime import datetime

from monad.ui.interpretability import interpret

interpret(
    output_path=Path("<path/where/results/should/be/saved>"),
    predictions_path=Path("<path/to/predictions/my_predictions.tsv>"),
    checkpoint_path=Path("<path/to/downstream/model/checkpoints>"),  # location of the scenario model
    device="cpu",
    limit_batches=100,
    target_index=0,
    prediction_date=datetime(2023, 8, 1),
)
| Parameters |
|---|
output_path: pathlib.Path 
Where interpretability results should be stored.
predictions_path: pathlib.Path 
Path to your saved predictions.
checkpoint_path: pathlib.Path 
Path to the checkpoint of the model that was used to run the predictions.
device: str 
Device to compute the attributions on. Most commonly "cpu" or "cuda"/"cuda:X" where X is the device number.
target_index: Optional[int] 
Default: None 
Output index for which interpretations are computed. For multiclass and multi-label classification, this should be the ID number of a class. No target index is needed for recommendation models.
limit_batches: Optional[int] 
Default: None 
Number of batches used to compute attributions. If None, all batches will be used. Limiting the number of batches decreases computation time.
classification_resample: bool 
Default: False 
Whether the data should be resampled to obtain balanced classes. Applicable only for classification models.
recommended_value: Optional[str] 
Default: None 
Value of the recommended entity for which the interpretation should be generated. Applicable only for recommendation models.
group_size: int 
Default: 500 
Maximum number of samples to take from each group. For classification resampling, a group is one class. For recommendation, a group is the set of observations that have the recommended value among their top predictions. If a group has fewer observations than this value, all available ones will be taken. Used only if classification_resample is set to True or recommended_value is set (see the sketch after the parameter list below).
prediction_date: Optional[datetime.datetime] 
Default: datetime.datetime.now(tz=timezone.utc) 
The date for which the predictions should be interpreted. Defaults to the current UTC date.
Additionally, as kwargs, you can pass any parameters defined in the data_params block of the YAML configuration to overwrite those used during training of the scenario model (see the example at the end of this page).
kwargs: Any 
Default: {} 
Data configuration parameters to override.
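
The two grouping modes described above can be combined with the rest of the call as in the following sketch. It is illustrative only: the paths, the class index, and the recommended value "item_1234" are placeholders, not outputs of a real scenario.

from pathlib import Path
from monad.ui.interpretability import interpret

# Classification model: resample to balanced classes, taking at most 250 samples per class.
interpret(
    output_path=Path("<path/to/classification/results>"),
    predictions_path=Path("<path/to/predictions/my_predictions.tsv>"),
    checkpoint_path=Path("<path/to/downstream/model/checkpoints>"),
    device="cpu",
    target_index=1,                 # id of the class to explain
    classification_resample=True,   # balance classes before computing attributions
    group_size=250,                 # cap each class at 250 samples
)

# Recommendation model: explain observations where "item_1234" is among the top predictions.
interpret(
    output_path=Path("<path/to/recommendation/results>"),
    predictions_path=Path("<path/to/predictions/my_predictions.tsv>"),
    checkpoint_path=Path("<path/to/downstream/model/checkpoints>"),
    device="cuda:0",
    recommended_value="item_1234",  # no target_index is needed for recommendation
    group_size=250,
)
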
| Returns | 
|---|
Saves results under output_path.
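
As an illustration of the kwargs mechanism described in the parameter list, any extra keyword argument is forwarded as a data_params override. The sketch below rests on that assumption; "<data_param_name>" is a placeholder for a key that actually appears in the data_params block of your scenario's YAML configuration.

from pathlib import Path
from datetime import datetime
from monad.ui.interpretability import interpret

interpret(
    output_path=Path("<path/where/results/should/be/saved>"),
    predictions_path=Path("<path/to/predictions/my_predictions.tsv>"),
    checkpoint_path=Path("<path/to/downstream/model/checkpoints>"),
    device="cpu",
    prediction_date=datetime(2023, 8, 1),
    # Keyword arguments that are not part of the signature are treated as
    # data_params overrides; "<data_param_name>" is a hypothetical placeholder.
    **{"<data_param_name>": "<new value>"},
)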
