API Reference

load_from_checkpoint

monad.ui.module.load_from_checkpoint

monad.ui.module.load_from_checkpoint(checkpoint_path, pl_logger=None, scoring=False, split=None, predictions_to_include_fn=None, predictions_to_exclude_fn=None, **kwargs)

Restores a trained scenario model for evaluation, inference or continued training.

from monad.ui.module import load_from_checkpoint

trainer = load_from_checkpoint(
    checkpoint_path="<path/to/checkpoint>",
)
Parameters

checkpoint_path: str | pathlib.Path
Directory where all the checkpoint artifacts are stored.


pl_logger: Optional[pytorch_lightning.loggers.Logger]
Default: None.
An instance of a PyTorch Lightning logger to use.


scoring: bool
Default: False.
Leave as False to resume training; set to True to perform inference.


predictions_to_include_fn: Optional[Callable[[Events, Attributes, dict[str, float]], numpy.typing.NDArray[np.str_] | list[str] | None]]
Default: None.
A function that returns the items/classes to which predictions should be narrowed for each entity. Mutually exclusive with predictions_to_exclude_fn.
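As an illustration, a callback that narrows predictions to items already present in an entity's event history might look like the sketch below. The Events and Attributes arguments are represented here by plain Python stand-ins (an iterable of dicts with an assumed "item_id" key); the real types come from monad, and the dict-of-scores argument follows the signature above.

```python
# Sketch of a predictions_to_include_fn (illustrative, not the library's API).
# `events` stands in for monad's Events object; we assume it iterates as
# dicts carrying an "item_id" key. `attributes` is unused in this sketch.
def include_previously_seen(events, attributes, scores):
    # `scores` maps candidate item ids to model scores for one entity.
    seen = {event["item_id"] for event in events}
    kept = [item_id for item_id in scores if item_id in seen]
    # Returning None (allowed by the declared return type) is assumed to
    # leave this entity's predictions unrestricted.
    return kept or None
```

The function is invoked per entity, so it can base its decision on that entity's own events and attributes.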



predictions_to_exclude_fn: Optional[Callable[[Events, Attributes, dict[str, float]], numpy.typing.NDArray[np.str_] | list[str] | None]]
Default: None.
A function that returns the items/classes that should be excluded from the predictions for each entity. Mutually exclusive with predictions_to_include_fn.
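Conversely, a callback that removes items the entity has already interacted with might be sketched as follows. The same caveats apply: the event structure is a stand-in for monad's Events type, and the numpy return path simply exercises the array variant of the declared return type.

```python
import numpy as np

# Sketch of a predictions_to_exclude_fn (illustrative, not the library's API).
# `events` is assumed to iterate as dicts with an "item_id" key;
# `attributes` is unused in this sketch.
def exclude_already_bought(events, attributes, scores):
    # Drop every candidate the entity has already interacted with.
    bought = sorted({event["item_id"] for event in events})
    # The declared return type also permits a numpy array of strings.
    return np.array(bought, dtype=np.str_) if bought else None
```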


split: Optional[TimeSplitOverride | EntitySplitOverride]
Default: None.
Optional override for the split configuration set during the previous run.

Note: This parameter is handy when you need to override or add split information (e.g., if the test period was not defined in the pretrain configuration file). However, when running test or prediction with models trained using an entity-based split, use prediction_date from TestingParams instead.


kwargs: Any
Default: dict.
Data configuration parameters to override.

Returns

MonadModule