API Reference

TestingParams

class monad.ui.config.TestingParams

from monad.ui.config import TestingParams

Defines model testing setup.

from monad.ui.config import OutputType, TestingParams
from monad.ui.module import load_from_checkpoint

testing_params = TestingParams(
    local_save_location = "<path/to/predictions/predictions_and_ground_truth.tsv>",
    output_type = OutputType.DECODED,
)
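
Continuing the example above, a minimal sketch of wiring these parameters into an evaluation run; the checkpoint-path argument to load_from_checkpoint and the test() entry point below are assumptions made for illustration, not documented here:

monad_module = load_from_checkpoint("<path/to/checkpoint>")  # assumed: accepts a checkpoint path
monad_module.test(testing_params)  # assumed: a test() entry point that consumes TestingParams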

Parameters

output_type: OutputType
Output format in which to save the predictions. The table below explains how different values affect prediction outputs.


prediction_date: Optional[datetime.datetime]
Default: None
The date for which to run test evaluation when an entity-based split is used, or for which to make predictions during inference.
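For illustration, the prediction date is passed as a standard datetime object (the chosen date below is arbitrary), following the constructor call shown above:

from datetime import datetime
from monad.ui.config import OutputType, TestingParams

testing_params = TestingParams(
    prediction_date = datetime(2023, 6, 1),  # arbitrary example date
    output_type = OutputType.DECODED,
)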


local_save_location: pathlib.Path | None
Default: None
If provided, points to the location in the local filesystem where evaluation results will be stored in TSV format.
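Since the field is typed as pathlib.Path, a Path object can be passed directly (the file name below is illustrative):

from pathlib import Path
from monad.ui.config import OutputType, TestingParams

testing_params = TestingParams(
    local_save_location = Path("predictions/predictions_and_ground_truth.tsv"),  # illustrative path
    output_type = OutputType.DECODED,
)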


remote_save_location: DataLocation | None
Default: None
If provided, defines a table in a remote database where the evaluation results will be stored.


limit_test_batches: int
Default: None
If provided, defines how many batches to run evaluation over.
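For example, evaluation can be restricted to a handful of batches for a quick sanity check (the value below is arbitrary):

from monad.ui.config import OutputType, TestingParams

testing_params = TestingParams(
    limit_test_batches = 10,  # arbitrary small number of batches for a quick run
    output_type = OutputType.DECODED,
)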


top_k: int
Default: None
Only applicable to the recommendation task. The number of top-ranked values to recommend. Using this is highly advised, as it reduces the size of the prediction file.
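For a recommendation task, a sketch of keeping only the ten highest-ranked predictions (the value is illustrative):

from monad.ui.config import OutputType, TestingParams

testing_params = TestingParams(
    top_k = 10,  # keep only the 10 highest-ranked recommendations, shrinking the prediction file
    output_type = OutputType.DECODED,
)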


precision: Literal[64, 32, 16, "64", "32", "16", "bf16", "16-true", "16-mixed", "bf16-true", "bf16-mixed", "32-true", "64-true"]
Default: DEFAULT_PRECISION
Controls the floating-point precision used during evaluation: double precision (64, "64" or "64-true"), full precision (32, "32" or "32-true"), 16-bit mixed precision (16, "16", "16-mixed") or bfloat16 mixed precision ("bf16", "bf16-mixed"). The DEFAULT_PRECISION constant sets precision to "bf16-mixed" if CUDA is available, else "16-mixed".
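The default described above can be reproduced with a small check; this is a sketch of the stated behaviour, not the library's actual constant definition:

import torch

# Mirrors the documented behaviour of DEFAULT_PRECISION:
# "bf16-mixed" when a CUDA device is available, "16-mixed" otherwise.
default_precision = "bf16-mixed" if torch.cuda.is_available() else "16-mixed"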


devices: list[int] | int
Default: 1
The devices to use. A positive integer defines how many devices to use, a list of integers specifies which devices should be used, and the value -1 indicates that all available devices should be used.
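For illustration, the three accepted forms look as follows (the values are arbitrary):

from monad.ui.config import OutputType, TestingParams

# Use two devices, letting the framework pick which ones.
TestingParams(devices = 2, output_type = OutputType.DECODED)

# Use exactly the devices with indices 0 and 1.
TestingParams(devices = [0, 1], output_type = OutputType.DECODED)

# Use all available devices.
TestingParams(devices = -1, output_type = OutputType.DECODED)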


accelerator: Literal["cpu", "gpu"]
Default: "gpu"
The accelerator to use: GPU or CPU.
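A CPU-only evaluation can be requested by combining accelerator and devices (an illustrative combination):

from monad.ui.config import OutputType, TestingParams

testing_params = TestingParams(
    accelerator = "cpu",  # run evaluation on CPU instead of the default GPU
    devices = 1,
    output_type = OutputType.DECODED,
)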


strategy: str | None
Default: None
Strategy for distributed execution. Supported strategies are:

  • None: PyTorch Lightning's default strategy,
  • "ddp": Distributed Data Parallel,
  • "fsdp": Fully Sharded Data-Parallel 2 with full tensor parallelism,
  • "fsdp:%d:%d": Fully Sharded Data-Parallel 2, where the first int defines the degree of data parallelism (replication) and the second int defines the degree of tensor parallelism (sharding).

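As an illustration of the templated form, the following requests FSDP with 2-way replication and 4-way sharding (the specific factors and device count are arbitrary):

from monad.ui.config import OutputType, TestingParams

testing_params = TestingParams(
    strategy = "fsdp:2:4",  # 2-way data parallelism (replication), 4-way tensor parallelism (sharding)
    devices = 8,            # illustrative: 2 * 4 devices to match the chosen factors
    output_type = OutputType.DECODED,
)
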
metrics: list[MetricParams]
Default: [] (empty list)
List of custom metrics.


callbacks: list[pytorch_lightning.Callback]
Default: [] (empty list)
List of additional PyTorch Lightning callbacks to add.
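For illustration, a custom callback can hook into the evaluation loop; the callback below is a hypothetical example, not part of the library:

import pytorch_lightning as pl
from monad.ui.config import OutputType, TestingParams

class LogWhenDone(pl.Callback):
    """Hypothetical callback that prints a message when testing finishes."""

    def on_test_end(self, trainer, pl_module):
        print("Evaluation finished.")

testing_params = TestingParams(
    callbacks = [LogWhenDone()],
    output_type = OutputType.DECODED,
)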


entity_ids: EntityIds
Default: None
Restricts the set of entity IDs used during training or testing.