
Customizing the loading from foundation model

⚠️

Check This First!

This article refers to BaseModel accessed via a Docker container. If you are using BaseModel as a Snowflake GUI application, please refer to the Snowflake Native App section instead.


The arguments of load_from_foundation_model instantiate the scenario model trainer. They specify the location of the foundation model for the scenario, the modelling task and its expected output, and the target function. Optionally, they also let you use a customized logger, adapt the data source loading configuration, and modify the majority of the parameters described in the Data configuration section of the foundation model.

Scenario Model Parameters
  • checkpoint_path : str
    No default, required.
    Directory where all the checkpoint artifacts of the selected foundation model are stored.

  • downstream_task : Task
    No default, required.
    One of the machine learning tasks defined in BaseModel. Possible values are RegressionTask(), RecommendationTask(), BinaryClassificationTask(), MultilabelClassificationTask(), and MulticlassClassificationTask().

  • target_fn : Callable[[Events, Events, Attributes, Dict], Union[Tensor, ndarray, Sketch]]
    No default, required.
    Target function for the specified task. It must be defined in the script, and its return type must match the task.

  • with_head: bool
    Default: False
    Whether to reuse the last layer of the foundation model. May improve quality on recommendation tasks.

  • predictions_to_include_fn: Optional[PredictionsFilteringFnType]
    Default: None.
    A function that, for each entity, returns the items/classes the predictions should be narrowed down to.
    Mutually exclusive with predictions_to_exclude_fn.

  • predictions_to_exclude_fn: Optional[PredictionsFilteringFnType]
    Default: None.
    A function that, for each entity, returns the items/classes that should be excluded from the predictions. Mutually exclusive with predictions_to_include_fn.

  • split: Optional[TimeSplitOverride | EntitySplitOverride]
    Default: None. Optional override for the split configuration set in foundation model training.

    • TimeSplitOverride: a dict[DataMode, TimeRange], where
      DataMode ∈ {TRAIN, VALIDATION, TEST, PREDICT} and TimeRange has start_date: datetime and end_date: datetime. To override/add a test period, set DataMode.TEST and provide dates, e.g.: {DataMode.TEST: TimeRange(start_date=datetime(2023, 8, 1), end_date=datetime(2023, 8, 22))}

    • EntitySplitOverride: a dict with fields
      training: int (percentage), validation: int (percentage), and training_validation_end: datetime. The first two fields define the train/validation percentages; training_validation_end marks the end of the training and validation period, leaving the remaining time for the test period.

  • pl_logger : Logger
    Default: None, optional.
    Instance of a PyTorch Lightning logger.
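
The required parameters can be sketched together in a short example. The sketch below is illustrative only: churn_target_fn, its mock interpretation of the Events arguments, and the commented-out trainer call mirror the names in this article rather than verified import paths, so check your BaseModel installation for the actual signatures.

```python
from typing import Any, Dict

import numpy as np


def churn_target_fn(
    history_events: Any,   # Events observed before the split point
    future_events: Any,    # Events observed after the split point
    attributes: Any,       # entity attributes (unused in this sketch)
    context: Dict,         # extra context passed by BaseModel
) -> np.ndarray:
    """Hypothetical binary target for BinaryClassificationTask():
    label 1 if the entity has no future events (churned), else 0.

    For illustration, future_events is assumed to be iterable as a
    per-entity collection of events.
    """
    return np.array(
        [0 if entity_events else 1 for entity_events in future_events],
        dtype=np.int64,
    )


# The trainer would then be instantiated roughly like this (keyword
# names follow this article; not executed here):
#
# trainer = load_from_foundation_model(
#     checkpoint_path="/path/to/foundation/checkpoint",
#     downstream_task=BinaryClassificationTask(),
#     target_fn=churn_target_fn,
#     split={DataMode.TEST: TimeRange(start_date=datetime(2023, 8, 1),
#                                     end_date=datetime(2023, 8, 22))},
# )
```

Note that the return type of the target function (here an ndarray of 0/1 labels) must match the chosen downstream task, as stated in the target_fn parameter description above.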


📘

Good to know

Additionally, as part of the load_from_foundation_model input, you can expand or overwrite configurations made during the Foundation Model training stage.

  • data_params: dates separating the training, validation and test sets, sampling management, the number of split points, extra columns to be made available to the target function, etc.
  • query_optimization: query chunking, capping the sample size or the number of CPUs in case of infrastructure constraints.
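
As a rough sketch, such overrides could be assembled as plain Python fragments before being passed to load_from_foundation_model. Every key name below is a hypothetical placeholder, not a documented parameter; the real modifiable parameters are the ones listed in the sections referenced in this article.

```python
# Hypothetical override fragments -- all key names are placeholders
# for illustration only; consult the referenced parameter lists for
# the actual names accepted by BaseModel.
data_params_override = {
    "validation_start": "2023-07-01",     # placeholder: split-date override
    "test_start": "2023-08-01",           # placeholder: split-date override
    "extra_columns": ["price", "brand"],  # placeholder: columns exposed to target_fn
}

query_optimization_override = {
    "chunk_size": 100_000,  # placeholder: chunked-query size
    "max_cpus": 8,          # placeholder: CPU cap under infrastructure constraints
}

# These would then be merged into the trainer call, e.g.:
# load_from_foundation_model(..., data_params=data_params_override,
#                            query_optimization=query_optimization_override)
```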

To review the list of modifiable parameters, refer to: