Scenario Model Stage Overview

An overview of the workflow

⚠️

Check This First!

This article refers to BaseModel accessed via a Docker container. If you are using BaseModel as a Snowflake GUI application, please refer to the Snowflake Native App section instead.

The Foundation Model is designed to understand the behavior of entities and the interactions between them, and to develop a general predictive capability in the domain. During training, it has no more specific goal than that.

To address a particular business problem, you need to create a downstream model tailored to the scenario by fine-tuning the Foundation Model for that specific task.

In this article we focus on that step:

To build a downstream model for your particular business objective, you need to adapt the training script template and execute the training either through a Python function or via the command line.

The script should do the following:

  • perform the required imports, including the correct class for the task, aligned with your scenario,

  • define the target function, which will steer the loss calculation during model training,

  • specify the location of your source Foundation Model and where to store your scenario model,

  • set the training parameters, adapting them if required,

  • instantiate the trainer, loading your Foundation Model and setting the correct task and output,

  • train the model using the trainer's fit() method.

The picture below shows the above sections in an example training script.
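For orientation, here is a minimal sketch of what such a script can look like. All module, class, and parameter names in it are illustrative assumptions, not the exact BaseModel API; treat the training script template shipped with your installation as the authoritative starting point.

```python
# Illustrative sketch only: module, class, and parameter names are hypothetical
# placeholders, not the actual BaseModel API. Adapt the official training
# script template instead of copying this verbatim.

from basemodel_sdk import load_from_foundation_model        # hypothetical import
from basemodel_sdk.tasks import BinaryClassificationTask    # hypothetical task class

# Locations: source Foundation Model and output directory for the scenario model.
FOUNDATION_MODEL_DIR = "/app/models/foundation"
SCENARIO_MODEL_DIR = "/app/models/scenario_churn"


def churn_target(entity):
    """Target function: steers the loss calculation during fine-tuning.
    Hypothetical example: label an entity as positive if it churned."""
    return entity.has_churned


# Instantiate the trainer: load the Foundation Model, set the task and output.
trainer = load_from_foundation_model(
    checkpoint_dir=FOUNDATION_MODEL_DIR,
    task=BinaryClassificationTask(target_fn=churn_target),
    output_dir=SCENARIO_MODEL_DIR,
    # Training parameters (epochs, learning rate, ...) could be adjusted here if required.
)

# Fine-tune the Foundation Model into the scenario model.
trainer.fit()
```

Such a script could then be launched either by calling its training function from Python or from the command line inside the container, e.g. `python train_scenario.py` (the file name is an assumption).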

Given the breadth of the material, it has been divided into two sections:

  1. Defining the Task and the Target:

    • Identify the machine learning problem:
      Determine the machine learning problem that aligns with the business objective.
    • Define the target function:
      Specify the function that will guide the model's optimization process.

  2. Setting Up the Learning Process:

    • Select the pre-trained Foundation Model:
      Point the script to the directory containing the features of the pre-trained model.
    • Configure the modelling task:
      Set up the downstream task and, if necessary, adjust the training parameters.
    • Instantiate the trainer:
      Create an instance of the trainer and, if needed, modify the loading process.

Follow the links above, or proceed to the following articles, for detailed explanations of these tasks.