Build your first scenario!

In this section you will train your first Downstream Model.

In this exercise, we will focus on a propensity model, which is a multi-label classification task.

Step 1 - Create a Downstream Task

After you log into BaseModel and navigate to Scenarios, you will see a list of all models created so far. More details about this screen and navigation can be found here.

  1. Click the Create Scenario button in the upper right-hand corner. You will now see an interface with a few building blocks:
    1. Basic Settings - where you will select the Foundation Model to use
    2. Target Function - where you will define what the model should predict
    3. Audience Filter - where you will limit the entities used for the downstream task
    4. Schedule - where you will define when training should start

Step 2 - Define Basic Settings

  1. In the first block - Basic Settings - select the Foundation Model that you wish to use and the type of prediction task. Please refer to the documentation for more information about the types of prediction tasks.

  2. Select the Quickstart Foundation Model that we trained in the previous step.

  3. Select Classification - Multilabel from the drop-down list. Your configuration should look like this:

  4. Click Apply in the upper right-hand corner.

Step 3 - Prepare the Target Function

The next step is to prepare the target function. This step is exactly the same as for the Docker version. There are two important resources for this section:

  1. Recipes - a collection of target function use cases with a step-by-step explanation of each line of code
  2. The Modeling Target Function section of the main documentation.

For the purpose of this exercise:

  1. Copy and paste the following code (a sketch of the target it produces follows this list):

    def propensity_target_fn(_history: Events, future: Events, _entity: Attributes, _ctx: Dict) -> torch.Tensor:
        # Product groups for which the model will predict purchase propensity.
        TARGET_NAMES = [
            "Garment Upper body",
            "Garment Lower body",
            "Garment Full body",
            "Accessories",
            "Underwear",
            "Shoes",
            "Swimwear",
            "Socks & Tights",
            "Nightwear",
        ]
        # Fully qualified reference to the product_group_name column in the articles data source.
        TARGET_ENTITY = get_qualified_column_name(column_name="product_group_name", data_sources_path=["articles"])

        # For each entity, mark whether any future transaction falls into each
        # product group, yielding one binary label per group.
        purchase_target, _ = future["transactions"].groupBy(TARGET_ENTITY).exists(groups=TARGET_NAMES)
        return purchase_target
    
    
  2. Click Apply
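
To make the shape of this target concrete, here is a small illustrative sketch in plain PyTorch (not the BaseModel API; the purchases are hypothetical). For a customer whose future window contains purchases in the "Underwear" and "Shoes" product groups, the target is a binary vector aligned with TARGET_NAMES:

    import torch

    TARGET_NAMES = [
        "Garment Upper body", "Garment Lower body", "Garment Full body",
        "Accessories", "Underwear", "Shoes", "Swimwear",
        "Socks & Tights", "Nightwear",
    ]

    # Hypothetical future purchases for a single customer.
    future_groups = {"Underwear", "Shoes"}

    # One binary label per product group: 1.0 if any future purchase
    # falls into that group, 0.0 otherwise.
    target = torch.tensor([float(name in future_groups) for name in TARGET_NAMES])
    print(target)  # tensor([0., 0., 0., 0., 1., 1., 0., 0., 0.])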

Step 4 - Define Audience and Schedule

Next, we could refine the audience in the Audience Filter block. For this tutorial we skip this option, as we want to train our model on all existing data.

  1. Finally, we need to define the schedule. Select One-time training and Start immediately in the options, then click Apply.
  2. Now you are ready to hit the Run Model button and start training your first Downstream Model!

Step 5 - Generate Predictions

In this step we will configure the Scoring section, which allows you to get specific predictions from the Downstream Model.

  1. Continuing from the screen where we left off after running the model, we configure the Scoring Audience and Scoring Schedule. We skip the Scoring Audience in this tutorial, as we want to calculate propensity scores for the whole population that we model.

  2. Scoring Schedule - similar to the Foundation Model and training schedules, we will select one-time scoring starting immediately. The final screen will look like this:

Step 6 - Save Predictions to a Snowflake Table

The final step is to create a new table and store predictions there:

  1. Click the Create New button in the Scoring Output section
  2. Select the HM_KAGGLE database
  3. Select the PUBLIC schema
  4. Name your results, e.g., "results_propensity"
  5. Click Apply

The setup should look like this:

Final Step - Run All

Click the Run Scenario button in the upper right-hand corner.
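
Once the scenario finishes, the predictions will land in the table you created in Step 6. As a quick sanity check, you can query that table with the snowflake-connector-python package - a minimal sketch, assuming the "results_propensity" table name from this tutorial and placeholder credentials:

    import snowflake.connector

    # Connection parameters are placeholders - substitute your own account details.
    conn = snowflake.connector.connect(
        account="<your_account>",
        user="<your_user>",
        password="<your_password>",
        database="HM_KAGGLE",
        schema="PUBLIC",
    )
    try:
        cur = conn.cursor()
        # Preview a few rows of the propensity scores written by the scenario.
        cur.execute("SELECT * FROM results_propensity LIMIT 5")
        for row in cur.fetchall():
            print(row)
    finally:
        conn.close()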