Build your first scenario!
In this section, you will train your first Downstream Model. The exercise focuses on a Propensity model, which is a multi-label classification task.
Step 1 - Creating a Downstream Task
Once you log into BaseModel and navigate to Scenarios, you will see a listing of all models created so far. More details about this screen and its navigation can be found here.
- Click the Create Scenario button in the upper right-hand corner. You will now see an interface with a few building blocks:
  - Basic Settings - where you will select the Foundation Model to use
  - Target Function - where you will define what the model should predict
  - Audience Filter - where you can limit the entities the model will use for the downstream task
  - Schedule - where you will define when the training shall commence
Step 2 - Define Basic Settings
- In the first block, Basic Settings, select the Foundation Model that you wish to use and the type of prediction task. Please refer to the documentation for more information about the types of prediction tasks.
- Select the Quickstart Foundation Model that we trained in the previous step.
- Select Classification - Multilabel from the drop-down list. Your configuration should look like this:
- Click Apply in the upper right-hand corner.
Step 3 - Prepare Target function
The next step is to prepare the target function. This step is exactly the same as for the Docker version. There are two important resources for this section:
- Recipes - a collection of target function use cases, with a step-by-step explanation of each line of code
- The Modeling Target Function section of the main documentation
For the purpose of this exercise:
- Copy-paste the following code (a plain-torch sketch of the target it produces appears after these steps):

```python
def propensity_target_fn(
    _history: Events, future: Events, _entity: Attributes, _ctx: Dict
) -> torch.Tensor:
    # Product groups the model will learn to predict purchases for.
    TARGET_NAMES = [
        "Garment Upper body",
        "Garment Lower body",
        "Garment Full body",
        "Accessories",
        "Underwear",
        "Shoes",
        "Swimwear",
        "Socks & Tights",
        "Nightwear",
    ]
    # Fully qualified column holding each article's product group.
    TARGET_ENTITY = get_qualified_column_name(
        column_name="product_group_name", data_sources_path=["articles"]
    )
    # One binary flag per product group: does any transaction in the
    # future window fall into that group?
    purchase_target, _ = (
        future["transactions"].groupBy(TARGET_ENTITY).exists(groups=TARGET_NAMES)
    )
    return purchase_target
```
- Click Apply.
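To make the target concrete, here is a minimal, self-contained sketch of the multi-hot encoding this function produces. It is purely illustrative and does not use any BaseModel internals; the example purchase set is made up.

```python
# Illustrative sketch only: mimics the multi-hot target that
# propensity_target_fn returns, using plain torch.
import torch

TARGET_NAMES = [
    "Garment Upper body", "Garment Lower body", "Garment Full body",
    "Accessories", "Underwear", "Shoes", "Swimwear",
    "Socks & Tights", "Nightwear",
]

# Hypothetical example: product groups one customer buys in the future window.
purchased = {"Shoes", "Socks & Tights"}

# One binary entry per target class: 1.0 if any future purchase
# falls into that product group, else 0.0.
target = torch.tensor([float(name in purchased) for name in TARGET_NAMES])
print(target)  # tensor([0., 0., 0., 0., 0., 1., 0., 1., 0.])
```

The model is then trained to predict, for each customer, one probability per product group.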
Step 4 - Define Audience and Schedule
Next, we can refine the audience in the Audience Filter block. For this tutorial we skip this option, as we want to train our model on all existing data.
- Finally, we need to define the schedule. Select One-time training and Start immediately in the options, then click Apply.
- Now you are ready to hit the Run Model button and start training your first Downstream Model!
Step 5 - Generate Predictions
In this step we will configure the Scoring section, which allows you to get specific predictions from the Downstream Model.
- Continuing from the screen where we left off after running the model, we configure the Scoring Audience and Scoring Schedule. We skip the Scoring Audience in this tutorial, as we want to calculate propensity scores for the whole population that we model.
- Scoring Schedule - similar to the Foundation Model and training schedules, we will select one-time scoring starting immediately. The final screen will look like this:
Step 6 - Save Predictions to Snowflake table
The final step is to create a new table and store predictions there:
- Click the Create New button in the Scoring Output section
- Select the HM_KAGGLE database
- Select the PUBLIC schema
- Name your results, e.g., "results_propensity"
- Click Apply
The setup should look like this:
Final Step - Run All
Click the Run Scenario button in the upper right-hand corner.
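Once the scenario finishes, you can sanity-check the stored predictions directly in Snowflake. The sketch below uses the snowflake-connector-python package; the connection parameters are placeholders, and this tutorial does not specify the column layout of the results table, so we simply peek at a few rows.

```python
# A minimal sketch, assuming scoring output was written to the
# HM_KAGGLE.PUBLIC.results_propensity table created above.
# Connection details are placeholders; adjust the identifier
# quoting/case to match how the table was actually created.
import snowflake.connector

conn = snowflake.connector.connect(
    account="<your_account>",
    user="<your_user>",
    password="<your_password>",
    database="HM_KAGGLE",
    schema="PUBLIC",
)
try:
    cur = conn.cursor()
    cur.execute('SELECT * FROM "results_propensity" LIMIT 10')
    for row in cur.fetchall():
        print(row)
finally:
    conn.close()
```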