
Snowflake Snowpark Container Services data sources

Connection parameters in YAML configuration file

⚠️

Check This First!

This article refers to BaseModel accessed via a Docker container. If you are using BaseModel as a Snowflake GUI application, please refer to the Snowflake Native App section instead.


Various data sources are specified in the YAML file used by the pretrain function and configured by the entries in the data_location section. Below is an example configuration for Snowpark Container Services that should be adapted to your setup.

📘

This section describes the connection used when training is performed as a job on a Snowflake Snowpark container. To perform training in a different environment while still using data from a Snowflake database, please refer to Snowflake data sources.

data_location:
  database_type: snowflake_scs
  connection_params:
    warehouse: warehouse_name
    database: example_db
    schema: example_schema
  table_name: some_table
Parameters
  • database_type : str, required No default value.
    Information about the database type or source file. All data tables should be stored in the same database type. Set to: snowflake_scs.
  • connection_params : dict, required Configures the connection to the database. For Snowflake, its keyword arguments are:
    • warehouse : str, required No default value.
      Specifies the virtual warehouse to use once connected, or specifies an empty string. The specified warehouse should be an existing warehouse for which the specified default role has privileges. An environment variable can be referenced. Examples: "warehouse_name", "${SNOWFLAKE_WAREHOUSE}"
    • database : str, required No default value.
      Specifies the default database to use once connected, or specifies an empty string. The specified database should be an existing database for which the specified default role has privileges. Examples: "example_db"
    • schema : str, required No default value.
      Specifies the default schema to use for the specified database once connected, or specifies an empty string. The specified schema should be an existing schema for which the specified default role has privileges. Examples: "public"
    • authenticator : Literal["oauth"], optional Default: "oauth"
      Specifies the authentication method; always set to 'oauth' for this configuration.
  • table_name : str
    Specifies the table to use to create features. Example: customers.

The connection_params should be set separately in each data_location block, for each data source.
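As an illustration only (this helper is hypothetical, not part of BaseModel's public API), validating that each data_location block carries its own complete connection_params could be sketched like this:

```python
# Hypothetical helper: checks that a data_location block has the required
# connection_params keys and applies the "oauth" authenticator default
# used on Snowpark Container Services.
REQUIRED_KEYS = ("warehouse", "database", "schema")

def validate_data_location(data_location: dict) -> dict:
    params = dict(data_location.get("connection_params", {}))
    missing = [k for k in REQUIRED_KEYS if not params.get(k)]
    if missing:
        raise ValueError(f"connection_params is missing: {', '.join(missing)}")
    params.setdefault("authenticator", "oauth")
    return params

params = validate_data_location({
    "database_type": "snowflake_scs",
    "connection_params": {
        "warehouse": "warehouse_name",
        "database": "example_db",
        "schema": "example_schema",
    },
    "table_name": "some_table",
})
print(params["authenticator"])  # the default is applied
```

Because each data source may live in a different warehouse, database, or schema, each block is validated independently.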

🚧

Note

For security reasons, avoid providing credentials and other Snowflake connection variables directly in the code; instead, set them as environment variables and reference them, as in the example below.
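A YAML value such as "${SNOWFLAKE_WAREHOUSE}" is resolved from the environment. Purely as an illustration, the same substitution can be sketched with Python's standard library:

```python
import os

# Set the connection variable in the environment (in practice this is done
# in the container or job definition, not in application code).
os.environ["SNOWFLAKE_WAREHOUSE"] = "warehouse_name"

# A "${VAR}" placeholder expands to the variable's value.
warehouse = os.path.expandvars("${SNOWFLAKE_WAREHOUSE}")
print(warehouse)  # warehouse_name
```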

Example

The following example demonstrates the connection to Snowflake in the context of a simple configuration with two data sources.

data_sources:
  - type: main_entity_attribute
    main_entity_column: UserID
    name: customers
    data_location:
      database_type: snowflake_scs
      connection_params:
        warehouse: ${SNOWFLAKE_WAREHOUSE}
        database: EXAMPLE_DB
        schema: EXAMPLE_SCHEMA
      table_name: customers
    disallowed_columns: [CreatedAt]
  - type: event
    main_entity_column: UserID
    name: purchases
    date_column:
      name: Timestamp
    data_location:
      database_type: snowflake_scs
      connection_params:
        warehouse: ${SNOWFLAKE_WAREHOUSE}
        database: EXAMPLE_DB
        schema: EXAMPLE_SCHEMA
      table_name: purchases
    where_condition: "Timestamp >= today() - 365"
    sql_lambdas:
      - alias: price_float
        expression: "TO_DOUBLE(price)"
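To see how where_condition and sql_lambdas shape the data that is read, here is a hypothetical sketch of the SELECT statement such a source could translate to (BaseModel's actual query generation may differ):

```python
def build_select(table_name, where_condition=None, sql_lambdas=()):
    # Each sql_lambda contributes a derived column under its alias;
    # where_condition filters the rows that are read.
    columns = ["*"] + [f"{lam['expression']} AS {lam['alias']}" for lam in sql_lambdas]
    query = f"SELECT {', '.join(columns)} FROM {table_name}"
    if where_condition:
        query += f" WHERE {where_condition}"
    return query

query = build_select(
    "purchases",
    where_condition="Timestamp >= today() - 365",
    sql_lambdas=[{"alias": "price_float", "expression": "TO_DOUBLE(price)"}],
)
print(query)
# SELECT *, TO_DOUBLE(price) AS price_float FROM purchases WHERE Timestamp >= today() - 365
```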

For more details about the Snowflake Connector for Python, please refer to the Snowflake documentation.

📘

Learn more

A detailed description of optional fields such as disallowed_columns, where_condition, sql_lambdas, and many others is provided here.