Databricks
Connection parameters in the YAML configuration file
Note
This article refers to BaseModel accessed via a Docker container. If you are using BaseModel as a Snowflake GUI application, please refer to the Snowflake Native App section.
Various data sources are specified in the YAML file used by the pretrain function and are configured by the entries in the data_location section. Below is an example that should be adapted to your configuration.
data_location:
  database_type: databricks
  connection_params:
    server_hostname: some_host_name
    http_path: some_path
    access_token: some_access_token
  table_name: some_table
Parameters
- database_type : str, required
No default value.
Information about the database type or source file. All data tables should be stored in the same type of database.
Set to: databricks.
- connection_params : dict, required
Configures the connection to the database.
For Databricks, the keyword arguments are as follows (a sketch showing the optional keys in context appears after this parameter list):
- server_hostname : str, required
No default value.
The Server Hostname value for your cluster or SQL warehouse.
- http_path : str, required
No default value.
The HTTP Path value for your cluster or SQL warehouse.
- access_token : str, required
No default value.
Databricks personal access token.
- session_configuration : dict[str, Any], optional
default=None
A dictionary of Spark session configuration parameters, passed through to the connection.
- http_headers : list[tuple[str, str]], optional
default=None
Additional (key, value) pairs to set in HTTP headers on every RPC request the client makes.
- catalog : str, optional
default=None
Initial catalog to use for the connection.
- db_schema : str, optional
default="default"
Initial schema to use for the connection.
- table_name : str, required
No default value.
Specifies the table to use to create features.
Example: customers.
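The optional keys go in the same connection_params block as the required ones. Below is a minimal sketch with placeholder values; the session_configuration entry and the list-of-pairs form of http_headers are assumptions about how these fields are written in YAML, so adapt them to your setup.
data_location:
  database_type: databricks
  connection_params:
    server_hostname: some_host_name
    http_path: some_path
    access_token: some_access_token
    catalog: some_catalog            # optional; initial catalog for the connection
    db_schema: default               # optional; initial schema for the connection
    session_configuration:           # optional; assumed example of a Spark session parameter
      spark.sql.session.timeZone: UTC
    http_headers:                    # optional; (key, value) pairs sent on every RPC request
      - [X-Custom-Header, some_value]
  table_name: some_table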
The connection_params should be set separately in each data_location block, for each data source.
Note
For security reasons, avoid providing the access token and other Databricks connection variables directly in the configuration file; instead, set them as environment variables and reference them as in the example below.
Example
The following example demonstrates connecting to Databricks in a simple configuration with two data sources.
data_sources:
  - type: main_entity_attribute
    main_entity_column: UserID
    name: customers
    data_location:
      database_type: databricks
      connection_params:
        server_hostname: ${DATABRICKS_SERVER_HOSTNAME}
        http_path: ${DATABRICKS_HTTP_PATH}
        access_token: ${DATABRICKS_TOKEN}
      table_name: customers
    disallowed_columns: [CreatedAt]
  - type: event
    main_entity_column: UserID
    name: purchases
    date_column: Timestamp
    data_location:
      database_type: databricks
      connection_params:
        server_hostname: ${DATABRICKS_SERVER_HOSTNAME}
        http_path: ${DATABRICKS_HTTP_PATH}
        access_token: ${DATABRICKS_TOKEN}
      table_name: purchases
    where_condition: "Timestamp >= today() - 365"
    sql_lambdas:
      - alias: price_float
        expression: "TO_DOUBLE(price)"
For more details about the Python connector to Databricks, please refer to the Databricks documentation.
Note
The detailed description of optional fields such as disallowed_columns, where_condition, sql_lambdas, and many others is provided here.