Features

  • Grouped Decimal Features in Interpretability
    Introduced the ability to handle and analyze grouped decimal features, enhancing model interpretability by offering more granular insights into feature behavior.

  • Event Attributions to interpret recommendation models
    With event attributions, users can now trace back and understand how specific events influence model outputs and predictions.

  • Prediction Storage in Snowflake Database
    Added functionality to save predictions directly into a Snowflake database, simplifying data integration and storage workflows.

  • Data Source Name in Minimum Group Size Logs
    Added logging of the data source name when enforcing minimum group size requirements, making it easier to identify and troubleshoot group size issues.

  • Join Functionality for Attribute Data Sources (enhanced)
    Expanded support to allow joining attribute data sources with multiple data sources, enabling richer and more complex data merging capabilities.

  • Filtering on Extra Columns in Data Source Definition
    Users can now filter, group, and leverage extra columns passed in the data source definition for audience building purposes. Previously, these columns were restricted to BaseModel for Foundation Model training.

  • New Parameter in DataParams: training_end_date
    Introduced the training_end_date parameter, which defaults to validation_start_date - 1, providing more flexibility and control over model training timelines (a configuration sketch follows this list).

  • New Parameters in TestingParams: local_save_location, remote_save_location
    Introduced local_save_location and remote_save_location as parameters within TestingParams, replacing the previous save_path to make it clearer where test results are saved (see the example after this list).

    🚧 Note: Please adapt your configuration file to reflect this syntax change.

  • Extended Group Max Retries
    The default values for group computation retries and the retry interval have been increased: GROUPS_N_RETRIES now defaults to 20 and GROUPS_RETRY_INTERVAL to 60. Allowing more time for the computation reduces the likelihood of failures due to transient issues and improves overall robustness. For more information, see num_groups in the Dividing event tables section.

  • Entity Number Limit for Target Function Validation
    The number of entities used when validating target functions is now capped to ensure efficiency and prevent overload during the validation process.

  • Enhanced Debug Messages for Target Function Validation
    More comprehensive debug messages have been added during target function validation to assist in troubleshooting and increase transparency in the validation process.
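
  The following is a minimal, hypothetical config.yaml sketch for the new training_end_date parameter described above. Only DataParams, training_end_date, and validation_start_date come from this release note; the section name, layout, and dates are illustrative assumptions.

    # Hypothetical config.yaml fragment -- the data_params section name is assumed.
    data_params:
      validation_start_date: "2024-06-01"
      # New parameter; assumed to default to validation_start_date - 1 when omitted.
      training_end_date: "2024-05-31"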
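
  A hedged sketch of the TestingParams syntax change noted above. local_save_location, remote_save_location, and save_path come from this release note; the section name, paths, and remote URI are illustrative assumptions.

    # Hypothetical config.yaml fragment -- the testing_params section name is assumed.
    testing_params:
      # save_path: /tmp/test_results                     # previous parameter, now replaced
      local_save_location: /tmp/test_results              # assumed local path
      remote_save_location: s3://example-bucket/results   # assumed remote location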

Fixes

  • Fixed None value causing issues in grouping.
  • Fixed regression loss calculation and logging.
  • Fixed an error when a pandas query could not parse groups from joined data sources.
  • Made the Neptune alerter log system metrics to a common namespace.
  • Removed unused validation for the main entity attribute data source & unused loss function.
  • Converted sparse batch to dense in interpretability.
  • Fixed handling of metrics not found in Neptune.
  • Fixed memory consumption by replacing list construction with a generator.
  • Created directory based on cache path.
  • Added schema to columns selection in Hive builder.
  • Handled potential NaNs in decimal calculator.

Docs

  • Updated the documentation navigation to be more readable and user-friendly.
  • Added Recipes section for easy reference when building target functions.

Features

  • Add max groups to event data source config
  • Support grouping for decimal modality
  • Implement groups for feature stats

Fixes

  • Align predict between classification and recommendation
  • Allow loss weighting in multiclass classification
  • Make ClickHouse dialect provider support nullable columns
  • Max splitpoints set and logging configuration
  • Fix interpretability for decimal features

Features

  • Add dimension checks in interpretability
  • Allow to set maximum percentage of nulls in a column
  • Forbid undefined fields in configs
  • Handle None as entity_id in parquets
  • Create new data source definition.
  • New config.yaml design.
  • Enable caching queried data
  • Handle duplicated column names when joining tables.
  • Support parquet data source.
  • Fix monad metrics
  • Validate allowed_columns.
  • Support lambdas at config level
  • Implement mechanism for metric initialization
  • Add joins to benchmarking configs
  • Cast main_entity_id to string
  • Validate columns uniqueness
  • Allow defining lambdas in extra columns
  • Add recommendations to interpretability
  • Set max number of expressions via environment variable
  • Verify if data source name contains any forbidden sequences

Fixes

  • Add recency modality slices to feature value interpretability
  • Allow join_on column in select
  • Allow None value for limit_train_batches
  • Always use stored config at pretraining phase
  • Changed defaults for loader params
  • Check data source type before accessing date column
  • Fix Recommendation model
  • Fix to date parsing in Hive
  • Make Snowflake config work with new setup
  • Use alias and table name correctly
  • Fix metrics in training params
  • Append suffix to WITH clause alias
  • Fix detecting cyclic joins.

0.6.0 (2024-04-23)

Features

  • Add BM colors to interpretability plot
  • Add interpret function for use in scripts
  • Add methods for weighting training examples
  • Adjust hive to use ini files
  • Enable setting 'ignore_entities_without_events' flag.
  • Extract queries from connectors
  • Create common mechanism for query execution
  • Refactor query builders
  • Add treemap visualization
  • Add treemap generation from predefined hierarchy
  • Replace sampling method with actual sampling
  • Make attribution average optional.
  • Introduce Python 3.11
  • Add regression task to interpretability
  • Support training resuming
  • Create chunks based on partition column
  • Support booleans in fit stage

Fixes

  • Add quotation marks around table names in dialect providers
  • Add quotation marks to entity ids subquery
  • Add reset method to LongCastingMetric
  • Add return statement to FM get trainable module
  • Cast Hive decimal columns to float
  • Change cache dir type
  • Fix id info parsing
  • Handle empty iterator while caching
  • Hash sketches hashing function and tests
  • Add options to change interpretability sample size
  • Fix time shift when caching datetimes.
  • Handle decimal types in Hive training iterator.
  • Fix ignore_entities_without_events flag
  • Fix combining tiles with the same name and different id
  • Catch prediction on None object and fix runtime threshold
  • Remove dask-ml, bump ray, use compatible dask version
  • Set enable_checkpointing flag according to the callbacks setup
  • Small fix in one-hot-encoders
  • Stop logging warnings for uppercase unquoted columns in Snowflake

0.5.0 (2024-01-18)

Features

  • Add interpretability
  • Add logging column names
  • Add resume option for columns & fix minor bug related to text columns processing
  • Add target filtering to the inference module.
  • Use PyODBC for connecting with Hive.

Fixes

  • Fix chunking in Hive queries
  • Convert max num columns to int
  • Fix cleora circular dependency imports