Changelog

Release 0.20

This technical release focuses on stability and model robustness improvements.

New features

  • Custom metrics support
    Users can now define and register custom evaluation metrics on downstream models. Custom metrics are fully compatible with torchmetrics, are checked for duplicate names at registration, and are applied consistently across the training, validation, and monitoring phases. A sketch follows below.
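
As an illustration of the new feature, here is a minimal sketch of a custom metric built on the torchmetrics Metric base class. The register_metric call shown in the trailing comment is a hypothetical entry point for illustration, not the library's confirmed API.

    import torch
    from torchmetrics import Metric

    class MeanError(Metric):
        """Toy custom metric built on the torchmetrics Metric base class."""

        def __init__(self):
            super().__init__()
            # Distributed-safe accumulators provided by torchmetrics.
            self.add_state("total", default=torch.tensor(0.0), dist_reduce_fx="sum")
            self.add_state("count", default=torch.tensor(0), dist_reduce_fx="sum")

        def update(self, preds: torch.Tensor, target: torch.Tensor) -> None:
            self.total += torch.sum(torch.abs(preds - target))
            self.count += target.numel()

        def compute(self) -> torch.Tensor:
            return self.total / self.count

    # Hypothetical registration on a downstream model; the duplicate-name
    # check would reject a second metric registered under "mean_error".
    # model.register_metric("mean_error", MeanError())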

Improvements

  • Improved stability and speed of real-time inference
    Optimized the inference server interface and pipeline for more stable initialization, better resource utilization, and faster CI execution.

  • Improved numerical stability of training on highly multimodal data
    Enhanced buffer sampling and shuffling to ensure better coverage of training examples, smoother convergence, and improved overall training stability. A sketch of the shuffle-buffer idea appears after this list.

  • Improved regression and classification predictions
    Revised the random splitting strategy and applied a uniform mixture to raw scores, leading to more balanced score distributions and reduced training bias. A sketch of the mixture step appears after this list.
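
The buffer change in the multimodal-training item above can be pictured as a shuffle buffer; the sketch below is a generic illustration with assumed names, not the library's internals.

    import random
    from typing import Iterable, Iterator, TypeVar

    T = TypeVar("T")

    def shuffle_buffer(stream: Iterable[T], buffer_size: int, seed: int = 0) -> Iterator[T]:
        """Yield items in near-random order through a bounded in-memory buffer.

        A larger buffer mixes the stream more thoroughly (better coverage of
        training examples) at the cost of memory.
        """
        rng = random.Random(seed)
        buf: list[T] = []
        for item in stream:
            buf.append(item)
            if len(buf) >= buffer_size:
                yield buf.pop(rng.randrange(len(buf)))
        rng.shuffle(buf)
        yield from buf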
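
The uniform-mixture step in the predictions item above blends raw scores with a uniform distribution over classes; the epsilon value and tensor shapes below are illustrative assumptions.

    import torch

    def mix_with_uniform(raw_scores: torch.Tensor, eps: float = 0.1) -> torch.Tensor:
        """Blend class probabilities with a uniform distribution.

        p_mixed = (1 - eps) * p + eps / K pulls extreme scores toward the
        mean, yielding more balanced distributions and less training bias.
        """
        probs = torch.softmax(raw_scores, dim=-1)
        num_classes = probs.shape[-1]
        return (1.0 - eps) * probs + eps / num_classes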

Fixes

  • Sketch width and optimal depth alignment
    Aligned the sketch width with the optimal depth, preventing potential crashes caused by conflicting sketch dimensions at certain class counts.

  • Date parsing for time series not working with formatted data
    Fixed an issue where date columns in time-series data were not parsed or sanitized when a date format was provided. A sketch of the intended behavior appears at the end of this section.

  • Invalid one-hot recommendation metrics for low number of candidates
    Prevented crashes in OneHotRecommendationTask models when the candidate pool was smaller than k. A sketch of the guard appears at the end of this section.

  • Text embedding stability
    Fixed occasional crashes during text model training under specific edge conditions.

  • Short time-series handling
    Resolved a crash that occurred when the time-series length was shorter than the kernel size. A sketch of one possible guard appears at the end of this section.

  • Insufficient training data handling
    Introduced a graceful exit with a clear, informative message when the available data is insufficient to fill a full batch across all selected devices. A sketch of the check appears at the end of this section.
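
For the date-parsing fix, the intended behavior amounts to honoring an explicit format while sanitizing unparseable rows; a sketch assuming pandas, with illustrative names.

    import pandas as pd

    def parse_date_column(df: pd.DataFrame, column: str,
                          date_format: str | None = None) -> pd.DataFrame:
        """Parse a date column, honoring an explicit format when provided."""
        df = df.copy()
        df[column] = pd.to_datetime(df[column], format=date_format, errors="coerce")
        # Sanitize: drop rows whose dates failed to parse instead of keeping NaT.
        return df.dropna(subset=[column])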
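
For the one-hot recommendation fix, a common guard is clamping k to the candidate pool size; this sketch illustrates the guard, not the library's exact change.

    import torch

    def top_k_values(scores: torch.Tensor, k: int) -> torch.Tensor:
        """Select top-k scores, clamping k when the candidate pool is smaller."""
        k = min(k, scores.shape[-1])  # avoid crashing when candidates < k
        return torch.topk(scores, k=k).values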
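
For the short time-series fix, one possible guard is padding inputs up to the kernel size; the zero-padding choice below is an assumption for illustration.

    import torch
    import torch.nn.functional as F

    def pad_to_kernel(series: torch.Tensor, kernel_size: int) -> torch.Tensor:
        """Left-pad a (batch, channels, length) tensor so a conv kernel fits."""
        shortfall = kernel_size - series.shape[-1]
        if shortfall > 0:
            series = F.pad(series, (shortfall, 0))  # zeros at the start
        return series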
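
For the insufficient-data fix, the graceful exit boils down to a fail-fast check before training starts; the names and exception type below are illustrative.

    def check_enough_data(num_examples: int, batch_size: int, num_devices: int) -> None:
        """Fail fast with an informative message instead of crashing mid-run."""
        required = batch_size * num_devices
        if num_examples < required:
            raise ValueError(
                f"Found {num_examples} training examples, but {required} are "
                f"needed to fill one batch of size {batch_size} on each of "
                f"{num_devices} device(s). Add data or lower the batch size."
            )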