Metrics

This article lists the predefined metrics that can be used during model training and testing.

Metrics are passed by name via the metrics parameter in TrainingParams and TestingParams. For more details, see Fine-tuning training parameters and Customizing testing metrics.

| Metric name | Task | Description |
| --- | --- | --- |
| MultipleTargetsRecall | Multiclass Classification | The fraction of correct class predictions. The result is a single number. |
| MultipleTargetsRecallPerClass | Multiclass Classification | The number of correctly predicted positive instances divided by the total number of actual positive instances for a given class, yielding one recall value per class. |
| PrecisionAtK | Recommendations | The fraction of the top K predicted items that are relevant, computed for each instance and averaged over the dataset. |
| MeanAveragePrecisionAtK | Recommendations | The average of the precision values at the positions of relevant items within the top K results, averaged over all instances. Higher scores are given when relevant items appear earlier in the ranking. |
| HitRateAtK | Recommendations | The fraction of cases in which the top K predictions contain at least one relevant item. |
| MeanReciprocalRank | Recommendations | A ranking metric that measures how early the first relevant item appears in a list of recommendations, defined as the average reciprocal rank of the first relevant item across all entities (e.g. customers). Entities with no relevant item contribute zero. |
| NDCGAtK | Recommendations | A measure of how well the top K results are ordered: relevant items appearing earlier in the list are rewarded, and the score is normalized against the ideal ranking so values lie between 0 and 1. |
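To make the two classification metrics concrete, here is a minimal Python sketch of their definitions. The function names are hypothetical and this is not the library's actual implementation; it only illustrates how overall recall and per-class recall relate:

```python
from collections import defaultdict

def recall(y_true, y_pred):
    # Overall recall: fraction of instances whose predicted class
    # matches the true class (a single number).
    correct = sum(t == p for t, p in zip(y_true, y_pred))
    return correct / len(y_true)

def recall_per_class(y_true, y_pred):
    # Per-class recall: correctly predicted positives divided by
    # actual positives, computed separately for each class.
    totals = defaultdict(int)
    hits = defaultdict(int)
    for t, p in zip(y_true, y_pred):
        totals[t] += 1
        if t == p:
            hits[t] += 1
    return {c: hits[c] / totals[c] for c in totals}
```

For example, with true labels `["a", "a", "b"]` and predictions `["a", "b", "b"]`, overall recall is 2/3 while per-class recall is 0.5 for class "a" and 1.0 for class "b".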
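The recommendation metrics above can be sketched per entity, assuming binary relevance. These are illustrative definitions with hypothetical names, not the library's API; the dataset-level metric is the mean of these per-entity scores (and the MAP@K denominator convention, min(|relevant|, K), is an assumption):

```python
import math

def precision_at_k(recommended, relevant, k):
    # Fraction of the top-k recommendations that are relevant.
    return sum(item in relevant for item in recommended[:k]) / k

def hit_rate_at_k(recommended, relevant, k):
    # 1.0 if any of the top-k recommendations is relevant, else 0.0.
    return float(any(item in relevant for item in recommended[:k]))

def reciprocal_rank(recommended, relevant):
    # 1 / rank of the first relevant item; 0.0 if none is relevant.
    for rank, item in enumerate(recommended, start=1):
        if item in relevant:
            return 1.0 / rank
    return 0.0

def average_precision_at_k(recommended, relevant, k):
    # Mean of precision@i over the positions i where a relevant item
    # appears in the top k; rewards relevant items ranked earlier.
    hits, score = 0, 0.0
    for i, item in enumerate(recommended[:k], start=1):
        if item in relevant:
            hits += 1
            score += hits / i
    denom = min(len(relevant), k)  # assumed normalization convention
    return score / denom if denom else 0.0

def ndcg_at_k(recommended, relevant, k):
    # DCG of the top-k list normalized by the ideal DCG, so the
    # result lies between 0 and 1 (binary relevance assumed).
    dcg = sum(1.0 / math.log2(i + 1)
              for i, item in enumerate(recommended[:k], start=1)
              if item in relevant)
    ideal = sum(1.0 / math.log2(i + 1)
                for i in range(1, min(len(relevant), k) + 1))
    return dcg / ideal if ideal else 0.0
```

For instance, with recommendations `["a", "b", "c", "d"]` and relevant items `{"b", "d"}`, the reciprocal rank is 0.5 because the first relevant item appears at rank 2.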