Experiment tracking
Check This First!
This article refers to BaseModel accessed via a Docker container. If you are using BaseModel as a Snowflake GUI application, please refer to the Snowflake Native App section instead.
BaseModel lets you use PyTorch Lightning loggers to track training runs (see: Available loggers). You can use them to easily monitor and compare metrics and model parameters.
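Neptune is used in the examples below, but any other PyTorch Lightning logger can be constructed the same way and passed in via the pl_logger parameter described in the sections that follow. As a minimal sketch, assuming you prefer local TensorBoard logs, a TensorBoardLogger could be set up like this (the directory and run name are placeholders):
from pytorch_lightning.loggers import TensorBoardLogger

tensorboard_logger = TensorBoardLogger(
    save_dir="path/to/tb_logs",  # local directory for TensorBoard event files
    name="basemodel-pretraining",  # placeholder run name, adjust as needed
)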
The section below applies to foundation models
The example demonstrates how to integrate Neptune with foundation model training.
Example:
from pathlib import Path
from pytorch_lightning.loggers import NeptuneLogger
from monad.ui import pretrain

neptune_logger = NeptuneLogger(
    project="workspace-name/project-name",
    api_token="NEPTUNE_API_TOKEN",  # replace with your own
    log_model_checkpoints=False,  # optional
)

pretrain(
    config_path=Path("path/to/config.yaml"),
    output_path=Path("path/to/store/pretrain/artifacts"),
    pl_logger=neptune_logger,
)
The logger should be passed to the pretrain function via the pl_logger parameter.
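Hardcoding the API token works for quick tests, but you can also read it from the environment. A minimal sketch, assuming the token has been exported as the NEPTUNE_API_TOKEN environment variable; everything else matches the example above:
import os
from pytorch_lightning.loggers import NeptuneLogger

neptune_logger = NeptuneLogger(
    project="workspace-name/project-name",
    api_token=os.environ["NEPTUNE_API_TOKEN"],  # read the token from the environment
    log_model_checkpoints=False,  # optional
)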
The section below applies to downstream models
The example demonstrates how to integrate Neptune with downstream model training.
Example:
from pathlib import Path
from typing import Dict

import numpy as np
from pytorch_lightning.loggers import NeptuneLogger
from monad.ui.module import BinaryClassificationTask, load_from_foundation_model

def target_fn(history: Events, future: Events, entity: Attributes, ctx: Dict) -> np.ndarray:
    # Events and Attributes are BaseModel input types; see the monad API reference.
    ...

neptune_logger = NeptuneLogger(
    project="workspace-name/project-name",
    api_token="NEPTUNE_API_TOKEN",  # replace with your own
    log_model_checkpoints=False,  # optional
)

trainer = load_from_foundation_model(
    checkpoint_path=fm_path,  # path to the pretrained foundation model checkpoint
    downstream_task=BinaryClassificationTask(),
    target_fn=target_fn,
    pl_logger=neptune_logger,
)
The logger should be passed to the load_from_foundation_model function via the pl_logger parameter.
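Because the object passed as pl_logger is a standard PyTorch Lightning logger, you can also record additional run metadata yourself through the logger's log_hyperparams method. A minimal sketch; the dictionary keys below are hypothetical and are not logged by BaseModel itself:
neptune_logger.log_hyperparams({
    "experiment_name": "downstream-binary-classification",  # hypothetical key
    "foundation_model_checkpoint": str(fm_path),  # path used in the example above
})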