lmflow.pipeline.utils.peft_trainer#

Trainer for PEFT models

Classes#

PeftTrainer

Trainer for PEFT models

PeftSavingCallback

Correctly save the PEFT model (adapter weights) rather than the full model

Module Contents#

class lmflow.pipeline.utils.peft_trainer.PeftTrainer[source]#

Bases: transformers.Trainer

_save_checkpoint(_, trial, metrics=None)[source]#

Don’t save the base model, optimizer state, etc., but do create the checkpoint folder (needed for saving the adapter)
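
A minimal sketch of how such an override can look, assuming the trainer wraps a peft.PeftModel; the exact LMFlow body may differ. PREFIX_CHECKPOINT_DIR comes from transformers.trainer_utils, and writing the checkpoint directly under args.output_dir is a simplifying assumption:

```python
import os

from transformers import Trainer
from transformers.trainer_utils import PREFIX_CHECKPOINT_DIR


class PeftTrainer(Trainer):
    def _save_checkpoint(self, _, trial, metrics=None):
        """Skip the base model and optimizer state; create the checkpoint
        folder and write only the PEFT adapter into it."""
        checkpoint_folder = f"{PREFIX_CHECKPOINT_DIR}-{self.state.global_step}"
        output_dir = os.path.join(self.args.output_dir, checkpoint_folder)
        os.makedirs(output_dir, exist_ok=True)
        # save_pretrained on a peft.PeftModel writes only the adapter
        # weights and config, not the full base-model weights
        self.model.save_pretrained(output_dir)
```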

class lmflow.pipeline.utils.peft_trainer.PeftSavingCallback[source]#

Bases: transformers.trainer_callback.TrainerCallback

Correctly save the PEFT model (adapter weights) rather than the full model

_save(model, folder)[source]#

on_train_end(args: transformers.training_args.TrainingArguments, state: transformers.trainer_callback.TrainerState, control: transformers.trainer_callback.TrainerControl, **kwargs)[source]#

Save the final best model adapter

on_epoch_end(args: transformers.training_args.TrainingArguments, state: transformers.trainer_callback.TrainerState, control: transformers.trainer_callback.TrainerControl, **kwargs)[source]#

Save intermediate model adapters in case training is interrupted

on_save(args: transformers.training_args.TrainingArguments, state: transformers.trainer_callback.TrainerState, control: transformers.trainer_callback.TrainerControl, **kwargs)[source]#
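
A minimal sketch of the callback, assuming the trained model is a peft.PeftModel passed to the hooks via kwargs; the on_save body and the epoch/checkpoint folder names here are illustrative assumptions, not the exact LMFlow implementation:

```python
import os

from transformers import TrainerCallback


class PeftSavingCallback(TrainerCallback):
    """Save the PEFT adapter instead of the full model at key training events."""

    @staticmethod
    def _save(model, folder):
        # save_pretrained on a peft.PeftModel writes only the adapter
        # weights and config into `folder`
        model.save_pretrained(folder)

    def on_train_end(self, args, state, control, **kwargs):
        # Save the final best model adapter once training finishes.
        self._save(kwargs["model"], args.output_dir)

    def on_epoch_end(self, args, state, control, **kwargs):
        # Save an intermediate adapter so an interrupted run loses little work.
        folder = os.path.join(args.output_dir, f"epoch_{int(state.epoch)}")
        self._save(kwargs["model"], folder)

    def on_save(self, args, state, control, **kwargs):
        # Mirror the adapter into the checkpoint folder created by the Trainer
        # (hypothetical behavior; the source leaves on_save undocumented).
        checkpoint = os.path.join(args.output_dir, f"checkpoint-{state.global_step}")
        self._save(kwargs["model"], checkpoint)
```

A typical pairing of the two classes (peft_model, training_args, and train_dataset are placeholder names):

```python
trainer = PeftTrainer(
    model=peft_model,                  # a peft.PeftModel wrapping the base model
    args=training_args,
    train_dataset=train_dataset,
    callbacks=[PeftSavingCallback()],
)
trainer.train()
```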