
lmflow.pipeline.finetuner#

The Finetuner class simplifies running the finetuning process on a language model, for a TunableModel instance with a given dataset.

Module Contents#

Classes#

Finetuner

Initializes the Finetuner class with given arguments.

Attributes#

logger

lmflow.pipeline.finetuner.logger[source]#
class lmflow.pipeline.finetuner.Finetuner(model_args, data_args, finetuner_args, *args, **kwargs)[source]#

Bases: lmflow.pipeline.base_tuner.BaseTuner

Initializes the Finetuner class with given arguments.

Parameters:
model_args : ModelArguments object.

Contains the arguments required to load the model.

data_args : DatasetArguments object.

Contains the arguments required to load the dataset.

finetuner_args : FinetunerArguments object.

Contains the arguments required to perform finetuning.

args : Optional.

Positional arguments.

kwargs : Optional.

Keyword arguments.

group_text(tokenized_datasets, model_max_length)[source]#

Groups texts together to form blocks of maximum length model_max_length and returns the processed data as a dictionary.
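The grouping step follows the standard concatenate-then-chunk pattern for causal language modeling. The sketch below is an illustrative, simplified version of that pattern, not LMFlow's actual implementation: all tokenized sequences in a batch are concatenated, then split into fixed-size blocks of `model_max_length` tokens, dropping the trailing remainder.

```python
def group_text(tokenized_examples, model_max_length):
    """Illustrative sketch of the concatenate-then-chunk strategy.

    tokenized_examples is a dict of lists of token lists,
    e.g. {"input_ids": [[1, 2, 3], [4, 5, 6, 7]]}.
    """
    # Concatenate every sequence in each field into one long list.
    concatenated = {k: sum(v, []) for k, v in tokenized_examples.items()}
    total_length = len(concatenated["input_ids"])
    # Drop the remainder so every block has exactly model_max_length tokens.
    total_length = (total_length // model_max_length) * model_max_length
    result = {
        k: [tokens[i : i + model_max_length]
            for i in range(0, total_length, model_max_length)]
        for k, tokens in concatenated.items()
    }
    # Causal LM finetuning typically uses the inputs themselves as labels.
    result["labels"] = result["input_ids"].copy()
    return result
```

For example, two sequences of lengths 3 and 4 with `model_max_length=2` yield three blocks of two tokens each, and the final seventh token is discarded.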

tune(model, dataset, transform_dataset_in_place=True, data_collator=None)[source]#

Performs tuning for a model.

Parameters:
model : TunableModel object.

TunableModel to perform tuning on.

dataset :

Dataset used to train the model.