
lmflow.models.hf_encoder_decoder_model#

This module provides HFEncoderDecoderModel, a wrapper around the Hugging Face transformers model and tokenizer classes for encoder-decoder architectures. It offers methods such as __init__, tokenize, encode, decode, and inference that are used for fine-tuning the model and running generation. The __init__ method takes several arguments such as model_args, tune_strategy, and ds_config, which are used to load the pretrained model and tokenizer and to initialize the training settings.

The tokenize method tokenizes the input dataset and returns the input IDs and attention masks that can be fed to the model for training or inference.

This class supports different tune_strategy options such as ‘normal’, ‘none’, ‘lora’, and ‘adapter’, which allow for different fine-tuning settings of the model. However, the ‘lora’ and ‘adapter’ strategies are not yet implemented.

Overall, this class provides a convenient interface for loading and fine-tuning transformer models and can be used for various NLP tasks such as language modeling, text classification, and question answering.

Module Contents#

Classes#

HFEncoderDecoderModel

Initializes an HFEncoderDecoderModel instance.

Attributes#

logger

lmflow.models.hf_encoder_decoder_model.logger[source]#
class lmflow.models.hf_encoder_decoder_model.HFEncoderDecoderModel(model_args, tune_strategy='normal', ds_config=None, device='gpu', use_accelerator=False, custom_model=False, with_deepspeed=True, pipeline_args=None, *args, **kwargs)[source]#

Bases: lmflow.models.encoder_decoder_model.EncoderDecoderModel, lmflow.models.interfaces.tunable.Tunable

Initializes an HFEncoderDecoderModel instance.

Parameters:
model_args

Model arguments such as model name, path, revision, etc.

tune_strategy : str or None, default="normal".

A string representing the fine-tuning strategy, such as "normal", "none", "lora", or "adapter". Defaults to "normal".

ds_config

DeepSpeed configurations.

args : Optional.

Positional arguments.

kwargs : Optional.

Keyword arguments.
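A minimal instantiation sketch is shown below. It assumes that ModelArguments can be imported from lmflow.args and exposes a model_name_or_path field, and that "t5-small" is an available checkpoint; treat both as illustrative assumptions rather than guarantees of this API.

    # Sketch: load a pretrained encoder-decoder model without setting up fine-tuning.
    # Assumption: lmflow.args.ModelArguments exposes a `model_name_or_path` field.
    from lmflow.args import ModelArguments
    from lmflow.models.hf_encoder_decoder_model import HFEncoderDecoderModel

    model_args = ModelArguments(model_name_or_path="t5-small")  # hypothetical checkpoint
    model = HFEncoderDecoderModel(
        model_args,
        tune_strategy="none",  # load weights for inference only
        ds_config=None,
    )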

abstract tokenize(dataset, *args, **kwargs)[source]#

Tokenize the full dataset.

Parameters:
dataset

Text dataset.

args : Optional.

Positional arguments.

kwargs : Optional.

Keyword arguments.

Returns:
tokenized_datasets

The tokenized dataset.

encode(input: str | List[str], *args, **kwargs) → List[int] | List[List[int]][source]#

Perform encoding process of the tokenizer.

Parameters:
input : str or list.

The text sequence.

args : Optional.

Positional arguments.

kwargs : Optional.

Keyword arguments.

Returns:
outputs

The tokenized inputs.
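A short usage sketch follows; it assumes the model instance from the constructor example above and illustrates the two input shapes that the signature advertises.

    # Sketch: encode a single string and a batch of strings.
    single_ids = model.encode("translate English to German: Hello world")  # List[int]
    batch_ids = model.encode(["first sentence", "second sentence"])        # List[List[int]]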

decode(input, *args, **kwargs) → str | List[str][source]#

Perform decoding process of the tokenizer.

Parameters:
input : list.

The token sequence.

args : Optional.

Positional arguments.

kwargs : Optional.

Keyword arguments.

Returns:
outputs

The text decoded from the token inputs.
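The sketch below round-trips a sentence through encode and decode, again assuming the model instance from the constructor example; the decoded text may differ from the input by special tokens, depending on the tokenizer.

    # Sketch: decode the token IDs produced by encode back into text.
    ids = model.encode("A short test sentence.")
    text = model.decode(ids)
    print(text)  # should closely match the original input, modulo special tokens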

inference(inputs, *args, **kwargs)[source]#

Perform generation process of the model.

Parameters:
inputs

The sequence used as a prompt for generation, or as direct inputs to the model.

args : Optional.

Positional arguments.

kwargs : Optional.

Keyword arguments.

Returns:
outputs

The generated sequence output.
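A hedged generation sketch follows. It assumes that inference accepts the token IDs produced by encode and forwards extra keyword arguments such as max_new_tokens to the backend model's generation call; both points are assumptions about typical usage rather than guarantees of this interface.

    # Sketch: prompt the model and decode the generated sequence.
    # Assumptions: token IDs from encode() are valid inputs, and generation
    # options are forwarded to the underlying Hugging Face model.
    prompt_ids = model.encode("summarize: LMFlow is a toolbox for finetuning large models.")
    output_ids = model.inference(prompt_ids, max_new_tokens=32)
    print(model.decode(output_ids))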

merge_lora_weights()[source]#
save(dir, save_full_model=False, *args, **kwargs)[source]#

Save the model and tokenizer to the given directory.

Parameters:
dir

The directory to save the model and tokenizer.

save_full_model : Optional.

Whether to save full model.

kwargs : Optional.

Keyword arguments.

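A saving sketch follows; the output directory name is hypothetical.

    # Sketch: persist the tokenizer (and, optionally, the full model weights) to disk.
    model.save("output_models/encoder_decoder_checkpoint", save_full_model=True)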

get_max_length()[source]#

Return the maximum acceptable input length, in tokens.

get_tokenizer()[source]#

Return the tokenizer of the model.

get_backend_model()[source]#

Return the backend model.
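These accessors expose the wrapped Hugging Face objects directly, as in the sketch below (assuming the model instance from the constructor example).

    # Sketch: inspect the wrapped tokenizer and backend transformers model.
    print(model.get_max_length())              # maximum input length, in tokens
    tokenizer = model.get_tokenizer()          # underlying Hugging Face tokenizer
    backend_model = model.get_backend_model()  # underlying transformers model instance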