lmflow.tokenization.hf_decoder_model#

Attributes#

logger

tok_logger

Functions#

blocking(→ Dict)

tokenize_function(→ Dict)

Handles tokenization of text_only and text2text datasets.

conversation_tokenize_function(→ Dict)

Handles tokenization of conversation datasets.

Module Contents#

lmflow.tokenization.hf_decoder_model.logger[source]#
lmflow.tokenization.hf_decoder_model.tok_logger[source]#
lmflow.tokenization.hf_decoder_model.blocking(token_dict: Dict, block_size: int, model_max_length: int, pad_token_id: int, padding_side: str, truncation_side: str = 'right') → Dict[source]#
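
A minimal usage sketch based only on the signature above, assuming that blocking pads or truncates each tokenized sequence to block_size; the tokenizer choice and all argument values are illustrative, not behaviour documented on this page.

```python
from transformers import AutoTokenizer

from lmflow.tokenization.hf_decoder_model import blocking

tokenizer = AutoTokenizer.from_pretrained("gpt2")

# tokenizer(...) returns a dict-like BatchEncoding; convert it to a plain dict
# to match the documented token_dict: Dict parameter.
token_dict = dict(tokenizer(["Hello world", "A somewhat longer example sentence."]))

blocked = blocking(
    token_dict=token_dict,
    block_size=16,                                # target length per block (illustrative)
    model_max_length=tokenizer.model_max_length,
    pad_token_id=tokenizer.pad_token_id or tokenizer.eos_token_id,  # GPT-2 has no pad token
    padding_side="right",
    truncation_side="right",
)
```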
lmflow.tokenization.hf_decoder_model.tokenize_function(examples, data_args: lmflow.args.DatasetArguments, tokenizer: transformers.PreTrainedTokenizer | transformers.PreTrainedTokenizerFast, column_names, label_columns, tokenized_column_order, add_special_tokens, use_truncation) → Dict[source]#

Handles tokenization of text_only and text2text datasets.
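
A hedged sketch of how tokenize_function might be bound with functools.partial and applied through datasets.Dataset.map; the text column name, the DatasetArguments construction, and the flag values are illustrative assumptions rather than behaviour documented on this page.

```python
from functools import partial

from datasets import Dataset
from transformers import AutoTokenizer

from lmflow.args import DatasetArguments
from lmflow.tokenization.hf_decoder_model import tokenize_function

tokenizer = AutoTokenizer.from_pretrained("gpt2")
data_args = DatasetArguments()  # assumed to be constructible with default field values

raw_dataset = Dataset.from_dict({"text": ["first sample", "second sample"]})

tokenize_fn = partial(
    tokenize_function,
    data_args=data_args,
    tokenizer=tokenizer,
    column_names=["text"],            # columns to tokenize (assumed)
    label_columns=["text"],           # columns whose tokens become labels (assumed)
    tokenized_column_order=["text"],  # concatenation order of tokenized columns (assumed)
    add_special_tokens=True,
    use_truncation=False,
)

tokenized_dataset = raw_dataset.map(tokenize_fn, batched=True, remove_columns=["text"])
```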

lmflow.tokenization.hf_decoder_model.conversation_tokenize_function(examples, data_args: lmflow.args.DatasetArguments, tokenizer: transformers.PreTrainedTokenizer | transformers.PreTrainedTokenizerFast, column_names, conversation_template: lmflow.utils.conversation_template.ConversationTemplate) → Dict[source]#

Handles tokenization of conversation datasets.
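
A hedged sketch for conversation-style data, reusing tokenizer and data_args from the sketch above; the messages column name and sample layout are assumptions, and the ConversationTemplate instance is assumed to be configured elsewhere (its construction is not covered on this page).

```python
from functools import partial

from datasets import Dataset

from lmflow.tokenization.hf_decoder_model import conversation_tokenize_function
from lmflow.utils.conversation_template import ConversationTemplate

conversation_dataset = Dataset.from_dict({
    "messages": [[
        {"role": "user", "content": "Hello!"},
        {"role": "assistant", "content": "Hi, how can I help?"},
    ]],
})

# Supply an already configured ConversationTemplate here; how to build one
# is outside the scope of this module's documentation.
conversation_template: ConversationTemplate = ...

conversation_fn = partial(
    conversation_tokenize_function,
    data_args=data_args,      # reused from the tokenize_function sketch
    tokenizer=tokenizer,      # reused from the tokenize_function sketch
    column_names=["messages"],
    conversation_template=conversation_template,
)

tokenized_conversations = conversation_dataset.map(
    conversation_fn, batched=True, remove_columns=["messages"]
)
```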