
lmflow.utils.conversation_template.llama

Module Contents

Classes

Llama2ConversationTemplate

Attributes

logger

LLAMA3_TEMPLATE

LLAMA2_TEMPLATE

lmflow.utils.conversation_template.llama.logger
class lmflow.utils.conversation_template.llama.Llama2ConversationTemplate

Bases: lmflow.utils.conversation_template.base.ConversationTemplate

_encode(tokenizer: transformers.PreTrainedTokenizer, messages: List[Dict[str, str]], system: str | None = None, tools: str | None = None, **kwargs) -> Sequence[Tuple[List[int], List[int]]]
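To make the message layout concrete, here is a minimal sketch of the Llama-2 chat format that a template like Llama2ConversationTemplate targets. The helper `build_llama2_prompt` is hypothetical and for illustration only; it is not LMFlow's `_encode` (which returns token-id pairs via a tokenizer rather than a string), but the `[INST]`/`<<SYS>>` layout it produces matches the published Llama-2 convention:

```python
from typing import Dict, List, Optional

def build_llama2_prompt(messages: List[Dict[str, str]], system: Optional[str] = None) -> str:
    # Hypothetical illustration of the Llama-2 chat layout, not LMFlow's API.
    # Each user turn is wrapped in [INST] ... [/INST]; the system prompt is
    # folded into the first user turn between <<SYS>> markers.
    prompt = ""
    for i, msg in enumerate(messages):
        if msg["role"] == "user":
            content = msg["content"]
            if i == 0 and system:
                content = f"<<SYS>>\n{system}\n<</SYS>>\n\n{content}"
            prompt += f"<s>[INST] {content} [/INST]"
        elif msg["role"] == "assistant":
            prompt += f" {msg['content']} </s>"
    return prompt

prompt = build_llama2_prompt(
    [{"role": "user", "content": "Hi"}],
    system="Be brief.",
)
```

The real `_encode` additionally tokenizes each turn and returns `(input_ids, label_ids)` pairs per round, so that loss masking can be applied to non-assistant tokens.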
lmflow.utils.conversation_template.llama.LLAMA3_TEMPLATE
lmflow.utils.conversation_template.llama.LLAMA2_TEMPLATE
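For contrast with the Llama-2 layout, the LLAMA3_TEMPLATE attribute targets Llama-3's header-token format. The sketch below, with the hypothetical helper `build_llama3_prompt`, shows that layout under the assumption that the template follows Meta's published special-token scheme; it is not LMFlow's actual template object:

```python
from typing import Dict, List, Optional

def build_llama3_prompt(messages: List[Dict[str, str]], system: Optional[str] = None) -> str:
    # Hypothetical illustration of the Llama-3 chat layout, not LMFlow's API.
    # Every turn (including the system prompt) is delimited by header tokens
    # and terminated with <|eot_id|>.
    prompt = "<|begin_of_text|>"
    if system:
        prompt += f"<|start_header_id|>system<|end_header_id|>\n\n{system}<|eot_id|>"
    for msg in messages:
        prompt += (
            f"<|start_header_id|>{msg['role']}<|end_header_id|>\n\n"
            f"{msg['content']}<|eot_id|>"
        )
    # Trailing assistant header cues the model to generate the reply.
    prompt += "<|start_header_id|>assistant<|end_header_id|>\n\n"
    return prompt
```

Unlike Llama-2, the system prompt here is a standalone turn rather than being folded into the first user message, which is why the two model families need separate template attributes.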