
lmflow.utils.position_interpolation.llama_rope_scaled_monkey_patch

Module Contents

Classes

CondenseRotaryEmbedding

Functions

replace_llama_with_condense(pi_ratio, ntk_ratio)

class lmflow.utils.position_interpolation.llama_rope_scaled_monkey_patch.CondenseRotaryEmbedding(dim, pi_ratio, ntk_ratio, max_position_embeddings=2048, base=10000, device=None)

Bases: torch.nn.Module

forward(x, seq_len=None)
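
The class body is not rendered on this page, so here is a minimal, self-contained sketch of what a condensed rotary embedding with this signature typically computes, assuming `pi_ratio` divides the position indices (position interpolation) and `ntk_ratio` enlarges the rotary base (NTK-aware scaling). The buffer names and exact caching scheme below are illustrative assumptions, not the library source.

```python
import torch


class CondenseRotaryEmbeddingSketch(torch.nn.Module):
    """Illustrative sketch of a PI + NTK-scaled rotary embedding."""

    def __init__(self, dim, pi_ratio, ntk_ratio,
                 max_position_embeddings=2048, base=10000, device=None):
        super().__init__()
        # NTK-aware scaling (assumed): enlarge the rotary base so high
        # frequencies are preserved while low frequencies are stretched.
        base = base * ntk_ratio ** (dim / (dim - 2))
        inv_freq = 1.0 / (base ** (torch.arange(0, dim, 2, device=device).float() / dim))
        self.register_buffer("inv_freq", inv_freq)
        # Position interpolation (assumed): cache pi_ratio times more
        # positions, but divide the indices by pi_ratio so every position
        # stays inside the range the model was trained on.
        self.max_seq_len_cached = int(max_position_embeddings * pi_ratio)
        t = torch.arange(self.max_seq_len_cached, device=device,
                         dtype=self.inv_freq.dtype) / pi_ratio
        freqs = torch.einsum("i,j->ij", t, self.inv_freq)
        emb = torch.cat((freqs, freqs), dim=-1)
        self.register_buffer("cos_cached", emb.cos()[None, None, :, :])
        self.register_buffer("sin_cached", emb.sin()[None, None, :, :])

    def forward(self, x, seq_len=None):
        # Return the cached cos/sin tables truncated to the requested length.
        return (
            self.cos_cached[:, :, :seq_len, ...].to(dtype=x.dtype),
            self.sin_cached[:, :, :seq_len, ...].to(dtype=x.dtype),
        )
```

Compressing the indices is what lets a patched model attend over roughly pi_ratio times its original context window without retraining.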
lmflow.utils.position_interpolation.llama_rope_scaled_monkey_patch.replace_llama_with_condense(pi_ratio, ntk_ratio)
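
Since this is a monkey patch of the LLaMA rotary embedding in transformers, it has to run before the model is instantiated so that the patched module is picked up. A hedged usage sketch follows; the checkpoint name is a placeholder, and the assumption is that the patch swaps the stock rotary embedding for CondenseRotaryEmbedding:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

from lmflow.utils.position_interpolation.llama_rope_scaled_monkey_patch import (
    replace_llama_with_condense,
)

# Patch first: any LLaMA model built afterwards uses the condensed
# rotary embedding. Here, 4x position interpolation and no extra NTK
# base scaling (assumed semantics of the two ratios).
replace_llama_with_condense(pi_ratio=4, ntk_ratio=1)

# Placeholder checkpoint name; any LLaMA-family model should work.
model_name = "meta-llama/Llama-2-7b-hf"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
```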