
Examples

We provide several examples that show how to use our package on your own problems.

Data preparation

First prepare your dataset in one of the supported formats below; a minimal sketch of the file layout follows the list.

  • Dataset
    • Dataset Format in General
    • Supported Dataset and Detailed Formats
      • Conversation
      • TextOnly
      • Text2Text
      • Paired Conversation
  • Checkpoints
    • LLaMA Checkpoint
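
The snippet below is a minimal sketch of two common dataset files, following the "text_only" and "text2text" formats described on the Dataset page. The sample strings and the output path are placeholders; see the Dataset page for the authoritative schemas.

    import json

    # Plain corpus for language modeling ("text_only" format).
    text_only = {
        "type": "text_only",
        "instances": [
            {"text": "LMFlow is an extensible toolkit for finetuning large models."},
        ],
    }

    # Input/output pairs for instruction tuning ("text2text" format).
    text2text = {
        "type": "text2text",
        "instances": [
            {"input": "What is LMFlow?",
             "output": "An extensible toolkit for finetuning and inference of large foundation models."},
        ],
    }

    # Hypothetical path; point the finetuning script at the directory
    # containing this file.
    with open("data/example_dataset/train.json", "w") as f:
        json.dump(text_only, f, indent=2)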

Finetuning

For supervised finetuning (SFT), choose one of the methods below; a minimal LoRA sketch follows the list.

  • Finetuning
    • Full Parameters
    • Layerwise Importance Sampled AdamW (LISA)
    • Low-Rank Adaptation (LoRA)
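
As a concrete illustration of the LoRA option, here is a minimal sketch using the Hugging Face peft library. The base model name is a placeholder and the hyper-parameters are illustrative, not LMFlow defaults.

    from transformers import AutoModelForCausalLM
    from peft import LoraConfig, get_peft_model

    # Placeholder base model; substitute the checkpoint you are finetuning.
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    # Illustrative LoRA hyper-parameters: rank, scaling, and dropout.
    config = LoraConfig(
        r=8,
        lora_alpha=32,
        lora_dropout=0.1,
        task_type="CAUSAL_LM",
    )

    # Wrap the model so only the low-rank adapter weights are trainable.
    model = get_peft_model(model, config)
    model.print_trainable_parameters()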

For the alignment process, follow the reward modeling and RAFT guides below; a sketch of RAFT's core selection loop follows the list.

  • Reward Modeling
    • Introduction
    • Step 1 Supervised Finetuning (SFT)
    • Step 2 Reward Modeling
    • Examples
  • RAFT
    • 1 Introduction
      • 1.1 Dataset description
    • 2 Reward Modeling
      • 2.1 Supervised Finetuning (SFT)
      • 2.2 Reward Modeling
      • 2.3 LoRA Merge and Get Reward Model
    • 3 RAFT Alignment
      • 3.1 Algorithms Overview
      • 3.2 Hyper-parameters
      • 3.3 Examples
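
To make the RAFT overview concrete, here is a minimal sketch of the core loop, assuming a trained reward model: sample several responses per prompt, score them, and keep only the top-ranked pairs for the next round of supervised finetuning. sample_fn and reward_fn are hypothetical callables standing in for the policy model and the reward model; this is a sketch of the idea, not the exact LMFlow code.

    def raft_collect(prompts, sample_fn, reward_fn, n_samples=8, top_frac=0.25):
        """One RAFT-style data-collection step.

        sample_fn(prompt) -> response : draws one response from the current policy
        reward_fn(prompt, response)   : scalar score from the reward model
        """
        selected = []
        for prompt in prompts:
            candidates = [sample_fn(prompt) for _ in range(n_samples)]
            ranked = sorted(candidates,
                            key=lambda r: reward_fn(prompt, r),
                            reverse=True)
            k = max(1, int(n_samples * top_frac))
            selected.extend((prompt, response) for response in ranked[:k])
        # Finetune the policy on the selected (prompt, response) pairs,
        # then repeat with the updated policy.
        return selected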

Inference

Refer to the examples in the LMFlow repository; a minimal generation sketch is shown below.
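
The following sketch uses Hugging Face transformers directly; the checkpoint path is a placeholder for whatever model you finetuned above.

    from transformers import AutoModelForCausalLM, AutoTokenizer

    # Hypothetical path to a finetuned checkpoint.
    path = "output_models/finetuned_model"
    tokenizer = AutoTokenizer.from_pretrained(path)
    model = AutoModelForCausalLM.from_pretrained(path)

    inputs = tokenizer("What is LMFlow?", return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=128)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))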

Evaluation

The benchmark guide below covers an NLL-based task setting and an LM-Evaluation task setting; a minimal NLL sketch follows the list.

  • LMFlow Benchmark Guide
    • 1. NLL Task Setting
      • Setup
      • Create Your Task Dataset File
      • Task Registration
    • 2. LM-Evaluation Task Setting
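
As background for the NLL task setting, the sketch below computes the mean per-token negative log-likelihood of a text under a causal language model; "gpt2" and the sample sentence are placeholders.

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("gpt2")  # placeholder model
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    enc = tokenizer("The quick brown fox jumps over the lazy dog.",
                    return_tensors="pt")
    with torch.no_grad():
        out = model(**enc, labels=enc["input_ids"])

    # out.loss is the mean cross-entropy over predicted tokens, i.e. the
    # average negative log-likelihood; lower is better.
    print(float(out.loss))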
