lmflow.utils.envs

ref: pytorch/torchtune

Functions

is_accelerate_env()

Return True if any environment variable name starts with ACCELERATE_.

require_cuda_for_gpu_mode() → None

Raise if GPU execution was requested but CUDA is not available.

set_cuda_device(local_rank) → None

Bind this process to local_rank on CUDA; raises if CUDA is unavailable.

get_device_name() → str

Return the device name for the current machine.

get_torch_device() → Any

Return torch.<device_name> for the current device name.

Module Contents

lmflow.utils.envs.is_accelerate_env()

Return True if any environment variable name starts with ACCELERATE_.
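The check is small enough to sketch in full. A minimal stdlib-only equivalent (an assumption about the implementation, not a copy of it; ACCELERATE_* variables are set when a process is launched via `accelerate launch`):

```python
import os

def is_accelerate_env() -> bool:
    # True if any environment variable name starts with "ACCELERATE_",
    # which suggests the process was launched through Accelerate.
    return any(name.startswith("ACCELERATE_") for name in os.environ)

# Example: the flag flips once an ACCELERATE_* variable is present.
os.environ["ACCELERATE_MIXED_PRECISION"] = "fp16"
print(is_accelerate_env())  # True
```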

lmflow.utils.envs.require_cuda_for_gpu_mode() → None

Raise if GPU execution was requested but CUDA is not available.
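The contract can be sketched as follows. The real function presumably consults torch.cuda.is_available(); here availability is injected as a parameter so the sketch runs without torch (that parameter and the exact error message are illustrative assumptions):

```python
def require_cuda_for_gpu_mode(cuda_available: bool) -> None:
    # Guard sketch: GPU mode was requested, so fail fast when CUDA is
    # missing instead of silently falling back to CPU.
    if not cuda_available:
        raise RuntimeError(
            "GPU execution was requested but CUDA is not available"
        )

require_cuda_for_gpu_mode(True)   # no-op when CUDA is present
try:
    require_cuda_for_gpu_mode(False)
except RuntimeError as err:
    print(err)  # GPU execution was requested but CUDA is not available
```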

lmflow.utils.envs.set_cuda_device(local_rank: int) → None

Bind this process to local_rank on CUDA; raises if CUDA is unavailable.
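A sketch of the binding step: in a multi-GPU job, each worker binds to the GPU whose index matches its local rank, presumably via torch.cuda.set_device(local_rank). Here the backend call and the availability flag are injected so the sketch runs without torch (both extra parameters are illustration-only deviations from the real signature):

```python
from typing import Callable

def set_cuda_device(local_rank: int, cuda_available: bool,
                    set_device: Callable[[int], None]) -> None:
    # Raise early if CUDA is unavailable, then bind this process to the
    # device whose index equals its local rank.
    if not cuda_available:
        raise RuntimeError("CUDA is unavailable; cannot bind to a device")
    set_device(local_rank)

bound = []
set_cuda_device(1, cuda_available=True, set_device=bound.append)
print(bound)  # [1]
```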

lmflow.utils.envs.get_device_name() → str

Return the device name for the current machine.
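A plausible selection order can be sketched with optional torch (the exact names and priority are assumptions; the real function may know about additional backends):

```python
def get_device_name() -> str:
    # Prefer CUDA, then Apple's MPS backend, then fall back to CPU.
    try:
        import torch
    except ImportError:
        return "cpu"
    if torch.cuda.is_available():
        return "cuda"
    mps = getattr(torch.backends, "mps", None)
    if mps is not None and mps.is_available():
        return "mps"
    return "cpu"

print(get_device_name())  # "cuda", "mps", or "cpu" depending on the machine
```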

lmflow.utils.envs.get_torch_device() → Any

Return torch.<device_name> for the current device name.

If torch has no attribute with that name, a warning is logged and torch.cuda is returned as a fallback.
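The getattr-with-fallback behavior can be demonstrated on a stand-in namespace, since the lookup logic does not depend on torch itself (resolve_device_namespace and fake_torch are illustration-only names):

```python
import logging
from types import SimpleNamespace

def resolve_device_namespace(torch_like, device_name: str):
    # Mirror of get_torch_device's lookup: return torch.<device_name>,
    # or warn and fall back to torch.cuda when no such attribute exists.
    if hasattr(torch_like, device_name):
        return getattr(torch_like, device_name)
    logging.warning("torch has no attribute %r; falling back to cuda",
                    device_name)
    return torch_like.cuda

fake_torch = SimpleNamespace(cuda="cuda-ns", mps="mps-ns")
print(resolve_device_namespace(fake_torch, "mps"))  # cuda-ns? no: mps-ns
print(resolve_device_namespace(fake_torch, "xpu"))  # cuda-ns (fallback)
```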