flashinfer.comm¶
This module provides communication primitives and utilities for distributed computing, including CUDA IPC, AllReduce operations, and memory management.
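The utilities on this page assume one process per GPU inside an already initialized torch.distributed job. Below is a minimal setup sketch; the torchrun-style launch and process-group bootstrap are assumptions about typical usage, not requirements stated on this page.

```python
# Minimal setup sketch: one process per GPU, launched e.g. via torchrun
# (which sets RANK / WORLD_SIZE / LOCAL_RANK for init_process_group).
import torch
import torch.distributed as dist

import flashinfer.comm as comm  # the module documented on this page

dist.init_process_group(backend="nccl")
rank = dist.get_rank()
world_size = dist.get_world_size()
torch.cuda.set_device(rank % torch.cuda.device_count())
# The comm.* utilities below (IPC buffers, workspaces, fused allreduce)
# are then called from every rank in this process group.
```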
CUDA IPC Utilities¶
Creates a shared buffer and returns a list of pointers representing the buffer on all processes in the group.
Frees a shared buffer.
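A hedged usage sketch of the two entries above, assuming they correspond to functions named create_shared_buffer and free_shared_buffer taking a byte size and an optional process group; the names and signatures are inferred from the descriptions, not confirmed by this page.

```python
# Hedged sketch: the names create_shared_buffer / free_shared_buffer and their
# signatures are assumptions inferred from the descriptions above.
import torch.distributed as dist
import flashinfer.comm as comm

group = dist.group.WORLD
size_in_bytes = 8 * 1024 * 1024

# Every rank receives a list of pointers, one per process in the group.
ptrs = comm.create_shared_buffer(size_in_bytes, group=group)
try:
    pass  # hand `ptrs` to a kernel or workspace that expects IPC pointers
finally:
    comm.free_shared_buffer(ptrs, group=group)
```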
DLPack Utilities¶
Pack GPU memory into a PyTorch tensor with specified stride.
Mapping Utilities¶
Example: a node with 8 GPUs, tp_size = 4, cp_size = 1, pp_size = 2.
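To make that example configuration concrete, the following plain-Python sketch shows how 8 ranks fold into tensor-parallel and pipeline-parallel groups, assuming TP is the innermost dimension of the rank layout; this is ordinary rank arithmetic, not a specific flashinfer API.

```python
# Plain-Python illustration of the example above: 8 GPUs, tp_size=4, pp_size=2,
# assuming tensor parallelism is the innermost dimension of the rank layout.
world_size, tp_size, pp_size = 8, 4, 2
assert world_size == tp_size * pp_size

for rank in range(world_size):
    tp_rank = rank % tp_size       # position inside the tensor-parallel group
    pp_rank = rank // tp_size      # pipeline stage this rank belongs to
    tp_group = [pp_rank * tp_size + r for r in range(tp_size)]
    print(f"rank={rank} tp_rank={tp_rank} pp_rank={pp_rank} tp_group={tp_group}")
# Ranks 0-3 form the TP group of pipeline stage 0, ranks 4-7 that of stage 1.
```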
TensorRT-LLM AllReduce¶
Types and Enums¶
Core Operations¶
Parameters:
- allreduce_in: the input tensor. [token_num, hidden_dim]
- world_size: the size of the process group.
- world_rank: the rank of the current process.
- token_num: the number of tokens in the sequence.
- hidden_dim: the dimension of the hidden states.
- workspace_ptrs: the workspace pointers.
- launch_with_pdl: whether to launch with PDL (programmatic dependent launch).
- use_oneshot: whether to use the oneshot strategy; if None, internal heuristics are used.
- trigger_completion_at_end: whether to trigger completion at the end.
- fp32_acc: whether to use fp32 accumulation.
- pattern_code: the pattern code.
- allreduce_out: the output tensor. [token_num, hidden_dim]
- residual_in: the residual input tensor. [token_num, hidden_dim]
- residual_out: the residual output tensor. [token_num, hidden_dim]
- norm_out: the norm output tensor. [token_num, hidden_dim]
- quant_out: the quant output tensor. [token_num, hidden_dim]
- scale_out: the scale output tensor. For an initialization reference, see tests/comm/test_trtllm_allreduce_fusion.py.
- rms_gamma: the RMS gamma tensor. [hidden_dim]
- rms_eps: the RMS epsilon value.
- scale_factor: the scale factor. For CUDA graph safety, it should be a tensor.
- layout_code: the layout code.
- metadata: optional workspace metadata dict from create_ipc_workspace_for_all_reduce_fusion. If provided, validates that token_num <= max_token_num, world_size == tp_size, and hidden_dim == workspace hidden_dim. Raises ValueError if validation fails.
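To clarify how the outputs above relate to each other, here is an unfused PyTorch reference of the computation the parameter list implies (allreduce, residual add, RMSNorm, optional scaled quantization). This is for intuition only and is not the library's fused kernel.

```python
# Unfused PyTorch reference of the computation implied by the parameters above.
# For intuition only; the fused kernel differs in layout, precision, and quant format.
import torch
import torch.distributed as dist

def allreduce_fusion_reference(allreduce_in, residual_in, rms_gamma,
                               rms_eps=1e-6, scale_factor=None):
    # allreduce_out: sum of allreduce_in across all ranks in the group
    allreduce_out = allreduce_in.clone()
    dist.all_reduce(allreduce_out)

    # residual_out: allreduce result plus the incoming residual
    residual_out = allreduce_out + residual_in

    # norm_out: RMSNorm of residual_out weighted by rms_gamma
    variance = residual_out.float().pow(2).mean(dim=-1, keepdim=True)
    norm_out = residual_out.float() * torch.rsqrt(variance + rms_eps)
    norm_out = (norm_out * rms_gamma.float()).to(allreduce_in.dtype)

    # quant_out: stand-in for the quantized output, scaled by scale_factor
    quant_out = norm_out / scale_factor if scale_factor is not None else None
    return allreduce_out, residual_out, norm_out, quant_out
```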
Parameters: - inp: the input tensor.
Parameters: - world_size: the size of the process group.
Parameters: - allreduce_in: the input tensor.
Workspace Management¶
Parameters: - rank: the rank of the current process.
Parameters: - tp_rank: the rank of the current process.
Note: This function destroys a workspace for all reduce.
Parameters: - workspace: the workspace to destroy.
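A hedged sketch of pairing workspace creation with destruction; only the parameter names listed above and the create_ipc_workspace_for_all_reduce_fusion name mentioned earlier come from this page, and the exact signatures (including the destroy helper's name) are assumptions.

```python
# Hedged sketch of pairing workspace creation with destruction. Signatures are
# assumptions; only the name create_ipc_workspace_for_all_reduce_fusion and
# parameters such as tp_rank come from this page.
import torch.distributed as dist
import flashinfer.comm as comm

tp_size = dist.get_world_size()
tp_rank = dist.get_rank()

workspace = comm.create_ipc_workspace_for_all_reduce_fusion(  # assumed signature
    tp_rank=tp_rank,
    tp_size=tp_size,
    max_token_num=2048,
    hidden_dim=4096,
)
try:
    pass  # run fused allreduce calls against this workspace
finally:
    comm.destroy_ipc_workspace_for_all_reduce_fusion(workspace)  # assumed name
```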
Initialization and Utilities¶
Initialize the 3 Lamport buffers with negative zero.
Helper function to compute the padded size of the fp4 swizzled layout. |
vLLM AllReduce¶
Performs an out-of-place all reduce.
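Here "out-of-place" means the reduced result is written to a separate output tensor instead of overwriting the input. The snippet below illustrates that semantics with plain torch.distributed; it is not the vLLM-derived custom kernel itself.

```python
# Illustration of out-of-place allreduce semantics (not the custom kernel itself):
# the input tensor is left untouched and the reduction lands in a fresh tensor.
import torch
import torch.distributed as dist

def out_of_place_all_reduce(inp: torch.Tensor) -> torch.Tensor:
    out = inp.clone()      # keep `inp` intact
    dist.all_reduce(out)   # reduce (sum) into the copy across all ranks
    return out
```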
MNNVL (Multi-Node NVLink)¶
Core Classes¶
Wrapper class for McastDeviceMemory to facilitate PyTorch tensor creation.
Utility Functions¶
Create a PyTorch tensor from a CUDA memory pointer using DLPack.
A helper function that allocates memory on CUDA and copies the data from the host to the device.
TensorRT-LLM MNNVL AllReduce¶
Perform a multi-node NVLink all-reduce operation across multiple GPUs.
Performs MNNVL TwoShot Allreduce + RMSNorm.
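"TwoShot" refers to decomposing the allreduce into a reduce-scatter followed by an all-gather. The sketch below illustrates that decomposition with plain torch.distributed; the actual kernel uses NVLink multicast and additionally fuses the RMSNorm.

```python
# Illustration of the "two-shot" allreduce decomposition: shot 1 reduce-scatters
# shards across ranks, shot 2 all-gathers the reduced shards. Plain torch.distributed
# is used for clarity; the real kernel uses NVLink multicast and fuses the RMSNorm.
import torch
import torch.distributed as dist

def two_shot_all_reduce(inp: torch.Tensor) -> torch.Tensor:
    world_size = dist.get_world_size()
    assert inp.shape[0] % world_size == 0
    shard = inp.new_empty(inp.shape[0] // world_size, *inp.shape[1:])
    dist.reduce_scatter_tensor(shard, inp)    # shot 1: each rank reduces one shard
    out = torch.empty_like(inp)
    dist.all_gather_into_tensor(out, shard)   # shot 2: collect all reduced shards
    return out
```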