flashinfer.fp4_quantization.fp4_quantize

flashinfer.fp4_quantization.fp4_quantize(input: torch.Tensor, global_scale: torch.Tensor | None = None, sf_vec_size: int = 16, sf_use_ue8m0: bool = False, is_sf_swizzled_layout: bool = True, is_sf_8x4_layout: bool = False) → Tuple[torch.Tensor, torch.Tensor]

Quantize input tensor to FP4 format.

This function converts an input tensor to a compressed FP4 format with associated scale factors. It supports several input data types and scale factor layouts.

Parameters:
  • input (torch.Tensor) – Input tensor of shape [M, K] with dtype fp16/bf16/fp8_quantized.

  • global_scale (torch.Tensor, optional) – Global scale factor of shape [1] and dtype float32.

  • sf_vec_size (int, optional) – Scale factor vector size, i.e. the number of consecutive elements that share one scale factor. Defaults to 16.

  • sf_use_ue8m0 (bool, optional) – Whether to use UE8M0 format for scale factors. Defaults to False.

  • is_sf_swizzled_layout (bool, optional) – Whether to use swizzled layout for scale factors. Defaults to True.

  • is_sf_8x4_layout (bool, optional) – Whether to use the 8x4 layout instead of the 128x4 layout for scale factors. Defaults to False (128x4).

Returns:

A tuple containing:
  • Quantized tensor of shape [M, K/2] with dtype FLOAT4_E2M1X2

  • Scale factors tensor with shape determined by layout and sf_vec_size

Return type:

Tuple[torch.Tensor, torch.Tensor]

Raises:

NotImplementedError – If any of the following features are requested but not implemented:
  • BFloat16 input when BFloat16 is not enabled

  • FP8 input when FP8 is not enabled

  • sf_vec_size other than 16 or 32
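
Example:

A minimal usage sketch, assuming a CUDA device. The global_scale value below is illustrative; in practice it is typically derived from calibration statistics (e.g. the tensor's absolute maximum).

    import torch

    from flashinfer.fp4_quantization import fp4_quantize

    # Input activations of shape [M, K] in half precision on the GPU.
    x = torch.randn(128, 64, dtype=torch.float16, device="cuda")

    # Global scale factor: shape [1], dtype float32 (illustrative value).
    global_scale = torch.tensor([1.0], dtype=torch.float32, device="cuda")

    # Quantize with the defaults: sf_vec_size=16, swizzled scale-factor layout.
    x_fp4, sf = fp4_quantize(x, global_scale)

    # Two FP4 (e2m1) values are packed per output element, so the last
    # dimension is K/2.
    print(x_fp4.shape)  # torch.Size([128, 32])

    # The scale-factor tensor's shape depends on the chosen layout and
    # sf_vec_size (here: one scale per 16 elements along K, swizzled layout).
    print(sf.shape)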