Welcome to FlashInfer’s documentation!

Blog | Discussion Forum | GitHub

FlashInfer is a library and kernel generator for Large Language Models that provides high-performance implementations of LLM GPU kernels such as FlashAttention, PageAttention, and LoRA. FlashInfer focuses on LLM serving and inference, and delivers state-of-the-art performance across diverse scenarios.
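As a quick taste of the PyTorch API, the sketch below runs FlashInfer's fused single-request decode attention kernel over a random KV cache. It assumes `flashinfer` is installed with a CUDA build matching your PyTorch installation; the tensor shapes are illustrative, not tied to any particular model.

```python
import torch
import flashinfer

# Illustrative sizes (not tied to any particular model).
kv_len, num_kv_heads, num_qo_heads, head_dim = 4096, 8, 32, 128

# KV cache for a single request, laid out as [kv_len, num_kv_heads, head_dim].
k = torch.randn(kv_len, num_kv_heads, head_dim, dtype=torch.float16, device="cuda")
v = torch.randn(kv_len, num_kv_heads, head_dim, dtype=torch.float16, device="cuda")

# Query for one decode step (a single token), shape [num_qo_heads, head_dim].
q = torch.randn(num_qo_heads, head_dim, dtype=torch.float16, device="cuda")

# Fused decode attention; grouped-query attention (num_qo_heads > num_kv_heads)
# is handled inside the kernel.
o = flashinfer.single_decode_with_kv_cache(q, k, v)
print(o.shape)  # torch.Size([32, 128])
```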

PyTorch API Reference