Token Sparse Attention: Efficient Long-Context Inference with Interleaved Token Selection
Abstract
Token Sparse Attention enables efficient long-context inference by dynamically compressing and decompressing attention tensors at the token level, achieving significant speedup with minimal accuracy loss.
The quadratic complexity of attention remains the central bottleneck in long-context inference for large language models. Prior acceleration methods either sparsify the attention map with structured patterns or permanently evict tokens at specific layers; the former can retain irrelevant tokens, while the latter relies on irreversible early decisions despite the layer- and head-wise dynamics of token importance. In this paper, we propose Token Sparse Attention, a lightweight and dynamic token-level sparsification mechanism that compresses the per-head Q, K, and V tensors to a reduced token set during attention and then decompresses the output back to the original sequence, allowing token information to be reconsidered in subsequent layers. Token Sparse Attention thereby exposes a new design point at the intersection of token selection and sparse attention. Our approach is fully compatible with dense attention implementations, including Flash Attention, and can be seamlessly composed with existing sparse attention kernels. Experimental results show that Token Sparse Attention consistently improves the accuracy-latency trade-off, achieving up to 3.23× attention speedup at 128K context with less than 1% accuracy degradation. These results demonstrate that dynamic, interleaved token-level sparsification is a complementary and effective strategy for scalable long-context inference.
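The abstract describes a compress-attend-decompress flow at the token level. Below is a minimal, hypothetical PyTorch sketch of that flow, not the authors' implementation: the key-norm importance score, the `keep_ratio` parameter, the per-head top-k selection, and the value pass-through for unselected positions are all illustrative assumptions; the paper's actual selection rule and decompression scheme may differ.

```python
# Minimal sketch (assumptions noted in comments), not the released implementation.
import torch
import torch.nn.functional as F

def token_sparse_attention(q, k, v, keep_ratio=0.25):
    """q, k, v: [batch, heads, seq_len, head_dim] -> output of the same shape.

    Compresses Q/K/V to a reduced token set per head, runs dense attention on
    the reduced set, then decompresses the result back to the full sequence.
    Causal masking is omitted for brevity.
    """
    B, H, S, D = q.shape
    n_keep = max(1, int(S * keep_ratio))

    # Placeholder importance estimate: key-vector L2 norm per token, per head.
    # The paper's actual selection criterion may differ.
    scores = k.norm(dim=-1)                          # [B, H, S]
    keep_idx = scores.topk(n_keep, dim=-1).indices   # [B, H, n_keep]
    idx = keep_idx.unsqueeze(-1).expand(-1, -1, -1, D)

    # Compress: gather the selected tokens for Q, K, V.
    q_c = torch.gather(q, 2, idx)
    k_c = torch.gather(k, 2, idx)
    v_c = torch.gather(v, 2, idx)

    # Dense attention over the reduced token set; this call could be replaced
    # by FlashAttention or another sparse kernel.
    out_c = F.scaled_dot_product_attention(q_c, k_c, v_c)

    # Decompress: scatter outputs back to their original positions. Unselected
    # positions keep a pass-through value (here, their input V) so later layers
    # can still reconsider those tokens -- an illustrative choice, not the paper's.
    out = v.clone()
    out.scatter_(2, idx, out_c)
    return out
```

Because the inner call is an ordinary dense attention over a shorter sequence, the compress/decompress wrapper composes naturally with existing dense or sparse attention kernels, which is the compatibility property the abstract highlights.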
Community
Token Sparse Attention is a complementary approach to efficient sparse attention that dynamically performs token-level compression during attention and reversibly decompresses the representations afterward.
Code release is in progress; a cleaned and documented implementation will be released soon.
arXiv explained breakdown of this paper: https://arxivexplained.com/papers/token-sparse-attention-efficient-long-context-inference-with-interleaved-token-selection
This is an automated message from the Librarian Bot. The following similar papers were recommended by the Semantic Scholar API:
- A Unified Sparse Attention via Multi-Granularity Compression (2025)
- Training-free Context-adaptive Attention for Efficient Long Context Modeling (2025)
- BLASST: Dynamic BLocked Attention Sparsity via Softmax Thresholding (2025)
- HyLRA: Hybrid Layer Reuse Attention for Efficient Long-Context Inference (2026)
- Focus-dLLM: Accelerating Long-Context Diffusion LLM Inference via Confidence-Guided Context Focusing (2026)
- KV Admission: Learning What to Write for Efficient Long-Context Inference (2025)
- Block Sparse Flash Attention (2025)