Large language models cache key (K) and value (V) tensors for every previously seen token — the "KV cache." At long context lengths this cache dominates GPU memory. Recent work (Google's TurboQuant, ...
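
To see why the cache dominates at long contexts, here is a minimal back-of-the-envelope sketch. The function name `kv_cache_bytes` and the default configuration (32 layers, 32 KV heads, head dimension 128, fp16) are illustrative assumptions roughly matching a Llama-2-7B-style model, not taken from the source.

```python
def kv_cache_bytes(seq_len: int,
                   n_layers: int = 32,      # assumed: Llama-2-7B-like depth
                   n_kv_heads: int = 32,     # assumed: no grouped-query attention
                   head_dim: int = 128,
                   bytes_per_elem: int = 2,  # fp16 / bf16
                   batch_size: int = 1) -> int:
    """One K and one V vector per token, per layer, per KV head."""
    return 2 * n_layers * n_kv_heads * head_dim * bytes_per_elem * seq_len * batch_size

# At a 32k-token context the cache alone is ~16 GiB,
# on the same order as the fp16 weights of a 7B model.
print(f"{kv_cache_bytes(32_768) / 2**30:.1f} GiB")
```

Under these assumptions the cache grows linearly with sequence length (about 0.5 MiB per token here), which is why long-context serving quickly becomes memory-bound rather than compute-bound.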