TensorRT-LLM/tests/unittest/_torch
Latest commit: 2614d71994 by William Zhang, 2025-09-17 08:11:16 -07:00
[TRTLLM-7410][feat] Enable KV cache reuse and chunked prefill for mistral3.1 (#7628)
| Name | Last commit | Last commit date |
| --- | --- | --- |
| attention | [https://nvbugs/5453806][unwaive] Unwaive fp8 kvcache attention test (#7243) | 2025-09-05 12:13:57 -04:00 |
| auto_deploy | [None][chore] AutoDeploy: clean up of model unit test configuration (#7742) | 2025-09-17 10:42:01 +08:00 |
| compilation | [TRTLLM-3105][feat] Add Piecewise CUDA Graph Support (#3804) | 2025-05-09 11:04:01 +08:00 |
| debugger | Fix: fix nvbug 5356427 (#5464) | 2025-06-25 22:24:26 +08:00 |
| executor | [TRTLLM-7353][feat] Implement capturable drafting loops for speculation (#7100) | 2025-09-01 14:37:44 -04:00 |
| misc | [TRTLLM-4629][feat] Add support of CUDA13 and sm103 devices (#7568) | 2025-09-16 09:56:18 +08:00 |
| modeling | [TRTLLM-7410][feat] Enable KV cache reuse and chunked prefill for mistral3.1 (#7628) | 2025-09-17 08:11:16 -07:00 |
| models/checkpoints/hf | [None][feat] Skip prefetching consolidated safetensors when appropriate (#7013) | 2025-08-25 23:56:21 -04:00 |
| modules | [TRTLLM-4629][feat] Add support of CUDA13 and sm103 devices (#7568) | 2025-09-16 09:56:18 +08:00 |
| multi_gpu | [TRTLLM-4629][feat] Add support of CUDA13 and sm103 devices (#7568) | 2025-09-16 09:56:18 +08:00 |
| multi_gpu_modeling | [None][chore] Mass integration of release/1.0 - 3rd (#7519) | 2025-09-08 14:03:04 +08:00 |
| multimodal | [TRTLLM-6903][feat] Support chunked prefill for multimodal models (#6843) | 2025-09-14 20:10:10 -07:00 |
| sampler | [TRTLLM-7155][feat] Unify sampler handle logits implementation. (#6867) | 2025-08-22 08:09:30 +02:00 |
| speculative | [None][ci] waive test_llama_eagle3[True-FLASHINFER-False-False-False-False-True] (#7788) | 2025-09-17 15:12:55 +08:00 |
| thop | [TRTLLM-6898][feat] Add Cute DSL nvfp4 linear op (#7632) | 2025-09-16 14:25:26 +08:00 |
| helpers.py | [None][chore] share input_ids buffers among different cuda graphs (#7236) | 2025-09-06 17:49:42 -04:00 |
| pattern_watcher.py | [TRTLLM-3105][feat] Add Piecewise CUDA Graph Support (#3804) | 2025-05-09 11:04:01 +08:00 |
| test_connector.py | [None][feat] KV Cache Connector API (#7228) | 2025-08-28 23:09:27 -04:00 |
| test_torch_sampler.py | [TRTLLM-7153][feat] Move stop_criteria to sample_async (#7041) | 2025-09-07 17:36:49 +03:00 |