TensorRT-LLM/examples
Latest commit: e47c787dd7 (Chang Liu, 2025-10-24 13:40:41 -04:00): [TRTLLM-8535][feat] Support DeepSeek V3.2 with FP8 + BF16 KV cache/NVFP4 + BF16 KV cache (#8405)
Signed-off-by: Fanrong Li <23290157+lfr-0531@users.noreply.github.com>
Signed-off-by: Chang Liu <9713593+chang-l@users.noreply.github.com>
Signed-off-by: Tracin <10434017+Tracin@users.noreply.github.com>
| Name | Last commit | Last updated |
| --- | --- | --- |
| apps | [None][chroe] Rename TensorRT-LLM to TensorRT LLM for source code. (#7851) | 2025-09-25 21:02:35 +08:00 |
| auto_deploy | [None][chore] AutoDeploy: cleanup old inference optimizer configs (#8039) | 2025-10-17 15:55:57 -04:00 |
| bindings/executor | [None][doc] Rename TensorRT-LLM to TensorRT LLM for homepage and the … (#7850) | 2025-09-25 21:02:35 +08:00 |
| cpp/executor | [None][doc] Rename TensorRT-LLM to TensorRT LLM for homepage and the … (#7850) | 2025-09-25 21:02:35 +08:00 |
| cpp_library | [None][chroe] Rename TensorRT-LLM to TensorRT LLM for source code. (#7851) | 2025-09-25 21:02:35 +08:00 |
| disaggregated | [https://nvbugs/5429636][feat] Kv transfer timeout (#8459) | 2025-10-22 09:29:02 -04:00 |
| dora | Update TensorRT-LLM (#2755) | 2025-02-11 03:01:00 +00:00 |
| draft_target_model | [None][doc] Rename TensorRT-LLM to TensorRT LLM for homepage and the … (#7850) | 2025-09-25 21:02:35 +08:00 |
| eagle | [None][chroe] Rename TensorRT-LLM to TensorRT LLM for source code. (#7851) | 2025-09-25 21:02:35 +08:00 |
| infinitebench | Update TensorRT-LLM (#1725) | 2024-06-04 20:26:32 +08:00 |
| language_adapter | [None][doc] Rename TensorRT-LLM to TensorRT LLM for homepage and the … (#7850) | 2025-09-25 21:02:35 +08:00 |
| llm-api | [TRTLLM-8535][feat] Support DeepSeek V3.2 with FP8 + BF16 KV cache/NVFP4 + BF16 KV cache (#8405) | 2025-10-24 13:40:41 -04:00 |
| llm-eval/lm-eval-harness | chore: update doc by replacing use_cuda_graph with cuda_graph_config (#5680) | 2025-07-04 15:39:15 +09:00 |
| longbench | [TRTLLM-8535][feat] Support DeepSeek V3.2 with FP8 + BF16 KV cache/NVFP4 + BF16 KV cache (#8405) | 2025-10-24 13:40:41 -04:00 |
| lookahead | [None][doc] Rename TensorRT-LLM to TensorRT LLM for homepage and the … (#7850) | 2025-09-25 21:02:35 +08:00 |
| medusa | [None][chore] update torch_dtype -> dtype in 'transformers' (#8263) | 2025-10-15 17:09:30 +09:00 |
| models | [TRTLLM-8682][chore] Remove auto_parallel module (#8329) | 2025-10-22 20:53:08 -04:00 |
| ngram | [None][doc] Rename TensorRT-LLM to TensorRT LLM for homepage and the … (#7850) | 2025-09-25 21:02:35 +08:00 |
| openai_triton | [None][chroe] Rename TensorRT-LLM to TensorRT LLM for source code. (#7851) | 2025-09-25 21:02:35 +08:00 |
| python_plugin | [None][doc] Rename TensorRT-LLM to TensorRT LLM for homepage and the … (#7850) | 2025-09-25 21:02:35 +08:00 |
| quantization | [None][doc] Rename TensorRT-LLM to TensorRT LLM for homepage and the … (#7850) | 2025-09-25 21:02:35 +08:00 |
| ray_orchestrator | [None][doc] Ray orchestrator initial doc (#8373) | 2025-10-14 21:17:57 -07:00 |
| redrafter | [None][chore] update torch_dtype -> dtype in 'transformers' (#8263) | 2025-10-15 17:09:30 +09:00 |
| sample_weight_stripping | [None][doc] Rename TensorRT-LLM to TensorRT LLM for homepage and the … (#7850) | 2025-09-25 21:02:35 +08:00 |
| scaffolding | [None][feat] Dev DeepConf (#8362) | 2025-10-16 11:01:31 +08:00 |
| serve | [TRTLLM-8737][feat] Support media_io_kwargs on trtllm-serve (#8528) | 2025-10-24 12:53:40 -04:00 |
| trtllm-eval | test: Add LLGuidance test and refine guided decoding (#5348) | 2025-06-25 14:12:56 +08:00 |
| wide_ep | [None][chore] Move submit.sh to python and use yaml configuration (#8003) | 2025-10-20 22:36:50 -04:00 |
| constraints.txt | [None][chore] Bump version to 1.2.0rc2 (#8562) | 2025-10-22 14:35:05 +08:00 |
| eval_long_context.py | Feat: Variable-Beam-Width-Search (VBWS) part3 (#3338) | 2025-04-08 23:51:27 +08:00 |
| generate_checkpoint_config.py | [None][chroe] Rename TensorRT-LLM to TensorRT LLM for source code. (#7851) | 2025-09-25 21:02:35 +08:00 |
| generate_xgrammar_tokenizer_info.py | Update TensorRT-LLM (#2783) | 2025-02-13 18:40:22 +08:00 |
| hf_lora_convert.py | Update TensorRT-LLM (#2755) | 2025-02-11 03:01:00 +00:00 |
| mmlu.py | [None][chore] update torch_dtype -> dtype in 'transformers' (#8263) | 2025-10-15 17:09:30 +09:00 |
| run.py | [nvbug/5374773] chore: Add a runtime flag to enable fail fast when attn window is too large to fit at least one sequence in KV cache (#5974) | 2025-07-25 18:10:40 -04:00 |
| summarize.py | [None][chore] update torch_dtype -> dtype in 'transformers' (#8263) | 2025-10-15 17:09:30 +09:00 |
| utils.py | [TRTLLM-6341][feature] Support SWA KV cache reuse (#6768) | 2025-09-24 14:28:24 +08:00 |