| Name | Last commit | Last commit date |
| --- | --- | --- |
| apps | [None][chroe] Rename TensorRT-LLM to TensorRT LLM for source code. (#7851) | 2025-09-25 21:02:35 +08:00 |
| auto_deploy | [#8245][feat] Autodeploy: Guided Decoding Support (#8551) | 2025-10-28 09:29:57 +08:00 |
| bindings/executor | [None][doc] Rename TensorRT-LLM to TensorRT LLM for homepage and the … (#7850) | 2025-09-25 21:02:35 +08:00 |
| configs | [TRTLLM-8680][doc] Add table with one-line deployment commands to docs (#8173) | 2025-11-03 17:42:41 -08:00 |
| cpp/executor | [None][doc] Rename TensorRT-LLM to TensorRT LLM for homepage and the … (#7850) | 2025-09-25 21:02:35 +08:00 |
| cpp_library | [None][chroe] Rename TensorRT-LLM to TensorRT LLM for source code. (#7851) | 2025-09-25 21:02:35 +08:00 |
| disaggregated | [TRTLLM-7251][test] Get submit eplb slots empty key work (#8945) | 2025-11-05 05:21:02 -08:00 |
| dora | Update TensorRT-LLM (#2755) | 2025-02-11 03:01:00 +00:00 |
| draft_target_model | [None][doc] Rename TensorRT-LLM to TensorRT LLM for homepage and the … (#7850) | 2025-09-25 21:02:35 +08:00 |
| eagle | [None][chroe] Rename TensorRT-LLM to TensorRT LLM for source code. (#7851) | 2025-09-25 21:02:35 +08:00 |
| infinitebench | Update TensorRT-LLM (#1725) | 2024-06-04 20:26:32 +08:00 |
| language_adapter | [None][doc] Rename TensorRT-LLM to TensorRT LLM for homepage and the … (#7850) | 2025-09-25 21:02:35 +08:00 |
| layer_wise_benchmarks | [None][fix] Layer wise benchmarks: use local models, lint (#8799) | 2025-10-30 09:47:46 -07:00 |
| llm-api | [TRTLLM-8511][feat] Add update_weights and sleep_wakeup support for rl integration (#8302) | 2025-11-04 10:19:24 -08:00 |
| llm-eval/lm-eval-harness | chore: update doc by replacing use_cuda_graph with cuda_graph_config (#5680) | 2025-07-04 15:39:15 +09:00 |
| longbench | [TRTLLM-8535][feat] Support DeepSeek V3.2 with FP8 + BF16 KV cache/NVFP4 + BF16 KV cache (#8405) | 2025-10-24 13:40:41 -04:00 |
| lookahead | [None][doc] Rename TensorRT-LLM to TensorRT LLM for homepage and the … (#7850) | 2025-09-25 21:02:35 +08:00 |
| medusa | [None][chore] update torch_dtype -> dtype in 'transformers' (#8263) | 2025-10-15 17:09:30 +09:00 |
| models | [None][feat] Add qwen3-next nvfp4 support (#8526) | 2025-11-06 09:45:44 +08:00 |
| ngram | [None][doc] Rename TensorRT-LLM to TensorRT LLM for homepage and the … (#7850) | 2025-09-25 21:02:35 +08:00 |
| openai_triton | [None][chroe] Rename TensorRT-LLM to TensorRT LLM for source code. (#7851) | 2025-09-25 21:02:35 +08:00 |
| opentelemetry | [None][feat] Add opentelemetry tracing (#5897) | 2025-10-27 18:51:07 +08:00 |
| python_plugin | [None][doc] Rename TensorRT-LLM to TensorRT LLM for homepage and the … (#7850) | 2025-09-25 21:02:35 +08:00 |
| quantization | [None][doc] Rename TensorRT-LLM to TensorRT LLM for homepage and the … (#7850) | 2025-09-25 21:02:35 +08:00 |
| ray_orchestrator | [None][chore] Use a cached model path for Ray integration test (#8660) | 2025-10-27 19:16:06 -07:00 |
| redrafter | [None][chore] update torch_dtype -> dtype in 'transformers' (#8263) | 2025-10-15 17:09:30 +09:00 |
| sample_weight_stripping | [None][doc] Rename TensorRT-LLM to TensorRT LLM for homepage and the … (#7850) | 2025-09-25 21:02:35 +08:00 |
| scaffolding | [None][feat] Add benchmark to DeepConf (#8776) | 2025-11-03 16:05:50 +08:00 |
| serve | [TRTLLM-8737][feat] Support media_io_kwargs on trtllm-serve (#8528) | 2025-10-24 12:53:40 -04:00 |
| trtllm-eval | test: Add LLGuidance test and refine guided decoding (#5348) | 2025-06-25 14:12:56 +08:00 |
| wide_ep | [None][chore] Add sample yaml for wide-ep example and minor fixes (#8825) | 2025-11-03 07:48:34 -08:00 |
| constraints.txt | [None][chore] Bump version to 1.2.0rc2 (#8562) | 2025-10-22 14:35:05 +08:00 |
| eval_long_context.py | [None][feat] Support ignored prompt length for penalties via new sampling config parameter (#8127) | 2025-10-27 13:12:31 -04:00 |
| generate_checkpoint_config.py | [None][chroe] Rename TensorRT-LLM to TensorRT LLM for source code. (#7851) | 2025-09-25 21:02:35 +08:00 |
| generate_xgrammar_tokenizer_info.py | Update TensorRT-LLM (#2783) | 2025-02-13 18:40:22 +08:00 |
| hf_lora_convert.py | Update TensorRT-LLM (#2755) | 2025-02-11 03:01:00 +00:00 |
| mmlu.py | [None][chore] update torch_dtype -> dtype in 'transformers' (#8263) | 2025-10-15 17:09:30 +09:00 |
| run.py | [None][feat] Support ignored prompt length for penalties via new sampling config parameter (#8127) | 2025-10-27 13:12:31 -04:00 |
| summarize.py | [None][feat] Support ignored prompt length for penalties via new sampling config parameter (#8127) | 2025-10-27 13:12:31 -04:00 |
| utils.py | [None][feat] Support ignored prompt length for penalties via new sampling config parameter (#8127) | 2025-10-27 13:12:31 -04:00 |