TensorRT-LLM/tensorrt_llm
Latest commit: 35c5e4f1c5 by Perkz Zheng, 2025-04-29 10:43:54 +08:00

feat: add CGA reduction fmha kernels on Blackwell. (#3763)

* update cubins
* add trtllm-gen kernels for eagle3 and also kernels with cga-reduction
* address the comments

Signed-off-by: Perkz Zheng <67892460+PerkzZheng@users.noreply.github.com>
| Name | Last commit | Last updated |
| --- | --- | --- |
| _torch | feat: add CGA reduction fmha kernels on Blackwell. (#3763) | 2025-04-29 10:43:54 +08:00 |
| auto_parallel | chore: remove usernames from comments (#3291) | 2025-04-05 13:44:28 +08:00 |
| bench | feat: Add MNNVL MoE A2A support (#3504) | 2025-04-25 17:29:08 +08:00 |
| commands | Add smart router for moe (#3641) | 2025-04-23 12:21:59 +08:00 |
| evaluate | [TRTLLM-4763][test] Accuracy test improvement (Part 3.6): Deprecate mmlu_llmapi.py (#3802) | 2025-04-23 23:05:13 +08:00 |
| executor | fix bug of create cuda stream as default parameter which will be init… (#3764) | 2025-04-28 08:16:03 +08:00 |
| inputs | feat: llama4 input processor (#3383) | 2025-04-25 16:47:14 -07:00 |
| layers | feat: Add FP8 support for SM 120 (#3248) | 2025-04-14 16:05:41 -07:00 |
| llmapi | fix: trtllm-bench build trt engine on slurm (#3825) | 2025-04-27 22:26:23 +08:00 |
| models | doc: fix path after examples migration (#3814) | 2025-04-24 02:36:45 +08:00 |
| plugin | Update TensorRT-LLM (#2936) | 2025-03-18 21:25:19 +08:00 |
| quantization | Fix fp8 kvcache (#3877) | 2025-04-29 10:31:10 +08:00 |
| runtime | feat: Offloading Multimodal embedding table to CPU in Chunked Prefill Mode (#3380) | 2025-04-21 14:31:01 +08:00 |
| scaffolding | feat: fix erros on scaffolding README (#3899) | 2025-04-29 10:15:06 +08:00 |
| serve | feat: trtllm-serve multimodal support (#3590) | 2025-04-19 05:01:28 +08:00 |
| tools | test: Fix breaking Phi3 multimodal tests (#3544) | 2025-04-15 08:02:34 +08:00 |
| __init__.py | fix: Detect pmix and raise error when mpirun is not used. (#3858) | 2025-04-26 21:49:41 +08:00 |
| _common.py | Update (#2978) | 2025-03-23 16:39:35 +08:00 |
| _dlpack_utils.py | feat: Add MNNVL MoE A2A support (#3504) | 2025-04-25 17:29:08 +08:00 |
| _ipc_utils.py | fix: Proper error bubbling for PyExecutor (#3321) | 2025-04-15 14:49:46 +08:00 |
| _mnnvl_utils.py | feat: Add MNNVL MoE A2A support (#3504) | 2025-04-25 17:29:08 +08:00 |
| _utils.py | test: add kv cache event tests for disagg workers (#3602) | 2025-04-18 18:30:19 +08:00 |
| builder.py | chore: remove usernames from comments (#3291) | 2025-04-05 13:44:28 +08:00 |
| disaggregated_params.py | Update TensorRT-LLM (#2936) | 2025-03-18 21:25:19 +08:00 |
| functional.py | Unify two versions of AllReduce custom op (#3032) | 2025-04-22 21:58:42 +08:00 |
| graph_rewriting.py | Update TensorRT-LLM (#2755) | 2025-02-11 03:01:00 +00:00 |
| logger.py | Update TensorRT-LLM (#2873) | 2025-03-11 21:13:42 +08:00 |
| lora_manager.py | add passing E2E LoRA flow (#3788) | 2025-04-23 18:38:06 +03:00 |
| mapping.py | Add smart router for moe (#3641) | 2025-04-23 12:21:59 +08:00 |
| module.py | Update (#2978) | 2025-03-23 16:39:35 +08:00 |
| network.py | chore: remove usernames from comments (#3291) | 2025-04-05 13:44:28 +08:00 |
| parameter.py | Update TensorRT-LLM (#2873) | 2025-03-11 21:13:42 +08:00 |
| profiler.py | test [TRTLLM-4477,TRTLLM-4481]: Accuracy test improvement (Part 3.5): Support GSM8K and GPQA (#3483) | 2025-04-22 07:38:16 +08:00 |
| prompt_adapter_manager.py | Update TensorRT-LLM (#2333) | 2024-10-15 15:28:40 +08:00 |
| python_plugin.py | Update TensorRT-LLM (#2755) | 2025-02-11 03:01:00 +00:00 |
| sampling_params.py | v1.2 (#3082) | 2025-03-26 23:31:29 +08:00 |
| top_model_mixin.py | Update TensorRT-LLM (#2053) | 2024-07-30 21:25:01 +08:00 |
| version.py | chore: bump version to 0.20.0rc1 (#3834) | 2025-04-24 17:43:37 +08:00 |
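For orientation, the sketch below shows a typical way to drive this package from Python through its high-level LLM API (the `llmapi` directory and `sampling_params.py` listed above). The model name and sampling values are placeholders, and exact parameter names may differ between releases, so check them against the installed version (0.20.0rc1 at this commit).

```python
# Minimal usage sketch of the tensorrt_llm Python package (assumptions noted inline).
from tensorrt_llm import LLM, SamplingParams

prompts = [
    "Hello, my name is",
    "The capital of France is",
]

# Sampling controls correspond to fields defined in sampling_params.py;
# the specific values here are arbitrary examples.
sampling_params = SamplingParams(temperature=0.8, top_p=0.95)

# Placeholder checkpoint; engine build/load happens inside LLM() and may take a while.
llm = LLM(model="TinyLlama/TinyLlama-1.1B-Chat-v1.0")

# generate() returns one result per prompt; each carries the generated text.
outputs = llm.generate(prompts, sampling_params)
for output in outputs:
    print(output.prompt, "->", output.outputs[0].text)
```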