TensorRT-LLM/cpp/tensorrt_llm/kernels
Latest commit fca13b8c95 by Zhou Yuxin: hopper-style context MLA (#5713)
Signed-off-by: Yuxin <yuxinz@nvidia.com>
Signed-off-by: Tomer Asida <57313761+tomeras91@users.noreply.github.com>
Signed-off-by: Yiqing Yan <yiqingy@nvidia.com>
Signed-off-by: qqiao <qqiao@nvidia.com>
Signed-off-by: Fred Wei <20514172+WeiHaocheng@users.noreply.github.com>
Signed-off-by: Omer Ullman Argov <118735753+omera-nv@users.noreply.github.com>
Signed-off-by: Netanel Haber <58652339+netanel-haber@users.noreply.github.com>
Signed-off-by: Robin Kobus <19427718+Funatiq@users.noreply.github.com>
Signed-off-by: nv-guomingz <137257613+nv-guomingz@users.noreply.github.com>
Signed-off-by: Rashid K <rkaleem@nvidia.com>
Signed-off-by: Zhenhuan Chen <chenzhh3671@gmail.com>
Signed-off-by: Po-Wei Wang (Vincent) <poweiw@nvidia.com>
Signed-off-by: Netanel Haber <nhaber@nvidia.com>
Signed-off-by: Lucas Liebenwein <11156568+lucaslie@users.noreply.github.com>
Signed-off-by: Frida Hou <201670829+Fridah-nv@users.noreply.github.com>
Signed-off-by: Clay <ccs96307@gmail.com>
Signed-off-by: Venky <23023424+venkywonka@users.noreply.github.com>
Signed-off-by: Xin He (SW-GPU) <200704525+xinhe-nv@users.noreply.github.com>
Signed-off-by: Superjomn <328693+Superjomn@users.noreply.github.com>
Signed-off-by: zhengd-nv <200704041+zhengd-nv@users.noreply.github.com>
Signed-off-by: Yi Zhang <187001205+yizhang-nv@users.noreply.github.com>
Signed-off-by: Kaiyu Xie <26294424+kaiyux@users.noreply.github.com>
Signed-off-by: Frank Di Natale <3429989+FrankD412@users.noreply.github.com>
Signed-off-by: Balaram Buddharaju <169953907+brb-nv@users.noreply.github.com>
Signed-off-by: Linda-Stadter <57756729+Linda-Stadter@users.noreply.github.com>
Signed-off-by: Shunkang <182541032+Shunkangz@users.noreply.github.com>
Signed-off-by: Yuan Tong <13075180+tongyuantongyu@users.noreply.github.com>
Signed-off-by: Tailing Yuan <yuantailing@gmail.com>
Signed-off-by: Faraz Khoubsirat <58580514+farazkh80@users.noreply.github.com>
Signed-off-by: peaceh <103117813+peaceh-nv@users.noreply.github.com>
Signed-off-by: ixlmar <206748156+ixlmar@users.noreply.github.com>
Signed-off-by: Hui Gao <huig@nvidia.com>
Signed-off-by: ShiXiaowei02 <39303645+Shixiaowei02@users.noreply.github.com>
Signed-off-by: Chuang Zhu <111838961+chuangz0@users.noreply.github.com>
Signed-off-by: Stefan Niebler <82932102+stnie@users.noreply.github.com>
Signed-off-by: jthomson04 <jwillthomson19@gmail.com>
Signed-off-by: Xianjie <5410381+qiaoxj07@users.noreply.github.com>
Signed-off-by: Xianjie Qiao <5410381+qiaoxj07@users.noreply.github.com>
Signed-off-by: Julien Debache <julien.debache@hotmail.com>
Signed-off-by: Yanchao Lu <yanchaol@nvidia.com>
Signed-off-by: Yiteng Niu <6831097+niukuo@users.noreply.github.com>
Signed-off-by: Daniel Stokes <40156487+djns99@users.noreply.github.com>
Signed-off-by: bhsueh <11360707+byshiue@users.noreply.github.com>
Signed-off-by: Bo Li <22713281+bobboli@users.noreply.github.com>
Signed-off-by: Christina Zhang <83400082+ChristinaZ@users.noreply.github.com>
Signed-off-by: xinhe-nv <200704525+xinhe-nv@users.noreply.github.com>
Signed-off-by: Dylan Chen <191843203+DylanChen-NV@users.noreply.github.com>
Signed-off-by: Daniel Campora <961215+dcampora@users.noreply.github.com>
Signed-off-by: David Clark <215764518+davidclark-nv@users.noreply.github.com>
Signed-off-by: yechank <161688079+yechank-nvidia@users.noreply.github.com>
Signed-off-by: Jin Li <59594262+liji-nv@users.noreply.github.com>
Signed-off-by: JieXin Liang <Alcanderian@users.noreply.github.com>
Signed-off-by: Venky Ganesh <23023424+venkywonka@users.noreply.github.com>
Signed-off-by: Enwei Zhu <21126786+syuoni@users.noreply.github.com>
Signed-off-by: Xiwen Yu <13230610+VALLIS-NERIA@users.noreply.github.com>
Signed-off-by: Yegor <75512761+Wokzy@users.noreply.github.com>
Signed-off-by: Yegor Yershov <yegor6741@gmail.com>
Signed-off-by: Yukun He <23156053+hyukn@users.noreply.github.com>
Signed-off-by: raayandhar <rdhar@nvidia.com>
Signed-off-by: Dom Brown <3886319+DomBrown@users.noreply.github.com>
Signed-off-by: Iman Tabrizian <10105175+tabrizian@users.noreply.github.com>
Signed-off-by: xsimmons <xsimmons@nvidia.com>
Signed-off-by: jiahanc <173873397+jiahanc@users.noreply.github.com>
Signed-off-by: Jhao-Ting Chen <jhaotingc@nvidia.com>
Signed-off-by: Wanli Jiang <35160485+Wanli-Jiang@users.noreply.github.com>
Signed-off-by: Erin Ho <14718778+hchings@users.noreply.github.com>
Signed-off-by: Chenfei Zhang <chenfeiz@nvidia.com>
Signed-off-by: Dongxu Yang <78518666+dongxuy04@users.noreply.github.com>
Signed-off-by: Hao Lu <14827759+hlu1@users.noreply.github.com>
Signed-off-by: William Zhang <133824995+2ez4bz@users.noreply.github.com>
Signed-off-by: Ubuntu <ubuntu@ip-10-0-20-146.us-west-2.compute.internal>
Signed-off-by: Hanjun Cho <46752251+gkswns0531@users.noreply.github.com>
Signed-off-by: junq <22017000+QiJune@users.noreply.github.com>
Signed-off-by: Aurelien Chartier <2567591+achartier@users.noreply.github.com>
Signed-off-by: Anthony Chang <27950904+rosenrodt@users.noreply.github.com>
Signed-off-by: CarstyYou <186021327+CarstyYou@users.noreply.github.com>
Signed-off-by: Jinyang Yuan <154768711+jinyangyuan-nvidia@users.noreply.github.com>
Signed-off-by: narutolhy <582909902@qq.com>
Signed-off-by: ZhanruiSunCh <184402041+ZhanruiSunCh@users.noreply.github.com>
Signed-off-by: wili-65535 <wili-65535@users.noreply.github.com>
Signed-off-by: Frank <3429989+FrankD412@users.noreply.github.com>
Signed-off-by: Yilin Zhang <18275976+yilin-void@users.noreply.github.com>
Signed-off-by: William Tambellini <wtambellini@sdl.com>
Co-authored-by: tomeras91 <57313761+tomeras91@users.noreply.github.com>
Co-authored-by: Yiqing Yan <yiqingy@nvidia.com>
Co-authored-by: Emma Qiao <qqiao@nvidia.com>
Co-authored-by: WeiHaocheng <20514172+WeiHaocheng@users.noreply.github.com>
Co-authored-by: Omer Ullman Argov <118735753+omera-nv@users.noreply.github.com>
Co-authored-by: Netanel Haber <58652339+netanel-haber@users.noreply.github.com>
Co-authored-by: Robin Kobus <19427718+Funatiq@users.noreply.github.com>
Co-authored-by: nv-guomingz <137257613+nv-guomingz@users.noreply.github.com>
Co-authored-by: Rashid Kaleem <4079439+arekay@users.noreply.github.com>
Co-authored-by: Zhihan Jiang <68881590+nvzhihanj@users.noreply.github.com>
Co-authored-by: Zhenhuan Chen <chenzhh3671@gmail.com>
Co-authored-by: Po-Wei (Vincent) <poweiw@nvidia.com>
Co-authored-by: Lucas Liebenwein <11156568+lucaslie@users.noreply.github.com>
Co-authored-by: Neta Zmora <nzmora@nvidia.com>
Co-authored-by: Fridah-nv <201670829+Fridah-nv@users.noreply.github.com>
Co-authored-by: Clay <ccs96307@gmail.com>
Co-authored-by: Venky <23023424+venkywonka@users.noreply.github.com>
Co-authored-by: xinhe-nv <200704525+xinhe-nv@users.noreply.github.com>
Co-authored-by: Yan Chunwei <328693+Superjomn@users.noreply.github.com>
Co-authored-by: Zheng Duan <200704041+zhengd-nv@users.noreply.github.com>
Co-authored-by: Yi Zhang <187001205+yizhang-nv@users.noreply.github.com>
Co-authored-by: Kaiyu Xie <26294424+kaiyux@users.noreply.github.com>
Co-authored-by: Frank <3429989+FrankD412@users.noreply.github.com>
Co-authored-by: brb-nv <169953907+brb-nv@users.noreply.github.com>
Co-authored-by: Linda <57756729+Linda-Stadter@users.noreply.github.com>
Co-authored-by: Shunkangz <182541032+Shunkangz@users.noreply.github.com>
Co-authored-by: Yuan Tong <13075180+tongyuantongyu@users.noreply.github.com>
Co-authored-by: Tailing Yuan <yuantailing@gmail.com>
Co-authored-by: Faraz <58580514+farazkh80@users.noreply.github.com>
Co-authored-by: peaceh-nv <103117813+peaceh-nv@users.noreply.github.com>
Co-authored-by: ixlmar <206748156+ixlmar@users.noreply.github.com>
Co-authored-by: HuiGao-NV <huig@nvidia.com>
Co-authored-by: Chuang Zhu <111838961+chuangz0@users.noreply.github.com>
Co-authored-by: ShiXiaowei02 <39303645+Shixiaowei02@users.noreply.github.com>
Co-authored-by: Stefan Niebler <82932102+stnie@users.noreply.github.com>
Co-authored-by: jthomson04 <jwillthomson19@gmail.com>
Co-authored-by: Xianjie Qiao <5410381+qiaoxj07@users.noreply.github.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
Co-authored-by: Julien Debache <jdebache@nvidia.com>
Co-authored-by: Yanchao Lu <yanchaol@nvidia.com>
Co-authored-by: Yiteng Niu <6831097+niukuo@users.noreply.github.com>
Co-authored-by: Daniel Stokes <40156487+djns99@users.noreply.github.com>
Co-authored-by: bhsueh_NV <11360707+byshiue@users.noreply.github.com>
Co-authored-by: Bo Li <22713281+bobboli@users.noreply.github.com>
Co-authored-by: ChristinaZ <83400082+ChristinaZ@users.noreply.github.com>
Co-authored-by: Larry <197874197+LarryXFly@users.noreply.github.com>
Co-authored-by: DylanChen-NV <191843203+DylanChen-NV@users.noreply.github.com>
Co-authored-by: Daniel Cámpora <961215+dcampora@users.noreply.github.com>
Co-authored-by: davidclark-nv <215764518+davidclark-nv@users.noreply.github.com>
Co-authored-by: Nikita Korobov <14355239+nekorobov@users.noreply.github.com>
Co-authored-by: Yechan Kim <161688079+yechank-nvidia@users.noreply.github.com>
Co-authored-by: liji-nv <59594262+liji-nv@users.noreply.github.com>
Co-authored-by: JieXin Liang <Alcanderian@users.noreply.github.com>
Co-authored-by: Enwei Zhu <21126786+syuoni@users.noreply.github.com>
Co-authored-by: xiweny <13230610+VALLIS-NERIA@users.noreply.github.com>
Co-authored-by: Yegor <75512761+Wokzy@users.noreply.github.com>
Co-authored-by: Yukun He <23156053+hyukn@users.noreply.github.com>
Co-authored-by: Raayan Dhar <58057652+raayandhar@users.noreply.github.com>
Co-authored-by: Dom Brown <3886319+DomBrown@users.noreply.github.com>
Co-authored-by: Chang Liu <9713593+chang-l@users.noreply.github.com>
Co-authored-by: Pamela Peng <179191831+pamelap-nvidia@users.noreply.github.com>
Co-authored-by: Iman Tabrizian <10105175+Tabrizian@users.noreply.github.com>
Co-authored-by: xavier-nvidia <xsimmons@nvidia.com>
Co-authored-by: jiahanc <173873397+jiahanc@users.noreply.github.com>
Co-authored-by: Jhao-Ting Chen <jhaotingc@nvidia.com>
Co-authored-by: Wanli Jiang <35160485+Wanli-Jiang@users.noreply.github.com>
Co-authored-by: Erin <14718778+hchings@users.noreply.github.com>
Co-authored-by: chenfeiz0326 <chenfeiz@nvidia.com>
Co-authored-by: dongxuy04 <78518666+dongxuy04@users.noreply.github.com>
Co-authored-by: 2ez4bz <133824995+2ez4bz@users.noreply.github.com>
Co-authored-by: Hanjun Cho <46752251+gkswns0531@users.noreply.github.com>
Co-authored-by: Ubuntu <ubuntu@ip-10-0-20-146.us-west-2.compute.internal>
Co-authored-by: QI JUN <22017000+QiJune@users.noreply.github.com>
Co-authored-by: Aurelien Chartier <2567591+achartier@users.noreply.github.com>
Co-authored-by: Anthony Chang <27950904+rosenrodt@users.noreply.github.com>
Co-authored-by: CarstyYou <186021327+CarstyYou@users.noreply.github.com>
Co-authored-by: Jinyang Yuan <154768711+jinyangyuan-nvidia@users.noreply.github.com>
Co-authored-by: narutolhy <582909902@qq.com>
Co-authored-by: Zhanrui Sun <184402041+ZhanruiSunCh@users.noreply.github.com>
Co-authored-by: wili <98001977+wili-65535@users.noreply.github.com>
Co-authored-by: wili-65535 <wili-65535@users.noreply.github.com>
Co-authored-by: Void <18275976+yilin-void@users.noreply.github.com>
Co-authored-by: William Tambellini <wtambellini@sdl.com>
Committed on 2025-07-23 14:37:20 +08:00
beamSearchKernels Feat: Variable-Beam-Width-Search (VBWS) part4 (#3979) 2025-05-12 22:32:29 +02:00
causalConv1d fix: fix license bug (#5200) 2025-06-13 18:58:15 +08:00
communicationKernels [fix] Performance Optimization for MNNVL TwoShot Kernel (#5934) 2025-07-17 10:49:51 +08:00
contextFusedMultiHeadAttention hopper-style context MLA (#5713) 2025-07-23 14:37:20 +08:00
cutlass_kernels fix TMA error with GEMM+AR on TP=2 (#6075) 2025-07-18 10:26:08 +08:00
decoderMaskedMultiheadAttention Add is_fp8_output key to XQA kernel cubin hashing (solves Eagle3-one-engine Hopper fp8 bug) (#5813) 2025-07-09 09:26:27 +08:00
dsv3MinLatencyKernels Feat/ds r1 min latency opt round3, add router gemm, fused a gemm, PDL (#4560) 2025-06-14 17:36:22 +08:00
flashMLA feat: reduce unnecessary kernel generation (#5476) 2025-07-04 14:37:49 +08:00
fusedLayernormKernels feat: reduce unnecessary kernel generation (#5476) 2025-07-04 14:37:49 +08:00
groupRmsNormKernels feat: Add heuristic for GroupRMSNorm kernel selection. (#4047) 2025-05-13 08:52:53 +08:00
internal_cutlass_kernels feat: Add support for per expert activation scaling factors (#5013) 2025-06-28 09:10:35 +12:00
llama4MinLatencyKernels feat: reduce unnecessary kernel generation (#5476) 2025-07-04 14:37:49 +08:00
lora chore: Stabilize ABI boundary for internal kernel library (#3117) 2025-04-11 15:07:50 +08:00
moeLoadBalance feat: Misc Opt for large scale EP (#5374) 2025-06-20 13:11:31 +08:00
selectiveScan fix: fix license bug (#5200) 2025-06-13 18:58:15 +08:00
speculativeDecoding refactor: Remove enforced sorted order of batch slots (#3502) 2025-07-14 17:23:02 +02:00
trtllmGenKernels Feat: Add vectorized loading for finalize kernel in MoE Trtllm backend (#5919) 2025-07-17 12:38:29 +08:00
unfusedAttentionKernels feat: Add Mixture of Experts FP8xMXFP4 support (#4750) 2025-06-09 13:25:04 +08:00
userbuffers feat: reduce unnecessary kernel generation (#5476) 2025-07-04 14:37:49 +08:00
weightOnlyBatchedGemv [NVBUG-5304516/5319741]Qwen2.5VL FP8 support (#5029) 2025-07-09 23:16:42 +08:00
attentionMask.cu Update TensorRT-LLM (#2873) 2025-03-11 21:13:42 +08:00
attentionMask.h Update TensorRT-LLM (#2363) 2024-10-22 20:27:35 +08:00
banBadWords.cu Update TensorRT-LLM (#2873) 2025-03-11 21:13:42 +08:00
banBadWords.h Update TensorRT-LLM (#2008) 2024-07-23 23:05:09 +08:00
banRepeatNgram.cu Update TensorRT-LLM (#2873) 2025-03-11 21:13:42 +08:00
banRepeatNgram.h Update TensorRT-LLM (#1598) 2024-05-14 16:43:41 +08:00
beamSearchKernels.cu Feat: Variable-Beam-Width-Search (VBWS) part4 (#3979) 2025-05-12 22:32:29 +02:00
beamSearchKernels.h Feat: Variable-Beam-Width-Search (VBWS) part4 (#3979) 2025-05-12 22:32:29 +02:00
buildRelativeAttentionBiasKernel.cu Update TensorRT-LLM (#1763) 2024-06-11 16:59:02 +08:00
buildRelativeAttentionBiasKernel.h Update TensorRT-LLM (#1763) 2024-06-11 16:59:02 +08:00
CMakeLists.txt feat: reduce unnecessary kernel generation (#5476) 2025-07-04 14:37:49 +08:00
cumsumLastDim.cu open source 7f370deb0090d885d7518c2b146399ba3933c004 (#2273) 2024-09-30 13:51:19 +02:00
cumsumLastDim.h Update TensorRT-LLM (#1725) 2024-06-04 20:26:32 +08:00
customAllReduceKernels.cu Cherry pick feat/llama4 to main (#4739) 2025-05-30 05:28:40 +08:00
customAllReduceKernels.h [TRTLLM-3927] [feat] Finalize + Allreduce + add + rmsnorm fusion (#4756) 2025-06-10 19:55:16 +08:00
decoderMaskedMultiheadAttention.cu Update TensorRT-LLM (#2502) 2024-11-26 16:51:34 +08:00
decoderMaskedMultiheadAttention.h [https://nvbugspro.nvidia.com/bug/5300080] Fix the bug of setting attention_chunk_size and enable chunked-attention in the generation-phase by default (#4693) 2025-06-03 19:02:57 -04:00
decoderMaskedMultiheadAttentionUtils.h Update TensorRT-LLM (#2363) 2024-10-22 20:27:35 +08:00
decodingCommon.cu Update TensorRT-LLM (#2755) 2025-02-11 03:01:00 +00:00
decodingKernels.cu Feat: Variable-Beam-Width-Search (VBWS) part4 (#3979) 2025-05-12 22:32:29 +02:00
decodingKernels.h refactor: Improve decoder finalize function (#3077) 2025-03-28 14:33:59 +08:00
delayStream.cu Update (#2978) 2025-03-23 16:39:35 +08:00
delayStream.h Update (#2978) 2025-03-23 16:39:35 +08:00
doraScaling.cu Update TensorRT-LLM (#2755) 2025-02-11 03:01:00 +00:00
doraScaling.h Update TensorRT-LLM (#2755) 2025-02-11 03:01:00 +00:00
fmhaDispatcher.cpp chore: add more log in FmhaDispatcher (#6170) 2025-07-18 16:53:02 +08:00
fmhaDispatcher.h Update TensorRT-LLM (#2755) 2025-02-11 03:01:00 +00:00
fusedQKNormRopeKernel.cu perf: Add fused q_norm/k_norm/RoPE for Qwen3. (#4482) 2025-05-23 15:31:04 +08:00
fusedQKNormRopeKernel.h perf: Add fused q_norm/k_norm/RoPE for Qwen3. (#4482) 2025-05-23 15:31:04 +08:00
gptKernels.cu Update TensorRT-LLM (#2755) 2025-02-11 03:01:00 +00:00
gptKernels.h feat: add CGA reduction fmha kernels on Blackwell. (#3763) 2025-04-29 10:43:54 +08:00
groupGemm.cu Update TensorRT-LLM (#2873) 2025-03-11 21:13:42 +08:00
groupGemm.h Update TensorRT-LLM (#2562) 2024-12-11 00:31:05 -08:00
kvCachePartialCopy.cu [fix] Fix illegal mem access and possible accuracy lose. Cherry-pick … (#5017) 2025-06-09 17:50:57 +08:00
kvCacheUtils.h chore: Improve documentation of Kv_block_array (#5765) 2025-07-05 22:25:27 +02:00
layernormKernels.cu feat: Add support for fp8 rowwise quantization (#4876) 2025-06-14 06:37:48 -07:00
layernormKernels.h feat: Add support for fp8 rowwise quantization (#4876) 2025-06-14 06:37:48 -07:00
logitsBitmask.cu bitmask v3 (#3009) 2025-03-26 15:21:29 +08:00
logitsBitmask.h Update TensorRT-LLM (#2532) 2024-12-04 21:16:56 +08:00
lookupKernels.cu Update TensorRT-LLM (#1639) 2024-05-21 17:51:02 +08:00
lookupKernels.h Update TensorRT-LLM (#1639) 2024-05-21 17:51:02 +08:00
lruKernel.cu Update TensorRT-LLM (#1688) 2024-05-28 20:07:49 +08:00
lruKernel.h Update TensorRT-LLM (#1688) 2024-05-28 20:07:49 +08:00
mambaConv1dKernels.cu feat: Add FP8 support for SM 120 (#3248) 2025-04-14 16:05:41 -07:00
mambaConv1dKernels.h Update TensorRT-LLM (#1954) 2024-07-16 15:30:25 +08:00
mlaChunkedPrefill.cu [TRTLLM-3602][feat] support nvfp4 model and fp8 kv cache for MLA chunked prefill (Blackwell) (#5475) 2025-06-26 22:18:08 +08:00
mlaChunkedPrefill.cuh [TRTLLM-3602][feat] support nvfp4 model and fp8 kv cache for MLA chunked prefill (Blackwell) (#5475) 2025-06-26 22:18:08 +08:00
mlaKernels.cu [TRTLLM-3602][feat] support nvfp4 model and fp8 kv cache for MLA chunked prefill (Blackwell) (#5475) 2025-06-26 22:18:08 +08:00
mlaKernels.h feat: chunked prefill for MLA (Blackwell) (#4651) 2025-06-26 09:01:00 +08:00
moeCommKernels.cu [NvBug 5378370] fix: Fix alltoall for llama4 (apply_router_weight_on_input=True) (#5902) 2025-07-12 15:50:31 +09:00
moeCommKernels.h feat: Add MNNVL MoE A2A support (#3504) 2025-04-25 17:29:08 +08:00
moePrepareKernels.cu feat: moe prepare support topk % 4 != 0 (#5742) 2025-07-22 10:42:46 +08:00
moePrepareKernels.h feat: moe prepare support topk % 4 != 0 (#5742) 2025-07-22 10:42:46 +08:00
multiHeadAttentionCommon.h [TRTLLM-5366][feat]Add support for sm121 (#5524) 2025-07-08 14:27:00 -07:00
noAuxTcKernels.cu Update (#2978) 2025-03-23 16:39:35 +08:00
noAuxTcKernels.h Update TensorRT-LLM (#2873) 2025-03-11 21:13:42 +08:00
penaltyKernels.cu Update TensorRT-LLM (#2849) 2025-03-04 18:44:00 +08:00
penaltyKernels.h Update TensorRT-LLM (#2502) 2024-11-26 16:51:34 +08:00
penaltyTypes.h Update TensorRT-LLM (#1554) 2024-05-07 23:34:28 +08:00
preQuantScaleKernel.cu chore: Mass integration of release/0.20. (#4871) 2025-06-04 14:12:27 +08:00
preQuantScaleKernel.h chore: Mass integration of release/0.20. (#4871) 2025-06-04 14:12:27 +08:00
qserveGemm.h Update TensorRT-LLM (#2436) 2024-11-12 15:27:49 +08:00
qserveGemmPerChannel.cu Update TensorRT-LLM (#2532) 2024-12-04 21:16:56 +08:00
qserveGemmPerGroup.cu Update TensorRT-LLM (#2502) 2024-11-26 16:51:34 +08:00
quantization.cu perf: Optimize swizzle_sf, unswizzle_sf, reswizzle_sf (#5318) 2025-06-26 14:03:56 +08:00
quantization.cuh feat: Add support for MXFP8xMXFP4 in pytorch (#5535) 2025-07-06 15:32:06 -07:00
quantization.h perf: Optimize swizzle_sf, unswizzle_sf, reswizzle_sf (#5318) 2025-06-26 14:03:56 +08:00
recoverFromRingAtten.cu Support RingAttention in the BertAttention plugin and the DiT model (#3661) 2025-05-09 08:06:54 +08:00
recoverFromRingAtten.h Support RingAttention in the BertAttention plugin and the DiT model (#3661) 2025-05-09 08:06:54 +08:00
renormMoeRoutingKernels.cu feat: reduce unnecessary kernel generation (#5476) 2025-07-04 14:37:49 +08:00
renormMoeRoutingKernels.h Add customized renormalized moe routing kernel for moe cutlass backend (#4955) 2025-06-09 17:38:50 +08:00
rmsnormKernels.cu Update TensorRT-LLM (#2436) 2024-11-12 15:27:49 +08:00
rmsnormKernels.h Update TensorRT-LLM (#2436) 2024-11-12 15:27:49 +08:00
sageAttentionKernels.cu Update TensorRT-LLM (#2849) 2025-03-04 18:44:00 +08:00
sageAttentionKernels.h Update TensorRT-LLM (#2849) 2025-03-04 18:44:00 +08:00
samplingAirTopPKernels.cu Update TensorRT-LLM (#2783) 2025-02-13 18:40:22 +08:00
samplingTopKKernels.cu Update TensorRT-LLM (#2849) 2025-03-04 18:44:00 +08:00
samplingTopKKernels.h Update TensorRT-LLM (#2755) 2025-02-11 03:01:00 +00:00
samplingTopPKernels.cu chore: remove usernames from comments (#3291) 2025-04-05 13:44:28 +08:00
samplingTopPKernels.h Update TensorRT-LLM (#2755) 2025-02-11 03:01:00 +00:00
splitkGroupGemm.cu Update TensorRT-LLM (#2873) 2025-03-11 21:13:42 +08:00
splitkGroupGemm.h Update TensorRT-LLM (#2792) 2025-02-18 21:27:39 +08:00
stopCriteriaKernels.cu Update TensorRT-LLM (#2873) 2025-03-11 21:13:42 +08:00
stopCriteriaKernels.h open source 4dbf696ae9b74a26829d120b67ab8443d70c8e58 (#2297) 2024-10-08 12:19:19 +02:00
topkLastDim.cu [nvbugs/5354884][fix] Update beam search workspace estimation to new upper bound (#5926) 2025-07-19 01:54:51 +08:00
topkLastDim.h Update TensorRT-LLM (#2436) 2024-11-12 15:27:49 +08:00
unfusedAttentionKernels.cu fix: fix for cp > kvHeadNum (#3002) 2025-03-26 12:39:02 +08:00
unfusedAttentionKernels.h fix: fix for cp > kvHeadNum (#3002) 2025-03-26 12:39:02 +08:00
xqaDispatcher.cpp [feat] Support XQA-based MLA on SM120 (#4858) 2025-06-06 22:32:49 +08:00
xqaDispatcher.h [feat] Support XQA-based MLA on SM120 (#4858) 2025-06-06 22:32:49 +08:00