TensorRT-LLM/cpp/tensorrt_llm
zhhuang-nv 94e6167879
optimize cudaMemGetInfo for TllmGenFmhaRunner (#3907)
Signed-off-by: Zhen Huang <145532724+zhhuang-nv@users.noreply.github.com>
2025-04-29 14:17:07 +08:00
Name | Last commit | Last commit date
batch_manager | cacheTransceiver buffer manager (#3798) | 2025-04-27 11:48:15 +08:00
common | optimize cudaMemGetInfo for TllmGenFmhaRunner (#3907) | 2025-04-29 14:17:07 +08:00
cutlass_extensions/include/cutlass_extensions | feat: Update cutlass (#2981) | 2025-03-26 22:36:27 +08:00
executor | cacheTransceiver buffer manager (#3798) | 2025-04-27 11:48:15 +08:00
executor_worker | Update TensorRT-LLM (#2792) | 2025-02-18 21:27:39 +08:00
kernels | optimize cudaMemGetInfo for TllmGenFmhaRunner (#3907) | 2025-04-29 14:17:07 +08:00
layers | fix: Eagle decoding (#3456) | 2025-04-11 22:06:38 +08:00
plugins | refactor: Clean up CMakeLists.txt (#3479) | 2025-04-18 14:39:29 +08:00
pybind | cacheTransceiver buffer manager (#3798) | 2025-04-27 11:48:15 +08:00
runtime | fix: 5197419 and removed unused runtime kernels (#3631) | 2025-04-23 18:04:50 +02:00
thop | fix: Fix FMHA-based MLA in the generation phase and add MLA unit test (#3863) | 2025-04-29 09:09:43 +08:00
CMakeLists.txt | Fix double link to fp8_blockscale_gemm_src (#3707) | 2025-04-23 10:16:07 +08:00