TensorRT-LLM/cpp/tensorrt_llm
Latest commit: 0f084d9566 by danielafrimi
added loraOp into lora layer + test for mlp and comparison to lora plugin (#3455)
loraOp integration into torch modules

Signed-off-by: Ubuntu <dafrimi@nvidia.com>
2025-04-17 12:48:27 +08:00
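The commit above wires a LoRA op into the torch MLP layer and tests it against the LoRA plugin. As background, LoRA augments a frozen linear layer with a scaled low-rank update, y = x·Wᵀ + (α/r)·(x·Aᵀ)·Bᵀ. A minimal NumPy sketch of that formula follows; the function name, argument shapes, and scaling convention are illustrative assumptions, not TensorRT-LLM's actual API:

```python
import numpy as np

def lora_linear(x, W, A, B, alpha=16.0):
    """Base linear layer plus a LoRA low-rank update (illustrative, not
    TensorRT-LLM's loraOp).

    y = x @ W.T + (alpha / r) * (x @ A.T) @ B.T
    with A of shape (r, in_features) and B of shape (out_features, r).
    """
    r = A.shape[0]                       # LoRA rank
    base = x @ W.T                       # frozen base projection
    update = (x @ A.T) @ B.T * (alpha / r)  # low-rank adapter path
    return base + update

# With B zero-initialized (the standard LoRA init), the adapter path
# contributes nothing and the output equals the base layer's output.
rng = np.random.default_rng(0)
x = rng.standard_normal((2, 4))
W = rng.standard_normal((3, 4))
A = rng.standard_normal((2, 4))  # rank r = 2
B = np.zeros((3, 2))
assert np.allclose(lora_linear(x, W, A, B), x @ W.T)
```

A test along the lines of the one added in #3455 would run the same input through both the op-based and plugin-based paths and compare outputs numerically.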
Name                                           Last commit message                                                                 Date
batch_manager                                  fix: disable KV cache reuse if using attention sink (#3021)                         2025-04-16 03:07:32 +08:00
common                                         feat: Add FP8 support for SM 120 (#3248)                                            2025-04-14 16:05:41 -07:00
cutlass_extensions/include/cutlass_extensions  feat: Update cutlass (#2981)                                                        2025-03-26 22:36:27 +08:00
executor                                       chore: Clean up cpp runtime (#3537)                                                 2025-04-15 16:06:14 +08:00
executor_worker                                Update TensorRT-LLM (#2792)                                                         2025-02-18 21:27:39 +08:00
kernels                                        add support for smaller hidden_dim (#3609)                                          2025-04-17 12:00:32 +08:00
layers                                         fix: Eagle decoding (#3456)                                                         2025-04-11 22:06:38 +08:00
plugins                                        feat: Add FP8 support for SM 120 (#3248)                                            2025-04-14 16:05:41 -07:00
pybind                                         Feat/ Integrate peftCacheManager in PyExecutor creation (#3372)                     2025-04-15 15:14:43 +08:00
runtime                                        chore: Clean up cpp runtime (#3537)                                                 2025-04-15 16:06:14 +08:00
thop                                           added loraOp into lora layer + test for mlp and comparison to lora plugin (#3455)  2025-04-17 12:48:27 +08:00
CMakeLists.txt                                 feat: Adding FP8 BMM from Codegen (#3541)                                           2025-04-16 10:37:15 +02:00