
Integration for OpenAI Triton

The typical approach to integrating a custom kernel into TensorRT LLM is to create a TensorRT plugin. Specifically for OpenAI Triton kernels, there are two methods:

  1. Create the TensorRT plugin manually; refer to the manual plugin example for details.
  2. Generate the TensorRT plugin automatically; refer to the automatic plugin example for details.
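For context, the kernels being wrapped by either method are ordinary OpenAI Triton kernels. Below is a minimal sketch of such a kernel (the standard element-wise vector-add from the Triton tutorials) together with a host-side launcher; it is illustrative only and not taken from the examples in this directory. The plugin wrapping itself, whether manual or auto-generated, happens around a launcher like `add` below.

```python
# Minimal Triton kernel of the kind one would integrate as a TensorRT plugin.
# Requires the `triton` and `torch` packages and a CUDA-capable GPU.
import torch
import triton
import triton.language as tl


@triton.jit
def add_kernel(x_ptr, y_ptr, out_ptr, n_elements, BLOCK_SIZE: tl.constexpr):
    # Each program instance handles one BLOCK_SIZE-wide slice of the input.
    pid = tl.program_id(axis=0)
    offsets = pid * BLOCK_SIZE + tl.arange(0, BLOCK_SIZE)
    mask = offsets < n_elements  # guard against out-of-bounds accesses
    x = tl.load(x_ptr + offsets, mask=mask)
    y = tl.load(y_ptr + offsets, mask=mask)
    tl.store(out_ptr + offsets, x + y, mask=mask)


def add(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    # Host-side launcher: this is roughly the call a TensorRT plugin's
    # enqueue() would make on the plugin's CUDA stream.
    out = torch.empty_like(x)
    n = out.numel()
    grid = lambda meta: (triton.cdiv(n, meta["BLOCK_SIZE"]),)
    add_kernel[grid](x, y, out, n, BLOCK_SIZE=1024)
    return out
```

Either integration path ultimately invokes the compiled kernel this way: the manual route writes the plugin's `enqueue` by hand around the generated launcher, while the automatic route generates that glue code from the kernel's signature.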