TensorRT-LLMs/cpp/tensorrt_llm/kernels/decoderMaskedMultiheadAttention
Latest commit 59f41c067d by 石晓伟 (2023-12-20 16:38:28 +08:00): Update TensorRT-LLM (#708)
* Update TensorRT-LLM
* update
* Bump version to 0.7.0
Name                                          Last commit                 Last modified
cubin                                         Update TensorRT-LLM (#708)  2023-12-20 16:38:28 +08:00
copy_cu.py                                    Initial commit              2023-09-20 00:29:41 -07:00
decoderMaskedMultiheadAttention32_bf16.cu     Initial commit              2023-09-20 00:29:41 -07:00
decoderMaskedMultiheadAttention32_float.cu    Initial commit              2023-09-20 00:29:41 -07:00
decoderMaskedMultiheadAttention32_half.cu     Initial commit              2023-09-20 00:29:41 -07:00
decoderMaskedMultiheadAttention48_bf16.cu     Update TensorRT-LLM (#506)  2023-11-30 16:46:22 +08:00
decoderMaskedMultiheadAttention48_float.cu    Update TensorRT-LLM (#506)  2023-11-30 16:46:22 +08:00
decoderMaskedMultiheadAttention48_half.cu     Update TensorRT-LLM (#506)  2023-11-30 16:46:22 +08:00
decoderMaskedMultiheadAttention64_bf16.cu     Initial commit              2023-09-20 00:29:41 -07:00
decoderMaskedMultiheadAttention64_float.cu    Initial commit              2023-09-20 00:29:41 -07:00
decoderMaskedMultiheadAttention64_half.cu     Initial commit              2023-09-20 00:29:41 -07:00
decoderMaskedMultiheadAttention80_bf16.cu     Update TensorRT-LLM (#506)  2023-11-30 16:46:22 +08:00
decoderMaskedMultiheadAttention80_float.cu    Update TensorRT-LLM (#506)  2023-11-30 16:46:22 +08:00
decoderMaskedMultiheadAttention80_half.cu     Update TensorRT-LLM (#506)  2023-11-30 16:46:22 +08:00
decoderMaskedMultiheadAttention96_bf16.cu     Update TensorRT-LLM (#506)  2023-11-30 16:46:22 +08:00
decoderMaskedMultiheadAttention96_float.cu    Update TensorRT-LLM (#506)  2023-11-30 16:46:22 +08:00
decoderMaskedMultiheadAttention96_half.cu     Update TensorRT-LLM (#506)  2023-11-30 16:46:22 +08:00
decoderMaskedMultiheadAttention112_bf16.cu    Update TensorRT-LLM (#506)  2023-11-30 16:46:22 +08:00
decoderMaskedMultiheadAttention112_float.cu   Update TensorRT-LLM (#506)  2023-11-30 16:46:22 +08:00
decoderMaskedMultiheadAttention112_half.cu    Update TensorRT-LLM (#506)  2023-11-30 16:46:22 +08:00
decoderMaskedMultiheadAttention128_bf16.cu    Initial commit              2023-09-20 00:29:41 -07:00
decoderMaskedMultiheadAttention128_float.cu   Initial commit              2023-09-20 00:29:41 -07:00
decoderMaskedMultiheadAttention128_half.cu    Initial commit              2023-09-20 00:29:41 -07:00
decoderMaskedMultiheadAttention144_bf16.cu    Update TensorRT-LLM (#506)  2023-11-30 16:46:22 +08:00
decoderMaskedMultiheadAttention144_float.cu   Update TensorRT-LLM (#506)  2023-11-30 16:46:22 +08:00
decoderMaskedMultiheadAttention144_half.cu    Update TensorRT-LLM (#506)  2023-11-30 16:46:22 +08:00
decoderMaskedMultiheadAttention160_bf16.cu    Update TensorRT-LLM (#506)  2023-11-30 16:46:22 +08:00
decoderMaskedMultiheadAttention160_float.cu   Update TensorRT-LLM (#506)  2023-11-30 16:46:22 +08:00
decoderMaskedMultiheadAttention160_half.cu    Update TensorRT-LLM (#506)  2023-11-30 16:46:22 +08:00
decoderMaskedMultiheadAttention192_bf16.cu    Update TensorRT-LLM (#506)  2023-11-30 16:46:22 +08:00
decoderMaskedMultiheadAttention192_float.cu   Update TensorRT-LLM (#506)  2023-11-30 16:46:22 +08:00
decoderMaskedMultiheadAttention192_half.cu    Update TensorRT-LLM (#506)  2023-11-30 16:46:22 +08:00
decoderMaskedMultiheadAttention224_bf16.cu    Update TensorRT-LLM (#506)  2023-11-30 16:46:22 +08:00
decoderMaskedMultiheadAttention224_float.cu   Update TensorRT-LLM (#506)  2023-11-30 16:46:22 +08:00
decoderMaskedMultiheadAttention224_half.cu    Update TensorRT-LLM (#506)  2023-11-30 16:46:22 +08:00
decoderMaskedMultiheadAttention256_bf16.cu    Initial commit              2023-09-20 00:29:41 -07:00
decoderMaskedMultiheadAttention256_float.cu   Initial commit              2023-09-20 00:29:41 -07:00
decoderMaskedMultiheadAttention256_half.cu    Initial commit              2023-09-20 00:29:41 -07:00
decoderMaskedMultiheadAttentionLaunch.h       Update TensorRT-LLM (#708)  2023-12-20 16:38:28 +08:00
decoderMaskedMultiheadAttentionTemplate.h     Update TensorRT-LLM (#708)  2023-12-20 16:38:28 +08:00
decoderXQARunner.cpp                          Update TensorRT-LLM (#708)  2023-12-20 16:38:28 +08:00
decoderXQARunner.h                            Update TensorRT-LLM (#708)  2023-12-20 16:38:28 +08:00
mmha_notes.md                                 Initial commit              2023-09-20 00:29:41 -07:00
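
The numbered kernels cover 12 head sizes (32 through 256) in each of three element types (bf16, float, half), so the heavily templated implementation in decoderMaskedMultiheadAttentionTemplate.h is compiled as 36 small translation units, presumably to keep compile times down and let the build parallelize; judging by its name, copy_cu.py stamps these near-identical files out. Below is a minimal, self-contained sketch of that pattern, not the repository's actual code: the kernel, launcher, and constant names are illustrative assumptions, and the attention math itself is elided.

```cpp
// Illustrative sketch of the one-(head size, dtype)-pair-per-.cu pattern.
// All identifiers below are assumptions; the real kernel and launcher live in
// decoderMaskedMultiheadAttentionTemplate.h / decoderMaskedMultiheadAttentionLaunch.h.
#include <cuda_fp16.h>
#include <cuda_runtime.h>

// Role of the Template header: one kernel templated on the element type T and
// the compile-time head size Dh (the real attention computation is elided).
template <typename T, int Dh>
__global__ void maskedMultiheadAttentionKernel(T const* qkv, T* out)
{
    int const idx = blockIdx.x * Dh + threadIdx.x;
    if (threadIdx.x < Dh)
    {
        out[idx] = qkv[idx]; // placeholder for the per-head attention math
    }
}

// Role of the Launch header: a thin launcher that forwards to the templated kernel.
template <typename T, int Dh>
void launchMaskedMultiheadAttention(T const* qkv, T* out, int numHeads, cudaStream_t stream)
{
    maskedMultiheadAttentionKernel<T, Dh><<<numHeads, Dh, 0, stream>>>(qkv, out);
}

// Role of a file such as decoderMaskedMultiheadAttention80_half.cu: pin a single
// (head size, dtype) pair per translation unit, so 12 head sizes x 3 dtypes
// become 36 small, independently compiled units rather than one monolithic file.
constexpr int kSizePerHead = 80;
template void launchMaskedMultiheadAttention<__half, kSizePerHead>(
    __half const*, __half*, int, cudaStream_t);
```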