-### OpenAI Chat Client
+### :title OpenAI Chat Client

from openai import OpenAI
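The client code is truncated by this hunk. For orientation, a minimal sketch of how such a chat client is typically completed is shown below; the endpoint URL, placeholder API key, and model name are illustrative assumptions rather than values taken from this page.

from openai import OpenAI

# Assumed local OpenAI-compatible endpoint (for example, one started with
# trtllm-serve); the URL, key, and model name are placeholders.
client = OpenAI(
    base_url="http://localhost:8000/v1",
    api_key="tensorrt_llm",  # a local server typically does not validate the key
)

response = client.chat.completions.create(
    model="TinyLlama/TinyLlama-1.1B-Chat-v1.0",
    messages=[{
        "role": "system",
        "content": "You are a helpful assistant."
    }, {
        "role": "user",
        "content": "Where is New York?"
    }],
    max_tokens=20,
)
print(response)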
@@ -585,7 +568,7 @@
Completions API
You can query the Completions API with any HTTP client; a typical example uses the OpenAI Python client:
-### OpenAI Completion Client
+### :title OpenAI Completion Client

from openai import OpenAI
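As above, the snippet is truncated here. A minimal completions-client sketch under the same assumptions (local endpoint, placeholder key, illustrative model name) might look like this:

from openai import OpenAI

# Assumed local OpenAI-compatible endpoint; all values below are placeholders.
client = OpenAI(
    base_url="http://localhost:8000/v1",
    api_key="tensorrt_llm",
)

response = client.completions.create(
    model="TinyLlama/TinyLlama-1.1B-Chat-v1.0",
    prompt="Where is New York?",
    max_tokens=20,
)
print(response)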
@@ -1221,9 +1204,9 @@ However, for the PyTorch backend, specified with the
diff --git a/latest/dev-on-cloud/build-image-to-dockerhub.html b/latest/dev-on-cloud/build-image-to-dockerhub.html
index f11a993590..e650565215 100644
--- a/latest/dev-on-cloud/build-image-to-dockerhub.html
+++ b/latest/dev-on-cloud/build-image-to-dockerhub.html
@@ -686,9 +669,9 @@ docker push <your_dockerhub_use
diff --git a/latest/dev-on-cloud/dev-on-runpod.html b/latest/dev-on-cloud/dev-on-runpod.html
index 475221a3d8..f714d57bc4 100644
--- a/latest/dev-on-cloud/dev-on-runpod.html
+++ b/latest/dev-on-cloud/dev-on-runpod.html
diff --git a/latest/examples/curl_chat_client.html b/latest/examples/curl_chat_client.html
index 340f0b1392..3785282c34 100644
--- a/latest/examples/curl_chat_client.html
+++ b/latest/examples/curl_chat_client.html
diff --git a/latest/examples/curl_chat_client_for_multimodal.html b/latest/examples/curl_chat_client_for_multimodal.html
index 6ec3d27b13..5a8269c49a 100644
--- a/latest/examples/curl_chat_client_for_multimodal.html
+++ b/latest/examples/curl_chat_client_for_multimodal.html
diff --git a/latest/examples/curl_completion_client.html b/latest/examples/curl_completion_client.html
index dadd6f0951..4fe873b6bc 100644
--- a/latest/examples/curl_completion_client.html
+++ b/latest/examples/curl_completion_client.html
diff --git a/latest/examples/customization.html b/latest/examples/customization.html
index 1b33111902..1b6365d423 100644
--- a/latest/examples/customization.html
+++ b/latest/examples/customization.html
diff --git a/latest/examples/deepseek_r1_reasoning_parser.html b/latest/examples/deepseek_r1_reasoning_parser.html
index f144a34c36..986259f7dd 100644
--- a/latest/examples/deepseek_r1_reasoning_parser.html
+++ b/latest/examples/deepseek_r1_reasoning_parser.html
diff --git a/latest/examples/genai_perf_client.html b/latest/examples/genai_perf_client.html
index 68af652eaa..5e2cdc22cb 100644
--- a/latest/examples/genai_perf_client.html
+++ b/latest/examples/genai_perf_client.html
diff --git a/latest/examples/genai_perf_client_for_multimodal.html b/latest/examples/genai_perf_client_for_multimodal.html
index 11dd004a1b..a0553608a1 100644
--- a/latest/examples/genai_perf_client_for_multimodal.html
+++ b/latest/examples/genai_perf_client_for_multimodal.html
diff --git a/latest/examples/index.html b/latest/examples/index.html
index 509d9818c0..385194e529 100644
--- a/latest/examples/index.html
+++ b/latest/examples/index.html
The LLM API can be used for both offline and online usage. See more examples of the LLM API here:
For more details on how to fully utilize this API, check out:
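As a quick orientation, a minimal offline-usage sketch of the LLM API, mirroring the full examples collected on this page, looks like the following; the model name is illustrative:

from tensorrt_llm import LLM, SamplingParams

# Load a model (HF name or local path) and run offline generation.
llm = LLM(model="TinyLlama/TinyLlama-1.1B-Chat-v1.0")
sampling_params = SamplingParams(temperature=0.8, top_p=0.95)

for output in llm.generate(["Hello, my name is"], sampling_params):
    print(f"Prompt: {output.prompt!r}, Generated text: {output.outputs[0].text!r}")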
next
-Generate Text Using Medusa Decoding
+LLM Common Customizations
diff --git a/latest/examples/llm_api_examples.html b/latest/examples/llm_api_examples.html
index 155fcef3b4..d7f0a3f8aa 100644
--- a/latest/examples/llm_api_examples.html
+++ b/latest/examples/llm_api_examples.html
@@ -554,11 +549,11 @@
next
-Online Serving Examples
+Generate text
diff --git a/latest/examples/llm_eagle2_decoding.html b/latest/examples/llm_eagle2_decoding.html
deleted file mode 100644
index 388e35f91a..0000000000
--- a/latest/examples/llm_eagle2_decoding.html
+++ /dev/null
@@ -1,718 +0,0 @@
Generate Text Using Eagle2 Decoding — TensorRT-LLM
Generate Text Using Eagle2 Decoding
Source: NVIDIA/TensorRT-LLM.
### Generate Text Using Eagle2 Decoding

from tensorrt_llm._tensorrt_engine import LLM
from tensorrt_llm.llmapi import (EagleDecodingConfig, KvCacheConfig,
                                 SamplingParams)


def main():
    # Sample prompts.
    prompts = [
        "Hello, my name is",
        "The president of the United States is",
        "The capital of France is",
        "The future of AI is",
    ]
    # The end user can customize the sampling configuration with the SamplingParams class
    sampling_params = SamplingParams(temperature=0.8, top_p=0.95)

    # The end user can customize the KV cache configuration with the KvCacheConfig class
    kv_cache_config = KvCacheConfig(enable_block_reuse=True)

    llm_kwargs = {}

    model = "lmsys/vicuna-7b-v1.3"

    # The end user can customize the Eagle decoding configuration by specifying
    # speculative_model, max_draft_len, num_eagle_layers, max_non_leaves_per_layer,
    # eagle_choices, greedy_sampling, posterior_threshold, use_dynamic_tree and
    # dynamic_tree_max_topK with the EagleDecodingConfig class
    speculative_config = EagleDecodingConfig(
        speculative_model="yuhuili/EAGLE-Vicuna-7B-v1.3",
        max_draft_len=63,
        num_eagle_layers=4,
        max_non_leaves_per_layer=10,
        use_dynamic_tree=True,
        dynamic_tree_max_topK=10)

    llm = LLM(model=model,
              kv_cache_config=kv_cache_config,
              speculative_config=speculative_config,
              max_batch_size=1,
              max_seq_len=1024,
              **llm_kwargs)

    outputs = llm.generate(prompts, sampling_params)

    # Print the outputs.
    for output in outputs:
        prompt = output.prompt
        generated_text = output.outputs[0].text
        print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")


if __name__ == '__main__':
    main()
diff --git a/latest/examples/llm_eagle_decoding.html b/latest/examples/llm_eagle_decoding.html
deleted file mode 100644
index 18b1e4b85e..0000000000
--- a/latest/examples/llm_eagle_decoding.html
+++ /dev/null
@@ -1,723 +0,0 @@
Generate Text Using Eagle Decoding — TensorRT-LLM
Generate Text Using Eagle Decoding
Source: NVIDIA/TensorRT-LLM.
### Generate Text Using Eagle Decoding

from tensorrt_llm import SamplingParams
from tensorrt_llm._tensorrt_engine import LLM
from tensorrt_llm.llmapi import EagleDecodingConfig, KvCacheConfig


def main():
    # Sample prompts.
    prompts = [
        "Hello, my name is",
        "The president of the United States is",
        "The capital of France is",
        "The future of AI is",
    ]
    # The end user can customize the sampling configuration with the SamplingParams class
    sampling_params = SamplingParams(temperature=0.8, top_p=0.95)

    # The end user can customize the KV cache configuration with the KvCacheConfig class
    kv_cache_config = KvCacheConfig(enable_block_reuse=True)

    llm_kwargs = {}

    model = "lmsys/vicuna-7b-v1.3"

    # The end user can customize the Eagle decoding configuration by specifying
    # speculative_model, max_draft_len, num_eagle_layers, max_non_leaves_per_layer,
    # eagle_choices, greedy_sampling, posterior_threshold, use_dynamic_tree and
    # dynamic_tree_max_topK with the EagleDecodingConfig class
    speculative_config = EagleDecodingConfig(
        speculative_model="yuhuili/EAGLE-Vicuna-7B-v1.3",
        max_draft_len=63,
        num_eagle_layers=4,
        max_non_leaves_per_layer=10,
        eagle_choices=[[0], [0, 0], [1], [0, 1], [2], [0, 0, 0], [1, 0], [0, 2], [3], [0, 3], [4], [0, 4], [2, 0],
                       [0, 5], [0, 0, 1], [5], [0, 6], [6], [0, 7], [0, 1, 0], [1, 1], [7], [0, 8], [0, 0, 2], [3, 0],
                       [0, 9], [8], [9], [1, 0, 0], [0, 2, 0], [1, 2], [0, 0, 3], [4, 0], [2, 1], [0, 0, 4], [0, 0, 5],
                       [0, 0, 0, 0], [0, 1, 1], [0, 0, 6], [0, 3, 0], [5, 0], [1, 3], [0, 0, 7], [0, 0, 8], [0, 0, 9],
                       [6, 0], [0, 4, 0], [1, 4], [7, 0], [0, 1, 2], [2, 0, 0], [3, 1], [2, 2], [8, 0],
                       [0, 5, 0], [1, 5], [1, 0, 1], [0, 2, 1], [9, 0], [0, 6, 0], [0, 0, 0, 1], [1, 6], [0, 7, 0]]
    )

    llm = LLM(model=model,
              kv_cache_config=kv_cache_config,
              speculative_config=speculative_config,
              max_batch_size=1,
              max_seq_len=1024,
              **llm_kwargs)

    outputs = llm.generate(prompts, sampling_params)

    # Print the outputs.
    for output in outputs:
        prompt = output.prompt
        generated_text = output.outputs[0].text
        print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")


if __name__ == '__main__':
    main()
diff --git a/latest/examples/llm_guided_decoding.html b/latest/examples/llm_guided_decoding.html
index 53a0b7c609..0596d36160 100644
--- a/latest/examples/llm_guided_decoding.html
+++ b/latest/examples/llm_guided_decoding.html
@@ -513,52 +496,53 @@
Generate text with guided decoding
Source: NVIDIA/TensorRT-LLM.
-### Generate text with guided decoding
-from tensorrt_llm import SamplingParams
-from tensorrt_llm._tensorrt_engine import LLM
+from tensorrt_llm import LLM, SamplingParams
 from tensorrt_llm.llmapi import GuidedDecodingParams


 def main():

     # Specify the guided decoding backend; xgrammar is supported currently.
-    llm = LLM(model="TinyLlama/TinyLlama-1.1B-Chat-v1.0",
-              guided_decoding_backend='xgrammar')
+    llm = LLM(
+        model="TinyLlama/TinyLlama-1.1B-Chat-v1.0",
+        guided_decoding_backend='xgrammar',
+        disable_overlap_scheduler=True  # Not supported by xgrammar mode
+    )

     # An example from json-mode-eval
     schema = '{"title": "WirelessAccessPoint", "type": "object", "properties": {"ssid": {"title": "SSID", "type": "string"}, "securityProtocol": {"title": "SecurityProtocol", "type": "string"}, "bandwidth": {"title": "Bandwidth", "type": "string"}}, "required": ["ssid", "securityProtocol", "bandwidth"]}'

     prompt = [{
         'role':
         'system',
         'content':
         "You are a helpful assistant that answers in JSON. Here's the json schema you must adhere to:\n<schema>\n{'title': 'WirelessAccessPoint', 'type': 'object', 'properties': {'ssid': {'title': 'SSID', 'type': 'string'}, 'securityProtocol': {'title': 'SecurityProtocol', 'type': 'string'}, 'bandwidth': {'title': 'Bandwidth', 'type': 'string'}}, 'required': ['ssid', 'securityProtocol', 'bandwidth']}\n</schema>\n"
     }, {
         'role':
         'user',
         'content':
         "I'm currently configuring a wireless access point for our office network and I need to generate a JSON object that accurately represents its settings. The access point's SSID should be 'OfficeNetSecure', it uses WPA2-Enterprise as its security protocol, and it's capable of a bandwidth of up to 1300 Mbps on the 5 GHz band. This JSON object will be used to document our network configurations and to automate the setup process for additional access points in the future. Please provide a JSON object that includes these details."
     }]
     prompt = llm.tokenizer.apply_chat_template(prompt, tokenize=False)
     print(f"Prompt: {prompt!r}")

     output = llm.generate(prompt, sampling_params=SamplingParams(max_tokens=50))
     print(f"Generated text (unguided): {output.outputs[0].text!r}")

     output = llm.generate(
         prompt,
         sampling_params=SamplingParams(
             max_tokens=50, guided_decoding=GuidedDecodingParams(json=schema)))
     print(f"Generated text (guided): {output.outputs[0].text!r}")

     # Got output like
     # Prompt: "<|system|>\nYou are a helpful assistant that answers in JSON. Here's the json schema you must adhere to:\n<schema>\n{'title': 'WirelessAccessPoint', 'type': 'object', 'properties': {'ssid': {'title': 'SSID', 'type': 'string'}, 'securityProtocol': {'title': 'SecurityProtocol', 'type': 'string'}, 'bandwidth': {'title': 'Bandwidth', 'type': 'string'}}, 'required': ['ssid', 'securityProtocol', 'bandwidth']}\n</schema>\n</s>\n<|user|>\nI'm currently configuring a wireless access point for our office network and I need to generate a JSON object that accurately represents its settings. The access point's SSID should be 'OfficeNetSecure', it uses WPA2-Enterprise as its security protocol, and it's capable of a bandwidth of up to 1300 Mbps on the 5 GHz band. This JSON object will be used to document our network configurations and to automate the setup process for additional access points in the future. Please provide a JSON object that includes these details.</s>\n"
     # Generated text (unguided): '<|assistant|>\nHere\'s a JSON object that accurately represents the settings of a wireless access point for our office network:\n\n```json\n{\n "title": "WirelessAccessPoint",\n "'
     # Generated text (guided): '{"ssid": "OfficeNetSecure", "securityProtocol": "WPA2-Enterprise", "bandwidth": "1300 Mbps"}'


 if __name__ == '__main__':
     main()
@@ -574,20 +558,20 @@
previous
-Generate Text in Streaming
+Distributed LLM Generation
next
-Generate text
+Control generated text using logits processor
diff --git a/latest/examples/llm_inference.html b/latest/examples/llm_inference.html
index 06ee231dcf..1431ad5b5d 100644
--- a/latest/examples/llm_inference.html
+++ b/latest/examples/llm_inference.html
@@ -513,47 +496,40 @@
Generate text
Source: NVIDIA/TensorRT-LLM.
-### Generate text
-import tempfile
-
-from tensorrt_llm import SamplingParams
-from tensorrt_llm._tensorrt_engine import LLM
+from tensorrt_llm import LLM, SamplingParams


 def main():

     # Model could accept HF model name, a path to local HF model,
     # or TensorRT Model Optimizer's quantized checkpoints like nvidia/Llama-3.1-8B-Instruct-FP8 on HF.
     llm = LLM(model="TinyLlama/TinyLlama-1.1B-Chat-v1.0")

-    # You can save the engine to disk and load it back later, the LLM class can accept either a HF model or a TRT-LLM engine.
-    llm.save(tempfile.mkdtemp())
-
     # Sample prompts.
     prompts = [
         "Hello, my name is",
         "The president of the United States is",
         "The capital of France is",
         "The future of AI is",
     ]

     # Create a sampling params.
     sampling_params = SamplingParams(temperature=0.8, top_p=0.95)

     for output in llm.generate(prompts, sampling_params):
         print(
             f"Prompt: {output.prompt!r}, Generated text: {output.outputs[0].text!r}"
         )

     # Got output like
     # Prompt: 'Hello, my name is', Generated text: '\n\nJane Smith. I am a student pursuing my degree in Computer Science at [university]. I enjoy learning new things, especially technology and programming'
     # Prompt: 'The president of the United States is', Generated text: 'likely to nominate a new Supreme Court justice to fill the seat vacated by the death of Antonin Scalia. The Senate should vote to confirm the'
     # Prompt: 'The capital of France is', Generated text: 'Paris.'
     # Prompt: 'The future of AI is', Generated text: 'an exciting time for us. We are constantly researching, developing, and improving our platform to create the most advanced and efficient model available. We are'


 if __name__ == '__main__':
     main()
@@ -569,20 +545,20 @@
previous
-Generate text with guided decoding
+LLM Examples
next
-Generate text with customization
+Generate text asynchronously
diff --git a/latest/examples/llm_inference_async.html b/latest/examples/llm_inference_async.html
index c22924cd89..5d06d3f009 100644
--- a/latest/examples/llm_inference_async.html
+++ b/latest/examples/llm_inference_async.html
@@ -9,7 +9,7 @@
-Generate Text Asynchronously — TensorRT-LLM
+Generate text asynchronously — TensorRT-LLM
@@ -511,52 +494,50 @@
-Generate Text Asynchronously
+Generate text asynchronously
Source: NVIDIA/TensorRT-LLM.
-### Generate Text Asynchronously
 import asyncio

-from tensorrt_llm import SamplingParams
-from tensorrt_llm._tensorrt_engine import LLM
+from tensorrt_llm import LLM, SamplingParams


 def main():
     # model could accept HF model name or a path to local HF model.
     llm = LLM(model="TinyLlama/TinyLlama-1.1B-Chat-v1.0")

     # Sample prompts.
     prompts = [
         "Hello, my name is",
         "The president of the United States is",
         "The capital of France is",
         "The future of AI is",
     ]

     # Create a sampling params.
     sampling_params = SamplingParams(temperature=0.8, top_p=0.95)

     # Async based on Python coroutines
     async def task(prompt: str):
         output = await llm.generate_async(prompt, sampling_params)
         print(
             f"Prompt: {output.prompt!r}, Generated text: {output.outputs[0].text!r}"
         )

     async def main():
         tasks = [task(prompt) for prompt in prompts]
         await asyncio.gather(*tasks)

     asyncio.run(main())

     # Got output like follows:
     # Prompt: 'Hello, my name is', Generated text: '\n\nJane Smith. I am a student pursuing my degree in Computer Science at [university]. I enjoy learning new things, especially technology and programming'
     # Prompt: 'The president of the United States is', Generated text: 'likely to nominate a new Supreme Court justice to fill the seat vacated by the death of Antonin Scalia. The Senate should vote to confirm the'
     # Prompt: 'The capital of France is', Generated text: 'Paris.'
     # Prompt: 'The future of AI is', Generated text: 'an exciting time for us. We are constantly researching, developing, and improving our platform to create the most advanced and efficient model available. We are'


 if __name__ == '__main__':
     main()
@@ -572,20 +553,20 @@
previous
-Generate Text Using Eagle Decoding
+Generate text
next
-Distributed LLM Generation
+Generate text in streaming
diff --git a/latest/examples/llm_inference_async_streaming.html b/latest/examples/llm_inference_async_streaming.html
index 8e555eadc7..33ff6aebb8 100644
--- a/latest/examples/llm_inference_async_streaming.html
+++ b/latest/examples/llm_inference_async_streaming.html
@@ -9,7 +9,7 @@
-Generate Text in Streaming — TensorRT-LLM
+Generate text in streaming — TensorRT-LLM
@@ -511,72 +494,70 @@
-Generate Text in Streaming
+Generate text in streaming
Source: NVIDIA/TensorRT-LLM.
-### Generate Text in Streaming
 import asyncio

-from tensorrt_llm import SamplingParams
-from tensorrt_llm._tensorrt_engine import LLM
+from tensorrt_llm import LLM, SamplingParams


 def main():

     # model could accept HF model name or a path to local HF model.
     llm = LLM(model="TinyLlama/TinyLlama-1.1B-Chat-v1.0")

     # Sample prompts.
     prompts = [
         "Hello, my name is",
         "The president of the United States is",
         "The capital of France is",
         "The future of AI is",
     ]

     # Create a sampling params.
     sampling_params = SamplingParams(temperature=0.8, top_p=0.95)

     # Async based on Python coroutines
     async def task(id: int, prompt: str):

         # streaming=True is used to enable streaming generation.
         async for output in llm.generate_async(prompt,
                                                sampling_params,
                                                streaming=True):
             print(f"Generation for prompt-{id}: {output.outputs[0].text!r}")

     async def main():
         tasks = [task(id, prompt) for id, prompt in enumerate(prompts)]
         await asyncio.gather(*tasks)

     asyncio.run(main())

     # Got output like follows:
     # Generation for prompt-0: '\n'
     # Generation for prompt-3: 'an'
     # Generation for prompt-2: 'Paris'
     # Generation for prompt-1: 'likely'
     # Generation for prompt-0: '\n\n'
     # Generation for prompt-3: 'an exc'
     # Generation for prompt-2: 'Paris.'
     # Generation for prompt-1: 'likely to'
     # Generation for prompt-0: '\n\nJ'
     # Generation for prompt-3: 'an exciting'
     # Generation for prompt-2: 'Paris.'
     # Generation for prompt-1: 'likely to nomin'
     # Generation for prompt-0: '\n\nJane'
     # Generation for prompt-3: 'an exciting time'
     # Generation for prompt-1: 'likely to nominate'
     # Generation for prompt-0: '\n\nJane Smith'
     # Generation for prompt-3: 'an exciting time for'
     # Generation for prompt-1: 'likely to nominate a'
     # Generation for prompt-0: '\n\nJane Smith.'
     # Generation for prompt-3: 'an exciting time for us'
     # Generation for prompt-1: 'likely to nominate a new'


 if __name__ == '__main__':
     main()
@@ -592,20 +573,20 @@
previous
-Generation with Quantization
+Generate text asynchronously
next
-Generate text with guided decoding
+Distributed LLM Generation
diff --git a/latest/examples/llm_inference_customize.html b/latest/examples/llm_inference_customize.html
deleted file mode 100644
index e6f1d43927..0000000000
--- a/latest/examples/llm_inference_customize.html
+++ /dev/null
@@ -1,719 +0,0 @@
Generate text with customization — TensorRT-LLM
Generate text with customization
Source: NVIDIA/TensorRT-LLM.
### Generate text with customization
import tempfile

from tensorrt_llm._tensorrt_engine import LLM
from tensorrt_llm.llmapi import BuildConfig, KvCacheConfig, SamplingParams


def main():
    # The end user can customize the build configuration with the BuildConfig class and other arguments borrowed from the lower-level APIs
    build_config = BuildConfig()
    build_config.max_batch_size = 128
    build_config.max_num_tokens = 2048

    build_config.max_beam_width = 4

    # Model could accept HF model name or a path to local HF model.

    llm = LLM(
        model="TinyLlama/TinyLlama-1.1B-Chat-v1.0",
        build_config=build_config,
        kv_cache_config=KvCacheConfig(
            free_gpu_memory_fraction=0.8
        ),  # Similar to `build_config`, you can also customize the runtime configuration
        # with the `kv_cache_config`, `runtime_config`, `peft_cache_config` or other
        # arguments borrowed from the lower-level APIs.
    )

    # You can save the engine to disk and load it back later; the LLM class can accept either a HF model or a TRT-LLM engine.
    llm.save(tempfile.mkdtemp())

    # Sample prompts.
    prompts = [
        "Hello, my name is",
        "The president of the United States is",
        "The capital of France is",
        "The future of AI is",
    ]

    # With SamplingParams, you can customize the sampling strategy, such as beam search, temperature, and so on.
    sampling_params = SamplingParams(temperature=0.8,
                                     top_p=0.95,
                                     n=4,
                                     use_beam_search=True)

    for output in llm.generate(prompts, sampling_params):
        print(
            f"Prompt: {output.prompt!r}, Generated text: {output.outputs[0].text!r}"
        )

    # Got output like
    # Prompt: 'Hello, my name is', Generated text: '\n\nJane Smith. I am a student pursuing my degree in Computer Science at [university]. I enjoy learning new things, especially technology and programming'
    # Prompt: 'The president of the United States is', Generated text: 'likely to nominate a new Supreme Court justice to fill the seat vacated by the death of Antonin Scalia. The Senate should vote to confirm the'
    # Prompt: 'The capital of France is', Generated text: 'Paris.'
    # Prompt: 'The future of AI is', Generated text: 'an exciting time for us. We are constantly researching, developing, and improving our platform to create the most advanced and efficient model available. We are'


if __name__ == '__main__':
    main()
diff --git a/latest/examples/llm_inference_distributed.html b/latest/examples/llm_inference_distributed.html
index c7706a797c..471ac80d2e 100644
--- a/latest/examples/llm_inference_distributed.html
+++ b/latest/examples/llm_inference_distributed.html
@@ -513,51 +496,49 @@
Distributed LLM Generation
Source: NVIDIA/TensorRT-LLM.
-### Distributed LLM Generation
-from tensorrt_llm import SamplingParams
-from tensorrt_llm._tensorrt_engine import LLM
+from tensorrt_llm import LLM, SamplingParams


 def main():
     # model could accept HF model name or a path to local HF model.
     llm = LLM(
         model="TinyLlama/TinyLlama-1.1B-Chat-v1.0",
         # Enable 2-way tensor parallelism
         tensor_parallel_size=2
         # Enable 2-way pipeline parallelism if needed
         # pipeline_parallel_size=2
         # Enable 2-way expert parallelism for MoE model's expert weights
         # moe_expert_parallel_size=2
         # Enable 2-way tensor parallelism for MoE model's expert weights
         # moe_tensor_parallel_size=2
     )

     # Sample prompts.
     prompts = [
         "Hello, my name is",
         "The president of the United States is",
         "The capital of France is",
         "The future of AI is",
     ]

     # Create a sampling params.
     sampling_params = SamplingParams(temperature=0.8, top_p=0.95)

     for output in llm.generate(prompts, sampling_params):
         print(
             f"Prompt: {output.prompt!r}, Generated text: {output.outputs[0].text!r}"
         )

     # Got output like
     # Prompt: 'Hello, my name is', Generated text: '\n\nJane Smith. I am a student pursuing my degree in Computer Science at [university]. I enjoy learning new things, especially technology and programming'
     # Prompt: 'The president of the United States is', Generated text: 'likely to nominate a new Supreme Court justice to fill the seat vacated by the death of Antonin Scalia. The Senate should vote to confirm the'
     # Prompt: 'The capital of France is', Generated text: 'Paris.'
     # Prompt: 'The future of AI is', Generated text: 'an exciting time for us. We are constantly researching, developing, and improving our platform to create the most advanced and efficient model available. We are'


 # The entry point of the program needs to be protected for spawning processes.
 if __name__ == '__main__':
     main()
@@ -573,20 +554,20 @@
previous
-Generate Text Asynchronously
+Generate text in streaming
next
-Control generated text using logits processor
+Generate text with guided decoding
diff --git a/latest/examples/llm_inference_kv_events.html b/latest/examples/llm_inference_kv_events.html
deleted file mode 100644
index 4ff4837ae7..0000000000
--- a/latest/examples/llm_inference_kv_events.html
+++ /dev/null
@@ -1,711 +0,0 @@
Get KV Cache Events — TensorRT-LLM
Get KV Cache Events
Source: NVIDIA/TensorRT-LLM.
### Get KV Cache Events

from tensorrt_llm import SamplingParams
from tensorrt_llm._tensorrt_engine import LLM
from tensorrt_llm.llmapi import KvCacheConfig


def main():

    llm = LLM(model="TinyLlama/TinyLlama-1.1B-Chat-v1.0",
              tensor_parallel_size=2,
              autotuner_enabled=False,
              kv_cache_dtype='auto',
              kv_cache_config=KvCacheConfig(enable_block_reuse=True,
                                            event_buffer_max_size=1024),
              backend="pytorch")

    # Sample prompts having a common prefix.
    common_prefix = (
        "After the ghost's departure, Barnardo notes Horatio's pale appearance and asks if he's okay. "
        "Horatio concedes that he's shaken and confesses that, without witnessing the ghost himself, he wouldn't have believed it existed. "
        "He's also disturbed by the ghost's striking resemblance to the king. It even seems to be wearing the former king's armor. "
        "Horatio thinks the ghost's presence foretells that something is about to go wrong in Denmark. "
        "Marcellus concurs with Horatio, as he and the other guards have observed that their schedules have become more rigorous and have also noticed the preparations taking place within Elsinore, including the building of cannons, the storing of weapons, and the preparation of ships."
    )
    prompts = [
        common_prefix, common_prefix + " Marcellus also notes that the king's"
    ]

    # Create a sampling params.
    sampling_params = SamplingParams(temperature=0.001,
                                     top_p=0.001,
                                     max_tokens=5)

    for output in llm.generate(prompts, sampling_params=sampling_params):
        print(
            f"Prompt: {output.prompt!r}, Generated text: {output.outputs[0].text!r}"
        )

    kv_events = llm.get_kv_cache_events(10)
    print(kv_events)

    # Got output like follows:
    # [{'event_id': 0, 'data': {'type': 'created', 'num_blocks_per_cache_level': [101230, 0]}},
    #  {'event_id': 1, 'data': {'type': 'stored', 'parent_hash': None, 'blocks': [{'type': 'stored_block', 'block_hash': 4203099703668305365, 'tokens': [{'type': 'unique_token', 'token_id': 1, 'token_extra_id': 0}, ...


if __name__ == '__main__':
    main()
diff --git a/latest/examples/llm_logits_processor.html b/latest/examples/llm_logits_processor.html
index d359b10746..2fb2e355b0 100644
--- a/latest/examples/llm_logits_processor.html
+++ b/latest/examples/llm_logits_processor.html
@@ -513,123 +496,131 @@
Control generated text using logits processor
Source: NVIDIA/TensorRT-LLM.
-### Control generated text using logits processor
-from typing import List, Optional
-
-import torch
-
-from tensorrt_llm._tensorrt_engine import LLM
-from tensorrt_llm.sampling_params import (BatchedLogitsProcessor,
-                                          LogitsProcessor, SamplingParams)
-
-
-# The recommended way to create a customized logits processor:
-# * Subclass LogitsProcessor and implement the processing logics in the __call__ method.
-# * Create an instance and pass to SamplingParams.
-# Alternatively, you can create any callable with the same signature with the __call__ method.
-# This simple callback will output a specific token at each step irrespective of prompt.
-# Refer to ../bindings/executor/example_logits_processor.py for a more
-# sophisticated callback that generates JSON structured output.
-class MyLogitsProcessor(LogitsProcessor):
-
-    def __init__(self, allowed_token_id: int):
-        self.allowed_token_id = allowed_token_id
-
-    def __call__(self, req_id: int, logits: torch.Tensor,
-                 token_ids: List[List[int]], stream_ptr: int,
-                 client_id: Optional[int]):
-        mask = torch.full_like(logits, fill_value=float("-inf"), device="cpu")
-        mask[:, :, self.allowed_token_id] = 0
-
-        stream = None if stream_ptr is None else torch.cuda.ExternalStream(
-            stream_ptr)
-        with torch.cuda.stream(stream):
-            mask = mask.to(logits.device, non_blocking=True)
-            logits += mask
-
-
-# The recommended way to create a customized batched logits processor:
-# * Subclass BatchedLogitsProcessor and implement the processing logics in the __call__ method.
-# * Create an instance and pass to LLM.
-# Alternatively, you can create any callable with the same signature with the __call__ method.
-# A batched logits processor's arguments for all requests in a batch are made available as lists.
-# This helps user optimize the callback for large batch sizes. For example:
-# 1. Process more work on host, e.g. running a JSON state machine, in parallel with model forward pass on device.
-# 2. Coalesce H2D memory transfers for all requests into a single cudaMemcpyAsync call.
-# 3. Launch a single batched kernel, e.g. for updating logits on device.
-class MyBatchedLogitsProcessor(BatchedLogitsProcessor):
-
-    def __init__(self, allowed_token_id: int):
-        self.allowed_token_id = allowed_token_id
-
-    def __call__(self, req_ids: List[int], logits: List[torch.Tensor],
-                 token_ids: List[List[List[int]]], stream_ptr: int,
-                 client_ids: List[Optional[int]]):
-        # Generate masks for all requests on host
-        masks = []
-        for req_id, req_logits, req_token_ids, client_id in zip(
-                req_ids, logits, token_ids, client_ids):
-            mask = torch.full_like(req_logits,
-                                   fill_value=float("-inf"),
-                                   device="cpu")
-            mask[:, :, self.allowed_token_id] = 0
-            masks.append(mask)
-
-        # Move masks to device and add to logits using non-blocking operations
-        with torch.cuda.stream(torch.cuda.ExternalStream(stream_ptr)):
-            for req_logits, mask in zip(logits, masks):
-                req_logits += mask.to(req_logits.device, non_blocking=True)
-
-
-def main():
-
-    # Batched logits processor (only supported in TensorRT backend)
-    # should be specified when initializing LLM.
-    llm = LLM(
-        model="TinyLlama/TinyLlama-1.1B-Chat-v1.0",
-        batched_logits_processor=MyBatchedLogitsProcessor(allowed_token_id=42))
-
-    # Sample prompts
-    prompts = [
-        "Hello, my name is",
-        "The president of the United States is",
-    ]
-
-    # Generate text
-    for prompt_id, prompt in enumerate(prompts):
-        # Use non-batched logits processor callback only for odd-numbered prompts
-        if prompt_id % 2 == 0:
-            sampling_params = SamplingParams(temperature=0.8, top_p=0.95)
-        else:
-            # Each prompt can be specified with a logits processor at runtime
-            sampling_params = SamplingParams(
-                temperature=0.8,
-                top_p=0.95,
-                logits_processor=MyLogitsProcessor(allowed_token_id=42))
-
-        for output in llm.generate([prompt], sampling_params):
-            print(
-                f"Prompt: {output.prompt!r}, Generated text: {output.outputs[0].text!r}"
-            )
-
-    # Got output like
-    # Prompt: 'Hello, my name is', Generated text: '\n\nJane Smith. I am a student pursuing my degree in Computer Science at [university]. I enjoy learning new things, especially technology and programming'
-    # Prompt: 'The president of the United States is', Generated text: "''''''''''''''''''''''''''''''''"
-
-    # Use batched processor with batch size = 2
-    sampling_params = SamplingParams(apply_batched_logits_processor=True)
-    for output in llm.generate(prompts, sampling_params):
-        print(
-            f"Prompt: {output.prompt!r}, Generated text: {output.outputs[0].text!r}"
-        )
-
-    # Got output like
-    # Prompt: 'Hello, my name is', Generated text: "''''''''''''''''''''''''''''''''"
-    # Prompt: 'The president of the United States is', Generated text: "''''''''''''''''''''''''''''''''"
-
-
-if __name__ == '__main__':
-    main()
+from typing import List, Optional
+
+import torch
+from transformers import PreTrainedTokenizer
+
+from tensorrt_llm import LLM
+from tensorrt_llm.sampling_params import LogitsProcessor, SamplingParams
+
+
+def text_to_token(tokenizer: PreTrainedTokenizer, text: str, last: bool):
+    tokens = tokenizer.encode(text, add_special_tokens=False)
+
+    max_token_count = 1
+    bos_token_added = getattr(tokenizer, 'bos_token', None) and getattr(
+        tokenizer, 'bos_token_id', None) in tokens
+    prefix_token_added = getattr(tokenizer, 'add_prefix_space',
+                                 None) is not False
+    if bos_token_added or prefix_token_added:
+        max_token_count = 2
+
+    if not last and len(tokens) > max_token_count:
+        raise Exception(
+            f"Can't convert {text} to token. It has {len(tokens)} tokens.")
+
+    return tokens[-1]
+
+
+# The recommended way to create a customized logits processor:
+# * Subclass LogitsProcessor and implement the processing logics in the __call__ method.
+# * Create an instance and pass to SamplingParams.
+# More LogitsProcessor references can be found at https://github.com/NVIDIA/logits-processor-zoo.
+class GenLengthLogitsProcessor(LogitsProcessor):
+    """
+    A logits processor that adjusts the likelihood of the end-of-sequence (EOS) token
+    based on the length of the generated sequence, encouraging or discouraging shorter answers.
+    WARNING: Create a new object before every model.generate call since token_count is accumulated.
+
+    Parameters
+    ----------
+    tokenizer: The tokenizer used by the LLM.
+    boost_factor (float): A factor to boost the likelihood of the EOS token as the sequence length increases.
+        Suggested value range is [-1.0, 1.0]. Negative values are used for the opposite effect.
+    p (int, optional): The power to which the token count is raised when computing the boost value. Default is 2.
+    complete_sentences (bool, optional): If True, boosts EOS token likelihood only when the last token is a full stop
+        or a new line. Default is False.
+    """
+
+    def __init__(self,
+                 tokenizer,
+                 boost_factor: float,
+                 p: int = 2,
+                 complete_sentences: bool = False):
+        self.eos_token = tokenizer.eos_token_id
+        self.boost_factor = boost_factor
+        self.p = p
+        self.token_count = 0
+        self.full_stop_token = text_to_token(tokenizer,
+                                             "It is a sentence.",
+                                             last=True)
+        self.new_line_token = text_to_token(tokenizer,
+                                            "It is a new line\n",
+                                            last=True)
+        self.complete_sentences = complete_sentences
+
+    def __call__(self, req_ids: int, logits: torch.Tensor, ids: List[List[int]],
+                 stream_ptr, client_id: Optional[int]):
+        boost_val = self.boost_factor * (self.token_count**self.p) / (10**self.p)
+
+        stream = None if stream_ptr is None else torch.cuda.ExternalStream(
+            stream_ptr)
+
+        with torch.cuda.stream(stream):
+            ids = torch.LongTensor(ids).to(logits.device, non_blocking=True)
+
+            if self.complete_sentences:
+                enabled = (ids[:, -1] == self.full_stop_token) | (
+                    ids[:, -1] == self.new_line_token)
+                logits[:, :, self.eos_token] += enabled * boost_val
+            else:
+                logits[:, :, self.eos_token] += boost_val
+
+        self.token_count += 1
+
+
+def main():
+
+    llm = LLM(model="TinyLlama/TinyLlama-1.1B-Chat-v1.0")
+
+    # Sample prompts
+    prompts = [
+        "The future of AI is",
+        "The future of AI is",
+    ]
+
+    # Generate text
+    for prompt_id, prompt in enumerate(prompts):
+        if prompt_id % 2 == 0:
+            # Without logits processor
+            sampling_params = SamplingParams(top_p=1, max_tokens=200)
+        else:
+            # Each prompt can be specified with a logits processor at runtime
+            sampling_params = SamplingParams(
+                temperature=0.8,
+                top_p=0.95,
+                logits_processor=GenLengthLogitsProcessor(
+                    llm.tokenizer, boost_factor=1, complete_sentences=True))
+
+        output = llm.generate(prompt, sampling_params)
+        print(
+            f"Prompt: {output.prompt!r}, Generated text: {output.outputs[0].text!r}"
+        )
+
+    # Got output like:
+    # Prompt (original): "bright, and it's not just for big companies. Small businesses can also benefit from AI technology. Here are some ways:\n\n1. Improved customer service: AI can help businesses provide better customer service by analyzing customer data and providing personalized recommendations.
+    # This can help businesses improve their customer experience and increase customer loyalty.\n\n2. Increased productivity: AI can help businesses automate repetitive tasks, freeing up employees to focus on more complex tasks. This can
+    # help businesses increase productivity and reduce costs.\n\n3. Enhanced marketing: AI can help businesses create more personalized marketing campaigns by analyzing customer data and targeting specific audiences. This can help businesses
+    # increase their marketing ROI and drive more sales.\n\n4. Improved supply chain management: AI can help businesses optimize their supply chain by analyzing data on demand,"'
+    #
+    # Prompt (with GenLengthLogitsProcessor): "bright, and it's not just for big companies. Small businesses can also benefit from AI technology."
+
+
+if __name__ == '__main__':
+    main()
@@ -645,20 +636,20 @@
previous
-Distributed LLM Generation
+Generate text with guided decoding
next
-Generate Text Using Eagle2 Decoding
+Generate text with multiple LoRA adapters
diff --git a/latest/examples/llm_medusa_decoding.html b/latest/examples/llm_medusa_decoding.html
deleted file mode 100644
index 3909b68f13..0000000000
--- a/latest/examples/llm_medusa_decoding.html
+++ /dev/null
@@ -1,756 +0,0 @@
Generate Text Using Medusa Decoding — TensorRT-LLM
Generate Text Using Medusa Decoding
Source: NVIDIA/TensorRT-LLM.
- 1 ### Generate Text Using Medusa Decoding
- 2 import argparse
- 3 from pathlib import Path
- 4
- 5 from tensorrt_llm._tensorrt_engine import LLM
- 6 from tensorrt_llm.llmapi import ( BuildConfig , KvCacheConfig ,
- 7 MedusaDecodingConfig , SamplingParams )
- 8
- 9
-10 def run_medusa_decoding ( use_modelopt_ckpt = False , model_dir = None ):
-11 # Sample prompts.
-12 prompts = [
-13 "Hello, my name is" ,
-14 "The president of the United States is" ,
-15 "The capital of France is" ,
-16 "The future of AI is" ,
-17 ]
-18 # The end user can customize the sampling configuration with the SamplingParams class
-19 sampling_params = SamplingParams ( temperature = 0.8 , top_p = 0.95 )
-20
-21 # The end user can customize the build configuration with the BuildConfig class
-22 build_config = BuildConfig (
-23 max_batch_size = 1 ,
-24 max_seq_len = 1024 ,
-25 )
-26
-27 # The end user can customize the kv cache configuration with the KVCache class
-28 kv_cache_config = KvCacheConfig ( enable_block_reuse = True )
-29
-30 llm_kwargs = {}
-31
-32 if use_modelopt_ckpt :
-33 # This is a Llama-3.1-8B combined with Medusa heads provided by TensorRT Model Optimizer.
-34 # Both the base model (except lm_head) and Medusa heads have been quantized in FP8.
-35 model = model_dir or "nvidia/Llama-3.1-8B-Medusa-FP8"
-36
-37 # ModelOpt ckpt uses 3 Medusa heads
-38 speculative_config = MedusaDecodingConfig (
-39 max_draft_len = 63 ,
-40 num_medusa_heads = 3 ,
-41 medusa_choices = [[ 0 ], [ 0 , 0 ], [ 1 ], [ 0 , 1 ], [ 2 ], [ 0 , 0 , 0 ], [ 1 , 0 ], [ 0 , 2 ], [ 3 ], [ 0 , 3 ], \
-42 [ 4 ], [ 0 , 4 ], [ 2 , 0 ], [ 0 , 5 ], [ 0 , 0 , 1 ], [ 5 ], [ 0 , 6 ], [ 6 ], [ 0 , 7 ], [ 0 , 1 , 0 ], [ 1 , 1 ], \
-43 [ 7 ], [ 0 , 8 ], [ 0 , 0 , 2 ], [ 3 , 0 ], [ 0 , 9 ], [ 8 ], [ 9 ], [ 1 , 0 , 0 ], [ 0 , 2 , 0 ], [ 1 , 2 ], [ 0 , 0 , 3 ], \
-44 [ 4 , 0 ], [ 2 , 1 ], [ 0 , 0 , 4 ], [ 0 , 0 , 5 ], [ 0 , 1 , 1 ], [ 0 , 0 , 6 ], [ 0 , 3 , 0 ], [ 5 , 0 ], [ 1 , 3 ], [ 0 , 0 , 7 ], \
-45 [ 0 , 0 , 8 ], [ 0 , 0 , 9 ], [ 6 , 0 ], [ 0 , 4 , 0 ], [ 1 , 4 ], [ 7 , 0 ], [ 0 , 1 , 2 ], [ 2 , 0 , 0 ], [ 3 , 1 ], [ 2 , 2 ], [ 8 , 0 ], [ 0 , 5 , 0 ], [ 1 , 5 ], [ 1 , 0 , 1 ], [ 0 , 2 , 1 ], [ 9 , 0 ], [ 0 , 6 , 0 ], [ 1 , 6 ], [ 0 , 7 , 0 ]]
-46 )
-47 else :
-48 # In this path, base model and Medusa heads are stored and loaded separately.
-49 model = "lmsys/vicuna-7b-v1.3"
-50
-51 # The end user can customize the medusa decoding configuration by specifying the
-52 # speculative_model, max_draft_len, medusa heads num and medusa choices
-53 # with the MedusaDecodingConfig class
-54 speculative_config = MedusaDecodingConfig (
-55 speculative_model = "FasterDecoding/medusa-vicuna-7b-v1.3" ,
-56 max_draft_len = 63 ,
-57 num_medusa_heads = 4 ,
-58 medusa_choices = [[ 0 ], [ 0 , 0 ], [ 1 ], [ 0 , 1 ], [ 2 ], [ 0 , 0 , 0 ], [ 1 , 0 ], [ 0 , 2 ], [ 3 ], [ 0 , 3 ], [ 4 ], [ 0 , 4 ], [ 2 , 0 ], \
-59 [ 0 , 5 ], [ 0 , 0 , 1 ], [ 5 ], [ 0 , 6 ], [ 6 ], [ 0 , 7 ], [ 0 , 1 , 0 ], [ 1 , 1 ], [ 7 ], [ 0 , 8 ], [ 0 , 0 , 2 ], [ 3 , 0 ], \
-60 [ 0 , 9 ], [ 8 ], [ 9 ], [ 1 , 0 , 0 ], [ 0 , 2 , 0 ], [ 1 , 2 ], [ 0 , 0 , 3 ], [ 4 , 0 ], [ 2 , 1 ], [ 0 , 0 , 4 ], [ 0 , 0 , 5 ], \
-61 [ 0 , 0 , 0 , 0 ], [ 0 , 1 , 1 ], [ 0 , 0 , 6 ], [ 0 , 3 , 0 ], [ 5 , 0 ], [ 1 , 3 ], [ 0 , 0 , 7 ], [ 0 , 0 , 8 ], [ 0 , 0 , 9 ], \
-62 [ 6 , 0 ], [ 0 , 4 , 0 ], [ 1 , 4 ], [ 7 , 0 ], [ 0 , 1 , 2 ], [ 2 , 0 , 0 ], [ 3 , 1 ], [ 2 , 2 ], [ 8 , 0 ], \
-63 [ 0 , 5 , 0 ], [ 1 , 5 ], [ 1 , 0 , 1 ], [ 0 , 2 , 1 ], [ 9 , 0 ], [ 0 , 6 , 0 ], [ 0 , 0 , 0 , 1 ], [ 1 , 6 ], [ 0 , 7 , 0 ]]
-64 )
-65
-66 # Add 'tensor_parallel_size=2' if using ckpt for
-67 # a larger model like nvidia/Llama-3.1-70B-Medusa.
-68 llm = LLM ( model = model ,
-69 build_config = build_config ,
-70 kv_cache_config = kv_cache_config ,
-71 speculative_config = speculative_config ,
-72 ** llm_kwargs )
-73
-74 outputs = llm . generate ( prompts , sampling_params )
-75
-76 # Print the outputs.
-77 for output in outputs :
-78 prompt = output . prompt
-79 generated_text = output . outputs [ 0 ] . text
-80 print ( f "Prompt: { prompt !r} , Generated text: { generated_text !r} " )
-81
-82
-83 if __name__ == '__main__' :
-84 parser = argparse . ArgumentParser (
-85 description = "Generate text using Medusa decoding." )
-86 parser . add_argument (
-87 '--use_modelopt_ckpt' ,
-88 action = 'store_true' ,
-89 help = "Use FP8-quantized checkpoint from TensorRT Model Optimizer." )
-90 # TODO: remove this arg after ModelOpt ckpt is public on HF
-91 parser . add_argument ( '--model_dir' , type = Path , default = None )
-92 args = parser . parse_args ()
-93
-94 run_medusa_decoding ( args . use_modelopt_ckpt , args . model_dir )
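For orientation, each entry in medusa_choices is one node of the speculation tree — a path of top-k indices, one per Medusa head — so, under that reading, the tree holds at most max_draft_len draft tokens and no path is deeper than the number of heads. A quick sanity-check sketch (the truncated choices list here is illustrative only):

# Assumption: one medusa_choices entry == one draft token in the tree.
medusa_choices = [[0], [0, 0], [1], [0, 1], [2]]  # truncated illustration
max_draft_len = 63
num_medusa_heads = 4
assert len(medusa_choices) <= max_draft_len
assert max(len(path) for path in medusa_choices) <= num_medusa_heads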
diff --git a/latest/examples/llm_mgmn_llm_distributed.html b/latest/examples/llm_mgmn_llm_distributed.html
index bcffc7559a..9449fd323d 100644
--- a/latest/examples/llm_mgmn_llm_distributed.html
+++ b/latest/examples/llm_mgmn_llm_distributed.html
@@ -9,7 +9,7 @@
- Llm Mgmn Llm Distributed — TensorRT-LLM
+ Run LLM-API with pytorch backend on Slurm — TensorRT-LLM
@@ -510,8 +493,8 @@
-Llm Mgmn Llm Distributed
+Run LLM-API with pytorch backend on Slurm
Source NVIDIA/TensorRT-LLM .
1 #!/bin/bash
2 #SBATCH -A <account> # parameter
@@ -523,49 +506,48 @@
8 #SBATCH -e logs/llmapi-distributed.err
9 #SBATCH -J llmapi-distributed-task
10
-11 ### Run LLM-API with pytorch backend on Slurm
-12
-13 # NOTE, this feature is experimental and may not work on all systems.
-14 # The trtllm-llmapi-launch is a script that launches the LLM-API code on
-15 # Slurm-like systems, and can support multi-node and multi-GPU setups.
-16
-17 # Note that, the number of MPI processes should be the same as the model world
-18 # size. e.g. For tensor_parallel_size=16, you may use 2 nodes with 8 gpus for
-19 # each, or 4 nodes with 4 gpus for each or other combinations.
-20
-21 # This docker image should have tensorrt_llm installed, or you need to install
-22 # it in the task.
-23
-24 # The following variables are expected to be set in the environment:
-25 # You can set them via --export in the srun/sbatch command.
-26 # CONTAINER_IMAGE: the docker image to use, you'd better install tensorrt_llm in it, or install it in the task.
-27 # MOUNT_DIR: the directory to mount in the container
-28 # MOUNT_DEST: the destination directory in the container
-29 # WORKDIR: the working directory in the container
-30 # SOURCE_ROOT: the path to the TensorRT-LLM source
-31 # PROLOGUE: the prologue to run before the script
-32 # LOCAL_MODEL: the local model directory to use, NOTE: downloading from HF is
-33 # not supported in Slurm mode, you need to download the model and put it in
-34 # the LOCAL_MODEL directory.
-35
-36 # Adjust the paths to run
-37 export script = $SOURCE_ROOT /examples/pytorch/quickstart_advanced.py
-38
-39 # Just launch the PyTorch example with trtllm-llmapi-launch command.
-40 srun -l \
-41 --container-image= ${ CONTAINER_IMAGE } \
-42 --container-mounts= ${ MOUNT_DIR } :${ MOUNT_DEST } \
-43 --container-workdir= ${ WORKDIR } \
-44 --export= ALL \
-45 --mpi= pmix \
-46 bash -c "
-47 $PROLOGUE
-48 export PATH= $PATH :~/.local/bin
-49 trtllm-llmapi-launch python3 $script \
-50 --model_dir $LOCAL_MODEL \
-51 --prompt 'Hello, how are you?' \
-52 --tp_size 2
-53 "
+11
+12 # NOTE: this feature is experimental and may not work on all systems.
+13 # The trtllm-llmapi-launch is a script that launches the LLM-API code on
+14 # Slurm-like systems, and can support multi-node and multi-GPU setups.
+15
+16 # Note that the number of MPI processes must match the model world size.
+17 # E.g., for tensor_parallel_size=16, you may use 2 nodes with 8 GPUs each,
+18 # 4 nodes with 4 GPUs each, or other combinations.
+19
+20 # The docker image should have tensorrt_llm installed; otherwise, install
+21 # it in the task.
+22
+23 # The following variables are expected to be set in the environment:
+24 # You can set them via --export in the srun/sbatch command.
+25 # CONTAINER_IMAGE: the docker image to use; it should have tensorrt_llm installed, or install it in the task.
+26 # MOUNT_DIR: the directory to mount in the container
+27 # MOUNT_DEST: the destination directory in the container
+28 # WORKDIR: the working directory in the container
+29 # SOURCE_ROOT: the path to the TensorRT-LLM source
+30 # PROLOGUE: the prologue to run before the script
+31 # LOCAL_MODEL: the local model directory to use. NOTE: downloading from HF
+32 # is not supported in Slurm mode; download the model beforehand and put it
+33 # in the LOCAL_MODEL directory.
+34
+35 # Adjust the paths to run
+36 export script = $SOURCE_ROOT /examples/pytorch/quickstart_advanced.py
+37
+38 # Launch the PyTorch example with the trtllm-llmapi-launch command.
+39 srun -l \
+40 --container-image= ${ CONTAINER_IMAGE } \
+41 --container-mounts= ${ MOUNT_DIR } :${ MOUNT_DEST } \
+42 --container-workdir= ${ WORKDIR } \
+43 --export= ALL \
+44 --mpi= pmix \
+45 bash -c "
+46 $PROLOGUE
+47 export PATH= $PATH :~/.local/bin
+48 trtllm-llmapi-launch python3 $script \
+49 --model_dir $LOCAL_MODEL \
+50 --prompt 'Hello, how are you?' \
+51 --tp_size 2
+52 "
diff --git a/latest/examples/llm_mgmn_trtllm_bench.html b/latest/examples/llm_mgmn_trtllm_bench.html
index cb6e79521e..4721691b0d 100644
--- a/latest/examples/llm_mgmn_trtllm_bench.html
+++ b/latest/examples/llm_mgmn_trtllm_bench.html
@@ -9,7 +9,7 @@
- Llm Mgmn Trtllm Bench — TensorRT-LLM
+ Run trtllm-bench with pytorch backend on Slurm — TensorRT-LLM
@@ -510,8 +493,8 @@
-Llm Mgmn Trtllm Bench
+Run trtllm-bench with pytorch backend on Slurm
Source NVIDIA/TensorRT-LLM .
1 #!/bin/bash
2 #SBATCH -A <account>
@@ -523,90 +506,87 @@
8 #SBATCH -e logs/trtllm-bench.err
9 #SBATCH -J trtllm-bench
10
-11 ### Run trtllm-bench with pytorch backend on Slurm
-12
-13 # NOTE, this feature is experimental and may not work on all systems.
-14 # The trtllm-llmapi-launch is a script that launches the LLM-API code on
-15 # Slurm-like systems, and can support multi-node and multi-GPU setups.
-16
-17 # Note that, the number of MPI processes should be the same as the model world
-18 # size. e.g. For tensor_parallel_size=16, you may use 2 nodes with 8 gpus for
-19 # each, or 4 nodes with 4 gpus for each or other combinations.
-20
-21 # This docker image should have tensorrt_llm installed, or you need to install
-22 # it in the task.
-23
-24 # The following variables are expected to be set in the environment:
-25 # You can set them via --export in the srun/sbatch command.
-26 # CONTAINER_IMAGE: the docker image to use, you'd better install tensorrt_llm in it, or install it in the task.
-27 # MOUNT_DIR: the directory to mount in the container
-28 # MOUNT_DEST: the destination directory in the container
-29 # WORKDIR: the working directory in the container
-30 # SOURCE_ROOT: the path to the TensorRT-LLM source
-31 # PROLOGUE: the prologue to run before the script
-32 # LOCAL_MODEL: the local model directory to use, NOTE: downloading from HF is
-33 # not supported in Slurm mode, you need to download the model and put it in
-34 # the LOCAL_MODEL directory.
-35
-36 export prepare_dataset = " $SOURCE_ROOT /benchmarks/cpp/prepare_dataset.py"
-37 export data_path = " $WORKDIR /token-norm-dist.txt"
-38
-39 echo "Preparing dataset..."
-40 srun -l \
-41 -N 1 \
-42 -n 1 \
-43 --container-image= ${ CONTAINER_IMAGE } \
-44 --container-name= "prepare-name" \
-45 --container-mounts= ${ MOUNT_DIR } :${ MOUNT_DEST } \
-46 --container-workdir= ${ WORKDIR } \
-47 --export= ALL \
-48 --mpi= pmix \
-49 bash -c "
-50 $PROLOGUE
-51 python3 $prepare_dataset \
-52 --tokenizer= $LOCAL_MODEL \
-53 --stdout token-norm-dist \
-54 --num-requests=100 \
-55 --input-mean=128 \
-56 --output-mean=128 \
-57 --input-stdev=0 \
-58 --output-stdev=0 > $data_path
-59 "
-60
-61 echo "Running benchmark..."
-62 # Just launch trtllm-bench job with trtllm-llmapi-launch command.
-63
-64 srun -l \
-65 --container-image= ${ CONTAINER_IMAGE } \
-66 --container-mounts= ${ MOUNT_DIR } :${ MOUNT_DEST } \
-67 --container-workdir= ${ WORKDIR } \
-68 --export= ALL,PYTHONPATH= ${ SOURCE_ROOT } \
-69 --mpi= pmix \
-70 bash -c "
-71 set -ex
-72 $PROLOGUE
-73 export PATH= $PATH :~/.local/bin
-74
-75 # This is optional
-76 cat > /tmp/pytorch_extra_args.txt << EOF
-77 use_cuda_graph: false
-78 cuda_graph_padding_enabled: false
-79 print_iter_log: true
-80 enable_attention_dp: false
-81 EOF
-82
-83 # launch the benchmark
-84 trtllm-llmapi-launch \
-85 trtllm-bench \
-86 --model $MODEL_NAME \
-87 --model_path $LOCAL_MODEL \
-88 throughput \
-89 --dataset $data_path \
-90 --backend pytorch \
-91 --tp 16 \
-92 --extra_llm_api_options /tmp/pytorch_extra_args.txt \
-93 $EXTRA_ARGS
-94 "
+11
+12 # NOTE: this feature is experimental and may not work on all systems.
+13 # The trtllm-llmapi-launch is a script that launches the LLM-API code on
+14 # Slurm-like systems, and can support multi-node and multi-GPU setups.
+15
+16 # Note that the number of MPI processes must match the model world size.
+17 # E.g., for tensor_parallel_size=16, you may use 2 nodes with 8 GPUs each,
+18 # 4 nodes with 4 GPUs each, or other combinations.
+19
+20 # The docker image should have tensorrt_llm installed; otherwise, install
+21 # it in the task.
+22
+23 # The following variables are expected to be set in the environment:
+24 # You can set them via --export in the srun/sbatch command.
+25 # CONTAINER_IMAGE: the docker image to use; it should have tensorrt_llm installed, or install it in the task.
+26 # MOUNT_DIR: the directory to mount in the container
+27 # MOUNT_DEST: the destination directory in the container
+28 # WORKDIR: the working directory in the container
+29 # SOURCE_ROOT: the path to the TensorRT-LLM source
+30 # PROLOGUE: the prologue to run before the script
+31 # LOCAL_MODEL: the local model directory to use. NOTE: downloading from HF
+32 # is not supported in Slurm mode; download the model beforehand and put it
+33 # in the LOCAL_MODEL directory.
+34
+35 export prepare_dataset = " $SOURCE_ROOT /benchmarks/cpp/prepare_dataset.py"
+36 export data_path = " $WORKDIR /token-norm-dist.txt"
+37
+38 echo "Preparing dataset..."
+39 srun -l \
+40 -N 1 \
+41 -n 1 \
+42 --container-image= ${ CONTAINER_IMAGE } \
+43 --container-name= "prepare-name" \
+44 --container-mounts= ${ MOUNT_DIR } :${ MOUNT_DEST } \
+45 --container-workdir= ${ WORKDIR } \
+46 --export= ALL \
+47 --mpi= pmix \
+48 bash -c "
+49 $PROLOGUE
+50 python3 $prepare_dataset \
+51 --tokenizer= $LOCAL_MODEL \
+52 --stdout token-norm-dist \
+53 --num-requests=100 \
+54 --input-mean=128 \
+55 --output-mean=128 \
+56 --input-stdev=0 \
+57 --output-stdev=0 > $data_path
+58 "
+59
+60 echo "Running benchmark..."
+61 # Launch the trtllm-bench job with the trtllm-llmapi-launch command.
+62
+63 srun -l \
+64 --container-image= ${ CONTAINER_IMAGE } \
+65 --container-mounts= ${ MOUNT_DIR } :${ MOUNT_DEST } \
+66 --container-workdir= ${ WORKDIR } \
+67 --export= ALL,PYTHONPATH= ${ SOURCE_ROOT } \
+68 --mpi= pmix \
+69 bash -c "
+70 set -ex
+71 $PROLOGUE
+72 export PATH= $PATH :~/.local/bin
+73
+74 # This is optional
+75 cat > /tmp/pytorch_extra_args.txt << EOF
+76 print_iter_log: true
+77 enable_attention_dp: false
+78 EOF
+79
+80 # launch the benchmark
+81 trtllm-llmapi-launch \
+82 trtllm-bench \
+83 --model $MODEL_NAME \
+84 --model_path $LOCAL_MODEL \
+85 throughput \
+86 --dataset $data_path \
+87 --backend pytorch \
+88 --tp 16 \
+89 --extra_llm_api_options /tmp/pytorch_extra_args.txt \
+90 $EXTRA_ARGS
+91 "
diff --git a/latest/examples/llm_mgmn_trtllm_serve.html b/latest/examples/llm_mgmn_trtllm_serve.html
index 90b29ee908..71585a394b 100644
--- a/latest/examples/llm_mgmn_trtllm_serve.html
+++ b/latest/examples/llm_mgmn_trtllm_serve.html
@@ -9,7 +9,7 @@
- Llm Mgmn Trtllm Serve — TensorRT-LLM
+ Run trtllm-serve with pytorch backend on Slurm — TensorRT-LLM
@@ -510,8 +493,8 @@
-Llm Mgmn Trtllm Serve
+Run trtllm-serve with pytorch backend on Slurm
Source NVIDIA/TensorRT-LLM .
1 #!/bin/bash
2 #SBATCH -A <account>
@@ -523,51 +506,50 @@
8 #SBATCH -e logs/trtllm-serve.err
9 #SBATCH -J trtllm-serve
10
-11 ### Run trtllm-serve with pytorch backend on Slurm
-12
-13 # NOTE, this feature is experimental and may not work on all systems.
-14 # The trtllm-llmapi-launch is a script that launches the LLM-API code on
-15 # Slurm-like systems, and can support multi-node and multi-GPU setups.
-16
-17 # Note that, the number of MPI processes should be the same as the model world
-18 # size. e.g. For tensor_parallel_size=16, you may use 2 nodes with 8 gpus for
-19 # each, or 4 nodes with 4 gpus for each or other combinations.
-20
-21 # This docker image should have tensorrt_llm installed, or you need to install
-22 # it in the task.
-23
-24 # The following variables are expected to be set in the environment:
-25 # You can set them via --export in the srun/sbatch command.
-26 # CONTAINER_IMAGE: the docker image to use, you'd better install tensorrt_llm in it, or install it in the task.
-27 # MOUNT_DIR: the directory to mount in the container
-28 # MOUNT_DEST: the destination directory in the container
-29 # WORKDIR: the working directory in the container
-30 # SOURCE_ROOT: the path to the TensorRT-LLM source
-31 # PROLOGUE: the prologue to run before the script
-32 # LOCAL_MODEL: the local model directory to use, NOTE: downloading from HF is
-33 # not supported in Slurm mode, you need to download the model and put it in
-34 # the LOCAL_MODEL directory.
-35
-36 echo "Starting trtllm-serve..."
-37 # Just launch trtllm-serve job with trtllm-llmapi-launch command.
-38 srun -l \
-39 --container-image= ${ CONTAINER_IMAGE } \
-40 --container-mounts= ${ MOUNT_DIR } :${ MOUNT_DEST } \
-41 --container-workdir= ${ WORKDIR } \
-42 --export= ALL,PYTHONPATH= ${ SOURCE_ROOT } \
-43 --mpi= pmix \
-44 bash -c "
-45 set -ex
-46 $PROLOGUE
-47 export PATH= $PATH :~/.local/bin
-48
-49 trtllm-llmapi-launch \
-50 trtllm-serve $LOCAL_MODEL \
-51 --tp_size 16 \
-52 --backend pytorch \
-53 --host 0.0.0.0 \
-54 ${ ADDITIONAL_OPTIONS }
-55 "
+11
+12 # NOTE: this feature is experimental and may not work on all systems.
+13 # The trtllm-llmapi-launch is a script that launches the LLM-API code on
+14 # Slurm-like systems, and can support multi-node and multi-GPU setups.
+15
+16 # Note that the number of MPI processes must match the model world size.
+17 # E.g., for tensor_parallel_size=16, you may use 2 nodes with 8 GPUs each,
+18 # 4 nodes with 4 GPUs each, or other combinations.
+19
+20 # The docker image should have tensorrt_llm installed; otherwise, install
+21 # it in the task.
+22
+23 # The following variables are expected to be set in the environment:
+24 # You can set them via --export in the srun/sbatch command.
+25 # CONTAINER_IMAGE: the docker image to use; it should have tensorrt_llm installed, or install it in the task.
+26 # MOUNT_DIR: the directory to mount in the container
+27 # MOUNT_DEST: the destination directory in the container
+28 # WORKDIR: the working directory in the container
+29 # SOURCE_ROOT: the path to the TensorRT-LLM source
+30 # PROLOGUE: the prologue to run before the script
+31 # LOCAL_MODEL: the local model directory to use. NOTE: downloading from HF
+32 # is not supported in Slurm mode; download the model beforehand and put it
+33 # in the LOCAL_MODEL directory.
+34
+35 echo "Starting trtllm-serve..."
+36 # Launch the trtllm-serve job with the trtllm-llmapi-launch command.
+37 srun -l \
+38 --container-image= ${ CONTAINER_IMAGE } \
+39 --container-mounts= ${ MOUNT_DIR } :${ MOUNT_DEST } \
+40 --container-workdir= ${ WORKDIR } \
+41 --export= ALL,PYTHONPATH= ${ SOURCE_ROOT } \
+42 --mpi= pmix \
+43 bash -c "
+44 set -ex
+45 $PROLOGUE
+46 export PATH= $PATH :~/.local/bin
+47
+48 trtllm-llmapi-launch \
+49 trtllm-serve $LOCAL_MODEL \
+50 --tp_size 16 \
+51 --backend pytorch \
+52 --host 0.0.0.0 \
+53 ${ ADDITIONAL_OPTIONS }
+54 "
diff --git a/latest/examples/llm_multilora.html b/latest/examples/llm_multilora.html
index 25a11b9bbe..3ccb4c7b21 100644
--- a/latest/examples/llm_multilora.html
+++ b/latest/examples/llm_multilora.html
@@ -513,66 +496,65 @@
Generate text with multiple LoRA adapters
Source NVIDIA/TensorRT-LLM .
- 1 ### Generate text with multiple LoRA adapters
- 2 from huggingface_hub import snapshot_download
- 3
- 4 from tensorrt_llm._tensorrt_engine import LLM
- 5 from tensorrt_llm.executor import LoRARequest
- 6 from tensorrt_llm.llmapi import BuildConfig
- 7 from tensorrt_llm.lora_manager import LoraConfig
+ 1 from huggingface_hub import snapshot_download
+ 2
+ 3 from tensorrt_llm import LLM
+ 4 from tensorrt_llm.executor import LoRARequest
+ 5 from tensorrt_llm.llmapi import BuildConfig
+ 6 from tensorrt_llm.lora_manager import LoraConfig
+ 7
8
- 9
-10 def main ():
-11
-12 # Download the LoRA adapters from huggingface hub.
-13 lora_dir1 = snapshot_download ( repo_id = "snshrivas10/sft-tiny-chatbot" )
-14 lora_dir2 = snapshot_download (
-15 repo_id = "givyboy/TinyLlama-1.1B-Chat-v1.0-mental-health-conversational" )
-16 lora_dir3 = snapshot_download ( repo_id = "barissglc/tinyllama-tarot-v1" )
-17
-18 # Currently, we need to pass at least one lora_dir to LLM constructor via build_config.lora_config.
-19 # This is necessary because it requires some configuration in the lora_dir to build the engine with LoRA support.
-20 build_config = BuildConfig ()
-21 build_config . lora_config = LoraConfig ( lora_dir = [ lora_dir1 ])
-22 llm = LLM ( model = "TinyLlama/TinyLlama-1.1B-Chat-v1.0" ,
-23 enable_lora = True ,
-24 max_lora_rank = 64 ,
-25 build_config = build_config )
-26
-27 # Sample prompts
-28 prompts = [
+ 9 def main ():
+10
+11 # Download the LoRA adapters from huggingface hub.
+12 lora_dir1 = snapshot_download ( repo_id = "snshrivas10/sft-tiny-chatbot" )
+13 lora_dir2 = snapshot_download (
+14 repo_id = "givyboy/TinyLlama-1.1B-Chat-v1.0-mental-health-conversational" )
+15 lora_dir3 = snapshot_download ( repo_id = "barissglc/tinyllama-tarot-v1" )
+16
+17 # Currently, we need to pass at least one lora_dir to the LLM constructor via build_config.lora_config.
+18 # This is necessary because the engine needs some configuration from the lora_dir to be built with LoRA support.
+19 build_config = BuildConfig ()
+20 build_config . lora_config = LoraConfig ( lora_dir = [ lora_dir1 ])
+21 llm = LLM ( model = "TinyLlama/TinyLlama-1.1B-Chat-v1.0" ,
+22 enable_lora = True ,
+23 max_lora_rank = 64 ,
+24 build_config = build_config )
+25
+26 # Sample prompts
+27 prompts = [
+28 "Hello, tell me a story: " ,
29 "Hello, tell me a story: " ,
-30 "Hello, tell me a story: " ,
+30 "I've noticed you seem a bit down lately. Is there anything you'd like to talk about?" ,
31 "I've noticed you seem a bit down lately. Is there anything you'd like to talk about?" ,
-32 "I've noticed you seem a bit down lately. Is there anything you'd like to talk about?" ,
+32 "In this reading, the Justice card represents a situation where" ,
33 "In this reading, the Justice card represents a situation where" ,
-34 "In this reading, the Justice card represents a situation where" ,
-35 ]
-36
-37 # At runtime, multiple LoRA adapters can be specified via lora_request; None means no LoRA used.
-38 for output in llm . generate ( prompts ,
-39 lora_request = [
-40 None ,
-41 LoRARequest ( "chatbot" , 1 , lora_dir1 ), None ,
-42 LoRARequest ( "mental-health" , 2 , lora_dir2 ),
-43 None ,
-44 LoRARequest ( "tarot" , 3 , lora_dir3 )
-45 ]):
-46 prompt = output . prompt
-47 generated_text = output . outputs [ 0 ] . text
-48 print ( f "Prompt: { prompt !r} , Generated text: { generated_text !r} " )
-49
-50 # Got output like
-51 # Prompt: 'Hello, tell me a story: ', Generated text: '1. Start with a question: "What\'s your favorite color?" 2. Ask a question that leads to a story: "What\'s your'
-52 # Prompt: 'Hello, tell me a story: ', Generated text: '1. A person is walking down the street. 2. A person is sitting on a bench. 3. A person is reading a book.'
-53 # Prompt: "I've noticed you seem a bit down lately. Is there anything you'd like to talk about?", Generated text: "\n\nJASON: (smiling) No, I'm just feeling a bit overwhelmed lately. I've been trying to"
-54 # Prompt: "I've noticed you seem a bit down lately. Is there anything you'd like to talk about?", Generated text: "\n\nJASON: (sighs) Yeah, I've been struggling with some personal issues. I've been feeling like I'm"
-55 # Prompt: 'In this reading, the Justice card represents a situation where', Generated text: 'you are being asked to make a decision that will have a significant impact on your life. The card suggests that you should take the time to consider all the options'
-56 # Prompt: 'In this reading, the Justice card represents a situation where', Generated text: 'you are being asked to make a decision that will have a significant impact on your life. It is important to take the time to consider all the options and make'
+34 ]
+35
+36 # At runtime, multiple LoRA adapters can be specified via lora_request; None means no LoRA used.
+37 for output in llm . generate ( prompts ,
+38 lora_request = [
+39 None ,
+40 LoRARequest ( "chatbot" , 1 , lora_dir1 ), None ,
+41 LoRARequest ( "mental-health" , 2 , lora_dir2 ),
+42 None ,
+43 LoRARequest ( "tarot" , 3 , lora_dir3 )
+44 ]):
+45 prompt = output . prompt
+46 generated_text = output . outputs [ 0 ] . text
+47 print ( f "Prompt: { prompt !r} , Generated text: { generated_text !r} " )
+48
+49 # Got output like
+50 # Prompt: 'Hello, tell me a story: ', Generated text: '1. Start with a question: "What\'s your favorite color?" 2. Ask a question that leads to a story: "What\'s your'
+51 # Prompt: 'Hello, tell me a story: ', Generated text: '1. A person is walking down the street. 2. A person is sitting on a bench. 3. A person is reading a book.'
+52 # Prompt: "I've noticed you seem a bit down lately. Is there anything you'd like to talk about?", Generated text: "\n\nJASON: (smiling) No, I'm just feeling a bit overwhelmed lately. I've been trying to"
+53 # Prompt: "I've noticed you seem a bit down lately. Is there anything you'd like to talk about?", Generated text: "\n\nJASON: (sighs) Yeah, I've been struggling with some personal issues. I've been feeling like I'm"
+54 # Prompt: 'In this reading, the Justice card represents a situation where', Generated text: 'you are being asked to make a decision that will have a significant impact on your life. The card suggests that you should take the time to consider all the options'
+55 # Prompt: 'In this reading, the Justice card represents a situation where', Generated text: 'you are being asked to make a decision that will have a significant impact on your life. It is important to take the time to consider all the options and make'
+56
57
-58
-59 if __name__ == '__main__' :
-60 main ()
+58 if __name__ == '__main__' :
+59 main ()
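One detail worth spelling out: the lora_request list passed to llm.generate() pairs element-wise with prompts, and None selects the base model. A small sketch of that mapping, reusing the six prompts defined above:

# Positional pairing (assumption: lora_request[i] applies to prompts[i]).
adapters = [None, "chatbot", None, "mental-health", None, "tarot"]
for i, adapter in enumerate(adapters):
    print(f"prompt {i}: adapter = {adapter or 'base model'}")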
diff --git a/latest/examples/llm_quantization.html b/latest/examples/llm_quantization.html
deleted file mode 100644
index c21c2bc380..0000000000
--- a/latest/examples/llm_quantization.html
+++ /dev/null
@@ -1,744 +0,0 @@
Generation with Quantization — TensorRT-LLM
-Generation with Quantization
-Source NVIDIA/TensorRT-LLM .
- 1 ### Generation with Quantization
- 2 import logging
- 3
- 4 import torch
- 5
- 6 from tensorrt_llm import SamplingParams
- 7 from tensorrt_llm._tensorrt_engine import LLM
- 8 from tensorrt_llm.llmapi import CalibConfig , QuantAlgo , QuantConfig
- 9
-10 major , minor = torch . cuda . get_device_capability ()
-11 enable_fp8 = major > 8 or ( major == 8 and minor >= 9 )
-12 enable_nvfp4 = major >= 10
-13
-14 quant_and_calib_configs = []
-15
-16 if not enable_nvfp4 :
-17 # Example 1: Specify int4 AWQ quantization to QuantConfig.
-18 # We can skip specifying CalibConfig or leave a None as the default value.
-19 quant_and_calib_configs . append (
-20 ( QuantConfig ( quant_algo = QuantAlgo . W4A16_AWQ ), None ))
-21
-22 if enable_fp8 :
-23 # Example 2: Specify FP8 quantization to QuantConfig.
-24 # We can create a CalibConfig to specify the calibration dataset and other details.
-25 # Note that the calibration dataset could be either HF dataset name or a path to local HF dataset.
-26 quant_and_calib_configs . append (
-27 ( QuantConfig ( quant_algo = QuantAlgo . FP8 ,
-28 kv_cache_quant_algo = QuantAlgo . FP8 ),
-29 CalibConfig ( calib_dataset = 'cnn_dailymail' ,
-30 calib_batches = 256 ,
-31 calib_max_seq_length = 256 )))
-32 else :
-33 logging . error (
-34 "FP8 quantization only works on post-ada GPUs. Skipped in the example." )
-35
-36 if enable_nvfp4 :
-37 # Example 3: Specify NVFP4 quantization to QuantConfig.
-38 quant_and_calib_configs . append (
-39 ( QuantConfig ( quant_algo = QuantAlgo . NVFP4 ,
-40 kv_cache_quant_algo = QuantAlgo . FP8 ),
-41 CalibConfig ( calib_dataset = 'cnn_dailymail' ,
-42 calib_batches = 256 ,
-43 calib_max_seq_length = 256 )))
-44 else :
-45 logging . error (
-46 "NVFP4 quantization only works on Blackwell. Skipped in the example." )
-47
-48
-49 def main ():
-50
-51 for quant_config , calib_config in quant_and_calib_configs :
-52 # The built-in end-to-end quantization is triggered according to the passed quant_config.
-53 llm = LLM ( model = "TinyLlama/TinyLlama-1.1B-Chat-v1.0" ,
-54 quant_config = quant_config ,
-55 calib_config = calib_config )
-56
-57 # Sample prompts.
-58 prompts = [
-59 "Hello, my name is" ,
-60 "The president of the United States is" ,
-61 "The capital of France is" ,
-62 "The future of AI is" ,
-63 ]
-64
-65 # Create a sampling params.
-66 sampling_params = SamplingParams ( temperature = 0.8 , top_p = 0.95 )
-67
-68 for output in llm . generate ( prompts , sampling_params ):
-69 print (
-70 f "Prompt: { output . prompt !r} , Generated text: { output . outputs [ 0 ] . text !r} "
-71 )
-72 llm . shutdown ()
-73
-74 # Got output like
-75 # Prompt: 'Hello, my name is', Generated text: 'Jane Smith. I am a resident of the city. Can you tell me more about the public services provided in the area?'
-76 # Prompt: 'The president of the United States is', Generated text: 'considered the head of state, and the vice president of the United States is considered the head of state. President and Vice President of the United States (US)'
-77 # Prompt: 'The capital of France is', Generated text: 'located in Paris, France. The population of Paris, France, is estimated to be 2 million. France is home to many famous artists, including Picasso'
-78 # Prompt: 'The future of AI is', Generated text: 'an open and collaborative project. The project is an ongoing effort, and we invite participation from members of the community.\n\nOur community is'
-79
-80
-81 if __name__ == '__main__' :
-82 main ()
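The capability gates above follow NVIDIA's SM numbering: FP8 needs SM 8.9 (Ada) or newer, NVFP4 needs SM 10.0 (Blackwell). A small sketch of the same check with architecture names added for readability (the name table is illustrative, not exhaustive):

import torch

major, minor = torch.cuda.get_device_capability()
arch = {(8, 0): "Ampere", (8, 6): "Ampere", (8, 9): "Ada",
        (9, 0): "Hopper", (10, 0): "Blackwell"}.get((major, minor), "unknown")
print(f"SM {major}.{minor} ({arch}): "
      f"FP8 supported: {(major, minor) >= (8, 9)}, "
      f"NVFP4 supported: {major >= 10}")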
diff --git a/latest/examples/openai_chat_client.html b/latest/examples/openai_chat_client.html
index 1135866bf3..6473e9319f 100644
--- a/latest/examples/openai_chat_client.html
+++ b/latest/examples/openai_chat_client.html
@@ -514,27 +497,26 @@
OpenAI Chat Client
Refer to the trtllm-serve documentation for starting a server.
Source NVIDIA/TensorRT-LLM .
- 1 ### OpenAI Chat Client
- 2
- 3 from openai import OpenAI
- 4
- 5 client = OpenAI (
- 6 base_url = "http://localhost:8000/v1" ,
- 7 api_key = "tensorrt_llm" ,
- 8 )
- 9
-10 response = client . chat . completions . create (
-11 model = "TinyLlama-1.1B-Chat-v1.0" ,
-12 messages = [{
-13 "role" : "system" ,
-14 "content" : "you are a helpful assistant"
-15 }, {
-16 "role" : "user" ,
-17 "content" : "Where is New York?"
-18 }],
-19 max_tokens = 20 ,
-20 )
-21 print ( response )
+ 1
+ 2 from openai import OpenAI
+ 3
+ 4 client = OpenAI (
+ 5 base_url = "http://localhost:8000/v1" ,
+ 6 api_key = "tensorrt_llm" ,
+ 7 )
+ 8
+ 9 response = client . chat . completions . create (
+10 model = "TinyLlama-1.1B-Chat-v1.0" ,
+11 messages = [{
+12 "role" : "system" ,
+13 "content" : "you are a helpful assistant"
+14 }, {
+15 "role" : "user" ,
+16 "content" : "Where is New York?"
+17 }],
+18 max_tokens = 20 ,
+19 )
+20 print ( response )
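The same endpoint also supports incremental token delivery through the standard OpenAI streaming parameter; a brief sketch, assuming the server above is still running:

# Streaming variant of the request above.
stream = client.chat.completions.create(
    model="TinyLlama-1.1B-Chat-v1.0",
    messages=[{"role": "user", "content": "Where is New York?"}],
    max_tokens=20,
    stream=True,
)
for chunk in stream:
    delta = chunk.choices[0].delta.content
    if delta:
        print(delta, end="", flush=True)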
diff --git a/latest/examples/openai_chat_client_for_multimodal.html b/latest/examples/openai_chat_client_for_multimodal.html
index 47b8d9561a..bb598dfcf7 100644
--- a/latest/examples/openai_chat_client_for_multimodal.html
+++ b/latest/examples/openai_chat_client_for_multimodal.html
@@ -9,7 +9,7 @@
- OpenAI Chat Client — TensorRT-LLM
+ OpenAI Chat Client for Multimodal — TensorRT-LLM
@@ -510,124 +493,123 @@
-OpenAI Chat Client
+OpenAI Chat Client for Multimodal
Refer to the trtllm-serve documentation for starting a server.
Source NVIDIA/TensorRT-LLM .
- 1 ### OpenAI Chat Client
- 2
- 3 from openai import OpenAI
- 4
- 5 from tensorrt_llm.inputs import encode_base64_content_from_url
- 6
- 7 client = OpenAI (
- 8 base_url = "http://localhost:8000/v1" ,
- 9 api_key = "tensorrt_llm" ,
- 10 )
- 11
- 12 # SINGLE IMAGE INFERENCE
- 13 response = client . chat . completions . create (
- 14 model = "Qwen2.5-VL-3B-Instruct" ,
- 15 messages = [{
- 16 "role" : "system" ,
- 17 "content" : "you are a helpful assistant"
- 18 }, {
- 19 "role" :
- 20 "user" ,
- 21 "content" : [{
- 22 "type" : "text" ,
- 23 "text" : "Describe the natural environment in the image."
- 24 }, {
- 25 "type" : "image_url" ,
- 26 "image_url" : {
- 27 "url" :
- 28 "https://huggingface.co/datasets/YiYiXu/testing-images/resolve/main/seashore.png"
- 29 }
- 30 }]
- 31 }],
- 32 max_tokens = 64 ,
- 33 )
- 34 print ( response )
- 35
- 36 # MULTI IMAGE INFERENCE
- 37 response = client . chat . completions . create (
- 38 model = "Qwen2.5-VL-3B-Instruct" ,
- 39 messages = [{
- 40 "role" : "system" ,
- 41 "content" : "you are a helpful assistant"
- 42 }, {
- 43 "role" :
- 44 "user" ,
- 45 "content" : [{
- 46 "type" : "text" ,
- 47 "text" : "Tell me the difference between two images"
- 48 }, {
- 49 "type" : "image_url" ,
- 50 "image_url" : {
- 51 "url" :
- 52 "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint.png"
- 53 }
- 54 }, {
- 55 "type" : "image_url" ,
- 56 "image_url" : {
- 57 "url" :
- 58 "https://huggingface.co/datasets/YiYiXu/testing-images/resolve/main/seashore.png"
- 59 }
- 60 }]
- 61 }],
- 62 max_tokens = 64 ,
- 63 )
- 64 print ( response )
- 65
- 66 # SINGLE VIDEO INFERENCE
- 67 response = client . chat . completions . create (
- 68 model = "Qwen2.5-VL-3B-Instruct" ,
- 69 messages = [{
- 70 "role" : "system" ,
- 71 "content" : "you are a helpful assistant"
- 72 }, {
- 73 "role" :
- 74 "user" ,
- 75 "content" : [{
- 76 "type" : "text" ,
- 77 "text" : "Tell me what you see in the video briefly."
- 78 }, {
- 79 "type" : "video_url" ,
- 80 "video_url" : {
- 81 "url" :
- 82 "https://huggingface.co/datasets/Efficient-Large-Model/VILA-inference-demos/resolve/main/OAI-sora-tokyo-walk.mp4"
- 83 }
- 84 }]
- 85 }],
- 86 max_tokens = 64 ,
- 87 )
- 88 print ( response )
- 89
- 90 # IMAGE EMBED INFERENCE
- 91 image64 = encode_base64_content_from_url (
- 92 "https://huggingface.co/datasets/YiYiXu/testing-images/resolve/main/seashore.png"
- 93 )
- 94 response = client . chat . completions . create (
- 95 model = "Qwen2.5-VL-3B-Instruct" ,
- 96 messages = [{
- 97 "role" : "system" ,
- 98 "content" : "you are a helpful assistant"
- 99 }, {
-100 "role" :
-101 "user" ,
-102 "content" : [{
-103 "type" : "text" ,
-104 "text" : "Describe the natural environment in the image."
-105 }, {
-106 "type" : "image_url" ,
-107 "image_url" : {
-108 "url" : "data:image/png;base64," + image64
-109 }
-110 }]
-111 }],
-112 max_tokens = 64 ,
-113 )
-114 print ( response )
+ 1
+ 2 from openai import OpenAI
+ 3
+ 4 from tensorrt_llm.inputs import encode_base64_content_from_url
+ 5
+ 6 client = OpenAI (
+ 7 base_url = "http://localhost:8000/v1" ,
+ 8 api_key = "tensorrt_llm" ,
+ 9 )
+ 10
+ 11 # SINGLE IMAGE INFERENCE
+ 12 response = client . chat . completions . create (
+ 13 model = "Qwen2.5-VL-3B-Instruct" ,
+ 14 messages = [{
+ 15 "role" : "system" ,
+ 16 "content" : "you are a helpful assistant"
+ 17 }, {
+ 18 "role" :
+ 19 "user" ,
+ 20 "content" : [{
+ 21 "type" : "text" ,
+ 22 "text" : "Describe the natural environment in the image."
+ 23 }, {
+ 24 "type" : "image_url" ,
+ 25 "image_url" : {
+ 26 "url" :
+ 27 "https://huggingface.co/datasets/YiYiXu/testing-images/resolve/main/seashore.png"
+ 28 }
+ 29 }]
+ 30 }],
+ 31 max_tokens = 64 ,
+ 32 )
+ 33 print ( response )
+ 34
+ 35 # MULTI IMAGE INFERENCE
+ 36 response = client . chat . completions . create (
+ 37 model = "Qwen2.5-VL-3B-Instruct" ,
+ 38 messages = [{
+ 39 "role" : "system" ,
+ 40 "content" : "you are a helpful assistant"
+ 41 }, {
+ 42 "role" :
+ 43 "user" ,
+ 44 "content" : [{
+ 45 "type" : "text" ,
+ 46 "text" : "Tell me the difference between two images"
+ 47 }, {
+ 48 "type" : "image_url" ,
+ 49 "image_url" : {
+ 50 "url" :
+ 51 "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inpaint.png"
+ 52 }
+ 53 }, {
+ 54 "type" : "image_url" ,
+ 55 "image_url" : {
+ 56 "url" :
+ 57 "https://huggingface.co/datasets/YiYiXu/testing-images/resolve/main/seashore.png"
+ 58 }
+ 59 }]
+ 60 }],
+ 61 max_tokens = 64 ,
+ 62 )
+ 63 print ( response )
+ 64
+ 65 # SINGLE VIDEO INFERENCE
+ 66 response = client . chat . completions . create (
+ 67 model = "Qwen2.5-VL-3B-Instruct" ,
+ 68 messages = [{
+ 69 "role" : "system" ,
+ 70 "content" : "you are a helpful assistant"
+ 71 }, {
+ 72 "role" :
+ 73 "user" ,
+ 74 "content" : [{
+ 75 "type" : "text" ,
+ 76 "text" : "Tell me what you see in the video briefly."
+ 77 }, {
+ 78 "type" : "video_url" ,
+ 79 "video_url" : {
+ 80 "url" :
+ 81 "https://huggingface.co/datasets/Efficient-Large-Model/VILA-inference-demos/resolve/main/OAI-sora-tokyo-walk.mp4"
+ 82 }
+ 83 }]
+ 84 }],
+ 85 max_tokens = 64 ,
+ 86 )
+ 87 print ( response )
+ 88
+ 89 # IMAGE EMBED INFERENCE
+ 90 image64 = encode_base64_content_from_url (
+ 91 "https://huggingface.co/datasets/YiYiXu/testing-images/resolve/main/seashore.png"
+ 92 )
+ 93 response = client . chat . completions . create (
+ 94 model = "Qwen2.5-VL-3B-Instruct" ,
+ 95 messages = [{
+ 96 "role" : "system" ,
+ 97 "content" : "you are a helpful assistant"
+ 98 }, {
+ 99 "role" :
+100 "user" ,
+101 "content" : [{
+102 "type" : "text" ,
+103 "text" : "Describe the natural environment in the image."
+104 }, {
+105 "type" : "image_url" ,
+106 "image_url" : {
+107 "url" : "data:image/png;base64," + image64
+108 }
+109 }]
+110 }],
+111 max_tokens = 64 ,
+112 )
+113 print ( response )
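encode_base64_content_from_url fetches a remote file and base64-encodes it; for local images the standard library achieves the same result. A sketch with a hypothetical file path:

import base64

# Hypothetical local file; any image readable as bytes works.
with open("seashore.png", "rb") as f:
    image64 = base64.b64encode(f.read()).decode("utf-8")
# Then embed it exactly as above: "url": "data:image/png;base64," + image64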
diff --git a/latest/examples/openai_completion_client.html b/latest/examples/openai_completion_client.html
index 9a3a0a0cab..3b02a30cb6 100644
--- a/latest/examples/openai_completion_client.html
+++ b/latest/examples/openai_completion_client.html
@@ -514,21 +497,20 @@
OpenAI Completion Client
Refer to the trtllm-serve documentation for starting a server.
Source NVIDIA/TensorRT-LLM .
- 1 ### OpenAI Completion Client
- 2
- 3 from openai import OpenAI
- 4
- 5 client = OpenAI (
- 6 base_url = "http://localhost:8000/v1" ,
- 7 api_key = "tensorrt_llm" ,
- 8 )
- 9
-10 response = client . completions . create (
-11 model = "TinyLlama-1.1B-Chat-v1.0" ,
-12 prompt = "Where is New York?" ,
-13 max_tokens = 20 ,
-14 )
-15 print ( response )
+ 1
+ 2 from openai import OpenAI
+ 3
+ 4 client = OpenAI (
+ 5 base_url = "http://localhost:8000/v1" ,
+ 6 api_key = "tensorrt_llm" ,
+ 7 )
+ 8
+ 9 response = client . completions . create (
+10 model = "TinyLlama-1.1B-Chat-v1.0" ,
+11 prompt = "Where is New York?" ,
+12 max_tokens = 20 ,
+13 )
+14 print ( response )
diff --git a/latest/examples/llm_auto_parallel.html b/latest/examples/openai_completion_client_for_lora.html
similarity index 70%
rename from latest/examples/llm_auto_parallel.html
rename to latest/examples/openai_completion_client_for_lora.html
index 3dfa4b8237..b44e958f71 100644
--- a/latest/examples/llm_auto_parallel.html
+++ b/latest/examples/openai_completion_client_for_lora.html
@@ -9,7 +9,7 @@
- Automatic Parallelism with LLM — TensorRT-LLM
+ Openai Completion Client For Lora — TensorRT-LLM
@@ -510,45 +493,40 @@
-
-Automatic Parallelism with LLM
-Source NVIDIA/TensorRT-LLM .
- 1 ### Automatic Parallelism with LLM
- 2 from tensorrt_llm import SamplingParams
- 3 from tensorrt_llm._tensorrt_engine import LLM
- 4
+
+Openai Completion Client For Lora
+Refer to the trtllm-serve documentation for starting a server.
+Source NVIDIA/TensorRT-LLM .
+ 1 ### OpenAI Completion Client
+ 2
+ 3 import os
+ 4 from pathlib import Path
5
- 6 def main ():
- 7 llm = LLM (
- 8 model = "TinyLlama/TinyLlama-1.1B-Chat-v1.0" ,
- 9
-10 # Enable auto parallelism
-11 auto_parallel = True ,
-12 auto_parallel_world_size = 2 )
-13
-14 prompts = [
-15 "Hello, my name is" ,
-16 "The president of the United States is" ,
-17 "The capital of France is" ,
-18 "The future of AI is" ,
-19 ]
-20
-21 sampling_params = SamplingParams ( temperature = 0.8 , top_p = 0.95 )
-22
-23 for output in llm . generate ( prompts , sampling_params ):
-24 print (
-25 f "Prompt: { output . prompt !r} , Generated text: { output . outputs [ 0 ] . text !r} "
-26 )
-27
-28 # Got output like
-29 # Prompt: 'Hello, my name is', Generated text: '\n\nJane Smith. I am a student pursuing my degree in Computer Science at [university]. I enjoy learning new things, especially technology and programming'
-30 # Prompt: 'The president of the United States is', Generated text: 'likely to nominate a new Supreme Court justice to fill the seat vacated by the death of Antonin Scalia. The Senate should vote to confirm the'
-31 # Prompt: 'The capital of France is', Generated text: 'Paris.'
-32 # Prompt: 'The future of AI is', Generated text: 'an exciting time for us. We are constantly researching, developing, and improving our platform to create the most advanced and efficient model available. We are'
-33
-34
-35 if __name__ == '__main__' :
-36 main ()
+ 6 from openai import OpenAI
+ 7
+ 8 client = OpenAI (
+ 9 base_url = "http://localhost:8000/v1" ,
+10 api_key = "tensorrt_llm" ,
+11 )
+12
+13 lora_path = Path (
+14 os . environ . get ( "LLM_MODELS_ROOT" )) / "llama-models" / "luotuo-lora-7b-0.1"
+15 assert lora_path . exists (), f "Lora path { lora_path } does not exist"
+16
+17 response = client . completions . create (
+18 model = "llama-7b-hf" ,
+19 prompt = "美国的首都在哪里? \n 答案:" ,
+20 max_tokens = 20 ,
+21 extra_body = {
+22 "lora_request" : {
+23 "lora_name" : "luotuo-lora-7b-0.1" ,
+24 "lora_int_id" : 0 ,
+25 "lora_path" : str ( lora_path )
+26 }
+27 },
+28 )
+29
+30 print ( response )
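Note that os.environ.get() returns None when LLM_MODELS_ROOT is unset, so Path(None) would raise a TypeError before the assert ever fires. A slightly more defensive sketch of the same lookup:

import os
from pathlib import Path

models_root = os.environ.get("LLM_MODELS_ROOT")
assert models_root is not None, "Set LLM_MODELS_ROOT to your models directory"
lora_path = Path(models_root) / "llama-models" / "luotuo-lora-7b-0.1"
assert lora_path.exists(), f"Lora path {lora_path} does not exist"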
diff --git a/latest/examples/trtllm_serve_examples.html b/latest/examples/trtllm_serve_examples.html
index 84881088a9..7e338820b2 100644
--- a/latest/examples/trtllm_serve_examples.html
+++ b/latest/examples/trtllm_serve_examples.html
diff --git a/latest/genindex.html b/latest/genindex.html
index e7010cf9ad..9e649804be 100644
--- a/latest/genindex.html
+++ b/latest/genindex.html
- cuda_graph_padding_enabled (tensorrt_llm.llmapi.TorchLlmArgs attribute)
-
cuda_stream_guard() (tensorrt_llm.runtime.GenerationSession method)
cuda_stream_sync() (in module tensorrt_llm.functional)
+
+ CudaGraphConfig (class in tensorrt_llm.llmapi)
cumsum() (in module tensorrt_llm.functional)
@@ -1388,13 +1373,15 @@
enable_max_num_tokens_tuning (tensorrt_llm.llmapi.DynamicBatchConfig attribute)
-
-
+
+ max_prompt_adapter_token (tensorrt_llm.llmapi.TrtLlmArgs attribute)
+
max_prompt_embedding_table_size (tensorrt_llm.llmapi.BuildConfig attribute)
padding (tensorrt_llm.functional.AttentionMaskType attribute)
+
+ padding_enabled (tensorrt_llm.llmapi.CudaGraphConfig attribute)
paged_kv_cache (tensorrt_llm.runtime.GenerationSession property)
@@ -4871,7 +4868,9 @@
tensorrt_llm::executor::KVCacheEvent::eventId (C++ member)
- tensorrt_llm::executor::KVCacheEvent::KVCacheEvent (C++ function)
+ tensorrt_llm::executor::KVCacheEvent::KVCacheEvent (C++ function)
+
+ tensorrt_llm::executor::KVCacheEvent::windowSize (C++ member)
tensorrt_llm::executor::KVCacheEventData (C++ type)
@@ -6186,11 +6185,11 @@
tensorrt_llm::runtime::BufferManager::pinnedPool (C++ function) , [1]
tensorrt_llm::runtime::BufferManager::setMem (C++ function)
-
- tensorrt_llm::runtime::BufferManager::setZero (C++ function)
+ tensorrt_llm::runtime::BufferManager::setZero (C++ function)
+
tensorrt_llm::runtime::BufferManager::~BufferManager (C++ function)
tensorrt_llm::runtime::BufferRange (C++ class)
@@ -6353,9 +6352,7 @@
tensorrt_llm::runtime::decoder::DecoderState (C++ class)
- tensorrt_llm::runtime::decoder::DecoderState::allocateSpeculativeDecodingBuffers (C++ function)
-
- tensorrt_llm::runtime::decoder::DecoderState::DecoderState (C++ function)
+ tensorrt_llm::runtime::decoder::DecoderState::DecoderState (C++ function)
tensorrt_llm::runtime::decoder::DecoderState::DecodingInputPtr (C++ type)
@@ -6388,6 +6385,8 @@
tensorrt_llm::runtime::decoder::DecoderState::getFinishReasons (C++ function)
tensorrt_llm::runtime::decoder::DecoderState::getGatheredIds (C++ function) , [1]
+
+ tensorrt_llm::runtime::decoder::DecoderState::getGenerationSteps (C++ function)
tensorrt_llm::runtime::decoder::DecoderState::getIds (C++ function) , [1]
@@ -6448,14 +6447,28 @@
tensorrt_llm::runtime::decoder::DecoderState::mSpeculativeDecodingMode (C++ member)
tensorrt_llm::runtime::decoder::DecoderState::RequestVector (C++ type)
+
+ tensorrt_llm::runtime::decoder::DecoderState::reshapeBuffers (C++ function)
+
+ tensorrt_llm::runtime::decoder::DecoderState::reshapeCacheIndirectionBuffers (C++ function)
+
+ tensorrt_llm::runtime::decoder::DecoderState::reshapeSpeculativeDecodingBuffers (C++ function)
+
+ tensorrt_llm::runtime::decoder::DecoderState::setGenerationSteps (C++ function)
tensorrt_llm::runtime::decoder::DecoderState::setNumDecodingEngineTokens (C++ function)
- tensorrt_llm::runtime::decoder::DecoderState::setup (C++ function)
+ tensorrt_llm::runtime::decoder::DecoderState::setup (C++ function)
- tensorrt_llm::runtime::decoder::DecoderState::setupCacheIndirection (C++ function)
+ tensorrt_llm::runtime::decoder::DecoderState::setupBuffers (C++ function)
- tensorrt_llm::runtime::decoder::DecoderState::setupSpeculativeDecoding (C++ function)
+ tensorrt_llm::runtime::decoder::DecoderState::setupCacheIndirection (C++ function)
+
+ tensorrt_llm::runtime::decoder::DecoderState::setupCacheIndirectionBuffers (C++ function)
+
+ tensorrt_llm::runtime::decoder::DecoderState::setupSpeculativeDecoding (C++ function)
+
+ tensorrt_llm::runtime::decoder::DecoderState::setupSpeculativeDecodingBuffers (C++ function)
tensorrt_llm::runtime::decoder::DecoderState::TensorPtr (C++ type)
@@ -6464,26 +6477,12 @@
tensorrt_llm::runtime::decoder_batch::Input (C++ class)
tensorrt_llm::runtime::decoder_batch::Input::batchSlots (C++ member)
-
- tensorrt_llm::runtime::decoder_batch::Input::batchSlotsRequestOrder (C++ member)
-
- tensorrt_llm::runtime::decoder_batch::Input::eagleInputs (C++ member)
-
- tensorrt_llm::runtime::decoder_batch::Input::eagleLastInputs (C++ member)
-
- tensorrt_llm::runtime::decoder_batch::Input::explicitDraftTokensInputs (C++ member)
-
- tensorrt_llm::runtime::decoder_batch::Input::explicitDraftTokensLastInputs (C++ member)
-
- tensorrt_llm::runtime::decoder_batch::Input::generationSteps (C++ member)
tensorrt_llm::runtime::decoder_batch::Input::Input (C++ function) , [1]
tensorrt_llm::runtime::decoder_batch::Input::logits (C++ member)
tensorrt_llm::runtime::decoder_batch::Input::maxDecoderSteps (C++ member)
-
- tensorrt_llm::runtime::decoder_batch::Input::predictedDraftLogits (C++ member)
tensorrt_llm::runtime::decoder_batch::Input::TensorConstPtr (C++ type)
@@ -6543,7 +6542,7 @@
tensorrt_llm::runtime::DecodingInput::cacheIndirection (C++ member)
- tensorrt_llm::runtime::DecodingInput::DecodingInput (C++ function)
+ tensorrt_llm::runtime::DecodingInput::DecodingInput (C++ function)
tensorrt_llm::runtime::DecodingInput::eagleInputs (C++ member)
@@ -6556,8 +6555,6 @@
tensorrt_llm::runtime::DecodingInput::EagleInputs::acceptedTokens (C++ member)
tensorrt_llm::runtime::DecodingInput::EagleInputs::chunkedContextNextTokens (C++ member)
-
- tensorrt_llm::runtime::DecodingInput::EagleInputs::EagleInputs (C++ function)
tensorrt_llm::runtime::DecodingInput::EagleInputs::lastDraftLens (C++ member)
@@ -6642,8 +6639,6 @@
tensorrt_llm::runtime::DecodingInput::generationSteps (C++ member)
tensorrt_llm::runtime::DecodingInput::lengths (C++ member)
-
- tensorrt_llm::runtime::DecodingInput::logits (C++ member)
tensorrt_llm::runtime::DecodingInput::logitsVec (C++ member)
@@ -6729,7 +6724,7 @@
tensorrt_llm::runtime::DecodingOutput::cumLogProbs (C++ member)
- tensorrt_llm::runtime::DecodingOutput::DecodingOutput (C++ function)
+ tensorrt_llm::runtime::DecodingOutput::DecodingOutput (C++ function)
tensorrt_llm::runtime::DecodingOutput::eagleBuffers (C++ member)
@@ -8983,8 +8978,6 @@
use_beam_hyps (tensorrt_llm.runtime.SamplingConfig attribute)
use_beam_search (tensorrt_llm.llmapi.SamplingParams attribute)
-
- use_cuda_graph (tensorrt_llm.llmapi.TorchLlmArgs attribute)
use_dynamic_tree (tensorrt_llm.llmapi.EagleDecodingConfig attribute)
@@ -9048,7 +9041,7 @@
validate_cuda_graph_config() (tensorrt_llm.llmapi.TorchLlmArgs method)
- validate_cuda_graph_max_batch_size() (tensorrt_llm.llmapi.TorchLlmArgs class method)
+ validate_cuda_graph_max_batch_size() (tensorrt_llm.llmapi.CudaGraphConfig class method)
validate_enable_build_cache() (tensorrt_llm.llmapi.TrtLlmArgs method)
diff --git a/latest/index.html b/latest/index.html
index ad107e1ca4..5b87d58e28 100644
--- a/latest/index.html
+++ b/latest/index.html
@@ -317,58 +324,32 @@
Quick Start Guide
+Installation
LLM API
Deploy with trtllm-serve
Model Definition API
@@ -507,8 +491,7 @@
Key Features
PyTorch Backend
Quick Start
-Quantization
-Sampling
+Features
Developer Guide
Key Components
Known Issues
diff --git a/latest/installation/build-from-source-linux.html b/latest/installation/build-from-source-linux.html
index a94761ee90..04e538d05e 100644
--- a/latest/installation/build-from-source-linux.html
+++ b/latest/installation/build-from-source-linux.html
@@ -517,6 +500,7 @@
Prerequisites
Use Docker to build and run TensorRT-LLM. Instructions to install an environment to run Docker containers for the NVIDIA platform can be found here .
+If you intend to build any TensorRT-LLM artifacts, such as any of the container images (note that pre-built develop and release container images exist on NGC), or the TensorRT-LLM Python wheel, you first need to clone the TensorRT-LLM repository:
# TensorRT-LLM uses git-lfs, which needs to be installed in advance.
apt-get update && apt-get -y install git git-lfs
git lfs install
@@ -533,7 +517,11 @@ git lfs pull
There are two options to create a TensorRT-LLM Docker image. The approximate disk space required to build the image is 63 GB.
Option 1: Build TensorRT-LLM in One Step
TensorRT-LLM contains a simple command to create a Docker image. Note that if you plan to develop on TensorRT-LLM, we recommend using Option 2: Build TensorRT-LLM Step-By-Step .
make -C docker release_build
@@ -549,11 +537,15 @@ make -C docker The make command supports the LOCAL_USER=1 argument to switch to the local user account instead of root inside the container. The examples of TensorRT-LLM are installed in the /app/tensorrt_llm/examples directory.
Since TensorRT-LLM has been built and installed, you can skip the remaining steps.
-
-Option 2: Build TensorRT-LLM Step-by-Step
+
+Option 2: Container for building TensorRT-LLM Step-by-Step
If you are looking for more flexibility, TensorRT-LLM has commands to create and run a development container in which TensorRT-LLM can be built.
-
-Create the Container
+
+
Tip
+
As an alternative to building the container image following the instructions below,
+you can pull a pre-built TensorRT-LLM Develop container image from NGC (see here for information on container tags).
+Follow the linked catalog entry to enter a new container based on the pre-built container image, with the TensorRT-LLM source repository mounted into it. You can then skip this section and continue straight to building TensorRT-LLM .
+
On systems with GNU make
Create a Docker image for development. The image will be tagged locally with tensorrt_llm/devel:latest .
@@ -595,6 +587,10 @@ make -C docker
Once inside the container, follow the next steps to build TensorRT-LLM from source.
+
+Advanced topics
+For more information on building and running various TensorRT-LLM container images,
+check NVIDIA/TensorRT-LLM .
@@ -678,7 +674,7 @@ pip install ./build/tensorrt_llm*.
TRTLLM_USE_PRECOMPILED = 1 pip install -e .
-Setting TRTLLM_USE_PRECOMPILED=1 enables downloading a prebuilt wheel of the version specified in tensorrt_llm/version.py , extracting compiled libraries into your current directory, thus skipping C++ compilation.
+Setting TRTLLM_USE_PRECOMPILED=1 enables downloading a prebuilt wheel of the version specified in tensorrt_llm/version.py , extracting compiled libraries into your current directory, thus skipping C++ compilation. This version can be overridden by specifying TRTLLM_USE_PRECOMPILED=x.y.z .
You can specify a custom URL or local path for downloading using TRTLLM_PRECOMPILED_LOCATION . For example, to use version 0.16.0 from PyPI:
TRTLLM_PRECOMPILED_LOCATION = https://pypi.nvidia.com/tensorrt-llm/tensorrt_llm-0.16.0-cp312-cp312-linux_x86_64.whl pip install -e .
diff --git a/latest/installation/grace-hopper.html b/latest/installation/containers.html
similarity index 71%
rename from latest/installation/grace-hopper.html
rename to latest/installation/containers.html
index 379fe2de4b..f503368756 100644
--- a/latest/installation/grace-hopper.html
+++ b/latest/installation/containers.html
@@ -9,7 +9,7 @@
- Installing on Grace Hopper — TensorRT-LLM
+ Pre-built release container images on NGC — TensorRT-LLM
@@ -507,50 +490,24 @@
-
-Installing on Grace Hopper
-
-Install TensorRT-LLM (tested on Ubuntu 24.04).
-
After the server starts, you can access familiar OpenAI endpoints such as v1/chat/completions .
-You can run inference such as the following example:
+You can run inference such as the following example from another terminal:
curl -X POST http://localhost:8000/v1/chat/completions \
-H "Content-Type: application/json" \
-H "Accept: application/json" \
@@ -599,7 +592,14 @@ You can run inference such as the following example:
}
-For examples and command syntax, refer to the trtllm-serve section.
+For detailed examples and command syntax, refer to the trtllm-serve section. If you are running trtllm-serve inside a Docker container, you have two options for sending API requests:
+
+Expose port 8000 to access the server from outside the container.
+Open a new terminal and use the following command to directly attach to the running container:
+
+ docker exec -it <container_id> bash
+
+
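+For the first option, the port must be published when the container is
+started, as in this minimal sketch (image name and tag are placeholders):
+docker run --rm -it --gpus all -p 8000:8000 <image>:<tag>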
Model Definition API
@@ -620,7 +620,7 @@ You can run inference such as the following example:
Use the Llama model definition from the examples/models/core/llama directory of the GitHub repository.
The model definition is a minimal example that shows some of the optimizations available in TensorRT-LLM.
# From the root of the cloned repository, start the TensorRT-LLM container
-make -C docker release_run LOCAL_USER=1
+make -C docker ngc-release_run LOCAL_USER=1 IMAGE_TAG=x.y.z
# Log in to huggingface-cli
# You can get your token from huggingface.co/settings/token
@@ -638,6 +638,17 @@ The model definition is a minimal example that shows some of the optimizations a
--output_dir ./llama-3.1-8b-engine
+Container image tags
+In the example shell commands, x.y.z corresponds to the TensorRT-LLM container
+version to use. If omitted, IMAGE_TAG will default to tensorrt_llm.__version__
+(e.g., this documentation was generated from the 1.0.0rc2 source tree).
+If this does not work, e.g., because a container for the version you are
+currently working with has not been released yet, you can try using a
+container published for a previous
+GitHub pre-release or release
+(see also the NGC Catalog).
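+For example, to pin the container used above to the release matching this
+documentation, the command from the beginning of this section becomes:
+make -C docker ngc-release_run LOCAL_USER=1 IMAGE_TAG=1.0.0rc2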
When you create a model definition with the TensorRT-LLM API, you build a graph of operations from NVIDIA TensorRT primitives that form the layers of your neural network. These operations map to specific kernels: prewritten programs for the GPU.
In this example, we included the gpt_attention plugin, which implements a FlashAttention-like fused attention kernel, and the gemm plugin, which performs matrix multiplication with FP32 accumulation. We also called out the desired precision for the full model as FP16, matching the default precision of the weights that you downloaded from Hugging Face. For more information about plugins and quantizations, refer to the Llama example and Numerical Precision section.
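As an illustration of how these choices surface on the command line, a build along the lines of the one above might enable both plugins explicitly; the flag names follow trtllm-build, while the checkpoint path is a placeholder:

trtllm-build --checkpoint_dir ./llama-3.1-8b-ckpt \
    --gemm_plugin float16 \
    --gpt_attention_plugin float16 \
    --output_dir ./llama-3.1-8b-engine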
@@ -735,6 +746,7 @@ The model definition is a minimal example that shows some of the optimizations a
@@ -840,9 +852,9 @@ The model definition is a minimal example that shows some of the optimizations a
diff --git a/latest/reference/ci-overview.html b/latest/reference/ci-overview.html
index ebdc883189..826111ce83 100644
--- a/latest/reference/ci-overview.html
+++ b/latest/reference/ci-overview.html
diff --git a/latest/examples/llm_lookahead_decoding.html b/latest/reference/dev-containers.html
similarity index 59%
rename from latest/examples/llm_lookahead_decoding.html
rename to latest/reference/dev-containers.html
index 5345b86820..bc425bf8ff 100644
--- a/latest/examples/llm_lookahead_decoding.html
+++ b/latest/reference/dev-containers.html
@@ -9,7 +9,7 @@
- Generate Text Using Lookahead Decoding — TensorRT-LLM
+ Using Dev Containers — TensorRT-LLM
@@ -510,49 +494,99 @@
-Generate Text Using Lookahead Decoding
-Source NVIDIA/TensorRT-LLM.
-### Generate Text Using Lookahead Decoding
-from tensorrt_llm._tensorrt_engine import LLM
-from tensorrt_llm.llmapi import (BuildConfig, KvCacheConfig,
-                                 LookaheadDecodingConfig, SamplingParams)
-
-
-def main():
-    # The end user can customize the build configuration with the build_config class
-    build_config = BuildConfig()
-    build_config.max_batch_size = 32
-
-    # The configuration for lookahead decoding
-    lookahead_config = LookaheadDecodingConfig(max_window_size=4,
-                                               max_ngram_size=4,
-                                               max_verification_set_size=4)
-
-    kv_cache_config = KvCacheConfig(free_gpu_memory_fraction=0.4)
-    llm = LLM(model="TinyLlama/TinyLlama-1.1B-Chat-v1.0",
-              kv_cache_config=kv_cache_config,
-              build_config=build_config,
-              speculative_config=lookahead_config)
-
-    prompt = "NVIDIA is a great company because"
-    print(f"Prompt: {prompt!r}")
-
-    sampling_params = SamplingParams(lookahead_config=lookahead_config)
-
-    output = llm.generate(prompt, sampling_params=sampling_params)
-    print(output)
-
-    # Output should be similar to:
-    # Prompt: 'NVIDIA is a great company because'
-    # RequestOutput(request_id=2, prompt='NVIDIA is a great company because', prompt_token_ids=[1, 405, 13044, 10764, 338, 263, 2107, 5001, 1363], outputs=[CompletionOutput(index=0, text='they are always pushing the envelope. They are always trying to make the best graphics cards and the best processors. They are always trying to make the best', token_ids=[896, 526, 2337, 27556, 278, 427, 21367, 29889, 2688, 526, 2337, 1811, 304, 1207, 278, 1900, 18533, 15889, 322, 278, 1900, 1889, 943, 29889, 2688, 526, 2337, 1811, 304, 1207, 278, 1900], cumulative_logprob=None, logprobs=[], finish_reason='length', stop_reason=None, generation_logits=None)], finished=True)
-
-
-if __name__ == '__main__':
-    main()
+
+Using Dev Containers
+The TensorRT-LLM repository contains a Dev Containers
+configuration in .devcontainer. These files are intended for
+use with Visual Studio Code.
+Due to the various container options supported by TensorRT-LLM (see
+Building from Source Code on Linux and
+NVIDIA/TensorRT-LLM), the Dev
+Container configuration also offers some degree of customization.
+Generally, the initializeCommand in devcontainer.json will run
+make_env.py to generate a
+.env file for docker-compose.
+Most importantly, the docker-compose.yml uses ${DEV_CONTAINER_IMAGE}
+as the base image.
+The generated .devcontainer/.env is not tracked by Git and combines
+data from the following sources:
+
+jenkins/current_image_tags.properties, which contains the image tags
+currently used by CI.
+.devcontainer/devcontainer.env, which contains common configuration
+settings and is tracked by Git.
+.devcontainer/devcontainer.env.user (optional), which is ignored by
+Git and can be edited to customize the Dev Container behavior.
+
+The source files are processed using sh, in the order in which they
+are listed above. Thus, features like command substitution are supported.
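+For example, a hypothetical line like the following in
+.devcontainer/devcontainer.env.user is evaluated with sh semantics, so the
+command substitution runs while the .env file is generated (path illustrative):
+SOURCE_DIR="$(realpath "$HOME/src/TensorRT-LLM")"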
+The following sections provide more detail on particular Dev Container
+configuration parameters which can be customized.
+
+
Note
+
After editing any of the configuration files, it may be necessary
+to execute the “Dev Containers: Reopen Folder in SSH” (if applicable) and
+“Dev Containers: Rebuild and Reopen in Container” Visual Studio Code
+commands.
+
+
+Container image selection
+By default, make_env.py will attempt to auto-select a suitable container
+image as follows:
+
+Reuse the development container image used by CI. This requires access
+to the NVIDIA internal artifact repository.
+Use the most recent
+NGC Development container image
+associated with a Git tag which is reachable from the currently checked
+out commit.
+Build a development image locally.
+
+Set DEV_CONTAINER_IMAGE=<some_uri> to bypass the aforementioned discovery
+mechanism if desired.
+By setting LOCAL_BUILD=0, the local image build can be disabled. In this
+case, execution fails if no suitable pre-built image is found.
+Setting LOCAL_BUILD=1 forces building of a local image, even if a pre-built
+image is available.
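+Putting these knobs together, a hypothetical .devcontainer/devcontainer.env.user
+could pin the image and forbid local fallback builds:
+# Placeholder URI; use any image reachable from your registry.
+DEV_CONTAINER_IMAGE=registry.example.com/trtllm-devel:latest
+LOCAL_BUILD=0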
+
+
+Volume Mounts
+Docker volume mounts can be customized by editing
+docker-compose.yml, which allows using any variables defined in .env.
+By default, the Dev Container configuration mounts the VS Code workspace into
+/workspaces/tensorrt_llm and ~/.cache/huggingface into /huggingface.
+The source paths can be overridden by setting SOURCE_DIR and HOME_DIR
+in .devcontainer/devcontainer.env.user, respectively. This is of
+particular relevance when using
+Docker Rootless Mode,
+which requires configuring UID/GID translation using a tool like bindfs.
+The Dev Container scripts contain heuristics to detect Docker Rootless
+Mode and will issue an error if these variables are not set.
+Analogous logic applies to HF_HOME.
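+For a Docker Rootless Mode setup, the corresponding user overrides might look
+like this minimal sketch, with placeholder paths for the bindfs mount points:
+SOURCE_DIR=/path/to/bindfs/tensorrt_llm
+HOME_DIR=/path/to/bindfs/home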
+
+
+Overriding Docker Compose configuration
+When starting the container, .devcontainer/docker-compose.yml
+is merged with
+.devcontainer/docker-compose.override.yml. The latter file is not
+tracked by Git and will be created by make_env.py if it does not exist.
+This mechanism can be used, e.g., to add custom volume mounts:
+# Example .devcontainer/docker-compose.override.yml
+version: "3.9"
+services:
+  tensorrt_llm-dev:
+    volumes:
+      # Uncomment the following lines to enable
+      # # Mount TRTLLM data volume:
+      # - /home/scratch.trt_llm_data/:/home/scratch.trt_llm_data/:ro
+It is possible to conditionally mount volumes by combining, e.g.,
+this method (https://stackoverflow.com/a/61954812) and shell command
+substitution in .devcontainer/devcontainer.env.user.
+If no .devcontainer/docker-compose.override.yml file is found, the Dev Container
+initialization script will create one with the contents listed above.
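+As a minimal sketch of that combination (the variable name and paths are
+hypothetical): the env.user file picks the mount source depending on whether
+the data directory exists, and the override file references the variable.
+# In .devcontainer/devcontainer.env.user:
+# falls back to an empty scratch directory when the data volume is absent.
+TRT_DATA_DIR="$( [ -d /home/scratch.trt_llm_data ] && echo /home/scratch.trt_llm_data || mktemp -d )"
+# In .devcontainer/docker-compose.override.yml, a volume entry such as
+#   - ${TRT_DATA_DIR}:/home/scratch.trt_llm_data/:ro
+# then mounts the real data directory only when it is present.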
+
diff --git a/latest/reference/memory.html b/latest/reference/memory.html
index a653bfab3f..84edac5225 100644
--- a/latest/reference/memory.html
+++ b/latest/reference/memory.html
diff --git a/latest/reference/precision.html b/latest/reference/precision.html
index 6fff505f83..1990bc96bf 100644
--- a/latest/reference/precision.html
+++ b/latest/reference/precision.html
diff --git a/latest/reference/support-matrix.html b/latest/reference/support-matrix.html
index 6419210337..2bd3d0ff88 100644
--- a/latest/reference/support-matrix.html
+++ b/latest/reference/support-matrix.html
diff --git a/latest/reference/troubleshooting.html b/latest/reference/troubleshooting.html
index 6457872f7c..e0d6ab5326 100644
--- a/latest/reference/troubleshooting.html
+++ b/latest/reference/troubleshooting.html
diff --git a/latest/release-notes.html b/latest/release-notes.html
index 0a750eb68a..145daa0e00 100644
--- a/latest/release-notes.html
+++ b/latest/release-notes.html
diff --git a/latest/scripts/disaggregated/README.html b/latest/scripts/disaggregated/README.html
index 498a039ee0..16020d92f2 100644
--- a/latest/scripts/disaggregated/README.html
+++ b/latest/scripts/disaggregated/README.html
diff --git a/latest/search.html b/latest/search.html
index bc51a6f995..e962f91225 100644
--- a/latest/search.html
+++ b/latest/search.html
diff --git a/latest/searchindex.js b/latest/searchindex.js
index f7717fcfa7..e6e32e0b9d 100644
--- a/latest/searchindex.js
+++ b/latest/searchindex.js
"--num_postprocess_workers": [[33, "cmdoption-trtllm-serve-serve-num_postprocess_workers", false]], "--port": [[33, "cmdoption-trtllm-serve-serve-port", false]], "--pp_size": [[33, "cmdoption-trtllm-serve-serve-pp_size", false]], "--reasoning_parser": [[33, "cmdoption-trtllm-serve-serve-reasoning_parser", false]], "--request_timeout": [[33, "cmdoption-trtllm-serve-disaggregated-r", false]], "--server_role": [[33, "cmdoption-trtllm-serve-serve-server_role", false]], "--server_start_timeout": [[33, "cmdoption-trtllm-serve-disaggregated-t", false]], "--tokenizer": [[33, "cmdoption-trtllm-serve-serve-tokenizer", false]], "--tp_size": [[33, "cmdoption-trtllm-serve-serve-tp_size", false]], "--trust_remote_code": [[33, "cmdoption-trtllm-serve-serve-trust_remote_code", false]], "-c": [[33, "cmdoption-trtllm-serve-disaggregated-c", false], [33, "cmdoption-trtllm-serve-disaggregated_mpi_worker-c", false]], "-l": [[33, "cmdoption-trtllm-serve-disaggregated-l", false]], "-m": [[33, "cmdoption-trtllm-serve-disaggregated-m", false]], "-r": [[33, "cmdoption-trtllm-serve-disaggregated-r", false]], "-t": [[33, "cmdoption-trtllm-serve-disaggregated-t", false]], "__init__() (tensorrt_llm.llmapi.buildcacheconfig method)": [[73, "tensorrt_llm.llmapi.BuildCacheConfig.__init__", false]], "__init__() (tensorrt_llm.llmapi.buildconfig method)": [[73, "tensorrt_llm.llmapi.BuildConfig.__init__", false]], "__init__() (tensorrt_llm.llmapi.completionoutput method)": [[73, "tensorrt_llm.llmapi.CompletionOutput.__init__", false]], "__init__() (tensorrt_llm.llmapi.disaggregatedparams method)": [[73, "tensorrt_llm.llmapi.DisaggregatedParams.__init__", false]], "__init__() (tensorrt_llm.llmapi.guideddecodingparams method)": [[73, "tensorrt_llm.llmapi.GuidedDecodingParams.__init__", false]], "__init__() (tensorrt_llm.llmapi.kvcacheretentionconfig method)": [[73, "tensorrt_llm.llmapi.KvCacheRetentionConfig.__init__", false]], "__init__() (tensorrt_llm.llmapi.kvcacheretentionconfig.tokenrangeretentionconfig method)": [[73, "tensorrt_llm.llmapi.KvCacheRetentionConfig.TokenRangeRetentionConfig.__init__", false]], "__init__() (tensorrt_llm.llmapi.lookaheaddecodingconfig method)": [[73, "tensorrt_llm.llmapi.LookaheadDecodingConfig.__init__", false]], "__init__() (tensorrt_llm.llmapi.mpicommsession method)": [[73, "tensorrt_llm.llmapi.MpiCommSession.__init__", false]], "__init__() (tensorrt_llm.llmapi.quantconfig method)": [[73, "tensorrt_llm.llmapi.QuantConfig.__init__", false]], "__init__() (tensorrt_llm.llmapi.requestoutput method)": [[73, "tensorrt_llm.llmapi.RequestOutput.__init__", false]], "__init__() (tensorrt_llm.llmapi.samplingparams method)": [[73, "tensorrt_llm.llmapi.SamplingParams.__init__", false]], "abort() (tensorrt_llm.llmapi.mpicommsession method)": [[73, "tensorrt_llm.llmapi.MpiCommSession.abort", false]], "abs() (in module tensorrt_llm.functional)": [[85, "tensorrt_llm.functional.abs", false]], "abs() (tensorrt_llm.functional.tensor method)": [[85, "tensorrt_llm.functional.Tensor.abs", false]], "activation() (in module tensorrt_llm.functional)": [[85, "tensorrt_llm.functional.activation", false]], "adalayernorm (class in tensorrt_llm.layers.normalization)": [[86, "tensorrt_llm.layers.normalization.AdaLayerNorm", false]], "adalayernormcontinuous (class in tensorrt_llm.layers.normalization)": [[86, "tensorrt_llm.layers.normalization.AdaLayerNormContinuous", false]], "adalayernormzero (class in tensorrt_llm.layers.normalization)": [[86, "tensorrt_llm.layers.normalization.AdaLayerNormZero", false]], 
"adalayernormzerosingle (class in tensorrt_llm.layers.normalization)": [[86, "tensorrt_llm.layers.normalization.AdaLayerNormZeroSingle", false]], "add() (in module tensorrt_llm.functional)": [[85, "tensorrt_llm.functional.add", false]], "add_input() (tensorrt_llm.functional.conditional method)": [[85, "tensorrt_llm.functional.Conditional.add_input", false]], "add_output() (tensorrt_llm.functional.conditional method)": [[85, "tensorrt_llm.functional.Conditional.add_output", false]], "add_sequence() (tensorrt_llm.runtime.kvcachemanager method)": [[90, "tensorrt_llm.runtime.KVCacheManager.add_sequence", false]], "add_special_tokens (tensorrt_llm.llmapi.samplingparams attribute)": [[73, "tensorrt_llm.llmapi.SamplingParams.add_special_tokens", false]], "additional_model_outputs (tensorrt_llm.llmapi.samplingparams attribute)": [[73, "tensorrt_llm.llmapi.SamplingParams.additional_model_outputs", false]], "alibi (tensorrt_llm.functional.positionembeddingtype attribute)": [[85, "tensorrt_llm.functional.PositionEmbeddingType.alibi", false]], "alibi_with_scale (tensorrt_llm.functional.positionembeddingtype attribute)": [[85, "tensorrt_llm.functional.PositionEmbeddingType.alibi_with_scale", false]], "allgather() (in module tensorrt_llm.functional)": [[85, "tensorrt_llm.functional.allgather", false]], "allreduce() (in module tensorrt_llm.functional)": [[85, "tensorrt_llm.functional.allreduce", false]], "allreducefusionop (class in tensorrt_llm.functional)": [[85, "tensorrt_llm.functional.AllReduceFusionOp", false]], "allreduceparams (class in tensorrt_llm.functional)": [[85, "tensorrt_llm.functional.AllReduceParams", false]], "allreducestrategy (class in tensorrt_llm.functional)": [[85, "tensorrt_llm.functional.AllReduceStrategy", false]], "apply_batched_logits_processor (tensorrt_llm.llmapi.samplingparams attribute)": [[73, "tensorrt_llm.llmapi.SamplingParams.apply_batched_logits_processor", false]], "apply_llama3_scaling() (tensorrt_llm.functional.ropeembeddingutils static method)": [[85, "tensorrt_llm.functional.RopeEmbeddingUtils.apply_llama3_scaling", false]], "apply_rotary_pos_emb() (tensorrt_llm.functional.ropeembeddingutils static method)": [[85, "tensorrt_llm.functional.RopeEmbeddingUtils.apply_rotary_pos_emb", false]], "apply_rotary_pos_emb_chatglm() (tensorrt_llm.functional.ropeembeddingutils static method)": [[85, "tensorrt_llm.functional.RopeEmbeddingUtils.apply_rotary_pos_emb_chatglm", false]], "apply_rotary_pos_emb_cogvlm() (tensorrt_llm.functional.ropeembeddingutils static method)": [[85, "tensorrt_llm.functional.RopeEmbeddingUtils.apply_rotary_pos_emb_cogvlm", false]], "arange() (in module tensorrt_llm.functional)": [[85, "tensorrt_llm.functional.arange", false]], "argmax() (in module tensorrt_llm.functional)": [[85, "tensorrt_llm.functional.argmax", false]], "assert_valid_quant_algo() (tensorrt_llm.models.gemmaforcausallm class method)": [[87, "tensorrt_llm.models.GemmaForCausalLM.assert_valid_quant_algo", false]], "assertion() (in module tensorrt_llm.functional)": [[85, "tensorrt_llm.functional.assertion", false]], "attention (class in tensorrt_llm.layers.attention)": [[86, "tensorrt_llm.layers.attention.Attention", false]], "attentionmaskparams (class in tensorrt_llm.layers.attention)": [[86, "tensorrt_llm.layers.attention.AttentionMaskParams", false]], "attentionmasktype (class in tensorrt_llm.functional)": [[85, "tensorrt_llm.functional.AttentionMaskType", false]], "attentionparams (class in tensorrt_llm.layers.attention)": [[86, "tensorrt_llm.layers.attention.AttentionParams", 
false]], "attn_backend (tensorrt_llm.llmapi.torchllmargs attribute)": [[73, "tensorrt_llm.llmapi.TorchLlmArgs.attn_backend", false]], "attn_processors (tensorrt_llm.models.sd3transformer2dmodel property)": [[87, "tensorrt_llm.models.SD3Transformer2DModel.attn_processors", false]], "audio_engine_dir (tensorrt_llm.runtime.multimodalmodelrunner property)": [[90, "tensorrt_llm.runtime.MultimodalModelRunner.audio_engine_dir", false]], "auto (tensorrt_llm.functional.allreducestrategy attribute)": [[85, "tensorrt_llm.functional.AllReduceStrategy.AUTO", false]], "auto_parallel (tensorrt_llm.llmapi.trtllmargs attribute)": [[73, "tensorrt_llm.llmapi.TrtLlmArgs.auto_parallel", false]], "auto_parallel_config (tensorrt_llm.llmapi.buildconfig attribute)": [[73, "tensorrt_llm.llmapi.BuildConfig.auto_parallel_config", false]], "auto_parallel_config (tensorrt_llm.llmapi.trtllmargs property)": [[73, "tensorrt_llm.llmapi.TrtLlmArgs.auto_parallel_config", false]], "auto_parallel_world_size (tensorrt_llm.llmapi.trtllmargs attribute)": [[73, "tensorrt_llm.llmapi.TrtLlmArgs.auto_parallel_world_size", false]], "autotuner_enabled (tensorrt_llm.llmapi.torchllmargs attribute)": [[73, "tensorrt_llm.llmapi.TorchLlmArgs.autotuner_enabled", false]], "avg_pool2d() (in module tensorrt_llm.functional)": [[85, "tensorrt_llm.functional.avg_pool2d", false]], "avgpool2d (class in tensorrt_llm.layers.pooling)": [[86, "tensorrt_llm.layers.pooling.AvgPool2d", false]], "axes (tensorrt_llm.functional.sliceinputtype attribute)": [[85, "tensorrt_llm.functional.SliceInputType.axes", false]], "bad (tensorrt_llm.llmapi.samplingparams attribute)": [[73, "tensorrt_llm.llmapi.SamplingParams.bad", false]], "bad_token_ids (tensorrt_llm.llmapi.samplingparams attribute)": [[73, "tensorrt_llm.llmapi.SamplingParams.bad_token_ids", false]], "bad_words_list (tensorrt_llm.runtime.samplingconfig attribute)": [[90, "tensorrt_llm.runtime.SamplingConfig.bad_words_list", false]], "baichuanforcausallm (class in tensorrt_llm.models)": [[87, "tensorrt_llm.models.BaichuanForCausalLM", false]], "batch_size (tensorrt_llm.runtime.generationsession attribute)": [[90, "tensorrt_llm.runtime.GenerationSession.batch_size", false]], "batchingtype (class in tensorrt_llm.llmapi)": [[73, "tensorrt_llm.llmapi.BatchingType", false]], "beam_search_diversity_rate (tensorrt_llm.llmapi.samplingparams attribute)": [[73, "tensorrt_llm.llmapi.SamplingParams.beam_search_diversity_rate", false]], "beam_search_diversity_rate (tensorrt_llm.runtime.samplingconfig attribute)": [[90, "tensorrt_llm.runtime.SamplingConfig.beam_search_diversity_rate", false]], "beam_width_array (tensorrt_llm.llmapi.samplingparams attribute)": [[73, "tensorrt_llm.llmapi.SamplingParams.beam_width_array", false]], "bert_attention() (in module tensorrt_llm.functional)": [[85, "tensorrt_llm.functional.bert_attention", false]], "bertattention (class in tensorrt_llm.layers.attention)": [[86, "tensorrt_llm.layers.attention.BertAttention", false]], "bertforquestionanswering (class in tensorrt_llm.models)": [[87, "tensorrt_llm.models.BertForQuestionAnswering", false]], "bertforsequenceclassification (class in tensorrt_llm.models)": [[87, "tensorrt_llm.models.BertForSequenceClassification", false]], "bertmodel (class in tensorrt_llm.models)": [[87, "tensorrt_llm.models.BertModel", false]], "best_of (tensorrt_llm.llmapi.samplingparams attribute)": [[73, "tensorrt_llm.llmapi.SamplingParams.best_of", false]], "bidirectional (tensorrt_llm.functional.attentionmasktype attribute)": [[85, 
"tensorrt_llm.functional.AttentionMaskType.bidirectional", false]], "bidirectionalglm (tensorrt_llm.functional.attentionmasktype attribute)": [[85, "tensorrt_llm.functional.AttentionMaskType.bidirectionalglm", false]], "blocksparse (tensorrt_llm.functional.attentionmasktype attribute)": [[85, "tensorrt_llm.functional.AttentionMaskType.blocksparse", false]], "blocksparseattnparams (class in tensorrt_llm.layers.attention)": [[86, "tensorrt_llm.layers.attention.BlockSparseAttnParams", false]], "bloomforcausallm (class in tensorrt_llm.models)": [[87, "tensorrt_llm.models.BloomForCausalLM", false]], "bloommodel (class in tensorrt_llm.models)": [[87, "tensorrt_llm.models.BloomModel", false]], "broadcast_helper() (in module tensorrt_llm.functional)": [[85, "tensorrt_llm.functional.broadcast_helper", false]], "buffer_allocated (tensorrt_llm.runtime.generationsession attribute)": [[90, "tensorrt_llm.runtime.GenerationSession.buffer_allocated", false]], "build_config (tensorrt_llm.llmapi.torchllmargs attribute)": [[73, "tensorrt_llm.llmapi.TorchLlmArgs.build_config", false]], "build_config (tensorrt_llm.llmapi.trtllmargs attribute)": [[73, "tensorrt_llm.llmapi.TrtLlmArgs.build_config", false]], "buildcacheconfig (class in tensorrt_llm.llmapi)": [[73, "tensorrt_llm.llmapi.BuildCacheConfig", false]], "buildconfig (class in tensorrt_llm.llmapi)": [[73, "tensorrt_llm.llmapi.BuildConfig", false]], "cache_root (tensorrt_llm.llmapi.buildcacheconfig attribute)": [[73, "tensorrt_llm.llmapi.BuildCacheConfig.cache_root", false]], "cache_root (tensorrt_llm.llmapi.buildcacheconfig property)": [[73, "id7", false]], "cachetransceiverconfig (class in tensorrt_llm.llmapi)": [[73, "tensorrt_llm.llmapi.CacheTransceiverConfig", false]], "calculate_speculative_resource() (tensorrt_llm.llmapi.lookaheaddecodingconfig method)": [[73, "tensorrt_llm.llmapi.LookaheadDecodingConfig.calculate_speculative_resource", false]], "calib_batch_size (tensorrt_llm.llmapi.calibconfig attribute)": [[73, "tensorrt_llm.llmapi.CalibConfig.calib_batch_size", false]], "calib_batches (tensorrt_llm.llmapi.calibconfig attribute)": [[73, "tensorrt_llm.llmapi.CalibConfig.calib_batches", false]], "calib_config (tensorrt_llm.llmapi.trtllmargs attribute)": [[73, "tensorrt_llm.llmapi.TrtLlmArgs.calib_config", false]], "calib_dataset (tensorrt_llm.llmapi.calibconfig attribute)": [[73, "tensorrt_llm.llmapi.CalibConfig.calib_dataset", false]], "calib_max_seq_length (tensorrt_llm.llmapi.calibconfig attribute)": [[73, "tensorrt_llm.llmapi.CalibConfig.calib_max_seq_length", false]], "calibconfig (class in tensorrt_llm.llmapi)": [[73, "tensorrt_llm.llmapi.CalibConfig", false]], "capacity_scheduler_policy (tensorrt_llm.llmapi.schedulerconfig attribute)": [[73, "tensorrt_llm.llmapi.SchedulerConfig.capacity_scheduler_policy", false]], "capacityschedulerpolicy (class in tensorrt_llm.llmapi)": [[73, "tensorrt_llm.llmapi.CapacitySchedulerPolicy", false]], "cast (class in tensorrt_llm.layers.cast)": [[86, "tensorrt_llm.layers.cast.Cast", false]], "cast() (in module tensorrt_llm.functional)": [[85, "tensorrt_llm.functional.cast", false]], "cast() (tensorrt_llm.functional.tensor method)": [[85, "tensorrt_llm.functional.Tensor.cast", false]], "categorical_sample() (in module tensorrt_llm.functional)": [[85, "tensorrt_llm.functional.categorical_sample", false]], "causal (tensorrt_llm.functional.attentionmasktype attribute)": [[85, "tensorrt_llm.functional.AttentionMaskType.causal", false]], "chatglm (tensorrt_llm.functional.positionembeddingtype attribute)": [[85, 
"tensorrt_llm.functional.PositionEmbeddingType.chatglm", false]], "chatglmconfig (class in tensorrt_llm.models)": [[87, "tensorrt_llm.models.ChatGLMConfig", false]], "chatglmforcausallm (class in tensorrt_llm.models)": [[87, "tensorrt_llm.models.ChatGLMForCausalLM", false]], "chatglmgenerationsession (class in tensorrt_llm.runtime)": [[90, "tensorrt_llm.runtime.ChatGLMGenerationSession", false]], "chatglmmodel (class in tensorrt_llm.models)": [[87, "tensorrt_llm.models.ChatGLMModel", false]], "check_config() (tensorrt_llm.models.decodermodel method)": [[87, "tensorrt_llm.models.DecoderModel.check_config", false]], "check_config() (tensorrt_llm.models.dit method)": [[87, "tensorrt_llm.models.DiT.check_config", false]], "check_config() (tensorrt_llm.models.encodermodel method)": [[87, "tensorrt_llm.models.EncoderModel.check_config", false]], "check_config() (tensorrt_llm.models.falconforcausallm method)": [[87, "tensorrt_llm.models.FalconForCausalLM.check_config", false]], "check_config() (tensorrt_llm.models.mptforcausallm method)": [[87, "tensorrt_llm.models.MPTForCausalLM.check_config", false]], "check_config() (tensorrt_llm.models.optforcausallm method)": [[87, "tensorrt_llm.models.OPTForCausalLM.check_config", false]], "check_config() (tensorrt_llm.models.phiforcausallm method)": [[87, "tensorrt_llm.models.PhiForCausalLM.check_config", false]], "check_config() (tensorrt_llm.models.pretrainedmodel method)": [[87, "tensorrt_llm.models.PretrainedModel.check_config", false]], "choices() (tensorrt_llm.functional.positionembeddingtype static method)": [[85, "tensorrt_llm.functional.PositionEmbeddingType.choices", false]], "chunk() (in module tensorrt_llm.functional)": [[85, "tensorrt_llm.functional.chunk", false]], "clamp_val (tensorrt_llm.llmapi.quantconfig attribute)": [[73, "tensorrt_llm.llmapi.QuantConfig.clamp_val", false]], "clip() (in module tensorrt_llm.functional)": [[85, "tensorrt_llm.functional.clip", false]], "clipvisiontransformer (class in tensorrt_llm.models)": [[87, "tensorrt_llm.models.CLIPVisionTransformer", false]], "cogvlmattention (class in tensorrt_llm.layers.attention)": [[86, "tensorrt_llm.layers.attention.CogVLMAttention", false]], "cogvlmconfig (class in tensorrt_llm.models)": [[87, "tensorrt_llm.models.CogVLMConfig", false]], "cogvlmforcausallm (class in tensorrt_llm.models)": [[87, "tensorrt_llm.models.CogVLMForCausalLM", false]], "cohereforcausallm (class in tensorrt_llm.models)": [[87, "tensorrt_llm.models.CohereForCausalLM", false]], "collect_and_bias() (tensorrt_llm.layers.linear.linear method)": [[86, "tensorrt_llm.layers.linear.Linear.collect_and_bias", false]], "collect_and_bias() (tensorrt_llm.layers.linear.linearbase method)": [[86, "tensorrt_llm.layers.linear.LinearBase.collect_and_bias", false]], "collect_and_bias() (tensorrt_llm.layers.linear.rowlinear method)": [[86, "tensorrt_llm.layers.linear.RowLinear.collect_and_bias", false]], "columnlinear (in module tensorrt_llm.layers.linear)": [[86, "tensorrt_llm.layers.linear.ColumnLinear", false]], "combinedtimesteplabelembeddings (class in tensorrt_llm.layers.embedding)": [[86, "tensorrt_llm.layers.embedding.CombinedTimestepLabelEmbeddings", false]], "combinedtimesteptextprojembeddings (class in tensorrt_llm.layers.embedding)": [[86, "tensorrt_llm.layers.embedding.CombinedTimestepTextProjEmbeddings", false]], "completionoutput (class in tensorrt_llm.llmapi)": [[73, "tensorrt_llm.llmapi.CompletionOutput", false]], "compute_relative_bias() (in module tensorrt_llm.layers.attention)": [[86, 
"tensorrt_llm.layers.attention.compute_relative_bias", false]], "concat() (in module tensorrt_llm.functional)": [[85, "tensorrt_llm.functional.concat", false]], "conditional (class in tensorrt_llm.functional)": [[85, "tensorrt_llm.functional.Conditional", false]], "config_class (tensorrt_llm.models.baichuanforcausallm attribute)": [[87, "tensorrt_llm.models.BaichuanForCausalLM.config_class", false]], "config_class (tensorrt_llm.models.chatglmforcausallm attribute)": [[87, "tensorrt_llm.models.ChatGLMForCausalLM.config_class", false]], "config_class (tensorrt_llm.models.cogvlmforcausallm attribute)": [[87, "tensorrt_llm.models.CogVLMForCausalLM.config_class", false]], "config_class (tensorrt_llm.models.cohereforcausallm attribute)": [[87, "tensorrt_llm.models.CohereForCausalLM.config_class", false]], "config_class (tensorrt_llm.models.dbrxforcausallm attribute)": [[87, "tensorrt_llm.models.DbrxForCausalLM.config_class", false]], "config_class (tensorrt_llm.models.deepseekforcausallm attribute)": [[87, "tensorrt_llm.models.DeepseekForCausalLM.config_class", false]], "config_class (tensorrt_llm.models.deepseekv2forcausallm attribute)": [[87, "tensorrt_llm.models.DeepseekV2ForCausalLM.config_class", false]], "config_class (tensorrt_llm.models.eagleforcausallm attribute)": [[87, "tensorrt_llm.models.EagleForCausalLM.config_class", false]], "config_class (tensorrt_llm.models.falconforcausallm attribute)": [[87, "tensorrt_llm.models.FalconForCausalLM.config_class", false]], "config_class (tensorrt_llm.models.gemmaforcausallm attribute)": [[87, "tensorrt_llm.models.GemmaForCausalLM.config_class", false]], "config_class (tensorrt_llm.models.gptforcausallm attribute)": [[87, "tensorrt_llm.models.GPTForCausalLM.config_class", false]], "config_class (tensorrt_llm.models.gptjforcausallm attribute)": [[87, "tensorrt_llm.models.GPTJForCausalLM.config_class", false]], "config_class (tensorrt_llm.models.llamaforcausallm attribute)": [[87, "tensorrt_llm.models.LLaMAForCausalLM.config_class", false]], "config_class (tensorrt_llm.models.mambaforcausallm attribute)": [[87, "tensorrt_llm.models.MambaForCausalLM.config_class", false]], "config_class (tensorrt_llm.models.medusaforcausallm attribute)": [[87, "tensorrt_llm.models.MedusaForCausalLm.config_class", false]], "config_class (tensorrt_llm.models.mllamaforcausallm attribute)": [[87, "tensorrt_llm.models.MLLaMAForCausalLM.config_class", false]], "config_class (tensorrt_llm.models.phi3forcausallm attribute)": [[87, "tensorrt_llm.models.Phi3ForCausalLM.config_class", false]], "config_class (tensorrt_llm.models.phiforcausallm attribute)": [[87, "tensorrt_llm.models.PhiForCausalLM.config_class", false]], "config_class (tensorrt_llm.models.sd3transformer2dmodel attribute)": [[87, "tensorrt_llm.models.SD3Transformer2DModel.config_class", false]], "constant() (in module tensorrt_llm.functional)": [[85, "tensorrt_llm.functional.constant", false]], "constant_to_tensor_() (in module tensorrt_llm.functional)": [[85, "tensorrt_llm.functional.constant_to_tensor_", false]], "constants_to_tensors_() (in module tensorrt_llm.functional)": [[85, "tensorrt_llm.functional.constants_to_tensors_", false]], "context (tensorrt_llm.runtime.session property)": [[90, "tensorrt_llm.runtime.Session.context", false]], "context_chunking_policy (tensorrt_llm.llmapi.schedulerconfig attribute)": [[73, "tensorrt_llm.llmapi.SchedulerConfig.context_chunking_policy", false]], "context_logits (tensorrt_llm.llmapi.requestoutput attribute)": [[73, 
"tensorrt_llm.llmapi.RequestOutput.context_logits", false]], "context_mem_size (tensorrt_llm.runtime.generationsession property)": [[90, "tensorrt_llm.runtime.GenerationSession.context_mem_size", false]], "context_mem_size (tensorrt_llm.runtime.session property)": [[90, "tensorrt_llm.runtime.Session.context_mem_size", false]], "contextchunkingpolicy (class in tensorrt_llm.llmapi)": [[73, "tensorrt_llm.llmapi.ContextChunkingPolicy", false]], "conv1d (class in tensorrt_llm.layers.conv)": [[86, "tensorrt_llm.layers.conv.Conv1d", false]], "conv1d() (in module tensorrt_llm.functional)": [[85, "tensorrt_llm.functional.conv1d", false]], "conv2d (class in tensorrt_llm.layers.conv)": [[86, "tensorrt_llm.layers.conv.Conv2d", false]], "conv2d() (in module tensorrt_llm.functional)": [[85, "tensorrt_llm.functional.conv2d", false]], "conv3d (class in tensorrt_llm.layers.conv)": [[86, "tensorrt_llm.layers.conv.Conv3d", false]], "conv3d() (in module tensorrt_llm.functional)": [[85, "tensorrt_llm.functional.conv3d", false]], "conv_kernel (tensorrt_llm.runtime.generationsession property)": [[90, "tensorrt_llm.runtime.GenerationSession.conv_kernel", false]], "conv_kernel (tensorrt_llm.runtime.modelconfig attribute)": [[90, "tensorrt_llm.runtime.ModelConfig.conv_kernel", false]], "conv_transpose2d() (in module tensorrt_llm.functional)": [[85, "tensorrt_llm.functional.conv_transpose2d", false]], "convert_load_format() (tensorrt_llm.llmapi.torchllmargs class method)": [[73, "tensorrt_llm.llmapi.TorchLlmArgs.convert_load_format", false]], "convtranspose2d (class in tensorrt_llm.layers.conv)": [[86, "tensorrt_llm.layers.conv.ConvTranspose2d", false]], "copy_on_partial_reuse (tensorrt_llm.llmapi.kvcacheconfig attribute)": [[73, "tensorrt_llm.llmapi.KvCacheConfig.copy_on_partial_reuse", false]], "cos() (in module tensorrt_llm.functional)": [[85, "tensorrt_llm.functional.cos", false]], "cp_split_plugin() (in module tensorrt_llm.functional)": [[85, "tensorrt_llm.functional.cp_split_plugin", false]], "cpp_e2e (tensorrt_llm.runtime.multimodalmodelrunner property)": [[90, "tensorrt_llm.runtime.MultimodalModelRunner.cpp_e2e", false]], "cpp_llm_only (tensorrt_llm.runtime.multimodalmodelrunner property)": [[90, "tensorrt_llm.runtime.MultimodalModelRunner.cpp_llm_only", false]], "create_allreduce_plugin() (in module tensorrt_llm.functional)": [[85, "tensorrt_llm.functional.create_allreduce_plugin", false]], "create_attention_const_params() (tensorrt_llm.layers.attention.attention static method)": [[86, "tensorrt_llm.layers.attention.Attention.create_attention_const_params", false]], "create_fake_weight() (tensorrt_llm.functional.ropeembeddingutils static method)": [[85, "tensorrt_llm.functional.RopeEmbeddingUtils.create_fake_weight", false]], "create_runtime_defaults() (tensorrt_llm.models.pretrainedconfig static method)": [[87, "tensorrt_llm.models.PretrainedConfig.create_runtime_defaults", false]], "create_sinusoidal_positions() (tensorrt_llm.functional.ropeembeddingutils static method)": [[85, "tensorrt_llm.functional.RopeEmbeddingUtils.create_sinusoidal_positions", false]], "create_sinusoidal_positions_for_attention_plugin() (tensorrt_llm.functional.ropeembeddingutils static method)": [[85, "tensorrt_llm.functional.RopeEmbeddingUtils.create_sinusoidal_positions_for_attention_plugin", false]], "create_sinusoidal_positions_for_cogvlm_attention_plugin() (tensorrt_llm.functional.ropeembeddingutils static method)": [[85, "tensorrt_llm.functional.RopeEmbeddingUtils.create_sinusoidal_positions_for_cogvlm_attention_plugin", 
false]], "create_sinusoidal_positions_long_rope() (tensorrt_llm.functional.ropeembeddingutils method)": [[85, "tensorrt_llm.functional.RopeEmbeddingUtils.create_sinusoidal_positions_long_rope", false]], "create_sinusoidal_positions_yarn() (tensorrt_llm.functional.ropeembeddingutils static method)": [[85, "tensorrt_llm.functional.RopeEmbeddingUtils.create_sinusoidal_positions_yarn", false]], "cropped_pos_embed() (tensorrt_llm.layers.embedding.sd3patchembed method)": [[86, "tensorrt_llm.layers.embedding.SD3PatchEmbed.cropped_pos_embed", false]], "cross_attention (tensorrt_llm.runtime.generationsession property)": [[90, "tensorrt_llm.runtime.GenerationSession.cross_attention", false]], "cross_attention (tensorrt_llm.runtime.modelconfig attribute)": [[90, "tensorrt_llm.runtime.ModelConfig.cross_attention", false]], "cross_kv_cache_fraction (tensorrt_llm.llmapi.kvcacheconfig attribute)": [[73, "tensorrt_llm.llmapi.KvCacheConfig.cross_kv_cache_fraction", false]], "ctx_request_id (tensorrt_llm.llmapi.disaggregatedparams attribute)": [[73, "tensorrt_llm.llmapi.DisaggregatedParams.ctx_request_id", false]], "cuda_graph_batch_sizes (tensorrt_llm.llmapi.torchllmargs attribute)": [[73, "tensorrt_llm.llmapi.TorchLlmArgs.cuda_graph_batch_sizes", false]], "cuda_graph_cache_size (tensorrt_llm.llmapi.extendedruntimeperfknobconfig attribute)": [[73, "tensorrt_llm.llmapi.ExtendedRuntimePerfKnobConfig.cuda_graph_cache_size", false]], "cuda_graph_max_batch_size (tensorrt_llm.llmapi.torchllmargs attribute)": [[73, "tensorrt_llm.llmapi.TorchLlmArgs.cuda_graph_max_batch_size", false]], "cuda_graph_mode (tensorrt_llm.llmapi.extendedruntimeperfknobconfig attribute)": [[73, "tensorrt_llm.llmapi.ExtendedRuntimePerfKnobConfig.cuda_graph_mode", false]], "cuda_graph_mode (tensorrt_llm.runtime.generationsession attribute)": [[90, "tensorrt_llm.runtime.GenerationSession.cuda_graph_mode", false]], "cuda_graph_padding_enabled (tensorrt_llm.llmapi.torchllmargs attribute)": [[73, "tensorrt_llm.llmapi.TorchLlmArgs.cuda_graph_padding_enabled", false]], "cuda_stream_guard() (tensorrt_llm.runtime.generationsession method)": [[90, "tensorrt_llm.runtime.GenerationSession.cuda_stream_guard", false]], "cuda_stream_sync() (in module tensorrt_llm.functional)": [[85, "tensorrt_llm.functional.cuda_stream_sync", false]], "cumsum() (in module tensorrt_llm.functional)": [[85, "tensorrt_llm.functional.cumsum", false]], "cumulative_logprob (tensorrt_llm.llmapi.completionoutput attribute)": [[73, "tensorrt_llm.llmapi.CompletionOutput.cumulative_logprob", false]], "custom_mask (tensorrt_llm.functional.attentionmasktype attribute)": [[85, "tensorrt_llm.functional.AttentionMaskType.custom_mask", false]], "data (tensorrt_llm.functional.sliceinputtype attribute)": [[85, "tensorrt_llm.functional.SliceInputType.data", false]], "dbrxconfig (class in tensorrt_llm.models)": [[87, "tensorrt_llm.models.DbrxConfig", false]], "dbrxforcausallm (class in tensorrt_llm.models)": [[87, "tensorrt_llm.models.DbrxForCausalLM", false]], "debug_mode (tensorrt_llm.runtime.generationsession attribute)": [[90, "tensorrt_llm.runtime.GenerationSession.debug_mode", false]], "debug_tensors_to_save (tensorrt_llm.runtime.generationsession attribute)": [[90, "tensorrt_llm.runtime.GenerationSession.debug_tensors_to_save", false]], "decode() (tensorrt_llm.runtime.generationsession method)": [[90, "tensorrt_llm.runtime.GenerationSession.decode", false]], "decode_batch() (tensorrt_llm.runtime.generationsession method)": [[90, "tensorrt_llm.runtime.GenerationSession.decode_batch", 
false]], "decode_duration_ms (tensorrt_llm.llmapi.kvcacheretentionconfig property)": [[73, "tensorrt_llm.llmapi.KvCacheRetentionConfig.decode_duration_ms", false]], "decode_regular() (tensorrt_llm.runtime.generationsession method)": [[90, "tensorrt_llm.runtime.GenerationSession.decode_regular", false]], "decode_retention_priority (tensorrt_llm.llmapi.kvcacheretentionconfig property)": [[73, "tensorrt_llm.llmapi.KvCacheRetentionConfig.decode_retention_priority", false]], "decode_stream() (tensorrt_llm.runtime.generationsession method)": [[90, "tensorrt_llm.runtime.GenerationSession.decode_stream", false]], "decode_words_list() (in module tensorrt_llm.runtime)": [[90, "tensorrt_llm.runtime.decode_words_list", false]], "decodermodel (class in tensorrt_llm.models)": [[87, "tensorrt_llm.models.DecoderModel", false]], "decoding_config (tensorrt_llm.llmapi.torchllmargs attribute)": [[73, "tensorrt_llm.llmapi.TorchLlmArgs.decoding_config", false]], "decoding_config (tensorrt_llm.llmapi.trtllmargs attribute)": [[73, "tensorrt_llm.llmapi.TrtLlmArgs.decoding_config", false]], "decoding_type (tensorrt_llm.llmapi.drafttargetdecodingconfig attribute)": [[73, "tensorrt_llm.llmapi.DraftTargetDecodingConfig.decoding_type", false]], "decoding_type (tensorrt_llm.llmapi.eagledecodingconfig attribute)": [[73, "tensorrt_llm.llmapi.EagleDecodingConfig.decoding_type", false]], "decoding_type (tensorrt_llm.llmapi.lookaheaddecodingconfig attribute)": [[73, "tensorrt_llm.llmapi.LookaheadDecodingConfig.decoding_type", false]], "decoding_type (tensorrt_llm.llmapi.medusadecodingconfig attribute)": [[73, "tensorrt_llm.llmapi.MedusaDecodingConfig.decoding_type", false]], "decoding_type (tensorrt_llm.llmapi.mtpdecodingconfig attribute)": [[73, "tensorrt_llm.llmapi.MTPDecodingConfig.decoding_type", false]], "decoding_type (tensorrt_llm.llmapi.ngramdecodingconfig attribute)": [[73, "tensorrt_llm.llmapi.NGramDecodingConfig.decoding_type", false]], "deepseekforcausallm (class in tensorrt_llm.models)": [[87, "tensorrt_llm.models.DeepseekForCausalLM", false]], "deepseekv2attention (class in tensorrt_llm.layers.attention)": [[86, "tensorrt_llm.layers.attention.DeepseekV2Attention", false]], "deepseekv2forcausallm (class in tensorrt_llm.models)": [[87, "tensorrt_llm.models.DeepseekV2ForCausalLM", false]], "default_plugin_config() (tensorrt_llm.models.cogvlmforcausallm method)": [[87, "tensorrt_llm.models.CogVLMForCausalLM.default_plugin_config", false]], "default_plugin_config() (tensorrt_llm.models.llamaforcausallm method)": [[87, "tensorrt_llm.models.LLaMAForCausalLM.default_plugin_config", false]], "deferred (tensorrt_llm.functional.positionembeddingtype attribute)": [[85, "tensorrt_llm.functional.PositionEmbeddingType.deferred", false]], "detokenize (tensorrt_llm.llmapi.samplingparams attribute)": [[73, "tensorrt_llm.llmapi.SamplingParams.detokenize", false]], "device (tensorrt_llm.llmapi.calibconfig attribute)": [[73, "tensorrt_llm.llmapi.CalibConfig.device", false]], "device (tensorrt_llm.runtime.generationsession attribute)": [[90, "tensorrt_llm.runtime.GenerationSession.device", false]], "diffusersattention (class in tensorrt_llm.layers.attention)": [[86, "tensorrt_llm.layers.attention.DiffusersAttention", false]], "dimrange (class in tensorrt_llm.functional)": [[85, "tensorrt_llm.functional.DimRange", false]], "directory (tensorrt_llm.llmapi.kvcacheretentionconfig property)": [[73, "tensorrt_llm.llmapi.KvCacheRetentionConfig.directory", false]], "disable (tensorrt_llm.functional.sidestreamidtype attribute)": [[85, 
"tensorrt_llm.functional.SideStreamIDType.disable", false]], "disable_forward_chunking() (tensorrt_llm.models.sd3transformer2dmodel method)": [[87, "tensorrt_llm.models.SD3Transformer2DModel.disable_forward_chunking", false]], "disable_overlap_scheduler (tensorrt_llm.llmapi.torchllmargs attribute)": [[73, "tensorrt_llm.llmapi.TorchLlmArgs.disable_overlap_scheduler", false]], "disaggregated_params (tensorrt_llm.llmapi.completionoutput attribute)": [[73, "tensorrt_llm.llmapi.CompletionOutput.disaggregated_params", false]], "disaggregatedparams (class in tensorrt_llm.llmapi)": [[73, "tensorrt_llm.llmapi.DisaggregatedParams", false]], "dit (class in tensorrt_llm.models)": [[87, "tensorrt_llm.models.DiT", false]], "div() (in module tensorrt_llm.functional)": [[85, "tensorrt_llm.functional.div", false]], "dora_plugin() (in module tensorrt_llm.functional)": [[85, "tensorrt_llm.functional.dora_plugin", false]], "draft_tokens (tensorrt_llm.llmapi.disaggregatedparams attribute)": [[73, "tensorrt_llm.llmapi.DisaggregatedParams.draft_tokens", false]], "draft_tokens_external (tensorrt_llm.models.speculativedecodingmode attribute)": [[87, "tensorrt_llm.models.SpeculativeDecodingMode.DRAFT_TOKENS_EXTERNAL", false]], "drafttargetdecodingconfig (class in tensorrt_llm.llmapi)": [[73, "tensorrt_llm.llmapi.DraftTargetDecodingConfig", false]], "dry_run (tensorrt_llm.llmapi.buildconfig attribute)": [[73, "tensorrt_llm.llmapi.BuildConfig.dry_run", false]], "dtype (tensorrt_llm.functional.tensor property)": [[85, "tensorrt_llm.functional.Tensor.dtype", false]], "dtype (tensorrt_llm.runtime.generationsession property)": [[90, "tensorrt_llm.runtime.GenerationSession.dtype", false]], "dtype (tensorrt_llm.runtime.modelconfig attribute)": [[90, "tensorrt_llm.runtime.ModelConfig.dtype", false]], "dtype (tensorrt_llm.runtime.modelrunner property)": [[90, "tensorrt_llm.runtime.ModelRunner.dtype", false]], "dtype (tensorrt_llm.runtime.modelrunnercpp property)": [[90, "tensorrt_llm.runtime.ModelRunnerCpp.dtype", false]], "dtype (tensorrt_llm.runtime.tensorinfo attribute)": [[90, "tensorrt_llm.runtime.TensorInfo.dtype", false]], "dump_debug_buffers() (tensorrt_llm.runtime.generationsession method)": [[90, "tensorrt_llm.runtime.GenerationSession.dump_debug_buffers", false]], "duration_ms (tensorrt_llm.llmapi.kvcacheretentionconfig.tokenrangeretentionconfig property)": [[73, "tensorrt_llm.llmapi.KvCacheRetentionConfig.TokenRangeRetentionConfig.duration_ms", false]], "dynamic (tensorrt_llm.functional.rotaryscalingtype attribute)": [[85, "tensorrt_llm.functional.RotaryScalingType.dynamic", false]], "dynamic_batch_config (tensorrt_llm.llmapi.schedulerconfig attribute)": [[73, "tensorrt_llm.llmapi.SchedulerConfig.dynamic_batch_config", false]], "dynamic_batch_moving_average_window (tensorrt_llm.llmapi.dynamicbatchconfig attribute)": [[73, "tensorrt_llm.llmapi.DynamicBatchConfig.dynamic_batch_moving_average_window", false]], "dynamic_tree_max_topk (tensorrt_llm.llmapi.eagledecodingconfig attribute)": [[73, "tensorrt_llm.llmapi.EagleDecodingConfig.dynamic_tree_max_topK", false]], "dynamicbatchconfig (class in tensorrt_llm.llmapi)": [[73, "tensorrt_llm.llmapi.DynamicBatchConfig", false]], "eagle (tensorrt_llm.models.speculativedecodingmode attribute)": [[87, "tensorrt_llm.models.SpeculativeDecodingMode.EAGLE", false]], "eagle3_one_model (tensorrt_llm.llmapi.eagledecodingconfig attribute)": [[73, "tensorrt_llm.llmapi.EagleDecodingConfig.eagle3_one_model", false]], "eagle_choices (tensorrt_llm.llmapi.eagledecodingconfig attribute)": 
[[73, "tensorrt_llm.llmapi.EagleDecodingConfig.eagle_choices", false]], "eagledecodingconfig (class in tensorrt_llm.llmapi)": [[73, "tensorrt_llm.llmapi.EagleDecodingConfig", false]], "eagleforcausallm (class in tensorrt_llm.models)": [[87, "tensorrt_llm.models.EagleForCausalLM", false]], "early_stop_criteria() (tensorrt_llm.runtime.generationsession method)": [[90, "tensorrt_llm.runtime.GenerationSession.early_stop_criteria", false]], "early_stopping (tensorrt_llm.llmapi.samplingparams attribute)": [[73, "tensorrt_llm.llmapi.SamplingParams.early_stopping", false]], "early_stopping (tensorrt_llm.runtime.samplingconfig attribute)": [[90, "tensorrt_llm.runtime.SamplingConfig.early_stopping", false]], "einsum() (in module tensorrt_llm.functional)": [[85, "tensorrt_llm.functional.einsum", false]], "elementwise_binary() (in module tensorrt_llm.functional)": [[85, "tensorrt_llm.functional.elementwise_binary", false]], "embedding (class in tensorrt_llm.layers.embedding)": [[86, "tensorrt_llm.layers.embedding.Embedding", false]], "embedding() (in module tensorrt_llm.functional)": [[85, "tensorrt_llm.functional.embedding", false]], "embedding_bias (tensorrt_llm.llmapi.samplingparams attribute)": [[73, "tensorrt_llm.llmapi.SamplingParams.embedding_bias", false]], "embedding_parallel_mode (tensorrt_llm.llmapi.trtllmargs attribute)": [[73, "tensorrt_llm.llmapi.TrtLlmArgs.embedding_parallel_mode", false]], "enable_batch_size_tuning (tensorrt_llm.llmapi.dynamicbatchconfig attribute)": [[73, "tensorrt_llm.llmapi.DynamicBatchConfig.enable_batch_size_tuning", false]], "enable_block_reuse (tensorrt_llm.llmapi.kvcacheconfig attribute)": [[73, "tensorrt_llm.llmapi.KvCacheConfig.enable_block_reuse", false]], "enable_build_cache (tensorrt_llm.llmapi.trtllmargs attribute)": [[73, "tensorrt_llm.llmapi.TrtLlmArgs.enable_build_cache", false]], "enable_context_fmha_fp32_acc (tensorrt_llm.llmapi.extendedruntimeperfknobconfig attribute)": [[73, "tensorrt_llm.llmapi.ExtendedRuntimePerfKnobConfig.enable_context_fmha_fp32_acc", false]], "enable_debug_output (tensorrt_llm.llmapi.buildconfig attribute)": [[73, "tensorrt_llm.llmapi.BuildConfig.enable_debug_output", false]], "enable_forward_chunking() (tensorrt_llm.models.sd3transformer2dmodel method)": [[87, "tensorrt_llm.models.SD3Transformer2DModel.enable_forward_chunking", false]], "enable_fullgraph (tensorrt_llm.llmapi.torchcompileconfig attribute)": [[73, "tensorrt_llm.llmapi.TorchCompileConfig.enable_fullgraph", false]], "enable_inductor (tensorrt_llm.llmapi.torchcompileconfig attribute)": [[73, "tensorrt_llm.llmapi.TorchCompileConfig.enable_inductor", false]], "enable_iter_perf_stats (tensorrt_llm.llmapi.torchllmargs attribute)": [[73, "tensorrt_llm.llmapi.TorchLlmArgs.enable_iter_perf_stats", false]], "enable_iter_req_stats (tensorrt_llm.llmapi.torchllmargs attribute)": [[73, "tensorrt_llm.llmapi.TorchLlmArgs.enable_iter_req_stats", false]], "enable_layerwise_nvtx_marker (tensorrt_llm.llmapi.torchllmargs attribute)": [[73, "tensorrt_llm.llmapi.TorchLlmArgs.enable_layerwise_nvtx_marker", false]], "enable_max_num_tokens_tuning (tensorrt_llm.llmapi.dynamicbatchconfig attribute)": [[73, "tensorrt_llm.llmapi.DynamicBatchConfig.enable_max_num_tokens_tuning", false]], "enable_min_latency (tensorrt_llm.llmapi.torchllmargs attribute)": [[73, "tensorrt_llm.llmapi.TorchLlmArgs.enable_min_latency", false]], "enable_partial_reuse (tensorrt_llm.llmapi.kvcacheconfig attribute)": [[73, "tensorrt_llm.llmapi.KvCacheConfig.enable_partial_reuse", false]], "enable_piecewise_cuda_graph 
(tensorrt_llm.llmapi.torchcompileconfig attribute)": [[73, "tensorrt_llm.llmapi.TorchCompileConfig.enable_piecewise_cuda_graph", false]], "enable_tqdm (tensorrt_llm.llmapi.trtllmargs attribute)": [[73, "tensorrt_llm.llmapi.TrtLlmArgs.enable_tqdm", false]], "enable_trtllm_sampler (tensorrt_llm.llmapi.torchllmargs attribute)": [[73, "tensorrt_llm.llmapi.TorchLlmArgs.enable_trtllm_sampler", false]], "enable_userbuffers (tensorrt_llm.llmapi.torchcompileconfig attribute)": [[73, "tensorrt_llm.llmapi.TorchCompileConfig.enable_userbuffers", false]], "encdecmodelrunner (class in tensorrt_llm.runtime)": [[90, "tensorrt_llm.runtime.EncDecModelRunner", false]], "encoder_run() (tensorrt_llm.runtime.encdecmodelrunner method)": [[90, "tensorrt_llm.runtime.EncDecModelRunner.encoder_run", false]], "encodermodel (class in tensorrt_llm.models)": [[87, "tensorrt_llm.models.EncoderModel", false]], "end_id (tensorrt_llm.llmapi.samplingparams attribute)": [[73, "tensorrt_llm.llmapi.SamplingParams.end_id", false]], "end_id (tensorrt_llm.runtime.samplingconfig attribute)": [[90, "tensorrt_llm.runtime.SamplingConfig.end_id", false]], "engine (tensorrt_llm.runtime.session property)": [[90, "tensorrt_llm.runtime.Session.engine", false]], "engine_inspector (tensorrt_llm.runtime.generationsession property)": [[90, "tensorrt_llm.runtime.GenerationSession.engine_inspector", false]], "eq() (in module tensorrt_llm.functional)": [[85, "tensorrt_llm.functional.eq", false]], "equal_progress (tensorrt_llm.llmapi.contextchunkingpolicy attribute)": [[73, "tensorrt_llm.llmapi.ContextChunkingPolicy.EQUAL_PROGRESS", false]], "event_buffer_max_size (tensorrt_llm.llmapi.kvcacheconfig attribute)": [[73, "tensorrt_llm.llmapi.KvCacheConfig.event_buffer_max_size", false]], "exclude_input_from_output (tensorrt_llm.llmapi.samplingparams attribute)": [[73, "tensorrt_llm.llmapi.SamplingParams.exclude_input_from_output", false]], "exclude_modules (tensorrt_llm.llmapi.quantconfig attribute)": [[73, "tensorrt_llm.llmapi.QuantConfig.exclude_modules", false]], "exp() (in module tensorrt_llm.functional)": [[85, "tensorrt_llm.functional.exp", false]], "expand() (in module tensorrt_llm.functional)": [[85, "tensorrt_llm.functional.expand", false]], "expand_dims() (in module tensorrt_llm.functional)": [[85, "tensorrt_llm.functional.expand_dims", false]], "expand_dims_like() (in module tensorrt_llm.functional)": [[85, "tensorrt_llm.functional.expand_dims_like", false]], "expand_mask() (in module tensorrt_llm.functional)": [[85, "tensorrt_llm.functional.expand_mask", false]], "explicit_draft_tokens (tensorrt_llm.models.speculativedecodingmode attribute)": [[87, "tensorrt_llm.models.SpeculativeDecodingMode.EXPLICIT_DRAFT_TOKENS", false]], "extended_runtime_perf_knob_config (tensorrt_llm.llmapi.trtllmargs attribute)": [[73, "tensorrt_llm.llmapi.TrtLlmArgs.extended_runtime_perf_knob_config", false]], "extendedruntimeperfknobconfig (class in tensorrt_llm.llmapi)": [[73, "tensorrt_llm.llmapi.ExtendedRuntimePerfKnobConfig", false]], "extra_resource_managers (tensorrt_llm.llmapi.torchllmargs property)": [[73, "tensorrt_llm.llmapi.TorchLlmArgs.extra_resource_managers", false]], "falconconfig (class in tensorrt_llm.models)": [[87, "tensorrt_llm.models.FalconConfig", false]], "falconforcausallm (class in tensorrt_llm.models)": [[87, "tensorrt_llm.models.FalconForCausalLM", false]], "falconmodel (class in tensorrt_llm.models)": [[87, "tensorrt_llm.models.FalconModel", false]], "fast_build (tensorrt_llm.llmapi.trtllmargs attribute)": [[73, 
"tensorrt_llm.llmapi.TrtLlmArgs.fast_build", false]], "fc_gate() (tensorrt_llm.layers.mlp.fusedgatedmlp method)": [[86, "tensorrt_llm.layers.mlp.FusedGatedMLP.fc_gate", false]], "fc_gate_dora() (in module tensorrt_llm.layers.mlp)": [[86, "tensorrt_llm.layers.mlp.fc_gate_dora", false]], "fc_gate_lora() (in module tensorrt_llm.layers.mlp)": [[86, "tensorrt_llm.layers.mlp.fc_gate_lora", false]], "fc_gate_plugin() (tensorrt_llm.layers.mlp.fusedgatedmlp method)": [[86, "tensorrt_llm.layers.mlp.FusedGatedMLP.fc_gate_plugin", false]], "field_name (tensorrt_llm.llmapi.torchllmargs attribute)": [[73, "id12", false], [73, "id15", false], [73, "id18", false], [73, "tensorrt_llm.llmapi.TorchLlmArgs.field_name", false]], "field_name (tensorrt_llm.llmapi.trtllmargs attribute)": [[73, "id21", false], [73, "id24", false], [73, "id27", false], [73, "id30", false], [73, "id33", false], [73, "tensorrt_llm.llmapi.TrtLlmArgs.field_name", false]], "fill_attention_const_params_for_long_rope() (tensorrt_llm.layers.attention.attentionparams method)": [[86, "tensorrt_llm.layers.attention.AttentionParams.fill_attention_const_params_for_long_rope", false]], "fill_attention_const_params_for_rope() (tensorrt_llm.layers.attention.attentionparams method)": [[86, "tensorrt_llm.layers.attention.AttentionParams.fill_attention_const_params_for_rope", false]], "fill_attention_params() (tensorrt_llm.layers.attention.attention static method)": [[86, "tensorrt_llm.layers.attention.Attention.fill_attention_params", false]], "fill_none_tensor_list() (tensorrt_llm.layers.attention.keyvaluecacheparams method)": [[86, "tensorrt_llm.layers.attention.KeyValueCacheParams.fill_none_tensor_list", false]], "fill_value (tensorrt_llm.functional.sliceinputtype attribute)": [[85, "tensorrt_llm.functional.SliceInputType.fill_value", false]], "filter_medusa_logits() (tensorrt_llm.runtime.generationsession method)": [[90, "tensorrt_llm.runtime.GenerationSession.filter_medusa_logits", false]], "finalize_decoder() (tensorrt_llm.runtime.generationsession method)": [[90, "tensorrt_llm.runtime.GenerationSession.finalize_decoder", false]], "find_best_medusa_path() (tensorrt_llm.runtime.generationsession method)": [[90, "tensorrt_llm.runtime.GenerationSession.find_best_medusa_path", false]], "finish_reason (tensorrt_llm.llmapi.completionoutput attribute)": [[73, "tensorrt_llm.llmapi.CompletionOutput.finish_reason", false]], "finished (tensorrt_llm.llmapi.requestoutput attribute)": [[73, "tensorrt_llm.llmapi.RequestOutput.finished", false]], "first_come_first_served (tensorrt_llm.llmapi.contextchunkingpolicy attribute)": [[73, "tensorrt_llm.llmapi.ContextChunkingPolicy.FIRST_COME_FIRST_SERVED", false]], "first_gen_tokens (tensorrt_llm.llmapi.disaggregatedparams attribute)": [[73, "tensorrt_llm.llmapi.DisaggregatedParams.first_gen_tokens", false]], "first_layer (tensorrt_llm.runtime.generationsession property)": [[90, "tensorrt_llm.runtime.GenerationSession.first_layer", false]], "flatten() (in module tensorrt_llm.functional)": [[85, "tensorrt_llm.functional.flatten", false]], "flatten() (tensorrt_llm.functional.tensor method)": [[85, "tensorrt_llm.functional.Tensor.flatten", false]], "flip() (in module tensorrt_llm.functional)": [[85, "tensorrt_llm.functional.flip", false]], "floordiv() (in module tensorrt_llm.functional)": [[85, "tensorrt_llm.functional.floordiv", false]], "fmt_dim (c macro)": [[1, "c.FMT_DIM", false]], "for_each_rank() (tensorrt_llm.models.pretrainedconfig method)": [[87, "tensorrt_llm.models.PretrainedConfig.for_each_rank", false]], 
"force_num_profiles (tensorrt_llm.llmapi.buildconfig attribute)": [[73, "tensorrt_llm.llmapi.BuildConfig.force_num_profiles", false]], "forward() (tensorrt_llm.layers.activation.mish method)": [[86, "tensorrt_llm.layers.activation.Mish.forward", false]], "forward() (tensorrt_llm.layers.attention.attention method)": [[86, "tensorrt_llm.layers.attention.Attention.forward", false]], "forward() (tensorrt_llm.layers.attention.bertattention method)": [[86, "tensorrt_llm.layers.attention.BertAttention.forward", false]], "forward() (tensorrt_llm.layers.attention.cogvlmattention method)": [[86, "tensorrt_llm.layers.attention.CogVLMAttention.forward", false]], "forward() (tensorrt_llm.layers.attention.deepseekv2attention method)": [[86, "tensorrt_llm.layers.attention.DeepseekV2Attention.forward", false]], "forward() (tensorrt_llm.layers.attention.diffusersattention method)": [[86, "tensorrt_llm.layers.attention.DiffusersAttention.forward", false]], "forward() (tensorrt_llm.layers.cast.cast method)": [[86, "tensorrt_llm.layers.cast.Cast.forward", false]], "forward() (tensorrt_llm.layers.conv.conv1d method)": [[86, "tensorrt_llm.layers.conv.Conv1d.forward", false]], "forward() (tensorrt_llm.layers.conv.conv2d method)": [[86, "tensorrt_llm.layers.conv.Conv2d.forward", false]], "forward() (tensorrt_llm.layers.conv.conv3d method)": [[86, "tensorrt_llm.layers.conv.Conv3d.forward", false]], "forward() (tensorrt_llm.layers.conv.convtranspose2d method)": [[86, "tensorrt_llm.layers.conv.ConvTranspose2d.forward", false]], "forward() (tensorrt_llm.layers.embedding.combinedtimesteplabelembeddings method)": [[86, "tensorrt_llm.layers.embedding.CombinedTimestepLabelEmbeddings.forward", false]], "forward() (tensorrt_llm.layers.embedding.combinedtimesteptextprojembeddings method)": [[86, "tensorrt_llm.layers.embedding.CombinedTimestepTextProjEmbeddings.forward", false]], "forward() (tensorrt_llm.layers.embedding.embedding method)": [[86, "tensorrt_llm.layers.embedding.Embedding.forward", false]], "forward() (tensorrt_llm.layers.embedding.labelembedding method)": [[86, "tensorrt_llm.layers.embedding.LabelEmbedding.forward", false]], "forward() (tensorrt_llm.layers.embedding.pixartalphatextprojection method)": [[86, "tensorrt_llm.layers.embedding.PixArtAlphaTextProjection.forward", false]], "forward() (tensorrt_llm.layers.embedding.prompttuningembedding method)": [[86, "tensorrt_llm.layers.embedding.PromptTuningEmbedding.forward", false]], "forward() (tensorrt_llm.layers.embedding.sd3patchembed method)": [[86, "tensorrt_llm.layers.embedding.SD3PatchEmbed.forward", false]], "forward() (tensorrt_llm.layers.embedding.timestepembedding method)": [[86, "tensorrt_llm.layers.embedding.TimestepEmbedding.forward", false]], "forward() (tensorrt_llm.layers.embedding.timesteps method)": [[86, "tensorrt_llm.layers.embedding.Timesteps.forward", false]], "forward() (tensorrt_llm.layers.linear.linearbase method)": [[86, "tensorrt_llm.layers.linear.LinearBase.forward", false]], "forward() (tensorrt_llm.layers.mlp.fusedgatedmlp method)": [[86, "tensorrt_llm.layers.mlp.FusedGatedMLP.forward", false]], "forward() (tensorrt_llm.layers.mlp.gatedmlp method)": [[86, "tensorrt_llm.layers.mlp.GatedMLP.forward", false]], "forward() (tensorrt_llm.layers.mlp.linearactivation method)": [[86, "tensorrt_llm.layers.mlp.LinearActivation.forward", false]], "forward() (tensorrt_llm.layers.mlp.linearapproximategelu method)": [[86, "tensorrt_llm.layers.mlp.LinearApproximateGELU.forward", false]], "forward() (tensorrt_llm.layers.mlp.lineargeglu method)": 
[[86, "tensorrt_llm.layers.mlp.LinearGEGLU.forward", false]], "forward() (tensorrt_llm.layers.mlp.lineargelu method)": [[86, "tensorrt_llm.layers.mlp.LinearGELU.forward", false]], "forward() (tensorrt_llm.layers.mlp.linearswiglu method)": [[86, "tensorrt_llm.layers.mlp.LinearSwiGLU.forward", false]], "forward() (tensorrt_llm.layers.mlp.mlp method)": [[86, "tensorrt_llm.layers.mlp.MLP.forward", false]], "forward() (tensorrt_llm.layers.normalization.adalayernorm method)": [[86, "tensorrt_llm.layers.normalization.AdaLayerNorm.forward", false]], "forward() (tensorrt_llm.layers.normalization.adalayernormcontinuous method)": [[86, "tensorrt_llm.layers.normalization.AdaLayerNormContinuous.forward", false]], "forward() (tensorrt_llm.layers.normalization.adalayernormzero method)": [[86, "tensorrt_llm.layers.normalization.AdaLayerNormZero.forward", false]], "forward() (tensorrt_llm.layers.normalization.adalayernormzerosingle method)": [[86, "tensorrt_llm.layers.normalization.AdaLayerNormZeroSingle.forward", false]], "forward() (tensorrt_llm.layers.normalization.groupnorm method)": [[86, "tensorrt_llm.layers.normalization.GroupNorm.forward", false]], "forward() (tensorrt_llm.layers.normalization.layernorm method)": [[86, "tensorrt_llm.layers.normalization.LayerNorm.forward", false]], "forward() (tensorrt_llm.layers.normalization.rmsnorm method)": [[86, "tensorrt_llm.layers.normalization.RmsNorm.forward", false]], "forward() (tensorrt_llm.layers.normalization.sd35adalayernormzerox method)": [[86, "tensorrt_llm.layers.normalization.SD35AdaLayerNormZeroX.forward", false]], "forward() (tensorrt_llm.layers.pooling.avgpool2d method)": [[86, "tensorrt_llm.layers.pooling.AvgPool2d.forward", false]], "forward() (tensorrt_llm.models.bertforquestionanswering method)": [[87, "tensorrt_llm.models.BertForQuestionAnswering.forward", false]], "forward() (tensorrt_llm.models.bertforsequenceclassification method)": [[87, "tensorrt_llm.models.BertForSequenceClassification.forward", false]], "forward() (tensorrt_llm.models.bertmodel method)": [[87, "tensorrt_llm.models.BertModel.forward", false]], "forward() (tensorrt_llm.models.bloommodel method)": [[87, "tensorrt_llm.models.BloomModel.forward", false]], "forward() (tensorrt_llm.models.chatglmmodel method)": [[87, "tensorrt_llm.models.ChatGLMModel.forward", false]], "forward() (tensorrt_llm.models.clipvisiontransformer method)": [[87, "tensorrt_llm.models.CLIPVisionTransformer.forward", false]], "forward() (tensorrt_llm.models.decodermodel method)": [[87, "tensorrt_llm.models.DecoderModel.forward", false]], "forward() (tensorrt_llm.models.dit method)": [[87, "tensorrt_llm.models.DiT.forward", false]], "forward() (tensorrt_llm.models.eagleforcausallm method)": [[87, "tensorrt_llm.models.EagleForCausalLM.forward", false]], "forward() (tensorrt_llm.models.encodermodel method)": [[87, "tensorrt_llm.models.EncoderModel.forward", false]], "forward() (tensorrt_llm.models.falconmodel method)": [[87, "tensorrt_llm.models.FalconModel.forward", false]], "forward() (tensorrt_llm.models.gptjmodel method)": [[87, "tensorrt_llm.models.GPTJModel.forward", false]], "forward() (tensorrt_llm.models.gptmodel method)": [[87, "tensorrt_llm.models.GPTModel.forward", false]], "forward() (tensorrt_llm.models.gptneoxmodel method)": [[87, "tensorrt_llm.models.GPTNeoXModel.forward", false]], "forward() (tensorrt_llm.models.llamamodel method)": [[87, "tensorrt_llm.models.LLaMAModel.forward", false]], "forward() (tensorrt_llm.models.llavanextvisionwrapper method)": [[87, 
"tensorrt_llm.models.LlavaNextVisionWrapper.forward", false]], "forward() (tensorrt_llm.models.mambaforcausallm method)": [[87, "tensorrt_llm.models.MambaForCausalLM.forward", false]], "forward() (tensorrt_llm.models.mllamaforcausallm method)": [[87, "tensorrt_llm.models.MLLaMAForCausalLM.forward", false]], "forward() (tensorrt_llm.models.mptmodel method)": [[87, "tensorrt_llm.models.MPTModel.forward", false]], "forward() (tensorrt_llm.models.optmodel method)": [[87, "tensorrt_llm.models.OPTModel.forward", false]], "forward() (tensorrt_llm.models.phi3model method)": [[87, "tensorrt_llm.models.Phi3Model.forward", false]], "forward() (tensorrt_llm.models.phimodel method)": [[87, "tensorrt_llm.models.PhiModel.forward", false]], "forward() (tensorrt_llm.models.recurrentgemmaforcausallm method)": [[87, "tensorrt_llm.models.RecurrentGemmaForCausalLM.forward", false]], "forward() (tensorrt_llm.models.sd3transformer2dmodel method)": [[87, "tensorrt_llm.models.SD3Transformer2DModel.forward", false]], "forward() (tensorrt_llm.models.whisperencoder method)": [[87, "tensorrt_llm.models.WhisperEncoder.forward", false]], "forward_with_cfg() (tensorrt_llm.models.dit method)": [[87, "tensorrt_llm.models.DiT.forward_with_cfg", false]], "forward_without_cfg() (tensorrt_llm.models.dit method)": [[87, "tensorrt_llm.models.DiT.forward_without_cfg", false]], "fp8 (tensorrt_llm.llmapi.quantalgo attribute)": [[73, "tensorrt_llm.llmapi.QuantAlgo.FP8", false]], "fp8_block_scales (tensorrt_llm.llmapi.quantalgo attribute)": [[73, "tensorrt_llm.llmapi.QuantAlgo.FP8_BLOCK_SCALES", false]], "fp8_per_channel_per_token (tensorrt_llm.llmapi.quantalgo attribute)": [[73, "tensorrt_llm.llmapi.QuantAlgo.FP8_PER_CHANNEL_PER_TOKEN", false]], "free_gpu_memory_fraction (tensorrt_llm.llmapi.kvcacheconfig attribute)": [[73, "tensorrt_llm.llmapi.KvCacheConfig.free_gpu_memory_fraction", false]], "frequency_penalty (tensorrt_llm.llmapi.samplingparams attribute)": [[73, "tensorrt_llm.llmapi.SamplingParams.frequency_penalty", false]], "frequency_penalty (tensorrt_llm.runtime.samplingconfig attribute)": [[90, "tensorrt_llm.runtime.SamplingConfig.frequency_penalty", false]], "from_arguments() (tensorrt_llm.models.speculativedecodingmode static method)": [[87, "tensorrt_llm.models.SpeculativeDecodingMode.from_arguments", false]], "from_checkpoint() (tensorrt_llm.models.pretrainedconfig class method)": [[87, "tensorrt_llm.models.PretrainedConfig.from_checkpoint", false]], "from_checkpoint() (tensorrt_llm.models.pretrainedmodel class method)": [[87, "tensorrt_llm.models.PretrainedModel.from_checkpoint", false]], "from_config() (tensorrt_llm.models.pretrainedmodel class method)": [[87, "tensorrt_llm.models.PretrainedModel.from_config", false]], "from_dict() (tensorrt_llm.llmapi.buildconfig class method)": [[73, "tensorrt_llm.llmapi.BuildConfig.from_dict", false]], "from_dict() (tensorrt_llm.llmapi.calibconfig class method)": [[73, "tensorrt_llm.llmapi.CalibConfig.from_dict", false]], "from_dict() (tensorrt_llm.llmapi.drafttargetdecodingconfig class method)": [[73, "tensorrt_llm.llmapi.DraftTargetDecodingConfig.from_dict", false]], "from_dict() (tensorrt_llm.llmapi.eagledecodingconfig class method)": [[73, "tensorrt_llm.llmapi.EagleDecodingConfig.from_dict", false]], "from_dict() (tensorrt_llm.llmapi.lookaheaddecodingconfig class method)": [[73, "tensorrt_llm.llmapi.LookaheadDecodingConfig.from_dict", false]], "from_dict() (tensorrt_llm.llmapi.medusadecodingconfig class method)": [[73, "tensorrt_llm.llmapi.MedusaDecodingConfig.from_dict", 
false]], "from_dict() (tensorrt_llm.llmapi.mtpdecodingconfig class method)": [[73, "tensorrt_llm.llmapi.MTPDecodingConfig.from_dict", false]], "from_dict() (tensorrt_llm.llmapi.ngramdecodingconfig class method)": [[73, "tensorrt_llm.llmapi.NGramDecodingConfig.from_dict", false]], "from_dict() (tensorrt_llm.llmapi.quantconfig class method)": [[73, "tensorrt_llm.llmapi.QuantConfig.from_dict", false]], "from_dict() (tensorrt_llm.models.pretrainedconfig class method)": [[87, "tensorrt_llm.models.PretrainedConfig.from_dict", false]], "from_dir() (tensorrt_llm.runtime.modelrunner class method)": [[90, "tensorrt_llm.runtime.ModelRunner.from_dir", false]], "from_dir() (tensorrt_llm.runtime.modelrunnercpp class method)": [[90, "tensorrt_llm.runtime.ModelRunnerCpp.from_dir", false]], "from_engine() (tensorrt_llm.runtime.encdecmodelrunner class method)": [[90, "tensorrt_llm.runtime.EncDecModelRunner.from_engine", false]], "from_engine() (tensorrt_llm.runtime.modelrunner class method)": [[90, "tensorrt_llm.runtime.ModelRunner.from_engine", false]], "from_engine() (tensorrt_llm.runtime.session static method)": [[90, "tensorrt_llm.runtime.Session.from_engine", false]], "from_hugging_face() (tensorrt_llm.models.baichuanforcausallm class method)": [[87, "tensorrt_llm.models.BaichuanForCausalLM.from_hugging_face", false]], "from_hugging_face() (tensorrt_llm.models.chatglmconfig class method)": [[87, "tensorrt_llm.models.ChatGLMConfig.from_hugging_face", false]], "from_hugging_face() (tensorrt_llm.models.chatglmforcausallm class method)": [[87, "tensorrt_llm.models.ChatGLMForCausalLM.from_hugging_face", false]], "from_hugging_face() (tensorrt_llm.models.cogvlmforcausallm class method)": [[87, "tensorrt_llm.models.CogVLMForCausalLM.from_hugging_face", false]], "from_hugging_face() (tensorrt_llm.models.cohereforcausallm class method)": [[87, "tensorrt_llm.models.CohereForCausalLM.from_hugging_face", false]], "from_hugging_face() (tensorrt_llm.models.deepseekforcausallm class method)": [[87, "tensorrt_llm.models.DeepseekForCausalLM.from_hugging_face", false]], "from_hugging_face() (tensorrt_llm.models.deepseekv2forcausallm class method)": [[87, "tensorrt_llm.models.DeepseekV2ForCausalLM.from_hugging_face", false]], "from_hugging_face() (tensorrt_llm.models.eagleforcausallm class method)": [[87, "tensorrt_llm.models.EagleForCausalLM.from_hugging_face", false]], "from_hugging_face() (tensorrt_llm.models.falconconfig class method)": [[87, "tensorrt_llm.models.FalconConfig.from_hugging_face", false]], "from_hugging_face() (tensorrt_llm.models.falconforcausallm class method)": [[87, "tensorrt_llm.models.FalconForCausalLM.from_hugging_face", false]], "from_hugging_face() (tensorrt_llm.models.gemmaconfig class method)": [[87, "tensorrt_llm.models.GemmaConfig.from_hugging_face", false]], "from_hugging_face() (tensorrt_llm.models.gemmaforcausallm class method)": [[87, "tensorrt_llm.models.GemmaForCausalLM.from_hugging_face", false]], "from_hugging_face() (tensorrt_llm.models.gptconfig class method)": [[87, "tensorrt_llm.models.GPTConfig.from_hugging_face", false]], "from_hugging_face() (tensorrt_llm.models.gptforcausallm class method)": [[87, "tensorrt_llm.models.GPTForCausalLM.from_hugging_face", false]], "from_hugging_face() (tensorrt_llm.models.gptjconfig class method)": [[87, "tensorrt_llm.models.GPTJConfig.from_hugging_face", false]], "from_hugging_face() (tensorrt_llm.models.gptjforcausallm class method)": [[87, "tensorrt_llm.models.GPTJForCausalLM.from_hugging_face", false]], "from_hugging_face() 
(tensorrt_llm.models.llamaconfig class method)": [[87, "tensorrt_llm.models.LLaMAConfig.from_hugging_face", false]], "from_hugging_face() (tensorrt_llm.models.llamaforcausallm class method)": [[87, "tensorrt_llm.models.LLaMAForCausalLM.from_hugging_face", false]], "from_hugging_face() (tensorrt_llm.models.llavanextvisionconfig class method)": [[87, "tensorrt_llm.models.LlavaNextVisionConfig.from_hugging_face", false]], "from_hugging_face() (tensorrt_llm.models.llavanextvisionwrapper class method)": [[87, "tensorrt_llm.models.LlavaNextVisionWrapper.from_hugging_face", false]], "from_hugging_face() (tensorrt_llm.models.mambaforcausallm class method)": [[87, "tensorrt_llm.models.MambaForCausalLM.from_hugging_face", false]], "from_hugging_face() (tensorrt_llm.models.medusaconfig class method)": [[87, "tensorrt_llm.models.MedusaConfig.from_hugging_face", false]], "from_hugging_face() (tensorrt_llm.models.medusaforcausallm class method)": [[87, "tensorrt_llm.models.MedusaForCausalLm.from_hugging_face", false]], "from_hugging_face() (tensorrt_llm.models.mllamaforcausallm class method)": [[87, "tensorrt_llm.models.MLLaMAForCausalLM.from_hugging_face", false]], "from_hugging_face() (tensorrt_llm.models.phi3forcausallm class method)": [[87, "tensorrt_llm.models.Phi3ForCausalLM.from_hugging_face", false]], "from_hugging_face() (tensorrt_llm.models.phiforcausallm class method)": [[87, "tensorrt_llm.models.PhiForCausalLM.from_hugging_face", false]], "from_json_file() (tensorrt_llm.llmapi.buildconfig class method)": [[73, "tensorrt_llm.llmapi.BuildConfig.from_json_file", false]], "from_json_file() (tensorrt_llm.models.pretrainedconfig class method)": [[87, "tensorrt_llm.models.PretrainedConfig.from_json_file", false]], "from_meta_ckpt() (tensorrt_llm.models.llamaconfig class method)": [[87, "tensorrt_llm.models.LLaMAConfig.from_meta_ckpt", false]], "from_meta_ckpt() (tensorrt_llm.models.llamaforcausallm class method)": [[87, "tensorrt_llm.models.LLaMAForCausalLM.from_meta_ckpt", false]], "from_nemo() (tensorrt_llm.models.gptconfig class method)": [[87, "tensorrt_llm.models.GPTConfig.from_nemo", false]], "from_nemo() (tensorrt_llm.models.gptforcausallm class method)": [[87, "tensorrt_llm.models.GPTForCausalLM.from_nemo", false]], "from_pretrained() (tensorrt_llm.models.sd3transformer2dmodel class method)": [[87, "tensorrt_llm.models.SD3Transformer2DModel.from_pretrained", false]], "from_serialized_engine() (tensorrt_llm.runtime.session static method)": [[90, "tensorrt_llm.runtime.Session.from_serialized_engine", false]], "from_string() (tensorrt_llm.functional.positionembeddingtype static method)": [[85, "tensorrt_llm.functional.PositionEmbeddingType.from_string", false]], "from_string() (tensorrt_llm.functional.rotaryscalingtype static method)": [[85, "tensorrt_llm.functional.RotaryScalingType.from_string", false]], "fuse_qkv_projections() (tensorrt_llm.models.sd3transformer2dmodel method)": [[87, "tensorrt_llm.models.SD3Transformer2DModel.fuse_qkv_projections", false]], "fusedgatedmlp (class in tensorrt_llm.layers.mlp)": [[86, "tensorrt_llm.layers.mlp.FusedGatedMLP", false]], "fusedgatedmlp (tensorrt_llm.functional.mlptype attribute)": [[85, "tensorrt_llm.functional.MLPType.FusedGatedMLP", false]], "garbage_collection_gen0_threshold (tensorrt_llm.llmapi.torchllmargs attribute)": [[73, "tensorrt_llm.llmapi.TorchLlmArgs.garbage_collection_gen0_threshold", false]], "gatedmlp (class in tensorrt_llm.layers.mlp)": [[86, "tensorrt_llm.layers.mlp.GatedMLP", false]], "gatedmlp (tensorrt_llm.functional.mlptype 
attribute)": [[85, "tensorrt_llm.functional.MLPType.GatedMLP", false]], "gather() (in module tensorrt_llm.functional)": [[85, "tensorrt_llm.functional.gather", false]], "gather_context_logits (tensorrt_llm.llmapi.buildconfig attribute)": [[73, "tensorrt_llm.llmapi.BuildConfig.gather_context_logits", false]], "gather_context_logits (tensorrt_llm.runtime.generationsession property)": [[90, "tensorrt_llm.runtime.GenerationSession.gather_context_logits", false]], "gather_context_logits (tensorrt_llm.runtime.modelconfig attribute)": [[90, "tensorrt_llm.runtime.ModelConfig.gather_context_logits", false]], "gather_context_logits (tensorrt_llm.runtime.modelrunner property)": [[90, "tensorrt_llm.runtime.ModelRunner.gather_context_logits", false]], "gather_context_logits (tensorrt_llm.runtime.modelrunnercpp property)": [[90, "tensorrt_llm.runtime.ModelRunnerCpp.gather_context_logits", false]], "gather_generation_logits (tensorrt_llm.llmapi.buildconfig attribute)": [[73, "tensorrt_llm.llmapi.BuildConfig.gather_generation_logits", false]], "gather_generation_logits (tensorrt_llm.runtime.generationsession property)": [[90, "tensorrt_llm.runtime.GenerationSession.gather_generation_logits", false]], "gather_generation_logits (tensorrt_llm.runtime.modelconfig attribute)": [[90, "tensorrt_llm.runtime.ModelConfig.gather_generation_logits", false]], "gather_generation_logits (tensorrt_llm.runtime.modelrunner property)": [[90, "tensorrt_llm.runtime.ModelRunner.gather_generation_logits", false]], "gather_generation_logits (tensorrt_llm.runtime.modelrunnercpp property)": [[90, "tensorrt_llm.runtime.ModelRunnerCpp.gather_generation_logits", false]], "gather_last_token_logits() (in module tensorrt_llm.functional)": [[85, "tensorrt_llm.functional.gather_last_token_logits", false]], "gather_nd() (in module tensorrt_llm.functional)": [[85, "tensorrt_llm.functional.gather_nd", false]], "gegelu() (in module tensorrt_llm.functional)": [[85, "tensorrt_llm.functional.gegelu", false]], "geglu() (in module tensorrt_llm.functional)": [[85, "tensorrt_llm.functional.geglu", false]], "gelu() (in module tensorrt_llm.functional)": [[85, "tensorrt_llm.functional.gelu", false]], "gemm_allreduce() (in module tensorrt_llm.functional)": [[85, "tensorrt_llm.functional.gemm_allreduce", false]], "gemm_allreduce_plugin (tensorrt_llm.runtime.generationsession property)": [[90, "tensorrt_llm.runtime.GenerationSession.gemm_allreduce_plugin", false]], "gemm_allreduce_plugin (tensorrt_llm.runtime.modelconfig attribute)": [[90, "tensorrt_llm.runtime.ModelConfig.gemm_allreduce_plugin", false]], "gemm_swiglu() (in module tensorrt_llm.functional)": [[85, "tensorrt_llm.functional.gemm_swiglu", false]], "gemma2_added_fields (tensorrt_llm.models.gemmaconfig attribute)": [[87, "tensorrt_llm.models.GemmaConfig.GEMMA2_ADDED_FIELDS", false]], "gemma2_config() (tensorrt_llm.models.gemmaconfig method)": [[87, "tensorrt_llm.models.GemmaConfig.gemma2_config", false]], "gemma3_added_fields (tensorrt_llm.models.gemmaconfig attribute)": [[87, "tensorrt_llm.models.GemmaConfig.GEMMA3_ADDED_FIELDS", false]], "gemma3_config() (tensorrt_llm.models.gemmaconfig method)": [[87, "tensorrt_llm.models.GemmaConfig.gemma3_config", false]], "gemma_added_fields (tensorrt_llm.models.gemmaconfig attribute)": [[87, "tensorrt_llm.models.GemmaConfig.GEMMA_ADDED_FIELDS", false]], "gemmaconfig (class in tensorrt_llm.models)": [[87, "tensorrt_llm.models.GemmaConfig", false]], "gemmaforcausallm (class in tensorrt_llm.models)": [[87, "tensorrt_llm.models.GemmaForCausalLM", false]], 
"generate() (tensorrt_llm.llmapi.llm method)": [[73, "tensorrt_llm.llmapi.LLM.generate", false]], "generate() (tensorrt_llm.runtime.encdecmodelrunner method)": [[90, "tensorrt_llm.runtime.EncDecModelRunner.generate", false]], "generate() (tensorrt_llm.runtime.modelrunner method)": [[90, "tensorrt_llm.runtime.ModelRunner.generate", false]], "generate() (tensorrt_llm.runtime.modelrunnercpp method)": [[90, "tensorrt_llm.runtime.ModelRunnerCpp.generate", false]], "generate() (tensorrt_llm.runtime.multimodalmodelrunner method)": [[90, "tensorrt_llm.runtime.MultimodalModelRunner.generate", false]], "generate() (tensorrt_llm.runtime.qwenforcausallmgenerationsession method)": [[90, "tensorrt_llm.runtime.QWenForCausalLMGenerationSession.generate", false]], "generate_alibi_biases() (in module tensorrt_llm.functional)": [[85, "tensorrt_llm.functional.generate_alibi_biases", false]], "generate_alibi_slopes() (in module tensorrt_llm.functional)": [[85, "tensorrt_llm.functional.generate_alibi_slopes", false]], "generate_async() (tensorrt_llm.llmapi.llm method)": [[73, "tensorrt_llm.llmapi.LLM.generate_async", false]], "generate_logn_scaling() (in module tensorrt_llm.functional)": [[85, "tensorrt_llm.functional.generate_logn_scaling", false]], "generation_logits (tensorrt_llm.llmapi.completionoutput attribute)": [[73, "tensorrt_llm.llmapi.CompletionOutput.generation_logits", false]], "generationsequence (class in tensorrt_llm.runtime)": [[90, "tensorrt_llm.runtime.GenerationSequence", false]], "generationsession (class in tensorrt_llm.runtime)": [[90, "tensorrt_llm.runtime.GenerationSession", false]], "get_1d_sincos_pos_embed_from_grid() (in module tensorrt_llm.layers.embedding)": [[86, "tensorrt_llm.layers.embedding.get_1d_sincos_pos_embed_from_grid", false]], "get_2d_sincos_pos_embed() (in module tensorrt_llm.layers.embedding)": [[86, "tensorrt_llm.layers.embedding.get_2d_sincos_pos_embed", false]], "get_2d_sincos_pos_embed_from_grid() (in module tensorrt_llm.layers.embedding)": [[86, "tensorrt_llm.layers.embedding.get_2d_sincos_pos_embed_from_grid", false]], "get_audio_features() (tensorrt_llm.runtime.multimodalmodelrunner method)": [[90, "tensorrt_llm.runtime.MultimodalModelRunner.get_audio_features", false]], "get_batch_idx() (tensorrt_llm.runtime.generationsequence method)": [[90, "tensorrt_llm.runtime.GenerationSequence.get_batch_idx", false]], "get_block_offsets() (tensorrt_llm.runtime.kvcachemanager method)": [[90, "tensorrt_llm.runtime.KVCacheManager.get_block_offsets", false]], "get_comm() (tensorrt_llm.llmapi.mpicommsession method)": [[73, "tensorrt_llm.llmapi.MpiCommSession.get_comm", false]], "get_config_group() (tensorrt_llm.models.pretrainedconfig method)": [[87, "tensorrt_llm.models.PretrainedConfig.get_config_group", false]], "get_context_phase_params() (tensorrt_llm.llmapi.disaggregatedparams method)": [[73, "tensorrt_llm.llmapi.DisaggregatedParams.get_context_phase_params", false]], "get_first_past_key_value() (tensorrt_llm.layers.attention.keyvaluecacheparams method)": [[86, "tensorrt_llm.layers.attention.KeyValueCacheParams.get_first_past_key_value", false]], "get_hf_config() (tensorrt_llm.models.gemmaconfig static method)": [[87, "tensorrt_llm.models.GemmaConfig.get_hf_config", false]], "get_kv_cache_events() (tensorrt_llm.llmapi.llm method)": [[73, "tensorrt_llm.llmapi.LLM.get_kv_cache_events", false]], "get_kv_cache_events_async() (tensorrt_llm.llmapi.llm method)": [[73, "tensorrt_llm.llmapi.LLM.get_kv_cache_events_async", false]], "get_next_medusa_tokens() 
(tensorrt_llm.runtime.generationsession method)": [[90, "tensorrt_llm.runtime.GenerationSession.get_next_medusa_tokens", false]], "get_num_heads_kv() (tensorrt_llm.runtime.generationsession method)": [[90, "tensorrt_llm.runtime.GenerationSession.get_num_heads_kv", false]], "get_parent() (tensorrt_llm.functional.tensor method)": [[85, "tensorrt_llm.functional.Tensor.get_parent", false]], "get_pytorch_backend_config() (tensorrt_llm.llmapi.torchllmargs method)": [[73, "tensorrt_llm.llmapi.TorchLlmArgs.get_pytorch_backend_config", false]], "get_request_type() (tensorrt_llm.llmapi.disaggregatedparams method)": [[73, "tensorrt_llm.llmapi.DisaggregatedParams.get_request_type", false]], "get_rope_index() (tensorrt_llm.runtime.multimodalmodelrunner method)": [[90, "tensorrt_llm.runtime.MultimodalModelRunner.get_rope_index", false]], "get_seq_idx() (tensorrt_llm.runtime.generationsequence method)": [[90, "tensorrt_llm.runtime.GenerationSequence.get_seq_idx", false]], "get_stats() (tensorrt_llm.llmapi.llm method)": [[73, "tensorrt_llm.llmapi.LLM.get_stats", false]], "get_stats_async() (tensorrt_llm.llmapi.llm method)": [[73, "tensorrt_llm.llmapi.LLM.get_stats_async", false]], "get_timestep_embedding() (in module tensorrt_llm.layers.embedding)": [[86, "tensorrt_llm.layers.embedding.get_timestep_embedding", false]], "get_users() (tensorrt_llm.functional.tensor method)": [[85, "tensorrt_llm.functional.Tensor.get_users", false]], "get_visual_features() (tensorrt_llm.runtime.multimodalmodelrunner method)": [[90, "tensorrt_llm.runtime.MultimodalModelRunner.get_visual_features", false]], "get_weight() (tensorrt_llm.layers.linear.linearbase method)": [[86, "tensorrt_llm.layers.linear.LinearBase.get_weight", false]], "gpt_attention() (in module tensorrt_llm.functional)": [[85, "tensorrt_llm.functional.gpt_attention", false]], "gpt_attention_plugin (tensorrt_llm.runtime.modelconfig attribute)": [[90, "tensorrt_llm.runtime.ModelConfig.gpt_attention_plugin", false]], "gptconfig (class in tensorrt_llm.models)": [[87, "tensorrt_llm.models.GPTConfig", false]], "gptforcausallm (class in tensorrt_llm.models)": [[87, "tensorrt_llm.models.GPTForCausalLM", false]], "gptjconfig (class in tensorrt_llm.models)": [[87, "tensorrt_llm.models.GPTJConfig", false]], "gptjforcausallm (class in tensorrt_llm.models)": [[87, "tensorrt_llm.models.GPTJForCausalLM", false]], "gptjmodel (class in tensorrt_llm.models)": [[87, "tensorrt_llm.models.GPTJModel", false]], "gptmodel (class in tensorrt_llm.models)": [[87, "tensorrt_llm.models.GPTModel", false]], "gptneoxforcausallm (class in tensorrt_llm.models)": [[87, "tensorrt_llm.models.GPTNeoXForCausalLM", false]], "gptneoxmodel (class in tensorrt_llm.models)": [[87, "tensorrt_llm.models.GPTNeoXModel", false]], "gpu_weights_percent (tensorrt_llm.runtime.modelconfig attribute)": [[90, "tensorrt_llm.runtime.ModelConfig.gpu_weights_percent", false]], "grammar (tensorrt_llm.llmapi.guideddecodingparams attribute)": [[73, "tensorrt_llm.llmapi.GuidedDecodingParams.grammar", false]], "greedy_sampling (tensorrt_llm.llmapi.eagledecodingconfig attribute)": [[73, "tensorrt_llm.llmapi.EagleDecodingConfig.greedy_sampling", false]], "group_norm() (in module tensorrt_llm.functional)": [[85, "tensorrt_llm.functional.group_norm", false]], "group_size (tensorrt_llm.llmapi.quantconfig attribute)": [[73, "tensorrt_llm.llmapi.QuantConfig.group_size", false]], "groupnorm (class in tensorrt_llm.layers.normalization)": [[86, "tensorrt_llm.layers.normalization.GroupNorm", false]], "groupnorm 
(tensorrt_llm.functional.layernormtype attribute)": [[85, "tensorrt_llm.functional.LayerNormType.GroupNorm", false]], "gt() (in module tensorrt_llm.functional)": [[85, "tensorrt_llm.functional.gt", false]], "guaranteed_no_evict (tensorrt_llm.llmapi.capacityschedulerpolicy attribute)": [[73, "tensorrt_llm.llmapi.CapacitySchedulerPolicy.GUARANTEED_NO_EVICT", false]], "guided_decoding (tensorrt_llm.llmapi.samplingparams attribute)": [[73, "tensorrt_llm.llmapi.SamplingParams.guided_decoding", false]], "guideddecodingparams (class in tensorrt_llm.llmapi)": [[73, "tensorrt_llm.llmapi.GuidedDecodingParams", false]], "handle_per_step() (tensorrt_llm.runtime.generationsession method)": [[90, "tensorrt_llm.runtime.GenerationSession.handle_per_step", false]], "has_affine() (tensorrt_llm.functional.allreduceparams method)": [[85, "tensorrt_llm.functional.AllReduceParams.has_affine", false]], "has_bias() (tensorrt_llm.functional.allreduceparams method)": [[85, "tensorrt_llm.functional.AllReduceParams.has_bias", false]], "has_config_group() (tensorrt_llm.models.pretrainedconfig method)": [[87, "tensorrt_llm.models.PretrainedConfig.has_config_group", false]], "has_position_embedding (tensorrt_llm.runtime.generationsession property)": [[90, "tensorrt_llm.runtime.GenerationSession.has_position_embedding", false]], "has_position_embedding (tensorrt_llm.runtime.modelconfig attribute)": [[90, "tensorrt_llm.runtime.ModelConfig.has_position_embedding", false]], "has_scale() (tensorrt_llm.functional.allreduceparams method)": [[85, "tensorrt_llm.functional.AllReduceParams.has_scale", false]], "has_token_type_embedding (tensorrt_llm.runtime.generationsession property)": [[90, "tensorrt_llm.runtime.GenerationSession.has_token_type_embedding", false]], "has_token_type_embedding (tensorrt_llm.runtime.modelconfig attribute)": [[90, "tensorrt_llm.runtime.ModelConfig.has_token_type_embedding", false]], "has_zero_point (tensorrt_llm.llmapi.quantconfig attribute)": [[73, "tensorrt_llm.llmapi.QuantConfig.has_zero_point", false]], "head_size (tensorrt_llm.runtime.generationsession property)": [[90, "tensorrt_llm.runtime.GenerationSession.head_size", false]], "head_size (tensorrt_llm.runtime.modelconfig attribute)": [[90, "tensorrt_llm.runtime.ModelConfig.head_size", false]], "hidden_size (tensorrt_llm.runtime.generationsession property)": [[90, "tensorrt_llm.runtime.GenerationSession.hidden_size", false]], "hidden_size (tensorrt_llm.runtime.modelconfig attribute)": [[90, "tensorrt_llm.runtime.ModelConfig.hidden_size", false]], "hidden_size (tensorrt_llm.runtime.modelrunner property)": [[90, "tensorrt_llm.runtime.ModelRunner.hidden_size", false]], "hidden_size (tensorrt_llm.runtime.modelrunnercpp property)": [[90, "tensorrt_llm.runtime.ModelRunnerCpp.hidden_size", false]], "host_cache_size (tensorrt_llm.llmapi.kvcacheconfig attribute)": [[73, "tensorrt_llm.llmapi.KvCacheConfig.host_cache_size", false]], "identity() (in module tensorrt_llm.functional)": [[85, "tensorrt_llm.functional.identity", false]], "ignore_eos (tensorrt_llm.llmapi.samplingparams attribute)": [[73, "tensorrt_llm.llmapi.SamplingParams.ignore_eos", false]], "include_stop_str_in_output (tensorrt_llm.llmapi.samplingparams attribute)": [[73, "tensorrt_llm.llmapi.SamplingParams.include_stop_str_in_output", false]], "index (tensorrt_llm.llmapi.completionoutput attribute)": [[73, "tensorrt_llm.llmapi.CompletionOutput.index", false]], "index_select() (in module tensorrt_llm.functional)": [[85, "tensorrt_llm.functional.index_select", false]], "infer_shapes() 
(tensorrt_llm.runtime.session method)": [[90, "tensorrt_llm.runtime.Session.infer_shapes", false]], "inflight (tensorrt_llm.llmapi.batchingtype attribute)": [[73, "tensorrt_llm.llmapi.BatchingType.INFLIGHT", false]], "init_audio_encoder() (tensorrt_llm.runtime.multimodalmodelrunner method)": [[90, "tensorrt_llm.runtime.MultimodalModelRunner.init_audio_encoder", false]], "init_backend() (tensorrt_llm.llmapi.torchllmargs class method)": [[73, "tensorrt_llm.llmapi.TorchLlmArgs.init_backend", false]], "init_calib_config() (tensorrt_llm.llmapi.trtllmargs class method)": [[73, "tensorrt_llm.llmapi.TrtLlmArgs.init_calib_config", false]], "init_image_encoder() (tensorrt_llm.runtime.multimodalmodelrunner method)": [[90, "tensorrt_llm.runtime.MultimodalModelRunner.init_image_encoder", false]], "init_llm() (tensorrt_llm.runtime.multimodalmodelrunner method)": [[90, "tensorrt_llm.runtime.MultimodalModelRunner.init_llm", false]], "init_processor() (tensorrt_llm.runtime.multimodalmodelrunner method)": [[90, "tensorrt_llm.runtime.MultimodalModelRunner.init_processor", false]], "init_tokenizer() (tensorrt_llm.runtime.multimodalmodelrunner method)": [[90, "tensorrt_llm.runtime.MultimodalModelRunner.init_tokenizer", false]], "input_timing_cache (tensorrt_llm.llmapi.buildconfig attribute)": [[73, "tensorrt_llm.llmapi.BuildConfig.input_timing_cache", false]], "int8 (tensorrt_llm.llmapi.quantalgo attribute)": [[73, "tensorrt_llm.llmapi.QuantAlgo.INT8", false]], "int_clip() (in module tensorrt_llm.functional)": [[85, "tensorrt_llm.functional.int_clip", false]], "interpolate() (in module tensorrt_llm.functional)": [[85, "tensorrt_llm.functional.interpolate", false]], "is_alibi() (tensorrt_llm.functional.positionembeddingtype method)": [[85, "tensorrt_llm.functional.PositionEmbeddingType.is_alibi", false]], "is_deferred() (tensorrt_llm.functional.positionembeddingtype method)": [[85, "tensorrt_llm.functional.PositionEmbeddingType.is_deferred", false]], "is_dynamic() (tensorrt_llm.functional.tensor method)": [[85, "tensorrt_llm.functional.Tensor.is_dynamic", false]], "is_gated_activation() (in module tensorrt_llm.functional)": [[85, "tensorrt_llm.functional.is_gated_activation", false]], "is_gemma_2 (tensorrt_llm.models.gemmaconfig property)": [[87, "tensorrt_llm.models.GemmaConfig.is_gemma_2", false]], "is_gemma_3 (tensorrt_llm.models.gemmaconfig property)": [[87, "tensorrt_llm.models.GemmaConfig.is_gemma_3", false]], "is_keep_all (tensorrt_llm.llmapi.ngramdecodingconfig attribute)": [[73, "tensorrt_llm.llmapi.NGramDecodingConfig.is_keep_all", false]], "is_medusa_mode (tensorrt_llm.runtime.generationsession property)": [[90, "tensorrt_llm.runtime.GenerationSession.is_medusa_mode", false]], "is_module_excluded_from_quantization() (tensorrt_llm.llmapi.quantconfig method)": [[73, "tensorrt_llm.llmapi.QuantConfig.is_module_excluded_from_quantization", false]], "is_mrope() (tensorrt_llm.functional.positionembeddingtype method)": [[85, "tensorrt_llm.functional.PositionEmbeddingType.is_mrope", false]], "is_public_pool (tensorrt_llm.llmapi.ngramdecodingconfig attribute)": [[73, "tensorrt_llm.llmapi.NGramDecodingConfig.is_public_pool", false]], "is_redrafter_mode (tensorrt_llm.runtime.generationsession property)": [[90, "tensorrt_llm.runtime.GenerationSession.is_redrafter_mode", false]], "is_rope() (tensorrt_llm.functional.positionembeddingtype method)": [[85, "tensorrt_llm.functional.PositionEmbeddingType.is_rope", false]], "is_trt_wrapper() (tensorrt_llm.functional.tensor method)": [[85, 
"tensorrt_llm.functional.Tensor.is_trt_wrapper", false]], "is_use_oldest (tensorrt_llm.llmapi.ngramdecodingconfig attribute)": [[73, "tensorrt_llm.llmapi.NGramDecodingConfig.is_use_oldest", false]], "is_valid() (tensorrt_llm.functional.moeallreduceparams method)": [[85, "tensorrt_llm.functional.MoEAllReduceParams.is_valid", false]], "is_valid() (tensorrt_llm.layers.attention.attentionparams method)": [[86, "tensorrt_llm.layers.attention.AttentionParams.is_valid", false]], "is_valid() (tensorrt_llm.layers.attention.keyvaluecacheparams method)": [[86, "tensorrt_llm.layers.attention.KeyValueCacheParams.is_valid", false]], "is_valid_cross_attn() (tensorrt_llm.layers.attention.attentionparams method)": [[86, "tensorrt_llm.layers.attention.AttentionParams.is_valid_cross_attn", false]], "joint_attn_forward() (tensorrt_llm.layers.attention.diffusersattention method)": [[86, "tensorrt_llm.layers.attention.DiffusersAttention.joint_attn_forward", false]], "json (tensorrt_llm.llmapi.guideddecodingparams attribute)": [[73, "tensorrt_llm.llmapi.GuidedDecodingParams.json", false]], "json_object (tensorrt_llm.llmapi.guideddecodingparams attribute)": [[73, "tensorrt_llm.llmapi.GuidedDecodingParams.json_object", false]], "keyvaluecacheparams (class in tensorrt_llm.layers.attention)": [[86, "tensorrt_llm.layers.attention.KeyValueCacheParams", false]], "kv_cache_dtype (tensorrt_llm.llmapi.torchllmargs attribute)": [[73, "tensorrt_llm.llmapi.TorchLlmArgs.kv_cache_dtype", false]], "kv_cache_quant_algo (tensorrt_llm.llmapi.quantconfig attribute)": [[73, "tensorrt_llm.llmapi.QuantConfig.kv_cache_quant_algo", false]], "kv_cache_type (tensorrt_llm.llmapi.buildconfig attribute)": [[73, "tensorrt_llm.llmapi.BuildConfig.kv_cache_type", false]], "kv_cache_type (tensorrt_llm.runtime.generationsession property)": [[90, "tensorrt_llm.runtime.GenerationSession.kv_cache_type", false]], "kv_cache_type (tensorrt_llm.runtime.modelconfig attribute)": [[90, "tensorrt_llm.runtime.ModelConfig.kv_cache_type", false]], "kv_dtype (tensorrt_llm.models.pretrainedconfig property)": [[87, "tensorrt_llm.models.PretrainedConfig.kv_dtype", false]], "kvcacheconfig (class in tensorrt_llm.llmapi)": [[73, "tensorrt_llm.llmapi.KvCacheConfig", false]], "kvcachemanager (class in tensorrt_llm.runtime)": [[90, "tensorrt_llm.runtime.KVCacheManager", false]], "kvcacheretentionconfig (class in tensorrt_llm.llmapi)": [[73, "tensorrt_llm.llmapi.KvCacheRetentionConfig", false]], "kvcacheretentionconfig.tokenrangeretentionconfig (class in tensorrt_llm.llmapi)": [[73, "tensorrt_llm.llmapi.KvCacheRetentionConfig.TokenRangeRetentionConfig", false]], "labelembedding (class in tensorrt_llm.layers.embedding)": [[86, "tensorrt_llm.layers.embedding.LabelEmbedding", false]], "language_adapter_config (tensorrt_llm.runtime.modelconfig attribute)": [[90, "tensorrt_llm.runtime.ModelConfig.language_adapter_config", false]], "last_layer (tensorrt_llm.runtime.generationsession property)": [[90, "tensorrt_llm.runtime.GenerationSession.last_layer", false]], "last_process_for_ub (tensorrt_llm.functional.allreducefusionop attribute)": [[85, "tensorrt_llm.functional.AllReduceFusionOp.LAST_PROCESS_FOR_UB", false]], "layer_norm() (in module tensorrt_llm.functional)": [[85, "tensorrt_llm.functional.layer_norm", false]], "layer_quant_mode (tensorrt_llm.llmapi.quantconfig property)": [[73, "tensorrt_llm.llmapi.QuantConfig.layer_quant_mode", false]], "layer_types (tensorrt_llm.runtime.modelconfig attribute)": [[90, "tensorrt_llm.runtime.ModelConfig.layer_types", false]], "layernorm 
(class in tensorrt_llm.layers.normalization)": [[86, "tensorrt_llm.layers.normalization.LayerNorm", false]], "layernorm (tensorrt_llm.functional.layernormtype attribute)": [[85, "tensorrt_llm.functional.LayerNormType.LayerNorm", false]], "layernormpositiontype (class in tensorrt_llm.functional)": [[85, "tensorrt_llm.functional.LayerNormPositionType", false]], "layernormtype (class in tensorrt_llm.functional)": [[85, "tensorrt_llm.functional.LayerNormType", false]], "learned_absolute (tensorrt_llm.functional.positionembeddingtype attribute)": [[85, "tensorrt_llm.functional.PositionEmbeddingType.learned_absolute", false]], "length (tensorrt_llm.llmapi.completionoutput attribute)": [[73, "tensorrt_llm.llmapi.CompletionOutput.length", false]], "length (tensorrt_llm.llmapi.completionoutput property)": [[73, "id2", false]], "length_penalty (tensorrt_llm.llmapi.samplingparams attribute)": [[73, "tensorrt_llm.llmapi.SamplingParams.length_penalty", false]], "length_penalty (tensorrt_llm.runtime.samplingconfig attribute)": [[90, "tensorrt_llm.runtime.SamplingConfig.length_penalty", false]], "linear (class in tensorrt_llm.layers.linear)": [[86, "tensorrt_llm.layers.linear.Linear", false]], "linear (tensorrt_llm.functional.rotaryscalingtype attribute)": [[85, "tensorrt_llm.functional.RotaryScalingType.linear", false]], "linearactivation (class in tensorrt_llm.layers.mlp)": [[86, "tensorrt_llm.layers.mlp.LinearActivation", false]], "linearapproximategelu (class in tensorrt_llm.layers.mlp)": [[86, "tensorrt_llm.layers.mlp.LinearApproximateGELU", false]], "linearbase (class in tensorrt_llm.layers.linear)": [[86, "tensorrt_llm.layers.linear.LinearBase", false]], "lineargeglu (class in tensorrt_llm.layers.mlp)": [[86, "tensorrt_llm.layers.mlp.LinearGEGLU", false]], "lineargelu (class in tensorrt_llm.layers.mlp)": [[86, "tensorrt_llm.layers.mlp.LinearGELU", false]], "linearswiglu (class in tensorrt_llm.layers.mlp)": [[86, "tensorrt_llm.layers.mlp.LinearSwiGLU", false]], "llama3 (tensorrt_llm.functional.rotaryscalingtype attribute)": [[85, "tensorrt_llm.functional.RotaryScalingType.llama3", false]], "llamaconfig (class in tensorrt_llm.models)": [[87, "tensorrt_llm.models.LLaMAConfig", false]], "llamaforcausallm (class in tensorrt_llm.models)": [[87, "tensorrt_llm.models.LLaMAForCausalLM", false]], "llamamodel (class in tensorrt_llm.models)": [[87, "tensorrt_llm.models.LLaMAModel", false]], "llavanextvisionconfig (class in tensorrt_llm.models)": [[87, "tensorrt_llm.models.LlavaNextVisionConfig", false]], "llavanextvisionwrapper (class in tensorrt_llm.models)": [[87, "tensorrt_llm.models.LlavaNextVisionWrapper", false]], "llm (class in tensorrt_llm.llmapi)": [[73, "tensorrt_llm.llmapi.LLM", false]], "llm_engine_dir (tensorrt_llm.runtime.multimodalmodelrunner property)": [[90, "tensorrt_llm.runtime.MultimodalModelRunner.llm_engine_dir", false]], "llm_id (tensorrt_llm.llmapi.llm attribute)": [[73, "tensorrt_llm.llmapi.LLM.llm_id", false]], "llm_id (tensorrt_llm.llmapi.llm property)": [[73, "id0", false]], "llmargs (in module tensorrt_llm.llmapi)": [[73, "tensorrt_llm.llmapi.LlmArgs", false]], "load() (tensorrt_llm.models.pretrainedmodel method)": [[87, "tensorrt_llm.models.PretrainedModel.load", false]], "load() (tensorrt_llm.models.sd3transformer2dmodel method)": [[87, "tensorrt_llm.models.SD3Transformer2DModel.load", false]], "load_format (tensorrt_llm.llmapi.torchllmargs attribute)": [[73, "tensorrt_llm.llmapi.TorchLlmArgs.load_format", false]], "load_test_audio() (tensorrt_llm.runtime.multimodalmodelrunner 
method)": [[90, "tensorrt_llm.runtime.MultimodalModelRunner.load_test_audio", false]], "load_test_data() (tensorrt_llm.runtime.multimodalmodelrunner method)": [[90, "tensorrt_llm.runtime.MultimodalModelRunner.load_test_data", false]], "locate_accepted_draft_tokens() (tensorrt_llm.runtime.generationsession method)": [[90, "tensorrt_llm.runtime.GenerationSession.locate_accepted_draft_tokens", false]], "location (tensorrt_llm.functional.tensor property)": [[85, "tensorrt_llm.functional.Tensor.location", false]], "log() (in module tensorrt_llm.functional)": [[85, "tensorrt_llm.functional.log", false]], "log() (tensorrt_llm.functional.tensor method)": [[85, "tensorrt_llm.functional.Tensor.log", false]], "log_softmax() (in module tensorrt_llm.functional)": [[85, "tensorrt_llm.functional.log_softmax", false]], "logits_processor (tensorrt_llm.llmapi.samplingparams attribute)": [[73, "tensorrt_llm.llmapi.SamplingParams.logits_processor", false]], "logitsprocessor (class in tensorrt_llm.runtime)": [[90, "tensorrt_llm.runtime.LogitsProcessor", false]], "logitsprocessorlist (class in tensorrt_llm.runtime)": [[90, "tensorrt_llm.runtime.LogitsProcessorList", false]], "logprobs (tensorrt_llm.llmapi.completionoutput attribute)": [[73, "tensorrt_llm.llmapi.CompletionOutput.logprobs", false]], "logprobs (tensorrt_llm.llmapi.samplingparams attribute)": [[73, "tensorrt_llm.llmapi.SamplingParams.logprobs", false]], "logprobs_diff (tensorrt_llm.llmapi.completionoutput attribute)": [[73, "tensorrt_llm.llmapi.CompletionOutput.logprobs_diff", false]], "logprobs_diff (tensorrt_llm.llmapi.completionoutput property)": [[73, "id3", false]], "long_rope (tensorrt_llm.functional.positionembeddingtype attribute)": [[85, "tensorrt_llm.functional.PositionEmbeddingType.long_rope", false]], "longrope (tensorrt_llm.functional.rotaryscalingtype attribute)": [[85, "tensorrt_llm.functional.RotaryScalingType.longrope", false]], "lookahead_config (tensorrt_llm.llmapi.samplingparams attribute)": [[73, "tensorrt_llm.llmapi.SamplingParams.lookahead_config", false]], "lookahead_decoding (tensorrt_llm.models.speculativedecodingmode attribute)": [[87, "tensorrt_llm.models.SpeculativeDecodingMode.LOOKAHEAD_DECODING", false]], "lookaheaddecodingconfig (class in tensorrt_llm.llmapi)": [[73, "tensorrt_llm.llmapi.LookaheadDecodingConfig", false]], "lora_config (tensorrt_llm.llmapi.buildconfig attribute)": [[73, "tensorrt_llm.llmapi.BuildConfig.lora_config", false]], "lora_plugin (tensorrt_llm.runtime.modelconfig attribute)": [[90, "tensorrt_llm.runtime.ModelConfig.lora_plugin", false]], "lora_plugin() (in module tensorrt_llm.functional)": [[85, "tensorrt_llm.functional.lora_plugin", false]], "lora_target_modules (tensorrt_llm.runtime.modelconfig attribute)": [[90, "tensorrt_llm.runtime.ModelConfig.lora_target_modules", false]], "low_latency_gemm() (in module tensorrt_llm.functional)": [[85, "tensorrt_llm.functional.low_latency_gemm", false]], "low_latency_gemm_swiglu() (in module tensorrt_llm.functional)": [[85, "tensorrt_llm.functional.low_latency_gemm_swiglu", false]], "lowprecision (tensorrt_llm.functional.allreducestrategy attribute)": [[85, "tensorrt_llm.functional.AllReduceStrategy.LOWPRECISION", false]], "lt() (in module tensorrt_llm.functional)": [[85, "tensorrt_llm.functional.lt", false]], "make_causal_mask() (in module tensorrt_llm.layers.attention)": [[86, "tensorrt_llm.layers.attention.make_causal_mask", false]], "mamba_conv1d() (in module tensorrt_llm.functional)": [[85, "tensorrt_llm.functional.mamba_conv1d", false]], 
"mamba_conv1d_plugin (tensorrt_llm.runtime.modelconfig attribute)": [[90, "tensorrt_llm.runtime.ModelConfig.mamba_conv1d_plugin", false]], "mambaforcausallm (class in tensorrt_llm.models)": [[87, "tensorrt_llm.models.MambaForCausalLM", false]], "mapping (tensorrt_llm.runtime.generationsession attribute)": [[90, "tensorrt_llm.runtime.GenerationSession.mapping", false]], "mapping (tensorrt_llm.runtime.modelrunner property)": [[90, "tensorrt_llm.runtime.ModelRunner.mapping", false]], "mark_output() (tensorrt_llm.functional.tensor method)": [[85, "tensorrt_llm.functional.Tensor.mark_output", false]], "masked_scatter() (in module tensorrt_llm.functional)": [[85, "tensorrt_llm.functional.masked_scatter", false]], "masked_select() (in module tensorrt_llm.functional)": [[85, "tensorrt_llm.functional.masked_select", false]], "matmul() (in module tensorrt_llm.functional)": [[85, "tensorrt_llm.functional.matmul", false]], "max() (in module tensorrt_llm.functional)": [[85, "tensorrt_llm.functional.max", false]], "max() (tensorrt_llm.functional.tensor method)": [[85, "tensorrt_llm.functional.Tensor.max", false]], "max_attention_window (tensorrt_llm.llmapi.kvcacheconfig attribute)": [[73, "tensorrt_llm.llmapi.KvCacheConfig.max_attention_window", false]], "max_attention_window_size (tensorrt_llm.runtime.samplingconfig attribute)": [[90, "tensorrt_llm.runtime.SamplingConfig.max_attention_window_size", false]], "max_batch_size (tensorrt_llm.llmapi.buildconfig attribute)": [[73, "tensorrt_llm.llmapi.BuildConfig.max_batch_size", false]], "max_batch_size (tensorrt_llm.runtime.modelconfig attribute)": [[90, "tensorrt_llm.runtime.ModelConfig.max_batch_size", false]], "max_beam_width (tensorrt_llm.llmapi.buildconfig attribute)": [[73, "tensorrt_llm.llmapi.BuildConfig.max_beam_width", false]], "max_beam_width (tensorrt_llm.runtime.modelconfig attribute)": [[90, "tensorrt_llm.runtime.ModelConfig.max_beam_width", false]], "max_cache_storage_gb (tensorrt_llm.llmapi.buildcacheconfig attribute)": [[73, "tensorrt_llm.llmapi.BuildCacheConfig.max_cache_storage_gb", false]], "max_cache_storage_gb (tensorrt_llm.llmapi.buildcacheconfig property)": [[73, "id8", false]], "max_cpu_loras (tensorrt_llm.llmapi.torchllmargs attribute)": [[73, "tensorrt_llm.llmapi.TorchLlmArgs.max_cpu_loras", false]], "max_cpu_loras (tensorrt_llm.llmapi.trtllmargs attribute)": [[73, "tensorrt_llm.llmapi.TrtLlmArgs.max_cpu_loras", false]], "max_draft_len (tensorrt_llm.llmapi.buildconfig attribute)": [[73, "tensorrt_llm.llmapi.BuildConfig.max_draft_len", false]], "max_draft_tokens (tensorrt_llm.runtime.generationsession property)": [[90, "tensorrt_llm.runtime.GenerationSession.max_draft_tokens", false]], "max_encoder_input_len (tensorrt_llm.llmapi.buildconfig attribute)": [[73, "tensorrt_llm.llmapi.BuildConfig.max_encoder_input_len", false]], "max_input_len (tensorrt_llm.llmapi.buildconfig attribute)": [[73, "tensorrt_llm.llmapi.BuildConfig.max_input_len", false]], "max_lora_rank (tensorrt_llm.llmapi.torchllmargs attribute)": [[73, "tensorrt_llm.llmapi.TorchLlmArgs.max_lora_rank", false]], "max_lora_rank (tensorrt_llm.llmapi.trtllmargs attribute)": [[73, "tensorrt_llm.llmapi.TrtLlmArgs.max_lora_rank", false]], "max_loras (tensorrt_llm.llmapi.torchllmargs attribute)": [[73, "tensorrt_llm.llmapi.TorchLlmArgs.max_loras", false]], "max_loras (tensorrt_llm.llmapi.trtllmargs attribute)": [[73, "tensorrt_llm.llmapi.TrtLlmArgs.max_loras", false]], "max_matching_ngram_size (tensorrt_llm.llmapi.ngramdecodingconfig attribute)": [[73, 
"tensorrt_llm.llmapi.NGramDecodingConfig.max_matching_ngram_size", false]], "max_medusa_tokens (tensorrt_llm.runtime.modelconfig attribute)": [[90, "tensorrt_llm.runtime.ModelConfig.max_medusa_tokens", false]], "max_new_tokens (tensorrt_llm.runtime.samplingconfig attribute)": [[90, "tensorrt_llm.runtime.SamplingConfig.max_new_tokens", false]], "max_ngram_size (tensorrt_llm.llmapi.lookaheaddecodingconfig attribute)": [[73, "tensorrt_llm.llmapi.LookaheadDecodingConfig.max_ngram_size", false]], "max_non_leaves_per_layer (tensorrt_llm.llmapi.eagledecodingconfig attribute)": [[73, "tensorrt_llm.llmapi.EagleDecodingConfig.max_non_leaves_per_layer", false]], "max_num_tokens (tensorrt_llm.llmapi.buildconfig attribute)": [[73, "tensorrt_llm.llmapi.BuildConfig.max_num_tokens", false]], "max_num_tokens (tensorrt_llm.llmapi.cachetransceiverconfig attribute)": [[73, "tensorrt_llm.llmapi.CacheTransceiverConfig.max_num_tokens", false]], "max_prompt_embedding_table_size (tensorrt_llm.llmapi.buildconfig attribute)": [[73, "tensorrt_llm.llmapi.BuildConfig.max_prompt_embedding_table_size", false]], "max_prompt_embedding_table_size (tensorrt_llm.runtime.generationsession property)": [[90, "tensorrt_llm.runtime.GenerationSession.max_prompt_embedding_table_size", false]], "max_prompt_embedding_table_size (tensorrt_llm.runtime.modelconfig attribute)": [[90, "tensorrt_llm.runtime.ModelConfig.max_prompt_embedding_table_size", false]], "max_prompt_embedding_table_size (tensorrt_llm.runtime.modelrunner property)": [[90, "tensorrt_llm.runtime.ModelRunner.max_prompt_embedding_table_size", false]], "max_prompt_embedding_table_size (tensorrt_llm.runtime.modelrunnercpp property)": [[90, "tensorrt_llm.runtime.ModelRunnerCpp.max_prompt_embedding_table_size", false]], "max_records (tensorrt_llm.llmapi.buildcacheconfig attribute)": [[73, "tensorrt_llm.llmapi.BuildCacheConfig.max_records", false]], "max_records (tensorrt_llm.llmapi.buildcacheconfig property)": [[73, "id9", false]], "max_seq_len (tensorrt_llm.llmapi.buildconfig attribute)": [[73, "tensorrt_llm.llmapi.BuildConfig.max_seq_len", false]], "max_sequence_length (tensorrt_llm.runtime.modelrunner property)": [[90, "tensorrt_llm.runtime.ModelRunner.max_sequence_length", false]], "max_sequence_length (tensorrt_llm.runtime.modelrunnercpp property)": [[90, "tensorrt_llm.runtime.ModelRunnerCpp.max_sequence_length", false]], "max_tokens (tensorrt_llm.llmapi.kvcacheconfig attribute)": [[73, "tensorrt_llm.llmapi.KvCacheConfig.max_tokens", false]], "max_tokens (tensorrt_llm.llmapi.samplingparams attribute)": [[73, "tensorrt_llm.llmapi.SamplingParams.max_tokens", false]], "max_utilization (tensorrt_llm.llmapi.capacityschedulerpolicy attribute)": [[73, "tensorrt_llm.llmapi.CapacitySchedulerPolicy.MAX_UTILIZATION", false]], "max_verification_set_size (tensorrt_llm.llmapi.lookaheaddecodingconfig attribute)": [[73, "tensorrt_llm.llmapi.LookaheadDecodingConfig.max_verification_set_size", false]], "max_window_size (tensorrt_llm.llmapi.lookaheaddecodingconfig attribute)": [[73, "tensorrt_llm.llmapi.LookaheadDecodingConfig.max_window_size", false]], "maximum() (in module tensorrt_llm.functional)": [[85, "tensorrt_llm.functional.maximum", false]], "mean() (in module tensorrt_llm.functional)": [[85, "tensorrt_llm.functional.mean", false]], "mean() (tensorrt_llm.functional.tensor method)": [[85, "tensorrt_llm.functional.Tensor.mean", false]], "medusa (tensorrt_llm.models.speculativedecodingmode attribute)": [[87, "tensorrt_llm.models.SpeculativeDecodingMode.MEDUSA", false]], 
"medusa_choices (tensorrt_llm.llmapi.medusadecodingconfig attribute)": [[73, "tensorrt_llm.llmapi.MedusaDecodingConfig.medusa_choices", false]], "medusa_decode_and_verify() (tensorrt_llm.runtime.generationsession method)": [[90, "tensorrt_llm.runtime.GenerationSession.medusa_decode_and_verify", false]], "medusa_paths (tensorrt_llm.runtime.generationsession attribute)": [[90, "tensorrt_llm.runtime.GenerationSession.medusa_paths", false]], "medusa_position_offsets (tensorrt_llm.runtime.generationsession attribute)": [[90, "tensorrt_llm.runtime.GenerationSession.medusa_position_offsets", false]], "medusa_temperature (tensorrt_llm.runtime.generationsession attribute)": [[90, "tensorrt_llm.runtime.GenerationSession.medusa_temperature", false]], "medusa_topks (tensorrt_llm.runtime.generationsession attribute)": [[90, "tensorrt_llm.runtime.GenerationSession.medusa_topks", false]], "medusa_tree_ids (tensorrt_llm.runtime.generationsession attribute)": [[90, "tensorrt_llm.runtime.GenerationSession.medusa_tree_ids", false]], "medusaconfig (class in tensorrt_llm.models)": [[87, "tensorrt_llm.models.MedusaConfig", false]], "medusadecodingconfig (class in tensorrt_llm.llmapi)": [[73, "tensorrt_llm.llmapi.MedusaDecodingConfig", false]], "medusaforcausallm (class in tensorrt_llm.models)": [[87, "tensorrt_llm.models.MedusaForCausalLm", false]], "meshgrid2d() (in module tensorrt_llm.functional)": [[85, "tensorrt_llm.functional.meshgrid2d", false]], "min() (in module tensorrt_llm.functional)": [[85, "tensorrt_llm.functional.min", false]], "min_latency (tensorrt_llm.functional.allreducestrategy attribute)": [[85, "tensorrt_llm.functional.AllReduceStrategy.MIN_LATENCY", false]], "min_length (tensorrt_llm.runtime.samplingconfig attribute)": [[90, "tensorrt_llm.runtime.SamplingConfig.min_length", false]], "min_p (tensorrt_llm.llmapi.samplingparams attribute)": [[73, "tensorrt_llm.llmapi.SamplingParams.min_p", false]], "min_p (tensorrt_llm.runtime.samplingconfig attribute)": [[90, "tensorrt_llm.runtime.SamplingConfig.min_p", false]], "min_tokens (tensorrt_llm.llmapi.samplingparams attribute)": [[73, "tensorrt_llm.llmapi.SamplingParams.min_tokens", false]], "minimum() (in module tensorrt_llm.functional)": [[85, "tensorrt_llm.functional.minimum", false]], "mish (class in tensorrt_llm.layers.activation)": [[86, "tensorrt_llm.layers.activation.Mish", false]], "mixed_precision (tensorrt_llm.llmapi.quantalgo attribute)": [[73, "tensorrt_llm.llmapi.QuantAlgo.MIXED_PRECISION", false]], "mixed_sampler (tensorrt_llm.llmapi.torchllmargs attribute)": [[73, "tensorrt_llm.llmapi.TorchLlmArgs.mixed_sampler", false]], "mllamaforcausallm (class in tensorrt_llm.models)": [[87, "tensorrt_llm.models.MLLaMAForCausalLM", false]], "mlp (class in tensorrt_llm.layers.mlp)": [[86, "tensorrt_llm.layers.mlp.MLP", false]], "mlp (tensorrt_llm.functional.mlptype attribute)": [[85, "tensorrt_llm.functional.MLPType.MLP", false]], "mlptype (class in tensorrt_llm.functional)": [[85, "tensorrt_llm.functional.MLPType", false]], "mnnvl (tensorrt_llm.functional.allreducestrategy attribute)": [[85, "tensorrt_llm.functional.AllReduceStrategy.MNNVL", false]], "model": [[33, "cmdoption-trtllm-serve-serve-arg-MODEL", false]], "model_config (tensorrt_llm.llmapi.cachetransceiverconfig attribute)": [[73, "tensorrt_llm.llmapi.CacheTransceiverConfig.model_config", false]], "model_config (tensorrt_llm.llmapi.calibconfig attribute)": [[73, "tensorrt_llm.llmapi.CalibConfig.model_config", false]], "model_config (tensorrt_llm.llmapi.drafttargetdecodingconfig 
attribute)": [[73, "tensorrt_llm.llmapi.DraftTargetDecodingConfig.model_config", false]], "model_config (tensorrt_llm.llmapi.dynamicbatchconfig attribute)": [[73, "tensorrt_llm.llmapi.DynamicBatchConfig.model_config", false]], "model_config (tensorrt_llm.llmapi.eagledecodingconfig attribute)": [[73, "tensorrt_llm.llmapi.EagleDecodingConfig.model_config", false]], "model_config (tensorrt_llm.llmapi.extendedruntimeperfknobconfig attribute)": [[73, "tensorrt_llm.llmapi.ExtendedRuntimePerfKnobConfig.model_config", false]], "model_config (tensorrt_llm.llmapi.kvcacheconfig attribute)": [[73, "tensorrt_llm.llmapi.KvCacheConfig.model_config", false]], "model_config (tensorrt_llm.llmapi.lookaheaddecodingconfig attribute)": [[73, "tensorrt_llm.llmapi.LookaheadDecodingConfig.model_config", false]], "model_config (tensorrt_llm.llmapi.medusadecodingconfig attribute)": [[73, "tensorrt_llm.llmapi.MedusaDecodingConfig.model_config", false]], "model_config (tensorrt_llm.llmapi.mtpdecodingconfig attribute)": [[73, "tensorrt_llm.llmapi.MTPDecodingConfig.model_config", false]], "model_config (tensorrt_llm.llmapi.ngramdecodingconfig attribute)": [[73, "tensorrt_llm.llmapi.NGramDecodingConfig.model_config", false]], "model_config (tensorrt_llm.llmapi.schedulerconfig attribute)": [[73, "tensorrt_llm.llmapi.SchedulerConfig.model_config", false]], "model_config (tensorrt_llm.llmapi.torchcompileconfig attribute)": [[73, "tensorrt_llm.llmapi.TorchCompileConfig.model_config", false]], "model_config (tensorrt_llm.llmapi.torchllmargs attribute)": [[73, "tensorrt_llm.llmapi.TorchLlmArgs.model_config", false]], "model_config (tensorrt_llm.llmapi.trtllmargs attribute)": [[73, "tensorrt_llm.llmapi.TrtLlmArgs.model_config", false]], "model_name (tensorrt_llm.runtime.modelconfig attribute)": [[90, "tensorrt_llm.runtime.ModelConfig.model_name", false]], "model_post_init() (tensorrt_llm.llmapi.torchllmargs method)": [[73, "tensorrt_llm.llmapi.TorchLlmArgs.model_post_init", false]], "model_post_init() (tensorrt_llm.llmapi.trtllmargs method)": [[73, "tensorrt_llm.llmapi.TrtLlmArgs.model_post_init", false]], "modelconfig (class in tensorrt_llm.runtime)": [[90, "tensorrt_llm.runtime.ModelConfig", false]], "modelrunner (class in tensorrt_llm.runtime)": [[90, "tensorrt_llm.runtime.ModelRunner", false]], "modelrunnercpp (class in tensorrt_llm.runtime)": [[90, "tensorrt_llm.runtime.ModelRunnerCpp", false]], "module": [[85, "module-tensorrt_llm", false], [85, "module-tensorrt_llm.functional", false], [86, "module-tensorrt_llm", false], [86, "module-tensorrt_llm.layers.activation", false], [86, "module-tensorrt_llm.layers.attention", false], [86, "module-tensorrt_llm.layers.cast", false], [86, "module-tensorrt_llm.layers.conv", false], [86, "module-tensorrt_llm.layers.embedding", false], [86, "module-tensorrt_llm.layers.linear", false], [86, "module-tensorrt_llm.layers.mlp", false], [86, "module-tensorrt_llm.layers.normalization", false], [86, "module-tensorrt_llm.layers.pooling", false], [87, "module-tensorrt_llm", false], [87, "module-tensorrt_llm.models", false], [88, "module-tensorrt_llm", false], [88, "module-tensorrt_llm.plugin", false], [89, "module-tensorrt_llm", false], [89, "module-tensorrt_llm.quantization", false], [90, "module-tensorrt_llm", false], [90, "module-tensorrt_llm.runtime", false]], "modulo() (in module tensorrt_llm.functional)": [[85, "tensorrt_llm.functional.modulo", false]], "moe (tensorrt_llm.functional.sidestreamidtype attribute)": [[85, "tensorrt_llm.functional.SideStreamIDType.moe", false]], "moe_backend 
(tensorrt_llm.llmapi.torchllmargs attribute)": [[73, "tensorrt_llm.llmapi.TorchLlmArgs.moe_backend", false]], "moe_finalize_allreduce_residual_rms_norm (tensorrt_llm.functional.allreducefusionop attribute)": [[85, "tensorrt_llm.functional.AllReduceFusionOp.MOE_FINALIZE_ALLREDUCE_RESIDUAL_RMS_NORM", false]], "moe_load_balancer (tensorrt_llm.llmapi.torchllmargs attribute)": [[73, "tensorrt_llm.llmapi.TorchLlmArgs.moe_load_balancer", false]], "moe_max_num_tokens (tensorrt_llm.llmapi.torchllmargs attribute)": [[73, "tensorrt_llm.llmapi.TorchLlmArgs.moe_max_num_tokens", false]], "moeallreduceparams (class in tensorrt_llm.functional)": [[85, "tensorrt_llm.functional.MoEAllReduceParams", false]], "monitor_memory (tensorrt_llm.llmapi.buildconfig attribute)": [[73, "tensorrt_llm.llmapi.BuildConfig.monitor_memory", false]], "mpicommsession (class in tensorrt_llm.llmapi)": [[73, "tensorrt_llm.llmapi.MpiCommSession", false]], "mptforcausallm (class in tensorrt_llm.models)": [[87, "tensorrt_llm.models.MPTForCausalLM", false]], "mptmodel (class in tensorrt_llm.models)": [[87, "tensorrt_llm.models.MPTModel", false]], "mrope (tensorrt_llm.functional.positionembeddingtype attribute)": [[85, "tensorrt_llm.functional.PositionEmbeddingType.mrope", false]], "mrope (tensorrt_llm.functional.rotaryscalingtype attribute)": [[85, "tensorrt_llm.functional.RotaryScalingType.mrope", false]], "mropeparams (class in tensorrt_llm.layers.attention)": [[86, "tensorrt_llm.layers.attention.MropeParams", false]], "msg (tensorrt_llm.llmapi.torchllmargs attribute)": [[73, "id10", false], [73, "id13", false], [73, "id16", false], [73, "tensorrt_llm.llmapi.TorchLlmArgs.msg", false]], "msg (tensorrt_llm.llmapi.trtllmargs attribute)": [[73, "id19", false], [73, "id22", false], [73, "id25", false], [73, "id28", false], [73, "id31", false], [73, "tensorrt_llm.llmapi.TrtLlmArgs.msg", false]], "mtpdecodingconfig (class in tensorrt_llm.llmapi)": [[73, "tensorrt_llm.llmapi.MTPDecodingConfig", false]], "mul() (in module tensorrt_llm.functional)": [[85, "tensorrt_llm.functional.mul", false]], "multi_block_mode (tensorrt_llm.llmapi.extendedruntimeperfknobconfig attribute)": [[73, "tensorrt_llm.llmapi.ExtendedRuntimePerfKnobConfig.multi_block_mode", false]], "multimodalmodelrunner (class in tensorrt_llm.runtime)": [[90, "tensorrt_llm.runtime.MultimodalModelRunner", false]], "multiply_and_lora() (tensorrt_llm.layers.linear.linearbase method)": [[86, "tensorrt_llm.layers.linear.LinearBase.multiply_and_lora", false]], "multiply_collect() (tensorrt_llm.layers.linear.linearbase method)": [[86, "tensorrt_llm.layers.linear.LinearBase.multiply_collect", false]], "multiply_collect() (tensorrt_llm.layers.linear.rowlinear method)": [[86, "tensorrt_llm.layers.linear.RowLinear.multiply_collect", false]], "n (tensorrt_llm.llmapi.samplingparams attribute)": [[73, "tensorrt_llm.llmapi.SamplingParams.n", false]], "name (tensorrt_llm.functional.tensor property)": [[85, "tensorrt_llm.functional.Tensor.name", false]], "name (tensorrt_llm.runtime.tensorinfo attribute)": [[90, "tensorrt_llm.runtime.TensorInfo.name", false]], "native_quant_flow (tensorrt_llm.models.gemmaforcausallm attribute)": [[87, "tensorrt_llm.models.GemmaForCausalLM.NATIVE_QUANT_FLOW", false]], "nccl (tensorrt_llm.functional.allreducestrategy attribute)": [[85, "tensorrt_llm.functional.AllReduceStrategy.NCCL", false]], "ndim() (tensorrt_llm.functional.tensor method)": [[85, "tensorrt_llm.functional.Tensor.ndim", false]], "network (tensorrt_llm.functional.tensor property)": [[85, 
"tensorrt_llm.functional.Tensor.network", false]], "next_medusa_input_ids() (tensorrt_llm.runtime.generationsession method)": [[90, "tensorrt_llm.runtime.GenerationSession.next_medusa_input_ids", false]], "ngram (tensorrt_llm.models.speculativedecodingmode attribute)": [[87, "tensorrt_llm.models.SpeculativeDecodingMode.NGRAM", false]], "ngramdecodingconfig (class in tensorrt_llm.llmapi)": [[73, "tensorrt_llm.llmapi.NGramDecodingConfig", false]], "no_quant (tensorrt_llm.llmapi.quantalgo attribute)": [[73, "tensorrt_llm.llmapi.QuantAlgo.NO_QUANT", false]], "no_repeat_ngram_size (tensorrt_llm.llmapi.samplingparams attribute)": [[73, "tensorrt_llm.llmapi.SamplingParams.no_repeat_ngram_size", false]], "no_repeat_ngram_size (tensorrt_llm.runtime.samplingconfig attribute)": [[90, "tensorrt_llm.runtime.SamplingConfig.no_repeat_ngram_size", false]], "non_gated_version() (in module tensorrt_llm.functional)": [[85, "tensorrt_llm.functional.non_gated_version", false]], "none (tensorrt_llm.functional.allreducefusionop attribute)": [[85, "tensorrt_llm.functional.AllReduceFusionOp.NONE", false]], "none (tensorrt_llm.functional.rotaryscalingtype attribute)": [[85, "tensorrt_llm.functional.RotaryScalingType.none", false]], "none (tensorrt_llm.models.speculativedecodingmode attribute)": [[87, "tensorrt_llm.models.SpeculativeDecodingMode.NONE", false]], "nonzero() (in module tensorrt_llm.functional)": [[85, "tensorrt_llm.functional.nonzero", false]], "not_op() (in module tensorrt_llm.functional)": [[85, "tensorrt_llm.functional.not_op", false]], "num_beams (tensorrt_llm.runtime.samplingconfig attribute)": [[90, "tensorrt_llm.runtime.SamplingConfig.num_beams", false]], "num_draft_tokens (tensorrt_llm.runtime.generationsession attribute)": [[90, "tensorrt_llm.runtime.GenerationSession.num_draft_tokens", false]], "num_eagle_layers (tensorrt_llm.llmapi.eagledecodingconfig attribute)": [[73, "tensorrt_llm.llmapi.EagleDecodingConfig.num_eagle_layers", false]], "num_heads (tensorrt_llm.runtime.generationsession property)": [[90, "tensorrt_llm.runtime.GenerationSession.num_heads", false]], "num_heads (tensorrt_llm.runtime.modelconfig attribute)": [[90, "tensorrt_llm.runtime.ModelConfig.num_heads", false]], "num_heads (tensorrt_llm.runtime.modelrunner property)": [[90, "tensorrt_llm.runtime.ModelRunner.num_heads", false]], "num_heads (tensorrt_llm.runtime.modelrunnercpp property)": [[90, "tensorrt_llm.runtime.ModelRunnerCpp.num_heads", false]], "num_kv_heads (tensorrt_llm.runtime.modelconfig attribute)": [[90, "tensorrt_llm.runtime.ModelConfig.num_kv_heads", false]], "num_kv_heads_per_cross_attn_layer (tensorrt_llm.runtime.modelconfig attribute)": [[90, "tensorrt_llm.runtime.ModelConfig.num_kv_heads_per_cross_attn_layer", false]], "num_kv_heads_per_layer (tensorrt_llm.runtime.modelconfig attribute)": [[90, "tensorrt_llm.runtime.ModelConfig.num_kv_heads_per_layer", false]], "num_layers (tensorrt_llm.runtime.generationsession property)": [[90, "tensorrt_llm.runtime.GenerationSession.num_layers", false]], "num_layers (tensorrt_llm.runtime.modelconfig attribute)": [[90, "tensorrt_llm.runtime.ModelConfig.num_layers", false]], "num_layers (tensorrt_llm.runtime.modelrunner property)": [[90, "tensorrt_llm.runtime.ModelRunner.num_layers", false]], "num_layers (tensorrt_llm.runtime.modelrunnercpp property)": [[90, "tensorrt_llm.runtime.ModelRunnerCpp.num_layers", false]], "num_medusa_heads (tensorrt_llm.llmapi.medusadecodingconfig attribute)": [[73, "tensorrt_llm.llmapi.MedusaDecodingConfig.num_medusa_heads", false]], 
"num_medusa_heads (tensorrt_llm.runtime.generationsession property)": [[90, "tensorrt_llm.runtime.GenerationSession.num_medusa_heads", false]], "num_medusa_heads (tensorrt_llm.runtime.modelconfig attribute)": [[90, "tensorrt_llm.runtime.ModelConfig.num_medusa_heads", false]], "num_nextn_predict_layers (tensorrt_llm.llmapi.mtpdecodingconfig attribute)": [[73, "tensorrt_llm.llmapi.MTPDecodingConfig.num_nextn_predict_layers", false]], "num_return_sequences (tensorrt_llm.runtime.samplingconfig attribute)": [[90, "tensorrt_llm.runtime.SamplingConfig.num_return_sequences", false]], "numel() (tensorrt_llm.runtime.tensorinfo method)": [[90, "tensorrt_llm.runtime.TensorInfo.numel", false]], "nvfp4 (tensorrt_llm.llmapi.quantalgo attribute)": [[73, "tensorrt_llm.llmapi.QuantAlgo.NVFP4", false]], "nvinfer1 (c++ type)": [[1, "_CPPv48nvinfer1", false]], "onboard_blocks (tensorrt_llm.llmapi.kvcacheconfig attribute)": [[73, "tensorrt_llm.llmapi.KvCacheConfig.onboard_blocks", false]], "oneshot (tensorrt_llm.functional.allreducestrategy attribute)": [[85, "tensorrt_llm.functional.AllReduceStrategy.ONESHOT", false]], "op_and() (in module tensorrt_llm.functional)": [[85, "tensorrt_llm.functional.op_and", false]], "op_or() (in module tensorrt_llm.functional)": [[85, "tensorrt_llm.functional.op_or", false]], "op_xor() (in module tensorrt_llm.functional)": [[85, "tensorrt_llm.functional.op_xor", false]], "opaque_state (tensorrt_llm.llmapi.disaggregatedparams attribute)": [[73, "tensorrt_llm.llmapi.DisaggregatedParams.opaque_state", false]], "opt_batch_size (tensorrt_llm.llmapi.buildconfig attribute)": [[73, "tensorrt_llm.llmapi.BuildConfig.opt_batch_size", false]], "opt_num_tokens (tensorrt_llm.llmapi.buildconfig attribute)": [[73, "tensorrt_llm.llmapi.BuildConfig.opt_num_tokens", false]], "optforcausallm (class in tensorrt_llm.models)": [[87, "tensorrt_llm.models.OPTForCausalLM", false]], "optmodel (class in tensorrt_llm.models)": [[87, "tensorrt_llm.models.OPTModel", false]], "outer() (in module tensorrt_llm.functional)": [[85, "tensorrt_llm.functional.outer", false]], "output_cum_log_probs (tensorrt_llm.runtime.samplingconfig attribute)": [[90, "tensorrt_llm.runtime.SamplingConfig.output_cum_log_probs", false]], "output_log_probs (tensorrt_llm.runtime.samplingconfig attribute)": [[90, "tensorrt_llm.runtime.SamplingConfig.output_log_probs", false]], "output_sequence_lengths (tensorrt_llm.runtime.samplingconfig attribute)": [[90, "tensorrt_llm.runtime.SamplingConfig.output_sequence_lengths", false]], "output_timing_cache (tensorrt_llm.llmapi.buildconfig attribute)": [[73, "tensorrt_llm.llmapi.BuildConfig.output_timing_cache", false]], "outputs (tensorrt_llm.llmapi.requestoutput attribute)": [[73, "tensorrt_llm.llmapi.RequestOutput.outputs", false]], "pad() (in module tensorrt_llm.functional)": [[85, "tensorrt_llm.functional.pad", false]], "pad_id (tensorrt_llm.llmapi.samplingparams attribute)": [[73, "tensorrt_llm.llmapi.SamplingParams.pad_id", false]], "pad_id (tensorrt_llm.runtime.samplingconfig attribute)": [[90, "tensorrt_llm.runtime.SamplingConfig.pad_id", false]], "padding (tensorrt_llm.functional.attentionmasktype attribute)": [[85, "tensorrt_llm.functional.AttentionMaskType.padding", false]], "paged_kv_cache (tensorrt_llm.runtime.generationsession property)": [[90, "tensorrt_llm.runtime.GenerationSession.paged_kv_cache", false]], "paged_state (tensorrt_llm.runtime.generationsession property)": [[90, "tensorrt_llm.runtime.GenerationSession.paged_state", false]], "paged_state 
(tensorrt_llm.runtime.modelconfig attribute)": [[90, "tensorrt_llm.runtime.ModelConfig.paged_state", false]], "permute() (in module tensorrt_llm.functional)": [[85, "tensorrt_llm.functional.permute", false]], "permute() (tensorrt_llm.functional.tensor method)": [[85, "tensorrt_llm.functional.Tensor.permute", false]], "phi3forcausallm (class in tensorrt_llm.models)": [[87, "tensorrt_llm.models.Phi3ForCausalLM", false]], "phi3model (class in tensorrt_llm.models)": [[87, "tensorrt_llm.models.Phi3Model", false]], "phiforcausallm (class in tensorrt_llm.models)": [[87, "tensorrt_llm.models.PhiForCausalLM", false]], "phimodel (class in tensorrt_llm.models)": [[87, "tensorrt_llm.models.PhiModel", false]], "pixartalphatextprojection (class in tensorrt_llm.layers.embedding)": [[86, "tensorrt_llm.layers.embedding.PixArtAlphaTextProjection", false]], "plugin_config (tensorrt_llm.llmapi.buildconfig attribute)": [[73, "tensorrt_llm.llmapi.BuildConfig.plugin_config", false]], "pluginconfig (class in tensorrt_llm.plugin)": [[88, "tensorrt_llm.plugin.PluginConfig", false]], "positionembeddingtype (class in tensorrt_llm.functional)": [[85, "tensorrt_llm.functional.PositionEmbeddingType", false]], "post_layernorm (tensorrt_llm.functional.layernormpositiontype attribute)": [[85, "tensorrt_llm.functional.LayerNormPositionType.post_layernorm", false]], "posterior_threshold (tensorrt_llm.llmapi.eagledecodingconfig attribute)": [[73, "tensorrt_llm.llmapi.EagleDecodingConfig.posterior_threshold", false]], "postprocess() (tensorrt_llm.layers.attention.attention method)": [[86, "tensorrt_llm.layers.attention.Attention.postprocess", false]], "postprocess() (tensorrt_llm.layers.attention.deepseekv2attention method)": [[86, "tensorrt_llm.layers.attention.DeepseekV2Attention.postprocess", false]], "postprocess() (tensorrt_llm.layers.embedding.embedding method)": [[86, "tensorrt_llm.layers.embedding.Embedding.postprocess", false]], "postprocess() (tensorrt_llm.layers.linear.linear method)": [[86, "tensorrt_llm.layers.linear.Linear.postprocess", false]], "pow() (in module tensorrt_llm.functional)": [[85, "tensorrt_llm.functional.pow", false]], "pp_communicate_final_output_ids() (tensorrt_llm.runtime.generationsession method)": [[90, "tensorrt_llm.runtime.GenerationSession.pp_communicate_final_output_ids", false]], "pp_communicate_new_tokens() (tensorrt_llm.runtime.generationsession method)": [[90, "tensorrt_llm.runtime.GenerationSession.pp_communicate_new_tokens", false]], "pre_layernorm (tensorrt_llm.functional.layernormpositiontype attribute)": [[85, "tensorrt_llm.functional.LayerNormPositionType.pre_layernorm", false]], "pre_quant_scale (tensorrt_llm.llmapi.quantconfig attribute)": [[73, "tensorrt_llm.llmapi.QuantConfig.pre_quant_scale", false]], "precompute_relative_attention_bias() (tensorrt_llm.models.decodermodel method)": [[87, "tensorrt_llm.models.DecoderModel.precompute_relative_attention_bias", false]], "precompute_relative_attention_bias() (tensorrt_llm.models.encodermodel method)": [[87, "tensorrt_llm.models.EncoderModel.precompute_relative_attention_bias", false]], "precompute_relative_attention_bias() (tensorrt_llm.models.whisperencoder method)": [[87, "tensorrt_llm.models.WhisperEncoder.precompute_relative_attention_bias", false]], "prepare_inputs() (tensorrt_llm.models.chatglmforcausallm method)": [[87, "tensorrt_llm.models.ChatGLMForCausalLM.prepare_inputs", false]], "prepare_inputs() (tensorrt_llm.models.decodermodel method)": [[87, "tensorrt_llm.models.DecoderModel.prepare_inputs", false]], 
"prepare_inputs() (tensorrt_llm.models.dit method)": [[87, "tensorrt_llm.models.DiT.prepare_inputs", false]], "prepare_inputs() (tensorrt_llm.models.eagleforcausallm method)": [[87, "tensorrt_llm.models.EagleForCausalLM.prepare_inputs", false]], "prepare_inputs() (tensorrt_llm.models.encodermodel method)": [[87, "tensorrt_llm.models.EncoderModel.prepare_inputs", false]], "prepare_inputs() (tensorrt_llm.models.llavanextvisionwrapper method)": [[87, "tensorrt_llm.models.LlavaNextVisionWrapper.prepare_inputs", false]], "prepare_inputs() (tensorrt_llm.models.mambaforcausallm method)": [[87, "tensorrt_llm.models.MambaForCausalLM.prepare_inputs", false]], "prepare_inputs() (tensorrt_llm.models.mllamaforcausallm method)": [[87, "tensorrt_llm.models.MLLaMAForCausalLM.prepare_inputs", false]], "prepare_inputs() (tensorrt_llm.models.pretrainedmodel method)": [[87, "tensorrt_llm.models.PretrainedModel.prepare_inputs", false]], "prepare_inputs() (tensorrt_llm.models.recurrentgemmaforcausallm method)": [[87, "tensorrt_llm.models.RecurrentGemmaForCausalLM.prepare_inputs", false]], "prepare_inputs() (tensorrt_llm.models.sd3transformer2dmodel method)": [[87, "tensorrt_llm.models.SD3Transformer2DModel.prepare_inputs", false]], "prepare_inputs() (tensorrt_llm.models.whisperencoder method)": [[87, "tensorrt_llm.models.WhisperEncoder.prepare_inputs", false]], "prepare_position_ids_for_cogvlm() (tensorrt_llm.runtime.multimodalmodelrunner method)": [[90, "tensorrt_llm.runtime.MultimodalModelRunner.prepare_position_ids_for_cogvlm", false]], "prepare_recurrent_inputs() (tensorrt_llm.models.recurrentgemmaforcausallm method)": [[87, "tensorrt_llm.models.RecurrentGemmaForCausalLM.prepare_recurrent_inputs", false]], "preprocess() (tensorrt_llm.runtime.multimodalmodelrunner method)": [[90, "tensorrt_llm.runtime.MultimodalModelRunner.preprocess", false]], "presence_penalty (tensorrt_llm.llmapi.samplingparams attribute)": [[73, "tensorrt_llm.llmapi.SamplingParams.presence_penalty", false]], "presence_penalty (tensorrt_llm.runtime.samplingconfig attribute)": [[90, "tensorrt_llm.runtime.SamplingConfig.presence_penalty", false]], "pretrainedconfig (class in tensorrt_llm.models)": [[87, "tensorrt_llm.models.PretrainedConfig", false]], "pretrainedmodel (class in tensorrt_llm.models)": [[87, "tensorrt_llm.models.PretrainedModel", false]], "print_iter_log (tensorrt_llm.llmapi.torchllmargs attribute)": [[73, "tensorrt_llm.llmapi.TorchLlmArgs.print_iter_log", false]], "priority (tensorrt_llm.llmapi.kvcacheretentionconfig.tokenrangeretentionconfig property)": [[73, "tensorrt_llm.llmapi.KvCacheRetentionConfig.TokenRangeRetentionConfig.priority", false]], "process_input() (tensorrt_llm.runtime.encdecmodelrunner method)": [[90, "tensorrt_llm.runtime.EncDecModelRunner.process_input", false]], "process_logits_including_draft() (tensorrt_llm.runtime.generationsession method)": [[90, "tensorrt_llm.runtime.GenerationSession.process_logits_including_draft", false]], "prod() (in module tensorrt_llm.functional)": [[85, "tensorrt_llm.functional.prod", false]], "profiler (tensorrt_llm.runtime.generationsession property)": [[90, "tensorrt_llm.runtime.GenerationSession.profiler", false]], "profiling_verbosity (tensorrt_llm.llmapi.buildconfig attribute)": [[73, "tensorrt_llm.llmapi.BuildConfig.profiling_verbosity", false]], "prompt (tensorrt_llm.llmapi.requestoutput attribute)": [[73, "tensorrt_llm.llmapi.RequestOutput.prompt", false]], "prompt (tensorrt_llm.llmapi.requestoutput property)": [[73, "id6", false]], "prompt_logprobs 
(tensorrt_llm.llmapi.completionoutput attribute)": [[73, "tensorrt_llm.llmapi.CompletionOutput.prompt_logprobs", false]], "prompt_logprobs (tensorrt_llm.llmapi.samplingparams attribute)": [[73, "tensorrt_llm.llmapi.SamplingParams.prompt_logprobs", false]], "prompt_lookup_num_tokens (tensorrt_llm.llmapi.ngramdecodingconfig attribute)": [[73, "tensorrt_llm.llmapi.NGramDecodingConfig.prompt_lookup_num_tokens", false]], "prompt_token_ids (tensorrt_llm.llmapi.requestoutput attribute)": [[73, "tensorrt_llm.llmapi.RequestOutput.prompt_token_ids", false]], "prompttuningembedding (class in tensorrt_llm.layers.embedding)": [[86, "tensorrt_llm.layers.embedding.PromptTuningEmbedding", false]], "ptuning_setup() (tensorrt_llm.runtime.multimodalmodelrunner method)": [[90, "tensorrt_llm.runtime.MultimodalModelRunner.ptuning_setup", false]], "ptuning_setup_fuyu() (tensorrt_llm.runtime.multimodalmodelrunner method)": [[90, "tensorrt_llm.runtime.MultimodalModelRunner.ptuning_setup_fuyu", false]], "ptuning_setup_llava_next() (tensorrt_llm.runtime.multimodalmodelrunner method)": [[90, "tensorrt_llm.runtime.MultimodalModelRunner.ptuning_setup_llava_next", false]], "ptuning_setup_phi3() (tensorrt_llm.runtime.multimodalmodelrunner method)": [[90, "tensorrt_llm.runtime.MultimodalModelRunner.ptuning_setup_phi3", false]], "ptuning_setup_pixtral() (tensorrt_llm.runtime.multimodalmodelrunner method)": [[90, "tensorrt_llm.runtime.MultimodalModelRunner.ptuning_setup_pixtral", false]], "python_e2e (tensorrt_llm.runtime.multimodalmodelrunner property)": [[90, "tensorrt_llm.runtime.MultimodalModelRunner.python_e2e", false]], "pytorch_weights_path (tensorrt_llm.llmapi.drafttargetdecodingconfig attribute)": [[73, "tensorrt_llm.llmapi.DraftTargetDecodingConfig.pytorch_weights_path", false]], "pytorch_weights_path (tensorrt_llm.llmapi.eagledecodingconfig attribute)": [[73, "tensorrt_llm.llmapi.EagleDecodingConfig.pytorch_weights_path", false]], "quant_algo (tensorrt_llm.llmapi.quantconfig attribute)": [[73, "tensorrt_llm.llmapi.QuantConfig.quant_algo", false]], "quant_algo (tensorrt_llm.models.pretrainedconfig property)": [[87, "tensorrt_llm.models.PretrainedConfig.quant_algo", false]], "quant_mode (tensorrt_llm.llmapi.quantconfig property)": [[73, "tensorrt_llm.llmapi.QuantConfig.quant_mode", false]], "quant_mode (tensorrt_llm.models.pretrainedconfig property)": [[87, "tensorrt_llm.models.PretrainedConfig.quant_mode", false]], "quant_mode (tensorrt_llm.runtime.generationsession property)": [[90, "tensorrt_llm.runtime.GenerationSession.quant_mode", false]], "quant_mode (tensorrt_llm.runtime.modelconfig attribute)": [[90, "tensorrt_llm.runtime.ModelConfig.quant_mode", false]], "quantalgo (class in tensorrt_llm.llmapi)": [[73, "tensorrt_llm.llmapi.QuantAlgo", false]], "quantalgo (class in tensorrt_llm.quantization)": [[89, "tensorrt_llm.quantization.QuantAlgo", false]], "quantconfig (class in tensorrt_llm.llmapi)": [[73, "tensorrt_llm.llmapi.QuantConfig", false]], "quantize() (tensorrt_llm.models.baichuanforcausallm class method)": [[87, "tensorrt_llm.models.BaichuanForCausalLM.quantize", false]], "quantize() (tensorrt_llm.models.chatglmforcausallm class method)": [[87, "tensorrt_llm.models.ChatGLMForCausalLM.quantize", false]], "quantize() (tensorrt_llm.models.cogvlmforcausallm class method)": [[87, "tensorrt_llm.models.CogVLMForCausalLM.quantize", false]], "quantize() (tensorrt_llm.models.gemmaforcausallm class method)": [[87, "tensorrt_llm.models.GemmaForCausalLM.quantize", false]], "quantize() 
(tensorrt_llm.models.gptforcausallm class method)": [[87, "tensorrt_llm.models.GPTForCausalLM.quantize", false]], "quantize() (tensorrt_llm.models.llamaforcausallm class method)": [[87, "tensorrt_llm.models.LLaMAForCausalLM.quantize", false]], "quantize() (tensorrt_llm.models.pretrainedmodel class method)": [[87, "tensorrt_llm.models.PretrainedModel.quantize", false]], "quantize_and_export() (in module tensorrt_llm.quantization)": [[89, "tensorrt_llm.quantization.quantize_and_export", false]], "quantmode (class in tensorrt_llm.quantization)": [[89, "tensorrt_llm.quantization.QuantMode", false]], "quick_gelu() (in module tensorrt_llm.functional)": [[85, "tensorrt_llm.functional.quick_gelu", false]], "qwenforcausallmgenerationsession (class in tensorrt_llm.runtime)": [[90, "tensorrt_llm.runtime.QWenForCausalLMGenerationSession", false]], "rand() (in module tensorrt_llm.functional)": [[85, "tensorrt_llm.functional.rand", false]], "random_seed (tensorrt_llm.llmapi.calibconfig attribute)": [[73, "tensorrt_llm.llmapi.CalibConfig.random_seed", false]], "random_seed (tensorrt_llm.runtime.samplingconfig attribute)": [[90, "tensorrt_llm.runtime.SamplingConfig.random_seed", false]], "rank() (tensorrt_llm.functional.tensor method)": [[85, "tensorrt_llm.functional.Tensor.rank", false]], "rearrange() (in module tensorrt_llm.functional)": [[85, "tensorrt_llm.functional.rearrange", false]], "recurrentgemmaforcausallm (class in tensorrt_llm.models)": [[87, "tensorrt_llm.models.RecurrentGemmaForCausalLM", false]], "recv() (in module tensorrt_llm.functional)": [[85, "tensorrt_llm.functional.recv", false]], "redrafter_draft_len_per_beam (tensorrt_llm.runtime.modelconfig attribute)": [[90, "tensorrt_llm.runtime.ModelConfig.redrafter_draft_len_per_beam", false]], "redrafter_num_beams (tensorrt_llm.runtime.modelconfig attribute)": [[90, "tensorrt_llm.runtime.ModelConfig.redrafter_num_beams", false]], "redrafterforllamalm (class in tensorrt_llm.models)": [[87, "tensorrt_llm.models.ReDrafterForLLaMALM", false]], "redrafterforqwenlm (class in tensorrt_llm.models)": [[87, "tensorrt_llm.models.ReDrafterForQWenLM", false]], "reduce() (in module tensorrt_llm.functional)": [[85, "tensorrt_llm.functional.reduce", false]], "reduce_scatter() (in module tensorrt_llm.functional)": [[85, "tensorrt_llm.functional.reduce_scatter", false]], "regex (tensorrt_llm.llmapi.guideddecodingparams attribute)": [[73, "tensorrt_llm.llmapi.GuidedDecodingParams.regex", false]], "relative (tensorrt_llm.functional.positionembeddingtype attribute)": [[85, "tensorrt_llm.functional.PositionEmbeddingType.relative", false]], "relaxed_delta (tensorrt_llm.llmapi.mtpdecodingconfig attribute)": [[73, "tensorrt_llm.llmapi.MTPDecodingConfig.relaxed_delta", false]], "relaxed_topk (tensorrt_llm.llmapi.mtpdecodingconfig attribute)": [[73, "tensorrt_llm.llmapi.MTPDecodingConfig.relaxed_topk", false]], "release() (tensorrt_llm.models.pretrainedmodel method)": [[87, "tensorrt_llm.models.PretrainedModel.release", false]], "relu() (in module tensorrt_llm.functional)": [[85, "tensorrt_llm.functional.relu", false]], "remove_input_padding (tensorrt_llm.runtime.generationsession property)": [[90, "tensorrt_llm.runtime.GenerationSession.remove_input_padding", false]], "remove_input_padding (tensorrt_llm.runtime.modelconfig attribute)": [[90, "tensorrt_llm.runtime.ModelConfig.remove_input_padding", false]], "remove_input_padding (tensorrt_llm.runtime.modelrunner property)": [[90, "tensorrt_llm.runtime.ModelRunner.remove_input_padding", false]], "remove_input_padding 
(tensorrt_llm.runtime.modelrunnercpp property)": [[90, "tensorrt_llm.runtime.ModelRunnerCpp.remove_input_padding", false]], "reorder_kv_cache_for_beam_search() (tensorrt_llm.runtime.generationsession method)": [[90, "tensorrt_llm.runtime.GenerationSession.reorder_kv_cache_for_beam_search", false]], "repeat() (in module tensorrt_llm.functional)": [[85, "tensorrt_llm.functional.repeat", false]], "repeat() (tensorrt_llm.functional.tensor method)": [[85, "tensorrt_llm.functional.Tensor.repeat", false]], "repeat_interleave() (in module tensorrt_llm.functional)": [[85, "tensorrt_llm.functional.repeat_interleave", false]], "repetition_penalty (tensorrt_llm.llmapi.samplingparams attribute)": [[73, "tensorrt_llm.llmapi.SamplingParams.repetition_penalty", false]], "repetition_penalty (tensorrt_llm.runtime.samplingconfig attribute)": [[90, "tensorrt_llm.runtime.SamplingConfig.repetition_penalty", false]], "replace_all_uses_with() (tensorrt_llm.functional.tensor method)": [[85, "tensorrt_llm.functional.Tensor.replace_all_uses_with", false]], "request_id (tensorrt_llm.llmapi.requestoutput attribute)": [[73, "tensorrt_llm.llmapi.RequestOutput.request_id", false]], "request_perf_metrics (tensorrt_llm.llmapi.completionoutput attribute)": [[73, "tensorrt_llm.llmapi.CompletionOutput.request_perf_metrics", false]], "request_type (tensorrt_llm.llmapi.disaggregatedparams attribute)": [[73, "tensorrt_llm.llmapi.DisaggregatedParams.request_type", false]], "requesterror (class in tensorrt_llm.llmapi)": [[73, "tensorrt_llm.llmapi.RequestError", false]], "requestoutput (class in tensorrt_llm.llmapi)": [[73, "tensorrt_llm.llmapi.RequestOutput", false]], "residual_rms_norm (tensorrt_llm.functional.allreducefusionop attribute)": [[85, "tensorrt_llm.functional.AllReduceFusionOp.RESIDUAL_RMS_NORM", false]], "residual_rms_norm_out_quant_fp8 (tensorrt_llm.functional.allreducefusionop attribute)": [[85, "tensorrt_llm.functional.AllReduceFusionOp.RESIDUAL_RMS_NORM_OUT_QUANT_FP8", false]], "residual_rms_norm_out_quant_nvfp4 (tensorrt_llm.functional.allreducefusionop attribute)": [[85, "tensorrt_llm.functional.AllReduceFusionOp.RESIDUAL_RMS_NORM_OUT_QUANT_NVFP4", false]], "residual_rms_norm_quant_fp8 (tensorrt_llm.functional.allreducefusionop attribute)": [[85, "tensorrt_llm.functional.AllReduceFusionOp.RESIDUAL_RMS_NORM_QUANT_FP8", false]], "residual_rms_norm_quant_nvfp4 (tensorrt_llm.functional.allreducefusionop attribute)": [[85, "tensorrt_llm.functional.AllReduceFusionOp.RESIDUAL_RMS_NORM_QUANT_NVFP4", false]], "residual_rms_prepost_norm (tensorrt_llm.functional.allreducefusionop attribute)": [[85, "tensorrt_llm.functional.AllReduceFusionOp.RESIDUAL_RMS_PREPOST_NORM", false]], "return_context_logits (tensorrt_llm.llmapi.samplingparams attribute)": [[73, "tensorrt_llm.llmapi.SamplingParams.return_context_logits", false]], "return_dict (tensorrt_llm.runtime.samplingconfig attribute)": [[90, "tensorrt_llm.runtime.SamplingConfig.return_dict", false]], "return_encoder_output (tensorrt_llm.llmapi.samplingparams attribute)": [[73, "tensorrt_llm.llmapi.SamplingParams.return_encoder_output", false]], "return_generation_logits (tensorrt_llm.llmapi.samplingparams attribute)": [[73, "tensorrt_llm.llmapi.SamplingParams.return_generation_logits", false]], "return_perf_metrics (tensorrt_llm.llmapi.samplingparams attribute)": [[73, "tensorrt_llm.llmapi.SamplingParams.return_perf_metrics", false]], "rg_lru() (in module tensorrt_llm.functional)": [[85, "tensorrt_llm.functional.rg_lru", false]], "rms_norm() (in module 
tensorrt_llm.functional)": [[85, "tensorrt_llm.functional.rms_norm", false]], "rmsnorm (class in tensorrt_llm.layers.normalization)": [[86, "tensorrt_llm.layers.normalization.RmsNorm", false]], "rmsnorm (tensorrt_llm.functional.layernormtype attribute)": [[85, "tensorrt_llm.functional.LayerNormType.RmsNorm", false]], "rnn_conv_dim_size (tensorrt_llm.runtime.generationsession property)": [[90, "tensorrt_llm.runtime.GenerationSession.rnn_conv_dim_size", false]], "rnn_conv_dim_size (tensorrt_llm.runtime.modelconfig attribute)": [[90, "tensorrt_llm.runtime.ModelConfig.rnn_conv_dim_size", false]], "rnn_head_size (tensorrt_llm.runtime.generationsession property)": [[90, "tensorrt_llm.runtime.GenerationSession.rnn_head_size", false]], "rnn_head_size (tensorrt_llm.runtime.modelconfig attribute)": [[90, "tensorrt_llm.runtime.ModelConfig.rnn_head_size", false]], "rnn_hidden_size (tensorrt_llm.runtime.generationsession property)": [[90, "tensorrt_llm.runtime.GenerationSession.rnn_hidden_size", false]], "rnn_hidden_size (tensorrt_llm.runtime.modelconfig attribute)": [[90, "tensorrt_llm.runtime.ModelConfig.rnn_hidden_size", false]], "robertaforquestionanswering (in module tensorrt_llm.models)": [[87, "tensorrt_llm.models.RobertaForQuestionAnswering", false]], "robertaforsequenceclassification (in module tensorrt_llm.models)": [[87, "tensorrt_llm.models.RobertaForSequenceClassification", false]], "robertamodel (in module tensorrt_llm.models)": [[87, "tensorrt_llm.models.RobertaModel", false]], "rope_gpt_neox (tensorrt_llm.functional.positionembeddingtype attribute)": [[85, "tensorrt_llm.functional.PositionEmbeddingType.rope_gpt_neox", false]], "rope_gptj (tensorrt_llm.functional.positionembeddingtype attribute)": [[85, "tensorrt_llm.functional.PositionEmbeddingType.rope_gptj", false]], "ropeembeddingutils (class in tensorrt_llm.functional)": [[85, "tensorrt_llm.functional.RopeEmbeddingUtils", false]], "rotaryscalingtype (class in tensorrt_llm.functional)": [[85, "tensorrt_llm.functional.RotaryScalingType", false]], "rotate_every_two() (tensorrt_llm.functional.ropeembeddingutils static method)": [[85, "tensorrt_llm.functional.RopeEmbeddingUtils.rotate_every_two", false]], "rotate_half() (tensorrt_llm.functional.ropeembeddingutils static method)": [[85, "tensorrt_llm.functional.RopeEmbeddingUtils.rotate_half", false]], "round() (in module tensorrt_llm.functional)": [[85, "tensorrt_llm.functional.round", false]], "rowlinear (class in tensorrt_llm.layers.linear)": [[86, "tensorrt_llm.layers.linear.RowLinear", false]], "run() (tensorrt_llm.runtime.multimodalmodelrunner method)": [[90, "tensorrt_llm.runtime.MultimodalModelRunner.run", false]], "run() (tensorrt_llm.runtime.session method)": [[90, "tensorrt_llm.runtime.Session.run", false]], "runtime (tensorrt_llm.runtime.generationsession attribute)": [[90, "tensorrt_llm.runtime.GenerationSession.runtime", false]], "runtime (tensorrt_llm.runtime.session property)": [[90, "tensorrt_llm.runtime.Session.runtime", false]], "samplingconfig (class in tensorrt_llm.runtime)": [[90, "tensorrt_llm.runtime.SamplingConfig", false]], "samplingparams (class in tensorrt_llm.llmapi)": [[73, "tensorrt_llm.llmapi.SamplingParams", false]], "save_checkpoint() (tensorrt_llm.models.llavanextvisionwrapper method)": [[87, "tensorrt_llm.models.LlavaNextVisionWrapper.save_checkpoint", false]], "save_checkpoint() (tensorrt_llm.models.pretrainedmodel method)": [[87, "tensorrt_llm.models.PretrainedModel.save_checkpoint", false]], "scatter() (in module tensorrt_llm.functional)": [[85, 
"tensorrt_llm.functional.scatter", false]], "scatter_nd() (in module tensorrt_llm.functional)": [[85, "tensorrt_llm.functional.scatter_nd", false]], "schedulerconfig (class in tensorrt_llm.llmapi)": [[73, "tensorrt_llm.llmapi.SchedulerConfig", false]], "sd35adalayernormzerox (class in tensorrt_llm.layers.normalization)": [[86, "tensorrt_llm.layers.normalization.SD35AdaLayerNormZeroX", false]], "sd3patchembed (class in tensorrt_llm.layers.embedding)": [[86, "tensorrt_llm.layers.embedding.SD3PatchEmbed", false]], "sd3transformer2dmodel (class in tensorrt_llm.models)": [[87, "tensorrt_llm.models.SD3Transformer2DModel", false]], "secondary_offload_min_priority (tensorrt_llm.llmapi.kvcacheconfig attribute)": [[73, "tensorrt_llm.llmapi.KvCacheConfig.secondary_offload_min_priority", false]], "seed (tensorrt_llm.llmapi.samplingparams attribute)": [[73, "tensorrt_llm.llmapi.SamplingParams.seed", false]], "select() (in module tensorrt_llm.functional)": [[85, "tensorrt_llm.functional.select", false]], "select() (tensorrt_llm.functional.tensor method)": [[85, "tensorrt_llm.functional.Tensor.select", false]], "selective_scan() (in module tensorrt_llm.functional)": [[85, "tensorrt_llm.functional.selective_scan", false]], "send() (in module tensorrt_llm.functional)": [[85, "tensorrt_llm.functional.send", false]], "serialize_engine() (tensorrt_llm.runtime.modelrunner method)": [[90, "tensorrt_llm.runtime.ModelRunner.serialize_engine", false]], "session (class in tensorrt_llm.runtime)": [[90, "tensorrt_llm.runtime.Session", false]], "set_attn_processor() (tensorrt_llm.models.sd3transformer2dmodel method)": [[87, "tensorrt_llm.models.SD3Transformer2DModel.set_attn_processor", false]], "set_from_optional (c macro)": [[1, "c.SET_FROM_OPTIONAL", false]], "set_if_not_exist() (tensorrt_llm.models.pretrainedconfig method)": [[87, "tensorrt_llm.models.PretrainedConfig.set_if_not_exist", false]], "set_rank() (tensorrt_llm.models.pretrainedconfig method)": [[87, "tensorrt_llm.models.PretrainedConfig.set_rank", false]], "set_rel_attn_table() (tensorrt_llm.layers.attention.attention method)": [[86, "tensorrt_llm.layers.attention.Attention.set_rel_attn_table", false]], "set_shapes() (tensorrt_llm.runtime.session method)": [[90, "tensorrt_llm.runtime.Session.set_shapes", false]], "setup() (tensorrt_llm.runtime.generationsession method)": [[90, "tensorrt_llm.runtime.GenerationSession.setup", false]], "setup_embedding_parallel_mode() (tensorrt_llm.llmapi.trtllmargs method)": [[73, "tensorrt_llm.llmapi.TrtLlmArgs.setup_embedding_parallel_mode", false]], "setup_fake_prompts() (tensorrt_llm.runtime.multimodalmodelrunner method)": [[90, "tensorrt_llm.runtime.MultimodalModelRunner.setup_fake_prompts", false]], "setup_fake_prompts_qwen2vl() (tensorrt_llm.runtime.multimodalmodelrunner method)": [[90, "tensorrt_llm.runtime.MultimodalModelRunner.setup_fake_prompts_qwen2vl", false]], "setup_fake_prompts_vila() (tensorrt_llm.runtime.multimodalmodelrunner method)": [[90, "tensorrt_llm.runtime.MultimodalModelRunner.setup_fake_prompts_vila", false]], "setup_inputs() (tensorrt_llm.runtime.multimodalmodelrunner method)": [[90, "tensorrt_llm.runtime.MultimodalModelRunner.setup_inputs", false]], "shape (tensorrt_llm.functional.tensor property)": [[85, "tensorrt_llm.functional.Tensor.shape", false]], "shape (tensorrt_llm.runtime.tensorinfo attribute)": [[90, "tensorrt_llm.runtime.TensorInfo.shape", false]], "shape() (in module tensorrt_llm.functional)": [[85, "tensorrt_llm.functional.shape", false]], "shutdown() (tensorrt_llm.llmapi.llm 
method)": [[73, "tensorrt_llm.llmapi.LLM.shutdown", false]], "shutdown() (tensorrt_llm.llmapi.mpicommsession method)": [[73, "tensorrt_llm.llmapi.MpiCommSession.shutdown", false]], "sidestreamidtype (class in tensorrt_llm.functional)": [[85, "tensorrt_llm.functional.SideStreamIDType", false]], "sigmoid() (in module tensorrt_llm.functional)": [[85, "tensorrt_llm.functional.sigmoid", false]], "silu() (in module tensorrt_llm.functional)": [[85, "tensorrt_llm.functional.silu", false]], "sin() (in module tensorrt_llm.functional)": [[85, "tensorrt_llm.functional.sin", false]], "sink_token_length (tensorrt_llm.llmapi.kvcacheconfig attribute)": [[73, "tensorrt_llm.llmapi.KvCacheConfig.sink_token_length", false]], "sink_token_length (tensorrt_llm.runtime.samplingconfig attribute)": [[90, "tensorrt_llm.runtime.SamplingConfig.sink_token_length", false]], "size (tensorrt_llm.functional.sliceinputtype attribute)": [[85, "tensorrt_llm.functional.SliceInputType.size", false]], "size() (tensorrt_llm.functional.tensor method)": [[85, "tensorrt_llm.functional.Tensor.size", false]], "skip_cross_attn_blocks (tensorrt_llm.runtime.modelconfig attribute)": [[90, "tensorrt_llm.runtime.ModelConfig.skip_cross_attn_blocks", false]], "skip_cross_kv (tensorrt_llm.runtime.modelconfig attribute)": [[90, "tensorrt_llm.runtime.ModelConfig.skip_cross_kv", false]], "skip_special_tokens (tensorrt_llm.llmapi.samplingparams attribute)": [[73, "tensorrt_llm.llmapi.SamplingParams.skip_special_tokens", false]], "slice() (in module tensorrt_llm.functional)": [[85, "tensorrt_llm.functional.slice", false]], "sliceinputtype (class in tensorrt_llm.functional)": [[85, "tensorrt_llm.functional.SliceInputType", false]], "sliding_window_causal (tensorrt_llm.functional.attentionmasktype attribute)": [[85, "tensorrt_llm.functional.AttentionMaskType.sliding_window_causal", false]], "smoothquant_val (tensorrt_llm.llmapi.quantconfig attribute)": [[73, "tensorrt_llm.llmapi.QuantConfig.smoothquant_val", false]], "softmax() (in module tensorrt_llm.functional)": [[85, "tensorrt_llm.functional.softmax", false]], "softplus() (in module tensorrt_llm.functional)": [[85, "tensorrt_llm.functional.softplus", false]], "spaces_between_special_tokens (tensorrt_llm.llmapi.samplingparams attribute)": [[73, "tensorrt_llm.llmapi.SamplingParams.spaces_between_special_tokens", false]], "specdecodingparams (class in tensorrt_llm.layers.attention)": [[86, "tensorrt_llm.layers.attention.SpecDecodingParams", false]], "speculative_decoding_mode (tensorrt_llm.llmapi.buildconfig attribute)": [[73, "tensorrt_llm.llmapi.BuildConfig.speculative_decoding_mode", false]], "speculativedecodingmode (class in tensorrt_llm.models)": [[87, "tensorrt_llm.models.SpeculativeDecodingMode", false]], "split() (in module tensorrt_llm.functional)": [[85, "tensorrt_llm.functional.split", false]], "split() (tensorrt_llm.functional.tensor method)": [[85, "tensorrt_llm.functional.Tensor.split", false]], "split_prompt_by_images() (tensorrt_llm.runtime.multimodalmodelrunner method)": [[90, "tensorrt_llm.runtime.MultimodalModelRunner.split_prompt_by_images", false]], "sqrt() (in module tensorrt_llm.functional)": [[85, "tensorrt_llm.functional.sqrt", false]], "sqrt() (tensorrt_llm.functional.tensor method)": [[85, "tensorrt_llm.functional.Tensor.sqrt", false]], "squared_relu() (in module tensorrt_llm.functional)": [[85, "tensorrt_llm.functional.squared_relu", false]], "squeeze() (in module tensorrt_llm.functional)": [[85, "tensorrt_llm.functional.squeeze", false]], "squeeze() 
(tensorrt_llm.functional.tensor method)": [[85, "tensorrt_llm.functional.Tensor.squeeze", false]], "squeeze() (tensorrt_llm.runtime.tensorinfo method)": [[90, "tensorrt_llm.runtime.TensorInfo.squeeze", false]], "stack() (in module tensorrt_llm.functional)": [[85, "tensorrt_llm.functional.stack", false]], "start (tensorrt_llm.functional.sliceinputtype attribute)": [[85, "tensorrt_llm.functional.SliceInputType.start", false]], "state_dtype (tensorrt_llm.runtime.generationsession property)": [[90, "tensorrt_llm.runtime.GenerationSession.state_dtype", false]], "state_dtype (tensorrt_llm.runtime.modelconfig attribute)": [[90, "tensorrt_llm.runtime.ModelConfig.state_dtype", false]], "state_size (tensorrt_llm.runtime.generationsession property)": [[90, "tensorrt_llm.runtime.GenerationSession.state_size", false]], "state_size (tensorrt_llm.runtime.modelconfig attribute)": [[90, "tensorrt_llm.runtime.ModelConfig.state_size", false]], "static (tensorrt_llm.llmapi.batchingtype attribute)": [[73, "tensorrt_llm.llmapi.BatchingType.STATIC", false]], "static_batch (tensorrt_llm.llmapi.capacityschedulerpolicy attribute)": [[73, "tensorrt_llm.llmapi.CapacitySchedulerPolicy.STATIC_BATCH", false]], "step() (tensorrt_llm.runtime.kvcachemanager method)": [[90, "tensorrt_llm.runtime.KVCacheManager.step", false]], "stop (tensorrt_llm.llmapi.samplingparams attribute)": [[73, "tensorrt_llm.llmapi.SamplingParams.stop", false]], "stop_reason (tensorrt_llm.llmapi.completionoutput attribute)": [[73, "tensorrt_llm.llmapi.CompletionOutput.stop_reason", false]], "stop_token_ids (tensorrt_llm.llmapi.samplingparams attribute)": [[73, "tensorrt_llm.llmapi.SamplingParams.stop_token_ids", false]], "stop_words_list (tensorrt_llm.runtime.samplingconfig attribute)": [[90, "tensorrt_llm.runtime.SamplingConfig.stop_words_list", false]], "stoppingcriteria (class in tensorrt_llm.runtime)": [[90, "tensorrt_llm.runtime.StoppingCriteria", false]], "stoppingcriterialist (class in tensorrt_llm.runtime)": [[90, "tensorrt_llm.runtime.StoppingCriteriaList", false]], "stream_interval (tensorrt_llm.llmapi.torchllmargs attribute)": [[73, "tensorrt_llm.llmapi.TorchLlmArgs.stream_interval", false]], "stride (tensorrt_llm.functional.sliceinputtype attribute)": [[85, "tensorrt_llm.functional.SliceInputType.stride", false]], "strongly_typed (tensorrt_llm.llmapi.buildconfig attribute)": [[73, "tensorrt_llm.llmapi.BuildConfig.strongly_typed", false]], "structural_tag (tensorrt_llm.llmapi.guideddecodingparams attribute)": [[73, "tensorrt_llm.llmapi.GuidedDecodingParams.structural_tag", false]], "sub() (in module tensorrt_llm.functional)": [[85, "tensorrt_llm.functional.sub", false]], "submit() (tensorrt_llm.llmapi.mpicommsession method)": [[73, "tensorrt_llm.llmapi.MpiCommSession.submit", false]], "submit_sync() (tensorrt_llm.llmapi.mpicommsession method)": [[73, "tensorrt_llm.llmapi.MpiCommSession.submit_sync", false]], "sum() (in module tensorrt_llm.functional)": [[85, "tensorrt_llm.functional.sum", false]], "swiglu() (in module tensorrt_llm.functional)": [[85, "tensorrt_llm.functional.swiglu", false]], "tanh() (in module tensorrt_llm.functional)": [[85, "tensorrt_llm.functional.tanh", false]], "temperature (tensorrt_llm.llmapi.samplingparams attribute)": [[73, "tensorrt_llm.llmapi.SamplingParams.temperature", false]], "temperature (tensorrt_llm.runtime.samplingconfig attribute)": [[90, "tensorrt_llm.runtime.SamplingConfig.temperature", false]], "tensor (class in tensorrt_llm.functional)": [[85, "tensorrt_llm.functional.Tensor", false]], "tensorinfo 
(class in tensorrt_llm.runtime)": [[90, "tensorrt_llm.runtime.TensorInfo", false]], "tensorrt_llm": [[85, "module-tensorrt_llm", false], [86, "module-tensorrt_llm", false], [87, "module-tensorrt_llm", false], [88, "module-tensorrt_llm", false], [89, "module-tensorrt_llm", false], [90, "module-tensorrt_llm", false]], "tensorrt_llm (c++ type)": [[0, "_CPPv412tensorrt_llm", false], [1, "_CPPv412tensorrt_llm", false]], "tensorrt_llm.functional": [[85, "module-tensorrt_llm.functional", false]], "tensorrt_llm.layers.activation": [[86, "module-tensorrt_llm.layers.activation", false]], "tensorrt_llm.layers.attention": [[86, "module-tensorrt_llm.layers.attention", false]], "tensorrt_llm.layers.cast": [[86, "module-tensorrt_llm.layers.cast", false]], "tensorrt_llm.layers.conv": [[86, "module-tensorrt_llm.layers.conv", false]], "tensorrt_llm.layers.embedding": [[86, "module-tensorrt_llm.layers.embedding", false]], "tensorrt_llm.layers.linear": [[86, "module-tensorrt_llm.layers.linear", false]], "tensorrt_llm.layers.mlp": [[86, "module-tensorrt_llm.layers.mlp", false]], "tensorrt_llm.layers.normalization": [[86, "module-tensorrt_llm.layers.normalization", false]], "tensorrt_llm.layers.pooling": [[86, "module-tensorrt_llm.layers.pooling", false]], "tensorrt_llm.models": [[87, "module-tensorrt_llm.models", false]], "tensorrt_llm.plugin": [[88, "module-tensorrt_llm.plugin", false]], "tensorrt_llm.quantization": [[89, "module-tensorrt_llm.quantization", false]], "tensorrt_llm.runtime": [[90, "module-tensorrt_llm.runtime", false]], "tensorrt_llm::batch_manager (c++ type)": [[0, "_CPPv4N12tensorrt_llm13batch_managerE", false], [1, "_CPPv4N12tensorrt_llm13batch_managerE", false]], "tensorrt_llm::batch_manager::kv_cache_manager (c++ type)": [[0, "_CPPv4N12tensorrt_llm13batch_manager16kv_cache_managerE", false]], "tensorrt_llm::executor (c++ type)": [[0, "_CPPv4N12tensorrt_llm8executorE", false]], "tensorrt_llm::executor::additionalmodeloutput (c++ class)": [[0, "_CPPv4N12tensorrt_llm8executor21AdditionalModelOutputE", false]], "tensorrt_llm::executor::additionalmodeloutput::additionalmodeloutput (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor21AdditionalModelOutput21AdditionalModelOutputENSt6stringEb", false]], "tensorrt_llm::executor::additionalmodeloutput::gathercontext (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor21AdditionalModelOutput13gatherContextE", false]], "tensorrt_llm::executor::additionalmodeloutput::name (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor21AdditionalModelOutput4nameE", false]], "tensorrt_llm::executor::additionalmodeloutput::operator== (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor21AdditionalModelOutputeqERK21AdditionalModelOutput", false]], "tensorrt_llm::executor::additionaloutput (c++ struct)": [[0, "_CPPv4N12tensorrt_llm8executor16AdditionalOutputE", false]], "tensorrt_llm::executor::additionaloutput::additionaloutput (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor16AdditionalOutput16AdditionalOutputENSt6stringE6Tensor", false], [0, "_CPPv4N12tensorrt_llm8executor16AdditionalOutput16AdditionalOutputERK16AdditionalOutput", false], [0, "_CPPv4N12tensorrt_llm8executor16AdditionalOutput16AdditionalOutputERR16AdditionalOutput", false]], "tensorrt_llm::executor::additionaloutput::name (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor16AdditionalOutput4nameE", false]], "tensorrt_llm::executor::additionaloutput::operator= (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor16AdditionalOutputaSERK16AdditionalOutput", false], [0, 
"_CPPv4N12tensorrt_llm8executor16AdditionalOutputaSERR16AdditionalOutput", false]], "tensorrt_llm::executor::additionaloutput::output (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor16AdditionalOutput6outputE", false]], "tensorrt_llm::executor::additionaloutput::~additionaloutput (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor16AdditionalOutputD0Ev", false]], "tensorrt_llm::executor::batchingtype (c++ enum)": [[0, "_CPPv4N12tensorrt_llm8executor12BatchingTypeE", false]], "tensorrt_llm::executor::batchingtype::kinflight (c++ enumerator)": [[0, "_CPPv4N12tensorrt_llm8executor12BatchingType9kINFLIGHTE", false]], "tensorrt_llm::executor::batchingtype::kstatic (c++ enumerator)": [[0, "_CPPv4N12tensorrt_llm8executor12BatchingType7kSTATICE", false]], "tensorrt_llm::executor::beamtokens (c++ type)": [[0, "_CPPv4N12tensorrt_llm8executor10BeamTokensE", false]], "tensorrt_llm::executor::bufferview (c++ type)": [[0, "_CPPv4N12tensorrt_llm8executor10BufferViewE", false]], "tensorrt_llm::executor::cachetransceiverconfig (c++ class)": [[0, "_CPPv4N12tensorrt_llm8executor22CacheTransceiverConfigE", false]], "tensorrt_llm::executor::cachetransceiverconfig::cachetransceiverconfig (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor22CacheTransceiverConfig22CacheTransceiverConfigENSt8optionalI6size_tEE", false]], "tensorrt_llm::executor::cachetransceiverconfig::getmaxnumtokens (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor22CacheTransceiverConfig15getMaxNumTokensEv", false]], "tensorrt_llm::executor::cachetransceiverconfig::mmaxnumtokens (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor22CacheTransceiverConfig13mMaxNumTokensE", false]], "tensorrt_llm::executor::cachetransceiverconfig::operator== (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor22CacheTransceiverConfigeqERK22CacheTransceiverConfig", false]], "tensorrt_llm::executor::cachetransceiverconfig::setmaxnumtokens (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor22CacheTransceiverConfig15setMaxNumTokensE6size_t", false]], "tensorrt_llm::executor::capacityschedulerpolicy (c++ enum)": [[0, "_CPPv4N12tensorrt_llm8executor23CapacitySchedulerPolicyE", false]], "tensorrt_llm::executor::capacityschedulerpolicy::kguaranteed_no_evict (c++ enumerator)": [[0, "_CPPv4N12tensorrt_llm8executor23CapacitySchedulerPolicy20kGUARANTEED_NO_EVICTE", false]], "tensorrt_llm::executor::capacityschedulerpolicy::kmax_utilization (c++ enumerator)": [[0, "_CPPv4N12tensorrt_llm8executor23CapacitySchedulerPolicy16kMAX_UTILIZATIONE", false]], "tensorrt_llm::executor::capacityschedulerpolicy::kstatic_batch (c++ enumerator)": [[0, "_CPPv4N12tensorrt_llm8executor23CapacitySchedulerPolicy13kSTATIC_BATCHE", false]], "tensorrt_llm::executor::communicationmode (c++ enum)": [[0, "_CPPv4N12tensorrt_llm8executor17CommunicationModeE", false]], "tensorrt_llm::executor::communicationmode::kleader (c++ enumerator)": [[0, "_CPPv4N12tensorrt_llm8executor17CommunicationMode7kLEADERE", false]], "tensorrt_llm::executor::communicationmode::korchestrator (c++ enumerator)": [[0, "_CPPv4N12tensorrt_llm8executor17CommunicationMode13kORCHESTRATORE", false]], "tensorrt_llm::executor::communicationtype (c++ enum)": [[0, "_CPPv4N12tensorrt_llm8executor17CommunicationTypeE", false]], "tensorrt_llm::executor::communicationtype::kmpi (c++ enumerator)": [[0, "_CPPv4N12tensorrt_llm8executor17CommunicationType4kMPIE", false]], "tensorrt_llm::executor::contextchunkingpolicy (c++ enum)": [[0, "_CPPv4N12tensorrt_llm8executor21ContextChunkingPolicyE", false]], 
"tensorrt_llm::executor::contextchunkingpolicy::kequal_progress (c++ enumerator)": [[0, "_CPPv4N12tensorrt_llm8executor21ContextChunkingPolicy15kEQUAL_PROGRESSE", false]], "tensorrt_llm::executor::contextchunkingpolicy::kfirst_come_first_served (c++ enumerator)": [[0, "_CPPv4N12tensorrt_llm8executor21ContextChunkingPolicy24kFIRST_COME_FIRST_SERVEDE", false]], "tensorrt_llm::executor::contextphaseparams (c++ class)": [[0, "_CPPv4N12tensorrt_llm8executor18ContextPhaseParamsE", false]], "tensorrt_llm::executor::contextphaseparams::contextphaseparams (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor18ContextPhaseParams18ContextPhaseParamsE9VecTokens13RequestIdTypeNSt8optionalI9VecTokensEE", false], [0, "_CPPv4N12tensorrt_llm8executor18ContextPhaseParams18ContextPhaseParamsE9VecTokens13RequestIdTypePvNSt8optionalI9VecTokensEE", false], [0, "_CPPv4N12tensorrt_llm8executor18ContextPhaseParams18ContextPhaseParamsE9VecTokens13RequestIdTypeRKNSt6vectorIcEENSt8optionalI9VecTokensEE", false], [0, "_CPPv4N12tensorrt_llm8executor18ContextPhaseParams18ContextPhaseParamsERK18ContextPhaseParams", false], [0, "_CPPv4N12tensorrt_llm8executor18ContextPhaseParams18ContextPhaseParamsERR18ContextPhaseParams", false]], "tensorrt_llm::executor::contextphaseparams::deleter (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor18ContextPhaseParams7deleterEPKv", false]], "tensorrt_llm::executor::contextphaseparams::getdrafttokens (c++ function)": [[0, "_CPPv4NKR12tensorrt_llm8executor18ContextPhaseParams14getDraftTokensEv", false]], "tensorrt_llm::executor::contextphaseparams::getfirstgentokens (c++ function)": [[0, "_CPPv4NKR12tensorrt_llm8executor18ContextPhaseParams17getFirstGenTokensEv", false]], "tensorrt_llm::executor::contextphaseparams::getreqid (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor18ContextPhaseParams8getReqIdEv", false]], "tensorrt_llm::executor::contextphaseparams::getserializedstate (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor18ContextPhaseParams18getSerializedStateEv", false]], "tensorrt_llm::executor::contextphaseparams::getstate (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor18ContextPhaseParams8getStateEv", false], [0, "_CPPv4NK12tensorrt_llm8executor18ContextPhaseParams8getStateEv", false]], "tensorrt_llm::executor::contextphaseparams::mdrafttokens (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor18ContextPhaseParams12mDraftTokensE", false]], "tensorrt_llm::executor::contextphaseparams::mfirstgentokens (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor18ContextPhaseParams15mFirstGenTokensE", false]], "tensorrt_llm::executor::contextphaseparams::mreqid (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor18ContextPhaseParams6mReqIdE", false]], "tensorrt_llm::executor::contextphaseparams::mstate (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor18ContextPhaseParams6mStateE", false]], "tensorrt_llm::executor::contextphaseparams::operator= (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor18ContextPhaseParamsaSERK18ContextPhaseParams", false], [0, "_CPPv4N12tensorrt_llm8executor18ContextPhaseParamsaSERR18ContextPhaseParams", false]], "tensorrt_llm::executor::contextphaseparams::operator== (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor18ContextPhaseParamseqERK18ContextPhaseParams", false]], "tensorrt_llm::executor::contextphaseparams::popfirstgentokens (c++ function)": [[0, "_CPPv4NO12tensorrt_llm8executor18ContextPhaseParams17popFirstGenTokensEv", false]], "tensorrt_llm::executor::contextphaseparams::releasestate (c++ function)": [[0, 
"_CPPv4N12tensorrt_llm8executor18ContextPhaseParams12releaseStateEv", false]], "tensorrt_llm::executor::contextphaseparams::requestidtype (c++ type)": [[0, "_CPPv4N12tensorrt_llm8executor18ContextPhaseParams13RequestIdTypeE", false]], "tensorrt_llm::executor::contextphaseparams::stateptr (c++ type)": [[0, "_CPPv4N12tensorrt_llm8executor18ContextPhaseParams8StatePtrE", false]], "tensorrt_llm::executor::contextphaseparams::~contextphaseparams (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor18ContextPhaseParamsD0Ev", false]], "tensorrt_llm::executor::datatransceiverstate (c++ class)": [[0, "_CPPv4N12tensorrt_llm8executor20DataTransceiverStateE", false]], "tensorrt_llm::executor::datatransceiverstate::datatransceiverstate (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor20DataTransceiverState20DataTransceiverStateEN8kv_cache10CacheStateEN8kv_cache9CommStateE", false], [0, "_CPPv4N12tensorrt_llm8executor20DataTransceiverState20DataTransceiverStateEv", false]], "tensorrt_llm::executor::datatransceiverstate::getcachestate (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor20DataTransceiverState13getCacheStateEv", false]], "tensorrt_llm::executor::datatransceiverstate::getcommstate (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor20DataTransceiverState12getCommStateEv", false]], "tensorrt_llm::executor::datatransceiverstate::mcachestate (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor20DataTransceiverState11mCacheStateE", false]], "tensorrt_llm::executor::datatransceiverstate::mcommstate (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor20DataTransceiverState10mCommStateE", false]], "tensorrt_llm::executor::datatransceiverstate::operator== (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor20DataTransceiverStateeqERK20DataTransceiverState", false]], "tensorrt_llm::executor::datatransceiverstate::setcachestate (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor20DataTransceiverState13setCacheStateEN8kv_cache10CacheStateE", false]], "tensorrt_llm::executor::datatransceiverstate::setcommstate (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor20DataTransceiverState12setCommStateEN8kv_cache9CommStateE", false]], "tensorrt_llm::executor::datatransceiverstate::tostring (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor20DataTransceiverState8toStringEv", false]], "tensorrt_llm::executor::datatype (c++ enum)": [[0, "_CPPv4N12tensorrt_llm8executor8DataTypeE", false]], "tensorrt_llm::executor::datatype::kbf16 (c++ enumerator)": [[0, "_CPPv4N12tensorrt_llm8executor8DataType5kBF16E", false]], "tensorrt_llm::executor::datatype::kbool (c++ enumerator)": [[0, "_CPPv4N12tensorrt_llm8executor8DataType5kBOOLE", false]], "tensorrt_llm::executor::datatype::kfp16 (c++ enumerator)": [[0, "_CPPv4N12tensorrt_llm8executor8DataType5kFP16E", false]], "tensorrt_llm::executor::datatype::kfp32 (c++ enumerator)": [[0, "_CPPv4N12tensorrt_llm8executor8DataType5kFP32E", false]], "tensorrt_llm::executor::datatype::kfp8 (c++ enumerator)": [[0, "_CPPv4N12tensorrt_llm8executor8DataType4kFP8E", false]], "tensorrt_llm::executor::datatype::kint32 (c++ enumerator)": [[0, "_CPPv4N12tensorrt_llm8executor8DataType6kINT32E", false]], "tensorrt_llm::executor::datatype::kint64 (c++ enumerator)": [[0, "_CPPv4N12tensorrt_llm8executor8DataType6kINT64E", false]], "tensorrt_llm::executor::datatype::kint8 (c++ enumerator)": [[0, "_CPPv4N12tensorrt_llm8executor8DataType5kINT8E", false]], "tensorrt_llm::executor::datatype::kuint8 (c++ enumerator)": [[0, "_CPPv4N12tensorrt_llm8executor8DataType6kUINT8E", false]], 
"tensorrt_llm::executor::datatype::kunknown (c++ enumerator)": [[0, "_CPPv4N12tensorrt_llm8executor8DataType8kUNKNOWNE", false]], "tensorrt_llm::executor::debugconfig (c++ class)": [[0, "_CPPv4N12tensorrt_llm8executor11DebugConfigE", false]], "tensorrt_llm::executor::debugconfig::debugconfig (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor11DebugConfig11DebugConfigEbb9StringVec10SizeType32", false]], "tensorrt_llm::executor::debugconfig::getdebuginputtensors (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor11DebugConfig20getDebugInputTensorsEv", false]], "tensorrt_llm::executor::debugconfig::getdebugoutputtensors (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor11DebugConfig21getDebugOutputTensorsEv", false]], "tensorrt_llm::executor::debugconfig::getdebugtensornames (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor11DebugConfig19getDebugTensorNamesEv", false]], "tensorrt_llm::executor::debugconfig::getdebugtensorsmaxiterations (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor11DebugConfig28getDebugTensorsMaxIterationsEv", false]], "tensorrt_llm::executor::debugconfig::mdebuginputtensors (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor11DebugConfig18mDebugInputTensorsE", false]], "tensorrt_llm::executor::debugconfig::mdebugoutputtensors (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor11DebugConfig19mDebugOutputTensorsE", false]], "tensorrt_llm::executor::debugconfig::mdebugtensornames (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor11DebugConfig17mDebugTensorNamesE", false]], "tensorrt_llm::executor::debugconfig::mdebugtensorsmaxiterations (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor11DebugConfig26mDebugTensorsMaxIterationsE", false]], "tensorrt_llm::executor::debugconfig::operator== (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor11DebugConfigeqERK11DebugConfig", false]], "tensorrt_llm::executor::debugconfig::setdebuginputtensors (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor11DebugConfig20setDebugInputTensorsEb", false]], "tensorrt_llm::executor::debugconfig::setdebugoutputtensors (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor11DebugConfig21setDebugOutputTensorsEb", false]], "tensorrt_llm::executor::debugconfig::setdebugtensornames (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor11DebugConfig19setDebugTensorNamesERK9StringVec", false]], "tensorrt_llm::executor::debugconfig::setdebugtensorsmaxiterations (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor11DebugConfig28setDebugTensorsMaxIterationsE10SizeType32", false]], "tensorrt_llm::executor::debugconfig::stringvec (c++ type)": [[0, "_CPPv4N12tensorrt_llm8executor11DebugConfig9StringVecE", false]], "tensorrt_llm::executor::debugtensorsperiteration (c++ struct)": [[0, "_CPPv4N12tensorrt_llm8executor24DebugTensorsPerIterationE", false]], "tensorrt_llm::executor::debugtensorsperiteration::debugtensors (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor24DebugTensorsPerIteration12debugTensorsE", false]], "tensorrt_llm::executor::debugtensorsperiteration::iter (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor24DebugTensorsPerIteration4iterE", false]], "tensorrt_llm::executor::decodingconfig (c++ class)": [[0, "_CPPv4N12tensorrt_llm8executor14DecodingConfigE", false]], "tensorrt_llm::executor::decodingconfig::decodingconfig (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor14DecodingConfig14DecodingConfigENSt8optionalI12DecodingModeEENSt8optionalI23LookaheadDecodingConfigEENSt8optionalI13MedusaChoicesEENSt8optionalI11EagleConfigEE", false]], 
"tensorrt_llm::executor::decodingconfig::enableseamlesslookaheaddecoding (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor14DecodingConfig31enableSeamlessLookaheadDecodingEv", false]], "tensorrt_llm::executor::decodingconfig::getdecodingmode (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor14DecodingConfig15getDecodingModeEv", false]], "tensorrt_llm::executor::decodingconfig::geteagleconfig (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor14DecodingConfig14getEagleConfigEv", false]], "tensorrt_llm::executor::decodingconfig::getlookaheaddecodingconfig (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor14DecodingConfig26getLookaheadDecodingConfigEv", false]], "tensorrt_llm::executor::decodingconfig::getlookaheaddecodingmaxnumrequest (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor14DecodingConfig33getLookaheadDecodingMaxNumRequestEv", false]], "tensorrt_llm::executor::decodingconfig::getmedusachoices (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor14DecodingConfig16getMedusaChoicesEv", false]], "tensorrt_llm::executor::decodingconfig::mdecodingmode (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor14DecodingConfig13mDecodingModeE", false]], "tensorrt_llm::executor::decodingconfig::meagleconfig (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor14DecodingConfig12mEagleConfigE", false]], "tensorrt_llm::executor::decodingconfig::mlookaheaddecodingconfig (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor14DecodingConfig24mLookaheadDecodingConfigE", false]], "tensorrt_llm::executor::decodingconfig::mlookaheaddecodingmaxnumrequest (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor14DecodingConfig31mLookaheadDecodingMaxNumRequestE", false]], "tensorrt_llm::executor::decodingconfig::mmedusachoices (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor14DecodingConfig14mMedusaChoicesE", false]], "tensorrt_llm::executor::decodingconfig::operator== (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor14DecodingConfigeqERK14DecodingConfig", false]], "tensorrt_llm::executor::decodingconfig::setdecodingmode (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor14DecodingConfig15setDecodingModeERK12DecodingMode", false]], "tensorrt_llm::executor::decodingconfig::seteagleconfig (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor14DecodingConfig14setEagleConfigERK11EagleConfig", false]], "tensorrt_llm::executor::decodingconfig::setlookaheaddecodingconfig (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor14DecodingConfig26setLookaheadDecodingConfigERK23LookaheadDecodingConfig", false]], "tensorrt_llm::executor::decodingconfig::setmedusachoices (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor14DecodingConfig16setMedusaChoicesERK13MedusaChoices", false]], "tensorrt_llm::executor::decodingmode (c++ class)": [[0, "_CPPv4N12tensorrt_llm8executor12DecodingModeE", false]], "tensorrt_llm::executor::decodingmode::allbitset (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor12DecodingMode9allBitSetE14UnderlyingType", false]], "tensorrt_llm::executor::decodingmode::anybitset (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor12DecodingMode9anyBitSetE14UnderlyingType", false]], "tensorrt_llm::executor::decodingmode::auto (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor12DecodingMode4AutoEv", false]], "tensorrt_llm::executor::decodingmode::beamsearch (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor12DecodingMode10BeamSearchEv", false]], "tensorrt_llm::executor::decodingmode::decodingmode (c++ function)": [[0, 
"_CPPv4N12tensorrt_llm8executor12DecodingMode12DecodingModeE14UnderlyingType", false]], "tensorrt_llm::executor::decodingmode::eagle (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor12DecodingMode5EagleEv", false]], "tensorrt_llm::executor::decodingmode::explicitdrafttokens (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor12DecodingMode19ExplicitDraftTokensEv", false]], "tensorrt_llm::executor::decodingmode::externaldrafttokens (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor12DecodingMode19ExternalDraftTokensEv", false]], "tensorrt_llm::executor::decodingmode::getname (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor12DecodingMode7getNameEv", false]], "tensorrt_llm::executor::decodingmode::getstate (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor12DecodingMode8getStateEv", false]], "tensorrt_llm::executor::decodingmode::isauto (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor12DecodingMode6isAutoEv", false]], "tensorrt_llm::executor::decodingmode::isbeamsearch (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor12DecodingMode12isBeamSearchEv", false]], "tensorrt_llm::executor::decodingmode::iseagle (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor12DecodingMode7isEagleEv", false]], "tensorrt_llm::executor::decodingmode::isexplicitdrafttokens (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor12DecodingMode21isExplicitDraftTokensEv", false]], "tensorrt_llm::executor::decodingmode::isexternaldrafttokens (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor12DecodingMode21isExternalDraftTokensEv", false]], "tensorrt_llm::executor::decodingmode::islookahead (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor12DecodingMode11isLookaheadEv", false]], "tensorrt_llm::executor::decodingmode::ismedusa (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor12DecodingMode8isMedusaEv", false]], "tensorrt_llm::executor::decodingmode::istopk (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor12DecodingMode6isTopKEv", false]], "tensorrt_llm::executor::decodingmode::istopkandtopp (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor12DecodingMode13isTopKandTopPEv", false]], "tensorrt_llm::executor::decodingmode::istopkortopp (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor12DecodingMode12isTopKorTopPEv", false]], "tensorrt_llm::executor::decodingmode::istopp (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor12DecodingMode6isTopPEv", false]], "tensorrt_llm::executor::decodingmode::isusebantokens (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor12DecodingMode14isUseBanTokensEv", false]], "tensorrt_llm::executor::decodingmode::isusebanwords (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor12DecodingMode13isUseBanWordsEv", false]], "tensorrt_llm::executor::decodingmode::isuseexpliciteosstop (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor12DecodingMode20isUseExplicitEosStopEv", false]], "tensorrt_llm::executor::decodingmode::isusefrequencypenalty (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor12DecodingMode21isUseFrequencyPenaltyEv", false]], "tensorrt_llm::executor::decodingmode::isusemaxlengthstop (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor12DecodingMode18isUseMaxLengthStopEv", false]], "tensorrt_llm::executor::decodingmode::isuseminlength (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor12DecodingMode14isUseMinLengthEv", false]], "tensorrt_llm::executor::decodingmode::isuseminp (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor12DecodingMode9isUseMinPEv", false]], 
"tensorrt_llm::executor::decodingmode::isusenorepeatngramsize (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor12DecodingMode22isUseNoRepeatNgramSizeEv", false]], "tensorrt_llm::executor::decodingmode::isuseoccurrencepenalty (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor12DecodingMode22isUseOccurrencePenaltyEv", false]], "tensorrt_llm::executor::decodingmode::isusepenalty (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor12DecodingMode12isUsePenaltyEv", false]], "tensorrt_llm::executor::decodingmode::isusepresencepenalty (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor12DecodingMode20isUsePresencePenaltyEv", false]], "tensorrt_llm::executor::decodingmode::isuserepetitionpenalty (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor12DecodingMode22isUseRepetitionPenaltyEv", false]], "tensorrt_llm::executor::decodingmode::isusestopcriteria (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor12DecodingMode17isUseStopCriteriaEv", false]], "tensorrt_llm::executor::decodingmode::isusestopwords (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor12DecodingMode14isUseStopWordsEv", false]], "tensorrt_llm::executor::decodingmode::isusetemperature (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor12DecodingMode16isUseTemperatureEv", false]], "tensorrt_llm::executor::decodingmode::isusevariablebeamwidthsearch (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor12DecodingMode28isUseVariableBeamWidthSearchEv", false]], "tensorrt_llm::executor::decodingmode::kauto (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor12DecodingMode5kAutoE", false]], "tensorrt_llm::executor::decodingmode::kbeamsearch (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor12DecodingMode11kBeamSearchE", false]], "tensorrt_llm::executor::decodingmode::keagle (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor12DecodingMode6kEagleE", false]], "tensorrt_llm::executor::decodingmode::kexplicitdrafttokens (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor12DecodingMode20kExplicitDraftTokensE", false]], "tensorrt_llm::executor::decodingmode::kexternaldrafttokens (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor12DecodingMode20kExternalDraftTokensE", false]], "tensorrt_llm::executor::decodingmode::klookahead (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor12DecodingMode10kLookaheadE", false]], "tensorrt_llm::executor::decodingmode::kmedusa (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor12DecodingMode7kMedusaE", false]], "tensorrt_llm::executor::decodingmode::knumflags (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor12DecodingMode9kNumFlagsE", false]], "tensorrt_llm::executor::decodingmode::ktopk (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor12DecodingMode5kTopKE", false]], "tensorrt_llm::executor::decodingmode::ktopktopp (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor12DecodingMode9kTopKTopPE", false]], "tensorrt_llm::executor::decodingmode::ktopp (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor12DecodingMode5kTopPE", false]], "tensorrt_llm::executor::decodingmode::kusebantokens (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor12DecodingMode13kUseBanTokensE", false]], "tensorrt_llm::executor::decodingmode::kusebanwords (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor12DecodingMode12kUseBanWordsE", false]], "tensorrt_llm::executor::decodingmode::kuseexpliciteosstop (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor12DecodingMode19kUseExplicitEosStopE", false]], "tensorrt_llm::executor::decodingmode::kusefrequencypenalties (c++ member)": [[0, 
"_CPPv4N12tensorrt_llm8executor12DecodingMode22kUseFrequencyPenaltiesE", false]], "tensorrt_llm::executor::decodingmode::kusemaxlengthstop (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor12DecodingMode17kUseMaxLengthStopE", false]], "tensorrt_llm::executor::decodingmode::kuseminlength (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor12DecodingMode13kUseMinLengthE", false]], "tensorrt_llm::executor::decodingmode::kuseminp (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor12DecodingMode8kUseMinPE", false]], "tensorrt_llm::executor::decodingmode::kusenorepeatngramsize (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor12DecodingMode21kUseNoRepeatNgramSizeE", false]], "tensorrt_llm::executor::decodingmode::kuseoccurrencepenalties (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor12DecodingMode23kUseOccurrencePenaltiesE", false]], "tensorrt_llm::executor::decodingmode::kusepenalties (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor12DecodingMode13kUsePenaltiesE", false]], "tensorrt_llm::executor::decodingmode::kusepresencepenalties (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor12DecodingMode21kUsePresencePenaltiesE", false]], "tensorrt_llm::executor::decodingmode::kuserepetitionpenalties (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor12DecodingMode23kUseRepetitionPenaltiesE", false]], "tensorrt_llm::executor::decodingmode::kusestandardstopcriteria (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor12DecodingMode24kUseStandardStopCriteriaE", false]], "tensorrt_llm::executor::decodingmode::kusestopwords (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor12DecodingMode13kUseStopWordsE", false]], "tensorrt_llm::executor::decodingmode::kusetemperature (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor12DecodingMode15kUseTemperatureE", false]], "tensorrt_llm::executor::decodingmode::kusevariablebeamwidthsearch (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor12DecodingMode27kUseVariableBeamWidthSearchE", false]], "tensorrt_llm::executor::decodingmode::lookahead (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor12DecodingMode9LookaheadEv", false]], "tensorrt_llm::executor::decodingmode::medusa (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor12DecodingMode6MedusaEv", false]], "tensorrt_llm::executor::decodingmode::mstate (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor12DecodingMode6mStateE", false]], "tensorrt_llm::executor::decodingmode::operator== (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor12DecodingModeeqERK12DecodingMode", false]], "tensorrt_llm::executor::decodingmode::setbitto (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor12DecodingMode8setBitToE14UnderlyingTypeb", false]], "tensorrt_llm::executor::decodingmode::topk (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor12DecodingMode4TopKEv", false]], "tensorrt_llm::executor::decodingmode::topktopp (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor12DecodingMode8TopKTopPEv", false]], "tensorrt_llm::executor::decodingmode::topp (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor12DecodingMode4TopPEv", false]], "tensorrt_llm::executor::decodingmode::underlyingtype (c++ type)": [[0, "_CPPv4N12tensorrt_llm8executor12DecodingMode14UnderlyingTypeE", false]], "tensorrt_llm::executor::decodingmode::usebantokens (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor12DecodingMode12useBanTokensEb", false]], "tensorrt_llm::executor::decodingmode::usebanwords (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor12DecodingMode11useBanWordsEb", false]], 
"tensorrt_llm::executor::decodingmode::useexpliciteosstop (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor12DecodingMode18useExplicitEosStopEb", false]], "tensorrt_llm::executor::decodingmode::usefrequencypenalty (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor12DecodingMode19useFrequencyPenaltyEb", false]], "tensorrt_llm::executor::decodingmode::usemaxlengthstop (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor12DecodingMode16useMaxLengthStopEb", false]], "tensorrt_llm::executor::decodingmode::useminlength (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor12DecodingMode12useMinLengthEb", false]], "tensorrt_llm::executor::decodingmode::useminp (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor12DecodingMode7useMinPEb", false]], "tensorrt_llm::executor::decodingmode::usenorepeatngramsize (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor12DecodingMode20useNoRepeatNgramSizeEb", false]], "tensorrt_llm::executor::decodingmode::useoccurrencepenalties (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor12DecodingMode22useOccurrencePenaltiesEb", false]], "tensorrt_llm::executor::decodingmode::usepresencepenalty (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor12DecodingMode18usePresencePenaltyEb", false]], "tensorrt_llm::executor::decodingmode::userepetitionpenalty (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor12DecodingMode20useRepetitionPenaltyEb", false]], "tensorrt_llm::executor::decodingmode::usestopwords (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor12DecodingMode12useStopWordsEb", false]], "tensorrt_llm::executor::decodingmode::usetemperature (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor12DecodingMode14useTemperatureEb", false]], "tensorrt_llm::executor::decodingmode::usevariablebeamwidthsearch (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor12DecodingMode26useVariableBeamWidthSearchEb", false]], "tensorrt_llm::executor::detail (c++ type)": [[0, "_CPPv4N12tensorrt_llm8executor6detailE", false]], "tensorrt_llm::executor::detail::dimtype64 (c++ type)": [[0, "_CPPv4N12tensorrt_llm8executor6detail9DimType64E", false]], "tensorrt_llm::executor::detail::ofitensor (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor6detail9ofITensorENSt10shared_ptrIN7runtime7ITensorEEE", false]], "tensorrt_llm::executor::detail::toitensor (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor6detail9toITensorERK6Tensor", false]], "tensorrt_llm::executor::disagg_executor (c++ type)": [[0, "_CPPv4N12tensorrt_llm8executor15disagg_executorE", false]], "tensorrt_llm::executor::disagg_executor::disaggexecutororchestrator (c++ class)": [[0, "_CPPv4N12tensorrt_llm8executor15disagg_executor26DisaggExecutorOrchestratorE", false]], "tensorrt_llm::executor::disagg_executor::disaggexecutororchestrator::awaitcontextresponses (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor15disagg_executor26DisaggExecutorOrchestrator21awaitContextResponsesERKNSt8optionalINSt6chrono12millisecondsEEENSt8optionalIiEE", false]], "tensorrt_llm::executor::disagg_executor::disaggexecutororchestrator::awaitgenerationresponses (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor15disagg_executor26DisaggExecutorOrchestrator24awaitGenerationResponsesERKNSt8optionalINSt6chrono12millisecondsEEENSt8optionalIiEE", false]], "tensorrt_llm::executor::disagg_executor::disaggexecutororchestrator::canenqueue (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor15disagg_executor26DisaggExecutorOrchestrator10canEnqueueEv", false]], 
"tensorrt_llm::executor::disagg_executor::disaggexecutororchestrator::disaggexecutororchestrator (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor15disagg_executor26DisaggExecutorOrchestrator26DisaggExecutorOrchestratorERKNSt6vectorINSt10filesystem4pathEEERKNSt6vectorINSt10filesystem4pathEEERKNSt6vectorIN8executor14ExecutorConfigEEERKNSt6vectorIN8executor14ExecutorConfigEEEbb", false]], "tensorrt_llm::executor::disagg_executor::disaggexecutororchestrator::enqueuecontext (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor15disagg_executor26DisaggExecutorOrchestrator14enqueueContextERKNSt6vectorIN5texec7RequestEEENSt8optionalIiEEb", false]], "tensorrt_llm::executor::disagg_executor::disaggexecutororchestrator::enqueuegeneration (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor15disagg_executor26DisaggExecutorOrchestrator17enqueueGenerationERKNSt6vectorIN5texec7RequestEEERKNSt6vectorI6IdTypeEENSt8optionalIiEEb", false]], "tensorrt_llm::executor::disagg_executor::disaggexecutororchestrator::getcontextexecutors (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor15disagg_executor26DisaggExecutorOrchestrator19getContextExecutorsEv", false]], "tensorrt_llm::executor::disagg_executor::disaggexecutororchestrator::getgenexecutors (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor15disagg_executor26DisaggExecutorOrchestrator15getGenExecutorsEv", false]], "tensorrt_llm::executor::disagg_executor::disaggexecutororchestrator::mimpl (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor15disagg_executor26DisaggExecutorOrchestrator5mImplE", false]], "tensorrt_llm::executor::disagg_executor::disaggexecutororchestrator::~disaggexecutororchestrator (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor15disagg_executor26DisaggExecutorOrchestratorD0Ev", false]], "tensorrt_llm::executor::disagg_executor::responsewithid (c++ struct)": [[0, "_CPPv4N12tensorrt_llm8executor15disagg_executor14ResponseWithIdE", false]], "tensorrt_llm::executor::disagg_executor::responsewithid::gid (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor15disagg_executor14ResponseWithId3gidE", false]], "tensorrt_llm::executor::disagg_executor::responsewithid::operator= (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor15disagg_executor14ResponseWithIdaSERK14ResponseWithId", false], [0, "_CPPv4N12tensorrt_llm8executor15disagg_executor14ResponseWithIdaSERR14ResponseWithId", false]], "tensorrt_llm::executor::disagg_executor::responsewithid::response (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor15disagg_executor14ResponseWithId8responseE", false]], "tensorrt_llm::executor::disagg_executor::responsewithid::responsewithid (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor15disagg_executor14ResponseWithId14ResponseWithIdERK14ResponseWithId", false], [0, "_CPPv4N12tensorrt_llm8executor15disagg_executor14ResponseWithId14ResponseWithIdERKN12tensorrt_llm8executor8ResponseE6IdType", false], [0, "_CPPv4N12tensorrt_llm8executor15disagg_executor14ResponseWithId14ResponseWithIdERR14ResponseWithId", false], [0, "_CPPv4N12tensorrt_llm8executor15disagg_executor14ResponseWithId14ResponseWithIdERRN12tensorrt_llm8executor8ResponseE6IdType", false]], "tensorrt_llm::executor::disagg_executor::responsewithid::~responsewithid (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor15disagg_executor14ResponseWithIdD0Ev", false]], "tensorrt_llm::executor::disservingrequeststats (c++ struct)": [[0, "_CPPv4N12tensorrt_llm8executor22DisServingRequestStatsE", false]], "tensorrt_llm::executor::disservingrequeststats::kvcachesize (c++ member)": [[0, 
"_CPPv4N12tensorrt_llm8executor22DisServingRequestStats11kvCacheSizeE", false]], "tensorrt_llm::executor::disservingrequeststats::kvcachetransferms (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor22DisServingRequestStats17kvCacheTransferMSE", false]], "tensorrt_llm::executor::dynamicbatchconfig (c++ class)": [[0, "_CPPv4N12tensorrt_llm8executor18DynamicBatchConfigE", false]], "tensorrt_llm::executor::dynamicbatchconfig::dynamicbatchconfig (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor18DynamicBatchConfig18DynamicBatchConfigEbb10SizeType32NSt6vectorINSt4pairI10SizeType3210SizeType32EEEE", false]], "tensorrt_llm::executor::dynamicbatchconfig::getbatchsizetable (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor18DynamicBatchConfig17getBatchSizeTableEv", false]], "tensorrt_llm::executor::dynamicbatchconfig::getdynamicbatchmovingaveragewindow (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor18DynamicBatchConfig34getDynamicBatchMovingAverageWindowEv", false]], "tensorrt_llm::executor::dynamicbatchconfig::getenablebatchsizetuning (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor18DynamicBatchConfig24getEnableBatchSizeTuningEv", false]], "tensorrt_llm::executor::dynamicbatchconfig::getenablemaxnumtokenstuning (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor18DynamicBatchConfig27getEnableMaxNumTokensTuningEv", false]], "tensorrt_llm::executor::dynamicbatchconfig::kdefaultbatchsizetable (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor18DynamicBatchConfig22kDefaultBatchSizeTableE", false]], "tensorrt_llm::executor::dynamicbatchconfig::kdefaultdynamicbatchmovingaveragewindow (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor18DynamicBatchConfig39kDefaultDynamicBatchMovingAverageWindowE", false]], "tensorrt_llm::executor::dynamicbatchconfig::mbatchsizetable (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor18DynamicBatchConfig15mBatchSizeTableE", false]], "tensorrt_llm::executor::dynamicbatchconfig::mdynamicbatchmovingaveragewindow (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor18DynamicBatchConfig32mDynamicBatchMovingAverageWindowE", false]], "tensorrt_llm::executor::dynamicbatchconfig::menablebatchsizetuning (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor18DynamicBatchConfig22mEnableBatchSizeTuningE", false]], "tensorrt_llm::executor::dynamicbatchconfig::menablemaxnumtokenstuning (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor18DynamicBatchConfig25mEnableMaxNumTokensTuningE", false]], "tensorrt_llm::executor::eaglechoices (c++ type)": [[0, "_CPPv4N12tensorrt_llm8executor12EagleChoicesE", false]], "tensorrt_llm::executor::eagleconfig (c++ struct)": [[0, "_CPPv4N12tensorrt_llm8executor11EagleConfigE", false]], "tensorrt_llm::executor::eagleconfig::checkposteriorvalue (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor11EagleConfig19checkPosteriorValueERKNSt8optionalIfEE", false]], "tensorrt_llm::executor::eagleconfig::eagleconfig (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor11EagleConfig11EagleConfigENSt8optionalI12EagleChoicesEEbNSt8optionalIfEEbNSt8optionalI10SizeType32EE", false]], "tensorrt_llm::executor::eagleconfig::getdynamictreemaxtopk (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor11EagleConfig21getDynamicTreeMaxTopKEv", false]], "tensorrt_llm::executor::eagleconfig::geteaglechoices (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor11EagleConfig15getEagleChoicesEv", false]], "tensorrt_llm::executor::eagleconfig::getposteriorthreshold (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor11EagleConfig21getPosteriorThresholdEv", 
false]], "tensorrt_llm::executor::eagleconfig::isgreedysampling (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor11EagleConfig16isGreedySamplingEv", false]], "tensorrt_llm::executor::eagleconfig::mdynamictreemaxtopk (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor11EagleConfig19mDynamicTreeMaxTopKE", false]], "tensorrt_llm::executor::eagleconfig::meaglechoices (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor11EagleConfig13mEagleChoicesE", false]], "tensorrt_llm::executor::eagleconfig::mgreedysampling (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor11EagleConfig15mGreedySamplingE", false]], "tensorrt_llm::executor::eagleconfig::mposteriorthreshold (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor11EagleConfig19mPosteriorThresholdE", false]], "tensorrt_llm::executor::eagleconfig::musedynamictree (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor11EagleConfig15mUseDynamicTreeE", false]], "tensorrt_llm::executor::eagleconfig::operator== (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor11EagleConfigeqERK11EagleConfig", false]], "tensorrt_llm::executor::eagleconfig::usedynamictree (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor11EagleConfig14useDynamicTreeEv", false]], "tensorrt_llm::executor::executor (c++ class)": [[0, "_CPPv4N12tensorrt_llm8executor8ExecutorE", false]], "tensorrt_llm::executor::executor::awaitresponses (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor8Executor14awaitResponsesERK6IdTypeRKNSt8optionalINSt6chrono12millisecondsEEE", false], [0, "_CPPv4N12tensorrt_llm8executor8Executor14awaitResponsesERKNSt6vectorI6IdTypeEERKNSt8optionalINSt6chrono12millisecondsEEE", false], [0, "_CPPv4N12tensorrt_llm8executor8Executor14awaitResponsesERKNSt8optionalINSt6chrono12millisecondsEEE", false]], "tensorrt_llm::executor::executor::cancelrequest (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor8Executor13cancelRequestE6IdType", false]], "tensorrt_llm::executor::executor::canenqueuerequests (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor8Executor18canEnqueueRequestsEv", false]], "tensorrt_llm::executor::executor::enqueuerequest (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor8Executor14enqueueRequestERK7Request", false]], "tensorrt_llm::executor::executor::enqueuerequests (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor8Executor15enqueueRequestsERKNSt6vectorI7RequestEE", false]], "tensorrt_llm::executor::executor::executor (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor8Executor8ExecutorENSt10shared_ptrI5ModelEENSt10shared_ptrI5ModelEERK14ExecutorConfig", false], [0, "_CPPv4N12tensorrt_llm8executor8Executor8ExecutorENSt10shared_ptrI5ModelEERK14ExecutorConfig", false], [0, "_CPPv4N12tensorrt_llm8executor8Executor8ExecutorERK10BufferViewRKNSt6stringE9ModelTypeRK14ExecutorConfigRKNSt8optionalINSt3mapINSt6stringE6TensorEEEE", false], [0, "_CPPv4N12tensorrt_llm8executor8Executor8ExecutorERK10BufferViewRKNSt6stringERK10BufferViewRKNSt6stringE9ModelTypeRK14ExecutorConfig", false], [0, "_CPPv4N12tensorrt_llm8executor8Executor8ExecutorERK8Executor", false], [0, "_CPPv4N12tensorrt_llm8executor8Executor8ExecutorERKNSt10filesystem4pathE9ModelTypeRK14ExecutorConfig", false], [0, "_CPPv4N12tensorrt_llm8executor8Executor8ExecutorERKNSt10filesystem4pathERKNSt10filesystem4pathE9ModelTypeRK14ExecutorConfig", false], [0, "_CPPv4N12tensorrt_llm8executor8Executor8ExecutorERR8Executor", false]], "tensorrt_llm::executor::executor::getkvcacheeventmanager (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor8Executor22getKVCacheEventManagerEv", false]], 
"tensorrt_llm::executor::executor::getlatestdebugtensors (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor8Executor21getLatestDebugTensorsEv", false]], "tensorrt_llm::executor::executor::getlatestiterationstats (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor8Executor23getLatestIterationStatsEv", false]], "tensorrt_llm::executor::executor::getlatestrequeststats (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor8Executor21getLatestRequestStatsEv", false]], "tensorrt_llm::executor::executor::getnumresponsesready (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor8Executor20getNumResponsesReadyERKNSt8optionalI6IdTypeEE", false]], "tensorrt_llm::executor::executor::isparticipant (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor8Executor13isParticipantEv", false]], "tensorrt_llm::executor::executor::mimpl (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor8Executor5mImplE", false]], "tensorrt_llm::executor::executor::operator= (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor8ExecutoraSERK8Executor", false], [0, "_CPPv4N12tensorrt_llm8executor8ExecutoraSERR8Executor", false]], "tensorrt_llm::executor::executor::shutdown (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor8Executor8shutdownEv", false]], "tensorrt_llm::executor::executor::~executor (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor8ExecutorD0Ev", false]], "tensorrt_llm::executor::executorconfig (c++ class)": [[0, "_CPPv4N12tensorrt_llm8executor14ExecutorConfigE", false]], "tensorrt_llm::executor::executorconfig::executorconfig (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor14ExecutorConfig14ExecutorConfigE10SizeType3215SchedulerConfig13KvCacheConfigbb10SizeType3210SizeType3212BatchingTypeNSt8optionalI10SizeType32EENSt8optionalI10SizeType32EENSt8optionalI14ParallelConfigEERKNSt8optionalI15PeftCacheConfigEENSt8optionalI25LogitsPostProcessorConfigEENSt8optionalI14DecodingConfigEEbfNSt8optionalI10SizeType32EERK29ExtendedRuntimePerfKnobConfigNSt8optionalI11DebugConfigEE10SizeType328uint64_tNSt8optionalI25SpeculativeDecodingConfigEENSt8optionalI20GuidedDecodingConfigEENSt8optionalINSt6vectorI21AdditionalModelOutputEEEENSt8optionalI22CacheTransceiverConfigEEbbb", false]], "tensorrt_llm::executor::executorconfig::getadditionalmodeloutputs (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor14ExecutorConfig25getAdditionalModelOutputsEv", false]], "tensorrt_llm::executor::executorconfig::getbatchingtype (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor14ExecutorConfig15getBatchingTypeEv", false]], "tensorrt_llm::executor::executorconfig::getcachetransceiverconfig (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor14ExecutorConfig25getCacheTransceiverConfigEv", false]], "tensorrt_llm::executor::executorconfig::getdebugconfig (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor14ExecutorConfig14getDebugConfigEv", false]], "tensorrt_llm::executor::executorconfig::getdecodingconfig (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor14ExecutorConfig17getDecodingConfigEv", false]], "tensorrt_llm::executor::executorconfig::getenablechunkedcontext (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor14ExecutorConfig23getEnableChunkedContextEv", false]], "tensorrt_llm::executor::executorconfig::getenabletrtoverlap (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor14ExecutorConfig19getEnableTrtOverlapEv", false]], "tensorrt_llm::executor::executorconfig::getextendedruntimeperfknobconfig (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor14ExecutorConfig32getExtendedRuntimePerfKnobConfigEv", 
false]], "tensorrt_llm::executor::executorconfig::getgathergenerationlogits (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor14ExecutorConfig25getGatherGenerationLogitsEv", false]], "tensorrt_llm::executor::executorconfig::getgpuweightspercent (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor14ExecutorConfig20getGpuWeightsPercentEv", false]], "tensorrt_llm::executor::executorconfig::getguideddecodingconfig (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor14ExecutorConfig23getGuidedDecodingConfigEv", false]], "tensorrt_llm::executor::executorconfig::getiterstatsmaxiterations (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor14ExecutorConfig25getIterStatsMaxIterationsEv", false]], "tensorrt_llm::executor::executorconfig::getkvcacheconfig (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor14ExecutorConfig16getKvCacheConfigEv", false]], "tensorrt_llm::executor::executorconfig::getkvcacheconfigref (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor14ExecutorConfig19getKvCacheConfigRefEv", false]], "tensorrt_llm::executor::executorconfig::getlogitspostprocessorconfig (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor14ExecutorConfig28getLogitsPostProcessorConfigEv", false]], "tensorrt_llm::executor::executorconfig::getmaxbatchsize (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor14ExecutorConfig15getMaxBatchSizeEv", false]], "tensorrt_llm::executor::executorconfig::getmaxbeamwidth (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor14ExecutorConfig15getMaxBeamWidthEv", false]], "tensorrt_llm::executor::executorconfig::getmaxnumtokens (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor14ExecutorConfig15getMaxNumTokensEv", false]], "tensorrt_llm::executor::executorconfig::getmaxqueuesize (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor14ExecutorConfig15getMaxQueueSizeEv", false]], "tensorrt_llm::executor::executorconfig::getmaxseqidlemicroseconds (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor14ExecutorConfig25getMaxSeqIdleMicrosecondsEv", false]], "tensorrt_llm::executor::executorconfig::getnormalizelogprobs (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor14ExecutorConfig20getNormalizeLogProbsEv", false]], "tensorrt_llm::executor::executorconfig::getparallelconfig (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor14ExecutorConfig17getParallelConfigEv", false]], "tensorrt_llm::executor::executorconfig::getpeftcacheconfig (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor14ExecutorConfig18getPeftCacheConfigEv", false]], "tensorrt_llm::executor::executorconfig::getprompttableoffloading (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor14ExecutorConfig24getPromptTableOffloadingEv", false]], "tensorrt_llm::executor::executorconfig::getrecvpollperiodms (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor14ExecutorConfig19getRecvPollPeriodMsEv", false]], "tensorrt_llm::executor::executorconfig::getrequeststatsmaxiterations (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor14ExecutorConfig28getRequestStatsMaxIterationsEv", false]], "tensorrt_llm::executor::executorconfig::getschedulerconfig (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor14ExecutorConfig18getSchedulerConfigEv", false]], "tensorrt_llm::executor::executorconfig::getschedulerconfigref (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor14ExecutorConfig21getSchedulerConfigRefEv", false]], "tensorrt_llm::executor::executorconfig::getspecdecconfig (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor14ExecutorConfig16getSpecDecConfigEv", false]], 
"tensorrt_llm::executor::executorconfig::getusegpudirectstorage (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor14ExecutorConfig22getUseGpuDirectStorageEv", false]], "tensorrt_llm::executor::executorconfig::kdefaultiterstatsmaxiterations (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor14ExecutorConfig30kDefaultIterStatsMaxIterationsE", false]], "tensorrt_llm::executor::executorconfig::kdefaultmaxseqidlemicroseconds (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor14ExecutorConfig30kDefaultMaxSeqIdleMicrosecondsE", false]], "tensorrt_llm::executor::executorconfig::kdefaultrequeststatsmaxiterations (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor14ExecutorConfig33kDefaultRequestStatsMaxIterationsE", false]], "tensorrt_llm::executor::executorconfig::madditionalmodeloutputs (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor14ExecutorConfig23mAdditionalModelOutputsE", false]], "tensorrt_llm::executor::executorconfig::mbatchingtype (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor14ExecutorConfig13mBatchingTypeE", false]], "tensorrt_llm::executor::executorconfig::mcachetransceiverconfig (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor14ExecutorConfig23mCacheTransceiverConfigE", false]], "tensorrt_llm::executor::executorconfig::mdebugconfig (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor14ExecutorConfig12mDebugConfigE", false]], "tensorrt_llm::executor::executorconfig::mdecodingconfig (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor14ExecutorConfig15mDecodingConfigE", false]], "tensorrt_llm::executor::executorconfig::menablechunkedcontext (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor14ExecutorConfig21mEnableChunkedContextE", false]], "tensorrt_llm::executor::executorconfig::menabletrtoverlap (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor14ExecutorConfig17mEnableTrtOverlapE", false]], "tensorrt_llm::executor::executorconfig::mextendedruntimeperfknobconfig (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor14ExecutorConfig30mExtendedRuntimePerfKnobConfigE", false]], "tensorrt_llm::executor::executorconfig::mgathergenerationlogits (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor14ExecutorConfig23mGatherGenerationLogitsE", false]], "tensorrt_llm::executor::executorconfig::mgpuweightspercent (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor14ExecutorConfig18mGpuWeightsPercentE", false]], "tensorrt_llm::executor::executorconfig::mguideddecodingconfig (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor14ExecutorConfig21mGuidedDecodingConfigE", false]], "tensorrt_llm::executor::executorconfig::miterstatsmaxiterations (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor14ExecutorConfig23mIterStatsMaxIterationsE", false]], "tensorrt_llm::executor::executorconfig::mkvcacheconfig (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor14ExecutorConfig14mKvCacheConfigE", false]], "tensorrt_llm::executor::executorconfig::mlogitspostprocessorconfig (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor14ExecutorConfig26mLogitsPostProcessorConfigE", false]], "tensorrt_llm::executor::executorconfig::mmaxbatchsize (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor14ExecutorConfig13mMaxBatchSizeE", false]], "tensorrt_llm::executor::executorconfig::mmaxbeamwidth (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor14ExecutorConfig13mMaxBeamWidthE", false]], "tensorrt_llm::executor::executorconfig::mmaxnumtokens (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor14ExecutorConfig13mMaxNumTokensE", false]], "tensorrt_llm::executor::executorconfig::mmaxqueuesize (c++ member)": [[0, 
"_CPPv4N12tensorrt_llm8executor14ExecutorConfig13mMaxQueueSizeE", false]], "tensorrt_llm::executor::executorconfig::mmaxseqidlemicroseconds (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor14ExecutorConfig23mMaxSeqIdleMicrosecondsE", false]], "tensorrt_llm::executor::executorconfig::mnormalizelogprobs (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor14ExecutorConfig18mNormalizeLogProbsE", false]], "tensorrt_llm::executor::executorconfig::mparallelconfig (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor14ExecutorConfig15mParallelConfigE", false]], "tensorrt_llm::executor::executorconfig::mpeftcacheconfig (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor14ExecutorConfig16mPeftCacheConfigE", false]], "tensorrt_llm::executor::executorconfig::mprompttableoffloading (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor14ExecutorConfig22mPromptTableOffloadingE", false]], "tensorrt_llm::executor::executorconfig::mrecvpollperiodms (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor14ExecutorConfig17mRecvPollPeriodMsE", false]], "tensorrt_llm::executor::executorconfig::mrequeststatsmaxiterations (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor14ExecutorConfig26mRequestStatsMaxIterationsE", false]], "tensorrt_llm::executor::executorconfig::mschedulerconfig (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor14ExecutorConfig16mSchedulerConfigE", false]], "tensorrt_llm::executor::executorconfig::mspeculativedecodingconfig (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor14ExecutorConfig26mSpeculativeDecodingConfigE", false]], "tensorrt_llm::executor::executorconfig::musegpudirectstorage (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor14ExecutorConfig20mUseGpuDirectStorageE", false]], "tensorrt_llm::executor::executorconfig::setadditionalmodeloutputs (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor14ExecutorConfig25setAdditionalModelOutputsERKNSt6vectorI21AdditionalModelOutputEE", false]], "tensorrt_llm::executor::executorconfig::setbatchingtype (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor14ExecutorConfig15setBatchingTypeE12BatchingType", false]], "tensorrt_llm::executor::executorconfig::setcachetransceiverconfig (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor14ExecutorConfig25setCacheTransceiverConfigERK22CacheTransceiverConfig", false]], "tensorrt_llm::executor::executorconfig::setdebugconfig (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor14ExecutorConfig14setDebugConfigERK11DebugConfig", false]], "tensorrt_llm::executor::executorconfig::setdecodingconfig (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor14ExecutorConfig17setDecodingConfigERK14DecodingConfig", false]], "tensorrt_llm::executor::executorconfig::setenablechunkedcontext (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor14ExecutorConfig23setEnableChunkedContextEb", false]], "tensorrt_llm::executor::executorconfig::setenabletrtoverlap (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor14ExecutorConfig19setEnableTrtOverlapEb", false]], "tensorrt_llm::executor::executorconfig::setextendedruntimeperfknobconfig (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor14ExecutorConfig32setExtendedRuntimePerfKnobConfigERK29ExtendedRuntimePerfKnobConfig", false]], "tensorrt_llm::executor::executorconfig::setgathergenerationlogits (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor14ExecutorConfig25setGatherGenerationLogitsEb", false]], "tensorrt_llm::executor::executorconfig::setgpuweightspercent (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor14ExecutorConfig20setGpuWeightsPercentERKf", false]], 
"tensorrt_llm::executor::executorconfig::setguideddecodingconfig (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor14ExecutorConfig23setGuidedDecodingConfigERK20GuidedDecodingConfig", false]], "tensorrt_llm::executor::executorconfig::setiterstatsmaxiterations (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor14ExecutorConfig25setIterStatsMaxIterationsE10SizeType32", false]], "tensorrt_llm::executor::executorconfig::setkvcacheconfig (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor14ExecutorConfig16setKvCacheConfigERK13KvCacheConfig", false]], "tensorrt_llm::executor::executorconfig::setlogitspostprocessorconfig (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor14ExecutorConfig28setLogitsPostProcessorConfigERK25LogitsPostProcessorConfig", false]], "tensorrt_llm::executor::executorconfig::setmaxbatchsize (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor14ExecutorConfig15setMaxBatchSizeE10SizeType32", false]], "tensorrt_llm::executor::executorconfig::setmaxbeamwidth (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor14ExecutorConfig15setMaxBeamWidthE10SizeType32", false]], "tensorrt_llm::executor::executorconfig::setmaxnumtokens (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor14ExecutorConfig15setMaxNumTokensE10SizeType32", false]], "tensorrt_llm::executor::executorconfig::setmaxqueuesize (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor14ExecutorConfig15setMaxQueueSizeERKNSt8optionalI10SizeType32EE", false]], "tensorrt_llm::executor::executorconfig::setmaxseqidlemicroseconds (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor14ExecutorConfig25setMaxSeqIdleMicrosecondsE8uint64_t", false]], "tensorrt_llm::executor::executorconfig::setnormalizelogprobs (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor14ExecutorConfig20setNormalizeLogProbsEb", false]], "tensorrt_llm::executor::executorconfig::setparallelconfig (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor14ExecutorConfig17setParallelConfigERK14ParallelConfig", false]], "tensorrt_llm::executor::executorconfig::setpeftcacheconfig (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor14ExecutorConfig18setPeftCacheConfigERK15PeftCacheConfig", false]], "tensorrt_llm::executor::executorconfig::setprompttableoffloading (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor14ExecutorConfig24setPromptTableOffloadingEb", false]], "tensorrt_llm::executor::executorconfig::setrecvpollperiodms (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor14ExecutorConfig19setRecvPollPeriodMsERK10SizeType32", false]], "tensorrt_llm::executor::executorconfig::setrequeststatsmaxiterations (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor14ExecutorConfig28setRequestStatsMaxIterationsE10SizeType32", false]], "tensorrt_llm::executor::executorconfig::setschedulerconfig (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor14ExecutorConfig18setSchedulerConfigERK15SchedulerConfig", false]], "tensorrt_llm::executor::executorconfig::setspecdecconfig (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor14ExecutorConfig16setSpecDecConfigERK25SpeculativeDecodingConfig", false]], "tensorrt_llm::executor::executorconfig::setusegpudirectstorage (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor14ExecutorConfig22setUseGpuDirectStorageERKb", false]], "tensorrt_llm::executor::extendedruntimeperfknobconfig (c++ class)": [[0, "_CPPv4N12tensorrt_llm8executor29ExtendedRuntimePerfKnobConfigE", false]], "tensorrt_llm::executor::extendedruntimeperfknobconfig::extendedruntimeperfknobconfig (c++ function)": [[0, 
"_CPPv4N12tensorrt_llm8executor29ExtendedRuntimePerfKnobConfig29ExtendedRuntimePerfKnobConfigEbbb10SizeType32", false]], "tensorrt_llm::executor::extendedruntimeperfknobconfig::getcudagraphcachesize (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor29ExtendedRuntimePerfKnobConfig21getCudaGraphCacheSizeEv", false]], "tensorrt_llm::executor::extendedruntimeperfknobconfig::getcudagraphmode (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor29ExtendedRuntimePerfKnobConfig16getCudaGraphModeEv", false]], "tensorrt_llm::executor::extendedruntimeperfknobconfig::getenablecontextfmhafp32acc (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor29ExtendedRuntimePerfKnobConfig27getEnableContextFMHAFP32AccEv", false]], "tensorrt_llm::executor::extendedruntimeperfknobconfig::getmultiblockmode (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor29ExtendedRuntimePerfKnobConfig17getMultiBlockModeEv", false]], "tensorrt_llm::executor::extendedruntimeperfknobconfig::mcudagraphcachesize (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor29ExtendedRuntimePerfKnobConfig19mCudaGraphCacheSizeE", false]], "tensorrt_llm::executor::extendedruntimeperfknobconfig::mcudagraphmode (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor29ExtendedRuntimePerfKnobConfig14mCudaGraphModeE", false]], "tensorrt_llm::executor::extendedruntimeperfknobconfig::menablecontextfmhafp32acc (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor29ExtendedRuntimePerfKnobConfig25mEnableContextFMHAFP32AccE", false]], "tensorrt_llm::executor::extendedruntimeperfknobconfig::mmultiblockmode (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor29ExtendedRuntimePerfKnobConfig15mMultiBlockModeE", false]], "tensorrt_llm::executor::extendedruntimeperfknobconfig::operator== (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor29ExtendedRuntimePerfKnobConfigeqERK29ExtendedRuntimePerfKnobConfig", false]], "tensorrt_llm::executor::extendedruntimeperfknobconfig::setcudagraphcachesize (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor29ExtendedRuntimePerfKnobConfig21setCudaGraphCacheSizeE10SizeType32", false]], "tensorrt_llm::executor::extendedruntimeperfknobconfig::setcudagraphmode (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor29ExtendedRuntimePerfKnobConfig16setCudaGraphModeEb", false]], "tensorrt_llm::executor::extendedruntimeperfknobconfig::setenablecontextfmhafp32acc (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor29ExtendedRuntimePerfKnobConfig27setEnableContextFMHAFP32AccEb", false]], "tensorrt_llm::executor::extendedruntimeperfknobconfig::setmultiblockmode (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor29ExtendedRuntimePerfKnobConfig17setMultiBlockModeEb", false]], "tensorrt_llm::executor::externaldrafttokensconfig (c++ class)": [[0, "_CPPv4N12tensorrt_llm8executor25ExternalDraftTokensConfigE", false]], "tensorrt_llm::executor::externaldrafttokensconfig::externaldrafttokensconfig (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor25ExternalDraftTokensConfig25ExternalDraftTokensConfigE9VecTokensNSt8optionalI6TensorEERKNSt8optionalI9FloatTypeEERKNSt8optionalIbEE", false]], "tensorrt_llm::executor::externaldrafttokensconfig::getacceptancethreshold (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor25ExternalDraftTokensConfig22getAcceptanceThresholdEv", false]], "tensorrt_llm::executor::externaldrafttokensconfig::getfastlogits (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor25ExternalDraftTokensConfig13getFastLogitsEv", false]], "tensorrt_llm::executor::externaldrafttokensconfig::getlogits (c++ function)": [[0, 
"_CPPv4NK12tensorrt_llm8executor25ExternalDraftTokensConfig9getLogitsEv", false]], "tensorrt_llm::executor::externaldrafttokensconfig::gettokens (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor25ExternalDraftTokensConfig9getTokensEv", false]], "tensorrt_llm::executor::externaldrafttokensconfig::macceptancethreshold (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor25ExternalDraftTokensConfig20mAcceptanceThresholdE", false]], "tensorrt_llm::executor::externaldrafttokensconfig::mfastlogits (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor25ExternalDraftTokensConfig11mFastLogitsE", false]], "tensorrt_llm::executor::externaldrafttokensconfig::mlogits (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor25ExternalDraftTokensConfig7mLogitsE", false]], "tensorrt_llm::executor::externaldrafttokensconfig::mtokens (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor25ExternalDraftTokensConfig7mTokensE", false]], "tensorrt_llm::executor::finishreason (c++ enum)": [[0, "_CPPv4N12tensorrt_llm8executor12FinishReasonE", false]], "tensorrt_llm::executor::finishreason::kcancelled (c++ enumerator)": [[0, "_CPPv4N12tensorrt_llm8executor12FinishReason10kCANCELLEDE", false]], "tensorrt_llm::executor::finishreason::kend_id (c++ enumerator)": [[0, "_CPPv4N12tensorrt_llm8executor12FinishReason7kEND_IDE", false]], "tensorrt_llm::executor::finishreason::klength (c++ enumerator)": [[0, "_CPPv4N12tensorrt_llm8executor12FinishReason7kLENGTHE", false]], "tensorrt_llm::executor::finishreason::knot_finished (c++ enumerator)": [[0, "_CPPv4N12tensorrt_llm8executor12FinishReason13kNOT_FINISHEDE", false]], "tensorrt_llm::executor::finishreason::kstop_words (c++ enumerator)": [[0, "_CPPv4N12tensorrt_llm8executor12FinishReason11kSTOP_WORDSE", false]], "tensorrt_llm::executor::finishreason::ktimed_out (c++ enumerator)": [[0, "_CPPv4N12tensorrt_llm8executor12FinishReason10kTIMED_OUTE", false]], "tensorrt_llm::executor::floattype (c++ type)": [[0, "_CPPv4N12tensorrt_llm8executor9FloatTypeE", false]], "tensorrt_llm::executor::guideddecodingconfig (c++ class)": [[0, "_CPPv4N12tensorrt_llm8executor20GuidedDecodingConfigE", false]], "tensorrt_llm::executor::guideddecodingconfig::getbackend (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor20GuidedDecodingConfig10getBackendEv", false]], "tensorrt_llm::executor::guideddecodingconfig::getencodedvocab (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor20GuidedDecodingConfig15getEncodedVocabEv", false]], "tensorrt_llm::executor::guideddecodingconfig::getstoptokenids (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor20GuidedDecodingConfig15getStopTokenIdsEv", false]], "tensorrt_llm::executor::guideddecodingconfig::gettokenizerstr (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor20GuidedDecodingConfig15getTokenizerStrEv", false]], "tensorrt_llm::executor::guideddecodingconfig::guideddecodingbackend (c++ enum)": [[0, "_CPPv4N12tensorrt_llm8executor20GuidedDecodingConfig21GuidedDecodingBackendE", false]], "tensorrt_llm::executor::guideddecodingconfig::guideddecodingbackend::kllguidance (c++ enumerator)": [[0, "_CPPv4N12tensorrt_llm8executor20GuidedDecodingConfig21GuidedDecodingBackend11kLLGUIDANCEE", false]], "tensorrt_llm::executor::guideddecodingconfig::guideddecodingbackend::kxgrammar (c++ enumerator)": [[0, "_CPPv4N12tensorrt_llm8executor20GuidedDecodingConfig21GuidedDecodingBackend9kXGRAMMARE", false]], "tensorrt_llm::executor::guideddecodingconfig::guideddecodingconfig (c++ function)": [[0, 
"_CPPv4N12tensorrt_llm8executor20GuidedDecodingConfig20GuidedDecodingConfigE21GuidedDecodingBackendNSt8optionalINSt6vectorINSt6stringEEEEENSt8optionalINSt6stringEEENSt8optionalINSt6vectorI11TokenIdTypeEEEE", false]], "tensorrt_llm::executor::guideddecodingconfig::mbackend (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor20GuidedDecodingConfig8mBackendE", false]], "tensorrt_llm::executor::guideddecodingconfig::mencodedvocab (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor20GuidedDecodingConfig13mEncodedVocabE", false]], "tensorrt_llm::executor::guideddecodingconfig::mstoptokenids (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor20GuidedDecodingConfig13mStopTokenIdsE", false]], "tensorrt_llm::executor::guideddecodingconfig::mtokenizerstr (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor20GuidedDecodingConfig13mTokenizerStrE", false]], "tensorrt_llm::executor::guideddecodingconfig::operator== (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor20GuidedDecodingConfigeqERK20GuidedDecodingConfig", false]], "tensorrt_llm::executor::guideddecodingconfig::setbackend (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor20GuidedDecodingConfig10setBackendERK21GuidedDecodingBackend", false]], "tensorrt_llm::executor::guideddecodingconfig::setencodedvocab (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor20GuidedDecodingConfig15setEncodedVocabERKNSt6vectorINSt6stringEEE", false]], "tensorrt_llm::executor::guideddecodingconfig::setstoptokenids (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor20GuidedDecodingConfig15setStopTokenIdsERKNSt6vectorI11TokenIdTypeEE", false]], "tensorrt_llm::executor::guideddecodingconfig::settokenizerstr (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor20GuidedDecodingConfig15setTokenizerStrERKNSt6stringE", false]], "tensorrt_llm::executor::guideddecodingconfig::validate (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor20GuidedDecodingConfig8validateEv", false]], "tensorrt_llm::executor::guideddecodingparams (c++ class)": [[0, "_CPPv4N12tensorrt_llm8executor20GuidedDecodingParamsE", false]], "tensorrt_llm::executor::guideddecodingparams::getguide (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor20GuidedDecodingParams8getGuideEv", false]], "tensorrt_llm::executor::guideddecodingparams::getguidetype (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor20GuidedDecodingParams12getGuideTypeEv", false]], "tensorrt_llm::executor::guideddecodingparams::guideddecodingparams (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor20GuidedDecodingParams20GuidedDecodingParamsE9GuideTypeNSt8optionalINSt6stringEEE", false]], "tensorrt_llm::executor::guideddecodingparams::guidetype (c++ enum)": [[0, "_CPPv4N12tensorrt_llm8executor20GuidedDecodingParams9GuideTypeE", false]], "tensorrt_llm::executor::guideddecodingparams::guidetype::kebnf_grammar (c++ enumerator)": [[0, "_CPPv4N12tensorrt_llm8executor20GuidedDecodingParams9GuideType13kEBNF_GRAMMARE", false]], "tensorrt_llm::executor::guideddecodingparams::guidetype::kjson (c++ enumerator)": [[0, "_CPPv4N12tensorrt_llm8executor20GuidedDecodingParams9GuideType5kJSONE", false]], "tensorrt_llm::executor::guideddecodingparams::guidetype::kjson_schema (c++ enumerator)": [[0, "_CPPv4N12tensorrt_llm8executor20GuidedDecodingParams9GuideType12kJSON_SCHEMAE", false]], "tensorrt_llm::executor::guideddecodingparams::guidetype::kregex (c++ enumerator)": [[0, "_CPPv4N12tensorrt_llm8executor20GuidedDecodingParams9GuideType6kREGEXE", false]], "tensorrt_llm::executor::guideddecodingparams::guidetype::kstructural_tag (c++ 
enumerator)": [[0, "_CPPv4N12tensorrt_llm8executor20GuidedDecodingParams9GuideType15kSTRUCTURAL_TAGE", false]], "tensorrt_llm::executor::guideddecodingparams::mguide (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor20GuidedDecodingParams6mGuideE", false]], "tensorrt_llm::executor::guideddecodingparams::mguidetype (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor20GuidedDecodingParams10mGuideTypeE", false]], "tensorrt_llm::executor::guideddecodingparams::operator== (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor20GuidedDecodingParamseqERK20GuidedDecodingParams", false]], "tensorrt_llm::executor::idtype (c++ type)": [[0, "_CPPv4N12tensorrt_llm8executor6IdTypeE", false]], "tensorrt_llm::executor::inflightbatchingstats (c++ struct)": [[0, "_CPPv4N12tensorrt_llm8executor21InflightBatchingStatsE", false]], "tensorrt_llm::executor::inflightbatchingstats::avgnumdecodedtokensperiter (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor21InflightBatchingStats26avgNumDecodedTokensPerIterE", false]], "tensorrt_llm::executor::inflightbatchingstats::microbatchid (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor21InflightBatchingStats12microBatchIdE", false]], "tensorrt_llm::executor::inflightbatchingstats::numcontextrequests (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor21InflightBatchingStats18numContextRequestsE", false]], "tensorrt_llm::executor::inflightbatchingstats::numctxtokens (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor21InflightBatchingStats12numCtxTokensE", false]], "tensorrt_llm::executor::inflightbatchingstats::numgenrequests (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor21InflightBatchingStats14numGenRequestsE", false]], "tensorrt_llm::executor::inflightbatchingstats::numpausedrequests (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor21InflightBatchingStats17numPausedRequestsE", false]], "tensorrt_llm::executor::inflightbatchingstats::numscheduledrequests (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor21InflightBatchingStats20numScheduledRequestsE", false]], "tensorrt_llm::executor::iterationstats (c++ struct)": [[0, "_CPPv4N12tensorrt_llm8executor14IterationStatsE", false]], "tensorrt_llm::executor::iterationstats::cpumemusage (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor14IterationStats11cpuMemUsageE", false]], "tensorrt_llm::executor::iterationstats::crosskvcachestats (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor14IterationStats17crossKvCacheStatsE", false]], "tensorrt_llm::executor::iterationstats::gpumemusage (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor14IterationStats11gpuMemUsageE", false]], "tensorrt_llm::executor::iterationstats::inflightbatchingstats (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor14IterationStats21inflightBatchingStatsE", false]], "tensorrt_llm::executor::iterationstats::iter (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor14IterationStats4iterE", false]], "tensorrt_llm::executor::iterationstats::iterlatencyms (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor14IterationStats13iterLatencyMSE", false]], "tensorrt_llm::executor::iterationstats::kvcachestats (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor14IterationStats12kvCacheStatsE", false]], "tensorrt_llm::executor::iterationstats::maxbatchsizeruntime (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor14IterationStats19maxBatchSizeRuntimeE", false]], "tensorrt_llm::executor::iterationstats::maxbatchsizestatic (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor14IterationStats18maxBatchSizeStaticE", false]], 
"tensorrt_llm::executor::iterationstats::maxbatchsizetunerrecommended (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor14IterationStats28maxBatchSizeTunerRecommendedE", false]], "tensorrt_llm::executor::iterationstats::maxnumactiverequests (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor14IterationStats20maxNumActiveRequestsE", false]], "tensorrt_llm::executor::iterationstats::maxnumtokensruntime (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor14IterationStats19maxNumTokensRuntimeE", false]], "tensorrt_llm::executor::iterationstats::maxnumtokensstatic (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor14IterationStats18maxNumTokensStaticE", false]], "tensorrt_llm::executor::iterationstats::maxnumtokenstunerrecommended (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor14IterationStats28maxNumTokensTunerRecommendedE", false]], "tensorrt_llm::executor::iterationstats::newactiverequestsqueuelatencyms (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor14IterationStats31newActiveRequestsQueueLatencyMSE", false]], "tensorrt_llm::executor::iterationstats::numactiverequests (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor14IterationStats17numActiveRequestsE", false]], "tensorrt_llm::executor::iterationstats::numcompletedrequests (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor14IterationStats20numCompletedRequestsE", false]], "tensorrt_llm::executor::iterationstats::numnewactiverequests (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor14IterationStats20numNewActiveRequestsE", false]], "tensorrt_llm::executor::iterationstats::numqueuedrequests (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor14IterationStats17numQueuedRequestsE", false]], "tensorrt_llm::executor::iterationstats::pinnedmemusage (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor14IterationStats14pinnedMemUsageE", false]], "tensorrt_llm::executor::iterationstats::specdecodingstats (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor14IterationStats17specDecodingStatsE", false]], "tensorrt_llm::executor::iterationstats::staticbatchingstats (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor14IterationStats19staticBatchingStatsE", false]], "tensorrt_llm::executor::iterationstats::timestamp (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor14IterationStats9timestampE", false]], "tensorrt_llm::executor::iterationtype (c++ type)": [[0, "_CPPv4N12tensorrt_llm8executor13IterationTypeE", false]], "tensorrt_llm::executor::jsonserialization (c++ class)": [[0, "_CPPv4N12tensorrt_llm8executor17JsonSerializationE", false]], "tensorrt_llm::executor::jsonserialization::tojsonstr (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor17JsonSerialization9toJsonStrERK12RequestStats", false], [0, "_CPPv4N12tensorrt_llm8executor17JsonSerialization9toJsonStrERK14IterationStats", false], [0, "_CPPv4N12tensorrt_llm8executor17JsonSerialization9toJsonStrERK24RequestStatsPerIteration", false]], "tensorrt_llm::executor::kv_cache (c++ type)": [[0, "_CPPv4N12tensorrt_llm8executor8kv_cacheE", false]], "tensorrt_llm::executor::kv_cache::agentdesc (c++ class)": [[0, "_CPPv4N12tensorrt_llm8executor8kv_cache9AgentDescE", false]], "tensorrt_llm::executor::kv_cache::agentdesc::agentdesc (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor8kv_cache9AgentDesc9AgentDescENSt6stringE", false]], "tensorrt_llm::executor::kv_cache::agentdesc::getbackendagentdesc (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor8kv_cache9AgentDesc19getBackendAgentDescEv", false]], "tensorrt_llm::executor::kv_cache::agentdesc::mbackendagentdesc (c++ member)": [[0, 
"_CPPv4N12tensorrt_llm8executor8kv_cache9AgentDesc17mBackendAgentDescE", false]], "tensorrt_llm::executor::kv_cache::agentstate (c++ struct)": [[0, "_CPPv4N12tensorrt_llm8executor8kv_cache10AgentStateE", false]], "tensorrt_llm::executor::kv_cache::agentstate::agentstate (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor8kv_cache10AgentState10AgentStateENSt6stringENSt6stringE", false], [0, "_CPPv4N12tensorrt_llm8executor8kv_cache10AgentState10AgentStateEv", false]], "tensorrt_llm::executor::kv_cache::agentstate::magentname (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor8kv_cache10AgentState10mAgentNameE", false]], "tensorrt_llm::executor::kv_cache::agentstate::mconnectioninfo (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor8kv_cache10AgentState15mConnectionInfoE", false]], "tensorrt_llm::executor::kv_cache::agentstate::operator== (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor8kv_cache10AgentStateeqERK10AgentState", false]], "tensorrt_llm::executor::kv_cache::agentstate::tostring (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor8kv_cache10AgentState8toStringEv", false]], "tensorrt_llm::executor::kv_cache::baseagentconfig (c++ struct)": [[0, "_CPPv4N12tensorrt_llm8executor8kv_cache15BaseAgentConfigE", false]], "tensorrt_llm::executor::kv_cache::baseagentconfig::mname (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor8kv_cache15BaseAgentConfig5mNameE", false]], "tensorrt_llm::executor::kv_cache::baseagentconfig::useprogthread (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor8kv_cache15BaseAgentConfig13useProgThreadE", false]], "tensorrt_llm::executor::kv_cache::basetransferagent (c++ class)": [[0, "_CPPv4N12tensorrt_llm8executor8kv_cache17BaseTransferAgentE", false]], "tensorrt_llm::executor::kv_cache::basetransferagent::checkremotedescs (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor8kv_cache17BaseTransferAgent16checkRemoteDescsERKNSt6stringERK11MemoryDescs", false]], "tensorrt_llm::executor::kv_cache::basetransferagent::connectremoteagent (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor8kv_cache17BaseTransferAgent18connectRemoteAgentERKNSt6stringERK18ConnectionInfoType", false]], "tensorrt_llm::executor::kv_cache::basetransferagent::deregistermemory (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor8kv_cache17BaseTransferAgent16deregisterMemoryERK13RegisterDescs", false]], "tensorrt_llm::executor::kv_cache::basetransferagent::getconnectioninfo (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor8kv_cache17BaseTransferAgent17getConnectionInfoEv", false]], "tensorrt_llm::executor::kv_cache::basetransferagent::getlocalagentdesc (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor8kv_cache17BaseTransferAgent17getLocalAgentDescEv", false]], "tensorrt_llm::executor::kv_cache::basetransferagent::getnotifiedsyncmessages (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor8kv_cache17BaseTransferAgent23getNotifiedSyncMessagesEv", false]], "tensorrt_llm::executor::kv_cache::basetransferagent::invalidateremoteagent (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor8kv_cache17BaseTransferAgent21invalidateRemoteAgentERKNSt6stringE", false]], "tensorrt_llm::executor::kv_cache::basetransferagent::loadremoteagent (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor8kv_cache17BaseTransferAgent15loadRemoteAgentERKNSt6stringERK9AgentDesc", false]], "tensorrt_llm::executor::kv_cache::basetransferagent::notifysyncmessage (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor8kv_cache17BaseTransferAgent17notifySyncMessageERKNSt6stringERK11SyncMessage", false]], 
"tensorrt_llm::executor::kv_cache::basetransferagent::registermemory (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor8kv_cache17BaseTransferAgent14registerMemoryERK13RegisterDescs", false]], "tensorrt_llm::executor::kv_cache::basetransferagent::submittransferrequests (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor8kv_cache17BaseTransferAgent22submitTransferRequestsERK15TransferRequest", false]], "tensorrt_llm::executor::kv_cache::basetransferagent::~basetransferagent (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor8kv_cache17BaseTransferAgentD0Ev", false]], "tensorrt_llm::executor::kv_cache::cachestate (c++ class)": [[0, "_CPPv4N12tensorrt_llm8executor8kv_cache10CacheStateE", false]], "tensorrt_llm::executor::kv_cache::cachestate::attentionconfig (c++ struct)": [[0, "_CPPv4N12tensorrt_llm8executor8kv_cache10CacheState15AttentionConfigE", false]], "tensorrt_llm::executor::kv_cache::cachestate::attentionconfig::attentionconfig (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor8kv_cache10CacheState15AttentionConfig15AttentionConfigE13AttentionTypei", false]], "tensorrt_llm::executor::kv_cache::cachestate::attentionconfig::mattentiontype (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor8kv_cache10CacheState15AttentionConfig14mAttentionTypeE", false]], "tensorrt_llm::executor::kv_cache::cachestate::attentionconfig::mkvfactor (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor8kv_cache10CacheState15AttentionConfig9mKvFactorE", false]], "tensorrt_llm::executor::kv_cache::cachestate::attentiontype (c++ enum)": [[0, "_CPPv4N12tensorrt_llm8executor8kv_cache10CacheState13AttentionTypeE", false]], "tensorrt_llm::executor::kv_cache::cachestate::attentiontype::kdefault (c++ enumerator)": [[0, "_CPPv4N12tensorrt_llm8executor8kv_cache10CacheState13AttentionType8kDEFAULTE", false]], "tensorrt_llm::executor::kv_cache::cachestate::attentiontype::kmla (c++ enumerator)": [[0, "_CPPv4N12tensorrt_llm8executor8kv_cache10CacheState13AttentionType4kMLAE", false]], "tensorrt_llm::executor::kv_cache::cachestate::cachestate (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor8kv_cache10CacheState10CacheStateE10SizeType3210SizeType3210SizeType3210SizeType3210SizeType3210SizeType32N8nvinfer18DataTypeE13AttentionTypeibii", false], [0, "_CPPv4N12tensorrt_llm8executor8kv_cache10CacheState10CacheStateE11ModelConfigRKN7runtime11WorldConfigEN8nvinfer18DataTypeE13AttentionTypei", false], [0, "_CPPv4N12tensorrt_llm8executor8kv_cache10CacheState10CacheStateENSt6vectorI10SizeType32EE10SizeType3210SizeType3210SizeType3210SizeType32N8nvinfer18DataTypeE13AttentionTypeibii", false]], "tensorrt_llm::executor::kv_cache::cachestate::getattentionconfig (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor8kv_cache10CacheState18getAttentionConfigEv", false]], "tensorrt_llm::executor::kv_cache::cachestate::getdatatype (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor8kv_cache10CacheState11getDataTypeEv", false]], "tensorrt_llm::executor::kv_cache::cachestate::getmodelconfig (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor8kv_cache10CacheState14getModelConfigEv", false]], "tensorrt_llm::executor::kv_cache::cachestate::getparallelconfig (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor8kv_cache10CacheState17getParallelConfigEv", false]], "tensorrt_llm::executor::kv_cache::cachestate::mattentionconfig (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor8kv_cache10CacheState16mAttentionConfigE", false]], "tensorrt_llm::executor::kv_cache::cachestate::mdatatype (c++ member)": [[0, 
"_CPPv4N12tensorrt_llm8executor8kv_cache10CacheState9mDataTypeE", false]], "tensorrt_llm::executor::kv_cache::cachestate::mmodelconfig (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor8kv_cache10CacheState12mModelConfigE", false]], "tensorrt_llm::executor::kv_cache::cachestate::modelconfig (c++ struct)": [[0, "_CPPv4N12tensorrt_llm8executor8kv_cache10CacheState11ModelConfigE", false]], "tensorrt_llm::executor::kv_cache::cachestate::modelconfig::mnbkvheadsperlayer (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor8kv_cache10CacheState11ModelConfig18mNbKvHeadsPerLayerE", false]], "tensorrt_llm::executor::kv_cache::cachestate::modelconfig::msizeperhead (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor8kv_cache10CacheState11ModelConfig12mSizePerHeadE", false]], "tensorrt_llm::executor::kv_cache::cachestate::modelconfig::mtokensperblock (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor8kv_cache10CacheState11ModelConfig15mTokensPerBlockE", false]], "tensorrt_llm::executor::kv_cache::cachestate::modelconfig::operator== (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor8kv_cache10CacheState11ModelConfigeqERK11ModelConfig", false]], "tensorrt_llm::executor::kv_cache::cachestate::mparallelconfig (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor8kv_cache10CacheState15mParallelConfigE", false]], "tensorrt_llm::executor::kv_cache::cachestate::operator== (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor8kv_cache10CacheStateeqERKN8kv_cache10CacheStateE", false]], "tensorrt_llm::executor::kv_cache::cachestate::parallelconfig (c++ struct)": [[0, "_CPPv4N12tensorrt_llm8executor8kv_cache10CacheState14ParallelConfigE", false]], "tensorrt_llm::executor::kv_cache::cachestate::parallelconfig::mdprank (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor8kv_cache10CacheState14ParallelConfig7mDPrankE", false]], "tensorrt_llm::executor::kv_cache::cachestate::parallelconfig::mdpsize (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor8kv_cache10CacheState14ParallelConfig7mDPsizeE", false]], "tensorrt_llm::executor::kv_cache::cachestate::parallelconfig::menableattentiondp (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor8kv_cache10CacheState14ParallelConfig18mEnableAttentionDPE", false]], "tensorrt_llm::executor::kv_cache::cachestate::parallelconfig::mpipelineparallelism (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor8kv_cache10CacheState14ParallelConfig20mPipelineParallelismE", false]], "tensorrt_llm::executor::kv_cache::cachestate::parallelconfig::mtensorparallelism (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor8kv_cache10CacheState14ParallelConfig18mTensorParallelismE", false]], "tensorrt_llm::executor::kv_cache::cachestate::parallelconfig::operator== (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor8kv_cache10CacheState14ParallelConfigeqERK14ParallelConfig", false]], "tensorrt_llm::executor::kv_cache::cachestate::tostring (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor8kv_cache10CacheState8toStringEv", false]], "tensorrt_llm::executor::kv_cache::commstate (c++ class)": [[0, "_CPPv4N12tensorrt_llm8executor8kv_cache9CommStateE", false]], "tensorrt_llm::executor::kv_cache::commstate::commstate (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor8kv_cache9CommState9CommStateENSt6vectorI10AgentStateEEi", false], [0, "_CPPv4N12tensorrt_llm8executor8kv_cache9CommState9CommStateENSt6vectorI10SizeType32EEi", false], [0, "_CPPv4N12tensorrt_llm8executor8kv_cache9CommState9CommStateENSt6vectorI11SocketStateEEi", false], [0, 
"_CPPv4N12tensorrt_llm8executor8kv_cache9CommState9CommStateENSt8uint16_tENSt6stringE", false], [0, "_CPPv4N12tensorrt_llm8executor8kv_cache9CommState9CommStateEv", false]], "tensorrt_llm::executor::kv_cache::commstate::getagentstate (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor8kv_cache9CommState13getAgentStateEv", false]], "tensorrt_llm::executor::kv_cache::commstate::getmpistate (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor8kv_cache9CommState11getMpiStateEv", false]], "tensorrt_llm::executor::kv_cache::commstate::getselfidx (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor8kv_cache9CommState10getSelfIdxEv", false]], "tensorrt_llm::executor::kv_cache::commstate::getsocketstate (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor8kv_cache9CommState14getSocketStateEv", false]], "tensorrt_llm::executor::kv_cache::commstate::isagentstate (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor8kv_cache9CommState12isAgentStateEv", false]], "tensorrt_llm::executor::kv_cache::commstate::ismpistate (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor8kv_cache9CommState10isMpiStateEv", false]], "tensorrt_llm::executor::kv_cache::commstate::issocketstate (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor8kv_cache9CommState13isSocketStateEv", false]], "tensorrt_llm::executor::kv_cache::commstate::mselfidx (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor8kv_cache9CommState8mSelfIdxE", false]], "tensorrt_llm::executor::kv_cache::commstate::mstate (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor8kv_cache9CommState6mStateE", false]], "tensorrt_llm::executor::kv_cache::commstate::operator== (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor8kv_cache9CommStateeqERK9CommState", false]], "tensorrt_llm::executor::kv_cache::commstate::tostring (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor8kv_cache9CommState8toStringEv", false]], "tensorrt_llm::executor::kv_cache::connection (c++ class)": [[0, "_CPPv4N12tensorrt_llm8executor8kv_cache10ConnectionE", false]], "tensorrt_llm::executor::kv_cache::connection::isthreadsafe (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor8kv_cache10Connection12isThreadSafeEv", false]], "tensorrt_llm::executor::kv_cache::connection::recv (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor8kv_cache10Connection4recvERK11DataContextPv6size_t", false]], "tensorrt_llm::executor::kv_cache::connection::send (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor8kv_cache10Connection4sendERK11DataContextPKv6size_t", false]], "tensorrt_llm::executor::kv_cache::connection::~connection (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor8kv_cache10ConnectionD0Ev", false]], "tensorrt_llm::executor::kv_cache::connectioninfotype (c++ type)": [[0, "_CPPv4N12tensorrt_llm8executor8kv_cache18ConnectionInfoTypeE", false]], "tensorrt_llm::executor::kv_cache::connectionmanager (c++ class)": [[0, "_CPPv4N12tensorrt_llm8executor8kv_cache17ConnectionManagerE", false]], "tensorrt_llm::executor::kv_cache::connectionmanager::getcommstate (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor8kv_cache17ConnectionManager12getCommStateEv", false]], "tensorrt_llm::executor::kv_cache::connectionmanager::getconnections (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor8kv_cache17ConnectionManager14getConnectionsERK9CommState", false]], "tensorrt_llm::executor::kv_cache::connectionmanager::recvconnect (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor8kv_cache17ConnectionManager11recvConnectERK11DataContextPv6size_t", false]], 
"tensorrt_llm::executor::kv_cache::connectionmanager::~connectionmanager (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor8kv_cache17ConnectionManagerD0Ev", false]], "tensorrt_llm::executor::kv_cache::datacontext (c++ struct)": [[0, "_CPPv4N12tensorrt_llm8executor8kv_cache11DataContextE", false]], "tensorrt_llm::executor::kv_cache::datacontext::datacontext (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor8kv_cache11DataContext11DataContextEi", false]], "tensorrt_llm::executor::kv_cache::datacontext::gettag (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor8kv_cache11DataContext6getTagEv", false]], "tensorrt_llm::executor::kv_cache::datacontext::mtag (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor8kv_cache11DataContext4mTagE", false]], "tensorrt_llm::executor::kv_cache::dynlibloader (c++ class)": [[0, "_CPPv4N12tensorrt_llm8executor8kv_cache12DynLibLoaderE", false]], "tensorrt_llm::executor::kv_cache::dynlibloader::dlsym (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor8kv_cache12DynLibLoader5dlSymEPvPKc", false]], "tensorrt_llm::executor::kv_cache::dynlibloader::dynlibloader (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor8kv_cache12DynLibLoader12DynLibLoaderERK12DynLibLoader", false], [0, "_CPPv4N12tensorrt_llm8executor8kv_cache12DynLibLoader12DynLibLoaderEv", false]], "tensorrt_llm::executor::kv_cache::dynlibloader::getfunctionpointer (c++ function)": [[0, "_CPPv4I0EN12tensorrt_llm8executor8kv_cache12DynLibLoader18getFunctionPointerE9FunctionTRKNSt6stringERKNSt6stringE", false]], "tensorrt_llm::executor::kv_cache::dynlibloader::gethandle (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor8kv_cache12DynLibLoader9getHandleERKNSt6stringE", false]], "tensorrt_llm::executor::kv_cache::dynlibloader::getinstance (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor8kv_cache12DynLibLoader11getInstanceEv", false]], "tensorrt_llm::executor::kv_cache::dynlibloader::mdllmutex (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor8kv_cache12DynLibLoader9mDllMutexE", false]], "tensorrt_llm::executor::kv_cache::dynlibloader::mhandlers (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor8kv_cache12DynLibLoader9mHandlersE", false]], "tensorrt_llm::executor::kv_cache::dynlibloader::operator= (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor8kv_cache12DynLibLoaderaSERK12DynLibLoader", false]], "tensorrt_llm::executor::kv_cache::dynlibloader::~dynlibloader (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor8kv_cache12DynLibLoaderD0Ev", false]], "tensorrt_llm::executor::kv_cache::maketransferagent (c++ function)": [[0, "_CPPv4IDpEN12tensorrt_llm8executor8kv_cache17makeTransferAgentENSt10unique_ptrI17BaseTransferAgentEERKNSt6stringEDpRR4Args", false]], "tensorrt_llm::executor::kv_cache::memorydesc (c++ class)": [[0, "_CPPv4N12tensorrt_llm8executor8kv_cache10MemoryDescE", false]], "tensorrt_llm::executor::kv_cache::memorydesc::deserialize (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor8kv_cache10MemoryDesc11deserializeERNSt7istreamE", false]], "tensorrt_llm::executor::kv_cache::memorydesc::getaddr (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor8kv_cache10MemoryDesc7getAddrEv", false]], "tensorrt_llm::executor::kv_cache::memorydesc::getdeviceid (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor8kv_cache10MemoryDesc11getDeviceIdEv", false]], "tensorrt_llm::executor::kv_cache::memorydesc::getlen (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor8kv_cache10MemoryDesc6getLenEv", false]], "tensorrt_llm::executor::kv_cache::memorydesc::maddr (c++ member)": [[0, 
"_CPPv4N12tensorrt_llm8executor8kv_cache10MemoryDesc5mAddrE", false]], "tensorrt_llm::executor::kv_cache::memorydesc::mdeviceid (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor8kv_cache10MemoryDesc9mDeviceIdE", false]], "tensorrt_llm::executor::kv_cache::memorydesc::memorydesc (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor8kv_cache10MemoryDesc10MemoryDescE9uintptr_t6size_t8uint32_t", false], [0, "_CPPv4N12tensorrt_llm8executor8kv_cache10MemoryDesc10MemoryDescEPv6size_t8uint32_t", false], [0, "_CPPv4N12tensorrt_llm8executor8kv_cache10MemoryDesc10MemoryDescERKNSt6vectorIcEE8uint32_t", false]], "tensorrt_llm::executor::kv_cache::memorydesc::mlen (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor8kv_cache10MemoryDesc4mLenE", false]], "tensorrt_llm::executor::kv_cache::memorydesc::serialize (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor8kv_cache10MemoryDesc9serializeERK10MemoryDescRNSt7ostreamE", false]], "tensorrt_llm::executor::kv_cache::memorydesc::serializedsize (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor8kv_cache10MemoryDesc14serializedSizeERK10MemoryDesc", false]], "tensorrt_llm::executor::kv_cache::memorydescs (c++ class)": [[0, "_CPPv4N12tensorrt_llm8executor8kv_cache11MemoryDescsE", false]], "tensorrt_llm::executor::kv_cache::memorydescs::getdescs (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor8kv_cache11MemoryDescs8getDescsEv", false]], "tensorrt_llm::executor::kv_cache::memorydescs::gettype (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor8kv_cache11MemoryDescs7getTypeEv", false]], "tensorrt_llm::executor::kv_cache::memorydescs::mdescs (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor8kv_cache11MemoryDescs6mDescsE", false]], "tensorrt_llm::executor::kv_cache::memorydescs::memorydescs (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor8kv_cache11MemoryDescs11MemoryDescsE10MemoryTypeNSt6vectorI10MemoryDescEE", false]], "tensorrt_llm::executor::kv_cache::memorydescs::mtype (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor8kv_cache11MemoryDescs5mTypeE", false]], "tensorrt_llm::executor::kv_cache::memorytype (c++ enum)": [[0, "_CPPv4N12tensorrt_llm8executor8kv_cache10MemoryTypeE", false]], "tensorrt_llm::executor::kv_cache::memorytype::kblk (c++ enumerator)": [[0, "_CPPv4N12tensorrt_llm8executor8kv_cache10MemoryType4kBLKE", false]], "tensorrt_llm::executor::kv_cache::memorytype::kdram (c++ enumerator)": [[0, "_CPPv4N12tensorrt_llm8executor8kv_cache10MemoryType5kDRAME", false]], "tensorrt_llm::executor::kv_cache::memorytype::kfile (c++ enumerator)": [[0, "_CPPv4N12tensorrt_llm8executor8kv_cache10MemoryType5kFILEE", false]], "tensorrt_llm::executor::kv_cache::memorytype::kobj (c++ enumerator)": [[0, "_CPPv4N12tensorrt_llm8executor8kv_cache10MemoryType4kOBJE", false]], "tensorrt_llm::executor::kv_cache::memorytype::kvram (c++ enumerator)": [[0, "_CPPv4N12tensorrt_llm8executor8kv_cache10MemoryType5kVRAME", false]], "tensorrt_llm::executor::kv_cache::mpistate (c++ struct)": [[0, "_CPPv4N12tensorrt_llm8executor8kv_cache8MpiStateE", false]], "tensorrt_llm::executor::kv_cache::mpistate::mranks (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor8kv_cache8MpiState6mRanksE", false]], "tensorrt_llm::executor::kv_cache::mpistate::operator== (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor8kv_cache8MpiStateeqERK8MpiState", false]], "tensorrt_llm::executor::kv_cache::mpistate::tostring (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor8kv_cache8MpiState8toStringEv", false]], "tensorrt_llm::executor::kv_cache::registerdescs (c++ type)": [[0, 
"_CPPv4N12tensorrt_llm8executor8kv_cache13RegisterDescsE", false]], "tensorrt_llm::executor::kv_cache::socketstate (c++ struct)": [[0, "_CPPv4N12tensorrt_llm8executor8kv_cache11SocketStateE", false]], "tensorrt_llm::executor::kv_cache::socketstate::mip (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor8kv_cache11SocketState3mIpE", false]], "tensorrt_llm::executor::kv_cache::socketstate::mport (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor8kv_cache11SocketState5mPortE", false]], "tensorrt_llm::executor::kv_cache::socketstate::operator== (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor8kv_cache11SocketStateeqERK11SocketState", false]], "tensorrt_llm::executor::kv_cache::socketstate::tostring (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor8kv_cache11SocketState8toStringEv", false]], "tensorrt_llm::executor::kv_cache::syncmessage (c++ type)": [[0, "_CPPv4N12tensorrt_llm8executor8kv_cache11SyncMessageE", false]], "tensorrt_llm::executor::kv_cache::transferdescs (c++ type)": [[0, "_CPPv4N12tensorrt_llm8executor8kv_cache13TransferDescsE", false]], "tensorrt_llm::executor::kv_cache::transferop (c++ enum)": [[0, "_CPPv4N12tensorrt_llm8executor8kv_cache10TransferOpE", false]], "tensorrt_llm::executor::kv_cache::transferop::kread (c++ enumerator)": [[0, "_CPPv4N12tensorrt_llm8executor8kv_cache10TransferOp5kREADE", false]], "tensorrt_llm::executor::kv_cache::transferop::kwrite (c++ enumerator)": [[0, "_CPPv4N12tensorrt_llm8executor8kv_cache10TransferOp6kWRITEE", false]], "tensorrt_llm::executor::kv_cache::transferrequest (c++ class)": [[0, "_CPPv4N12tensorrt_llm8executor8kv_cache15TransferRequestE", false]], "tensorrt_llm::executor::kv_cache::transferrequest::getdstdescs (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor8kv_cache15TransferRequest11getDstDescsEv", false]], "tensorrt_llm::executor::kv_cache::transferrequest::getop (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor8kv_cache15TransferRequest5getOpEv", false]], "tensorrt_llm::executor::kv_cache::transferrequest::getremotename (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor8kv_cache15TransferRequest13getRemoteNameEv", false]], "tensorrt_llm::executor::kv_cache::transferrequest::getsrcdescs (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor8kv_cache15TransferRequest11getSrcDescsEv", false]], "tensorrt_llm::executor::kv_cache::transferrequest::getsyncmessage (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor8kv_cache15TransferRequest14getSyncMessageEv", false]], "tensorrt_llm::executor::kv_cache::transferrequest::mdstdescs (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor8kv_cache15TransferRequest9mDstDescsE", false]], "tensorrt_llm::executor::kv_cache::transferrequest::mop (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor8kv_cache15TransferRequest3mOpE", false]], "tensorrt_llm::executor::kv_cache::transferrequest::mremotename (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor8kv_cache15TransferRequest11mRemoteNameE", false]], "tensorrt_llm::executor::kv_cache::transferrequest::msrcdescs (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor8kv_cache15TransferRequest9mSrcDescsE", false]], "tensorrt_llm::executor::kv_cache::transferrequest::msyncmessage (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor8kv_cache15TransferRequest12mSyncMessageE", false]], "tensorrt_llm::executor::kv_cache::transferrequest::transferrequest (c++ function)": [[0, 
"_CPPv4N12tensorrt_llm8executor8kv_cache15TransferRequest15TransferRequestE10TransferOp13TransferDescs13TransferDescsRKNSt6stringENSt8optionalI11SyncMessageEE", false]], "tensorrt_llm::executor::kv_cache::transferstatus (c++ class)": [[0, "_CPPv4N12tensorrt_llm8executor8kv_cache14TransferStatusE", false]], "tensorrt_llm::executor::kv_cache::transferstatus::iscompleted (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor8kv_cache14TransferStatus11isCompletedEv", false]], "tensorrt_llm::executor::kv_cache::transferstatus::wait (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor8kv_cache14TransferStatus4waitEv", false]], "tensorrt_llm::executor::kv_cache::transferstatus::~transferstatus (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor8kv_cache14TransferStatusD0Ev", false]], "tensorrt_llm::executor::kvcacheconfig (c++ class)": [[0, "_CPPv4N12tensorrt_llm8executor13KvCacheConfigE", false]], "tensorrt_llm::executor::kvcacheconfig::fillemptyfieldsfromruntimedefaults (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor13KvCacheConfig34fillEmptyFieldsFromRuntimeDefaultsERKN12tensorrt_llm7runtime15RuntimeDefaultsE", false]], "tensorrt_llm::executor::kvcacheconfig::getcopyonpartialreuse (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor13KvCacheConfig21getCopyOnPartialReuseEv", false]], "tensorrt_llm::executor::kvcacheconfig::getcrosskvcachefraction (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor13KvCacheConfig23getCrossKvCacheFractionEv", false]], "tensorrt_llm::executor::kvcacheconfig::getenableblockreuse (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor13KvCacheConfig19getEnableBlockReuseEv", false]], "tensorrt_llm::executor::kvcacheconfig::getenablepartialreuse (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor13KvCacheConfig21getEnablePartialReuseEv", false]], "tensorrt_llm::executor::kvcacheconfig::geteventbuffermaxsize (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor13KvCacheConfig21getEventBufferMaxSizeEv", false]], "tensorrt_llm::executor::kvcacheconfig::getfreegpumemoryfraction (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor13KvCacheConfig24getFreeGpuMemoryFractionEv", false]], "tensorrt_llm::executor::kvcacheconfig::gethostcachesize (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor13KvCacheConfig16getHostCacheSizeEv", false]], "tensorrt_llm::executor::kvcacheconfig::getmaxattentionwindowvec (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor13KvCacheConfig24getMaxAttentionWindowVecEv", false]], "tensorrt_llm::executor::kvcacheconfig::getmaxtokens (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor13KvCacheConfig12getMaxTokensEv", false]], "tensorrt_llm::executor::kvcacheconfig::getonboardblocks (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor13KvCacheConfig16getOnboardBlocksEv", false]], "tensorrt_llm::executor::kvcacheconfig::getsecondaryoffloadminpriority (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor13KvCacheConfig30getSecondaryOffloadMinPriorityEv", false]], "tensorrt_llm::executor::kvcacheconfig::getsinktokenlength (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor13KvCacheConfig18getSinkTokenLengthEv", false]], "tensorrt_llm::executor::kvcacheconfig::getuseuvm (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor13KvCacheConfig9getUseUvmEv", false]], "tensorrt_llm::executor::kvcacheconfig::kdefaultgpumemfraction (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor13KvCacheConfig22kDefaultGpuMemFractionE", false]], "tensorrt_llm::executor::kvcacheconfig::kvcacheconfig (c++ function)": [[0, 
"_CPPv4N12tensorrt_llm8executor13KvCacheConfig13KvCacheConfigEbRKNSt8optionalI10SizeType32EERKNSt8optionalINSt6vectorI10SizeType32EEEERKNSt8optionalI10SizeType32EERKNSt8optionalI9FloatTypeEERKNSt8optionalI6size_tEEbRKNSt8optionalI9FloatTypeEENSt8optionalI17RetentionPriorityEE6size_tbbbRKNSt8optionalIN12tensorrt_llm7runtime15RuntimeDefaultsEEE", false]], "tensorrt_llm::executor::kvcacheconfig::mcopyonpartialreuse (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor13KvCacheConfig19mCopyOnPartialReuseE", false]], "tensorrt_llm::executor::kvcacheconfig::mcrosskvcachefraction (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor13KvCacheConfig21mCrossKvCacheFractionE", false]], "tensorrt_llm::executor::kvcacheconfig::menableblockreuse (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor13KvCacheConfig17mEnableBlockReuseE", false]], "tensorrt_llm::executor::kvcacheconfig::menablepartialreuse (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor13KvCacheConfig19mEnablePartialReuseE", false]], "tensorrt_llm::executor::kvcacheconfig::meventbuffermaxsize (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor13KvCacheConfig19mEventBufferMaxSizeE", false]], "tensorrt_llm::executor::kvcacheconfig::mfreegpumemoryfraction (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor13KvCacheConfig22mFreeGpuMemoryFractionE", false]], "tensorrt_llm::executor::kvcacheconfig::mhostcachesize (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor13KvCacheConfig14mHostCacheSizeE", false]], "tensorrt_llm::executor::kvcacheconfig::mmaxattentionwindowvec (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor13KvCacheConfig22mMaxAttentionWindowVecE", false]], "tensorrt_llm::executor::kvcacheconfig::mmaxtokens (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor13KvCacheConfig10mMaxTokensE", false]], "tensorrt_llm::executor::kvcacheconfig::monboardblocks (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor13KvCacheConfig14mOnboardBlocksE", false]], "tensorrt_llm::executor::kvcacheconfig::msecondaryoffloadminpriority (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor13KvCacheConfig28mSecondaryOffloadMinPriorityE", false]], "tensorrt_llm::executor::kvcacheconfig::msinktokenlength (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor13KvCacheConfig16mSinkTokenLengthE", false]], "tensorrt_llm::executor::kvcacheconfig::museuvm (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor13KvCacheConfig7mUseUvmE", false]], "tensorrt_llm::executor::kvcacheconfig::setcopyonpartialreuse (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor13KvCacheConfig21setCopyOnPartialReuseEb", false]], "tensorrt_llm::executor::kvcacheconfig::setcrosskvcachefraction (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor13KvCacheConfig23setCrossKvCacheFractionE9FloatType", false]], "tensorrt_llm::executor::kvcacheconfig::setenableblockreuse (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor13KvCacheConfig19setEnableBlockReuseEb", false]], "tensorrt_llm::executor::kvcacheconfig::setenablepartialreuse (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor13KvCacheConfig21setEnablePartialReuseEb", false]], "tensorrt_llm::executor::kvcacheconfig::seteventbuffermaxsize (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor13KvCacheConfig21setEventBufferMaxSizeE6size_t", false]], "tensorrt_llm::executor::kvcacheconfig::setfreegpumemoryfraction (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor13KvCacheConfig24setFreeGpuMemoryFractionE9FloatType", false]], "tensorrt_llm::executor::kvcacheconfig::sethostcachesize (c++ function)": [[0, 
"_CPPv4N12tensorrt_llm8executor13KvCacheConfig16setHostCacheSizeE6size_t", false]], "tensorrt_llm::executor::kvcacheconfig::setmaxattentionwindowvec (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor13KvCacheConfig24setMaxAttentionWindowVecENSt6vectorI10SizeType32EE", false]], "tensorrt_llm::executor::kvcacheconfig::setmaxtokens (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor13KvCacheConfig12setMaxTokensE10SizeType32", false]], "tensorrt_llm::executor::kvcacheconfig::setonboardblocks (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor13KvCacheConfig16setOnboardBlocksEb", false]], "tensorrt_llm::executor::kvcacheconfig::setsecondaryoffloadminpriority (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor13KvCacheConfig30setSecondaryOffloadMinPriorityENSt8optionalI17RetentionPriorityEE", false]], "tensorrt_llm::executor::kvcacheconfig::setsinktokenlength (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor13KvCacheConfig18setSinkTokenLengthE10SizeType32", false]], "tensorrt_llm::executor::kvcacheconfig::setuseuvm (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor13KvCacheConfig9setUseUvmEb", false]], "tensorrt_llm::executor::kvcachecreateddata (c++ struct)": [[0, "_CPPv4N12tensorrt_llm8executor18KVCacheCreatedDataE", false]], "tensorrt_llm::executor::kvcachecreateddata::numblockspercachelevel (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor18KVCacheCreatedData22numBlocksPerCacheLevelE", false]], "tensorrt_llm::executor::kvcacheevent (c++ struct)": [[0, "_CPPv4N12tensorrt_llm8executor12KVCacheEventE", false]], "tensorrt_llm::executor::kvcacheevent::data (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor12KVCacheEvent4dataE", false]], "tensorrt_llm::executor::kvcacheevent::eventid (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor12KVCacheEvent7eventIdE", false]], "tensorrt_llm::executor::kvcacheevent::kvcacheevent (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor12KVCacheEvent12KVCacheEventE6IdType16KVCacheEventData", false]], "tensorrt_llm::executor::kvcacheeventdata (c++ type)": [[0, "_CPPv4N12tensorrt_llm8executor16KVCacheEventDataE", false]], "tensorrt_llm::executor::kvcacheeventdiff (c++ struct)": [[0, "_CPPv4I0EN12tensorrt_llm8executor16KVCacheEventDiffE", false]], "tensorrt_llm::executor::kvcacheeventdiff::newvalue (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor16KVCacheEventDiff8newValueE", false]], "tensorrt_llm::executor::kvcacheeventdiff::oldvalue (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor16KVCacheEventDiff8oldValueE", false]], "tensorrt_llm::executor::kvcacheeventmanager (c++ class)": [[0, "_CPPv4N12tensorrt_llm8executor19KVCacheEventManagerE", false]], "tensorrt_llm::executor::kvcacheeventmanager::getlatestevents (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor19KVCacheEventManager15getLatestEventsENSt8optionalINSt6chrono12millisecondsEEE", false]], "tensorrt_llm::executor::kvcacheeventmanager::kvcacheeventmanager (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor19KVCacheEventManager19KVCacheEventManagerENSt10shared_ptrIN12tensorrt_llm13batch_manager16kv_cache_manager18BaseKVCacheManagerEEE", false]], "tensorrt_llm::executor::kvcacheeventmanager::kvcachemanager (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor19KVCacheEventManager14kvCacheManagerE", false]], "tensorrt_llm::executor::kvcacheremoveddata (c++ struct)": [[0, "_CPPv4N12tensorrt_llm8executor18KVCacheRemovedDataE", false]], "tensorrt_llm::executor::kvcacheremoveddata::blockhashes (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor18KVCacheRemovedData11blockHashesE", 
false]], "tensorrt_llm::executor::kvcacheretentionconfig (c++ class)": [[0, "_CPPv4N12tensorrt_llm8executor22KvCacheRetentionConfigE", false]], "tensorrt_llm::executor::kvcacheretentionconfig::getdecodedurationms (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor22KvCacheRetentionConfig19getDecodeDurationMsEv", false]], "tensorrt_llm::executor::kvcacheretentionconfig::getdecoderetentionpriority (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor22KvCacheRetentionConfig26getDecodeRetentionPriorityEv", false]], "tensorrt_llm::executor::kvcacheretentionconfig::getdirectory (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor22KvCacheRetentionConfig12getDirectoryEv", false]], "tensorrt_llm::executor::kvcacheretentionconfig::getperblockretentionpriorityduration (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor22KvCacheRetentionConfig36getPerBlockRetentionPriorityDurationE10SizeType3210SizeType32", false]], "tensorrt_llm::executor::kvcacheretentionconfig::gettokenrangeretentionconfigs (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor22KvCacheRetentionConfig29getTokenRangeRetentionConfigsEv", false]], "tensorrt_llm::executor::kvcacheretentionconfig::gettransfermode (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor22KvCacheRetentionConfig15getTransferModeEv", false]], "tensorrt_llm::executor::kvcacheretentionconfig::kdefaultretentionpriority (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor22KvCacheRetentionConfig25kDefaultRetentionPriorityE", false]], "tensorrt_llm::executor::kvcacheretentionconfig::kmaxretentionpriority (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor22KvCacheRetentionConfig21kMaxRetentionPriorityE", false]], "tensorrt_llm::executor::kvcacheretentionconfig::kminretentionpriority (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor22KvCacheRetentionConfig21kMinRetentionPriorityE", false]], "tensorrt_llm::executor::kvcacheretentionconfig::kvcacheretentionconfig (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor22KvCacheRetentionConfig22KvCacheRetentionConfigERKNSt6vectorI25TokenRangeRetentionConfigEE17RetentionPriorityNSt8optionalINSt6chrono12millisecondsEEE19KvCacheTransferModeNSt8optionalINSt6stringEEE", false], [0, "_CPPv4N12tensorrt_llm8executor22KvCacheRetentionConfig22KvCacheRetentionConfigEv", false]], "tensorrt_llm::executor::kvcacheretentionconfig::mdecodedurationms (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor22KvCacheRetentionConfig17mDecodeDurationMsE", false]], "tensorrt_llm::executor::kvcacheretentionconfig::mdecoderetentionpriority (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor22KvCacheRetentionConfig24mDecodeRetentionPriorityE", false]], "tensorrt_llm::executor::kvcacheretentionconfig::mdirectory (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor22KvCacheRetentionConfig10mDirectoryE", false]], "tensorrt_llm::executor::kvcacheretentionconfig::mtokenrangeretentionconfigs (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor22KvCacheRetentionConfig27mTokenRangeRetentionConfigsE", false]], "tensorrt_llm::executor::kvcacheretentionconfig::mtransfermode (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor22KvCacheRetentionConfig13mTransferModeE", false]], "tensorrt_llm::executor::kvcacheretentionconfig::operator== (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor22KvCacheRetentionConfigeqERK22KvCacheRetentionConfig", false]], "tensorrt_llm::executor::kvcacheretentionconfig::tokenrangeretentionconfig (c++ struct)": [[0, "_CPPv4N12tensorrt_llm8executor22KvCacheRetentionConfig25TokenRangeRetentionConfigE", false]], 
"tensorrt_llm::executor::kvcacheretentionconfig::tokenrangeretentionconfig::durationms (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor22KvCacheRetentionConfig25TokenRangeRetentionConfig10durationMsE", false]], "tensorrt_llm::executor::kvcacheretentionconfig::tokenrangeretentionconfig::operator== (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor22KvCacheRetentionConfig25TokenRangeRetentionConfigeqERK25TokenRangeRetentionConfig", false]], "tensorrt_llm::executor::kvcacheretentionconfig::tokenrangeretentionconfig::priority (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor22KvCacheRetentionConfig25TokenRangeRetentionConfig8priorityE", false]], "tensorrt_llm::executor::kvcacheretentionconfig::tokenrangeretentionconfig::tokenend (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor22KvCacheRetentionConfig25TokenRangeRetentionConfig8tokenEndE", false]], "tensorrt_llm::executor::kvcacheretentionconfig::tokenrangeretentionconfig::tokenrangeretentionconfig (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor22KvCacheRetentionConfig25TokenRangeRetentionConfig25TokenRangeRetentionConfigE10SizeType32NSt8optionalI10SizeType32EE17RetentionPriorityNSt8optionalINSt6chrono12millisecondsEEE", false]], "tensorrt_llm::executor::kvcacheretentionconfig::tokenrangeretentionconfig::tokenstart (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor22KvCacheRetentionConfig25TokenRangeRetentionConfig10tokenStartE", false]], "tensorrt_llm::executor::kvcachestats (c++ struct)": [[0, "_CPPv4N12tensorrt_llm8executor12KvCacheStatsE", false]], "tensorrt_llm::executor::kvcachestats::allocnewblocks (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor12KvCacheStats14allocNewBlocksE", false]], "tensorrt_llm::executor::kvcachestats::alloctotalblocks (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor12KvCacheStats16allocTotalBlocksE", false]], "tensorrt_llm::executor::kvcachestats::cachehitrate (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor12KvCacheStats12cacheHitRateE", false]], "tensorrt_llm::executor::kvcachestats::freenumblocks (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor12KvCacheStats13freeNumBlocksE", false]], "tensorrt_llm::executor::kvcachestats::maxnumblocks (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor12KvCacheStats12maxNumBlocksE", false]], "tensorrt_llm::executor::kvcachestats::missedblocks (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor12KvCacheStats12missedBlocksE", false]], "tensorrt_llm::executor::kvcachestats::reusedblocks (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor12KvCacheStats12reusedBlocksE", false]], "tensorrt_llm::executor::kvcachestats::tokensperblock (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor12KvCacheStats14tokensPerBlockE", false]], "tensorrt_llm::executor::kvcachestats::usednumblocks (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor12KvCacheStats13usedNumBlocksE", false]], "tensorrt_llm::executor::kvcachestoredblockdata (c++ struct)": [[0, "_CPPv4N12tensorrt_llm8executor22KVCacheStoredBlockDataE", false]], "tensorrt_llm::executor::kvcachestoredblockdata::blockhash (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor22KVCacheStoredBlockData9blockHashE", false]], "tensorrt_llm::executor::kvcachestoredblockdata::cachelevel (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor22KVCacheStoredBlockData10cacheLevelE", false]], "tensorrt_llm::executor::kvcachestoredblockdata::kvcachestoredblockdata (c++ function)": [[0, 
"_CPPv4N12tensorrt_llm8executor22KVCacheStoredBlockData22KVCacheStoredBlockDataE6IdTypeN12tensorrt_llm7runtime15VecUniqueTokensENSt8optionalIN12tensorrt_llm7runtime14LoraTaskIdTypeEEE10SizeType3210SizeType32", false]], "tensorrt_llm::executor::kvcachestoredblockdata::loraid (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor22KVCacheStoredBlockData6loraIdE", false]], "tensorrt_llm::executor::kvcachestoredblockdata::priority (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor22KVCacheStoredBlockData8priorityE", false]], "tensorrt_llm::executor::kvcachestoredblockdata::tokens (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor22KVCacheStoredBlockData6tokensE", false]], "tensorrt_llm::executor::kvcachestoreddata (c++ struct)": [[0, "_CPPv4N12tensorrt_llm8executor17KVCacheStoredDataE", false]], "tensorrt_llm::executor::kvcachestoreddata::blocks (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor17KVCacheStoredData6blocksE", false]], "tensorrt_llm::executor::kvcachestoreddata::parenthash (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor17KVCacheStoredData10parentHashE", false]], "tensorrt_llm::executor::kvcachetransfermode (c++ enum)": [[0, "_CPPv4N12tensorrt_llm8executor19KvCacheTransferModeE", false]], "tensorrt_llm::executor::kvcachetransfermode::dram (c++ enumerator)": [[0, "_CPPv4N12tensorrt_llm8executor19KvCacheTransferMode4DRAME", false]], "tensorrt_llm::executor::kvcachetransfermode::gds (c++ enumerator)": [[0, "_CPPv4N12tensorrt_llm8executor19KvCacheTransferMode3GDSE", false]], "tensorrt_llm::executor::kvcachetransfermode::posix_debug_fallback (c++ enumerator)": [[0, "_CPPv4N12tensorrt_llm8executor19KvCacheTransferMode20POSIX_DEBUG_FALLBACKE", false]], "tensorrt_llm::executor::kvcacheupdateddata (c++ struct)": [[0, "_CPPv4N12tensorrt_llm8executor18KVCacheUpdatedDataE", false]], "tensorrt_llm::executor::kvcacheupdateddata::blockhash (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor18KVCacheUpdatedData9blockHashE", false]], "tensorrt_llm::executor::kvcacheupdateddata::cachelevel (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor18KVCacheUpdatedData10cacheLevelE", false]], "tensorrt_llm::executor::kvcacheupdateddata::cachelevelupdated (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor18KVCacheUpdatedData17cacheLevelUpdatedE10SizeType3210SizeType32", false]], "tensorrt_llm::executor::kvcacheupdateddata::kvcacheupdateddata (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor18KVCacheUpdatedData18KVCacheUpdatedDataE6IdType", false]], "tensorrt_llm::executor::kvcacheupdateddata::priority (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor18KVCacheUpdatedData8priorityE", false]], "tensorrt_llm::executor::kvcacheupdateddata::priorityupdated (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor18KVCacheUpdatedData15priorityUpdatedE10SizeType3210SizeType32", false]], "tensorrt_llm::executor::logitspostprocessor (c++ type)": [[0, "_CPPv4N12tensorrt_llm8executor19LogitsPostProcessorE", false]], "tensorrt_llm::executor::logitspostprocessorbatched (c++ type)": [[0, "_CPPv4N12tensorrt_llm8executor26LogitsPostProcessorBatchedE", false]], "tensorrt_llm::executor::logitspostprocessorconfig (c++ class)": [[0, "_CPPv4N12tensorrt_llm8executor25LogitsPostProcessorConfigE", false]], "tensorrt_llm::executor::logitspostprocessorconfig::getprocessorbatched (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor25LogitsPostProcessorConfig19getProcessorBatchedEv", false]], "tensorrt_llm::executor::logitspostprocessorconfig::getprocessormap (c++ function)": [[0, 
"_CPPv4NK12tensorrt_llm8executor25LogitsPostProcessorConfig15getProcessorMapEv", false]], "tensorrt_llm::executor::logitspostprocessorconfig::getreplicate (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor25LogitsPostProcessorConfig12getReplicateEv", false]], "tensorrt_llm::executor::logitspostprocessorconfig::logitspostprocessorconfig (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor25LogitsPostProcessorConfig25LogitsPostProcessorConfigENSt8optionalI22LogitsPostProcessorMapEENSt8optionalI26LogitsPostProcessorBatchedEEb", false]], "tensorrt_llm::executor::logitspostprocessorconfig::mprocessorbatched (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor25LogitsPostProcessorConfig17mProcessorBatchedE", false]], "tensorrt_llm::executor::logitspostprocessorconfig::mprocessormap (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor25LogitsPostProcessorConfig13mProcessorMapE", false]], "tensorrt_llm::executor::logitspostprocessorconfig::mreplicate (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor25LogitsPostProcessorConfig10mReplicateE", false]], "tensorrt_llm::executor::logitspostprocessorconfig::setprocessorbatched (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor25LogitsPostProcessorConfig19setProcessorBatchedERK26LogitsPostProcessorBatched", false]], "tensorrt_llm::executor::logitspostprocessorconfig::setprocessormap (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor25LogitsPostProcessorConfig15setProcessorMapERK22LogitsPostProcessorMap", false]], "tensorrt_llm::executor::logitspostprocessorconfig::setreplicate (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor25LogitsPostProcessorConfig12setReplicateEb", false]], "tensorrt_llm::executor::logitspostprocessormap (c++ type)": [[0, "_CPPv4N12tensorrt_llm8executor22LogitsPostProcessorMapE", false]], "tensorrt_llm::executor::lookaheaddecodingconfig (c++ struct)": [[0, "_CPPv4N12tensorrt_llm8executor23LookaheadDecodingConfigE", false]], "tensorrt_llm::executor::lookaheaddecodingconfig::calculatespeculativeresource (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor23LookaheadDecodingConfig28calculateSpeculativeResourceEv", false]], "tensorrt_llm::executor::lookaheaddecodingconfig::calculatespeculativeresourcetuple (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor23LookaheadDecodingConfig33calculateSpeculativeResourceTupleE10SizeType3210SizeType3210SizeType32", false]], "tensorrt_llm::executor::lookaheaddecodingconfig::get (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor23LookaheadDecodingConfig3getEv", false]], "tensorrt_llm::executor::lookaheaddecodingconfig::getngramsize (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor23LookaheadDecodingConfig12getNgramSizeEv", false]], "tensorrt_llm::executor::lookaheaddecodingconfig::getverificationsetsize (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor23LookaheadDecodingConfig22getVerificationSetSizeEv", false]], "tensorrt_llm::executor::lookaheaddecodingconfig::getwindowsize (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor23LookaheadDecodingConfig13getWindowSizeEv", false]], "tensorrt_llm::executor::lookaheaddecodingconfig::isle (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor23LookaheadDecodingConfig4isLEERK23LookaheadDecodingConfig", false]], "tensorrt_llm::executor::lookaheaddecodingconfig::islegal (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor23LookaheadDecodingConfig7isLegalE10SizeType3210SizeType3210SizeType32", false]], "tensorrt_llm::executor::lookaheaddecodingconfig::kdefaultlookaheaddecodingngram (c++ member)": [[0, 
"_CPPv4N12tensorrt_llm8executor23LookaheadDecodingConfig30kDefaultLookaheadDecodingNgramE", false]], "tensorrt_llm::executor::lookaheaddecodingconfig::kdefaultlookaheaddecodingverificationset (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor23LookaheadDecodingConfig40kDefaultLookaheadDecodingVerificationSetE", false]], "tensorrt_llm::executor::lookaheaddecodingconfig::kdefaultlookaheaddecodingwindow (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor23LookaheadDecodingConfig31kDefaultLookaheadDecodingWindowE", false]], "tensorrt_llm::executor::lookaheaddecodingconfig::lookaheaddecodingconfig (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor23LookaheadDecodingConfig23LookaheadDecodingConfigE10SizeType3210SizeType3210SizeType32", false], [0, "_CPPv4N12tensorrt_llm8executor23LookaheadDecodingConfig23LookaheadDecodingConfigEv", false]], "tensorrt_llm::executor::lookaheaddecodingconfig::mngramsize (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor23LookaheadDecodingConfig10mNgramSizeE", false]], "tensorrt_llm::executor::lookaheaddecodingconfig::mverificationsetsize (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor23LookaheadDecodingConfig20mVerificationSetSizeE", false]], "tensorrt_llm::executor::lookaheaddecodingconfig::mwindowsize (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor23LookaheadDecodingConfig11mWindowSizeE", false]], "tensorrt_llm::executor::lookaheaddecodingconfig::operator== (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor23LookaheadDecodingConfigeqERK23LookaheadDecodingConfig", false]], "tensorrt_llm::executor::loraconfig (c++ class)": [[0, "_CPPv4N12tensorrt_llm8executor10LoraConfigE", false]], "tensorrt_llm::executor::loraconfig::getconfig (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor10LoraConfig9getConfigEv", false]], "tensorrt_llm::executor::loraconfig::gettaskid (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor10LoraConfig9getTaskIdEv", false]], "tensorrt_llm::executor::loraconfig::getweights (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor10LoraConfig10getWeightsEv", false]], "tensorrt_llm::executor::loraconfig::loraconfig (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor10LoraConfig10LoraConfigE6IdTypeNSt8optionalI6TensorEENSt8optionalI6TensorEE", false]], "tensorrt_llm::executor::loraconfig::mconfig (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor10LoraConfig7mConfigE", false]], "tensorrt_llm::executor::loraconfig::mtaskid (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor10LoraConfig7mTaskIdE", false]], "tensorrt_llm::executor::loraconfig::mweights (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor10LoraConfig8mWeightsE", false]], "tensorrt_llm::executor::medusachoices (c++ type)": [[0, "_CPPv4N12tensorrt_llm8executor13MedusaChoicesE", false]], "tensorrt_llm::executor::memorytype (c++ enum)": [[0, "_CPPv4N12tensorrt_llm8executor10MemoryTypeE", false]], "tensorrt_llm::executor::memorytype::kcpu (c++ enumerator)": [[0, "_CPPv4N12tensorrt_llm8executor10MemoryType4kCPUE", false]], "tensorrt_llm::executor::memorytype::kcpu_pinned (c++ enumerator)": [[0, "_CPPv4N12tensorrt_llm8executor10MemoryType11kCPU_PINNEDE", false]], "tensorrt_llm::executor::memorytype::kcpu_pinnedpool (c++ enumerator)": [[0, "_CPPv4N12tensorrt_llm8executor10MemoryType15kCPU_PINNEDPOOLE", false]], "tensorrt_llm::executor::memorytype::kgpu (c++ enumerator)": [[0, "_CPPv4N12tensorrt_llm8executor10MemoryType4kGPUE", false]], "tensorrt_llm::executor::memorytype::kunknown (c++ enumerator)": [[0, "_CPPv4N12tensorrt_llm8executor10MemoryType8kUNKNOWNE", false]], 
"tensorrt_llm::executor::memorytype::kuvm (c++ enumerator)": [[0, "_CPPv4N12tensorrt_llm8executor10MemoryType4kUVME", false]], "tensorrt_llm::executor::millisecondstype (c++ type)": [[0, "_CPPv4N12tensorrt_llm8executor16MillisecondsTypeE", false]], "tensorrt_llm::executor::modeltype (c++ enum)": [[0, "_CPPv4N12tensorrt_llm8executor9ModelTypeE", false]], "tensorrt_llm::executor::modeltype::kdecoder_only (c++ enumerator)": [[0, "_CPPv4N12tensorrt_llm8executor9ModelType13kDECODER_ONLYE", false]], "tensorrt_llm::executor::modeltype::kencoder_decoder (c++ enumerator)": [[0, "_CPPv4N12tensorrt_llm8executor9ModelType16kENCODER_DECODERE", false]], "tensorrt_llm::executor::modeltype::kencoder_only (c++ enumerator)": [[0, "_CPPv4N12tensorrt_llm8executor9ModelType13kENCODER_ONLYE", false]], "tensorrt_llm::executor::mropeconfig (c++ class)": [[0, "_CPPv4N12tensorrt_llm8executor11MropeConfigE", false]], "tensorrt_llm::executor::mropeconfig::getmropepositiondeltas (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor11MropeConfig22getMRopePositionDeltasEv", false]], "tensorrt_llm::executor::mropeconfig::getmroperotarycossin (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor11MropeConfig20getMRopeRotaryCosSinEv", false]], "tensorrt_llm::executor::mropeconfig::mmropepositiondeltas (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor11MropeConfig20mMRopePositionDeltasE", false]], "tensorrt_llm::executor::mropeconfig::mmroperotarycossin (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor11MropeConfig18mMRopeRotaryCosSinE", false]], "tensorrt_llm::executor::mropeconfig::mropeconfig (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor11MropeConfig11MropeConfigE6Tensor10SizeType32", false]], "tensorrt_llm::executor::multimodalinput (c++ class)": [[0, "_CPPv4N12tensorrt_llm8executor15MultimodalInputE", false]], "tensorrt_llm::executor::multimodalinput::getmultimodalhashes (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor15MultimodalInput19getMultimodalHashesEv", false]], "tensorrt_llm::executor::multimodalinput::getmultimodallengths (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor15MultimodalInput20getMultimodalLengthsEv", false]], "tensorrt_llm::executor::multimodalinput::getmultimodalpositions (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor15MultimodalInput22getMultimodalPositionsEv", false]], "tensorrt_llm::executor::multimodalinput::mmultimodalhashes (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor15MultimodalInput17mMultimodalHashesE", false]], "tensorrt_llm::executor::multimodalinput::mmultimodallengths (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor15MultimodalInput18mMultimodalLengthsE", false]], "tensorrt_llm::executor::multimodalinput::mmultimodalpositions (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor15MultimodalInput20mMultimodalPositionsE", false]], "tensorrt_llm::executor::multimodalinput::multimodalinput (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor15MultimodalInput15MultimodalInputENSt6vectorINSt6vectorI10SizeType32EEEENSt6vectorI10SizeType32EENSt6vectorI10SizeType32EE", false]], "tensorrt_llm::executor::operator<< (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executorlsERNSt7ostreamE21ContextChunkingPolicy", false], [0, "_CPPv4N12tensorrt_llm8executorlsERNSt7ostreamE23CapacitySchedulerPolicy", false]], "tensorrt_llm::executor::orchestratorconfig (c++ class)": [[0, "_CPPv4N12tensorrt_llm8executor18OrchestratorConfigE", false]], "tensorrt_llm::executor::orchestratorconfig::getisorchestrator (c++ function)": [[0, 
"_CPPv4NK12tensorrt_llm8executor18OrchestratorConfig17getIsOrchestratorEv", false]], "tensorrt_llm::executor::orchestratorconfig::getorchleadercomm (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor18OrchestratorConfig17getOrchLeaderCommEv", false]], "tensorrt_llm::executor::orchestratorconfig::getspawnprocesses (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor18OrchestratorConfig17getSpawnProcessesEv", false]], "tensorrt_llm::executor::orchestratorconfig::getworkerexecutablepath (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor18OrchestratorConfig23getWorkerExecutablePathEv", false]], "tensorrt_llm::executor::orchestratorconfig::misorchestrator (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor18OrchestratorConfig15mIsOrchestratorE", false]], "tensorrt_llm::executor::orchestratorconfig::morchleadercomm (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor18OrchestratorConfig15mOrchLeaderCommE", false]], "tensorrt_llm::executor::orchestratorconfig::mspawnprocesses (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor18OrchestratorConfig15mSpawnProcessesE", false]], "tensorrt_llm::executor::orchestratorconfig::mworkerexecutablepath (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor18OrchestratorConfig21mWorkerExecutablePathE", false]], "tensorrt_llm::executor::orchestratorconfig::orchestratorconfig (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor18OrchestratorConfig18OrchestratorConfigEbNSt6stringENSt10shared_ptrIN3mpi7MpiCommEEEb", false]], "tensorrt_llm::executor::orchestratorconfig::setisorchestrator (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor18OrchestratorConfig17setIsOrchestratorEb", false]], "tensorrt_llm::executor::orchestratorconfig::setorchleadercomm (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor18OrchestratorConfig17setOrchLeaderCommERKNSt10shared_ptrIN3mpi7MpiCommEEE", false]], "tensorrt_llm::executor::orchestratorconfig::setspawnprocesses (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor18OrchestratorConfig17setSpawnProcessesEb", false]], "tensorrt_llm::executor::orchestratorconfig::setworkerexecutablepath (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor18OrchestratorConfig23setWorkerExecutablePathERKNSt6stringE", false]], "tensorrt_llm::executor::outputconfig (c++ class)": [[0, "_CPPv4N12tensorrt_llm8executor12OutputConfigE", false]], "tensorrt_llm::executor::outputconfig::additionalmodeloutputs (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor12OutputConfig22additionalModelOutputsE", false]], "tensorrt_llm::executor::outputconfig::excludeinputfromoutput (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor12OutputConfig22excludeInputFromOutputE", false]], "tensorrt_llm::executor::outputconfig::outputconfig (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor12OutputConfig12OutputConfigEbbbbbbNSt8optionalINSt6vectorI21AdditionalModelOutputEEEE", false]], "tensorrt_llm::executor::outputconfig::returncontextlogits (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor12OutputConfig19returnContextLogitsE", false]], "tensorrt_llm::executor::outputconfig::returnencoderoutput (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor12OutputConfig19returnEncoderOutputE", false]], "tensorrt_llm::executor::outputconfig::returngenerationlogits (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor12OutputConfig22returnGenerationLogitsE", false]], "tensorrt_llm::executor::outputconfig::returnlogprobs (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor12OutputConfig14returnLogProbsE", false]], "tensorrt_llm::executor::outputconfig::returnperfmetrics (c++ 
member)": [[0, "_CPPv4N12tensorrt_llm8executor12OutputConfig17returnPerfMetricsE", false]], "tensorrt_llm::executor::parallelconfig (c++ class)": [[0, "_CPPv4N12tensorrt_llm8executor14ParallelConfigE", false]], "tensorrt_llm::executor::parallelconfig::getcommunicationmode (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor14ParallelConfig20getCommunicationModeEv", false]], "tensorrt_llm::executor::parallelconfig::getcommunicationtype (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor14ParallelConfig20getCommunicationTypeEv", false]], "tensorrt_llm::executor::parallelconfig::getdeviceids (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor14ParallelConfig12getDeviceIdsEv", false]], "tensorrt_llm::executor::parallelconfig::getnumnodes (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor14ParallelConfig11getNumNodesEv", false]], "tensorrt_llm::executor::parallelconfig::getorchestratorconfig (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor14ParallelConfig21getOrchestratorConfigEv", false]], "tensorrt_llm::executor::parallelconfig::getparticipantids (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor14ParallelConfig17getParticipantIdsEv", false]], "tensorrt_llm::executor::parallelconfig::mcommmode (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor14ParallelConfig9mCommModeE", false]], "tensorrt_llm::executor::parallelconfig::mcommtype (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor14ParallelConfig9mCommTypeE", false]], "tensorrt_llm::executor::parallelconfig::mdeviceids (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor14ParallelConfig10mDeviceIdsE", false]], "tensorrt_llm::executor::parallelconfig::mnumnodes (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor14ParallelConfig9mNumNodesE", false]], "tensorrt_llm::executor::parallelconfig::morchestratorconfig (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor14ParallelConfig19mOrchestratorConfigE", false]], "tensorrt_llm::executor::parallelconfig::mparticipantids (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor14ParallelConfig15mParticipantIdsE", false]], "tensorrt_llm::executor::parallelconfig::parallelconfig (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor14ParallelConfig14ParallelConfigE17CommunicationType17CommunicationModeNSt8optionalINSt6vectorI10SizeType32EEEENSt8optionalINSt6vectorI10SizeType32EEEERKNSt8optionalI18OrchestratorConfigEENSt8optionalI10SizeType32EE", false]], "tensorrt_llm::executor::parallelconfig::setcommunicationmode (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor14ParallelConfig20setCommunicationModeE17CommunicationMode", false]], "tensorrt_llm::executor::parallelconfig::setcommunicationtype (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor14ParallelConfig20setCommunicationTypeE17CommunicationType", false]], "tensorrt_llm::executor::parallelconfig::setdeviceids (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor14ParallelConfig12setDeviceIdsERKNSt6vectorI10SizeType32EE", false]], "tensorrt_llm::executor::parallelconfig::setnumnodes (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor14ParallelConfig11setNumNodesE10SizeType32", false]], "tensorrt_llm::executor::parallelconfig::setorchestratorconfig (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor14ParallelConfig21setOrchestratorConfigERK18OrchestratorConfig", false]], "tensorrt_llm::executor::parallelconfig::setparticipantids (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor14ParallelConfig17setParticipantIdsERKNSt6vectorI10SizeType32EE", false]], "tensorrt_llm::executor::peftcacheconfig (c++ class)": [[0, 
"_CPPv4N12tensorrt_llm8executor15PeftCacheConfigE", false]], "tensorrt_llm::executor::peftcacheconfig::getdevicecachepercent (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor15PeftCacheConfig21getDeviceCachePercentEv", false]], "tensorrt_llm::executor::peftcacheconfig::gethostcachesize (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor15PeftCacheConfig16getHostCacheSizeEv", false]], "tensorrt_llm::executor::peftcacheconfig::getloraprefetchdir (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor15PeftCacheConfig18getLoraPrefetchDirEv", false]], "tensorrt_llm::executor::peftcacheconfig::getmaxadaptersize (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor15PeftCacheConfig17getMaxAdapterSizeEv", false]], "tensorrt_llm::executor::peftcacheconfig::getmaxpagesperblockdevice (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor15PeftCacheConfig25getMaxPagesPerBlockDeviceEv", false]], "tensorrt_llm::executor::peftcacheconfig::getmaxpagesperblockhost (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor15PeftCacheConfig23getMaxPagesPerBlockHostEv", false]], "tensorrt_llm::executor::peftcacheconfig::getnumcopystreams (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor15PeftCacheConfig17getNumCopyStreamsEv", false]], "tensorrt_llm::executor::peftcacheconfig::getnumdevicemodulelayer (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor15PeftCacheConfig23getNumDeviceModuleLayerEv", false]], "tensorrt_llm::executor::peftcacheconfig::getnumensureworkers (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor15PeftCacheConfig19getNumEnsureWorkersEv", false]], "tensorrt_llm::executor::peftcacheconfig::getnumhostmodulelayer (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor15PeftCacheConfig21getNumHostModuleLayerEv", false]], "tensorrt_llm::executor::peftcacheconfig::getnumputworkers (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor15PeftCacheConfig16getNumPutWorkersEv", false]], "tensorrt_llm::executor::peftcacheconfig::getoptimaladaptersize (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor15PeftCacheConfig21getOptimalAdapterSizeEv", false]], "tensorrt_llm::executor::peftcacheconfig::kdefaultmaxadaptersize (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor15PeftCacheConfig22kDefaultMaxAdapterSizeE", false]], "tensorrt_llm::executor::peftcacheconfig::kdefaultmaxpagesperblockdevice (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor15PeftCacheConfig30kDefaultMaxPagesPerBlockDeviceE", false]], "tensorrt_llm::executor::peftcacheconfig::kdefaultmaxpagesperblockhost (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor15PeftCacheConfig28kDefaultMaxPagesPerBlockHostE", false]], "tensorrt_llm::executor::peftcacheconfig::kdefaultoptimaladaptersize (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor15PeftCacheConfig26kDefaultOptimalAdapterSizeE", false]], "tensorrt_llm::executor::peftcacheconfig::mdevicecachepercent (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor15PeftCacheConfig19mDeviceCachePercentE", false]], "tensorrt_llm::executor::peftcacheconfig::mhostcachesize (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor15PeftCacheConfig14mHostCacheSizeE", false]], "tensorrt_llm::executor::peftcacheconfig::mloraprefetchdir (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor15PeftCacheConfig16mLoraPrefetchDirE", false]], "tensorrt_llm::executor::peftcacheconfig::mmaxadaptersize (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor15PeftCacheConfig15mMaxAdapterSizeE", false]], "tensorrt_llm::executor::peftcacheconfig::mmaxpagesperblockdevice (c++ member)": [[0, 
"_CPPv4N12tensorrt_llm8executor15PeftCacheConfig23mMaxPagesPerBlockDeviceE", false]], "tensorrt_llm::executor::peftcacheconfig::mmaxpagesperblockhost (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor15PeftCacheConfig21mMaxPagesPerBlockHostE", false]], "tensorrt_llm::executor::peftcacheconfig::mnumcopystreams (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor15PeftCacheConfig15mNumCopyStreamsE", false]], "tensorrt_llm::executor::peftcacheconfig::mnumdevicemodulelayer (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor15PeftCacheConfig21mNumDeviceModuleLayerE", false]], "tensorrt_llm::executor::peftcacheconfig::mnumensureworkers (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor15PeftCacheConfig17mNumEnsureWorkersE", false]], "tensorrt_llm::executor::peftcacheconfig::mnumhostmodulelayer (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor15PeftCacheConfig19mNumHostModuleLayerE", false]], "tensorrt_llm::executor::peftcacheconfig::mnumputworkers (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor15PeftCacheConfig14mNumPutWorkersE", false]], "tensorrt_llm::executor::peftcacheconfig::moptimaladaptersize (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor15PeftCacheConfig19mOptimalAdapterSizeE", false]], "tensorrt_llm::executor::peftcacheconfig::operator== (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor15PeftCacheConfigeqERK15PeftCacheConfig", false]], "tensorrt_llm::executor::peftcacheconfig::peftcacheconfig (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor15PeftCacheConfig15PeftCacheConfigE10SizeType3210SizeType3210SizeType3210SizeType3210SizeType3210SizeType3210SizeType3210SizeType3210SizeType32RKNSt8optionalIfEERKNSt8optionalI6size_tEERKNSt8optionalINSt6stringEEE", false]], "tensorrt_llm::executor::prioritytype (c++ type)": [[0, "_CPPv4N12tensorrt_llm8executor12PriorityTypeE", false]], "tensorrt_llm::executor::prompttuningconfig (c++ class)": [[0, "_CPPv4N12tensorrt_llm8executor18PromptTuningConfigE", false]], "tensorrt_llm::executor::prompttuningconfig::getembeddingtable (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor18PromptTuningConfig17getEmbeddingTableEv", false]], "tensorrt_llm::executor::prompttuningconfig::getinputtokenextraids (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor18PromptTuningConfig21getInputTokenExtraIdsEv", false]], "tensorrt_llm::executor::prompttuningconfig::membeddingtable (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor18PromptTuningConfig15mEmbeddingTableE", false]], "tensorrt_llm::executor::prompttuningconfig::minputtokenextraids (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor18PromptTuningConfig19mInputTokenExtraIdsE", false]], "tensorrt_llm::executor::prompttuningconfig::prompttuningconfig (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor18PromptTuningConfig18PromptTuningConfigE6TensorNSt8optionalI16VecTokenExtraIdsEE", false]], "tensorrt_llm::executor::randomseedtype (c++ type)": [[0, "_CPPv4N12tensorrt_llm8executor14RandomSeedTypeE", false]], "tensorrt_llm::executor::request (c++ class)": [[0, "_CPPv4N12tensorrt_llm8executor7RequestE", false]], "tensorrt_llm::executor::request::getadditionaloutputnames (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor7Request24getAdditionalOutputNamesEv", false]], "tensorrt_llm::executor::request::getallottedtimems (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor7Request17getAllottedTimeMsEv", false]], "tensorrt_llm::executor::request::getbadwords (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor7Request11getBadWordsEv", false]], 
"tensorrt_llm::executor::request::getclientid (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor7Request11getClientIdEv", false]], "tensorrt_llm::executor::request::getcontextphaseparams (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor7Request21getContextPhaseParamsEv", false]], "tensorrt_llm::executor::request::getcrossattentionmask (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor7Request21getCrossAttentionMaskEv", false]], "tensorrt_llm::executor::request::geteagleconfig (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor7Request14getEagleConfigEv", false]], "tensorrt_llm::executor::request::getembeddingbias (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor7Request16getEmbeddingBiasEv", false]], "tensorrt_llm::executor::request::getencoderinputfeatures (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor7Request23getEncoderInputFeaturesEv", false]], "tensorrt_llm::executor::request::getencoderinputtokenids (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor7Request23getEncoderInputTokenIdsEv", false]], "tensorrt_llm::executor::request::getencoderoutputlength (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor7Request22getEncoderOutputLengthEv", false]], "tensorrt_llm::executor::request::getendid (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor7Request8getEndIdEv", false]], "tensorrt_llm::executor::request::getexternaldrafttokensconfig (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor7Request28getExternalDraftTokensConfigEv", false]], "tensorrt_llm::executor::request::getguideddecodingparams (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor7Request23getGuidedDecodingParamsEv", false]], "tensorrt_llm::executor::request::getinputtokenids (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor7Request16getInputTokenIdsEv", false]], "tensorrt_llm::executor::request::getkvcacheretentionconfig (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor7Request25getKvCacheRetentionConfigEv", false]], "tensorrt_llm::executor::request::getlanguageadapteruid (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor7Request21getLanguageAdapterUidEv", false]], "tensorrt_llm::executor::request::getlogitspostprocessor (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor7Request22getLogitsPostProcessorEv", false]], "tensorrt_llm::executor::request::getlogitspostprocessorname (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor7Request26getLogitsPostProcessorNameEv", false]], "tensorrt_llm::executor::request::getlookaheadconfig (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor7Request18getLookaheadConfigEv", false]], "tensorrt_llm::executor::request::getloraconfig (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor7Request13getLoraConfigEv", false]], "tensorrt_llm::executor::request::getmaxtokens (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor7Request12getMaxTokensEv", false]], "tensorrt_llm::executor::request::getmropeconfig (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor7Request14getMropeConfigEv", false]], "tensorrt_llm::executor::request::getmultimodalembedding (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor7Request22getMultimodalEmbeddingEv", false]], "tensorrt_llm::executor::request::getmultimodalinput (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor7Request18getMultimodalInputEv", false]], "tensorrt_llm::executor::request::getoutputconfig (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor7Request15getOutputConfigEv", false]], "tensorrt_llm::executor::request::getpadid (c++ function)": [[0, 
"_CPPv4NK12tensorrt_llm8executor7Request8getPadIdEv", false]], "tensorrt_llm::executor::request::getpositionids (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor7Request14getPositionIdsEv", false]], "tensorrt_llm::executor::request::getpriority (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor7Request11getPriorityEv", false]], "tensorrt_llm::executor::request::getprompttuningconfig (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor7Request21getPromptTuningConfigEv", false]], "tensorrt_llm::executor::request::getrequesttype (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor7Request14getRequestTypeEv", false]], "tensorrt_llm::executor::request::getreturnallgeneratedtokens (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor7Request27getReturnAllGeneratedTokensEv", false]], "tensorrt_llm::executor::request::getsamplingconfig (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor7Request17getSamplingConfigEv", false]], "tensorrt_llm::executor::request::getskipcrossattnblocks (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor7Request22getSkipCrossAttnBlocksEv", false]], "tensorrt_llm::executor::request::getstopwords (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor7Request12getStopWordsEv", false]], "tensorrt_llm::executor::request::getstreaming (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor7Request12getStreamingEv", false]], "tensorrt_llm::executor::request::kbatchedpostprocessorname (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor7Request25kBatchedPostProcessorNameE", false]], "tensorrt_llm::executor::request::kdefaultpriority (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor7Request16kDefaultPriorityE", false]], "tensorrt_llm::executor::request::kdynamicpostprocessornameprefix (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor7Request31kDynamicPostProcessorNamePrefixE", false]], "tensorrt_llm::executor::request::mimpl (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor7Request5mImplE", false]], "tensorrt_llm::executor::request::operator= (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor7RequestaSERK7Request", false], [0, "_CPPv4N12tensorrt_llm8executor7RequestaSERR7Request", false]], "tensorrt_llm::executor::request::request (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor7Request7RequestE9VecTokens10SizeType32bRK14SamplingConfigRK12OutputConfigRKNSt8optionalI10SizeType32EERKNSt8optionalI10SizeType32EENSt8optionalINSt6vectorI10SizeType32EEEENSt8optionalINSt4listI9VecTokensEEEENSt8optionalINSt4listI9VecTokensEEEENSt8optionalI6TensorEENSt8optionalI25ExternalDraftTokensConfigEENSt8optionalI18PromptTuningConfigEENSt8optionalI15MultimodalInputEENSt8optionalI6TensorEENSt8optionalI11MropeConfigEENSt8optionalI10LoraConfigEENSt8optionalI23LookaheadDecodingConfigEENSt8optionalI22KvCacheRetentionConfigEENSt8optionalINSt6stringEEENSt8optionalI19LogitsPostProcessorEENSt8optionalI9VecTokensEENSt8optionalI6IdTypeEEb12PriorityType11RequestTypeNSt8optionalI18ContextPhaseParamsEENSt8optionalI6TensorEENSt8optionalI10SizeType32EENSt8optionalI6TensorEE10SizeType32NSt8optionalI11EagleConfigEENSt8optionalI6TensorEENSt8optionalI20GuidedDecodingParamsEENSt8optionalI10SizeType32EENSt8optionalI16MillisecondsTypeEE", false], [0, "_CPPv4N12tensorrt_llm8executor7Request7RequestERK7Request", false], [0, "_CPPv4N12tensorrt_llm8executor7Request7RequestERR7Request", false]], "tensorrt_llm::executor::request::setallottedtimems (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor7Request17setAllottedTimeMsE16MillisecondsType", false]], 
"tensorrt_llm::executor::request::setbadwords (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor7Request11setBadWordsERKNSt4listI9VecTokensEE", false]], "tensorrt_llm::executor::request::setclientid (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor7Request11setClientIdE6IdType", false]], "tensorrt_llm::executor::request::setcontextphaseparams (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor7Request21setContextPhaseParamsE18ContextPhaseParams", false]], "tensorrt_llm::executor::request::setcrossattentionmask (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor7Request21setCrossAttentionMaskE6Tensor", false]], "tensorrt_llm::executor::request::seteagleconfig (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor7Request14setEagleConfigERKNSt8optionalI11EagleConfigEE", false]], "tensorrt_llm::executor::request::setembeddingbias (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor7Request16setEmbeddingBiasERK6Tensor", false]], "tensorrt_llm::executor::request::setencoderinputfeatures (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor7Request23setEncoderInputFeaturesE6Tensor", false]], "tensorrt_llm::executor::request::setencoderinputtokenids (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor7Request23setEncoderInputTokenIdsERK9VecTokens", false]], "tensorrt_llm::executor::request::setencoderoutputlength (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor7Request22setEncoderOutputLengthE10SizeType32", false]], "tensorrt_llm::executor::request::setendid (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor7Request8setEndIdE10SizeType32", false]], "tensorrt_llm::executor::request::setexternaldrafttokensconfig (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor7Request28setExternalDraftTokensConfigERK25ExternalDraftTokensConfig", false]], "tensorrt_llm::executor::request::setguideddecodingparams (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor7Request23setGuidedDecodingParamsERK20GuidedDecodingParams", false]], "tensorrt_llm::executor::request::setkvcacheretentionconfig (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor7Request25setKvCacheRetentionConfigERK22KvCacheRetentionConfig", false]], "tensorrt_llm::executor::request::setlanguageadapteruid (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor7Request21setLanguageAdapterUidE10SizeType32", false]], "tensorrt_llm::executor::request::setlogitspostprocessor (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor7Request22setLogitsPostProcessorERKNSt8optionalI19LogitsPostProcessorEE", false]], "tensorrt_llm::executor::request::setlogitspostprocessorname (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor7Request26setLogitsPostProcessorNameERKNSt6stringE", false]], "tensorrt_llm::executor::request::setlookaheadconfig (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor7Request18setLookaheadConfigERK23LookaheadDecodingConfig", false]], "tensorrt_llm::executor::request::setloraconfig (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor7Request13setLoraConfigERK10LoraConfig", false]], "tensorrt_llm::executor::request::setmropeconfig (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor7Request14setMropeConfigERK11MropeConfig", false]], "tensorrt_llm::executor::request::setmultimodalembedding (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor7Request22setMultimodalEmbeddingERK6Tensor", false]], "tensorrt_llm::executor::request::setmultimodalinput (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor7Request18setMultimodalInputERK15MultimodalInput", false]], "tensorrt_llm::executor::request::setoutputconfig (c++ 
function)": [[0, "_CPPv4N12tensorrt_llm8executor7Request15setOutputConfigERK12OutputConfig", false]], "tensorrt_llm::executor::request::setpadid (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor7Request8setPadIdE10SizeType32", false]], "tensorrt_llm::executor::request::setpositionids (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor7Request14setPositionIdsERKNSt6vectorI10SizeType32EE", false]], "tensorrt_llm::executor::request::setpriority (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor7Request11setPriorityE12PriorityType", false]], "tensorrt_llm::executor::request::setprompttuningconfig (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor7Request21setPromptTuningConfigERK18PromptTuningConfig", false]], "tensorrt_llm::executor::request::setrequesttype (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor7Request14setRequestTypeERK11RequestType", false]], "tensorrt_llm::executor::request::setreturnallgeneratedtokens (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor7Request27setReturnAllGeneratedTokensEb", false]], "tensorrt_llm::executor::request::setsamplingconfig (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor7Request17setSamplingConfigERK14SamplingConfig", false]], "tensorrt_llm::executor::request::setskipcrossattnblocks (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor7Request22setSkipCrossAttnBlocksE6Tensor", false]], "tensorrt_llm::executor::request::setstopwords (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor7Request12setStopWordsERKNSt4listI9VecTokensEE", false]], "tensorrt_llm::executor::request::setstreaming (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor7Request12setStreamingEb", false]], "tensorrt_llm::executor::request::~request (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor7RequestD0Ev", false]], "tensorrt_llm::executor::requestperfmetrics (c++ struct)": [[0, "_CPPv4N12tensorrt_llm8executor18RequestPerfMetricsE", false]], "tensorrt_llm::executor::requestperfmetrics::firstiter (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor18RequestPerfMetrics9firstIterE", false]], "tensorrt_llm::executor::requestperfmetrics::iter (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor18RequestPerfMetrics4iterE", false]], "tensorrt_llm::executor::requestperfmetrics::kvcachemetrics (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor18RequestPerfMetrics14kvCacheMetricsE", false]], "tensorrt_llm::executor::requestperfmetrics::kvcachemetrics (c++ struct)": [[0, "_CPPv4N12tensorrt_llm8executor18RequestPerfMetrics14KvCacheMetricsE", false]], "tensorrt_llm::executor::requestperfmetrics::kvcachemetrics::kvcachehitrate (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor18RequestPerfMetrics14KvCacheMetrics14kvCacheHitRateE", false]], "tensorrt_llm::executor::requestperfmetrics::kvcachemetrics::nummissedblocks (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor18RequestPerfMetrics14KvCacheMetrics15numMissedBlocksE", false]], "tensorrt_llm::executor::requestperfmetrics::kvcachemetrics::numnewallocatedblocks (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor18RequestPerfMetrics14KvCacheMetrics21numNewAllocatedBlocksE", false]], "tensorrt_llm::executor::requestperfmetrics::kvcachemetrics::numreusedblocks (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor18RequestPerfMetrics14KvCacheMetrics15numReusedBlocksE", false]], "tensorrt_llm::executor::requestperfmetrics::kvcachemetrics::numtotalallocatedblocks (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor18RequestPerfMetrics14KvCacheMetrics23numTotalAllocatedBlocksE", false]], 
"tensorrt_llm::executor::requestperfmetrics::lastiter (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor18RequestPerfMetrics8lastIterE", false]], "tensorrt_llm::executor::requestperfmetrics::speculativedecoding (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor18RequestPerfMetrics19speculativeDecodingE", false]], "tensorrt_llm::executor::requestperfmetrics::speculativedecodingmetrics (c++ struct)": [[0, "_CPPv4N12tensorrt_llm8executor18RequestPerfMetrics26SpeculativeDecodingMetricsE", false]], "tensorrt_llm::executor::requestperfmetrics::speculativedecodingmetrics::acceptancerate (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor18RequestPerfMetrics26SpeculativeDecodingMetrics14acceptanceRateE", false]], "tensorrt_llm::executor::requestperfmetrics::speculativedecodingmetrics::totalaccepteddrafttokens (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor18RequestPerfMetrics26SpeculativeDecodingMetrics24totalAcceptedDraftTokensE", false]], "tensorrt_llm::executor::requestperfmetrics::speculativedecodingmetrics::totaldrafttokens (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor18RequestPerfMetrics26SpeculativeDecodingMetrics16totalDraftTokensE", false]], "tensorrt_llm::executor::requestperfmetrics::timepoint (c++ type)": [[0, "_CPPv4N12tensorrt_llm8executor18RequestPerfMetrics9TimePointE", false]], "tensorrt_llm::executor::requestperfmetrics::timingmetrics (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor18RequestPerfMetrics13timingMetricsE", false]], "tensorrt_llm::executor::requestperfmetrics::timingmetrics (c++ struct)": [[0, "_CPPv4N12tensorrt_llm8executor18RequestPerfMetrics13TimingMetricsE", false]], "tensorrt_llm::executor::requestperfmetrics::timingmetrics::arrivaltime (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor18RequestPerfMetrics13TimingMetrics11arrivalTimeE", false]], "tensorrt_llm::executor::requestperfmetrics::timingmetrics::firstscheduledtime (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor18RequestPerfMetrics13TimingMetrics18firstScheduledTimeE", false]], "tensorrt_llm::executor::requestperfmetrics::timingmetrics::firsttokentime (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor18RequestPerfMetrics13TimingMetrics14firstTokenTimeE", false]], "tensorrt_llm::executor::requestperfmetrics::timingmetrics::kvcachesize (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor18RequestPerfMetrics13TimingMetrics11kvCacheSizeE", false]], "tensorrt_llm::executor::requestperfmetrics::timingmetrics::kvcachetransferend (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor18RequestPerfMetrics13TimingMetrics18kvCacheTransferEndE", false]], "tensorrt_llm::executor::requestperfmetrics::timingmetrics::kvcachetransferstart (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor18RequestPerfMetrics13TimingMetrics20kvCacheTransferStartE", false]], "tensorrt_llm::executor::requestperfmetrics::timingmetrics::lasttokentime (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor18RequestPerfMetrics13TimingMetrics13lastTokenTimeE", false]], "tensorrt_llm::executor::requeststage (c++ enum)": [[0, "_CPPv4N12tensorrt_llm8executor12RequestStageE", false]], "tensorrt_llm::executor::requeststage::kcontext_in_progress (c++ enumerator)": [[0, "_CPPv4N12tensorrt_llm8executor12RequestStage20kCONTEXT_IN_PROGRESSE", false]], "tensorrt_llm::executor::requeststage::kencoder_in_progress (c++ enumerator)": [[0, "_CPPv4N12tensorrt_llm8executor12RequestStage20kENCODER_IN_PROGRESSE", false]], "tensorrt_llm::executor::requeststage::kgeneration_complete (c++ enumerator)": [[0, 
"_CPPv4N12tensorrt_llm8executor12RequestStage20kGENERATION_COMPLETEE", false]], "tensorrt_llm::executor::requeststage::kgeneration_in_progress (c++ enumerator)": [[0, "_CPPv4N12tensorrt_llm8executor12RequestStage23kGENERATION_IN_PROGRESSE", false]], "tensorrt_llm::executor::requeststage::kqueued (c++ enumerator)": [[0, "_CPPv4N12tensorrt_llm8executor12RequestStage7kQUEUEDE", false]], "tensorrt_llm::executor::requeststats (c++ struct)": [[0, "_CPPv4N12tensorrt_llm8executor12RequestStatsE", false]], "tensorrt_llm::executor::requeststats::allocnewblocksperrequest (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor12RequestStats24allocNewBlocksPerRequestE", false]], "tensorrt_llm::executor::requeststats::alloctotalblocksperrequest (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor12RequestStats26allocTotalBlocksPerRequestE", false]], "tensorrt_llm::executor::requeststats::avgnumdecodedtokensperiter (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor12RequestStats26avgNumDecodedTokensPerIterE", false]], "tensorrt_llm::executor::requeststats::contextprefillposition (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor12RequestStats22contextPrefillPositionE", false]], "tensorrt_llm::executor::requeststats::disservingstats (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor12RequestStats15disServingStatsE", false]], "tensorrt_llm::executor::requeststats::id (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor12RequestStats2idE", false]], "tensorrt_llm::executor::requeststats::kvcachehitrateperrequest (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor12RequestStats24kvCacheHitRatePerRequestE", false]], "tensorrt_llm::executor::requeststats::missedblocksperrequest (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor12RequestStats22missedBlocksPerRequestE", false]], "tensorrt_llm::executor::requeststats::numgeneratedtokens (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor12RequestStats18numGeneratedTokensE", false]], "tensorrt_llm::executor::requeststats::paused (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor12RequestStats6pausedE", false]], "tensorrt_llm::executor::requeststats::reusedblocksperrequest (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor12RequestStats22reusedBlocksPerRequestE", false]], "tensorrt_llm::executor::requeststats::scheduled (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor12RequestStats9scheduledE", false]], "tensorrt_llm::executor::requeststats::stage (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor12RequestStats5stageE", false]], "tensorrt_llm::executor::requeststatsperiteration (c++ struct)": [[0, "_CPPv4N12tensorrt_llm8executor24RequestStatsPerIterationE", false]], "tensorrt_llm::executor::requeststatsperiteration::iter (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor24RequestStatsPerIteration4iterE", false]], "tensorrt_llm::executor::requeststatsperiteration::requeststats (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor24RequestStatsPerIteration12requestStatsE", false]], "tensorrt_llm::executor::requesttype (c++ enum)": [[0, "_CPPv4N12tensorrt_llm8executor11RequestTypeE", false]], "tensorrt_llm::executor::requesttype::request_type_context_and_generation (c++ enumerator)": [[0, "_CPPv4N12tensorrt_llm8executor11RequestType35REQUEST_TYPE_CONTEXT_AND_GENERATIONE", false]], "tensorrt_llm::executor::requesttype::request_type_context_only (c++ enumerator)": [[0, "_CPPv4N12tensorrt_llm8executor11RequestType25REQUEST_TYPE_CONTEXT_ONLYE", false]], "tensorrt_llm::executor::requesttype::request_type_generation_only (c++ enumerator)": [[0, 
"_CPPv4N12tensorrt_llm8executor11RequestType28REQUEST_TYPE_GENERATION_ONLYE", false]], "tensorrt_llm::executor::response (c++ class)": [[0, "_CPPv4N12tensorrt_llm8executor8ResponseE", false]], "tensorrt_llm::executor::response::getclientid (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor8Response11getClientIdEv", false]], "tensorrt_llm::executor::response::geterrormsg (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor8Response11getErrorMsgEv", false]], "tensorrt_llm::executor::response::getrequestid (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor8Response12getRequestIdEv", false]], "tensorrt_llm::executor::response::getresult (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor8Response9getResultEv", false]], "tensorrt_llm::executor::response::haserror (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor8Response8hasErrorEv", false]], "tensorrt_llm::executor::response::mimpl (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor8Response5mImplE", false]], "tensorrt_llm::executor::response::operator= (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor8ResponseaSERK8Response", false], [0, "_CPPv4N12tensorrt_llm8executor8ResponseaSERR8Response", false]], "tensorrt_llm::executor::response::response (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor8Response8ResponseE6IdType6ResultNSt8optionalI6IdTypeEE", false], [0, "_CPPv4N12tensorrt_llm8executor8Response8ResponseE6IdTypeNSt6stringENSt8optionalI6IdTypeEE", false], [0, "_CPPv4N12tensorrt_llm8executor8Response8ResponseERK8Response", false], [0, "_CPPv4N12tensorrt_llm8executor8Response8ResponseERR8Response", false]], "tensorrt_llm::executor::response::~response (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor8ResponseD0Ev", false]], "tensorrt_llm::executor::result (c++ struct)": [[0, "_CPPv4N12tensorrt_llm8executor6ResultE", false]], "tensorrt_llm::executor::result::additionaloutputs (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor6Result17additionalOutputsE", false]], "tensorrt_llm::executor::result::contextlogits (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor6Result13contextLogitsE", false]], "tensorrt_llm::executor::result::contextphaseparams (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor6Result18contextPhaseParamsE", false]], "tensorrt_llm::executor::result::cumlogprobs (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor6Result11cumLogProbsE", false]], "tensorrt_llm::executor::result::decodingiter (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor6Result12decodingIterE", false]], "tensorrt_llm::executor::result::encoderoutput (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor6Result13encoderOutputE", false]], "tensorrt_llm::executor::result::finishreasons (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor6Result13finishReasonsE", false]], "tensorrt_llm::executor::result::generationlogits (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor6Result16generationLogitsE", false]], "tensorrt_llm::executor::result::isfinal (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor6Result7isFinalE", false]], "tensorrt_llm::executor::result::issequencefinal (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor6Result15isSequenceFinalE", false]], "tensorrt_llm::executor::result::logprobs (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor6Result8logProbsE", false]], "tensorrt_llm::executor::result::outputtokenids (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor6Result14outputTokenIdsE", false]], "tensorrt_llm::executor::result::requestperfmetrics (c++ member)": [[0, 
"_CPPv4N12tensorrt_llm8executor6Result18requestPerfMetricsE", false]], "tensorrt_llm::executor::result::sequenceindex (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor6Result13sequenceIndexE", false]], "tensorrt_llm::executor::result::specdecfastlogitsinfo (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor6Result21specDecFastLogitsInfoE", false]], "tensorrt_llm::executor::retentionpriority (c++ type)": [[0, "_CPPv4N12tensorrt_llm8executor17RetentionPriorityE", false]], "tensorrt_llm::executor::retentionpriorityandduration (c++ struct)": [[0, "_CPPv4N12tensorrt_llm8executor28RetentionPriorityAndDurationE", false]], "tensorrt_llm::executor::retentionpriorityandduration::durationms (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor28RetentionPriorityAndDuration10durationMsE", false]], "tensorrt_llm::executor::retentionpriorityandduration::retentionpriority (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor28RetentionPriorityAndDuration17retentionPriorityE", false]], "tensorrt_llm::executor::retentionpriorityandduration::retentionpriorityandduration (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor28RetentionPriorityAndDuration28RetentionPriorityAndDurationERKNSt8optionalI17RetentionPriorityEERKNSt8optionalINSt6chrono12millisecondsEEE", false]], "tensorrt_llm::executor::samplingconfig (c++ class)": [[0, "_CPPv4N12tensorrt_llm8executor14SamplingConfigE", false]], "tensorrt_llm::executor::samplingconfig::checkbeamsearchdiversityrate (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor14SamplingConfig28checkBeamSearchDiversityRateERKNSt8optionalI9FloatTypeEE", false]], "tensorrt_llm::executor::samplingconfig::checkbeamwidth (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor14SamplingConfig14checkBeamWidthE10SizeType32", false]], "tensorrt_llm::executor::samplingconfig::checkbeamwidtharray (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor14SamplingConfig19checkBeamWidthArrayERKNSt8optionalINSt6vectorI10SizeType32EEEEK10SizeType32", false]], "tensorrt_llm::executor::samplingconfig::checkearlystopping (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor14SamplingConfig18checkEarlyStoppingERKNSt8optionalI10SizeType32EE", false]], "tensorrt_llm::executor::samplingconfig::checklengthpenalty (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor14SamplingConfig18checkLengthPenaltyERKNSt8optionalI9FloatTypeEE", false]], "tensorrt_llm::executor::samplingconfig::checkminp (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor14SamplingConfig9checkMinPERKNSt8optionalI9FloatTypeEE", false]], "tensorrt_llm::executor::samplingconfig::checkmintokens (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor14SamplingConfig14checkMinTokensERKNSt8optionalI10SizeType32EE", false]], "tensorrt_llm::executor::samplingconfig::checknorepeatngramsize (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor14SamplingConfig22checkNoRepeatNgramSizeERKNSt8optionalI10SizeType32EE", false]], "tensorrt_llm::executor::samplingconfig::checknumreturnsequences (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor14SamplingConfig23checkNumReturnSequencesERKNSt8optionalI10SizeType32EE10SizeType32", false]], "tensorrt_llm::executor::samplingconfig::checkrepetitionpenalty (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor14SamplingConfig22checkRepetitionPenaltyERKNSt8optionalI9FloatTypeEE", false]], "tensorrt_llm::executor::samplingconfig::checktemperature (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor14SamplingConfig16checkTemperatureERKNSt8optionalI9FloatTypeEE", false]], 
"tensorrt_llm::executor::samplingconfig::checktopk (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor14SamplingConfig9checkTopKERKNSt8optionalI9FloatTypeEE", false]], "tensorrt_llm::executor::samplingconfig::checktopp (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor14SamplingConfig9checkTopPERKNSt8optionalI9FloatTypeEE", false]], "tensorrt_llm::executor::samplingconfig::checktoppdecay (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor14SamplingConfig14checkTopPDecayERKNSt8optionalI9FloatTypeEE", false]], "tensorrt_llm::executor::samplingconfig::checktoppmin (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor14SamplingConfig12checkTopPMinERKNSt8optionalI9FloatTypeEE", false]], "tensorrt_llm::executor::samplingconfig::checktoppresetids (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor14SamplingConfig17checkTopPResetIdsERKNSt8optionalI11TokenIdTypeEE", false]], "tensorrt_llm::executor::samplingconfig::getbeamsearchdiversityrate (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor14SamplingConfig26getBeamSearchDiversityRateEv", false]], "tensorrt_llm::executor::samplingconfig::getbeamwidth (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor14SamplingConfig12getBeamWidthEv", false]], "tensorrt_llm::executor::samplingconfig::getbeamwidtharray (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor14SamplingConfig17getBeamWidthArrayEv", false]], "tensorrt_llm::executor::samplingconfig::getearlystopping (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor14SamplingConfig16getEarlyStoppingEv", false]], "tensorrt_llm::executor::samplingconfig::getfrequencypenalty (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor14SamplingConfig19getFrequencyPenaltyEv", false]], "tensorrt_llm::executor::samplingconfig::getlengthpenalty (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor14SamplingConfig16getLengthPenaltyEv", false]], "tensorrt_llm::executor::samplingconfig::getminp (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor14SamplingConfig7getMinPEv", false]], "tensorrt_llm::executor::samplingconfig::getmintokens (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor14SamplingConfig12getMinTokensEv", false]], "tensorrt_llm::executor::samplingconfig::getnorepeatngramsize (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor14SamplingConfig20getNoRepeatNgramSizeEv", false]], "tensorrt_llm::executor::samplingconfig::getnumreturnbeams (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor14SamplingConfig17getNumReturnBeamsEv", false]], "tensorrt_llm::executor::samplingconfig::getnumreturnsequences (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor14SamplingConfig21getNumReturnSequencesEv", false]], "tensorrt_llm::executor::samplingconfig::getpresencepenalty (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor14SamplingConfig18getPresencePenaltyEv", false]], "tensorrt_llm::executor::samplingconfig::getrepetitionpenalty (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor14SamplingConfig20getRepetitionPenaltyEv", false]], "tensorrt_llm::executor::samplingconfig::getseed (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor14SamplingConfig7getSeedEv", false]], "tensorrt_llm::executor::samplingconfig::gettemperature (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor14SamplingConfig14getTemperatureEv", false]], "tensorrt_llm::executor::samplingconfig::gettopk (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor14SamplingConfig7getTopKEv", false]], "tensorrt_llm::executor::samplingconfig::gettopp (c++ function)": [[0, 
"_CPPv4NK12tensorrt_llm8executor14SamplingConfig7getTopPEv", false]], "tensorrt_llm::executor::samplingconfig::gettoppdecay (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor14SamplingConfig12getTopPDecayEv", false]], "tensorrt_llm::executor::samplingconfig::gettoppmin (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor14SamplingConfig10getTopPMinEv", false]], "tensorrt_llm::executor::samplingconfig::gettoppresetids (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor14SamplingConfig15getTopPResetIdsEv", false]], "tensorrt_llm::executor::samplingconfig::mbeamsearchdiversityrate (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor14SamplingConfig24mBeamSearchDiversityRateE", false]], "tensorrt_llm::executor::samplingconfig::mbeamwidth (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor14SamplingConfig10mBeamWidthE", false]], "tensorrt_llm::executor::samplingconfig::mbeamwidtharray (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor14SamplingConfig15mBeamWidthArrayE", false]], "tensorrt_llm::executor::samplingconfig::mearlystopping (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor14SamplingConfig14mEarlyStoppingE", false]], "tensorrt_llm::executor::samplingconfig::mfrequencypenalty (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor14SamplingConfig17mFrequencyPenaltyE", false]], "tensorrt_llm::executor::samplingconfig::mlengthpenalty (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor14SamplingConfig14mLengthPenaltyE", false]], "tensorrt_llm::executor::samplingconfig::mminp (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor14SamplingConfig5mMinPE", false]], "tensorrt_llm::executor::samplingconfig::mmintokens (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor14SamplingConfig10mMinTokensE", false]], "tensorrt_llm::executor::samplingconfig::mnorepeatngramsize (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor14SamplingConfig18mNoRepeatNgramSizeE", false]], "tensorrt_llm::executor::samplingconfig::mnumreturnbeams (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor14SamplingConfig15mNumReturnBeamsE", false]], "tensorrt_llm::executor::samplingconfig::mnumreturnsequences (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor14SamplingConfig19mNumReturnSequencesE", false]], "tensorrt_llm::executor::samplingconfig::mpresencepenalty (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor14SamplingConfig16mPresencePenaltyE", false]], "tensorrt_llm::executor::samplingconfig::mrepetitionpenalty (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor14SamplingConfig18mRepetitionPenaltyE", false]], "tensorrt_llm::executor::samplingconfig::mseed (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor14SamplingConfig5mSeedE", false]], "tensorrt_llm::executor::samplingconfig::mtemperature (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor14SamplingConfig12mTemperatureE", false]], "tensorrt_llm::executor::samplingconfig::mtopk (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor14SamplingConfig5mTopKE", false]], "tensorrt_llm::executor::samplingconfig::mtopp (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor14SamplingConfig5mTopPE", false]], "tensorrt_llm::executor::samplingconfig::mtoppdecay (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor14SamplingConfig10mTopPDecayE", false]], "tensorrt_llm::executor::samplingconfig::mtoppmin (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor14SamplingConfig8mTopPMinE", false]], "tensorrt_llm::executor::samplingconfig::mtoppresetids (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor14SamplingConfig13mTopPResetIdsE", false]], 
"tensorrt_llm::executor::samplingconfig::operator== (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor14SamplingConfigeqERK14SamplingConfig", false]], "tensorrt_llm::executor::samplingconfig::samplingconfig (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor14SamplingConfig14SamplingConfigE10SizeType32RKNSt8optionalI10SizeType32EERKNSt8optionalI9FloatTypeEERKNSt8optionalI9FloatTypeEERKNSt8optionalI11TokenIdTypeEERKNSt8optionalI9FloatTypeEERKNSt8optionalI14RandomSeedTypeEERKNSt8optionalI9FloatTypeEERKNSt8optionalI10SizeType32EERKNSt8optionalI9FloatTypeEERKNSt8optionalI9FloatTypeEERKNSt8optionalI9FloatTypeEERKNSt8optionalI9FloatTypeEERKNSt8optionalI9FloatTypeEERKNSt8optionalI10SizeType32EERKNSt8optionalI10SizeType32EERKNSt8optionalI10SizeType32EERKNSt8optionalI9FloatTypeEERKNSt8optionalINSt6vectorI10SizeType32EEEE", false]], "tensorrt_llm::executor::samplingconfig::setbeamsearchdiversityrate (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor14SamplingConfig26setBeamSearchDiversityRateERKNSt8optionalI9FloatTypeEE", false]], "tensorrt_llm::executor::samplingconfig::setbeamwidth (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor14SamplingConfig12setBeamWidthE10SizeType32", false]], "tensorrt_llm::executor::samplingconfig::setbeamwidtharray (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor14SamplingConfig17setBeamWidthArrayERKNSt8optionalINSt6vectorI10SizeType32EEEE", false]], "tensorrt_llm::executor::samplingconfig::setearlystopping (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor14SamplingConfig16setEarlyStoppingERKNSt8optionalI10SizeType32EE", false]], "tensorrt_llm::executor::samplingconfig::setfrequencypenalty (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor14SamplingConfig19setFrequencyPenaltyERKNSt8optionalI9FloatTypeEE", false]], "tensorrt_llm::executor::samplingconfig::setlengthpenalty (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor14SamplingConfig16setLengthPenaltyERKNSt8optionalI9FloatTypeEE", false]], "tensorrt_llm::executor::samplingconfig::setminp (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor14SamplingConfig7setMinPERKNSt8optionalI9FloatTypeEE", false]], "tensorrt_llm::executor::samplingconfig::setmintokens (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor14SamplingConfig12setMinTokensERKNSt8optionalI10SizeType32EE", false]], "tensorrt_llm::executor::samplingconfig::setnorepeatngramsize (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor14SamplingConfig20setNoRepeatNgramSizeERKNSt8optionalI10SizeType32EE", false]], "tensorrt_llm::executor::samplingconfig::setnumreturnsequences (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor14SamplingConfig21setNumReturnSequencesERKNSt8optionalI10SizeType32EE", false]], "tensorrt_llm::executor::samplingconfig::setpresencepenalty (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor14SamplingConfig18setPresencePenaltyERKNSt8optionalI9FloatTypeEE", false]], "tensorrt_llm::executor::samplingconfig::setrepetitionpenalty (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor14SamplingConfig20setRepetitionPenaltyERKNSt8optionalI9FloatTypeEE", false]], "tensorrt_llm::executor::samplingconfig::setseed (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor14SamplingConfig7setSeedERKNSt8optionalI14RandomSeedTypeEE", false]], "tensorrt_llm::executor::samplingconfig::settemperature (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor14SamplingConfig14setTemperatureERKNSt8optionalI9FloatTypeEE", false]], "tensorrt_llm::executor::samplingconfig::settopk (c++ function)": [[0, 
"_CPPv4N12tensorrt_llm8executor14SamplingConfig7setTopKERKNSt8optionalI10SizeType32EE", false]], "tensorrt_llm::executor::samplingconfig::settopp (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor14SamplingConfig7setTopPERKNSt8optionalI9FloatTypeEE", false]], "tensorrt_llm::executor::samplingconfig::settoppdecay (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor14SamplingConfig12setTopPDecayERKNSt8optionalI9FloatTypeEE", false]], "tensorrt_llm::executor::samplingconfig::settoppmin (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor14SamplingConfig10setTopPMinERKNSt8optionalI9FloatTypeEE", false]], "tensorrt_llm::executor::samplingconfig::settoppresetids (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor14SamplingConfig15setTopPResetIdsERKNSt8optionalI11TokenIdTypeEE", false]], "tensorrt_llm::executor::samplingconfig::updatenumreturnbeams (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor14SamplingConfig20updateNumReturnBeamsEv", false]], "tensorrt_llm::executor::schedulerconfig (c++ class)": [[0, "_CPPv4N12tensorrt_llm8executor15SchedulerConfigE", false]], "tensorrt_llm::executor::schedulerconfig::getcapacityschedulerpolicy (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor15SchedulerConfig26getCapacitySchedulerPolicyEv", false]], "tensorrt_llm::executor::schedulerconfig::getcontextchunkingpolicy (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor15SchedulerConfig24getContextChunkingPolicyEv", false]], "tensorrt_llm::executor::schedulerconfig::getdynamicbatchconfig (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor15SchedulerConfig21getDynamicBatchConfigEv", false]], "tensorrt_llm::executor::schedulerconfig::mcapacityschedulerpolicy (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor15SchedulerConfig24mCapacitySchedulerPolicyE", false]], "tensorrt_llm::executor::schedulerconfig::mcontextchunkingpolicy (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor15SchedulerConfig22mContextChunkingPolicyE", false]], "tensorrt_llm::executor::schedulerconfig::mdynamicbatchconfig (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor15SchedulerConfig19mDynamicBatchConfigE", false]], "tensorrt_llm::executor::schedulerconfig::operator== (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor15SchedulerConfigeqERK15SchedulerConfig", false]], "tensorrt_llm::executor::schedulerconfig::schedulerconfig (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor15SchedulerConfig15SchedulerConfigE23CapacitySchedulerPolicyNSt8optionalI21ContextChunkingPolicyEENSt8optionalI18DynamicBatchConfigEE", false]], "tensorrt_llm::executor::serialization (c++ class)": [[0, "_CPPv4N12tensorrt_llm8executor13SerializationE", false]], "tensorrt_llm::executor::serialization::deserializeadditionalmodeloutput (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor13Serialization32deserializeAdditionalModelOutputERNSt7istreamE", false]], "tensorrt_llm::executor::serialization::deserializeadditionaloutput (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor13Serialization27deserializeAdditionalOutputERNSt7istreamE", false]], "tensorrt_llm::executor::serialization::deserializeagentstate (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor13Serialization21deserializeAgentStateERNSt7istreamE", false]], "tensorrt_llm::executor::serialization::deserializebool (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor13Serialization15deserializeBoolERNSt7istreamE", false]], "tensorrt_llm::executor::serialization::deserializecachestate (c++ function)": [[0, 
"_CPPv4N12tensorrt_llm8executor13Serialization21deserializeCacheStateERNSt7istreamE", false]], "tensorrt_llm::executor::serialization::deserializecachetransceiverconfig (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor13Serialization33deserializeCacheTransceiverConfigERNSt7istreamE", false]], "tensorrt_llm::executor::serialization::deserializecommstate (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor13Serialization20deserializeCommStateERNSt7istreamE", false]], "tensorrt_llm::executor::serialization::deserializecontextphaseparams (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor13Serialization29deserializeContextPhaseParamsERNSt7istreamE", false]], "tensorrt_llm::executor::serialization::deserializedatatransceiverstate (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor13Serialization31deserializeDataTransceiverStateERNSt6vectorIcEE", false], [0, "_CPPv4N12tensorrt_llm8executor13Serialization31deserializeDataTransceiverStateERNSt7istreamE", false]], "tensorrt_llm::executor::serialization::deserializedebugconfig (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor13Serialization22deserializeDebugConfigERNSt7istreamE", false]], "tensorrt_llm::executor::serialization::deserializedecodingconfig (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor13Serialization25deserializeDecodingConfigERNSt7istreamE", false]], "tensorrt_llm::executor::serialization::deserializedecodingmode (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor13Serialization23deserializeDecodingModeERNSt7istreamE", false]], "tensorrt_llm::executor::serialization::deserializedisservingrequeststats (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor13Serialization33deserializeDisServingRequestStatsERNSt7istreamE", false]], "tensorrt_llm::executor::serialization::deserializedynamicbatchconfig (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor13Serialization29deserializeDynamicBatchConfigERNSt7istreamE", false]], "tensorrt_llm::executor::serialization::deserializeeagleconfig (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor13Serialization22deserializeEagleConfigERNSt7istreamE", false]], "tensorrt_llm::executor::serialization::deserializeexecutorconfig (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor13Serialization25deserializeExecutorConfigERNSt7istreamE", false]], "tensorrt_llm::executor::serialization::deserializeextendedruntimeperfknobconfig (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor13Serialization40deserializeExtendedRuntimePerfKnobConfigERNSt7istreamE", false]], "tensorrt_llm::executor::serialization::deserializeexternaldrafttokensconfig (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor13Serialization36deserializeExternalDraftTokensConfigERNSt7istreamE", false]], "tensorrt_llm::executor::serialization::deserializeguideddecodingconfig (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor13Serialization31deserializeGuidedDecodingConfigERNSt7istreamE", false]], "tensorrt_llm::executor::serialization::deserializeguideddecodingparams (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor13Serialization31deserializeGuidedDecodingParamsERNSt7istreamE", false]], "tensorrt_llm::executor::serialization::deserializeinflightbatchingstats (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor13Serialization32deserializeInflightBatchingStatsERNSt7istreamE", false]], "tensorrt_llm::executor::serialization::deserializeiterationstats (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor13Serialization25deserializeIterationStatsERNSt6vectorIcEE", false], [0, 
"_CPPv4N12tensorrt_llm8executor13Serialization25deserializeIterationStatsERNSt7istreamE", false]], "tensorrt_llm::executor::serialization::deserializeiterationstatsvec (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor13Serialization28deserializeIterationStatsVecERNSt6vectorIcEE", false]], "tensorrt_llm::executor::serialization::deserializekvcacheconfig (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor13Serialization24deserializeKvCacheConfigERNSt7istreamE", false]], "tensorrt_llm::executor::serialization::deserializekvcacheretentionconfig (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor13Serialization33deserializeKvCacheRetentionConfigERNSt7istreamE", false]], "tensorrt_llm::executor::serialization::deserializekvcachestats (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor13Serialization23deserializeKvCacheStatsERNSt7istreamE", false]], "tensorrt_llm::executor::serialization::deserializelookaheaddecodingconfig (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor13Serialization34deserializeLookaheadDecodingConfigERNSt7istreamE", false]], "tensorrt_llm::executor::serialization::deserializeloraconfig (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor13Serialization21deserializeLoraConfigERNSt7istreamE", false]], "tensorrt_llm::executor::serialization::deserializemodeltype (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor13Serialization20deserializeModelTypeERNSt7istreamE", false]], "tensorrt_llm::executor::serialization::deserializemropeconfig (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor13Serialization22deserializeMropeConfigERNSt7istreamE", false]], "tensorrt_llm::executor::serialization::deserializemultimodalinput (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor13Serialization26deserializeMultimodalInputERNSt7istreamE", false]], "tensorrt_llm::executor::serialization::deserializeorchestratorconfig (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor13Serialization29deserializeOrchestratorConfigERNSt7istreamE", false]], "tensorrt_llm::executor::serialization::deserializeoutputconfig (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor13Serialization23deserializeOutputConfigERNSt7istreamE", false]], "tensorrt_llm::executor::serialization::deserializeparallelconfig (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor13Serialization25deserializeParallelConfigERNSt7istreamE", false]], "tensorrt_llm::executor::serialization::deserializepeftcacheconfig (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor13Serialization26deserializePeftCacheConfigERNSt7istreamE", false]], "tensorrt_llm::executor::serialization::deserializeprompttuningconfig (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor13Serialization29deserializePromptTuningConfigERNSt7istreamE", false]], "tensorrt_llm::executor::serialization::deserializerequest (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor13Serialization18deserializeRequestERNSt7istreamE", false]], "tensorrt_llm::executor::serialization::deserializerequestperfmetrics (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor13Serialization29deserializeRequestPerfMetricsERNSt7istreamE", false]], "tensorrt_llm::executor::serialization::deserializerequeststage (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor13Serialization23deserializeRequestStageERNSt7istreamE", false]], "tensorrt_llm::executor::serialization::deserializerequeststats (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor13Serialization23deserializeRequestStatsERNSt7istreamE", false]], "tensorrt_llm::executor::serialization::deserializerequeststatsperiteration 
(c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor13Serialization35deserializeRequestStatsPerIterationERNSt6vectorIcEE", false], [0, "_CPPv4N12tensorrt_llm8executor13Serialization35deserializeRequestStatsPerIterationERNSt7istreamE", false]], "tensorrt_llm::executor::serialization::deserializerequeststatsperiterationvec (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor13Serialization38deserializeRequestStatsPerIterationVecERNSt6vectorIcEE", false]], "tensorrt_llm::executor::serialization::deserializeresponse (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor13Serialization19deserializeResponseERNSt7istreamE", false]], "tensorrt_llm::executor::serialization::deserializeresponses (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor13Serialization20deserializeResponsesERNSt6vectorIcEE", false]], "tensorrt_llm::executor::serialization::deserializeresult (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor13Serialization17deserializeResultERNSt7istreamE", false]], "tensorrt_llm::executor::serialization::deserializesamplingconfig (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor13Serialization25deserializeSamplingConfigERNSt7istreamE", false]], "tensorrt_llm::executor::serialization::deserializeschedulerconfig (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor13Serialization26deserializeSchedulerConfigERNSt7istreamE", false]], "tensorrt_llm::executor::serialization::deserializesocketstate (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor13Serialization22deserializeSocketStateERNSt7istreamE", false]], "tensorrt_llm::executor::serialization::deserializespecdecfastlogitsinfo (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor13Serialization32deserializeSpecDecFastLogitsInfoERNSt7istreamE", false]], "tensorrt_llm::executor::serialization::deserializespecdecodingstats (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor13Serialization28deserializeSpecDecodingStatsERNSt7istreamE", false]], "tensorrt_llm::executor::serialization::deserializespeculativedecodingconfig (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor13Serialization36deserializeSpeculativeDecodingConfigERNSt7istreamE", false]], "tensorrt_llm::executor::serialization::deserializestaticbatchingstats (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor13Serialization30deserializeStaticBatchingStatsERNSt7istreamE", false]], "tensorrt_llm::executor::serialization::deserializestring (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor13Serialization17deserializeStringERNSt7istreamE", false]], "tensorrt_llm::executor::serialization::deserializetensor (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor13Serialization17deserializeTensorERNSt7istreamE", false]], "tensorrt_llm::executor::serialization::deserializetimepoint (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor13Serialization20deserializeTimePointERNSt7istreamE", false]], "tensorrt_llm::executor::serialization::deserializetokenrangeretentionconfig (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor13Serialization36deserializeTokenRangeRetentionConfigERNSt7istreamE", false]], "tensorrt_llm::executor::serialization::serialize (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor13Serialization9serializeERK10LoraConfigRNSt7ostreamE", false], [0, "_CPPv4N12tensorrt_llm8executor13Serialization9serializeERK11DebugConfigRNSt7ostreamE", false], [0, "_CPPv4N12tensorrt_llm8executor13Serialization9serializeERK11EagleConfigRNSt7ostreamE", false], [0, "_CPPv4N12tensorrt_llm8executor13Serialization9serializeERK11MropeConfigRNSt7ostreamE", false], [0, 
"_CPPv4N12tensorrt_llm8executor13Serialization9serializeERK12DecodingModeRNSt7ostreamE", false], [0, "_CPPv4N12tensorrt_llm8executor13Serialization9serializeERK12KvCacheStatsRNSt7ostreamE", false], [0, "_CPPv4N12tensorrt_llm8executor13Serialization9serializeERK12OutputConfigRNSt7ostreamE", false], [0, "_CPPv4N12tensorrt_llm8executor13Serialization9serializeERK12RequestStageRNSt7ostreamE", false], [0, "_CPPv4N12tensorrt_llm8executor13Serialization9serializeERK12RequestStatsRNSt7ostreamE", false], [0, "_CPPv4N12tensorrt_llm8executor13Serialization9serializeERK13KvCacheConfigRNSt7ostreamE", false], [0, "_CPPv4N12tensorrt_llm8executor13Serialization9serializeERK14DecodingConfigRNSt7ostreamE", false], [0, "_CPPv4N12tensorrt_llm8executor13Serialization9serializeERK14ExecutorConfigRNSt7ostreamE", false], [0, "_CPPv4N12tensorrt_llm8executor13Serialization9serializeERK14IterationStats", false], [0, "_CPPv4N12tensorrt_llm8executor13Serialization9serializeERK14IterationStatsRNSt7ostreamE", false], [0, "_CPPv4N12tensorrt_llm8executor13Serialization9serializeERK14ParallelConfigRNSt7ostreamE", false], [0, "_CPPv4N12tensorrt_llm8executor13Serialization9serializeERK14SamplingConfigRNSt7ostreamE", false], [0, "_CPPv4N12tensorrt_llm8executor13Serialization9serializeERK15MultimodalInputRNSt7ostreamE", false], [0, "_CPPv4N12tensorrt_llm8executor13Serialization9serializeERK15PeftCacheConfigRNSt7ostreamE", false], [0, "_CPPv4N12tensorrt_llm8executor13Serialization9serializeERK15SchedulerConfigRNSt7ostreamE", false], [0, "_CPPv4N12tensorrt_llm8executor13Serialization9serializeERK16AdditionalOutputRNSt7ostreamE", false], [0, "_CPPv4N12tensorrt_llm8executor13Serialization9serializeERK17SpecDecodingStatsRNSt7ostreamE", false], [0, "_CPPv4N12tensorrt_llm8executor13Serialization9serializeERK18ContextPhaseParamsRNSt7ostreamE", false], [0, "_CPPv4N12tensorrt_llm8executor13Serialization9serializeERK18DynamicBatchConfigRNSt7ostreamE", false], [0, "_CPPv4N12tensorrt_llm8executor13Serialization9serializeERK18OrchestratorConfigRNSt7ostreamE", false], [0, "_CPPv4N12tensorrt_llm8executor13Serialization9serializeERK18PromptTuningConfigRNSt7ostreamE", false], [0, "_CPPv4N12tensorrt_llm8executor13Serialization9serializeERK18RequestPerfMetricsRNSt7ostreamE", false], [0, "_CPPv4N12tensorrt_llm8executor13Serialization9serializeERK19StaticBatchingStatsRNSt7ostreamE", false], [0, "_CPPv4N12tensorrt_llm8executor13Serialization9serializeERK20DataTransceiverState", false], [0, "_CPPv4N12tensorrt_llm8executor13Serialization9serializeERK20DataTransceiverStateRNSt7ostreamE", false], [0, "_CPPv4N12tensorrt_llm8executor13Serialization9serializeERK20GuidedDecodingConfigRNSt7ostreamE", false], [0, "_CPPv4N12tensorrt_llm8executor13Serialization9serializeERK20GuidedDecodingParamsRNSt7ostreamE", false], [0, "_CPPv4N12tensorrt_llm8executor13Serialization9serializeERK21AdditionalModelOutputRNSt7ostreamE", false], [0, "_CPPv4N12tensorrt_llm8executor13Serialization9serializeERK21InflightBatchingStatsRNSt7ostreamE", false], [0, "_CPPv4N12tensorrt_llm8executor13Serialization9serializeERK22CacheTransceiverConfigRNSt7ostreamE", false], [0, "_CPPv4N12tensorrt_llm8executor13Serialization9serializeERK22DisServingRequestStatsRNSt7ostreamE", false], [0, "_CPPv4N12tensorrt_llm8executor13Serialization9serializeERK22KvCacheRetentionConfigRNSt7ostreamE", false], [0, "_CPPv4N12tensorrt_llm8executor13Serialization9serializeERK23LookaheadDecodingConfigRNSt7ostreamE", false], [0, "_CPPv4N12tensorrt_llm8executor13Serialization9serializeERK24RequestStatsPerIteration", 
false], [0, "_CPPv4N12tensorrt_llm8executor13Serialization9serializeERK24RequestStatsPerIterationRNSt7ostreamE", false], [0, "_CPPv4N12tensorrt_llm8executor13Serialization9serializeERK25ExternalDraftTokensConfigRNSt7ostreamE", false], [0, "_CPPv4N12tensorrt_llm8executor13Serialization9serializeERK25SpeculativeDecodingConfigRNSt7ostreamE", false], [0, "_CPPv4N12tensorrt_llm8executor13Serialization9serializeERK29ExtendedRuntimePerfKnobConfigRNSt7ostreamE", false], [0, "_CPPv4N12tensorrt_llm8executor13Serialization9serializeERK33SpeculativeDecodingFastLogitsInfoRNSt7ostreamE", false], [0, "_CPPv4N12tensorrt_llm8executor13Serialization9serializeERK6ResultRNSt7ostreamE", false], [0, "_CPPv4N12tensorrt_llm8executor13Serialization9serializeERK6TensorRNSt7ostreamE", false], [0, "_CPPv4N12tensorrt_llm8executor13Serialization9serializeERK7RequestRNSt7ostreamE", false], [0, "_CPPv4N12tensorrt_llm8executor13Serialization9serializeERK8ResponseRNSt7ostreamE", false], [0, "_CPPv4N12tensorrt_llm8executor13Serialization9serializeERKN18RequestPerfMetrics9TimePointERNSt7ostreamE", false], [0, "_CPPv4N12tensorrt_llm8executor13Serialization9serializeERKN22KvCacheRetentionConfig25TokenRangeRetentionConfigERNSt7ostreamE", false], [0, "_CPPv4N12tensorrt_llm8executor13Serialization9serializeERKN8kv_cache10AgentStateERNSt7ostreamE", false], [0, "_CPPv4N12tensorrt_llm8executor13Serialization9serializeERKN8kv_cache10CacheStateERNSt7ostreamE", false], [0, "_CPPv4N12tensorrt_llm8executor13Serialization9serializeERKN8kv_cache11SocketStateERNSt7ostreamE", false], [0, "_CPPv4N12tensorrt_llm8executor13Serialization9serializeERKN8kv_cache9CommStateERNSt7ostreamE", false], [0, "_CPPv4N12tensorrt_llm8executor13Serialization9serializeERKNSt6vectorI14IterationStatsEE", false], [0, "_CPPv4N12tensorrt_llm8executor13Serialization9serializeERKNSt6vectorI24RequestStatsPerIterationEE", false], [0, "_CPPv4N12tensorrt_llm8executor13Serialization9serializeERKNSt6vectorI8ResponseEE", false]], "tensorrt_llm::executor::serialization::serializedsize (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor13Serialization14serializedSizeERK10LoraConfig", false], [0, "_CPPv4N12tensorrt_llm8executor13Serialization14serializedSizeERK11DebugConfig", false], [0, "_CPPv4N12tensorrt_llm8executor13Serialization14serializedSizeERK11EagleConfig", false], [0, "_CPPv4N12tensorrt_llm8executor13Serialization14serializedSizeERK11MropeConfig", false], [0, "_CPPv4N12tensorrt_llm8executor13Serialization14serializedSizeERK12DecodingMode", false], [0, "_CPPv4N12tensorrt_llm8executor13Serialization14serializedSizeERK12KvCacheStats", false], [0, "_CPPv4N12tensorrt_llm8executor13Serialization14serializedSizeERK12OutputConfig", false], [0, "_CPPv4N12tensorrt_llm8executor13Serialization14serializedSizeERK12RequestStage", false], [0, "_CPPv4N12tensorrt_llm8executor13Serialization14serializedSizeERK12RequestStats", false], [0, "_CPPv4N12tensorrt_llm8executor13Serialization14serializedSizeERK13KvCacheConfig", false], [0, "_CPPv4N12tensorrt_llm8executor13Serialization14serializedSizeERK14DecodingConfig", false], [0, "_CPPv4N12tensorrt_llm8executor13Serialization14serializedSizeERK14ExecutorConfig", false], [0, "_CPPv4N12tensorrt_llm8executor13Serialization14serializedSizeERK14IterationStats", false], [0, "_CPPv4N12tensorrt_llm8executor13Serialization14serializedSizeERK14ParallelConfig", false], [0, "_CPPv4N12tensorrt_llm8executor13Serialization14serializedSizeERK14SamplingConfig", false], [0, 
"_CPPv4N12tensorrt_llm8executor13Serialization14serializedSizeERK15MultimodalInput", false], [0, "_CPPv4N12tensorrt_llm8executor13Serialization14serializedSizeERK15PeftCacheConfig", false], [0, "_CPPv4N12tensorrt_llm8executor13Serialization14serializedSizeERK15SchedulerConfig", false], [0, "_CPPv4N12tensorrt_llm8executor13Serialization14serializedSizeERK16AdditionalOutput", false], [0, "_CPPv4N12tensorrt_llm8executor13Serialization14serializedSizeERK17SpecDecodingStats", false], [0, "_CPPv4N12tensorrt_llm8executor13Serialization14serializedSizeERK18ContextPhaseParams", false], [0, "_CPPv4N12tensorrt_llm8executor13Serialization14serializedSizeERK18DynamicBatchConfig", false], [0, "_CPPv4N12tensorrt_llm8executor13Serialization14serializedSizeERK18OrchestratorConfig", false], [0, "_CPPv4N12tensorrt_llm8executor13Serialization14serializedSizeERK18PromptTuningConfig", false], [0, "_CPPv4N12tensorrt_llm8executor13Serialization14serializedSizeERK18RequestPerfMetrics", false], [0, "_CPPv4N12tensorrt_llm8executor13Serialization14serializedSizeERK19StaticBatchingStats", false], [0, "_CPPv4N12tensorrt_llm8executor13Serialization14serializedSizeERK20DataTransceiverState", false], [0, "_CPPv4N12tensorrt_llm8executor13Serialization14serializedSizeERK20GuidedDecodingConfig", false], [0, "_CPPv4N12tensorrt_llm8executor13Serialization14serializedSizeERK20GuidedDecodingParams", false], [0, "_CPPv4N12tensorrt_llm8executor13Serialization14serializedSizeERK21AdditionalModelOutput", false], [0, "_CPPv4N12tensorrt_llm8executor13Serialization14serializedSizeERK21InflightBatchingStats", false], [0, "_CPPv4N12tensorrt_llm8executor13Serialization14serializedSizeERK22CacheTransceiverConfig", false], [0, "_CPPv4N12tensorrt_llm8executor13Serialization14serializedSizeERK22DisServingRequestStats", false], [0, "_CPPv4N12tensorrt_llm8executor13Serialization14serializedSizeERK22KvCacheRetentionConfig", false], [0, "_CPPv4N12tensorrt_llm8executor13Serialization14serializedSizeERK23LookaheadDecodingConfig", false], [0, "_CPPv4N12tensorrt_llm8executor13Serialization14serializedSizeERK24RequestStatsPerIteration", false], [0, "_CPPv4N12tensorrt_llm8executor13Serialization14serializedSizeERK25ExternalDraftTokensConfig", false], [0, "_CPPv4N12tensorrt_llm8executor13Serialization14serializedSizeERK25SpeculativeDecodingConfig", false], [0, "_CPPv4N12tensorrt_llm8executor13Serialization14serializedSizeERK29ExtendedRuntimePerfKnobConfig", false], [0, "_CPPv4N12tensorrt_llm8executor13Serialization14serializedSizeERK33SpeculativeDecodingFastLogitsInfo", false], [0, "_CPPv4N12tensorrt_llm8executor13Serialization14serializedSizeERK6Result", false], [0, "_CPPv4N12tensorrt_llm8executor13Serialization14serializedSizeERK6Tensor", false], [0, "_CPPv4N12tensorrt_llm8executor13Serialization14serializedSizeERK7Request", false], [0, "_CPPv4N12tensorrt_llm8executor13Serialization14serializedSizeERK8Response", false], [0, "_CPPv4N12tensorrt_llm8executor13Serialization14serializedSizeERKN18RequestPerfMetrics9TimePointE", false], [0, "_CPPv4N12tensorrt_llm8executor13Serialization14serializedSizeERKN22KvCacheRetentionConfig25TokenRangeRetentionConfigE", false], [0, "_CPPv4N12tensorrt_llm8executor13Serialization14serializedSizeERKN8kv_cache10AgentStateE", false], [0, "_CPPv4N12tensorrt_llm8executor13Serialization14serializedSizeERKN8kv_cache10CacheStateE", false], [0, "_CPPv4N12tensorrt_llm8executor13Serialization14serializedSizeERKN8kv_cache11SocketStateE", false], [0, 
"_CPPv4N12tensorrt_llm8executor13Serialization14serializedSizeERKN8kv_cache9CommStateE", false]], "tensorrt_llm::executor::shape (c++ class)": [[0, "_CPPv4N12tensorrt_llm8executor5ShapeE", false]], "tensorrt_llm::executor::shape::base (c++ type)": [[0, "_CPPv4N12tensorrt_llm8executor5Shape4BaseE", false]], "tensorrt_llm::executor::shape::dimtype64 (c++ type)": [[0, "_CPPv4N12tensorrt_llm8executor5Shape9DimType64E", false]], "tensorrt_llm::executor::shape::shape (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor5Shape5ShapeENSt16initializer_listI9DimType64EE", false], [0, "_CPPv4N12tensorrt_llm8executor5Shape5ShapeEPK9DimType64N4Base9size_typeE", false], [0, "_CPPv4N12tensorrt_llm8executor5Shape5ShapeEv", false]], "tensorrt_llm::executor::sizetype32 (c++ type)": [[0, "_CPPv4N12tensorrt_llm8executor10SizeType32E", false]], "tensorrt_llm::executor::sizetype64 (c++ type)": [[0, "_CPPv4N12tensorrt_llm8executor10SizeType64E", false]], "tensorrt_llm::executor::specdecodingstats (c++ struct)": [[0, "_CPPv4N12tensorrt_llm8executor17SpecDecodingStatsE", false]], "tensorrt_llm::executor::specdecodingstats::acceptancelength (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor17SpecDecodingStats16acceptanceLengthE", false]], "tensorrt_llm::executor::specdecodingstats::draftoverhead (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor17SpecDecodingStats13draftOverheadE", false]], "tensorrt_llm::executor::specdecodingstats::iterlatencyms (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor17SpecDecodingStats13iterLatencyMSE", false]], "tensorrt_llm::executor::specdecodingstats::numacceptedtokens (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor17SpecDecodingStats17numAcceptedTokensE", false]], "tensorrt_llm::executor::specdecodingstats::numdrafttokens (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor17SpecDecodingStats14numDraftTokensE", false]], "tensorrt_llm::executor::specdecodingstats::numrequestswithdrafttokens (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor17SpecDecodingStats26numRequestsWithDraftTokensE", false]], "tensorrt_llm::executor::speculativedecodingconfig (c++ class)": [[0, "_CPPv4N12tensorrt_llm8executor25SpeculativeDecodingConfigE", false]], "tensorrt_llm::executor::speculativedecodingconfig::fastlogits (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor25SpeculativeDecodingConfig10fastLogitsE", false]], "tensorrt_llm::executor::speculativedecodingconfig::operator== (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor25SpeculativeDecodingConfigeqERK25SpeculativeDecodingConfig", false]], "tensorrt_llm::executor::speculativedecodingconfig::speculativedecodingconfig (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor25SpeculativeDecodingConfig25SpeculativeDecodingConfigEb", false]], "tensorrt_llm::executor::speculativedecodingfastlogitsinfo (c++ struct)": [[0, "_CPPv4N12tensorrt_llm8executor33SpeculativeDecodingFastLogitsInfoE", false]], "tensorrt_llm::executor::speculativedecodingfastlogitsinfo::draftparticipantid (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor33SpeculativeDecodingFastLogitsInfo18draftParticipantIdE", false]], "tensorrt_llm::executor::speculativedecodingfastlogitsinfo::draftrequestid (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor33SpeculativeDecodingFastLogitsInfo14draftRequestIdE", false]], "tensorrt_llm::executor::speculativedecodingfastlogitsinfo::totensor (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor33SpeculativeDecodingFastLogitsInfo8toTensorEv", false]], "tensorrt_llm::executor::staticbatchingstats (c++ struct)": [[0, 
"_CPPv4N12tensorrt_llm8executor19StaticBatchingStatsE", false]], "tensorrt_llm::executor::staticbatchingstats::emptygenslots (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor19StaticBatchingStats13emptyGenSlotsE", false]], "tensorrt_llm::executor::staticbatchingstats::numcontextrequests (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor19StaticBatchingStats18numContextRequestsE", false]], "tensorrt_llm::executor::staticbatchingstats::numctxtokens (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor19StaticBatchingStats12numCtxTokensE", false]], "tensorrt_llm::executor::staticbatchingstats::numgentokens (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor19StaticBatchingStats12numGenTokensE", false]], "tensorrt_llm::executor::staticbatchingstats::numscheduledrequests (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor19StaticBatchingStats20numScheduledRequestsE", false]], "tensorrt_llm::executor::streamptr (c++ type)": [[0, "_CPPv4N12tensorrt_llm8executor9StreamPtrE", false]], "tensorrt_llm::executor::tensor (c++ class)": [[0, "_CPPv4N12tensorrt_llm8executor6TensorE", false]], "tensorrt_llm::executor::tensor::copyto (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor6Tensor6copyToENSt10shared_ptrI4ImplEE13CudaStreamPtr", false]], "tensorrt_llm::executor::tensor::copytocpu (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor6Tensor9copyToCpuEN6Tensor13CudaStreamPtrE", false]], "tensorrt_llm::executor::tensor::copytogpu (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor6Tensor9copyToGpuEN6Tensor13CudaStreamPtrE", false]], "tensorrt_llm::executor::tensor::copytomanaged (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor6Tensor13copyToManagedEN6Tensor13CudaStreamPtrE", false]], "tensorrt_llm::executor::tensor::copytopinned (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor6Tensor12copyToPinnedEN6Tensor13CudaStreamPtrE", false]], "tensorrt_llm::executor::tensor::copytopooledpinned (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor6Tensor18copyToPooledPinnedEN6Tensor13CudaStreamPtrE", false]], "tensorrt_llm::executor::tensor::cpu (c++ function)": [[0, "_CPPv4I0EN12tensorrt_llm8executor6Tensor3cpuE6Tensor5Shape", false], [0, "_CPPv4N12tensorrt_llm8executor6Tensor3cpuE8DataType5Shape", false]], "tensorrt_llm::executor::tensor::cudastreamptr (c++ type)": [[0, "_CPPv4N12tensorrt_llm8executor6Tensor13CudaStreamPtrE", false]], "tensorrt_llm::executor::tensor::detail::ofitensor (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor6Tensor6detail9ofITensorENSt10shared_ptrIN7runtime7ITensorEEE", false]], "tensorrt_llm::executor::tensor::detail::toitensor (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor6Tensor6detail9toITensorERK6Tensor", false]], "tensorrt_llm::executor::tensor::getdata (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor6Tensor7getDataEv", false], [0, "_CPPv4NK12tensorrt_llm8executor6Tensor7getDataEv", false]], "tensorrt_llm::executor::tensor::getdatatype (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor6Tensor11getDataTypeEv", false]], "tensorrt_llm::executor::tensor::getmemorytype (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor6Tensor13getMemoryTypeEv", false]], "tensorrt_llm::executor::tensor::getruntimetype (c++ function)": [[0, "_CPPv4I0EN12tensorrt_llm8executor6Tensor14getRuntimeTypeE8DataTypev", false]], "tensorrt_llm::executor::tensor::getshape (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor6Tensor8getShapeEv", false]], "tensorrt_llm::executor::tensor::getsize (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor6Tensor7getSizeEv", 
false]], "tensorrt_llm::executor::tensor::getsizeinbytes (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor6Tensor14getSizeInBytesEv", false]], "tensorrt_llm::executor::tensor::gpu (c++ function)": [[0, "_CPPv4I0EN12tensorrt_llm8executor6Tensor3gpuE6Tensor13CudaStreamPtr5Shape", false], [0, "_CPPv4N12tensorrt_llm8executor6Tensor3gpuE8DataType13CudaStreamPtr5Shape", false]], "tensorrt_llm::executor::tensor::impl (c++ type)": [[0, "_CPPv4N12tensorrt_llm8executor6Tensor4ImplE", false]], "tensorrt_llm::executor::tensor::managed (c++ function)": [[0, "_CPPv4I0EN12tensorrt_llm8executor6Tensor7managedE6Tensor5Shape", false], [0, "_CPPv4N12tensorrt_llm8executor6Tensor7managedE8DataType5Shape", false]], "tensorrt_llm::executor::tensor::mtensor (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor6Tensor7mTensorE", false]], "tensorrt_llm::executor::tensor::of (c++ function)": [[0, "_CPPv4I0EN12tensorrt_llm8executor6Tensor2ofE6TensorP1T5Shape", false], [0, "_CPPv4I0EN12tensorrt_llm8executor6Tensor2ofE6TensorR1T", false], [0, "_CPPv4N12tensorrt_llm8executor6Tensor2ofE8DataTypePv5Shape", false]], "tensorrt_llm::executor::tensor::operator bool (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor6TensorcvbEv", false]], "tensorrt_llm::executor::tensor::operator!= (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor6TensorneERK6Tensor", false]], "tensorrt_llm::executor::tensor::operator= (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor6TensoraSERK6Tensor", false], [0, "_CPPv4N12tensorrt_llm8executor6TensoraSERR6Tensor", false]], "tensorrt_llm::executor::tensor::operator== (c++ function)": [[0, "_CPPv4NK12tensorrt_llm8executor6TensoreqERK6Tensor", false]], "tensorrt_llm::executor::tensor::pinned (c++ function)": [[0, "_CPPv4I0EN12tensorrt_llm8executor6Tensor6pinnedE6Tensor5Shape", false], [0, "_CPPv4N12tensorrt_llm8executor6Tensor6pinnedE8DataType5Shape", false]], "tensorrt_llm::executor::tensor::pooledpinned (c++ function)": [[0, "_CPPv4I0EN12tensorrt_llm8executor6Tensor12pooledPinnedE6Tensor5Shape", false], [0, "_CPPv4N12tensorrt_llm8executor6Tensor12pooledPinnedE8DataType5Shape", false]], "tensorrt_llm::executor::tensor::setfrom (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor6Tensor7setFromERK6Tensor13CudaStreamPtr", false]], "tensorrt_llm::executor::tensor::setzero (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor6Tensor7setZeroE13CudaStreamPtr", false]], "tensorrt_llm::executor::tensor::tensor (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor6Tensor6TensorENSt10shared_ptrIN7runtime7ITensorEEE", false], [0, "_CPPv4N12tensorrt_llm8executor6Tensor6TensorERK6Tensor", false], [0, "_CPPv4N12tensorrt_llm8executor6Tensor6TensorERR6Tensor", false], [0, "_CPPv4N12tensorrt_llm8executor6Tensor6TensorEv", false]], "tensorrt_llm::executor::tensor::~tensor (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor6TensorD0Ev", false]], "tensorrt_llm::executor::tensorptr (c++ type)": [[0, "_CPPv4N12tensorrt_llm8executor9TensorPtrE", false]], "tensorrt_llm::executor::tokenidtype (c++ type)": [[0, "_CPPv4N12tensorrt_llm8executor11TokenIdTypeE", false]], "tensorrt_llm::executor::typetraits (c++ struct)": [[0, "_CPPv4I0_bEN12tensorrt_llm8executor10TypeTraitsE", false]], "tensorrt_llm::executor::typetraits (c++ struct)": [[0, "_CPPv4IEN12tensorrt_llm8executor10TypeTraitsIbEE", false]], "tensorrt_llm::executor::typetraits::value (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor10TypeTraitsIbE5valueE", false]], "tensorrt_llm::executor::typetraits (c++ struct)": [[0, 
"_CPPv4IEN12tensorrt_llm8executor10TypeTraitsIfEE", false]], "tensorrt_llm::executor::typetraits::value (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor10TypeTraitsIfE5valueE", false]], "tensorrt_llm::executor::typetraits (c++ struct)": [[0, "_CPPv4IEN12tensorrt_llm8executor10TypeTraitsI4halfEE", false]], "tensorrt_llm::executor::typetraits::value (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor10TypeTraitsI4halfE5valueE", false]], "tensorrt_llm::executor::typetraits (c++ struct)": [[0, "_CPPv4IEN12tensorrt_llm8executor10TypeTraitsINSt7int32_tEEE", false]], "tensorrt_llm::executor::typetraits::value (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor10TypeTraitsINSt7int32_tEE5valueE", false]], "tensorrt_llm::executor::typetraits (c++ struct)": [[0, "_CPPv4IEN12tensorrt_llm8executor10TypeTraitsINSt7int64_tEEE", false]], "tensorrt_llm::executor::typetraits::value (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor10TypeTraitsINSt7int64_tEE5valueE", false]], "tensorrt_llm::executor::typetraits (c++ struct)": [[0, "_CPPv4IEN12tensorrt_llm8executor10TypeTraitsINSt6int8_tEEE", false]], "tensorrt_llm::executor::typetraits::value (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor10TypeTraitsINSt6int8_tEE5valueE", false]], "tensorrt_llm::executor::typetraits (c++ struct)": [[0, "_CPPv4IEN12tensorrt_llm8executor10TypeTraitsINSt7uint8_tEEE", false]], "tensorrt_llm::executor::typetraits::value (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor10TypeTraitsINSt7uint8_tEE5valueE", false]], "tensorrt_llm::executor::typetraits (c++ struct)": [[0, "_CPPv4I0EN12tensorrt_llm8executor10TypeTraitsIP1TEE", false]], "tensorrt_llm::executor::typetraits::value (c++ member)": [[0, "_CPPv4N12tensorrt_llm8executor10TypeTraitsIP1TE5valueE", false]], "tensorrt_llm::executor::veclogprobs (c++ type)": [[0, "_CPPv4N12tensorrt_llm8executor11VecLogProbsE", false]], "tensorrt_llm::executor::vectokenextraids (c++ type)": [[0, "_CPPv4N12tensorrt_llm8executor16VecTokenExtraIdsE", false]], "tensorrt_llm::executor::vectokens (c++ type)": [[0, "_CPPv4N12tensorrt_llm8executor9VecTokensE", false]], "tensorrt_llm::executor::version (c++ function)": [[0, "_CPPv4N12tensorrt_llm8executor7versionEv", false]], "tensorrt_llm::layers (c++ type)": [[1, "_CPPv4N12tensorrt_llm6layersE", false]], "tensorrt_llm::mpi (c++ type)": [[0, "_CPPv4N12tensorrt_llm3mpiE", false]], "tensorrt_llm::runtime (c++ type)": [[0, "_CPPv4N12tensorrt_llm7runtimeE", false], [1, "_CPPv4N12tensorrt_llm7runtimeE", false]], "tensorrt_llm::runtime::allreducebuffers (c++ class)": [[1, "_CPPv4N12tensorrt_llm7runtime16AllReduceBuffersE", false]], "tensorrt_llm::runtime::allreducebuffers::allreducebuffers (c++ function)": [[1, "_CPPv4N12tensorrt_llm7runtime16AllReduceBuffers16AllReduceBuffersE10SizeType3210SizeType3210SizeType3210SizeType32RK13BufferManagerRK11WorldConfigKb", false]], "tensorrt_llm::runtime::allreducebuffers::mallreducecommptrs (c++ member)": [[1, "_CPPv4N12tensorrt_llm7runtime16AllReduceBuffers18mAllReduceCommPtrsE", false]], "tensorrt_llm::runtime::allreducebuffers::mflagptrs (c++ member)": [[1, "_CPPv4N12tensorrt_llm7runtime16AllReduceBuffers9mFlagPtrsE", false]], "tensorrt_llm::runtime::allreducebuffers::mipcmemoryhandles (c++ member)": [[1, "_CPPv4N12tensorrt_llm7runtime16AllReduceBuffers17mIpcMemoryHandlesE", false]], "tensorrt_llm::runtime::allreducebuffers::tensorptr (c++ type)": [[1, "_CPPv4N12tensorrt_llm7runtime16AllReduceBuffers9TensorPtrE", false]], "tensorrt_llm::runtime::buffercast (c++ function)": [[1, 
"_CPPv4I0EN12tensorrt_llm7runtime10bufferCastEP1TR7IBuffer", false], [1, "_CPPv4I0EN12tensorrt_llm7runtime10bufferCastEPK1TRK7IBuffer", false]], "tensorrt_llm::runtime::buffercastornull (c++ function)": [[1, "_CPPv4I0EN12tensorrt_llm7runtime16bufferCastOrNullEP1TRKN7IBuffer9SharedPtrE", false], [1, "_CPPv4I0EN12tensorrt_llm7runtime16bufferCastOrNullEP1TRKN7ITensor9SharedPtrE", false], [1, "_CPPv4I0EN12tensorrt_llm7runtime16bufferCastOrNullEP1TRKNSt8optionalIN7IBuffer9SharedPtrEEE", false], [1, "_CPPv4I0EN12tensorrt_llm7runtime16bufferCastOrNullEP1TRKNSt8optionalIN7ITensor9SharedPtrEEE", false], [1, "_CPPv4I0EN12tensorrt_llm7runtime16bufferCastOrNullEPK1TRKN7IBuffer14SharedConstPtrE", false], [1, "_CPPv4I0EN12tensorrt_llm7runtime16bufferCastOrNullEPK1TRKN7ITensor14SharedConstPtrE", false], [1, "_CPPv4I0EN12tensorrt_llm7runtime16bufferCastOrNullEPK1TRKNSt8optionalIN7IBuffer14SharedConstPtrEEE", false], [1, "_CPPv4I0EN12tensorrt_llm7runtime16bufferCastOrNullEPK1TRKNSt8optionalIN7ITensor14SharedConstPtrEEE", false]], "tensorrt_llm::runtime::bufferdatatype (c++ class)": [[1, "_CPPv4N12tensorrt_llm7runtime14BufferDataTypeE", false]], "tensorrt_llm::runtime::bufferdatatype::bufferdatatype (c++ function)": [[1, "_CPPv4N12tensorrt_llm7runtime14BufferDataType14BufferDataTypeEN8nvinfer18DataTypeEbb", false]], "tensorrt_llm::runtime::bufferdatatype::getdatatype (c++ function)": [[1, "_CPPv4NK12tensorrt_llm7runtime14BufferDataType11getDataTypeEv", false]], "tensorrt_llm::runtime::bufferdatatype::getsize (c++ function)": [[1, "_CPPv4NK12tensorrt_llm7runtime14BufferDataType7getSizeEv", false]], "tensorrt_llm::runtime::bufferdatatype::getsizeinbits (c++ function)": [[1, "_CPPv4NK12tensorrt_llm7runtime14BufferDataType13getSizeInBitsEv", false]], "tensorrt_llm::runtime::bufferdatatype::ispointer (c++ function)": [[1, "_CPPv4NK12tensorrt_llm7runtime14BufferDataType9isPointerEv", false]], "tensorrt_llm::runtime::bufferdatatype::isunsigned (c++ function)": [[1, "_CPPv4NK12tensorrt_llm7runtime14BufferDataType10isUnsignedEv", false]], "tensorrt_llm::runtime::bufferdatatype::ktrtpointertype (c++ member)": [[1, "_CPPv4N12tensorrt_llm7runtime14BufferDataType15kTrtPointerTypeE", false]], "tensorrt_llm::runtime::bufferdatatype::mdatatype (c++ member)": [[1, "_CPPv4N12tensorrt_llm7runtime14BufferDataType9mDataTypeE", false]], "tensorrt_llm::runtime::bufferdatatype::mpointer (c++ member)": [[1, "_CPPv4N12tensorrt_llm7runtime14BufferDataType8mPointerE", false]], "tensorrt_llm::runtime::bufferdatatype::munsigned (c++ member)": [[1, "_CPPv4N12tensorrt_llm7runtime14BufferDataType9mUnsignedE", false]], "tensorrt_llm::runtime::bufferdatatype::operator nvinfer1::datatype (c++ function)": [[1, "_CPPv4NK12tensorrt_llm7runtime14BufferDataTypecvN8nvinfer18DataTypeEEv", false]], "tensorrt_llm::runtime::buffermanager (c++ class)": [[1, "_CPPv4N12tensorrt_llm7runtime13BufferManagerE", false]], "tensorrt_llm::runtime::buffermanager::allocate (c++ function)": [[1, "_CPPv4NK12tensorrt_llm7runtime13BufferManager8allocateE10MemoryTypeN8nvinfer14DimsEN8nvinfer18DataTypeE", false], [1, "_CPPv4NK12tensorrt_llm7runtime13BufferManager8allocateE10MemoryTypeNSt6size_tEN8nvinfer18DataTypeE", false]], "tensorrt_llm::runtime::buffermanager::buffermanager (c++ function)": [[1, "_CPPv4N12tensorrt_llm7runtime13BufferManager13BufferManagerE13CudaStreamPtrb", false]], "tensorrt_llm::runtime::buffermanager::copy (c++ function)": [[1, "_CPPv4NK12tensorrt_llm7runtime13BufferManager4copyEPKvR7IBuffer", false], [1, 
"_CPPv4NK12tensorrt_llm7runtime13BufferManager4copyEPKvR7IBuffer10MemoryType", false], [1, "_CPPv4NK12tensorrt_llm7runtime13BufferManager4copyERK7IBufferPv", false], [1, "_CPPv4NK12tensorrt_llm7runtime13BufferManager4copyERK7IBufferPv10MemoryType", false], [1, "_CPPv4NK12tensorrt_llm7runtime13BufferManager4copyERK7IBufferR7IBuffer", false]], "tensorrt_llm::runtime::buffermanager::copyfrom (c++ function)": [[1, "_CPPv4I0ENK12tensorrt_llm7runtime13BufferManager8copyFromE10IBufferPtrRKNSt6vectorI1TEE10MemoryType", false], [1, "_CPPv4I0ENK12tensorrt_llm7runtime13BufferManager8copyFromE10ITensorPtrP1TN8nvinfer14DimsE10MemoryType", false], [1, "_CPPv4I0ENK12tensorrt_llm7runtime13BufferManager8copyFromE10ITensorPtrRKNSt6vectorI1TEEN8nvinfer14DimsE10MemoryType", false], [1, "_CPPv4NK12tensorrt_llm7runtime13BufferManager8copyFromERK7IBuffer10MemoryType", false], [1, "_CPPv4NK12tensorrt_llm7runtime13BufferManager8copyFromERK7ITensor10MemoryType", false]], "tensorrt_llm::runtime::buffermanager::cpu (c++ function)": [[1, "_CPPv4N12tensorrt_llm7runtime13BufferManager3cpuEN8nvinfer14DimsEN8nvinfer18DataTypeE", false], [1, "_CPPv4N12tensorrt_llm7runtime13BufferManager3cpuENSt6size_tEN8nvinfer18DataTypeE", false]], "tensorrt_llm::runtime::buffermanager::cudamempoolptr (c++ type)": [[1, "_CPPv4N12tensorrt_llm7runtime13BufferManager14CudaMemPoolPtrE", false]], "tensorrt_llm::runtime::buffermanager::cudastreamptr (c++ type)": [[1, "_CPPv4N12tensorrt_llm7runtime13BufferManager13CudaStreamPtrE", false]], "tensorrt_llm::runtime::buffermanager::emptybuffer (c++ function)": [[1, "_CPPv4NK12tensorrt_llm7runtime13BufferManager11emptyBufferE10MemoryTypeN8nvinfer18DataTypeE", false]], "tensorrt_llm::runtime::buffermanager::emptytensor (c++ function)": [[1, "_CPPv4NK12tensorrt_llm7runtime13BufferManager11emptyTensorE10MemoryTypeN8nvinfer18DataTypeE", false]], "tensorrt_llm::runtime::buffermanager::getstream (c++ function)": [[1, "_CPPv4NK12tensorrt_llm7runtime13BufferManager9getStreamEv", false]], "tensorrt_llm::runtime::buffermanager::gpu (c++ function)": [[1, "_CPPv4NK12tensorrt_llm7runtime13BufferManager3gpuEN8nvinfer14DimsEN8nvinfer18DataTypeE", false], [1, "_CPPv4NK12tensorrt_llm7runtime13BufferManager3gpuENSt6size_tEN8nvinfer18DataTypeE", false]], "tensorrt_llm::runtime::buffermanager::gpusync (c++ function)": [[1, "_CPPv4N12tensorrt_llm7runtime13BufferManager7gpuSyncEN8nvinfer14DimsEN8nvinfer18DataTypeE", false], [1, "_CPPv4N12tensorrt_llm7runtime13BufferManager7gpuSyncENSt6size_tEN8nvinfer18DataTypeE", false]], "tensorrt_llm::runtime::buffermanager::ibufferptr (c++ type)": [[1, "_CPPv4N12tensorrt_llm7runtime13BufferManager10IBufferPtrE", false]], "tensorrt_llm::runtime::buffermanager::ipcnvls (c++ function)": [[1, "_CPPv4N12tensorrt_llm7runtime13BufferManager7ipcNvlsENSt3setIiEEN8nvinfer14DimsEN8nvinfer18DataTypeE", false]], "tensorrt_llm::runtime::buffermanager::itensorptr (c++ type)": [[1, "_CPPv4N12tensorrt_llm7runtime13BufferManager10ITensorPtrE", false]], "tensorrt_llm::runtime::buffermanager::kbyte_type (c++ member)": [[1, "_CPPv4N12tensorrt_llm7runtime13BufferManager10kBYTE_TYPEE", false]], "tensorrt_llm::runtime::buffermanager::managed (c++ function)": [[1, "_CPPv4N12tensorrt_llm7runtime13BufferManager7managedEN8nvinfer14DimsEN8nvinfer18DataTypeE", false], [1, "_CPPv4N12tensorrt_llm7runtime13BufferManager7managedENSt6size_tEN8nvinfer18DataTypeE", false]], "tensorrt_llm::runtime::buffermanager::memorypoolfree (c++ function)": [[1, "_CPPv4NK12tensorrt_llm7runtime13BufferManager14memoryPoolFreeEv", 
false]], "tensorrt_llm::runtime::buffermanager::memorypoolreserved (c++ function)": [[1, "_CPPv4NK12tensorrt_llm7runtime13BufferManager18memoryPoolReservedEv", false]], "tensorrt_llm::runtime::buffermanager::memorypooltrimto (c++ function)": [[1, "_CPPv4N12tensorrt_llm7runtime13BufferManager16memoryPoolTrimToENSt6size_tE", false]], "tensorrt_llm::runtime::buffermanager::memorypoolused (c++ function)": [[1, "_CPPv4NK12tensorrt_llm7runtime13BufferManager14memoryPoolUsedEv", false]], "tensorrt_llm::runtime::buffermanager::mpool (c++ member)": [[1, "_CPPv4N12tensorrt_llm7runtime13BufferManager5mPoolE", false]], "tensorrt_llm::runtime::buffermanager::mstream (c++ member)": [[1, "_CPPv4N12tensorrt_llm7runtime13BufferManager7mStreamE", false]], "tensorrt_llm::runtime::buffermanager::mtrimpool (c++ member)": [[1, "_CPPv4N12tensorrt_llm7runtime13BufferManager9mTrimPoolE", false]], "tensorrt_llm::runtime::buffermanager::pinned (c++ function)": [[1, "_CPPv4N12tensorrt_llm7runtime13BufferManager6pinnedEN8nvinfer14DimsEN8nvinfer18DataTypeE", false], [1, "_CPPv4N12tensorrt_llm7runtime13BufferManager6pinnedENSt6size_tEN8nvinfer18DataTypeE", false]], "tensorrt_llm::runtime::buffermanager::pinnedpool (c++ function)": [[1, "_CPPv4N12tensorrt_llm7runtime13BufferManager10pinnedPoolEN8nvinfer14DimsEN8nvinfer18DataTypeE", false], [1, "_CPPv4N12tensorrt_llm7runtime13BufferManager10pinnedPoolENSt6size_tEN8nvinfer18DataTypeE", false]], "tensorrt_llm::runtime::buffermanager::setmem (c++ function)": [[1, "_CPPv4NK12tensorrt_llm7runtime13BufferManager6setMemER7IBuffer7int32_t", false]], "tensorrt_llm::runtime::buffermanager::setzero (c++ function)": [[1, "_CPPv4NK12tensorrt_llm7runtime13BufferManager7setZeroER7IBuffer", false]], "tensorrt_llm::runtime::buffermanager::~buffermanager (c++ function)": [[1, "_CPPv4N12tensorrt_llm7runtime13BufferManagerD0Ev", false]], "tensorrt_llm::runtime::bufferrange (c++ class)": [[1, "_CPPv4I0EN12tensorrt_llm7runtime11BufferRangeE", false]], "tensorrt_llm::runtime::bufferrange::base (c++ type)": [[1, "_CPPv4N12tensorrt_llm7runtime11BufferRange4BaseE", false]], "tensorrt_llm::runtime::bufferrange::bufferrange (c++ function)": [[1, "_CPPv4I0_NSt11enable_if_tINSt10is_const_vI1UEEbEEEN12tensorrt_llm7runtime11BufferRange11BufferRangeERK7IBuffer", false], [1, "_CPPv4I0_NSt11enable_if_tIXntNSt10is_const_vI1UEEEbEEEN12tensorrt_llm7runtime11BufferRange11BufferRangeER7IBuffer", false], [1, "_CPPv4N12tensorrt_llm7runtime11BufferRange11BufferRangeEP1T9size_type", false]], "tensorrt_llm::runtime::canaccesspeer (c++ function)": [[1, "_CPPv4N12tensorrt_llm7runtime13canAccessPeerERK11WorldConfig", false]], "tensorrt_llm::runtime::constpointercast (c++ function)": [[1, "_CPPv4I00EN12tensorrt_llm7runtime16constPointerCastENSt10shared_ptrINSt14remove_const_tI1TEEEERRNSt10unique_ptrI1T1DEE", false], [1, "_CPPv4I0EN12tensorrt_llm7runtime16constPointerCastENSt10shared_ptrINSt14remove_const_tI1TEEEERKNSt10shared_ptrI1TEE", false]], "tensorrt_llm::runtime::cudaevent (c++ class)": [[1, "_CPPv4N12tensorrt_llm7runtime9CudaEventE", false]], "tensorrt_llm::runtime::cudaevent::cudaevent (c++ function)": [[1, "_CPPv4N12tensorrt_llm7runtime9CudaEvent9CudaEventE7pointerb", false], [1, "_CPPv4N12tensorrt_llm7runtime9CudaEvent9CudaEventEj", false]], "tensorrt_llm::runtime::cudaevent::deleter (c++ class)": [[1, "_CPPv4N12tensorrt_llm7runtime9CudaEvent7DeleterE", false]], "tensorrt_llm::runtime::cudaevent::deleter::deleter (c++ function)": [[1, "_CPPv4N12tensorrt_llm7runtime9CudaEvent7Deleter7DeleterEb", false], [1, 
"_CPPv4N12tensorrt_llm7runtime9CudaEvent7Deleter7DeleterEv", false]], "tensorrt_llm::runtime::cudaevent::deleter::mownsevent (c++ member)": [[1, "_CPPv4N12tensorrt_llm7runtime9CudaEvent7Deleter10mOwnsEventE", false]], "tensorrt_llm::runtime::cudaevent::deleter::operator() (c++ function)": [[1, "_CPPv4NK12tensorrt_llm7runtime9CudaEvent7DeleterclE7pointer", false]], "tensorrt_llm::runtime::cudaevent::element_type (c++ type)": [[1, "_CPPv4N12tensorrt_llm7runtime9CudaEvent12element_typeE", false]], "tensorrt_llm::runtime::cudaevent::eventptr (c++ type)": [[1, "_CPPv4N12tensorrt_llm7runtime9CudaEvent8EventPtrE", false]], "tensorrt_llm::runtime::cudaevent::get (c++ function)": [[1, "_CPPv4NK12tensorrt_llm7runtime9CudaEvent3getEv", false]], "tensorrt_llm::runtime::cudaevent::mevent (c++ member)": [[1, "_CPPv4N12tensorrt_llm7runtime9CudaEvent6mEventE", false]], "tensorrt_llm::runtime::cudaevent::pointer (c++ type)": [[1, "_CPPv4N12tensorrt_llm7runtime9CudaEvent7pointerE", false]], "tensorrt_llm::runtime::cudaevent::synchronize (c++ function)": [[1, "_CPPv4NK12tensorrt_llm7runtime9CudaEvent11synchronizeEv", false]], "tensorrt_llm::runtime::cudastream (c++ class)": [[1, "_CPPv4N12tensorrt_llm7runtime10CudaStreamE", false]], "tensorrt_llm::runtime::cudastream::cudastream (c++ function)": [[1, "_CPPv4N12tensorrt_llm7runtime10CudaStream10CudaStreamE12cudaStream_t", false], [1, "_CPPv4N12tensorrt_llm7runtime10CudaStream10CudaStreamE12cudaStream_tib", false], [1, "_CPPv4N12tensorrt_llm7runtime10CudaStream10CudaStreamEji", false]], "tensorrt_llm::runtime::cudastream::deleter (c++ class)": [[1, "_CPPv4N12tensorrt_llm7runtime10CudaStream7DeleterE", false]], "tensorrt_llm::runtime::cudastream::deleter::deleter (c++ function)": [[1, "_CPPv4N12tensorrt_llm7runtime10CudaStream7Deleter7DeleterEb", false], [1, "_CPPv4N12tensorrt_llm7runtime10CudaStream7Deleter7DeleterEv", false]], "tensorrt_llm::runtime::cudastream::deleter::mownsstream (c++ member)": [[1, "_CPPv4N12tensorrt_llm7runtime10CudaStream7Deleter11mOwnsStreamE", false]], "tensorrt_llm::runtime::cudastream::deleter::operator() (c++ function)": [[1, "_CPPv4NK12tensorrt_llm7runtime10CudaStream7DeleterclE12cudaStream_t", false]], "tensorrt_llm::runtime::cudastream::get (c++ function)": [[1, "_CPPv4NK12tensorrt_llm7runtime10CudaStream3getEv", false]], "tensorrt_llm::runtime::cudastream::getdevice (c++ function)": [[1, "_CPPv4NK12tensorrt_llm7runtime10CudaStream9getDeviceEv", false]], "tensorrt_llm::runtime::cudastream::mdevice (c++ member)": [[1, "_CPPv4N12tensorrt_llm7runtime10CudaStream7mDeviceE", false]], "tensorrt_llm::runtime::cudastream::mstream (c++ member)": [[1, "_CPPv4N12tensorrt_llm7runtime10CudaStream7mStreamE", false]], "tensorrt_llm::runtime::cudastream::record (c++ function)": [[1, "_CPPv4NK12tensorrt_llm7runtime10CudaStream6recordEN9CudaEvent7pointerE", false], [1, "_CPPv4NK12tensorrt_llm7runtime10CudaStream6recordERK9CudaEvent", false]], "tensorrt_llm::runtime::cudastream::streamptr (c++ type)": [[1, "_CPPv4N12tensorrt_llm7runtime10CudaStream9StreamPtrE", false]], "tensorrt_llm::runtime::cudastream::synchronize (c++ function)": [[1, "_CPPv4NK12tensorrt_llm7runtime10CudaStream11synchronizeEv", false]], "tensorrt_llm::runtime::cudastream::wait (c++ function)": [[1, "_CPPv4NK12tensorrt_llm7runtime10CudaStream4waitEN9CudaEvent7pointerE", false], [1, "_CPPv4NK12tensorrt_llm7runtime10CudaStream4waitERK9CudaEvent", false]], "tensorrt_llm::runtime::datatypetraits (c++ struct)": [[1, 
"_CPPv4I_N8nvinfer18DataTypeE_b_bEN12tensorrt_llm7runtime14DataTypeTraitsE", false]], "tensorrt_llm::runtime::datatypetraits (c++ struct)": [[1, "_CPPv4I_N8nvinfer18DataTypeE_bEN12tensorrt_llm7runtime14DataTypeTraitsI9kDataType9kUnsignedXL1EEEE", false]], "tensorrt_llm::runtime::datatypetraits::name (c++ member)": [[1, "_CPPv4N12tensorrt_llm7runtime14DataTypeTraitsI9kDataType9kUnsignedXL1EEE4nameE", false]], "tensorrt_llm::runtime::datatypetraits::size (c++ member)": [[1, "_CPPv4N12tensorrt_llm7runtime14DataTypeTraitsI9kDataType9kUnsignedXL1EEE4sizeE", false]], "tensorrt_llm::runtime::datatypetraits::type (c++ type)": [[1, "_CPPv4N12tensorrt_llm7runtime14DataTypeTraitsI9kDataType9kUnsignedXL1EEE4typeE", false]], "tensorrt_llm::runtime::datatypetraits (c++ struct)": [[1, "_CPPv4I_bEN12tensorrt_llm7runtime14DataTypeTraitsIN8nvinfer18DataType5kBOOLE9kUnsignedEE", false]], "tensorrt_llm::runtime::datatypetraits::name (c++ member)": [[1, "_CPPv4N12tensorrt_llm7runtime14DataTypeTraitsIN8nvinfer18DataType5kBOOLE9kUnsignedE4nameE", false]], "tensorrt_llm::runtime::datatypetraits::size (c++ member)": [[1, "_CPPv4N12tensorrt_llm7runtime14DataTypeTraitsIN8nvinfer18DataType5kBOOLE9kUnsignedE4sizeE", false]], "tensorrt_llm::runtime::datatypetraits::type (c++ type)": [[1, "_CPPv4N12tensorrt_llm7runtime14DataTypeTraitsIN8nvinfer18DataType5kBOOLE9kUnsignedE4typeE", false]], "tensorrt_llm::runtime::datatypetraits (c++ struct)": [[1, "_CPPv4IEN12tensorrt_llm7runtime14DataTypeTraitsIN8nvinfer18DataType6kFLOATEEE", false]], "tensorrt_llm::runtime::datatypetraits::name (c++ member)": [[1, "_CPPv4N12tensorrt_llm7runtime14DataTypeTraitsIN8nvinfer18DataType6kFLOATEE4nameE", false]], "tensorrt_llm::runtime::datatypetraits::size (c++ member)": [[1, "_CPPv4N12tensorrt_llm7runtime14DataTypeTraitsIN8nvinfer18DataType6kFLOATEE4sizeE", false]], "tensorrt_llm::runtime::datatypetraits::type (c++ type)": [[1, "_CPPv4N12tensorrt_llm7runtime14DataTypeTraitsIN8nvinfer18DataType6kFLOATEE4typeE", false]], "tensorrt_llm::runtime::datatypetraits (c++ struct)": [[1, "_CPPv4IEN12tensorrt_llm7runtime14DataTypeTraitsIN8nvinfer18DataType5kHALFEEE", false]], "tensorrt_llm::runtime::datatypetraits::name (c++ member)": [[1, "_CPPv4N12tensorrt_llm7runtime14DataTypeTraitsIN8nvinfer18DataType5kHALFEE4nameE", false]], "tensorrt_llm::runtime::datatypetraits::size (c++ member)": [[1, "_CPPv4N12tensorrt_llm7runtime14DataTypeTraitsIN8nvinfer18DataType5kHALFEE4sizeE", false]], "tensorrt_llm::runtime::datatypetraits::type (c++ type)": [[1, "_CPPv4N12tensorrt_llm7runtime14DataTypeTraitsIN8nvinfer18DataType5kHALFEE4typeE", false]], "tensorrt_llm::runtime::datatypetraits (c++ struct)": [[1, "_CPPv4IEN12tensorrt_llm7runtime14DataTypeTraitsIN8nvinfer18DataType6kINT32EXL1EEEE", false]], "tensorrt_llm::runtime::datatypetraits::name (c++ member)": [[1, "_CPPv4N12tensorrt_llm7runtime14DataTypeTraitsIN8nvinfer18DataType6kINT32EXL1EEE4nameE", false]], "tensorrt_llm::runtime::datatypetraits::size (c++ member)": [[1, "_CPPv4N12tensorrt_llm7runtime14DataTypeTraitsIN8nvinfer18DataType6kINT32EXL1EEE4sizeE", false]], "tensorrt_llm::runtime::datatypetraits::type (c++ type)": [[1, "_CPPv4N12tensorrt_llm7runtime14DataTypeTraitsIN8nvinfer18DataType6kINT32EXL1EEE4typeE", false]], "tensorrt_llm::runtime::datatypetraits (c++ struct)": [[1, "_CPPv4IEN12tensorrt_llm7runtime14DataTypeTraitsIN8nvinfer18DataType6kINT32EEE", false]], "tensorrt_llm::runtime::datatypetraits::name (c++ member)": [[1, 
"_CPPv4N12tensorrt_llm7runtime14DataTypeTraitsIN8nvinfer18DataType6kINT32EE4nameE", false]], "tensorrt_llm::runtime::datatypetraits::size (c++ member)": [[1, "_CPPv4N12tensorrt_llm7runtime14DataTypeTraitsIN8nvinfer18DataType6kINT32EE4sizeE", false]], "tensorrt_llm::runtime::datatypetraits::type (c++ type)": [[1, "_CPPv4N12tensorrt_llm7runtime14DataTypeTraitsIN8nvinfer18DataType6kINT32EE4typeE", false]], "tensorrt_llm::runtime::datatypetraits (c++ struct)": [[1, "_CPPv4IEN12tensorrt_llm7runtime14DataTypeTraitsIN8nvinfer18DataType6kINT64EXL1EEEE", false]], "tensorrt_llm::runtime::datatypetraits::name (c++ member)": [[1, "_CPPv4N12tensorrt_llm7runtime14DataTypeTraitsIN8nvinfer18DataType6kINT64EXL1EEE4nameE", false]], "tensorrt_llm::runtime::datatypetraits::size (c++ member)": [[1, "_CPPv4N12tensorrt_llm7runtime14DataTypeTraitsIN8nvinfer18DataType6kINT64EXL1EEE4sizeE", false]], "tensorrt_llm::runtime::datatypetraits::type (c++ type)": [[1, "_CPPv4N12tensorrt_llm7runtime14DataTypeTraitsIN8nvinfer18DataType6kINT64EXL1EEE4typeE", false]], "tensorrt_llm::runtime::datatypetraits (c++ struct)": [[1, "_CPPv4IEN12tensorrt_llm7runtime14DataTypeTraitsIN8nvinfer18DataType6kINT64EEE", false]], "tensorrt_llm::runtime::datatypetraits::name (c++ member)": [[1, "_CPPv4N12tensorrt_llm7runtime14DataTypeTraitsIN8nvinfer18DataType6kINT64EE4nameE", false]], "tensorrt_llm::runtime::datatypetraits::size (c++ member)": [[1, "_CPPv4N12tensorrt_llm7runtime14DataTypeTraitsIN8nvinfer18DataType6kINT64EE4sizeE", false]], "tensorrt_llm::runtime::datatypetraits::type (c++ type)": [[1, "_CPPv4N12tensorrt_llm7runtime14DataTypeTraitsIN8nvinfer18DataType6kINT64EE4typeE", false]], "tensorrt_llm::runtime::datatypetraits (c++ struct)": [[1, "_CPPv4IEN12tensorrt_llm7runtime14DataTypeTraitsIN8nvinfer18DataType5kINT8EEE", false]], "tensorrt_llm::runtime::datatypetraits::name (c++ member)": [[1, "_CPPv4N12tensorrt_llm7runtime14DataTypeTraitsIN8nvinfer18DataType5kINT8EE4nameE", false]], "tensorrt_llm::runtime::datatypetraits::size (c++ member)": [[1, "_CPPv4N12tensorrt_llm7runtime14DataTypeTraitsIN8nvinfer18DataType5kINT8EE4sizeE", false]], "tensorrt_llm::runtime::datatypetraits::type (c++ type)": [[1, "_CPPv4N12tensorrt_llm7runtime14DataTypeTraitsIN8nvinfer18DataType5kINT8EE4typeE", false]], "tensorrt_llm::runtime::datatypetraits (c++ struct)": [[1, "_CPPv4I_bEN12tensorrt_llm7runtime14DataTypeTraitsIN8nvinfer18DataType6kUINT8E9kUnsignedEE", false]], "tensorrt_llm::runtime::datatypetraits::name (c++ member)": [[1, "_CPPv4N12tensorrt_llm7runtime14DataTypeTraitsIN8nvinfer18DataType6kUINT8E9kUnsignedE4nameE", false]], "tensorrt_llm::runtime::datatypetraits::size (c++ member)": [[1, "_CPPv4N12tensorrt_llm7runtime14DataTypeTraitsIN8nvinfer18DataType6kUINT8E9kUnsignedE4sizeE", false]], "tensorrt_llm::runtime::datatypetraits