<section id="running-with-weight-streaming-to-reduce-gpu-memory-consumption">
<span id="weight-streaming"></span><h1>Running With Weight Streaming to Reduce GPU Memory Consumption<a class="headerlink" href="#running-with-weight-streaming-to-reduce-gpu-memory-consumption" title="Link to this heading"></a></h1>

TensorRT Weight Streaming can offload some weights to CPU memory and stream them into GPU memory at runtime. This reduces the weight footprint in GPU memory, so larger models or larger batch sizes can run within the same GPU memory budget.

At build time, build the engine with `--weight_streaming --gemm_plugin disable`, since Weight Streaming only supports non-plugin weights. At runtime, run with `--gpu_weights_percent x` to configure the fraction of weights kept in GPU memory, where `x` is a value from `0.0` to `1.0`.

Here is an example that runs llama-7b with Weight Streaming:
<div class="highlight-bash notranslate"><div class="highlight"><pre><span></span><span class="c1"># Convert model as normal. Assume hugging face model is in llama-7b-hf/</span>
python3<span class="w"> </span>examples/llama/convert_checkpoint.py<span class="w"> </span><span class="se">\</span>
<span class="w"> </span>--model_dir<span class="w"> </span>llama-7b-hf/<span class="w"> </span><span class="se">\</span>
<span class="w"> </span>--output_dir<span class="w"> </span>/tmp/llama_7b/trt_ckpt/fp16/1-gpu/<span class="w"> </span><span class="se">\</span>
<span class="w"> </span>--dtype<span class="w"> </span>float16
<span class="c1"># Build engine that enabled Weight Streaming.</span>
trtllm-build<span class="w"> </span><span class="se">\</span>
<span class="w"> </span>--checkpoint_dir<span class="w"> </span>/tmp/llama_7b/trt_ckpt/fp16/1-gpu/<span class="w"> </span><span class="se">\</span>
<span class="w"> </span>--output_dir<span class="w"> </span>/tmp/llama_7b/trt_engines/fp16/1-gpu/<span class="w"> </span><span class="se">\</span>
<span class="w"> </span>--weight_streaming<span class="w"> </span><span class="se">\</span>
<span class="w"> </span>--gemm_plugin<span class="w"> </span>disable<span class="w"> </span><span class="se">\</span>
<span class="w"> </span>--max_batch_size<span class="w"> </span><span class="m">128</span><span class="w"> </span><span class="se">\</span>
<span class="w"> </span>--max_input_len<span class="w"> </span><span class="m">512</span><span class="w"> </span><span class="se">\</span>
<span class="w"> </span>--max_seq_len<span class="w"> </span><span class="m">562</span>
<span class="c1"># Run the engine with 20% weights in GPU memory.</span>
python3<span class="w"> </span>examples/summarize.py<span class="w"> </span><span class="se">\</span>
<span class="w"> </span>--engine_dir<span class="w"> </span>/tmp/llama_7b/trt_engines/fp16/1-gpu/<span class="w"> </span><span class="se">\</span>
<span class="w"> </span>--batch_size<span class="w"> </span><span class="m">1</span><span class="w"> </span><span class="se">\</span>
<span class="w"> </span>--test_trt_llm<span class="w"> </span><span class="se">\</span>
<span class="w"> </span>--hf_model_dir<span class="w"> </span>llama-7b-hf/<span class="w"> </span><span class="se">\</span>
<span class="w"> </span>--data_type<span class="w"> </span>fp16<span class="w"> </span><span class="se">\</span>
<span class="w"> </span>--gpu_weights_percent<span class="w"> </span><span class="m">0</span>.2
</pre></div>
</div>

We can also benchmark the performance impact of Weight Streaming across different GPU weight percentages. Here is an example:
<div class="highlight-bash notranslate"><div class="highlight"><pre><span></span>python3<span class="w"> </span>benchmarks/python/benchmark.py<span class="w"> </span><span class="se">\</span>
<span class="w"> </span>--engine_dir<span class="w"> </span>/tmp/llama_7b/trt_engines/fp16/1-gpu/<span class="w"> </span><span class="se">\</span>
<span class="w"> </span>--batch_size<span class="w"> </span><span class="s2">&quot;1;32&quot;</span><span class="w"> </span><span class="se">\</span>
<span class="w"> </span>--input_output_len<span class="w"> </span><span class="s2">&quot;256,32&quot;</span><span class="w"> </span><span class="se">\</span>
<span class="w"> </span>--gpu_weights_percent<span class="w"> </span><span class="s2">&quot;0.0;0.3;0.6;1.0&quot;</span><span class="w"> </span><span class="se">\</span>
<span class="w"> </span>--dtype<span class="w"> </span>float16<span class="w"> </span><span class="se">\</span>
<span class="w"> </span>--csv<span class="w"> </span><span class="se">\</span>
<span class="w"> </span>--log_level<span class="w"> </span>verbose
</pre></div>
</div>
<section id="api-changes">
<h2>API Changes<a class="headerlink" href="#api-changes" title="Link to this heading"></a></h2>

To build engines with Weight Streaming enabled, the builder API has the following changes (see the sketch after this list):

- Added a new bool member `weight_streaming` to the `BuildConfig` class.
- Added a new bool parameter `weight_streaming` to the `create_builder_config` method of the `Builder` class.
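
For reference, here is a minimal sketch of building a Weight Streaming engine through the Python builder API instead of the `trtllm-build` CLI. It reuses the checkpoint and engine paths from the example above and assumes a LLaMA checkpoint loaded with `LLaMAForCausalLM.from_checkpoint` and the high-level `tensorrt_llm.build` entry point; exact signatures may differ between TensorRT-LLM versions.

```python
import tensorrt_llm
from tensorrt_llm import BuildConfig
from tensorrt_llm.models import LLaMAForCausalLM

# Load the TensorRT-LLM checkpoint produced by convert_checkpoint.py
# (path taken from the example above).
model = LLaMAForCausalLM.from_checkpoint("/tmp/llama_7b/trt_ckpt/fp16/1-gpu/")

build_config = BuildConfig(max_batch_size=128, max_input_len=512, max_seq_len=562)
# Enable Weight Streaming via the new BuildConfig member.
build_config.weight_streaming = True
# Weight Streaming only supports non-plugin weights, so keep the GEMM plugin disabled
# (the explicit assignment here is an assumption; it mirrors `--gemm_plugin disable`).
build_config.plugin_config.gemm_plugin = None

engine = tensorrt_llm.build(model, build_config)
engine.save("/tmp/llama_7b/trt_engines/fp16/1-gpu/")
```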

To run with Weight Streaming through the `Executor` API, its configuration class `ExecutorConfig` has the following changes:

- Added a new float parameter `gpuWeightsPercent` to the constructor of `ExecutorConfig`.
- Added two member functions, `setGpuWeightsPercent` and `getGpuWeightsPercent`, to set and get the GPU weights percentage.

Here is an example of creating an `Executor` with Weight Streaming, using the `setGpuWeightsPercent` setter:

```cpp
...
// Keep 50% of the weights in GPU memory; the rest are streamed from CPU memory.
auto executorConfig = tle::ExecutorConfig();
executorConfig.setGpuWeightsPercent(0.5);
auto executor = tle::Executor("model_path", tensorrt_llm::executor::ModelType::kDECODER_ONLY, executorConfig);
...
```