diff --git a/Translated_Book/ch02/2.6使用滑动窗口进行数据采样.ipynb b/Translated_Book/ch02/2.6使用滑动窗口进行数据采样.ipynb index 7c1aeb6..c8d29f4 100644 --- a/Translated_Book/ch02/2.6使用滑动窗口进行数据采样.ipynb +++ b/Translated_Book/ch02/2.6使用滑动窗口进行数据采样.ipynb @@ -75,7 +75,7 @@ "import tiktoken\n", "\n", "tokenizer = tiktoken.get_encoding(\"gpt2\")\n", - "with open(\"/Users/zhihu123/Project/other/llms-from-scratch-cn/ch02/01_main-chapter-code/the-verdict.txt\", \"r\", encoding=\"utf-8\") as f:\n", + "with open(\"the-verdict.txt\", \"r\", encoding=\"utf-8\") as f:\n", " raw_text = f.read()\n", "enc_text = tokenizer.encode(raw_text)\n", "print(len(enc_text))" @@ -441,7 +441,7 @@ } ], "source": [ - "with open(\"/Users/zhihu123/Project/other/llms-from-scratch-cn/ch02/01_main-chapter-code/the-verdict.txt\", \"r\", encoding=\"utf-8\") as f:\n", + "with open(\"the-verdict.txt\", \"r\", encoding=\"utf-8\") as f:\n", " raw_text = f.read()\n", " dataloader = create_dataloader_v1(\n", " raw_text, batch_size=1, max_length=4, stride=1, shuffle=False)\n", diff --git a/Translated_Book/ch05/5.1 在未标记的数据上进行预训练.ipynb b/Translated_Book/ch05/5.1 在未标记的数据上进行预训练.ipynb new file mode 100644 index 0000000..c1662ca --- /dev/null +++ b/Translated_Book/ch05/5.1 在未标记的数据上进行预训练.ipynb @@ -0,0 +1,1950 @@ +{ + "cells": [ + { + "cell_type": "markdown", + "id": "bae559a1", + "metadata": {}, + "source": [ + "# 第五章 在未标记的数据上进行预训练" + ] + }, + { + "cell_type": "markdown", + "id": "e3b02c54", + "metadata": {}, + "source": [ + "**本章介绍**:" + ] + }, + { + "cell_type": "markdown", + "id": "bd2ed84e", + "metadata": {}, + "source": [ + "- 计算训练集和验证集的损失,以评估训练过程中 LLM 生成文本的质量\n", + "- 实现训练函数并对 LLM 进行预训练\n", + "- 保存和加载模型权重,以便继续训练 LLM\n", + "- 加载 OpenAI 的预训练权重" + ] + }, + { + "cell_type": "markdown", + "id": "73c209fb", + "metadata": {}, + "source": [ + "在前几章中,我们实现了数据采样、注意力机制,并编写了 LLM 架构的代码。本章我们将主要关注如何实现训练函数并对 LLM 进行预训练,如图 5.1 所示。" + ] + }, + { + "cell_type": "markdown", + "id": "29f55f31", + "metadata": {}, + "source": [
+ "**图 5.1 对于构建 LLM 的三个主要阶段的心智模型,包括在通用文本数据集上预训练 LLM,以及在带标签的数据集上对其进行微调。本章将主要关注 LLM 的预训练,包括实现训练代码,评估性能,以及保存和加载模型权重。**" + ] + }, + { + "attachments": {}, + "cell_type": "markdown", + "id": "5f4fdd61", + "metadata": {}, + "source": [ + "![fig5.1](https://github.com/Pr04Ark/llms-from-scratch-cn/blob/trans01/Translated_Book/img/fig-5-1.jpg?raw=true)" + ] + }, + { + "cell_type": "markdown", + "id": "783519e0", + "metadata": {}, + "source": [ + "如图 5.1 所示,我们将进一步学习基本的模型评估技术,以便衡量生成文本的质量,这是在训练过程中优化 LLM 的关键步骤。此外,我们还将探讨如何加载预训练的权重,这将为我们在后续章节中对 LLM 进行微调提供坚实的基础。" + ] + }, + { + "cell_type": "markdown", + "id": "f582a3ea", + "metadata": {}, + "source": [ + "**权重参数(Weight parameters)**" + ] + }, + { + "cell_type": "markdown", + "id": "56f117c0", + "metadata": {}, + "source": [ + "在 LLM 和其他深度学习模型的背景下,权重(*weights*)指的是学习过程中需要调整的可训练参数。这些权重也被称为权重参数(*weight parameters*)或简称为参数(*parameters*)。在像 PyTorch 这样的框架中,这些权重存储在线性层中,例如我们在第 3 章实现多头注意力模块和在第 4 章实现 GPTModel 时使用的那些层。在初始化一个层(`new_layer = torch.nn.Linear(...)`)后,我们可以通过 `.weight` 属性访问其权重,即 `new_layer.weight`。此外,为了方便,PyTorch 允许通过 `model.parameters()` 方法直接访问模型的所有可训练参数(包括权重和偏置),我们将在后面实现模型训练时用到该方法。" + ] + }, + { + "cell_type": "markdown", + "id": "708fa8b4", + "metadata": {}, + "source": [ + "## 5.1 评估文本生成模型" + ] + }, + { + "cell_type": "markdown", + "id": "6220ac20", + "metadata": {}, + "source": [ + "我们将从上一章的代码出发,介绍如何使用 LLM 进行文本生成,然后讨论评估生成文本质量的基本方法。本节以及本章剩余部分的内容概述如图 5.2 所示。" + ] + }, + { + "cell_type": "markdown", + "id": "2058d5ab", + "metadata": {}, + "source": [ + "**图 5.2 本章的主题内容如下。我们首先回顾上一章的文本生成内容,然后实现在预训练阶段进行模型评估的基本技术。**" + ] + }, + { + "cell_type": "markdown", + "id": "0f0d0449", + "metadata": {}, + "source": [ + "![fig5.2](https://github.com/Pr04Ark/llms-from-scratch-cn/blob/trans01/Translated_Book/img/fig-5-2.jpg?raw=true)" + ] + }, + { + "cell_type": "markdown", + "id": "ff6e7627", + "metadata": {}, + "source": [ + "如图 5.2 所示,下一小节我们将回顾上一章末尾设置的文本生成内容,然后在后续的小节中深入研究文本评估和计算训练及验证损失。" + ] + }, + { + "cell_type": "markdown", + "id": "6aefa0d9", + "metadata": {}, + 
"source": [ + "### 5.1.1 使用 GPT 生成文本" + ] + }, + { + "cell_type": "markdown", + "id": "1beee80f", + "metadata": {}, + "source": [ + "在本节,我们将初始化 LLM,并简要回顾在第四章中实现的文本生成过程。首先,我们将初始化一个 GPT 模型,该模型将在本章中被评估和训练。我们将使用第四章中的 GPTModel 类和 GPT_CONFIG_124M 字典来完成模型的初始化:" + ] + }, + { + "cell_type": "code", + "execution_count": 21, + "id": "64eae149", + "metadata": { + "metadata": {} + }, + "outputs": [ + { + "data": { + "text/plain": [ + "GPTModel(\n", + " (tok_emb): Embedding(50257, 768)\n", + " (pos_emb): Embedding(256, 768)\n", + " (drop_emb): Dropout(p=0.1, inplace=False)\n", + " (trf_blocks): Sequential(\n", + " (0): TransformerBlock(\n", + " (att): MultiHeadAttention(\n", + " (W_query): Linear(in_features=768, out_features=768, bias=False)\n", + " (W_key): Linear(in_features=768, out_features=768, bias=False)\n", + " (W_value): Linear(in_features=768, out_features=768, bias=False)\n", + " (out_proj): Linear(in_features=768, out_features=768, bias=True)\n", + " (dropout): Dropout(p=0.1, inplace=False)\n", + " )\n", + " (ff): FeedForward(\n", + " (layers): Sequential(\n", + " (0): Linear(in_features=768, out_features=3072, bias=True)\n", + " (1): GELU()\n", + " (2): Linear(in_features=3072, out_features=768, bias=True)\n", + " (3): Dropout(p=0.1, inplace=False)\n", + " )\n", + " )\n", + " (norm1): LayerNorm()\n", + " (norm2): LayerNorm()\n", + " (drop_resid): Dropout(p=0.1, inplace=False)\n", + " )\n", + " (1): TransformerBlock(\n", + " (att): MultiHeadAttention(\n", + " (W_query): Linear(in_features=768, out_features=768, bias=False)\n", + " (W_key): Linear(in_features=768, out_features=768, bias=False)\n", + " (W_value): Linear(in_features=768, out_features=768, bias=False)\n", + " (out_proj): Linear(in_features=768, out_features=768, bias=True)\n", + " (dropout): Dropout(p=0.1, inplace=False)\n", + " )\n", + " (ff): FeedForward(\n", + " (layers): Sequential(\n", + " (0): Linear(in_features=768, out_features=3072, bias=True)\n", + " (1): GELU()\n", + " (2): 
Linear(in_features=3072, out_features=768, bias=True)\n", + " (3): Dropout(p=0.1, inplace=False)\n", + " )\n", + " )\n", + " (norm1): LayerNorm()\n", + " (norm2): LayerNorm()\n", + " (drop_resid): Dropout(p=0.1, inplace=False)\n", + " )\n", + " (2): TransformerBlock(\n", + " (att): MultiHeadAttention(\n", + " (W_query): Linear(in_features=768, out_features=768, bias=False)\n", + " (W_key): Linear(in_features=768, out_features=768, bias=False)\n", + " (W_value): Linear(in_features=768, out_features=768, bias=False)\n", + " (out_proj): Linear(in_features=768, out_features=768, bias=True)\n", + " (dropout): Dropout(p=0.1, inplace=False)\n", + " )\n", + " (ff): FeedForward(\n", + " (layers): Sequential(\n", + " (0): Linear(in_features=768, out_features=3072, bias=True)\n", + " (1): GELU()\n", + " (2): Linear(in_features=3072, out_features=768, bias=True)\n", + " (3): Dropout(p=0.1, inplace=False)\n", + " )\n", + " )\n", + " (norm1): LayerNorm()\n", + " (norm2): LayerNorm()\n", + " (drop_resid): Dropout(p=0.1, inplace=False)\n", + " )\n", + " (3): TransformerBlock(\n", + " (att): MultiHeadAttention(\n", + " (W_query): Linear(in_features=768, out_features=768, bias=False)\n", + " (W_key): Linear(in_features=768, out_features=768, bias=False)\n", + " (W_value): Linear(in_features=768, out_features=768, bias=False)\n", + " (out_proj): Linear(in_features=768, out_features=768, bias=True)\n", + " (dropout): Dropout(p=0.1, inplace=False)\n", + " )\n", + " (ff): FeedForward(\n", + " (layers): Sequential(\n", + " (0): Linear(in_features=768, out_features=3072, bias=True)\n", + " (1): GELU()\n", + " (2): Linear(in_features=3072, out_features=768, bias=True)\n", + " (3): Dropout(p=0.1, inplace=False)\n", + " )\n", + " )\n", + " (norm1): LayerNorm()\n", + " (norm2): LayerNorm()\n", + " (drop_resid): Dropout(p=0.1, inplace=False)\n", + " )\n", + " (4): TransformerBlock(\n", + " (att): MultiHeadAttention(\n", + " (W_query): Linear(in_features=768, out_features=768, bias=False)\n", + 
" (W_key): Linear(in_features=768, out_features=768, bias=False)\n", + " (W_value): Linear(in_features=768, out_features=768, bias=False)\n", + " (out_proj): Linear(in_features=768, out_features=768, bias=True)\n", + " (dropout): Dropout(p=0.1, inplace=False)\n", + " )\n", + " (ff): FeedForward(\n", + " (layers): Sequential(\n", + " (0): Linear(in_features=768, out_features=3072, bias=True)\n", + " (1): GELU()\n", + " (2): Linear(in_features=3072, out_features=768, bias=True)\n", + " (3): Dropout(p=0.1, inplace=False)\n", + " )\n", + " )\n", + " (norm1): LayerNorm()\n", + " (norm2): LayerNorm()\n", + " (drop_resid): Dropout(p=0.1, inplace=False)\n", + " )\n", + " (5): TransformerBlock(\n", + " (att): MultiHeadAttention(\n", + " (W_query): Linear(in_features=768, out_features=768, bias=False)\n", + " (W_key): Linear(in_features=768, out_features=768, bias=False)\n", + " (W_value): Linear(in_features=768, out_features=768, bias=False)\n", + " (out_proj): Linear(in_features=768, out_features=768, bias=True)\n", + " (dropout): Dropout(p=0.1, inplace=False)\n", + " )\n", + " (ff): FeedForward(\n", + " (layers): Sequential(\n", + " (0): Linear(in_features=768, out_features=3072, bias=True)\n", + " (1): GELU()\n", + " (2): Linear(in_features=3072, out_features=768, bias=True)\n", + " (3): Dropout(p=0.1, inplace=False)\n", + " )\n", + " )\n", + " (norm1): LayerNorm()\n", + " (norm2): LayerNorm()\n", + " (drop_resid): Dropout(p=0.1, inplace=False)\n", + " )\n", + " (6): TransformerBlock(\n", + " (att): MultiHeadAttention(\n", + " (W_query): Linear(in_features=768, out_features=768, bias=False)\n", + " (W_key): Linear(in_features=768, out_features=768, bias=False)\n", + " (W_value): Linear(in_features=768, out_features=768, bias=False)\n", + " (out_proj): Linear(in_features=768, out_features=768, bias=True)\n", + " (dropout): Dropout(p=0.1, inplace=False)\n", + " )\n", + " (ff): FeedForward(\n", + " (layers): Sequential(\n", + " (0): Linear(in_features=768, 
out_features=3072, bias=True)\n", + " (1): GELU()\n", + " (2): Linear(in_features=3072, out_features=768, bias=True)\n", + " (3): Dropout(p=0.1, inplace=False)\n", + " )\n", + " )\n", + " (norm1): LayerNorm()\n", + " (norm2): LayerNorm()\n", + " (drop_resid): Dropout(p=0.1, inplace=False)\n", + " )\n", + " (7): TransformerBlock(\n", + " (att): MultiHeadAttention(\n", + " (W_query): Linear(in_features=768, out_features=768, bias=False)\n", + " (W_key): Linear(in_features=768, out_features=768, bias=False)\n", + " (W_value): Linear(in_features=768, out_features=768, bias=False)\n", + " (out_proj): Linear(in_features=768, out_features=768, bias=True)\n", + " (dropout): Dropout(p=0.1, inplace=False)\n", + " )\n", + " (ff): FeedForward(\n", + " (layers): Sequential(\n", + " (0): Linear(in_features=768, out_features=3072, bias=True)\n", + " (1): GELU()\n", + " (2): Linear(in_features=3072, out_features=768, bias=True)\n", + " (3): Dropout(p=0.1, inplace=False)\n", + " )\n", + " )\n", + " (norm1): LayerNorm()\n", + " (norm2): LayerNorm()\n", + " (drop_resid): Dropout(p=0.1, inplace=False)\n", + " )\n", + " (8): TransformerBlock(\n", + " (att): MultiHeadAttention(\n", + " (W_query): Linear(in_features=768, out_features=768, bias=False)\n", + " (W_key): Linear(in_features=768, out_features=768, bias=False)\n", + " (W_value): Linear(in_features=768, out_features=768, bias=False)\n", + " (out_proj): Linear(in_features=768, out_features=768, bias=True)\n", + " (dropout): Dropout(p=0.1, inplace=False)\n", + " )\n", + " (ff): FeedForward(\n", + " (layers): Sequential(\n", + " (0): Linear(in_features=768, out_features=3072, bias=True)\n", + " (1): GELU()\n", + " (2): Linear(in_features=3072, out_features=768, bias=True)\n", + " (3): Dropout(p=0.1, inplace=False)\n", + " )\n", + " )\n", + " (norm1): LayerNorm()\n", + " (norm2): LayerNorm()\n", + " (drop_resid): Dropout(p=0.1, inplace=False)\n", + " )\n", + " (9): TransformerBlock(\n", + " (att): MultiHeadAttention(\n", + " 
(W_query): Linear(in_features=768, out_features=768, bias=False)\n", + " (W_key): Linear(in_features=768, out_features=768, bias=False)\n", + " (W_value): Linear(in_features=768, out_features=768, bias=False)\n", + " (out_proj): Linear(in_features=768, out_features=768, bias=True)\n", + " (dropout): Dropout(p=0.1, inplace=False)\n", + " )\n", + " (ff): FeedForward(\n", + " (layers): Sequential(\n", + " (0): Linear(in_features=768, out_features=3072, bias=True)\n", + " (1): GELU()\n", + " (2): Linear(in_features=3072, out_features=768, bias=True)\n", + " (3): Dropout(p=0.1, inplace=False)\n", + " )\n", + " )\n", + " (norm1): LayerNorm()\n", + " (norm2): LayerNorm()\n", + " (drop_resid): Dropout(p=0.1, inplace=False)\n", + " )\n", + " (10): TransformerBlock(\n", + " (att): MultiHeadAttention(\n", + " (W_query): Linear(in_features=768, out_features=768, bias=False)\n", + " (W_key): Linear(in_features=768, out_features=768, bias=False)\n", + " (W_value): Linear(in_features=768, out_features=768, bias=False)\n", + " (out_proj): Linear(in_features=768, out_features=768, bias=True)\n", + " (dropout): Dropout(p=0.1, inplace=False)\n", + " )\n", + " (ff): FeedForward(\n", + " (layers): Sequential(\n", + " (0): Linear(in_features=768, out_features=3072, bias=True)\n", + " (1): GELU()\n", + " (2): Linear(in_features=3072, out_features=768, bias=True)\n", + " (3): Dropout(p=0.1, inplace=False)\n", + " )\n", + " )\n", + " (norm1): LayerNorm()\n", + " (norm2): LayerNorm()\n", + " (drop_resid): Dropout(p=0.1, inplace=False)\n", + " )\n", + " (11): TransformerBlock(\n", + " (att): MultiHeadAttention(\n", + " (W_query): Linear(in_features=768, out_features=768, bias=False)\n", + " (W_key): Linear(in_features=768, out_features=768, bias=False)\n", + " (W_value): Linear(in_features=768, out_features=768, bias=False)\n", + " (out_proj): Linear(in_features=768, out_features=768, bias=True)\n", + " (dropout): Dropout(p=0.1, inplace=False)\n", + " )\n", + " (ff): FeedForward(\n", + " 
(layers): Sequential(\n", + " (0): Linear(in_features=768, out_features=3072, bias=True)\n", + " (1): GELU()\n", + " (2): Linear(in_features=3072, out_features=768, bias=True)\n", + " (3): Dropout(p=0.1, inplace=False)\n", + " )\n", + " )\n", + " (norm1): LayerNorm()\n", + " (norm2): LayerNorm()\n", + " (drop_resid): Dropout(p=0.1, inplace=False)\n", + " )\n", + " )\n", + " (final_norm): LayerNorm()\n", + " (out_head): Linear(in_features=768, out_features=50257, bias=False)\n", + ")" + ] + }, + "execution_count": 21, + "metadata": {}, + "output_type": "execute_result" + } + ], + "source": [ + "import torch\n", + "from chapter04 import GPTModel\n", + "GPT_CONFIG_124M = {\n", + "\"vocab_size\": 50257,\n", + "\"context_length\": 256, #A\n", + "\"emb_dim\": 768,\n", + "\"n_heads\": 12,\n", + "\"n_layers\": 12,\n", + "\"drop_rate\": 0.1, #B\n", + "\"qkv_bias\": False\n", + "}\n", + "torch.manual_seed(123)\n", + "model = GPTModel(GPT_CONFIG_124M)\n", + "model.eval()" + ] + }, + { + "cell_type": "markdown", + "id": "9d97ef59", + "metadata": {}, + "source": [ + "对于 GPT_CONFIG_124M 字典,我们与上一章相比做出的唯一调整是将上下文长度(context_length)缩减到 256 个 tokens。这种改变降低了模型训练的计算压力,使得在普通的笔记本电脑上进行训练成为可能。" + ] + }, + { + "cell_type": "markdown", + "id": "f58ad472", + "metadata": {}, + "source": [ + "拥有 1.24 亿参数的 GPT-2 模型原本被配置为处理 1024 个 tokens。在训练过程结束后,我们将在本章末尾更新上下文大小设置,并加载预训练的权重,以便与配置为 1024 tokens 上下文长度的模型一起工作。" + ] + }, + { + "cell_type": "markdown", + "id": "933d6219", + "metadata": {}, + "source": [ + "借助 GPTModel 实例,我们采用了上一章介绍的 `generate_text_simple` 函数,并引入了两个实用的函数,`text_to_token_ids` 和 `token_ids_to_text`。这些函数方便我们在文本和 token 表示之间进行转换,我们将在本章中频繁使用它们。为了提供更清晰的理解,我们在深入代码之前,通过图 5.3 来展示这个过程。" + ] + }, + { + "cell_type": "markdown", + "id": "099fb30b", + "metadata": {}, + "source": [ + "**图 5.3 文本生成过程包括将文本编码为 token ID,随后被 LLM 处理为 logit 向量。之后这些 logit 向量被转换回 token ID,最后再被解码为文本形式。**" + ] + }, + { + "cell_type": "markdown", + "id": "ee7b74da", + "metadata": {}, + "source": [ + 
"![fig5.3](https://github.com/Pr04Ark/llms-from-scratch-cn/blob/trans01/Translated_Book/img/fig-5-3.png?raw=true)" + ] + }, + { + "cell_type": "markdown", + "id": "bac8aaf9", + "metadata": {}, + "source": [ + "图 5.3 描绘了使用 GPT 模型进行的文本生成的三步。首先,如第二章所述,分词器将输入文本转换为一系列的 token ID。其次,模型接收这些 token ID,并生成相应的 logit,这些 logit 是向量,代表词汇表中每个令牌的概率分布,如第四章所述。最后,这些 logit 被转换回 token ID,分词器将其解码为人类可读的文本,从而完成从文本输入到文本输出的循环。" + ] + }, + { + "cell_type": "markdown", + "id": "6f5c65de", + "metadata": {}, + "source": [ + "我们实现了如下文本生成过程的代码:" + ] + }, + { + "cell_type": "code", + "execution_count": 22, + "id": "f955fbdf", + "metadata": { + "metadata": {} + }, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "Output text:\n", + " Every effort moves you rentingetic wasnم refres RexMeCHicular stren\n" + ] + } + ], + "source": [ + "import tiktoken\n", + "from chapter04 import generate_text_simple\n", + "\n", + "def text_to_token_ids(text, tokenizer):\n", + " encoded = tokenizer.encode(text, allowed_special={'<|endoftext|>'})\n", + " encoded_tensor = torch.tensor(encoded).unsqueeze(0) # 添加批次维度\n", + " return encoded_tensor\n", + "\n", + "def token_ids_to_text(token_ids, tokenizer):\n", + " flat = token_ids.squeeze(0) # 删除批次维度\n", + " return tokenizer.decode(flat.tolist())\n", + "\n", + "start_context = \"Every effort moves you\"\n", + "tokenizer = tiktoken.get_encoding(\"gpt2\")\n", + "\n", + "token_ids = generate_text_simple(\n", + " model=model,\n", + " idx=text_to_token_ids(start_context, tokenizer),\n", + " max_new_tokens=10,\n", + " context_size=GPT_CONFIG_124M[\"context_length\"]\n", + ")\n", + "print(\"Output text:\\n\", token_ids_to_text(token_ids, tokenizer))" + ] + }, + { + "cell_type": "markdown", + "id": "fac02310", + "metadata": {}, + "source": [ + "使用前面的代码,模型会生成以下文本:" + ] + }, + { + "cell_type": "markdown", + "id": "32ee5054", + "metadata": {}, + "source": [ + "```\n", + "Output text:\n", + "Every effort moves you rentingetic wasnم refres RexMeCHicular 
stren\n", + "```" + ] + }, + { + "cell_type": "markdown", + "id": "af55c67f", + "metadata": {}, + "source": [ + "从输出结果来看,模型显然还无法生成连贯的文本,因为它还未经过训练。为了定义何为\"连贯的\"或\"高质量的\"文本,我们需要实现一种数值化的方法来评估生成的内容。这种方法将使我们能够在整个训练过程中监控并提升模型的性能。" + ] + }, + { + "cell_type": "markdown", + "id": "d6ced9d3", + "metadata": {}, + "source": [ + "接下来的部分将介绍我们如何为生成的输出计算损失指标(*loss metric*)。这个损失作为训练进度的衡量和成功的标志。此外,在后续关于微调 LLM 的章节中,我们将回顾评估模型质量的其他方法。" + ] + }, + { + "cell_type": "markdown", + "id": "a6ac4dbe", + "metadata": { + "metadata": {} + }, + "source": [ + "### 5.1.2 计算文本生成损失" + ] + }, + { + "cell_type": "markdown", + "id": "6a758181", + "metadata": {}, + "source": [ + "在本节中,我们将深入探讨一种通过计算文本生成损失,以量化评估训练过程中生成的文本质量的技术。我们将通过一个实际的例子,逐步深入解析这个主题,以便让概念更加清晰并易于实践。我们先简要回顾第二章中的数据加载,以及第四章中使用 `generate_text_simple` 函数生成文本的过程。" + ] + }, + { + "cell_type": "markdown", + "id": "7a6d3342", + "metadata": {}, + "source": [ + "图 5.4 以五步流程清晰地描绘了从输入文本到 LLM 生成文本的整个过程。" + ] + }, + { + "cell_type": "markdown", + "id": "b1f5e953", + "metadata": { + "metadata": {}, + "vscode": { + "languageId": "markdown" + } + }, + "source": [ + "**图 5.4 对于图片左侧显示的三个输入,我们会为每一个输入 token 计算一个向量,该向量包含对应于词汇表中每个 token 的概率分数。每个向量中概率分数最高的索引位置代表最可能的下一个 token ID。选择与最高概率分数相关联的这些 token ID,并将其映射回一个文本,这个文本就代表模型生成的文本。**" + ] + }, + { + "cell_type": "markdown", + "id": "a8a713e2", + "metadata": {}, + "source": [ + "![fig5.4](https://github.com/Pr04Ark/llms-from-scratch-cn/blob/trans01/Translated_Book/img/fig-5-4.jpg?raw=true)" + ] + }, + { + "cell_type": "markdown", + "id": "8dd4c3f7", + "metadata": {}, + "source": [ + "图 5.4 中的文本生成流程详细描述了第四章中 `generate_text_simple` 函数的内部工作原理。在本节后面计算生成文本质量的损失之前,我们需要先执行这些相同的初始步骤。" + ] + }, + { + "cell_type": "markdown", + "id": "5bd17c35", + "metadata": {}, + "source": [ + "图 5.4 以一个只有 7 个 tokens 的小型词汇表为例,概述了文本生成过程,以便在单页上展示此图像。然而,我们的 GPT 模型使用的是一个包含 50,257 个单词的大型词汇表;因此,在接下来的代码中,token ID 的范围将是 0 到 50,256,而不仅仅是 0 到 6。" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "此外,为了简化,图 5.4 仅展示了一个文本示例(\"every effort moves\")。在接下来实现图 5.4 步骤的代码示例中,我们将使用两个输入示例(\"every effort moves\" 和 \"I really like\")作为 GPT 模型的输入。" + ] + }, + { + "cell_type": "markdown", + "id": "664aaa24", + "metadata": {}, + "source": [ + "考虑两个输入示例,这些示例已经被转换为对应的 token ID,对应于图 5.4 中的步骤 1:" + ] + }, + { + "cell_type": "code", + "execution_count": 23, + "id": "ce0dbb8d", + "metadata": { + "metadata": {} + }, + "outputs": [], + "source": [ + "inputs = torch.tensor([[16833, 3626, 6100], # [\"every effort moves\",\n", + " [40, 1107, 588]]) # \"I really like\"]" + ] + }, + { + "cell_type": "markdown", + "id": "ffa56920", + "metadata": {}, + "source": [ + "与这些输入相匹配,目标(`targets`)包含了我们希望模型生成的 token ID:" + ] + }, + { + "cell_type": "code", + "execution_count": 24, + "id": "df02aa75", + "metadata": { + "metadata": {} + }, + "outputs": [], + "source": [ + "targets = torch.tensor([[3626, 6100, 345 ], # [\" effort moves you\",\n", + " [588, 428, 11311]]) # \" really like chocolate\"]" + ] + }, + { + "cell_type": "markdown", + "id": "2089e68e", + "metadata": {}, + "source": [ + "请注意,目标与输入相同,只不过是向前移动了一个位置,这是我们在第二章实现数据加载器时所讨论过的概念。这种移位策略对于训练模型预测序列中的下一个元素至关重要。" + ] + }, + { + "cell_type": "markdown", + "id": "246c2bdf", + "metadata": {}, + "source": [ + "我们将输入送入模型,为两个输入示例(每个示例由三个 tokens 组成)计算 logit 向量,并应用 softmax 函数将这些 logit 值转换为概率分数,这对应于图 5.4 中的步骤 2:" + ] + }, + { + "cell_type": "code", + "execution_count": 25, + "id": "de5d2a00", + "metadata": { + "metadata": {} + }, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "torch.Size([2, 3, 50257])\n" + ] + } + ], + "source": [ + "with torch.no_grad(): #A\n", + " logits = model(inputs)\n", + "probas = torch.softmax(logits, dim=-1) # 词表中每个 token 的概率\n", + "print(probas.shape)" + ] + }, + { + "cell_type": "markdown", + "id": "99cd63e6", + "metadata": {}, + "source": [ + "由此得出的概率分数(probas)张量维度如下:" + ] + }, + { + "cell_type": "markdown", + "id": "c82fedd0", + "metadata": 
{}, + "source": [ + "`torch.Size([2, 3, 50257])`" + ] + }, + { + "cell_type": "markdown", + "id": "07faf720", + "metadata": {}, + "source": [ + "第一个数字 2,代表输入中的两个示例(行),也被称为批量大小。第二个数字 3,代表每个输入(行)中的 token 数量。最后一个数字则对应于嵌入的维度,这是由词表大小决定的,正如我们在前面的章节中讨论的。" + ] + }, + { + "cell_type": "markdown", + "id": "ba6370a8", + "metadata": {}, + "source": [ + "在通过 softmax 函数将逻辑值转换为概率后,使用我们在第四章中实现的 `generate_text_simple` 函数将这些概率分数再次转换为文本,如图 5.4 的步骤 3-5 所示。" + ] + }, + { + "cell_type": "markdown", + "id": "05668740", + "metadata": {}, + "source": [ + "我们可以通过对概率分数应用 argmax 函数来实现步骤 3 和步骤 4,从而获得相应的 token ID:" + ] + }, + { + "cell_type": "code", + "execution_count": 26, + "id": "bed6affe", + "metadata": { + "metadata": {} + }, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "Token IDs:\n", + " tensor([[[16657],\n", + " [ 339],\n", + " [42826]],\n", + "\n", + " [[49906],\n", + " [29669],\n", + " [41751]]])\n" + ] + } + ], + "source": [ + "token_ids = torch.argmax(probas, dim=-1, keepdim=True)\n", + "print(\"Token IDs:\\n\", token_ids)" + ] + }, + { + "cell_type": "markdown", + "id": "92f47d35", + "metadata": {}, + "source": [ + "考虑到我们有两个输入批次,每个批次都包含 3 个 token,将 argmax 函数应用于概率分数(如图 5.4 的步骤 3 所示)会产生两组输出,每组都包含3个预测的 token ID:" + ] + }, + { + "cell_type": "markdown", + "id": "caae4dad", + "metadata": {}, + "source": [ + "```\n", + "Token IDs:\n", + "tensor([[[16657], # First batch\n", + " [ 339],\n", + " [42826]],\n", + " [[49906], # Second batch\n", + " [29669],\n", + " [41751]]])\n", + "```" + ] + }, + { + "cell_type": "markdown", + "id": "bf4c07d1", + "metadata": {}, + "source": [ + "最后,第 5 步将 token ID 转换回文本:" + ] + }, + { + "cell_type": "code", + "execution_count": 27, + "id": "8d529bbb", + "metadata": { + "metadata": {} + }, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "Targets batch 1: effort moves you\n", + "Outputs batch 1: Armed heNetflix\n" + ] + } + ], + "source": [ + "print(f\"Targets batch 1: 
{token_ids_to_text(targets[0], tokenizer)}\")\n", + "print(f\"Outputs batch 1: {token_ids_to_text(token_ids[0].flatten(), tokenizer)}\")" + ] + }, + { + "cell_type": "markdown", + "id": "55640177", + "metadata": {}, + "source": [ + "当我们解码这些 token 时,我们发现这些输出 token 与我们希望模型生成的目标 token 完全不同:" + ] + }, + { + "cell_type": "markdown", + "id": "e11ae407", + "metadata": {}, + "source": [ + "```\n", + "Targets batch 1: effort moves you\n", + "Outputs batch 1: Armed heNetflix\n", + "```" + ] + }, + { + "cell_type": "markdown", + "id": "9a968f03", + "metadata": {}, + "source": [ + "模型产生的随机文本与目标文本不同,这是因为它还没有经过训练。现在,我们将通过一种被称为“损失”的方式,对模型生成的文本性能进行数值化评估,如图 5.5 所示。这种方法不仅对于衡量生成文本的质量有重要作用,同时也是实现后续训练函数的基础。我们将利用这个函数来更新模型的权重,从而提升生成文本的质量。" + ] + }, + { + "cell_type": "markdown", + "id": "401b9da4", + "metadata": { + "vscode": { + "languageId": "markdown" + } + }, + "source": [ + "**图 5.5 在本节的剩余部分,我们将实现文本评估函数。在接下来的一节中,我们将把这个评估函数应用到用于模型训练的整个数据集上。**" + ] + }, + { + "cell_type": "markdown", + "id": "f1d15903", + "metadata": { + "metadata": {} + }, + "source": [ + "![fig5.5](https://github.com/Pr04Ark/llms-from-scratch-cn/blob/trans01/Translated_Book/img/fig-5-5.png?raw=true)" + ] + }, + { + "cell_type": "markdown", + "id": "1b5b7f61", + "metadata": { + "metadata": {}, + "vscode": { + "languageId": "markdown" + } + }, + "source": [ + "在本节剩余部分中,我们将实现文本评估过程的一部分,如图 5.5 所示。这个过程是为了衡量生成的 token 与正确预测(目标)之间的“距离”。在本章后续的训练函数中,我们将利用这些信息来调整模型权重,以便生成的文本更接近(理想情况下等同于)目标文本。" + ] + }, + { + "cell_type": "markdown", + "id": "542d3ae9", + "metadata": {}, + "source": [ + "模型训练的目标是提升正确目标 token ID 对应索引位置的 softmax 概率,如图 5.6 所示。这个 softmax 概率也被应用于我们在本节后续部分要实现的评估指标中,用于对模型生成的输出进行数值评估:正确位置的概率越高,效果就越好。" + ] + }, + { + "cell_type": "markdown", + "id": "a70c573c", + "metadata": {}, + "source": [ + "**图 5.6 未经过训练时,模型随机生成下一个 token 的概率向量。模型训练的目标是最大化目标 token ID 对应的概率值。**" + ] + }, + { + "cell_type": "markdown", + "id": "a16fe1d9", + "metadata": {}, + "source": [ + 
"![fig5.6](https://github.com/Pr04Ark/llms-from-scratch-cn/blob/trans01/Translated_Book/img/fig-5-6.jpg?raw=true)" + ] + }, + { + "cell_type": "markdown", + "id": "810b4cee", + "metadata": {}, + "source": [ + "请注意,图 5.6 展示了一个只有 7 个 token 的紧凑词汇表的 softmax 概率,以便将所有信息都整合到一个图形中。这意味着初始的随机值将大约在 1/7(约等于 0.14)附近。" + ] + }, + { + "cell_type": "markdown", + "id": "21753a03", + "metadata": {}, + "source": [ + "不过,我们在 GPT-2 模型中使用的词汇有 50,257 个 tokens,因此大多数初始概率都会在 0.00002(1/50,257 )附近。" + ] + }, + { + "cell_type": "markdown", + "id": "ff61eb16", + "metadata": {}, + "source": [ + "对于两个输入文本中的每一个,我们可以通过以下代码打印与目标 token 对应的初始 softmax 概率分数:" + ] + }, + { + "cell_type": "code", + "execution_count": 28, + "id": "a37ab176", + "metadata": { + "metadata": {} + }, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "Text 1: tensor([7.4541e-05, 3.1061e-05, 1.1563e-05])\n", + "Text 2: tensor([3.9836e-05, 1.6783e-05, 4.7559e-06])\n" + ] + } + ], + "source": [ + "text_idx = 0\n", + "target_probas_1 = probas[text_idx, [0, 1, 2], targets[text_idx]]\n", + "print(\"Text 1:\", target_probas_1)\n", + "text_idx = 1\n", + "target_probas_2 = probas[text_idx, [0, 1, 2], targets[text_idx]]\n", + "print(\"Text 2:\", target_probas_2)" + ] + }, + { + "cell_type": "markdown", + "id": "742c3dbf", + "metadata": {}, + "source": [ + "每个批次的 3 个目标 token ID 概率如下:" + ] + }, + { + "cell_type": "markdown", + "id": "2199e6c0", + "metadata": {}, + "source": [ + "```\n", + "Text 1: tensor([7.4541e-05, 3.1061e-05, 1.1563e-05])\n", + "Text 2: tensor([3.9836e-05, 1.6783e-05, 4.7559e-06])\n", + "```" + ] + }, + { + "cell_type": "markdown", + "id": "b3d38314", + "metadata": {}, + "source": [ + " LLM 的训练目标是最大化这些概率值,尽可能使它们接近1。这样,模型在生成下一个 token 时,将始终选择目标 token —— 即句子中的下一个词。" + ] + }, + { + "cell_type": "markdown", + "id": "c2b81f29", + "metadata": {}, + "source": [ + "**反向传播(Backpropagation)**" + ] + }, + { + "cell_type": "markdown", + "id": "6b08f388", + "metadata": {}, + "source": [ + "我们如何才能最大化目标 
tokens 对应的 softmax 概率值呢?总的来说,我们会更新模型的权重,使得模型对我们希望生成的各个 token ID 输出更高的值。权重的更新是通过一种名为反向传播的过程来完成的,这是训练深度神经网络的标准技术(关于反向传播和模型训练的更多详细信息,请参见附录 A 的 A.3 至 A.7 节)。" + ] + }, + { + "cell_type": "markdown", + "id": "5d808b98", + "metadata": {}, + "source": [ + "反向传播需要一个损失函数,该函数用于计算模型预测输出(在这里,是目标 token ID 对应的概率)与实际期望输出之间的差距。这个损失函数用于衡量模型预测结果与目标值之间的偏离程度。" + ] + }, + { + "cell_type": "markdown", + "id": "a0177e07", + "metadata": {}, + "source": [ + "在本节剩余部分,我们将计算两个示例批次,即 `target_probas_1` 和 `target_probas_2` 的概率分数的损失。主要步骤已在图 5.7 中展示。" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "**图 5.7 计算损失包含多个步骤。步骤 1 至 3 用于计算目标张量相对应的 token 概率。然后,在步骤 4 至 6 中,这些概率经过对数转换并取平均。**" + ] + }, + { + "cell_type": "markdown", + "id": "03e5ab98", + "metadata": {}, + "source": [ + "![fig5.7](https://github.com/Pr04Ark/llms-from-scratch-cn/blob/trans01/Translated_Book/img/fig-5-7.jpg?raw=true)" + ] + }, + { + "cell_type": "markdown", + "id": "81ad3584", + "metadata": {}, + "source": [ + "由于我们已经按照图 5.7 中的步骤 1-3 计算出了 `target_probas_1` 和 `target_probas_2`,接下来我们将进行步骤 4,对这些概率分数应用对数函数。" + ] + }, + { + "cell_type": "code", + "execution_count": 29, + "id": "a052472f", + "metadata": { + "metadata": {} + }, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "tensor([ -9.5042, -10.3796, -11.3677, -10.1307, -10.9951, -12.2561])\n" + ] + } + ], + "source": [ + "log_probas = torch.log(torch.cat((target_probas_1, target_probas_2)))\n", + "print(log_probas)" + ] + }, + { + "cell_type": "markdown", + "id": "f935fe09", + "metadata": {}, + "source": [ + "结果如下:" + ] + }, + { + "cell_type": "markdown", + "id": "f9709916", + "metadata": {}, + "source": [ + "```\n", + "tensor([ -9.5042, -10.3796, -11.3677, -10.1308, -10.9951, -12.2561])\n", + "```" + ] + }, + { + "cell_type": "markdown", + "id": "fb412c76", + "metadata": {}, + "source": [ + "在数学优化过程中,处理概率得分的对数比直接处理得分本身更为便捷。这个主题超出了本书的讨论范围,但我在一次讲座中对此进行了详细阐述,你可以在附录B的参考资料部分找到相关链接。" + ] + }, + { + "cell_type": "markdown", + 
"id": "87175393", + "metadata": {}, + "source": [ + "接下来,我们通过计算平均值将这些对数概率合并为一个分数(图 5.7 中的步骤 5):" + ] + }, + { + "cell_type": "code", + "execution_count": 30, + "id": "78673f2a", + "metadata": { + "metadata": {} + }, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "tensor(-10.7722)\n" + ] + } + ], + "source": [ + "avg_log_probas = torch.mean(log_probas)\n", + "print(avg_log_probas)" + ] + }, + { + "cell_type": "markdown", + "id": "e9d0cd22", + "metadata": {}, + "source": [ + "得出的平均对数概率得分如下:" + ] + }, + { + "cell_type": "markdown", + "id": "d1717f1d", + "metadata": {}, + "source": [ + "`tensor(-10.7722)`" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "我们的目标是通过在训练过程中更新模型的权重,使平均对数概率尽可能接近 0,这部分我们将在 5.2 节中实现。" + ] + }, + { + "cell_type": "markdown", + "id": "05e24ea8", + "metadata": {}, + "source": [ + "然而,在深度学习中,常见的做法并不是提升平均对数概率至 0,而是降低负平均对数概率至 0。负平均对数概率即平均对数概率乘以 -1,这对应于图 5.7 中的第 6 步:" + ] + }, + { + "cell_type": "code", + "execution_count": 31, + "id": "0c071178", + "metadata": { + "metadata": {} + }, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "tensor(10.7722)\n" + ] + } + ], + "source": [ + "neg_avg_log_probas = avg_log_probas * -1\n", + "print(neg_avg_log_probas)" + ] + }, + { + "cell_type": "markdown", + "id": "907ea370", + "metadata": {}, + "source": [ + "这打印了张量`(10.7722)`" + ] + }, + { + "cell_type": "markdown", + "id": "e5c57f70", + "metadata": {}, + "source": [ + "这个负值(-10.7722 变成 10.7722)在深度学习中被称为交叉熵损失(*cross entropy*)。" + ] + }, + { + "cell_type": "markdown", + "id": "52ff9142", + "metadata": {}, + "source": [ + "PyTorch 在这里派上了用场,因为它已经内置了一个 `cross_entropy` 函数,可以为我们处理图 5.7 中的所有这 6 个步骤。" + ] + }, + { + "cell_type": "markdown", + "id": "7d8a8f12", + "metadata": {}, + "source": [ + "**交叉熵损失(Cross entropy loss)**" + ] + }, + { + "cell_type": "markdown", + "id": "53ea2a44", + "metadata": {}, + "source": [ + 
"交叉熵损失在机器学习和深度学习中是一种常用的度量方法,用于衡量两个概率分布之间的差异——通常是标签的真实分布(在这里,是数据集中的 token)和模型的预测分布(例如,由 LLM 生成的 token 概率)。" + ] + }, + { + "cell_type": "markdown", + "id": "02b5c1ab", + "metadata": {}, + "source": [ + "在机器学习领域,尤其是在像 PyTorch 这样的框架中,`cross_entropy` 函数用于计算离散结果的度量值,这与目标 token 在模型生成的 token 概率下的负平均对数概率类似。因此,交叉熵和负平均对数概率这两个术语在实践中经常被互换使用。" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "在应用交叉熵函数之前,让我们简单回顾一下 logits 和目标张量的形状:" + ] + }, + { + "cell_type": "code", + "execution_count": 32, + "id": "886cbd2a", + "metadata": { + "metadata": {} + }, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "Logits shape: torch.Size([2, 3, 50257])\n", + "Targets shape: torch.Size([2, 3])\n" + ] + } + ], + "source": [ + "print(\"Logits shape:\", logits.shape)\n", + "print(\"Targets shape:\", targets.shape)" + ] + }, + { + "cell_type": "markdown", + "id": "bfb02d0e", + "metadata": {}, + "source": [ + "形状如下:" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "```\n", + "Logits shape: torch.Size([2, 3, 50257])\n", + "Targets shape: torch.Size([2, 3])\n", + "```" + ] + }, + { + "cell_type": "markdown", + "id": "711c2449", + "metadata": {}, + "source": [ + "如我们所见,logits 张量有三个维度:批量大小、token 数量和词汇表大小。而 targets 张量有两个维度:批量大小和 token 数量。" + ] + }, + { + "cell_type": "markdown", + "id": "038c15b1", + "metadata": {}, + "source": [ + "对于 PyTorch 中的交叉熵损失函数,我们希望通过在批次维度上合并这些张量来扁平化这些张量:" + ] + }, + { + "cell_type": "code", + "execution_count": 33, + "id": "9282e764", + "metadata": { + "metadata": {} + }, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "Flattened logits: torch.Size([6, 50257])\n", + "Flattened targets: torch.Size([6])\n" + ] + } + ], + "source": [ + "logits_flat = logits.flatten(0, 1)\n", + "targets_flat = targets.flatten()\n", + "print(\"Flattened logits:\", logits_flat.shape)\n", + "print(\"Flattened targets:\", targets_flat.shape)" + ] + }, + { + "cell_type": "markdown", + 
"metadata": {}, + "source": [ + "得到的张量维度如下:" + ] + }, + { + "cell_type": "markdown", + "id": "4a1a4bc9", + "metadata": {}, + "source": [ + "```\n", + "Flattened logits: torch.Size([6, 50257])\n", + "Flattened targets: torch.Size([6])\n", + "```" + ] + }, + { + "cell_type": "markdown", + "id": "edbbad9f", + "metadata": {}, + "source": [ + "请注意,目标值是我们期望 LLM 生成的 token ID,而 logits 则包含了模型在经过 softmax 函数获取概率分数之前的未经缩放的输出值。" + ] + }, + { + "cell_type": "markdown", + "id": "5b12c46a", + "metadata": {}, + "source": [ + "此前,我们应用了 softmax 函数,选择了与目标 ID 对应的概率分数,并计算了负平均对数概率。PyTorch 的 `cross_entropy` 函数将为我们处理所有这些步骤:" + ] + }, + { + "cell_type": "code", + "execution_count": 34, + "id": "4bd2964f", + "metadata": { + "metadata": {} + }, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "tensor(10.7722)\n" + ] + } + ], + "source": [ + "loss = torch.nn.functional.cross_entropy(logits_flat, targets_flat)\n", + "print(loss)" + ] + }, + { + "cell_type": "markdown", + "id": "79090bfa", + "metadata": {}, + "source": [ + "产生的损失与我们之前手动实现图 5.7 中所示的各个步骤时获得的损失相同:" + ] + }, + { + "cell_type": "markdown", + "id": "8d36aaf7", + "metadata": {}, + "source": [ + "```\n", + "tensor(10.7722)\n", + "```" + ] + }, + { + "cell_type": "markdown", + "id": "e68aea09", + "metadata": {}, + "source": [ + "**困惑度(Perplexity)**" + ] + }, + { + "cell_type": "markdown", + "id": "bfdbd3b3", + "metadata": {}, + "source": [ + "困惑度是一种常用的评估指标,经常与交叉熵损失一起使用,用于评估语言建模等任务中的模型性能。它提供了一种更易于理解的方式,帮助我们理解模型在预测序列中下一个 token 时的不确定性。" + ] + }, + { + "cell_type": "markdown", + "id": "13622f5e", + "metadata": {}, + "source": [ + "困惑度衡量的是模型预测的概率分布与数据集中实际单词分布的匹配程度。与损失类似,较低的困惑度表明模型的预测更接近实际的分布。" + ] + }, + { + "cell_type": "markdown", + "id": "e3d21e8b", + "metadata": {}, + "source": [ + "困惑度可以通过公式 `perplexity = torch.exp(loss)` 来计算。当我们将这个公式应用到之前计算的损失值时,得到的结果是 `tensor(47678.8633)`。" + ] + }, + { + "cell_type": "markdown", + "id": "31060ae7", + "metadata": {}, + "source": [
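下面用正文中的损失值演示这一公式,并补充一个书中未给出的参照:在 50257 个 token 上均匀随机猜测时,交叉熵为 ln(50257) ≈ 10.82,困惑度恰好等于词表大小,与未训练模型的损失处于同一量级:

```python
import torch

loss = torch.tensor(10.7722)     # 正文中得到的交叉熵损失(已截断到 4 位小数)
perplexity = torch.exp(loss)
print(perplexity)                # 约 4.77 万;正文用完整精度的损失得到 tensor(47678.8633)

# 参照基线(非书中内容):对整个词表均匀随机猜测
uniform_loss = torch.log(torch.tensor(50257.0))
print(uniform_loss)              # 约 10.82,与未训练模型的损失相当接近
print(torch.exp(uniform_loss))   # 困惑度恰好为词表大小 50257
```

这也直观地说明了为什么未训练的模型损失在 10.9 左右:它的预测几乎不比均匀随机猜测好。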
"困惑度通常被认为比原始损失值更易于理解,因为它代表了模型在每一步中对有效词汇量的不确定性。在这个例子中,这意味着模型在词汇表中的 47,678 个单词或 token 中,不确定哪一个会被生成为下一个 token。" + ] + }, + { + "cell_type": "markdown", + "id": "3823685d", + "metadata": {}, + "source": [ + "在本节中,我们计算了两个小文本输入的损失,以作说明。下一节,我们将对整个训练集和验证集进行损失计算。" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### 5.1.3 计算训练集和验证集损失" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "在本节中,我们首先准备了将在本章后面用于训练 LLM 的训练集和验证集。接着,我们计算了训练集和验证集的交叉熵,如图 5.8 所示,这是模型训练过程中的重要组成部分。" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "**图 5.8 在前一节中,我们已经计算了交叉熵损失,现在我们将这种损失计算方法应用到我们即将用于模型训练的整个文本数据集上。**" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "![fig5.8](https://github.com/Pr04Ark/llms-from-scratch-cn/blob/trans01/Translated_Book/img/fig-5-8.jpg?raw=true)" + ] + }, + { + "cell_type": "markdown", + "id": "d4e10d71", + "metadata": {}, + "source": [ + "为了计算图 5.8 中展示的训练集和验证集的损失,我们使用了一个非常小的文本数据集——Edith Wharton 的短篇故事 \"The Verdict\",这是我们在第二章已经使用过的数据集。选择公共领域的文本,我们避开了任何与使用权相关的问题。此外,我们选择这样一个小数据集的原因是,它允许我们在标准的笔记本电脑上,即使没有高端的 GPU,也能在几分钟内执行代码示例,这对于教学目的来说非常有利。" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "对于对此感兴趣的读者,您也可以利用本书提供的附加代码,准备一个更大规模的数据集。这个数据集由 Project Gutenberg 的超过 60,000 本公共领域图书组成,并可以在这些数据上训练一个 LLM (详细内容请参见附录 D)。" + ] + }, + { + "cell_type": "markdown", + "id": "a8acee64", + "metadata": {}, + "source": [ + "**预训练 LLM 的成本**" + ] + }, + { + "cell_type": "markdown", + "id": "f1e93779", + "metadata": {}, + "source": [ + "为了更好地理解我们项目的规模,我们可以参考一下训练拥有 70 亿参数的 Llama 2 模型,这是一个相对知名且公开可用的 LLM。这个模型在昂贵的 A100 GPU 上运行了 184,320 个小时,处理了 2 万亿个 token。在撰写本文时,AWS 上运行一个 8xA100 云服务器的费用大约为每小时 30 美元。粗略估计,这样一个 LLM 的总训练成本大约为 690,000 美元(计算方式为 184,320 小时除以 8,然后乘以 30 美元)。" + ] + }, + { + "cell_type": "markdown", + "id": "0b93bbc9", + "metadata": {}, + "source": [ + "下面的代码将加载我们在第 2 章中使用过的短篇故事 \"The Verdict\":" + ] + }, + { + "cell_type": "code", + "execution_count": 35, + "id": "d6926455", + 
"metadata": { + "metadata": {} + }, + "outputs": [], + "source": [ + "file_path = \"the-verdict.txt\"\n", + "with open(file_path, \"r\", encoding=\"utf-8\") as file:\n", + " text_data = file.read()" + ] + }, + { + "cell_type": "markdown", + "id": "5d6c9e5d", + "metadata": {}, + "source": [ + "加载数据集后,我们可以检查数据集中的字符数和 token 数:" + ] + }, + { + "cell_type": "code", + "execution_count": 36, + "id": "a6f673dc", + "metadata": { + "metadata": {} + }, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "Characters: 20479\n", + "Tokens: 5145\n" + ] + } + ], + "source": [ + "total_characters = len(text_data)\n", + "total_tokens = len(tokenizer.encode(text_data))\n", + "print(\"Characters:\", total_characters)\n", + "print(\"Tokens:\", total_tokens)" + ] + }, + { + "cell_type": "markdown", + "id": "24b958a9", + "metadata": {}, + "source": [ + "输出如下:" + ] + }, + { + "cell_type": "markdown", + "id": "41fe0f35", + "metadata": {}, + "source": [ + "```\n", + "Characters: 20479\n", + "Tokens: 5145\n", + "```" + ] + }, + { + "cell_type": "markdown", + "id": "821ba1c8", + "metadata": {}, + "source": [ + "尽管这段文本只有 5145 个 token,对于训练一个大型 LLM 来说,可能显得太少了。然而,正如我们之前提到的,这是出于教学目的,使我们能够在几分钟而非几周的时间内运行代码。此外,我们将在本章的最后,将 OpenAI 的预训练权重加载到我们的 GPTModel 代码中。\n" + ] + }, + { + "cell_type": "markdown", + "id": "189c247c", + "metadata": {}, + "source": [ + "接下来,我们将数据集分为训练集和验证集,并利用第二章中的数据加载器来为 LLM 训练准备批次数据。这个过程在图 5.9 中进行了直观的展示。" + ] + }, + { + "cell_type": "markdown", + "id": "e6c35863", + "metadata": {}, + "source": [ + "**图 5.9 在准备数据加载器的过程中,我们首先将输入文本分割为训练集和验证集。接着,我们对文本进行 token 化处理(为了简化,这里仅展示了训练集部分的处理过程),并将 token 化后的文本划分为用户指定长度的块(在此例中为 6)。最后,我们打乱各行的顺序,并将划分后的文本组织成批次(在此例中,批次大小为 2),这样我们就可以用它们进行模型训练了。**" + ] + }, + { + "cell_type": "markdown", + "id": "bed2f796", + "metadata": {}, + "source": [ + "![fig5.9](https://github.com/Pr04Ark/llms-from-scratch-cn/blob/trans01/Translated_Book/img/fig-5-9.jpg?raw=true)" + ] + }, + { + "cell_type": "markdown", + "id": "352860c9", + "metadata": 
{}, + "source": [ + "为了便于可视化,我们在图 5.9 中将最大长度设定为 6,这主要是由于空间限制。然而,在我们实际实现的数据加载器中,我们将最大长度设定为 LLM 所支持的 256 个 token 的上下文长度,这样可以让 LLM 在训练过程中接触到更长的文本。" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "**使用可变长度训练**" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "我们采用了相似大小的数据块来训练模型,这主要是出于简化和效率的考虑。然而,在实际操作中,使用不同长度的输入来训练 LLM 也是有益的,这有助于模型在使用时能更好地适应各种类型的输入。" + ] + }, + { + "cell_type": "markdown", + "id": "4e2d431e", + "metadata": {}, + "source": [ + "为了实现图 5.9 中展示的数据分割和加载,我们首先定义了一个 train_ratio,将 90% 的数据用于训练,其余的 10% 作为模型训练过程中的验证数据:" + ] + }, + { + "cell_type": "code", + "execution_count": 37, + "id": "b4c6fbca", + "metadata": { + "metadata": {} + }, + "outputs": [], + "source": [ + "train_ratio = 0.90\n", + "split_idx = int(train_ratio * len(text_data))\n", + "train_data = text_data[:split_idx]\n", + "val_data = text_data[split_idx:]" + ] + }, + { + "cell_type": "markdown", + "id": "6560b8c0", + "metadata": {}, + "source": [ + "使用 train_data 和 val_data 子集,我们现在可以创建相应的数据加载器,复用第二章中的 `create_dataloader_v1` 代码:" + ] + }, + { + "cell_type": "code", + "execution_count": 40, + "id": "32f557f7", + "metadata": { + "metadata": {} + }, + "outputs": [], + "source": [ + "from chapter02 import create_dataloader_v1\n", + "torch.manual_seed(123)\n", + "\n", + "train_loader = create_dataloader_v1(\n", + " train_data,\n", + " batch_size=2,\n", + " max_length=GPT_CONFIG_124M[\"context_length\"],\n", + " stride=GPT_CONFIG_124M[\"context_length\"],\n", + " drop_last=True,\n", + " shuffle=True\n", + " )\n", + "val_loader = create_dataloader_v1(\n", + " val_data,\n", + " batch_size=2,\n", + " max_length=GPT_CONFIG_124M[\"context_length\"],\n", + " stride=GPT_CONFIG_124M[\"context_length\"],\n", + " drop_last=False,\n", + " shuffle=False\n", + " )" + ] + }, + { + "cell_type": "markdown", + "id": "c1839c4b", + "metadata": {}, + "source": [ + "在上述代码中,我们选择了较小的批量大小,以降低计算资源的消耗,因为我们处理的是一个非常小的数据集。然而,在实际操作中,使用 1024 或更大的批量大小来训练 LLM 是常见的做法。" + ] + }, + { + 
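图 5.9 的分块过程可以用一个简化示意来理解。下面的 `make_chunks` 是示意用的假想函数,并非第二章 `create_dataloader_v1` 的实际实现:假设 token 已编码为整数列表,省略了打乱与批处理,展示 stride 等于 max_length 时输入/目标块互不重叠,且目标相对输入右移一个位置:

```python
def make_chunks(token_ids, max_length, stride):
    # 滑动窗口:输入取 [i, i+max_length),目标整体右移一个位置
    inputs, targets = [], []
    for i in range(0, len(token_ids) - max_length, stride):
        inputs.append(token_ids[i:i + max_length])
        targets.append(token_ids[i + 1:i + 1 + max_length])
    return inputs, targets

token_ids = list(range(12))                       # 玩具 token 序列
x, y = make_chunks(token_ids, max_length=4, stride=4)
print(x)  # [[0, 1, 2, 3], [4, 5, 6, 7]]:stride=max_length,块互不重叠
print(y)  # [[1, 2, 3, 4], [5, 6, 7, 8]]
```

若把 stride 设得小于 max_length,相邻块之间就会出现重叠,这会让同一 token 在一个 epoch 中被看到多次。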
"cell_type": "markdown", + "id": "ef2e5b0f", + "metadata": {}, + "source": [ + "作为一项可选的检查,我们可以遍历数据加载器,以确保它们是正确创建的:" + ] + }, + { + "cell_type": "code", + "execution_count": 41, + "id": "25ce9931", + "metadata": { + "metadata": {} + }, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "Train loader:\n", + "torch.Size([2, 256]) torch.Size([2, 256])\n", + "torch.Size([2, 256]) torch.Size([2, 256])\n", + "torch.Size([2, 256]) torch.Size([2, 256])\n", + "torch.Size([2, 256]) torch.Size([2, 256])\n", + "torch.Size([2, 256]) torch.Size([2, 256])\n", + "torch.Size([2, 256]) torch.Size([2, 256])\n", + "torch.Size([2, 256]) torch.Size([2, 256])\n", + "torch.Size([2, 256]) torch.Size([2, 256])\n", + "torch.Size([2, 256]) torch.Size([2, 256])\n", + "\n", + "Validation loader:\n", + "torch.Size([2, 256]) torch.Size([2, 256])\n" + ] + } + ], + "source": [ + "print(\"Train loader:\")\n", + "for x, y in train_loader:\n", + " print(x.shape, y.shape)\n", + " \n", + "print(\"\\nValidation loader:\")\n", + "for x, y in val_loader:\n", + " print(x.shape, y.shape)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "我们可以看到如下输出:" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "```\n", + "Train loader:\n", + "torch.Size([2, 256]) torch.Size([2, 256])\n", + "torch.Size([2, 256]) torch.Size([2, 256])\n", + "torch.Size([2, 256]) torch.Size([2, 256])\n", + "torch.Size([2, 256]) torch.Size([2, 256])\n", + "torch.Size([2, 256]) torch.Size([2, 256])\n", + "torch.Size([2, 256]) torch.Size([2, 256])\n", + "torch.Size([2, 256]) torch.Size([2, 256])\n", + "torch.Size([2, 256]) torch.Size([2, 256])\n", + "torch.Size([2, 256]) torch.Size([2, 256])\n", + "Validation loader:\n", + "torch.Size([2, 256]) torch.Size([2, 256])\n", + "```" + ] + }, + { + "cell_type": "markdown", + "id": "21880e54", + "metadata": {}, + "source": [ + "根据前述代码的运行结果,我们共得到了 9 个训练批次,每个批次包含 2 个样本,每个样本有 256 个 token。由于我们仅为验证过程分配了 10% 的数据,因此只有一个验证批次,其中包含 2 
个输入示例。" + ] + }, + { + "cell_type": "markdown", + "id": "a0725e62", + "metadata": {}, + "source": [ + "正如我们预期的那样,输入数据(x)和目标数据(y)具有相同的形状(批次大小乘以每个批次中的 token 数量),因为目标就是输入数据向后移动一个位置,这一点我们在第二章中已经讨论过。" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "接下来,我们将实现一个实用函数,用于计算训练和验证加载器返回的特定批次的交叉熵损失:" + ] + }, + { + "cell_type": "code", + "execution_count": 42, + "id": "a7bf3e7d", + "metadata": { + "metadata": {} + }, + "outputs": [], + "source": [ + "def calc_loss_batch(input_batch, target_batch, model, device):\n", + " input_batch, target_batch = input_batch.to(device), target_batch.to(device) #A\n", + " logits = model(input_batch)\n", + " loss = torch.nn.functional.cross_entropy(\n", + " logits.flatten(0, 1), target_batch.flatten()\n", + " )\n", + " return loss" + ] + }, + { + "cell_type": "markdown", + "id": "063e4c17", + "metadata": {}, + "source": [ + "现在,我们可以使用这个用于计算单个批次损失的实用函数 `calc_loss_batch`,来实现以下函数 `calc_loss_loader`,该函数计算给定数据加载器采样的所有批次的损失:" + ] + }, + { + "cell_type": "markdown", + "id": "daa26ed3", + "metadata": {}, + "source": [ + "**代码列表 5.2 计算训练和验证损失的函数**" + ] + }, + { + "cell_type": "code", + "execution_count": 43, + "id": "8ed33e8a", + "metadata": { + "metadata": {} + }, + "outputs": [], + "source": [ + "def calc_loss_loader(data_loader, model, device, num_batches=None):\n", + " total_loss = 0.\n", + " if num_batches is None:\n", + " num_batches = len(data_loader) #A\n", + " else:\n", + " num_batches = min(num_batches, len(data_loader)) #B\n", + " for i, (input_batch, target_batch) in enumerate(data_loader):\n", + " if i < num_batches:\n", + " loss = calc_loss_batch(input_batch, target_batch, model, device)\n", + " total_loss += loss.item() #C\n", + " else:\n", + " break\n", + " return total_loss / num_batches #D" + ] + }, + { + "cell_type": "markdown", + "id": "8193f725", + "metadata": {}, + "source": [ + "默认情况下,`calc_loss_loader` 函数会遍历给定数据加载器中的所有批次,将各批次的损失累积在 `total_loss` 变量中,然后求所有批次损失的平均值。另外,我们也可以通过 `num_batches` 
参数来指定较少的批次数量,以便在模型训练过程中加快评估速度。" + ] + }, + { + "cell_type": "markdown", + "id": "d59aa004", + "metadata": {}, + "source": [ + "现在,我们将这个 `calc_loss_loader` 函数应用于训练集和验证集的加载器,看看它在实际操作中的表现:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "5f08bc4c", + "metadata": { + "metadata": {} + }, + "outputs": [], + "source": [ + "device = torch.device(\"cuda\" if torch.cuda.is_available() else \"cpu\") #A\n", + "model.to(device)\n", + "train_loss = calc_loss_loader(train_loader, model, device) #B\n", + "val_loss = calc_loss_loader(val_loader, model, device)\n", + "print(\"Training loss:\", train_loss)\n", + "print(\"Validation loss:\", val_loss)" + ] + }, + { + "cell_type": "markdown", + "id": "a4d897af", + "metadata": {}, + "source": [ + "产生的损失值如下:" + ] + }, + { + "cell_type": "markdown", + "id": "1e531252", + "metadata": {}, + "source": [ + "```\n", + "Training loss: 10.98758347829183\n", + "Validation loss: 10.98110580444336\n", + "```" + ] + }, + { + "cell_type": "markdown", + "id": "e3427743", + "metadata": {}, + "source": [ + "由于模型尚未经过训练,因此损失值相对较高。作为对比,如果模型能够学习到如何按照训练集和验证集中的顺序生成下一个 token,那么损失值将会接近于 0。" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "现在我们已经掌握了一种衡量生成文本质量的方法,接下来,我们将训练 LLM 以减少这种损失,从而使其在生成文本方面表现得更好,如图 5.10 所示。" + ] + }, + { + "cell_type": "markdown", + "id": "48abd6ac", + "metadata": {}, + "source": [ + "**图 5.10 我们已经回顾了文本生成过程,并实现了基本的模型评估技术以计算训练集和验证集的损失。接下来,我们将实现训练函数,并对 LLM 进行预训练。**" + ] + }, + { + "cell_type": "markdown", + "id": "1edd95f9", + "metadata": {}, + "source": [ + "![fig5.10](https://github.com/Pr04Ark/llms-from-scratch-cn/blob/trans01/Translated_Book/img/fig-5-10.jpg?raw=true)" + ] + }, + { + "cell_type": "markdown", + "id": "1721518a", + "metadata": {}, + "source": [ + "如图 5.10 所示,接下来的部分将重点放在 LLM 的预训练上。模型训练完成后,我们将采用不同的文本生成策略,并保存及加载预训练的模型权重。" + ] + } + ], + "metadata": { + "kernelspec": { + "display_name": "Python 3 (ipykernel)", + "language": "python", + "name": "python3" + }, + "language_info":
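本节 `calc_loss_loader` 中 `num_batches` 的截断与求平均逻辑,可以用一个不依赖模型前向传播的玩具版本来演示(`avg_loss` 与其中的损失数值均为示意用的假设,并非书中代码):

```python
def avg_loss(batch_losses, num_batches=None):
    # 与 calc_loss_loader 相同的截断逻辑:None 表示遍历全部批次
    if num_batches is None:
        num_batches = len(batch_losses)
    else:
        num_batches = min(num_batches, len(batch_losses))
    total = sum(batch_losses[:num_batches])
    return total / num_batches

fake_losses = [2.0, 4.0, 6.0, 8.0]           # 假想的各批次损失值
print(avg_loss(fake_losses))                 # 5.0:对全部 4 个批次取平均
print(avg_loss(fake_losses, num_batches=2))  # 3.0:只评估前 2 个批次,加快评估速度
```

在训练循环中定期评估时,用较小的 `num_batches` 可以显著降低评估开销,代价是损失估计的方差略有增大。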
{ + "codemirror_mode": { + "name": "ipython", + "version": 3 + }, + "file_extension": ".py", + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython3", + "version": "3.10.14" + } + }, + "nbformat": 4, + "nbformat_minor": 5 +}