[update] readme

This commit is contained in:
jingyaogong 2026-04-04 11:25:21 +08:00
parent 2ab6455d9d
commit cacf1d4cd0
2 changed files with 4 additions and 2 deletions


@@ -34,7 +34,7 @@
* 此开源项目旨在完全从 0 开始,仅用 3 块钱成本与 2 小时训练时间,即可训练出规模约为 64M 的超小语言模型 MiniMind。
* MiniMind 系列极其轻量,主线最小版本体积约为 GPT-3 的 $\frac{1}{2700}$,力求让普通个人 GPU 也能快速完成训练与复现。
* 项目同时开源了大模型的极简结构与完整训练链路,覆盖 MoE、数据清洗、预训练(Pretrain)、监督微调(SFT)、LoRA、RLHF(DPO)、RLAIF(PPO / GRPO / CISPO)、Tool Use、Agentic RL、自适应思考与模型蒸馏等全过程代码。
- * MiniMind 同时拓展了视觉多模态版本 [MiniMind-V](https://github.com/jingyaogong/minimind-v)。
+ * MiniMind 同时拓展了视觉多模态版本 [MiniMind-V](https://github.com/jingyaogong/minimind-v)、扩散语言模型(MiniMind-dLM)与线性模型(MiniMind-Linear),详见 [Discussion](https://github.com/jingyaogong/minimind/discussions)。
* 项目所有核心算法代码均从 0 使用 PyTorch 原生实现,不依赖第三方库提供的高层抽象接口。
* 这不仅是一个大语言模型全阶段开源复现项目,也是一套面向 LLM 入门与实践的教程。
* 希望此项目能为更多人提供一个可复现、可理解、可扩展的起点,一起感受创造的乐趣,并推动更广泛 AI 社区的进步。
@@ -94,6 +94,7 @@
- 支持在 C-Eval、C-MMLU、OpenBookQA 等第三方测评集上进行评测,并支持通过 YaRN 实现 RoPE 长文本外推。
- 提供兼容 OpenAI API 协议的极简服务端,便于接入 FastGPT、Open-WebUI 等第三方 Chat UI,并支持 `reasoning_content`、`tool_calls`、`open_thinking`。
- 提供基于 Streamlit 的极简聊天 WebUI,支持思考展示、工具选择与多轮 Tool Call。
+ - 包含实验性拓展:离散扩散语言模型([dLM](https://github.com/jingyaogong/minimind/discussions/618))与线性注意力模型([Linear Attention](https://github.com/jingyaogong/minimind/discussions/704)),均可基于主线 AR 模型进行续训。
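The OpenAI-compatible server listed above can be driven with a plain Chat Completions request. The sketch below only builds the JSON request body; the endpoint path, host/port, and the exact semantics of the non-standard `open_thinking` field are assumptions drawn from the feature list, not verified against the server code.

```python
import json

# Minimal sketch of a Chat Completions request body for the
# OpenAI-API-compatible server described above. Only "model",
# "messages", and "stream" are standard OpenAI fields; the
# "open_thinking" toggle is named in the feature list and its
# exact behavior is assumed here.
def build_chat_request(prompt: str, model: str = "minimind",
                       open_thinking: bool = True) -> str:
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,
        # Non-standard field; presumably toggles reasoning output.
        "open_thinking": open_thinking,
    }
    return json.dumps(payload, ensure_ascii=False)

# The resulting string would be POSTed to something like
# http://localhost:8000/v1/chat/completions (host and port are
# placeholders, not taken from the project).
```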
#### 🎉 已发布模型列表


@@ -34,7 +34,7 @@
* This open-source project aims to train an ultra-small language model MiniMind with approximately 64M parameters entirely from scratch, using only 3 CNY in cost and 2 hours of training time.
* The MiniMind series is extremely lightweight, with the smallest version on the main branch being approximately $\frac{1}{2700}$ the size of GPT-3, striving to enable even ordinary personal GPUs to quickly complete training and reproduction.
* The project also open-sources the minimalist structure and complete training pipeline of large models, covering the entire process code for MoE, data cleaning, Pretraining, Supervised Fine-Tuning (SFT), LoRA, RLHF (DPO), RLAIF (PPO / GRPO / CISPO), Tool Use, Agentic RL, Adaptive Thinking, and Model Distillation.
- * MiniMind has also been extended to a visual multimodal version [MiniMind-V](https://github.com/jingyaogong/minimind-v).
+ * MiniMind has also been extended to a visual multimodal version [MiniMind-V](https://github.com/jingyaogong/minimind-v), a diffusion language model (MiniMind-dLM), and a linear attention model (MiniMind-Linear); see the [Discussion](https://github.com/jingyaogong/minimind/discussions) for details.
* All core algorithm code in the project is implemented from scratch using native PyTorch, without relying on high-level abstract interfaces provided by third-party libraries.
* This is not only a full-stage open-source reproduction project for large language models, but also a tutorial oriented towards LLM introduction and practice.
* We hope this project can provide a reproducible, understandable, and extensible starting point for more people, to share the joy of creation together and promote the progress of the broader AI community.
@@ -94,6 +94,7 @@ Meanwhile, third-party large model frameworks and tool libraries, such as `trans
- Supports evaluation on third-party benchmark suites such as C-Eval, C-MMLU, OpenBookQA, etc., and supports RoPE long context extrapolation through YaRN.
- Provides a minimalist server compatible with the OpenAI API protocol, convenient for integrating with third-party Chat UIs such as FastGPT, Open-WebUI, etc., and supports `reasoning_content`, `tool_calls`, `open_thinking`.
- Provides a minimalist chat WebUI based on Streamlit, supporting thinking display, tool selection, and multi-turn Tool Call.
+ - Includes experimental extensions: a diffusion language model ([dLM](https://github.com/jingyaogong/minimind/discussions/618)) and a linear attention model ([Linear Attention](https://github.com/jingyaogong/minimind/discussions/704)), both of which can be continue-trained from the main autoregressive (AR) model.
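As a rough illustration of the idea behind the linear-attention extension (not MiniMind's actual implementation), the kernel-trick reordering below computes non-causal attention in O(n·d²) rather than O(n²·d). The `ELU+1` feature map is one common choice and is an assumption here; both orderings compute the same result, which is what makes the linear form a drop-in reformulation.

```python
import numpy as np

def elu_plus_one(x):
    # Feature map phi(x) = ELU(x) + 1, a common positive-valued choice
    # in linear attention (assumed here, not taken from the project).
    return np.where(x > 0, x + 1.0, np.exp(x))

def linear_attention(Q, K, V):
    """O(n * d^2) attention: phi(Q) @ (phi(K)^T V), normalized per query."""
    Qf, Kf = elu_plus_one(Q), elu_plus_one(K)
    KV = Kf.T @ V                    # (d, d_v) summary of all keys/values
    Z = Qf @ Kf.sum(axis=0)          # per-query normalizer, shape (n,)
    return (Qf @ KV) / Z[:, None]

def quadratic_reference(Q, K, V):
    """The same computation in the familiar O(n^2) attention form."""
    A = elu_plus_one(Q) @ elu_plus_one(K).T   # (n, n) similarity matrix
    A = A / A.sum(axis=1, keepdims=True)      # row-normalize
    return A @ V
```

Because matrix multiplication is associative, `(Qf @ Kf.T) @ V` and `Qf @ (Kf.T @ V)` are identical; the linear form simply avoids ever materializing the n×n matrix.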
#### 🎉 Released Model List