[update] readme

This commit is contained in:
jingyaogong 2026-04-19 14:51:47 +08:00
parent 487f78754d
commit 1ea113ea2c
2 changed files with 8 additions and 16 deletions


@@ -39,7 +39,7 @@
* This is not only a full-stage open-source reproduction of a large language model, but also a tutorial for getting started with and practicing LLMs.
* We hope this project gives more people a reproducible, understandable, and extensible starting point, to share the joy of creation together and advance the broader AI community.
> Note: This project is open-sourced under the Apache 2.0 license and is completely free. "2 hours" is estimated on a single NVIDIA 3090 GPU; "3 CNY" refers to the GPU server rental cost. See below for detailed specifications.
> Note: This project is open-sourced under the Apache 2.0 license and is completely free. "2 hours" refers to the measured time to run `1 epoch` of the SFT stage on a single NVIDIA 3090; "3 CNY" refers to the corresponding GPU rental cost for that duration.
---
@@ -167,9 +167,10 @@ all legacy minimind2-series models were produced via weight mapping + fine-tuning, with the QKVO linear layers calibrated
</details>
<details>
<summary> <b>2025-02-09</b> </summary>
<details>
<summary> <b>More...</b> </summary>
**2025-02-09**
- Major update since release: the minimind2 series is released.
- The code has been almost entirely refactored into a more concise, unified structure.
  For compatibility with old code, visit the [🔗old repository content🔗](https://github.com/jingyaogong/minimind/tree/6e9cd28ef9b34a0a10afbdf6f59e65cb6e628efb).
@@ -183,11 +184,6 @@ all legacy minimind2-series models were produced via weight mapping + fine-tuning, with the QKVO linear layers calibrated
- minimind2 has some English-language capability!
- Updated benchmark results of minimind2 vs. third-party models on more LLM leaderboards.
</details>
<details>
<summary> <b>More...</b> </summary>
**2024-10-05**
- Extended MiniMind with a multimodal capability: vision.
- See the sibling project [minimind-v](https://github.com/jingyaogong/minimind-v) for details!


@@ -39,7 +39,7 @@
* This is not only a full-stage open-source reproduction project for large language models, but also a tutorial oriented towards LLM introduction and practice.
* We hope this project can provide a reproducible, understandable, and extensible starting point for more people, to share the joy of creation together and promote the progress of the broader AI community.
> Note: This project is open-sourced under the Apache 2.0 license and is completely free; "2 hours" is estimated based on NVIDIA 3090 hardware (single GPU), and "3 CNY" refers to GPU server rental cost. See below for detailed specifications.
> Note: This project is open-sourced under the Apache 2.0 license and is completely free. "2 hours" refers to the measured time to run `1 epoch` of the SFT stage on a single NVIDIA 3090, and "3 CNY" refers to the corresponding GPU rental cost for that duration.
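The revised note pairs a measured training time with a rental price, which implies an hourly rate of about 1.5 CNY. As a rough sanity check, here is a minimal sketch of that arithmetic; the function name and the assumed hourly rate are illustrative, not from the project:

```python
def estimate_rental_cost(train_hours: float, cny_per_hour: float) -> float:
    """Estimate GPU rental cost as training time multiplied by the hourly rate."""
    return train_hours * cny_per_hour

# 2 hours of single-3090 SFT at an assumed rental rate of ~1.5 CNY/hour
cost = estimate_rental_cost(train_hours=2.0, cny_per_hour=1.5)
print(f"estimated cost: {cost:.1f} CNY")  # → estimated cost: 3.0 CNY
```

Actual cloud rental rates for a 3090 vary by provider and region, so the "3 CNY" figure should be read as an order-of-magnitude estimate.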
---
@@ -166,9 +166,10 @@ After this update, maintenance for the entire `minimind-v1` series will be discontinued
</details>
<details>
<summary> <b>2025-02-09</b> </summary>
<details>
<summary> <b>More...</b> </summary>
**2025-02-09**
- Major update since release: the minimind2 series is released.
- Code almost entirely refactored, using a more concise and clear unified structure.
For compatibility needs with old code, visit [🔗Old Repository Content🔗](https://github.com/jingyaogong/minimind/tree/6e9cd28ef9b34a0a10afbdf6f59e65cb6e628efb).
@@ -182,11 +183,6 @@ After this update, maintenance for the entire `minimind-v1` series will be discontinued
- minimind2 has certain English language capability!
- Updated benchmark test performance results of minimind2 vs third-party models on more LLM leaderboards.
</details>
<details>
<summary> <b>More...</b> </summary>
**2024-10-05**
- Extended MiniMind with a multimodal capability: vision.
- Visit the sibling project [minimind-v](https://github.com/jingyaogong/minimind-v) for details!