From c60a738c8b60f8d05e4927e667be0131409172a2 Mon Sep 17 00:00:00 2001
From: AlonsoGuevara
Date: Fri, 25 Apr 2025 23:11:04 +0000
Subject: [PATCH] =?UTF-8?q?Deploying=20to=20gh-pages=20from=20@=20microsof?=
 =?UTF-8?q?t/graphrag@56e0fad218a47105fbda5f55e61c06585cb3375e=20?=
 =?UTF-8?q?=F0=9F=9A=80?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
 config/yaml/index.html   | 2 +-
 search/search_index.json | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/config/yaml/index.html b/config/yaml/index.html
index 1d405757..0067eeb6 100644
--- a/config/yaml/index.html
+++ b/config/yaml/index.html
@@ -1655,7 +1655,7 @@
 max_node_freq_std float | None - The maximum standard deviation of node frequency to allow.
 min_node_degree int - The minimum node degree to allow.
 max_node_degree_std float | None - The maximum standard deviation of node degree to allow.
-min_edge_weight_pct int - The minimum edge weight percentile to allow.
+min_edge_weight_pct float - The minimum edge weight percentile to allow.
 remove_ego_nodes bool - Remove ego nodes.
 lcc_only bool - Only use largest connected component.
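The hunk above corrects the documented type of min_edge_weight_pct from int to float. As a hedged sketch of where these pruning fields might land in a GraphRAG settings.yaml, assuming they sit under a prune_graph block as on the config page this patch touches (the values are illustrative, not defaults):

```yaml
prune_graph:
  max_node_freq_std: 2.0     # float | None
  min_node_degree: 1         # int
  max_node_degree_std: 2.0   # float | None
  min_edge_weight_pct: 40.0  # float, per the type fix in this hunk
  remove_ego_nodes: false    # bool
  lcc_only: false            # bool
</imports>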
diff --git a/search/search_index.json b/search/search_index.json
index 56c8e6d4..ffca58f7 100644
--- a/search/search_index.json
+++ b/search/search_index.json
@@ -1 +1 @@
-{"config": {"lang": ["en"], "separator": "[\\s\\-]+", "pipeline": ["stopWordFilter"]}, "docs": [{"location": "", "title": "Welcome to GraphRAG", "text": "

    \ud83d\udc49 Microsoft Research Blog Post \ud83d\udc49 GraphRAG Accelerator \ud83d\udc49 GraphRAG Arxiv

    Figure 1: An LLM-generated knowledge graph built using GPT-4 Turbo.

    GraphRAG is a structured, hierarchical approach to Retrieval Augmented Generation (RAG), as opposed to naive semantic-search approaches using plain text snippets. The GraphRAG process involves extracting a knowledge graph out of raw text, building a community hierarchy, generating summaries for these communities, and then leveraging these structures when performing RAG-based tasks.
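    As a toy illustration of the community-building step, the following plain-Python sketch groups an assumed mini entity graph into "communities" via connected components. This is not GraphRAG code; the real pipeline uses LLM-based entity extraction and hierarchical Leiden clustering, and the entities here are invented for the example.

```python
from collections import defaultdict

# Toy "knowledge graph": entity co-occurrence edges extracted from text.
edges = [("Scrooge", "Marley"), ("Scrooge", "Cratchit"),
         ("Cratchit", "TinyTim"), ("Fezziwig", "YoungScrooge")]

def communities(edges):
    """Group entities into connected components (a stand-in for Leiden clustering)."""
    adj = defaultdict(set)
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)
    seen, comps = set(), []
    for node in adj:
        if node in seen:
            continue
        # Depth-first walk to collect every entity reachable from `node`.
        stack, comp = [node], set()
        while stack:
            n = stack.pop()
            if n in comp:
                continue
            comp.add(n)
            stack.extend(adj[n] - comp)
        seen |= comp
        comps.append(sorted(comp))
    return comps

print(communities(edges))
# → [['Cratchit', 'Marley', 'Scrooge', 'TinyTim'], ['Fezziwig', 'YoungScrooge']]
```

    In the real system, each such community would then be summarized by an LLM, yielding the hierarchy of summaries the paragraph describes.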

    To learn more about GraphRAG and how it can be used to enhance your language model's ability to reason about your private data, please visit the Microsoft Research Blog Post.

    "}, {"location": "#solution-accelerator", "title": "Solution Accelerator \ud83d\ude80", "text": "

    To get started quickly with the GraphRAG system, we recommend trying the Solution Accelerator package. This provides a user-friendly end-to-end experience with Azure resources.

    "}, {"location": "#get-started-with-graphrag", "title": "Get Started with GraphRAG \ud83d\ude80", "text": "

    To start using GraphRAG, check out the Get Started guide. For a deeper dive into the main sub-systems, please visit the documentation pages for the Indexer and Query packages.

    "}, {"location": "#graphrag-vs-baseline-rag", "title": "GraphRAG vs Baseline RAG \ud83d\udd0d", "text": "

    Retrieval-Augmented Generation (RAG) is a technique to improve LLM outputs using real-world information. This technique is an important part of most LLM-based tools; the majority of RAG approaches use vector similarity as the search technique, which we call Baseline RAG. GraphRAG uses knowledge graphs to provide substantial improvements in question-and-answer performance when reasoning about complex information. RAG techniques have shown promise in helping LLMs reason about private datasets - data that the LLM is not trained on and has never seen before, such as an enterprise\u2019s proprietary research, business documents, or communications. Baseline RAG was created to help solve this problem, but we observe situations where it performs very poorly. For example:

    To address this, the tech community is working to develop methods that extend and enhance RAG. Microsoft Research\u2019s new approach, GraphRAG, creates a knowledge graph based on an input corpus. This graph, along with community summaries and graph machine learning outputs, is used to augment prompts at query time. GraphRAG shows substantial improvement in answering the two classes of questions described above, demonstrating intelligence or mastery that outperforms other approaches previously applied to private datasets.

    "}, {"location": "#the-graphrag-process", "title": "The GraphRAG Process \ud83e\udd16", "text": "

    GraphRAG builds upon our prior research and tooling using graph machine learning. The basic steps of the GraphRAG process are as follows:
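    Though the step list itself is elided in this extract, a minimal sketch of driving the indexing process end to end with the graphrag CLI follows; the ./ragtest project root and input folder layout are illustrative assumptions, not fixed names:

```shell
# Initialize a workspace: creates settings.yaml and default prompts
graphrag init --root ./ragtest

# Put source documents under ./ragtest/input, then build the index:
# extracts the knowledge graph, detects communities, and writes summaries
graphrag index --root ./ragtest
```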

    "}, {"location": "#index", "title": "Index", "text": ""}, {"location": "#query", "title": "Query", "text": "

    At query time, these structures are used to provide materials for the LLM context window when answering a question. The primary query modes are:
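    As a hedged sketch of two common modes invoked via the CLI (global search reasons over community summaries for corpus-wide questions; local search fans out from specific entities), with the project root and question text as placeholders:

```shell
# Global search: holistic, corpus-wide questions
graphrag query --root ./ragtest --method global \
  --query "What are the top themes in this dataset?"

# Local search: questions centered on specific entities
graphrag query --root ./ragtest --method local \
  --query "Who is Scrooge and what are his main relationships?"
```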

    "}, {"location": "#prompt-tuning", "title": "Prompt Tuning", "text": "

    Using GraphRAG with your data out of the box may not yield the best possible results. We strongly recommend fine-tuning your prompts following the Prompt Tuning Guide in our documentation.

    "}, {"location": "#versioning", "title": "Versioning", "text": "

    Please see the breaking changes document for notes on our approach to versioning the project.

    Always run graphrag init --root [path] --force between minor version bumps to ensure you have the latest config format. Run the provided migration notebook between major version bumps if you want to avoid re-indexing prior datasets. Note that the init command will overwrite your configuration and prompts, so back up your files first if necessary.
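    A minimal sketch of that upgrade flow, assuming a ./ragtest project root (an illustrative name); the backup step matters because the --force flag overwrites config and prompts:

```shell
# Back up the current configuration and prompts
cp ./ragtest/settings.yaml ./ragtest/settings.yaml.bak
cp -r ./ragtest/prompts ./ragtest/prompts.bak

# Regenerate config in the latest format after a minor version bump
graphrag init --root ./ragtest --force
```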

    "}, {"location": "blog_posts/", "title": "Microsoft Research Blog", "text": "