From d52d98058b9432af048feef276dee4581d59a0ad Mon Sep 17 00:00:00 2001
From: darthtrevino
Date: Fri, 5 Apr 2024 22:13:47 +0000
Subject: [PATCH] =?UTF-8?q?Deploying=20to=20gh-pages=20from=20@=20microsof?=
 =?UTF-8?q?t/graphrag@a7a0721198739bf52683e3a14d3b58c343361971=20?=
 =?UTF-8?q?=F0=9F=9A=80?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

---
 posts/get_started/index.html | 39 +++++++++++++++++++-----------------
 1 file changed, 21 insertions(+), 18 deletions(-)

diff --git a/posts/get_started/index.html b/posts/get_started/index.html
index 628979c2..afca5a9b 100644
--- a/posts/get_started/index.html
+++ b/posts/get_started/index.html
@@ -323,29 +323,32 @@ It shows how to use the system to index some text, and then use the indexed data
  • GRAPHRAG_LLM_DEPLOYMENT_NAME - Deployment name for the Chat Completions model. Only required for Azure OpenAI users.
  • GRAPHRAG_EMBEDDING_DEPLOYMENT_NAME - Deployment name for the Embeddings model. Only required for Azure OpenAI users.
  • -

    OpenAI

    +

    OpenAI and Azure OpenAI

    +

    To get started, let's set the base environment variables.

    -
    export GRAPHRAG_API_KEY=<api_key> && \
    -export GRAPHRAG_LLM_MODEL=<chat_completions_model> && \
    -export GRAPHRAG_EMBEDDING_MODEL=<embeddings_model> && \
    +  
    export GRAPHRAG_API_KEY="<api_key>" && \
    +export GRAPHRAG_LLM_MODEL="<chat_completions_model>" && \
     export GRAPHRAG_LLM_MODEL_SUPPORTS_JSON="True" && \
    +export GRAPHRAG_EMBEDDING_MODEL="<embeddings_model>" && \
     export GRAPHRAG_INPUT_TYPE="text"
    -
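A note on the chained form above: because the exports are joined with `&& \`, a failure in any one assignment silently skips the rest. The following sanity check is my addition, not part of the original page; it simply lists whatever GraphRAG variables actually landed in the current shell.

```shell
# Sanity check (not from the original docs): list every GRAPHRAG_*
# variable exported in this shell. An empty list means the chained
# exports above did not all run (an && chain stops at the first failure).
printenv | grep '^GRAPHRAG_' || echo "no GRAPHRAG_* variables set"
```

Run this in the same shell session as the exports; environment variables set in one terminal do not carry over to another.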

    Azure OpenAI

    +

    In addition, Azure OpenAI users should set the following env-vars.

    -
    export GRAPHRAG_API_KEY=<api_key> && \
    -export GRAPHRAG_LLM_DEPLOYMENT_NAME=<chat_completions_model> && \
    -export GRAPHRAG_EMBEDDING_DEPLOYMENT_NAME=<embeddings_model> && \
    -export GRAPHRAG_INPUT_TYPE="text" && \
    -export GRAPHRAG_API_BASE="http://<domain>.openai.azure.com"
    +
    export GRAPHRAG_API_BASE="https://<domain>.openai.azure.com" && \
    +export GRAPHRAG_API_VERSION="2024-02-15-preview" && \
+export GRAPHRAG_LLM_API_TYPE="azure_openai_chat" && \
    +export GRAPHRAG_LLM_DEPLOYMENT_NAME="<chat_completions_deployment_name>" && \
+export GRAPHRAG_EMBEDDING_API_TYPE="azure_openai_embedding" && \
    +export GRAPHRAG_EMBEDDING_DEPLOYMENT_NAME="<embeddings_deployment_name>"
    -
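The diff above also corrects the endpoint scheme from `http://` to `https://`. A small guard like the following (my addition, not from the page) can catch that mistake, and malformed endpoints generally, before a pipeline run fails midway:

```shell
# Hypothetical sanity check: Azure OpenAI endpoints are HTTPS and end in
# .openai.azure.com. Warn early rather than failing mid-pipeline.
case "$GRAPHRAG_API_BASE" in
  https://*.openai.azure.com) echo "GRAPHRAG_API_BASE looks valid" ;;
  *) echo "check GRAPHRAG_API_BASE: $GRAPHRAG_API_BASE" >&2 ;;
esac
```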
@@ -355,9 +358,9 @@ For more details about using the CLI, refer to the
-
python -m graphrag.index --root ./ragtest
+
python -m graphrag.index --root ./ragtest
-
@@ -370,24 +373,24 @@ Once the pipeline is complete, you should see a new folder called ./ragtes

    Here is an example using Global search to ask a high-level question:

    -
    python -m graphrag.query \
    +  
    python -m graphrag.query \
     --data ./ragtest/output/<timestamp>/artifacts \
    ---method global\
    +--method global \
     "What are the top themes in this story?"
    -

    Here is an example using Local search to ask a more specific question about a particular character:

    -
    python -m graphrag.query \
    +  
    python -m graphrag.query \
     --data ./ragtest/output/<timestamp>/artifacts \
     --method local \
     "Who is Scrooge, and what are his main relationships?"
    -
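As a convenience (my addition, not from the original page), the `<timestamp>` placeholder in the `--data` path can be resolved with a glob instead of being typed by hand. This sketch assumes at least one completed indexing run exists under `./ragtest/output` and picks the most recently modified one:

```shell
# Resolve the newest run directory so the <timestamp> placeholder does
# not have to be copied manually (assumes a standard ls with -t sorting).
ARTIFACTS=$(ls -dt ./ragtest/output/*/artifacts | head -n 1)
python -m graphrag.query \
  --data "$ARTIFACTS" \
  --method local \
  "Who is Scrooge, and what are his main relationships?"
```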