diff --git a/posts/get_started/index.html b/posts/get_started/index.html
index 628979c2..afca5a9b 100644
--- a/posts/get_started/index.html
+++ b/posts/get_started/index.html
@@ -323,29 +323,32 @@ It shows how to use the system to index some text, and then use the indexed data
  • GRAPHRAG_LLM_DEPLOYMENT_NAME - Deployment name for the Chat Completions model. Only required for Azure OpenAI users.
  • GRAPHRAG_EMBEDDING_DEPLOYMENT_NAME - Deployment name for the Embeddings model. Only required for Azure OpenAI users.
  • -

    OpenAI

    +

    OpenAI and Azure OpenAI

    +

    To get started, let's set the base environment variables.

    -
    export GRAPHRAG_API_KEY=<api_key> && \
    -export GRAPHRAG_LLM_MODEL=<chat_completions_model> && \
    -export GRAPHRAG_EMBEDDING_MODEL=<embeddings_model> && \
    +  
    export GRAPHRAG_API_KEY="<api_key>" && \
    +export GRAPHRAG_LLM_MODEL="<chat_completions_model>" && \
     export GRAPHRAG_LLM_MODEL_SUPPORTS_JSON="True" && \
    +export GRAPHRAG_EMBEDDING_MODEL="<embeddings_model>" && \
     export GRAPHRAG_INPUT_TYPE="text"
    -

    Azure OpenAI

    +

    In addition, Azure OpenAI users should set the following env-vars.

    -
    export GRAPHRAG_API_KEY=<api_key> && \
    -export GRAPHRAG_LLM_DEPLOYMENT_NAME=<chat_completions_model> && \
    -export GRAPHRAG_EMBEDDING_DEPLOYMENT_NAME=<embeddings_model> && \
    -export GRAPHRAG_INPUT_TYPE="text" && \
    -export GRAPHRAG_API_BASE="http://<domain>.openai.azure.com"
    +
    export GRAPHRAG_API_BASE="https://<domain>.openai.azure.com" && \
    +export GRAPHRAG_API_VERSION="2024-02-15-preview" && \
+export GRAPHRAG_LLM_API_TYPE="azure_openai_chat" && \
    +export GRAPHRAG_LLM_DEPLOYMENT_NAME="<chat_completions_deployment_name>" && \
+export GRAPHRAG_EMBEDDING_API_TYPE="azure_openai_embedding" && \
    +export GRAPHRAG_EMBEDDING_DEPLOYMENT_NAME="<embeddings_deployment_name>"
    -
@@ -355,9 +358,9 @@ For more details about using the CLI, refer to the
-
    python -m graphrag.index --root ./ragtest
    +
    python -m graphrag.index --root ./ragtest
-
@@ -370,24 +373,24 @@ Once the pipeline is complete, you should see a new folder called ./ragtes

    Here is an example using Global search to ask a high-level question:

    -
    python -m graphrag.query \
    +  
    python -m graphrag.query \
     --data ./ragtest/output/<timestamp>/artifacts \
    ---method global\
    +--method global \
     "What are the top themes in this story?"
    -

    Here is an example using Local search to ask a more specific question about a particular character:

    -
    python -m graphrag.query \
    +  
    python -m graphrag.query \
     --data ./ragtest/output/<timestamp>/artifacts \
     --method local \
     "Who is Scrooge, and what are his main relationships?"
    -
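The snippets above chain several `export` statements, and a typo in any one variable name (or a stray space around `=`, as fixed in this diff) silently breaks the pipeline. A minimal pre-flight sketch, not part of graphrag itself, that checks the base variables are set before launching the indexer (the variable names match those in the snippets above; the helper name `check_graphrag_env` is our own):

```shell
# Sketch of a pre-flight check for the base GraphRAG environment variables.
check_graphrag_env() {
  local missing=""
  for var in GRAPHRAG_API_KEY GRAPHRAG_LLM_MODEL GRAPHRAG_EMBEDDING_MODEL; do
    # ${!var} is bash indirect expansion: the value of the variable named $var
    [ -z "${!var}" ] && missing="$missing $var"
  done
  if [ -n "$missing" ]; then
    echo "missing:$missing"
    return 1
  fi
  echo "ok"
}
```

Run `check_graphrag_env` right before `python -m graphrag.index --root ./ragtest`; Azure OpenAI users could extend the list with the deployment-name and API-type variables from the second snippet.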