graphrag/tests/fixtures/text/settings.yml
Derek Worthen c644338bae
Refactor config (#1593)
* Refactor config

- Add new ModelConfig to represent LLM settings
    - Combines LLMParameters, ParallelizationParameters, encoding_model, and async_mode
- Add top level models config that is a list of available LLM ModelConfigs
- Remove LLMConfig inheritance and delete LLMConfig
    - Replace the inheritance with a model_id reference to the ModelConfig listed in the top level models config
- Remove all fallbacks and hydration logic from create_graphrag_config
    - This removes the automatic env variable overrides
- Support env variables within config files using Templating
    - This requires "$" to be escaped with an extra "$", so ".*\\.txt$" becomes ".*\\.txt$$" (see the sketch after this list)
- Update init content to initialize new config file with the ModelConfig structure
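
For orientation, a minimal sketch of the resulting layout, assuming illustrative model values; the input.file_pattern key is not part of the fixture below and is included here only to show the "$$" escape:

    models:
      default_embedding_model:
        type: openai_embedding             # illustrative value
        api_key: ${GRAPHRAG_API_KEY}       # substituted from the environment when the file is loaded
        model: text-embedding-3-small      # illustrative value
    embeddings:
      model_id: default_embedding_model    # references an entry in the top-level models dict
    input:
      file_pattern: ".*\\.txt$$"           # a literal "$" must be written "$$" (key assumed for illustration)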

* Use dict of ModelConfig instead of list

* Add model validations and unit tests

* Fix ruff checks

* Add semversioner change

* Fix unit tests

* validate root_dir in pydantic model

* Rename ModelConfig to LanguageModelConfig

* Rename ModelConfigMissingError to LanguageModelConfigMissingError

* Add validation for unexpected API keys

* Allow skipping pydantic validation for testing/mocking purposes.

* Add default lm configs to verb tests

* smoke test

* remove config from flows to fix llm arg mapping

* Fix embedding llm arg mapping

* Remove timestamp from smoke test outputs

* Remove unused "subworkflows" smoke test properties

* Add models to smoke test configs

* Update smoke test output path

* Send logs to logs folder

* Fix output path

* Fix csv test file pattern

* Update placeholder

* Format

* Instantiate default model configs

* Fix unit tests for config defaults

* Fix migration notebook

* Remove create_pipeline_config

* Remove several unused config models

* Remove indexing embedding and input configs

* Move embeddings function to config

* Remove skip_workflows

* Remove skip embeddings in favor of explicit naming

* fix unit test spelling mistake

* self.models[model_id] is already a language model. Remove redundant casting.

* update validation errors to instruct users to rerun graphrag init

* instantiate LanguageModelConfigs with validation

* skip validation in unit tests

* update verb tests to use default model settings instead of skipping validation

* test using llm settings

* cleanup verb tests

* remove unsafe default model config

* remove the ability to skip pydantic validation

* remove None union types when default values are set

* move vector_store from embeddings to top level of config and delete resolve_paths

* update vector store settings

* fix vector store and smoke tests

* fix serializing vector_store settings

* fix vector_store usage

* fix vector_store type

* support cli overrides for loading graphrag config

* rename storage to output

* Add --force flag to init

* Remove run_id and resume, fix Drift config assignment

* Ruff

---------

Co-authored-by: Nathan Evans <github@talkswithnumbers.com>
Co-authored-by: Alonso Guevara <alonsog@microsoft.com>
2025-01-21 17:52:06 -06:00


models:
  default_chat_model:
    type: ${GRAPHRAG_LLM_TYPE}
    api_key: ${GRAPHRAG_API_KEY}
    api_base: ${GRAPHRAG_API_BASE}
    api_version: ${GRAPHRAG_API_VERSION}
    deployment_name: ${GRAPHRAG_LLM_DEPLOYMENT_NAME}
    model: ${GRAPHRAG_LLM_MODEL}
    tokens_per_minute: ${GRAPHRAG_LLM_TPM}
    requests_per_minute: ${GRAPHRAG_LLM_RPM}
    model_supports_json: true
    parallelization_num_threads: 50
    parallelization_stagger: 0.3
    async_mode: threaded
  default_embedding_model:
    type: ${GRAPHRAG_EMBEDDING_TYPE}
    api_key: ${GRAPHRAG_API_KEY}
    api_base: ${GRAPHRAG_API_BASE}
    api_version: ${GRAPHRAG_API_VERSION}
    deployment_name: ${GRAPHRAG_EMBEDDING_DEPLOYMENT_NAME}
    model: ${GRAPHRAG_EMBEDDING_MODEL}
    tokens_per_minute: ${GRAPHRAG_EMBEDDING_TPM}
    requests_per_minute: ${GRAPHRAG_EMBEDDING_RPM}
    parallelization_num_threads: 50
    parallelization_stagger: 0.3
    async_mode: threaded

vector_store:
  type: "azure_ai_search"
  url: ${AZURE_AI_SEARCH_URL_ENDPOINT}
  api_key: ${AZURE_AI_SEARCH_API_KEY}
  container_name: "simple_text_ci"

claim_extraction:
  enabled: true

embeddings:
  model_id: "default_embedding_model"

community_reports:
  prompt: "prompts/community_report.txt"
  max_length: 2000
  max_input_length: 8000

storage:
  type: file
  base_dir: "output"

reporting:
  type: file
  base_dir: "logs"

snapshots:
  embeddings: True