graphrag/tests
Derek Worthen 2b70e4a4f3
Tokenizer (#2051)
* Add LiteLLM chat and embedding model providers.

* Fix code review findings.

* Add litellm.

* Fix formatting.

* Update dictionary.

* Update litellm.

* Fix embedding.

* Remove manual use of tiktoken and replace it with the Tokenizer
interface, which adds support for encoding and decoding for the models
supported by litellm (see the Tokenizer sketch after this commit summary).

* Update litellm.

* Configure litellm to drop unsupported params.

* Clean up semversioner release notes.

* Add num_tokens util to Tokenizer interface.

* Update litellm service factories.

* Clean up litellm chat/embedding model argument assignment.

* Update chat and embedding type field for litellm use and future migration away from fnllm.

* Flatten litellm service organization.

* Update litellm.

* Update litellm factory validation.

* Flatten litellm rate limit service organization.

* Update rate limiter - disable with None/null instead of 0 (see the rate-limit config sketch after this commit summary).

* Fix usage of get_tokenizer.

* Update litellm service registrations.

* Add jitter to exponential retry (see the backoff sketch after this commit summary).

* Update validation.

* Update validation.

* Add litellm request logging layer.

* Update cache key.

* Update defaults.

---------

Co-authored-by: Alonso Guevara <alonsog@microsoft.com>
2025-09-22 13:55:14 -06:00
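The Tokenizer commit above replaces direct tiktoken usage with a small
interface exposing encode, decode, and a num_tokens helper. A minimal sketch
of such an interface follows; the names (Tokenizer, TiktokenTokenizer) are
illustrative assumptions and may not match the actual graphrag API.

```python
# Hypothetical sketch of a Tokenizer interface; names are assumptions,
# not the actual graphrag classes.
from abc import ABC, abstractmethod

import tiktoken


class Tokenizer(ABC):
    """Abstraction used instead of calling tiktoken directly."""

    @abstractmethod
    def encode(self, text: str) -> list[int]:
        """Convert text into a list of token ids."""

    @abstractmethod
    def decode(self, tokens: list[int]) -> str:
        """Convert a list of token ids back into text."""

    def num_tokens(self, text: str) -> int:
        """Count the tokens produced for a piece of text."""
        return len(self.encode(text))


class TiktokenTokenizer(Tokenizer):
    """Tokenizer backed by a tiktoken encoding for OpenAI-style models."""

    def __init__(self, encoding_name: str = "cl100k_base") -> None:
        self._encoding = tiktoken.get_encoding(encoding_name)

    def encode(self, text: str) -> list[int]:
        return self._encoding.encode(text)

    def decode(self, tokens: list[int]) -> str:
        return self._encoding.decode(tokens)
```

For example, TiktokenTokenizer().num_tokens("hello world") returns a token
count without the caller touching tiktoken directly.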
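The rate limiter commit switches the "disabled" sentinel from 0 to None/null.
A small sketch of that convention as a config object; the field names here
are hypothetical, not graphrag's actual configuration schema.

```python
# Hypothetical rate-limit config: None disables limiting, a positive
# integer enables it. Field names are assumptions.
from dataclasses import dataclass


@dataclass
class RateLimitConfig:
    requests_per_minute: int | None = None  # None/null means "no rate limit"
    tokens_per_minute: int | None = None


def rate_limiting_enabled(cfg: RateLimitConfig) -> bool:
    """Rate limiting is active only when a limit is explicitly set."""
    return cfg.requests_per_minute is not None or cfg.tokens_per_minute is not None
```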
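The exponential-retry commit adds jitter so concurrent callers do not retry
in lockstep. Below is a minimal sketch of exponential backoff with full
jitter; retry_with_backoff is a hypothetical helper, not the code added in
this PR.

```python
# Hypothetical exponential backoff with full jitter; not the actual
# retry code added in this PR.
import asyncio
import random
from collections.abc import Awaitable, Callable
from typing import TypeVar

T = TypeVar("T")


async def retry_with_backoff(
    fn: Callable[[], Awaitable[T]],
    *,
    max_retries: int = 5,
    base_delay: float = 1.0,
    max_delay: float = 30.0,
) -> T:
    """Retry fn, sleeping a random amount up to base_delay * 2**attempt."""
    for attempt in range(max_retries):
        try:
            return await fn()
        except Exception:
            if attempt == max_retries - 1:
                raise
            # Full jitter: pick a delay uniformly in [0, capped exponential].
            delay = min(max_delay, base_delay * (2**attempt))
            await asyncio.sleep(random.uniform(0.0, delay))
    raise RuntimeError("unreachable: max_retries must be >= 1")
```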
fixtures Switch from Poetry to uv for package management (#2008) 2025-08-13 18:57:25 -06:00
integration Custom vector store schema implementation (#2062) 2025-09-19 10:11:34 -07:00
notebook Convert CLI to Typer app (#1305) 2024-10-24 14:22:32 -04:00
smoke Switch from Poetry to uv for package management (#2008) 2025-08-13 18:57:25 -06:00
unit Tokenizer (#2051) 2025-09-22 13:55:14 -06:00
verbs Fix id baseline (#2036) 2025-08-27 11:15:21 -07:00
__init__.py Create Language Model Providers and Registry methods. Remove fnllm coupling (#1724) 2025-02-20 08:56:20 -06:00
conftest.py Add Cosmos DB storage/cache option (#1431) 2024-12-19 13:43:21 -06:00
mock_provider.py Support OpenAI reasoning models (#1841) 2025-04-22 14:15:26 -07:00