graphrag/unified-search-app
Nathan Evans 710fdad6f0
Input factory (#2168)
* Update input factory to match other factories

* Move input config alongside input readers

* Move file pattern logic into InputReader

* Set encoding default

* Clean up optional column configs

* Combine structured data extraction

* Remove pandas from input loading

* Throw if empty documents

* Add json lines (jsonl) input support

* Store raw data

* Fix merge imports

* Move metadata handling entirely to chunking

* Nicer automatic title

* Typo

* Add get_property utility for nested dictionary access with dot notation

* Update structured_file_reader to use get_property utility

* Extract input module into new graphrag-input monorepo package

- Create new graphrag-input package with input loading utilities
- Move InputConfig, InputFileType, InputReader, TextDocument, and file readers (CSV, JSON, JSONL, Text)
- Add get_property utility for nested dictionary access with dot notation
- Include hashing utility for document ID generation
- Update all imports throughout codebase to use graphrag_input
- Add package to workspace configuration and release tasks
- Remove old graphrag.index.input module

* Rename ChunkResult to TextChunk and add transformer support

- Rename chunk_result.py to text_chunk.py with ChunkResult -> TextChunk
- Add 'original' field to TextChunk to track pre-transform text
- Add optional transform callback to chunker.chunk() method
- Add add_metadata transformer for prepending metadata to chunks
- Update create_chunk_results to apply transforms and populate original
- Update sentence_chunker and token_chunker with transform support
- Refactor create_base_text_units to use new transformer pattern
- Rename pluck_metadata to get/collect methods on TextDocument

* Back-compat comment

* Align input config type name with other factory configs

* Add MarkItDown support

* Remove pattern default from MarkItDown reader

* Remove plugins flag (implicit disabled)

* Format

* Update verb tests

* Separate storage from input config

* Add empty objects for NaN raw_data

* Fix smoke tests

* Fix BOM in csv smoke

* Format
2026-01-12 12:47:57 -08:00
app Input factory (#2168) 2026-01-12 12:47:57 -08:00
images Unified search added to graphrag (#1862) 2025-04-07 11:59:02 -06:00
.vsts-ci.yml Update .vsts-ci.yml (#1874) 2025-04-10 10:31:03 -06:00
Dockerfile Switch from Poetry to uv for package management (#2008) 2025-08-13 18:57:25 -06:00
pyproject.toml Python update (3.13) (#2149) 2025-12-15 15:39:38 -08:00
README.md Nov 2025 housekeeping (#2120) 2025-11-06 10:03:22 -08:00
uv.lock Remove graph embedding and UMAP (#2048) 2025-09-09 15:35:43 -07:00

Unified Search

Unified demo for GraphRAG search comparisons.

⚠️ This app is maintained for demo/experimental purposes and is not supported. Issues filed about it on the GraphRAG repo may not be addressed.

Requirements:

  • Python 3.11
  • UV

This sample app is not published to PyPI, so you'll need to clone the GraphRAG repo and run it from this folder.

We recommend always using a virtual environment:

  • uv venv --python 3.11
  • source .venv/bin/activate

Run index

Use GraphRAG to index your dataset before running Unified Search. We recommend starting with the Getting Started guide.
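A typical sequence, following the Getting Started guide, looks roughly like this (the project path is illustrative, and command names and flags may change between GraphRAG versions, so treat the guide as the source of truth):

  • uv pip install graphrag
  • graphrag init --root ./projects/dataset_1
  • add your source text under ./projects/dataset_1/input and fill in settings.yaml and .env
  • graphrag index --root ./projects/dataset_1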

Datasets

Unified Search supports multiple GraphRAG indexes by using a directory listing file. Create a listing.json file in the root folder where all your datasets are stored (locally or in blob storage), with the following format (one entry per dataset):

[{
    "key": "<key_to_identify_dataset_1>",
    "path": "<path_to_dataset_1>",
    "name": "<name_to_identify_dataset_1>",
    "description": "<description_for_dataset_1>",
    "community_level": "<integer for community level you want to filter>"
},{
    "key": "<key_to_identify_dataset_2>",
    "path": "<path_to_dataset_2>",
    "name": "<name_to_identify_dataset_2>",
    "description": "<description_for_dataset_2>",
    "community_level": "<integer for community level you want to filter>"
}]

For example, if you have a folder of GraphRAG indexes called "projects" and inside that you ran the Getting Started instructions, your listing.json in the projects folder could look like:

[{
    "key": "christmas-demo",
    "path": "christmas",
    "name": "A Christmas Carol",
    "description": "Getting Started index of the novel A Christmas Carol",
    "community_level": 2
}]

Data Source Configuration

The expected layout of the projects folder is the following:

  • projects_folder
    • listing.json
    • dataset_1
      • settings.yaml
      • .env (optional if you declare your environment variables elsewhere)
      • output
      • prompts
    • dataset_2
      • settings.yaml
      • .env (optional if you declare your environment variables elsewhere)
      • output
      • prompts
    • ...

Note: Any other folders inside each dataset folder are ignored and will not affect the app. Also, only the datasets declared in listing.json are used by Unified Search.
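To make the relationship between listing.json and the dataset folders concrete, here is a minimal, hypothetical sketch of reading a local listing and resolving each dataset's folder; the function name and returned fields are illustrative, not the app's actual code:

import json
import os
from pathlib import Path

def load_datasets(projects_root: str) -> list[dict]:
    # Read listing.json from the projects folder and attach resolved paths.
    root = Path(projects_root)
    listing = json.loads((root / "listing.json").read_text())
    datasets = []
    for entry in listing:
        folder = root / entry["path"]  # e.g. <projects_folder>/christmas
        datasets.append({
            **entry,
            "settings": folder / "settings.yaml",
            "output": folder / "output",  # artifacts produced by graphrag index
        })
    return datasets

datasets = load_datasets(os.environ["DATA_ROOT"])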

Storing your datasets

You can host Unified Search datasets locally or in Azure Blob Storage.

1. Local data folder

  1. Create a local folder with all your data and config as described above
  2. Tell the app where your folder is by setting the following environment variable to its absolute path (example below):
  • DATA_ROOT = <data_folder_absolute_path>
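For example, on macOS/Linux (the path is illustrative):

  • export DATA_ROOT=/absolute/path/to/projects_folder

or in PowerShell on Windows:

  • $env:DATA_ROOT = "C:\path\to\projects_folder"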

2. Azure Blob Storage

  1. If you want to use Azure Blob Storage, create a blob storage account with a "data" container and upload all your data and config as described above.
  2. Run az login and select an account that has read permissions on that storage (a quick access check is sketched after this list).
  3. Tell the app which blob account to use with the following environment variable:
  • BLOB_ACCOUNT_NAME = <blob_storage_name>
  4. (optional) Your blob account needs a container where your projects live. This defaults to data, as mentioned in step 1, but if you want to use a different container name you can set:
  • BLOB_CONTAINER_NAME = <blob_container_with_projects>
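To sanity-check that the identity from az login can actually read the container, a short diagnostic script along these lines can help; it uses the azure-identity and azure-storage-blob packages and is a verification sketch only, not part of the app:

import json
import os

from azure.identity import DefaultAzureCredential
from azure.storage.blob import BlobServiceClient

account = os.environ["BLOB_ACCOUNT_NAME"]
container = os.environ.get("BLOB_CONTAINER_NAME", "data")

# DefaultAzureCredential picks up the credentials established by `az login`.
service = BlobServiceClient(
    account_url=f"https://{account}.blob.core.windows.net",
    credential=DefaultAzureCredential(),
)
listing = service.get_container_client(container).download_blob("listing.json")
print(json.loads(listing.readall()))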

Run the app

Install all the dependencies: uv sync

Run the project using streamlit: uv run poe start

How to use it

Initial page

Configuration panel (left panel)

When you run the app, you will see two main panels. The left panel provides several configuration options for the app and can be collapsed:

  1. Datasets: all the datasets you defined in the listing.json file are shown, in order, in this dropdown.
  2. Number of suggested questions: this option lets you choose how many suggested questions are generated.
  3. Search options: this section lets you choose which search methods to use in the app. At least one search must be enabled to use the app.

Searches panel (right panel)

The right panel provides several functions.

  1. At the top you can see general information related to the chosen dataset (name and description).
  2. Below the dataset information there is a button labeled "Suggest some questions", which analyzes the dataset using global search and generates the most important questions (as many as you set in the configuration panel). To select a generated question, click the checkbox to the left of it.
  3. A textbox labeled "Ask a question to compare the results" where you can type the question you want to send.
  4. Two tabs called Search and Community Explorer:
    1. Search: all the search results are displayed here with their citations.
    2. Community Explorer: this tab is divided into two sections: Community Reports List and Selected Report.
Suggest some questions clicked

Selected question clicked

Community Explorer tab