Do you need to file an issue?
I have searched the existing issues and this bug is not already filed.
My model is hosted on OpenAI or Azure. If not, please look at the "model providers" issue and don't file a new one here.
I believe this is a legitimate bug, not just a question. If this is a question, please use the Discussions area.
Describe the issue
I used the DeepSeek API to generate a graph. When I query, I get a normal response based on the knowledge graph. However, when I switch to a local Ollama model, it does not respond based on the graph; it only answers from the model's own knowledge. Why is that?
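For context, a typical query invocation against an indexed project looks like the following; the project root and the question are illustrative placeholders, not taken from this report:

    graphrag query --root ./ragtest --method local --query "What are the top themes in this story?"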
Steps to reproduce
No response
GraphRAG Config Used
models:
  default_chat_model:
    type: openai_chat # or azure_openai_chat
    api_base: http://localhost:11434/v1
    # api_version: 2024-05-01-preview
    auth_type: api_key # or azure_managed_identity
    api_key: your_key
    # audience: "https://cognitiveservices.azure.com/.default"
    # organization: <organization_id>
    model: deepseek-v3:latest
    # deployment_name: <azure_model_deployment_name>
    encoding_model: cl100k_base # automatically set by tiktoken if left undefined
    model_supports_json: false # recommended if this is available for your model.
    concurrent_requests: 25 # max number of simultaneous LLM requests allowed
    async_mode: threaded # or asyncio
    retry_strategy: native
    max_retries: -1 # set to -1 for dynamic retry logic (most optimal setting based on server response)
    tokens_per_minute: 0 # set to 0 to disable rate limiting
    requests_per_minute: 0 # set to 0 to disable rate limiting
  default_embedding_model:
    type: openai_embedding # or azure_openai_embedding
    api_base: https://open.bigmodel.cn/api/paas/v4
    # api_version: 2024-05-01-preview
    auth_type: api_key # or azure_managed_identity
    api_key: your_key
    # audience: "https://cognitiveservices.azure.com/.default"
    # organization: <organization_id>
    model: embedding-2
    # deployment_name: <azure_model_deployment_name>
    encoding_model: cl100k_base # automatically set by tiktoken if left undefined
    model_supports_json: true # recommended if this is available for your model.
    concurrent_requests: 25 # max number of simultaneous LLM requests allowed
    async_mode: threaded # or asyncio
    retry_strategy: native
    max_retries: -1 # set to -1 for dynamic retry logic (most optimal setting based on server response)
    tokens_per_minute: 0 # set to 0 to disable rate limiting
    requests_per_minute: 0 # set to 0 to disable rate limiting

### Input settings ###

input:
  type: file # or blob
  file_type: text # [csv, text, json]
  base_dir: "input"

chunks:
  size: 1200
  overlap: 100
  group_by_columns: [id]

### Output/storage settings ###
## If blob storage is specified in the following four sections,
## connection_string and container_name must be provided

output:
  type: file # [file, blob, cosmosdb]
  base_dir: "output"

cache:
  type: file # [file, blob, cosmosdb]
  base_dir: "cache"

reporting:
  type: file # [file, blob, cosmosdb]
  base_dir: "logs"

vector_store:
  default_vector_store:
    type: lancedb
    db_uri: output/lancedb
    container_name: default
    overwrite: True

### Workflow settings ###

embed_text:
  model_id: default_embedding_model
  vector_store_id: default_vector_store

extract_graph:
  model_id: default_chat_model
  prompt: "prompts/extract_graph.txt"
  entity_types: [organization, person, geo, event]
  max_gleanings: 1

summarize_descriptions:
  model_id: default_chat_model
  prompt: "prompts/summarize_descriptions.txt"
  max_length: 500

extract_graph_nlp:
  text_analyzer:
    extractor_type: regex_english # [regex_english, syntactic_parser, cfg]

cluster_graph:
  max_cluster_size: 10

extract_claims:
  enabled: false
  model_id: default_chat_model
  prompt: "prompts/extract_claims.txt"
  description: "Any claims or facts that could be relevant to information discovery."
  max_gleanings: 1

community_reports:
  model_id: default_chat_model
  graph_prompt: "prompts/community_report_graph.txt"
  text_prompt: "prompts/community_report_text.txt"
  max_length: 2000
  max_input_length: 8000

embed_graph:
  enabled: false # if true, will generate node2vec embeddings for nodes

umap:
  enabled: false # if true, will generate UMAP embeddings for nodes (embed_graph must also be enabled)

snapshots:
  graphml: false
  embeddings: false

### Query settings ###
## The prompt locations are required here, but each search method has a number of optional knobs that can be tuned.
## See the config docs: https://microsoft.github.io/graphrag/config/yaml/#query

local_search:
  chat_model_id: default_chat_model
  embedding_model_id: default_embedding_model
  prompt: "prompts/local_search_system_prompt.txt"

global_search:
  chat_model_id: default_chat_model
  map_prompt: "prompts/global_search_map_system_prompt.txt"
  reduce_prompt: "prompts/global_search_reduce_system_prompt.txt"
  knowledge_prompt: "prompts/global_search_knowledge_system_prompt.txt"

drift_search:
  chat_model_id: default_chat_model
  embedding_model_id: default_embedding_model
  prompt: "prompts/drift_search_system_prompt.txt"
  reduce_prompt: "prompts/drift_search_reduce_prompt.txt"

basic_search:
  chat_model_id: default_chat_model
  embedding_model_id: default_embedding_model
  prompt: "prompts/basic_search_system_prompt.txt"
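As a sanity check (my suggestion, not part of the original report), you can confirm that the api_base above actually serves the configured model through Ollama's OpenAI-compatible endpoint:

    # expects a JSON chat completion in response if the endpoint and model name are correct
    curl http://localhost:11434/v1/chat/completions \
      -H "Content-Type: application/json" \
      -d '{"model": "deepseek-v3:latest", "messages": [{"role": "user", "content": "Say hello"}]}'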
Logs and screenshots
This is using the local Ollama model; the response is wrong:

This is using the DeepSeek API:
Additional Information
GraphRAG Version: v2.1.0
Operating System: Ubuntu 20.04
Python Version: 3.11
Related Issues: