A fully local web research and report writing assistant

Ollama Deep Researcher is an open source project developed by the LangChain team that provides a fully local web research and report writing assistant. The tool combines a local large language model (LLM) with a web search engine to automatically retrieve information, summarize it, and iteratively refine the results, ultimately generating a Markdown research report with cited sources.

Project core functions

  • Fully local operation: Runs a local LLM hosted by Ollama or LMStudio, so your data stays private and never has to be sent to the cloud.
  • Intelligent iterative research: Automatically refines its search strategy over multiple rounds of iteration (3 rounds by default, configurable) to fill knowledge gaps.
  • Multiple search engine support: DuckDuckGo is used by default; Tavily or Perplexity can also be configured as the search engine.
  • Visible research process: Monitor research progress in real time through LangGraph Studio, including search results, summaries, and the final report as they are generated.
  • Report-quality output: Automatically generates Markdown research reports with full citations, ready for further editing and sharing.

Workflow overview

  1. Generate a search query: After the user enters a research topic, the LLM generates an initial web search query.
  2. Fetch and summarize search results: Relevant web content is retrieved through the configured search engine and summarized by the LLM.
  3. Identify knowledge gaps and iterate: The LLM analyzes the summary, identifies information gaps, generates new search queries, and repeats the steps above.
  4. Generate the final report: After the configured number of iterations, a Markdown research report containing all cited sources is produced.
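The loop above can be sketched in a few lines of Python. This is an illustrative sketch only: `search_web`, `summarize`, and `find_gap_query` are hypothetical stubs standing in for the real LangGraph nodes, search engine calls, and local LLM prompts.

```python
def search_web(query):
    # Stub: a real version would query DuckDuckGo, Tavily, or Perplexity.
    return [f"result for: {query}"]

def summarize(results, running_summary):
    # Stub: a real version would prompt the local LLM to extend the summary.
    return running_summary + " | ".join(results)

def find_gap_query(summary):
    # Stub: a real version would ask the LLM for a follow-up query
    # that targets gaps in the current summary.
    return f"follow-up on: {summary[-30:]}"

def research(topic, max_loops=3):
    """Run the generate -> search -> summarize -> reflect loop."""
    query, summary = topic, ""
    for _ in range(max_loops):
        results = search_web(query)          # step 2: fetch
        summary = summarize(results, summary)  # step 2: summarize
        query = find_gap_query(summary)      # step 3: reflect and iterate
    return f"# Report on {topic}\n\n{summary}"  # step 4: final report

report = research("local LLMs")
```

The default of three loops mirrors the project's default iteration count; the real implementation threads this state through a LangGraph graph rather than a plain loop.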

Quick start guide (using macOS as an example)

  1. Install the Ollama app: Download and install the macOS app from the official Ollama website.
  2. Pull a local LLM model: For example, pull the DeepSeek R1 model with the following command:
    ollama pull deepseek-r1:8b


  3. Configure a search engine API (optional): Register for Tavily, obtain an API key, and set the TAVILY_API_KEY environment variable.
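    For example, in the shell session where you will start the service (the key value below is a placeholder):

    ```shell
    # Set the Tavily API key for the current shell session (placeholder value).
    export TAVILY_API_KEY="tvly-your-api-key"
    ```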
  4. Clone project and start service
    git clone https://github.com/langchain-ai/ollama-deep-researcher.git
    cd ollama-deep-researcher
    python -m venv .venv
    source .venv/bin/activate
    pip install -e .
    langgraph dev


  5. Visit LangGraph Studio: Open http://127.0.0.1:2024 in a browser to enter LangGraph Studio, configure the research parameters, and start using it.

🧠 Project inspiration and technical background

Ollama Deep Researcher draws its design inspiration from the IterDRAG method, which decomposes a query into sub-queries, progressively retrieves and summarizes information, and builds up a complete answer. The project uses LangGraph to construct the research workflow graph and combines it with a local LLM to automate information retrieval and summarization.

📁 Project resources and links

GitHub: https://github.com/langchain-ai/ollama-deep-researcher

YouTube:
