Dive: Open source MCP host desktop application

OpenAgentPlatform's Dive integrates with any LLM that supports function calling.

Project introduction

Dive is an open source desktop application positioned as an MCP Host (that is, a desktop client that hosts MCP servers). Its main features are:

  • Multiple LLM support: compatible with ChatGPT, Anthropic Claude, Ollama, local models, and more, provided the model supports function calling.
  • Cross-platform: native installation packages for Windows, macOS, and Linux.
  • Model Context Protocol (MCP): supports both stdio and SSE transports for connecting to local or remote MCP servers.
  • Multilingual interface: Traditional Chinese, Simplified Chinese, English, Spanish, Japanese, Korean, and more.
  • Feature-rich: multi-API-key management, custom system prompts, MCP tools configurable in the UI, an automatic update mechanism, and more.
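To make the two transports concrete, here is a sketch of an MCP server configuration in the `mcpServers` JSON layout commonly used by MCP hosts. The exact file location and field names Dive expects may differ; the server names, commands, and URL here are illustrative.

```python
import json

# Minimal MCP server configuration sketch (illustrative values).
config = {
    "mcpServers": {
        # stdio transport: the host spawns the server as a child process
        # and exchanges JSON-RPC messages over stdin/stdout.
        "fetch": {
            "command": "uvx",
            "args": ["mcp-server-fetch"],
        },
        # SSE transport: the host connects to an already-running
        # HTTP endpoint (useful for remote servers).
        "remote": {
            "transport": "sse",
            "url": "http://localhost:8000/sse",
        },
    }
}

print(json.dumps(config, indent=2))
```

The key practical difference: stdio servers live and die with the host process, while SSE servers run independently and can be shared by several clients.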

🧠Dive MCP Host (a related project)

Starting with v0.8.0, the MCP Host has been rewritten from TypeScript to Python and split out into an independent project, dive-mcp-host:

  • Unified API: manages multiple LLMs, session persistence, HTTP/WebSocket services, multi-threaded conversations, and user management.
  • Works with Dive Desktop: the desktop app handles presentation and tool invocation, while MCP Host provides the back-end functionality.
  • Development environment: requires Python ≥ 3.12; usable as both a command-line tool and an HTTP service.
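Under the hood, an MCP host and server speak JSON-RPC 2.0. As a sketch of the kind of message exchanged over the stdio transport, this builds the `initialize` request a host sends when a server starts (the client name, version, and protocol date are illustrative):

```python
import json

# JSON-RPC 2.0 "initialize" request an MCP host sends at startup
# (field values here are illustrative, not Dive's exact output).
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "initialize",
    "params": {
        "protocolVersion": "2024-11-05",
        "capabilities": {},
        "clientInfo": {"name": "dive", "version": "0.8.0"},
    },
}

# stdio transport: one JSON message written to the server's stdin.
line = json.dumps(request)
print(line)
```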

v0.8.0 update highlights (April 21, 2025)

According to the official changelog and Reddit community feedback:

  • Launched the Python version of MCP Host (dive-mcp-host), for better support of LangChain and the wider Python ecosystem.
  • Improved the LLM settings interface: edit API keys, customize model IDs, and choose whether to verify that a model supports tool calling.
  • Improved tool management: add, edit, and delete MCP tools directly in the UI, with both JSON editing and form modes.
  • Although the Python rewrite briefly interrupted development, it lays the foundation for stronger features to come.
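One way to verify tool-calling support (a sketch of the general idea, not necessarily Dive's actual check) is to send the model a trivial tool definition in the OpenAI-style `tools` format and see whether the API accepts it. The model ID and the `echo` tool below are hypothetical:

```python
import json

# Probe request in the OpenAI-style chat-completions "tools" format.
# A model without function calling will typically reject or ignore it.
probe = {
    "model": "local-model",  # any model ID configured in the host
    "messages": [{"role": "user", "content": "ping"}],
    "tools": [
        {
            "type": "function",
            "function": {
                "name": "echo",
                "description": "Return the input text unchanged.",
                "parameters": {
                    "type": "object",
                    "properties": {"text": {"type": "string"}},
                    "required": ["text"],
                },
            },
        }
    ],
}

print(json.dumps(probe, indent=2))
```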

Quick Start Guide

  1. Install: go to the Releases page and download the package for your platform (.exe/.dmg/.AppImage).
  2. Configure LLMs: add your LLM API keys and model IDs in the GUI.
  3. Configure MCP servers: connect to local (stdio) or remote (SSE) servers and add tools (such as fetch, yt‑dl, etc.).
  4. Optionally, use the Python project dive-mcp-host to start an HTTP service or interact from the command line.
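Once a server is connected, invoking one of its tools again comes down to a JSON-RPC message. This sketch builds a `tools/call` request (the method name follows the MCP specification; the `fetch` tool and its argument are illustrative):

```python
import json

# JSON-RPC request a host sends to invoke an MCP tool.
# "tools/call" is the MCP method; tool name and args are examples.
call = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {
        "name": "fetch",
        "arguments": {"url": "https://example.com"},
    },
}

print(json.dumps(call))
```

The server's response carries the tool result back to the host, which hands it to the LLM as the outcome of the function call.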

🌍Community feedback and position in the ecosystem

  • The community regards it as a “lightweight, simple, genuinely usable local LLM + function-tool integration solution.”
  • It has been recommended as a standard MCP client example on Hacker News and on registries such as PulseMCP.

Summary

| Capability | Description |
| --- | --- |
| Multi-model support | Any LLM that supports function calling |
| Desktop/back-end separation | Front-end GUI + Python back end |
| Open tool chain | Supports custom tool plug-ins |
| Active community | ~1.4k GitHub stars, 107 forks; ongoing Reddit and developer engagement |

It suits developers and advanced users who want to quickly build a local LLM + tool-calling system with a GUI.

Possible next steps:

  • Try it out: install the latest release, connect to a local or remote MCP server, and invoke a simple tool (such as fetch).
  • Develop and debug: clone the project, customize the tool configuration, or try connecting another LLM via the Python MCP Host.
  • Join the community: submit issues or PRs, or talk with the developers on Discord/Reddit.

GitHub: https://github.com/OpenAgentPlatform/Dive

YouTube:
