Ollama-serve is an Ollama forwarding proxy

A proxy that adds API key authentication in front of a native Ollama service. Ollama does not officially provide API key verification; this project fills that gap, letting you deploy Ollama services more securely and prevent unauthorized access.

Project overview

Ollama’s official service provides no API key authentication mechanism by default, which means that anyone who knows the API endpoint can access your Ollama service, posing a security risk.

The project implements a simple forwarding server with FastAPI: every request must carry a valid API key before it is relayed to the Ollama service, thereby enhancing security.

🔧 Core features

  1. API key authentication: all requests must carry a valid API key.
  2. Multi-user support: multiple API keys, each associated with a specific user, making it easy to distinguish the access and usage of different users.
  3. Session management: an IP-based trust system reduces the need for repeated authentication.
  4. Client compatibility: works with LangChain and other clients.
  5. Logging: records all requests and responses in detail for easy monitoring and troubleshooting.
  6. Streaming response support: fully supports Ollama’s streaming responses.
  7. Health check: provides a health check endpoint to monitor the status of the proxy and the backend Ollama service.

🚀Installation and configuration

Environment requirements

  • Python 3.8+
  • Ollama installed and running

Install dependencies

pip install fastapi uvicorn httpx

Configuration file

Create a config.py file and set your API keys and the Ollama API address. For example:

# Set your API keys
VALID_API_KEYS = ["your_api_key1", "your_api_key2"]

# Set the Ollama API address
OLLAMA_API_URL = "http://localhost:11434"

Run the service

uvicorn ollama_serve:app --host 0.0.0.0 --port 8000

After startup, any request carrying a valid API key is accepted by the proxy, forwarded to the native Ollama service, and the response is returned to the caller.

🔗Project address

You can access the complete code and documentation for the project on GitHub:

👉 lymanzhao/Ollama-serve

GitHub: https://github.com/lymanzhao/Ollama-serve
