A platform to compare various AI models

This project is an open-source tool by Ahmet Dedeler called "AI LLM Comparison" (the website was originally Countless.dev and was later renamed LLM Arena, at llmarena.ai). It compares large language models on the market along multiple dimensions, such as price, token limits, and capabilities.

🛠 Features and highlights

1. Model price comparison and feature list

  • Supports mainstream models from multiple providers, including OpenAI (GPT‑4, GPT‑3.5), Anthropic Claude, Google Gemini/PaLM, Meta LLaMA, and Cohere Command.
  • Displays each model's price (input/output per million tokens), token limit (context window), whether it supports multi-modal input (such as vision), and common capabilities (chat, embedding, image/voice, etc.).
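A single entry in such a comparison table can be modeled as a small record type. This is a minimal sketch; the field names, and the example prices, are illustrative assumptions, not the project's actual schema.

```typescript
// Illustrative shape of one comparison entry; field names and the
// sample values are assumptions, not the project's actual schema.
interface ModelEntry {
  provider: string;              // e.g. "OpenAI", "Anthropic"
  name: string;                  // e.g. "gpt-4"
  inputPricePerMTokens: number;  // USD per 1M input tokens
  outputPricePerMTokens: number; // USD per 1M output tokens
  contextWindow: number;         // max context size in tokens
  supportsVision: boolean;       // multi-modal (vision) support
  capabilities: string[];        // e.g. ["chat", "embedding"]
}

// Example entry with made-up numbers, purely for illustration.
const example: ModelEntry = {
  provider: "OpenAI",
  name: "gpt-4",
  inputPricePerMTokens: 2.5,
  outputPricePerMTokens: 10,
  contextWindow: 128_000,
  supportsVision: true,
  capabilities: ["chat", "vision"],
};
```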

2. Price calculator

  • Users can enter their own usage (such as daily or monthly token consumption) and get a real-time cost estimate for each model, making cost comparisons easy.
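Since the table lists prices per million tokens, the calculator's core arithmetic reduces to scaling the user's token counts. A minimal sketch (the function name and signature are assumptions, not the project's code):

```typescript
// Minimal cost-estimate sketch: prices are quoted per 1M tokens,
// so cost = (tokens / 1e6) * price. Not the project's actual code.
function estimateCost(
  inputTokens: number,
  outputTokens: number,
  inputPricePerM: number,  // USD per 1M input tokens
  outputPricePerM: number, // USD per 1M output tokens
): number {
  return (
    (inputTokens / 1_000_000) * inputPricePerM +
    (outputTokens / 1_000_000) * outputPricePerM
  );
}

// e.g. 2M input + 0.5M output tokens at $2.5 / $10 per 1M tokens:
// 2 * 2.5 + 0.5 * 10 = 10 USD per month
const monthly = estimateCost(2_000_000, 500_000, 2.5, 10);
```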

3. Side-by-side comparison view (Versus)

  • Compare two or more models side by side to quickly see which one best fits your application scenario.

4. Real-time data updates

  • The project uses BerriAI's LiteLLM as its data source, refreshing basic model information and prices daily so that the comparison data stays current.
  • A GitHub Actions workflow automatically pulls and updates the data on a daily schedule; as of June 14, 2025, an update record appears every morning.
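A scheduled workflow of this kind typically looks like the sketch below. The workflow name and script filenames come from the article; the cron time, file paths, and commit steps are illustrative assumptions, not the repository's exact configuration.

```yaml
# Illustrative daily-update workflow; paths and commit details are
# assumptions, not the repository's exact configuration.
name: Update AI Models
on:
  schedule:
    - cron: "0 6 * * *"    # every day at 06:00 UTC
  workflow_dispatch:        # allow manual runs

jobs:
  update:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
      - run: node fetchLatestJson.js    # pull raw LiteLLM data
      - run: node transformModels.js    # convert to the site's JSON
      - run: |
          git config user.name "github-actions"
          git config user.email "actions@github.com"
          git add -A
          git commit -m "chore: daily model data update" || true
          git push
```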

Technology stack and architecture

  • Front end: Next.js 14 + TypeScript + Tailwind CSS + Radix UI, with responsive layouts for desktop and mobile.
  • Data acquisition: a scheduled script (fetchLatestJson.js) fetches raw model information from LiteLLM → a local conversion step (transformModels.js) → stored as JSON → read and rendered by the front end.
  • CI/CD: a GitHub Actions workflow ("Update AI Models") regularly refreshes the JSON data and triggers automatic deployment.
  • Deployment: clone the repo, then npm install and npm run dev to run locally, or deploy to a platform such as Vercel.
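The fetch → transform → store pipeline can be sketched as below. The LiteLLM source URL is the JSON file BerriAI publishes; the output filename, field mapping, and function names are illustrative assumptions, not the project's actual scripts.

```typescript
// Sketch of the fetch -> transform -> store pipeline described above.
// Output path, field mapping, and names are illustrative assumptions.
import { writeFileSync } from "node:fs";

const SOURCE_URL =
  "https://raw.githubusercontent.com/BerriAI/litellm/main/model_prices_and_context_window.json";

interface SiteModel {
  name: string;
  inputPricePerMTokens: number;
  outputPricePerMTokens: number;
  contextWindow: number;
}

// Convert one raw LiteLLM record into the shape the front end reads.
// LiteLLM quotes per-token USD prices, so scale them to per-million.
function transform(name: string, raw: any): SiteModel {
  return {
    name,
    inputPricePerMTokens: (raw.input_cost_per_token ?? 0) * 1_000_000,
    outputPricePerMTokens: (raw.output_cost_per_token ?? 0) * 1_000_000,
    contextWindow: raw.max_input_tokens ?? raw.max_tokens ?? 0,
  };
}

// The daily CI job would run something like this (not invoked here,
// since it requires network access).
async function updateModels(): Promise<void> {
  const res = await fetch(SOURCE_URL);              // "fetchLatestJson" step
  const raw: Record<string, any> = await res.json();
  const models = Object.entries(raw).map(
    ([name, rec]) => transform(name, rec),          // "transformModels" step
  );
  writeFileSync("models.json", JSON.stringify(models, null, 2));
}
```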

Applicable scenarios

  • Developers: quickly pick the most cost-effective model.
  • Enterprise / product managers: compare differences in output length, capability, and cost across models.
  • Researchers / learners: compare multiple models for teaching, reports, or study, and quickly grasp their differences.
  • 100% free and open source, no account required, with an intuitive, low-barrier interface.

🔁 Background and community response

  • Originally named Countless.dev, the tool launched on Product Hunt in early December 2024 and was well received by the community (a 5/5 rating).
    • Its creator Ahmet said on X: "Data is pulled from LiteLLM every day and continuously updated."
    • Commenters suggested improving the table's rendering performance, which was optimized in later iterations.
  • The domain was later moved to llmarena.ai for stronger branding.

Summary

Module | Description
Core functions | Model price and capability comparison, cost calculator, side-by-side comparison
Data updates | LiteLLM data is pulled daily and refreshed via CI
Technology stack | Next.js, TypeScript, Tailwind, Radix UI + GitHub Actions + Vercel
User value | Free, intuitive, and suitable for individual developers and organizations selecting models

GitHub: https://github.com/Ahmet-Dedeler/ai-llm-comparison

YouTube:
