An AI-assisted drawing app built with Next.js

In the latest wave of AI applications, “AI helps me draw something” is usually understood as generating pictures and illustrations. But in real engineering and product scenarios, what is more valuable is AI that generates “structure” rather than pixels.

next-ai-draw-io is an experimental project built around this idea.

It’s not about making a full draw.io alternative, but rather trying to answer a more specific question:

How do you connect the natural-language understanding of large language models to an interactive web drawing tool?

The core problem the project solves

In traditional drawing tools, the process typically looks like this:

  1. The user manually drags and drops nodes
  2. Adjusts connections and layout
  3. Repeatedly revises the structure

In this project, the order is reversed:

  1. The user describes what they want to draw in natural language
  2. The AI turns that description into structured graph data
  3. The front-end renders editable graphics from that structure

In other words, the AI is not responsible for “drawing” but for “understanding structure”.

The overall idea of the project

From an engineering point of view, next-ai-draw-io is more of a paradigm demo for AI web applications than a showcase of any single technique.

The entire pipeline can be broken down into three layers:

1. Input layer: Natural language

The user enters plain text, such as:

“Draw a user login flow, including account entry, verification, success or failure branch.”

This step does not have any drawing logic, just plain text.

2. AI Layer: Text → structure

The role of the large model here is not a “painter” but a “structure generator”.

Its core task is to convert text into data like this:

{
  "nodes": [
    { "id": "login", "label": "Login" },
    { "id": "check", "label": "Verify account" }
  ],
  "edges": [
    { "from": "login", "to": "check" }
  ]
}

This step is critical:
Once the output is stable, structured data, all subsequent rendering, interaction, and editing can be handled by the front-end.
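To make “stable structured data” concrete, here is a minimal TypeScript validator for the node/edge shape shown above. The types and the parseGraphData function are hypothetical illustrations, not the project’s actual code:

```typescript
// Types mirroring the JSON shape above (an assumption, not the
// project's actual schema).
interface GraphNode { id: string; label: string }
interface GraphEdge { from: string; to: string }
interface GraphData { nodes: GraphNode[]; edges: GraphEdge[] }

// Reject anything that is not the expected structure, so a malformed
// model response never becomes broken UI state.
function parseGraphData(raw: string): GraphData | null {
  let parsed: unknown;
  try {
    parsed = JSON.parse(raw);
  } catch {
    return null; // not even valid JSON
  }
  const d = parsed as Partial<GraphData> | null;
  if (!d || !Array.isArray(d.nodes) || !Array.isArray(d.edges)) return null;
  const nodesOk = d.nodes.every(
    (n) => n && typeof n.id === "string" && typeof n.label === "string",
  );
  if (!nodesOk) return null;
  // Every edge must reference nodes that actually exist.
  const ids = new Set(d.nodes.map((n) => n.id));
  const edgesOk = d.edges.every((e) => e && ids.has(e.from) && ids.has(e.to));
  return edgesOk ? { nodes: d.nodes, edges: d.edges } : null;
}
```

Validating at this boundary is what makes the decoupling safe: the renderer never has to defend itself against a malformed model response.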

3. Front-end layer: Structure → graphics

The front-end is built with Next.js:

  • Receives the data returned by the AI
  • Maps nodes and edges to Canvas/SVG graphics
  • Supports continued editing and layout adjustment

The focus here is not on how polished the drawing looks, but on:

How to turn the AI’s output into genuinely usable UI state.
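As a sketch of the structure-to-graphics step, here is a hypothetical renderSvg function that lays nodes out in a simple horizontal row and emits SVG markup. The function name, layout, and dimensions are assumptions for illustration; the project’s actual renderer may work quite differently:

```typescript
interface GraphNode { id: string; label: string }
interface GraphEdge { from: string; to: string }

// Hypothetical renderer: assign each node a position in a row, then
// emit edges as lines and nodes as labeled rectangles.
function renderSvg(nodes: GraphNode[], edges: GraphEdge[]): string {
  const pos = new Map<string, { x: number; y: number }>();
  nodes.forEach((n, i) => pos.set(n.id, { x: 40 + i * 160, y: 40 }));

  const lines = edges
    .filter((e) => pos.has(e.from) && pos.has(e.to))
    .map((e) => {
      const a = pos.get(e.from)!;
      const b = pos.get(e.to)!;
      // Connect the right edge of the source box to the left edge of the target.
      return `<line x1="${a.x + 120}" y1="${a.y + 20}" x2="${b.x}" y2="${b.y + 20}" stroke="black"/>`;
    });

  const boxes = nodes.map((n) => {
    const p = pos.get(n.id)!;
    return (
      `<g><rect x="${p.x}" y="${p.y}" width="120" height="40" fill="none" stroke="black"/>` +
      `<text x="${p.x + 60}" y="${p.y + 25}" text-anchor="middle">${n.label}</text></g>`
    );
  });

  return `<svg xmlns="http://www.w3.org/2000/svg">${lines.join("")}${boxes.join("")}</svg>`;
}
```

In a real editor the positions would live in component state rather than being recomputed, so the user can drag nodes afterwards and the layout persists.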

The trade-offs behind technical selection

Judging from its code structure and dependencies, the project has several clear priorities:

  • Use Next.js
    • Well suited to quickly building AI web applications
    • API Routes act directly as the AI-calling layer
  • The AI only does “decision and generation”
    • It is not involved in rendering
    • It is not coupled to a specific UI
  • The front-end fully controls interaction
    • Node dragging
    • Visual feedback
    • State management

This is arguably the safest way to divide responsibilities when engineering AI applications.
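The “API Route as AI-calling layer” idea can be sketched framework-free. Everything below is hypothetical: handleGenerate, the injected callModel client, and the status codes are assumptions, not the project’s actual API. In the real app this logic would sit inside a Next.js route handler:

```typescript
interface ApiResult {
  status: number;
  body: unknown;
}

// Hypothetical handler: validate the prompt, call the model, and make
// sure only well-formed JSON ever reaches the client. The model client
// is injected so the contract stays "prompt in, structure out".
async function handleGenerate(
  prompt: unknown,
  callModel: (p: string) => Promise<string>,
): Promise<ApiResult> {
  if (typeof prompt !== "string" || prompt.trim() === "") {
    return { status: 400, body: { error: "prompt required" } };
  }
  const raw = await callModel(prompt);
  try {
    // Parse on the server so the front-end only ever receives structure.
    return { status: 200, body: JSON.parse(raw) };
  } catch {
    return { status: 502, body: { error: "model output was not valid JSON" } };
  }
}
```

Wrapping this in an actual Next.js Route Handler is then a thin layer that translates the incoming Request and outgoing Response to and from this function.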

What this project deliberately does not do

Understanding a project often depends on what it deliberately does not do.

next-ai-draw-io does not try to:

  • Build a full graphics editor
  • Cover every draw.io feature
  • Pursue complex layout algorithms

It’s more like an answer to the question:

“If I were to make an AI-powered drawing tool, what should the minimum viable architecture look like?”

Summary

next-ai-draw-io is not a very complete tool, but it clearly shows one sound way to engineer an AI application:

  • AI is responsible for understanding and generating structures
  • The front-end is responsible for expression and interaction
  • The two are decoupled by a stable data structure

At a time when AI applications are multiplying, this idea may deserve more attention than any specific model.

GitHub: https://github.com/DayuanJiang/next-ai-draw-io