Letta Code is a free, open-source AI programming assistant. Install it with npm install -g @letta-ai/letta-code and run it from the terminal.
It creates an intelligent agent that remembers your codebase, usage preferences, and past work across sessions, and keeps learning and evolving through commands such as /init, /remember, and /skill. This sets it apart from tools like Claude Code, whose context is confined to a single session. You can also switch easily between large models such as Claude and GPT.
With this tool, coding efficiency can improve significantly: the agent is like a colleague whose abilities grow over time, continuously getting better at repetitive tasks; the project reports efficiency gains of up to 37% in benchmark tests.
For a while, it felt as if AI-assisted coding had gone about as far as it could.
Open the editor, type a prompt, and the model spits out decent code. You copy, modify, and ask again; the cycle repeats. Efficiency does improve, but something always feels vaguely off: it behaves more like a fast-responding search engine than someone actually involved in the project.
It wasn’t until I saw the letta-code project that I realized what the problem was.
The problem is not whether the model is strong or not, but whether it “remembers”.
Traditional AI programming tools are essentially short-term memory systems. Give them context and they seem smart; break the context and they instantly become a stranger again. Once a project grows more complex, with more documents and a longer change history, they start to make things up. The problem is not that they aren't smart enough, but that they have no "experience."
What Letta Code wants to do is reverse this.
It does not try to make AI better at writing code; it tries to make AI a "programmer who remembers."
After a single npm install -g @letta-ai/letta-code in the terminal, the first thing it does is not generate code but start "getting to know your project." You can use /init to initialize the project context, use /remember to tell it your preferences, and even write lessons learned, rules, and habits into its long-term memory. Gradually, it no longer just responds to your input; it begins to understand you with history behind it.
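As a rough sketch, a first session might look something like the transcript below. The slash commands (/init, /remember) and the install command are the ones named in this article; the project directory, the "letta" binary name, and the prompts are illustrative assumptions, so check the package README for the exact invocation:

```text
$ npm install -g @letta-ai/letta-code
$ cd my-project                # hypothetical project directory
$ letta                        # binary name assumed; see the package docs

> /init                        # let the agent scan and summarize the project
> /remember We use TypeScript strict mode and prefer small, pure functions.
> Please refactor the user API module in our usual error-handling style.
```

The point of the transcript is the shape of the workflow: initialize once, record preferences as you go, and let later requests lean on that accumulated memory instead of restating everything in each prompt.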
This feeling is a bit subtle.
You are not “calling a tool”, you are “cultivating an assistant.”
More importantly, this assistant will grow.
When you repeatedly give it a certain type of task, say, organizing an API, refactoring a module, or writing functions in a particular style, it doesn't start from scratch each time. It remembers how you did it last time, where you made changes, even which ways of writing things you disliked. Next time, its output quietly shifts. That shift comes not from the prompt but from "experience."
This is also the most essential difference between it and tools like Claude Code. The latter is already powerful, but the collaboration is mostly one-off: you hand over a task, it hands back an answer, and that's the end. Letta Code tries to stretch that relationship out, keeping the AI inside your project and continuously involved, rather than dropping in and leaving.
Of course, this does not mean that it is already a perfect “AI programmer.” It still relies on underlying models such as Claude or GPT, and you still need to guide it and correct it. But one thing has changed: it has begun to have “continuity.” Once there is continuity, the nature of many things will change.
You start handing it repetitive work, because you know it will keep getting better. You become more willing to "collaborate" with it rather than simply "use" it. At some point you may even find it behaves a bit like a new hire on the team: it needs mentoring at first, but once it's up to speed, it picks things up on its own.
Behind this is actually a very important turn.
In the past, what we optimized was single-shot generation quality: making the AI as smart as possible within one round of conversation. Letta Code's idea is to optimize long-term behavior instead. What matters is not how well one reply is answered, but whether the agent grows into something like a reliable developer over the life of a project.
So the claim of a "37% efficiency increase" is actually not that important. What's really interesting is something else: if an AI can remember you, understand you, and adapt to you across sessions, the efficiency gains it brings are likely not linear but cumulative.
In other words, it is no longer a matter of tools, but a matter of relationships.
When AI no longer just answers questions, but begins to “participate in history,” programming is already quietly changing.
GitHub: https://github.com/letta-ai/letta-code
YouTube: