
When you build AI agents, every token matters. Lower token usage means lower API costs, faster responses, and better performance. Although JSON has been the long-time standard for structured data, it’s not designed for LLMs. This is where TOON (Token-Oriented Object Notation) comes in — a lightweight, human-friendly, token-efficient format built specifically for language models.
Today, we’ll explore why TOON is becoming incredibly popular, how it works, and how you can start using it in your own AI workflows — including for tasks like image generation, where JSON is usually the default.
What’s Wrong With JSON for LLMs?
JSON is amazing for APIs, configuration files, software development, and databases. However, once you start using it inside an AI prompt, the weaknesses become obvious.
Here’s why:
- JSON uses braces {}, brackets [], and double quotes "" everywhere.
- LLMs tokenize punctuation individually.
- Every repeated key adds more tokens.
- Deeply nested JSON grows fast.
Because of this, JSON often produces 30–60% more tokens than necessary.
A Real Example
JSON Example (User List):
{
  "users": [
    { "id": 1, "name": "Alice", "role": "admin" },
    { "id": 2, "name": "Bob", "role": "user" }
  ]
}
- Tokens: ~51
- Characters: 118
Now compare that with TOON:
users[2]{id,name,role}:
1,Alice,admin
2,Bob,user
- Tokens: ~24
- Characters: 52
That’s more than 50% fewer tokens, instantly.
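The comparison above can be checked directly. The sketch below compares the two representations by character count, which is a rough, tokenizer-independent proxy for token count (exact token savings depend on the model's tokenizer):

```python
import json

# The same user list from the example above, in both formats.
users = [
    {"id": 1, "name": "Alice", "role": "admin"},
    {"id": 2, "name": "Bob", "role": "user"},
]

json_text = json.dumps({"users": users})
toon_text = "users[2]{id,name,role}:\n1,Alice,admin\n2,Bob,user"

# Character counts approximate the token gap without needing a tokenizer.
print(len(json_text), len(toon_text))
print(f"saved: {1 - len(toon_text) / len(json_text):.0%}")
```

Running an actual tokenizer over both strings will give you the precise numbers for your model of choice.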
Why TOON Matters for Modern AI Agents
The rise of AI agents brought a new bottleneck: token efficiency. Agents constantly exchange structured data with the LLM — user profiles, logs, tasks, tool outputs, memory states, and more. Consequently, every wasted token makes the system slower and more expensive.
This is where TOON helps.
Key Advantages of TOON
⭐ 1. Minimal Syntax
TOON strips out unnecessary characters.
Instead of braces and quotes, it uses indentation and tabular formatting.
⭐ 2. Faster Scanning
LLMs read TOON like a spreadsheet.
Rows and columns make patterns easier to detect and reason about.
⭐ 3. More Human-Friendly
Even non-technical users can understand TOON at a glance.
⭐ 4. Cheaper to Run
Fewer tokens mean:
- faster responses
- lower API billing
- more efficient memory usage
⭐ 5. Works Perfectly for Flat or Tabular Data
Not everything needs deep nesting.
Most AI agent data is simple and flat — which TOON handles extremely well.
When TOON is the Better Choice
TOON shines in scenarios where clarity and token efficiency matter most. Therefore, it’s ideal for:
- user lists
- product catalogs
- knowledge base snippets
- logs and event streams
- tool results
- classification datasets
- agent memory blocks
- summarized database exports
Example: Product Catalog in TOON
products[3]{id,name,price,stock}:
101,Wireless Mouse,19.99,34
102,Keyboard Pro,49.99,12
103,USB Hub 3.0,14.99,55
This entire structure is extremely compact and easy for the LLM to parse.
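To make the tabular pattern concrete, here is a minimal hand-rolled encoder for flat, uniform rows. The function name `to_toon_table` is our own; the official TOON tooling handles quoting, nesting, and edge cases this sketch skips:

```python
def to_toon_table(name, rows):
    """Encode a list of flat dicts as a TOON-style tabular block.

    A minimal sketch for uniform rows; real TOON encoders also
    handle quoting, escaping, and nested values.
    """
    fields = list(rows[0])
    header = f"{name}[{len(rows)}]{{{','.join(fields)}}}:"
    lines = [",".join(str(row[f]) for f in fields) for row in rows]
    return "\n".join([header, *lines])

products = [
    {"id": 101, "name": "Wireless Mouse", "price": 19.99, "stock": 34},
    {"id": 102, "name": "Keyboard Pro", "price": 49.99, "stock": 12},
    {"id": 103, "name": "USB Hub 3.0", "price": 14.99, "stock": 55},
]
print(to_toon_table("products", products))
```

This reproduces the product catalog block shown above from ordinary Python dicts.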
When JSON Is Still the Better Choice
Even though TOON is efficient, JSON still has a purpose.
Use JSON when you need:
- nested or hierarchical objects
- compatibility with APIs
- storage in databases
- mixed data types and custom structures
Therefore, JSON remains the right fit for many traditional applications.
TOON for Image Generation (Yes, It Works!)
Many people don’t realize this, but JSON is often used for image-generation APIs like Flux, SDXL, and Playground v3.
A typical JSON prompt looks like this:
{
  "prompt": "a modern cyberpunk city with neon lights",
  "width": 1024,
  "height": 1024,
  "steps": 30,
  "style": "cinematic"
}
This is perfectly fine, but it uses more tokens than necessary.
TOON Version:
image{prompt,width,height,steps,style}:
"a modern cyberpunk city with neon lights",1024,1024,30,cinematic
This reduces token usage dramatically.
TOON Prompt Example for a Fashion Model
image{prompt,width,height,steps,style}:
"Indian traditional fashion shoot, full-body, elegant silk saree, soft studio lighting, realistic skin texture",1024,1024,28,cinematic
Your LLM can convert this back into JSON if the API requires it.
Because TOON is for LLM input — not API submission — it reduces tokens without breaking your workflow.
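The final TOON-to-JSON step doesn't even need the LLM. As a sketch, the quoted TOON row can be parsed with Python's `csv` module (which handles the quoted prompt containing commas and spaces) and rebuilt into the API payload; the field names here simply mirror the article's example:

```python
import csv
import io
import json

# Header and row from the TOON image prompt shown above.
header = ["prompt", "width", "height", "steps", "style"]
row = '"a modern cyberpunk city with neon lights",1024,1024,30,cinematic'

# csv.reader correctly splits the row while respecting the quoted prompt.
values = next(csv.reader(io.StringIO(row)))
payload = dict(zip(header, values))

# Cast the numeric fields before sending the payload to the image API.
for key in ("width", "height", "steps"):
    payload[key] = int(payload[key])

print(json.dumps(payload))
```

The LLM sees the compact TOON form; your code hands the API the JSON it expects.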
TOON for AI Agent Tools: A Practical Example
Let’s say your agent needs to summarize customer activity.
JSON:
{
  "id": 551,
  "name": "Sarah",
  "actions": [
    "viewed product",
    "added to cart",
    "removed from cart",
    "purchased"
  ]
}
TOON:
activity{id,name,actions[4]}:
551,Sarah:
- viewed product
- added to cart
- removed from cart
- purchased
The TOON version is shorter, easier to read, and token-efficient.
Real-World Examples Where Teams Are Shifting to TOON
Here are practical areas where companies already use TOON:
✔ AI Zaps & Workflows
Tools like n8n, Zapier, and Make use TOON for sending compact data into LLM steps.
✔ Multi-Agent Systems
Memory and communication between agents become more efficient.
✔ E-commerce Workflows
Product lists reduce token usage heavily.
✔ Customer Support Agents
Ticket summaries and user profiles become faster to process.
✔ Local LLM Users
On-device models benefit from smaller prompts and fewer tokens.
How to Convert JSON to TOON (Simple Method)
JSON Input:
{
  "id": 1,
  "name": "Leo",
  "skills": ["coding", "testing"]
}
TOON Output:
user{id,name,skills[2]}:
1,Leo:
- coding
- testing
If needed, the LLM can instantly convert this back to JSON for API usage.
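The conversion above can also be scripted. The following is a minimal sketch (the function `object_to_toon` is our own, following the shape used in this article's examples, not the official TOON encoder) for a single flat object whose only nested values are lists:

```python
import json

def object_to_toon(name, obj):
    """Convert one flat JSON object (scalars plus list fields) into
    the TOON-style shape used in this article's examples.
    Sketch only; official TOON encoders are far more complete."""
    scalar = {k: v for k, v in obj.items() if not isinstance(v, list)}
    lists = {k: v for k, v in obj.items() if isinstance(v, list)}
    # Header declares scalar fields plus each list field with its length.
    fields = list(scalar) + [f"{k}[{len(v)}]" for k, v in lists.items()]
    out = [f"{name}{{{','.join(fields)}}}:"]
    # Scalar row; a trailing colon introduces the list items that follow.
    out.append(",".join(str(v) for v in scalar.values()) + (":" if lists else ""))
    for v in lists.values():
        out.extend(f"- {item}" for item in v)
    return "\n".join(out)

data = json.loads('{"id": 1, "name": "Leo", "skills": ["coding", "testing"]}')
print(object_to_toon("user", data))
```

This reproduces the TOON output shown above directly from the JSON input.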
Conclusion: TOON vs JSON — Which Should You Use?
Both formats have strengths. JSON is universal and reliable for APIs. TOON, however, is clearly better for LLMs because it’s token-efficient, faster to parse, and more readable.
Ultimately:
- Use TOON for LLM input, instructions, memory, and data exchange.
- Use JSON for API calls, database operations, and nested data.
In short, if you’re building AI agents that process structured data repeatedly, switching to TOON can cut token usage by roughly 50–60% while keeping the workflow easy to maintain.
And that’s a massive win.