LLM-optimized API design
Predictable APIs across frameworks. Traditional component libraries are designed for humans who browse docs, copy examples, and tweak. An LLM-optimized library flips the priority: the model has to infer correct usage from minimal context and produce working code on the first try. The five principles below are what make that possible — and what shaped every API decision in Atelier UI.
1. One name for one concept — everywhere
If LlmButton takes a variant prop, then LlmBadge, LlmAlert, and LlmCard use variant too. If size controls scale on a button, it controls scale on every component that scales. No synonyms, no renamings.
The model only needs to learn the pattern once. Every subsequent component is a variation on the same schema, so the inference cost stays minimal and the risk of hallucinated prop names drops sharply.
```tsx
// Three components, three different names
<Button kind="filled" scale="lg" />
<Badge type="success" />
<Alert level="warning" />
```

```tsx
// One pattern, every component
<LlmButton variant="primary" size="lg" />
<LlmBadge variant="success" />
<LlmAlert variant="warning" />
```

Every component with variants uses variant. Every component with sizes uses size. No exceptions.
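One way to picture this rule is a shared vocabulary of prop types that every component reuses. The sketch below is illustrative: Variant, Size, the props interfaces, and variantClass are assumed names, not Atelier UI's actual exports.

```typescript
// A single shared vocabulary of prop types (illustrative names).
type Variant = 'primary' | 'secondary' | 'outline' | 'success' | 'warning';
type Size = 'sm' | 'md' | 'lg';

// Every component's props are a variation on the same schema.
interface LlmButtonProps { variant?: Variant; size?: Size; }
interface LlmBadgeProps { variant?: Variant; }
interface LlmAlertProps { variant?: Variant; }

// Because the unions are shared, one helper can serve every component.
function variantClass(component: string, variant: Variant = 'primary'): string {
  return `${component}--${variant}`;
}
```

For example, variantClass('llm-badge', 'success') returns "llm-badge--success", with no per-component special cases.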
2. Narrow literal types, not wide strings
A prop typed as string gives the model nothing to work with. It has to guess
valid values from context, documentation, or training data — all unreliable. A prop typed
as 'primary' | 'secondary' | 'outline' is self-documenting: the model reads
the type and knows exactly what to write.
```ts
// What values are valid here?
variant: string
size: string
position: string
```

```ts
// Self-documenting — no guessing needed
variant: 'primary' | 'secondary' | 'outline'
size: 'sm' | 'md' | 'lg'
position: 'top' | 'bottom' | 'left' | 'right'
```

Always a union of string literals: never a bare string, never numeric codes, never enums, whose valid values hide behind a name instead of being spelled out in the type where the model can read them.
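A common way to keep such unions maintainable is to derive both the type and a runtime check from a single as const list. This is a sketch under assumed names (VARIANTS and isVariant are not Atelier UI exports):

```typescript
// One source of truth for the literal union (illustrative names).
const VARIANTS = ['primary', 'secondary', 'outline'] as const;
type Variant = (typeof VARIANTS)[number]; // 'primary' | 'secondary' | 'outline'

// Runtime guard derived from the same list; useful for validating
// untyped input such as CMS data or LLM output.
function isVariant(value: string): value is Variant {
  return (VARIANTS as readonly string[]).includes(value);
}
```

Adding a new variant to VARIANTS updates the type and the guard at once, so the two can never drift apart.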
3. Sensible defaults so bare usage always works
Every input has a default. <llm-button>Click</llm-button> renders
a correct, styled, accessible button without a single prop. This matters for AI output
because the model can scaffold something working first and add customization in the next
turn — instead of needing every prop correct on the first attempt.
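A minimal sketch of the defaults pattern, with all names assumed rather than taken from Atelier UI:

```typescript
type Variant = 'primary' | 'secondary' | 'outline';
type Size = 'sm' | 'md' | 'lg';

// Every prop is optional; defaults are applied in one place.
interface LlmButtonProps {
  variant?: Variant;
  size?: Size;
  disabled?: boolean;
}

// Resolve a full prop set from whatever the caller provided.
function resolveButtonProps(props: LlmButtonProps = {}): Required<LlmButtonProps> {
  return {
    variant: props.variant ?? 'primary',
    size: props.size ?? 'md',
    disabled: props.disabled ?? false,
  };
}
```

Callers override only what they need: resolveButtonProps({ size: 'lg' }) keeps the default variant and disabled values.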
```tsx
// Crashes or renders nothing without required props
<Button />
<Card type="?" padding="?">...</Card>
```

```tsx
// Always works — customize only what you need
<llm-button>Click</llm-button>
<llm-card>Content</llm-card>
```

4. A single reactivity model per framework
Mixing patterns forces the model to decide which approach to use — and it will sometimes guess wrong. Atelier UI picks one and uses it everywhere.
- Angular: Signals only — input(), output(), model(). No legacy @Input/@Output decorators.
- React: Props + callbacks. No class components, no legacy ref patterns.
- Vue: Composition API + defineProps/defineEmits. No Options API.
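The React-side contract, for instance, reduces to plain props plus callbacks. A hedged sketch follows: LlmToggleProps and fireToggle are assumed names, and the real components render JSX rather than a bare handler.

```typescript
// Props in, callback out: the one reactivity pattern the model must learn.
interface LlmToggleProps {
  checked?: boolean;
  onCheckedChange?: (checked: boolean) => void;
}

// Minimal headless version of the toggle behavior (illustrative).
function fireToggle(props: LlmToggleProps): boolean {
  const next = !(props.checked ?? false);
  props.onCheckedChange?.(next); // notify the parent, never mutate internally
  return next;
}
```

Because every component follows the same checked/onCheckedChange shape, the model never has to choose between a controlled-prop pattern and some other state mechanism.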
5. The MCP server closes the loop
Even a perfectly designed API has details the model can't infer: exact default values, which props interact, what the output event payload looks like. The MCP server provides this at runtime — so the model queries it instead of guessing.
The five tools cover the full workflow: list_components for discovery,
search_components for intent matching, get_component_docs for
the full API, get_stories for usage patterns, and
get_theming_guide for design token names. Together they mean the model never
has to rely on training data for component-specific facts.
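Those five tool names can be treated as a closed, typed surface. The sketch below shows the idea; the tool names come from this section, but the routing helper and question categories are assumptions, not part of the actual server.

```typescript
// The five MCP tools as one closed union (names from the text above).
type ToolName =
  | 'list_components'
  | 'search_components'
  | 'get_component_docs'
  | 'get_stories'
  | 'get_theming_guide';

// Hypothetical helper: route each kind of question to the right tool
// instead of letting the model fall back on training data.
function toolFor(question: 'discover' | 'match-intent' | 'api' | 'usage' | 'tokens'): ToolName {
  switch (question) {
    case 'discover': return 'list_components';
    case 'match-intent': return 'search_components';
    case 'api': return 'get_component_docs';
    case 'usage': return 'get_stories';
    case 'tokens': return 'get_theming_guide';
  }
}
```

Because the union is closed, a wrong tool name is a compile-time error rather than a silent miss at runtime.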