# I Built Google's Missing A2UI Renderer for Jetpack Compose
Google's A2UI protocol has no Jetpack Compose renderer yet. I built one with 29 components, data binding, validation, and WebSocket streaming. Here's why it matters.
Google's A2UI protocol is the most important thing to happen to AI-powered mobile apps since SwiftUI. It lets AI agents generate real, native user interfaces — not text, not HTML blobs, not webviews — actual native components rendered by the platform's own toolkit.
There's just one problem: there's no Jetpack Compose renderer. Google's own roadmap puts it in Q2 2026. That's months away.
So I built one. It's open source, it supports 29 components, and it runs on Android and iOS today via Compose Multiplatform. Here's why A2UI matters, how the protocol works, and what I learned building the renderer.
## What Is A2UI and Why Should You Care?
Right now, when an AI agent wants to show you something interactive — a form, a product card, a booking flow — it has two bad options:
- Return text and hope the app figures out how to display it
- Generate HTML and render it in a webview (slow, ugly, security nightmare)
A2UI (Agent-to-UI) is Google's answer: a declarative JSON protocol where agents describe what they want to show, and the app decides how to render it using its own native components. No code execution. No iframes. No trust issues.
Here's what that looks like in practice:
```json
{
  "root": "main",
  "components": {
    "main": {
      "component": "Column",
      "children": ["greeting", "action_btn"]
    },
    "greeting": {
      "component": "Text",
      "properties": { "text": "Hello! How can I help?" }
    },
    "action_btn": {
      "component": "Button",
      "properties": { "label": "Book a table" },
      "actions": { "click": { "name": "start_booking" } }
    }
  }
}
```
The agent sends this JSON. Your app renders it as native Compose widgets — Text(), Button(), Column() — with your app's theme, typography, and design system. The user sees a native UI, not a chatbot response.
Why this is a big deal:
- Security: A2UI is declarative data, not executable code. The agent can only request components from your app's approved catalog. No UI injection, no XSS, no arbitrary code execution.
- Native feel: Components inherit your app's Material Design theme, accessibility features, and platform behavior. It looks and feels like your app, not a third-party embed.
- LLM-friendly: The flat component structure with ID references is trivial for LLMs to generate incrementally. Agents can stream UI updates as they think.
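That flat, ID-referenced structure is also easy to traverse. Here's a minimal, dependency-free sketch of walking the surface above from its root; the types are simplified stand-ins for illustration, not the renderer's actual model:

```kotlin
// Simplified stand-in for an A2UI component -- illustration only,
// not the renderer's real data model.
data class Component(
    val type: String,
    val children: List<String> = emptyList(),
    val properties: Map<String, String> = emptyMap()
)

// Walk the flat component map from the root, following ID references,
// and return the components in render order.
fun renderOrder(root: String, components: Map<String, Component>): List<String> {
    val out = mutableListOf<String>()
    fun walk(id: String) {
        val c = components[id] ?: return // dangling reference: skip it
        out += "${c.type}#$id"
        c.children.forEach { walk(it) }
    }
    walk(root)
    return out
}

fun main() {
    val surface = mapOf(
        "main" to Component("Column", children = listOf("greeting", "action_btn")),
        "greeting" to Component("Text", properties = mapOf("text" to "Hello! How can I help?")),
        "action_btn" to Component("Button", properties = mapOf("label" to "Book a table"))
    )
    println(renderOrder("main", surface))
    // prints [Column#main, Text#greeting, Button#action_btn]
}
```

Because every component lives at the top level and children are just string IDs, an agent can emit (or patch) any node without regenerating the whole tree.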
## The Protocol Stack: Where A2UI Fits
A2UI doesn't work alone. It's one layer in the emerging agent protocol stack:
| Layer | Protocol | What It Does |
|---|---|---|
| Agent coordination | A2A | Agents talk to agents |
| Tools & data | MCP | Agents access tools, APIs, databases |
| Runtime connection | AG-UI | Bi-directional frontend ↔ agent communication |
| UI rendering | A2UI | Agents generate native UI |
Think of it this way: MCP tells the agent what data is available. A2UI tells the app what to show the user. AG-UI is the pipe between them. They're complementary, not competing.
## Why I Built the Jetpack Compose Renderer
Google currently ships renderers for Web (Lit), Angular, and Flutter. React is in progress for Q1 2026. Jetpack Compose and SwiftUI are planned for Q2 2026.
But Kotlin and Jetpack Compose power the majority of new Android development. If you're building an AI-powered Android app today and want native agent-generated UIs, you're stuck.
I wanted to fix that. The result is a2ui-JetpackCompose — a full A2UI renderer built on Compose Multiplatform, targeting both Android and iOS from a single codebase.
## What's Included
29 built-in components across five categories:
| Category | Components |
|---|---|
| Content | Text (h1-h5 variants), Image (icon/avatar/feature), Icon (28+ mapped), Video, AudioPlayer |
| Layout | Row, Column, Card, List (lazy, template-based), Tabs (swipeable), Modal, Divider |
| Input | Button (primary/borderless), TextField (standard/longText/number/obscured), CheckBox, ChoicePicker (radio/checkbox), Slider, DateTimeInput |
| Navigation | Scaffold, TopBar, BottomBar, Fab |
| Extended | Box, Scrollable, LazyColumn, LazyRow, Spacer, Switch, Dropdown, Progress, Loading |
Data binding via JSON Pointer (RFC 6901). The agent sends a data model, components bind to paths like "/user/name", and updates flow reactively.
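JSON Pointer itself is simple enough to sketch in a few lines. The following is an illustrative, dependency-free resolver over plain maps and lists — not the library's actual binding engine — but it shows what resolving a path like "/user/name" involves, including RFC 6901's ~0/~1 escapes:

```kotlin
// Illustrative RFC 6901 resolver over nested Maps/Lists.
// A sketch for explanation, not the renderer's binding engine.
fun resolvePointer(pointer: String, data: Any?): Any? {
    if (pointer.isEmpty()) return data // "" refers to the whole document
    require(pointer.startsWith("/")) { "A non-empty JSON Pointer must start with '/'" }
    return pointer.removePrefix("/").split("/").fold(data) { node, rawToken ->
        // Unescape per RFC 6901: "~1" -> "/" first, then "~0" -> "~"
        val token = rawToken.replace("~1", "/").replace("~0", "~")
        when (node) {
            is Map<*, *> -> node[token]
            is List<*> -> token.toIntOrNull()?.let { node.getOrNull(it) }
            else -> null // path descends into a scalar: unresolvable
        }
    }
}

fun main() {
    val model = mapOf(
        "user" to mapOf("name" to "Ada", "roles" to listOf("admin", "editor"))
    )
    println(resolvePointer("/user/name", model))    // prints Ada
    println(resolvePointer("/user/roles/1", model)) // prints editor
}
```

Unresolvable paths come back as null rather than throwing, which matters when an agent streams data before the UI that binds to it.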
14 built-in validation and formatting functions: required, regex, length, numeric, email, formatString, formatNumber, formatCurrency, formatDate, pluralize, and, or, not, openUrl.
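To give a feel for how field rules chain, here's an illustrative sketch. The rule names mirror the built-ins (required, regex, email), but the implementation is a simplified stand-in, not the library's validation engine:

```kotlin
// Illustrative validation chaining: rules run in order and the first
// failure message wins; null means the value is valid.
// The names mirror A2UI's built-ins, but this is a sketch, not the library.
typealias Rule = (String) -> String?

fun required(message: String = "Required"): Rule =
    { value -> if (value.isBlank()) message else null }

fun regex(pattern: String, message: String = "Invalid format"): Rule =
    { value -> if (Regex(pattern).matches(value)) null else message }

fun email(message: String = "Invalid email"): Rule =
    regex("""[^@\s]+@[^@\s]+\.[^@\s]+""", message)

// Returns the first error, or null when every rule passes.
fun validate(value: String, rules: List<Rule>): String? =
    rules.firstNotNullOfOrNull { it(value) }

fun main() {
    val rules = listOf(required(), email())
    println(validate("", rules))             // prints Required
    println(validate("not-an-email", rules)) // prints Invalid email
    println(validate("a@b.co", rules))       // prints null
}
```

Running rules in declaration order means the agent controls which error the user sees first, without the client hard-coding any form logic.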
Real-time WebSocket streaming. Agents push surface updates incrementally — add, update, or remove components without re-rendering the entire UI.
Full design system override. Every component can be replaced with your own implementation. Your agent UIs match your app's design system exactly.
## How It Works Under the Hood
### 1. Parse the Surface
The agent sends a JSON surface. The parser converts it to a typed Kotlin model:
```kotlin
val json = """{"root":"main","components":{...},"data":{...}}"""
val result = A2UIParser.parseDocument(json)
result.onSuccess { surface ->
    // surface.root, surface.components, surface.data
}
```
### 2. Render Native Compose
The renderer walks the component tree from root, resolves data bindings, and produces native Compose widgets:
```kotlin
@Composable
fun AgentScreen(surface: A2UISurface) {
    MaterialTheme {
        A2UIRenderer(
            surface = surface,
            onAction = { event ->
                // Forward to agent or handle locally
                println("${event.actionName} on ${event.componentId}")
            }
        )
    }
}
```
That's it. The A2UIRenderer composable takes a surface and renders the full component tree. User interactions fire A2UIActionEvent objects that you route back to the agent.
### 3. Custom Components
Want to register a custom star rating component that your agent can request?
```kotlin
registry.register("Rating") { component, surface, resolver, onAction, modifier ->
    val rating = resolver.resolveFloat(component, "value") ?: 0f
    StarRatingWidget(
        rating = rating,
        onRatingChanged = { newRating ->
            onAction(A2UIActionEvent(component.id, "rate", mapOf("value" to newRating)))
        },
        modifier = modifier
    )
}
```
Now any agent can include "component": "Rating" in its surface and your app renders your custom widget.
### 4. WebSocket Streaming
For real-time agent UIs, connect via WebSocket:
```kotlin
val stateManager = A2UIStateManager()
val connection = A2UIConnection(
    stateManager = stateManager,
    config = A2UIConnectionConfig(
        host = "your-agent-server.com",
        port = 443,
        path = "/a2ui"
    )
)

// Connect and collect surface updates
connection.connect(coroutineScope)
stateManager.surface.collect { surface ->
    // Re-render with updated surface
}
```
The agent can send four message types: surface (full UI), commands (incremental updates), loading (progress indicators), and clear (reset).
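Handling those four types client-side amounts to a simple dispatch. Here's a sketch with stand-in message types — the library's actual message model will differ:

```kotlin
// Stand-in message hierarchy for the four A2UI stream message types.
// Illustration only; the library's real model is richer.
sealed interface AgentMessage
data class Surface(val json: String) : AgentMessage       // full UI replacement
data class Commands(val ops: List<String>) : AgentMessage // incremental add/update/remove
data class Loading(val visible: Boolean) : AgentMessage   // progress indicator
object Clear : AgentMessage                               // reset the surface

// Exhaustive dispatch: the sealed hierarchy means no else branch is needed.
fun handle(msg: AgentMessage): String = when (msg) {
    is Surface -> "replace surface (${msg.json.length} chars of JSON)"
    is Commands -> "apply ${msg.ops.size} incremental op(s)"
    is Loading -> if (msg.visible) "show loading indicator" else "hide loading indicator"
    Clear -> "clear surface"
}

fun main() {
    println(handle(Commands(listOf("update:greeting", "remove:action_btn"))))
    // prints apply 2 incremental op(s)
}
```

The incremental `Commands` path is the interesting one: only the referenced component IDs change, so the rest of the composition stays untouched.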
## Real-World Use Cases
Here's where this gets practical for production apps:
Customer support: Instead of a chatbot asking "What's your order number?", the agent sends a native form with a text field and an order-lookup button, then displays the order details in a Card with Image components, all matching your app's design.
Healthcare intake: An agent generates a multi-step form with validated fields: date pickers for symptom onset, checkbox groups for medications, sliders for pain levels. The validation engine enforces required, regex, and numeric rules before the user can submit.
E-commerce: Agent presents product recommendations as a lazy-scrolling list with images, prices, and "Add to cart" buttons. Each interaction routes back to the agent for personalized follow-ups.
## A2UI vs. The Alternatives
| Approach | Native Feel | Security | Agent Control | Effort |
|---|---|---|---|---|
| A2UI | Full native | Declarative only (safe) | High — agents compose UIs | Medium (need renderer) |
| MCP Apps | Webview/iframe | Sandboxed | Medium — HTML resources | Low |
| Raw HTML | None | Risky (XSS, injection) | High — full control | Low |
| Text only | N/A | Safe | None | None |
| Custom JSON | Full native | You decide | High | Very high (build everything) |
A2UI hits the sweet spot: agents get real UI control, apps stay secure, and users get a native experience. The only downside was the missing Jetpack Compose renderer, which is why I built one.
## What's Next
The A2UI roadmap has Jetpack Compose and SwiftUI renderers planned for Q2 2026. The protocol itself is moving toward v1.0 by Q4 2026 with stability guarantees.
In the meantime, a2ui-JetpackCompose is open source and available today. It implements the v0.9 spec with 29 components, full data binding, validation, theming, and WebSocket streaming.
If you're building an Android app with AI agent features, you don't have to wait for Google. Clone the repo, add it as a dependency, and start rendering agent UIs natively today.
The agent-driven UI era is here. The question isn't whether your app will have AI-generated interfaces — it's whether they'll feel native or like an afterthought.
If you found this useful, I send a daily 5-minute AI briefing — subscribe here.
Disclosure: Some links in this post are affiliate links. If you sign up through them, I may earn a small commission at no extra cost to you. I only recommend tools I've actually used or thoroughly evaluated. This post was written with AI assistance and reviewed for accuracy.