If you've opened a tokens.json file in the last year and seen a $value and $type next to every entry, you've already met the W3C Design Tokens Community Group format. It is the JSON dialect that Style Dictionary, Tokens Studio, Cobalt, Specify, Supernova, Penpot, and the new wave of AI design tools have quietly aligned on. It is also the format that the Google Labs DESIGN.md spec uses for its YAML token block, which is why a Taste Profile ships with a tokens.json that conforms to it.
This post is a practical tour. What the format is, what shape a real token file takes, the parts that trip people up, and the parts that are still in flux. By the end, you should be able to look at any modern design token file and read it the same way you read a package.json.
## Why a standard exists in the first place
Before DTCG, every tool invented its own JSON shape. Style Dictionary used one structure. Tokens Studio used another. Each design system tool exported something subtly different, and the only way to move tokens between tools was to write a converter, lose information, or rebuild by hand.
That was tolerable when design tokens were a niche concern. It stopped being tolerable the moment AI design tools entered the loop. Claude Design, Cursor, v0, Lovable, ChatGPT, Figma Make: each of them ingests a brand somehow. If every tool wanted a different token format, you'd be maintaining five exports of the same palette.
The W3C Design Tokens Community Group (DTCG) was formed in 2020 specifically to fix this. Its draft specification defines a single JSON shape that every tool can read and write. The spec is still a community draft (not a ratified W3C standard), but adoption is far enough along that you can treat it as the working contract.
## The shape of a token
A DTCG token is an object with two reserved keys at its core: `$value`, which is required, and `$type`, which can sit on the token itself or be inherited from a parent group.
```json
{
  "color": {
    "primary": {
      "$value": "#2563EB",
      "$type": "color"
    }
  }
}
```

The `$` prefix is how the spec marks reserved metadata. Anything starting with `$` is "talking to the parser." Anything else is part of your token tree.
That tiny convention matters. It means a parser can walk your file and confidently say "this is a token, that is a group, this thing over here is a description." No guesswork, no naming heuristics, no tool-specific magic strings.
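As a sketch of what that walk looks like in practice (a hypothetical walker, not any particular tool's implementation):

```python
def walk(node, path=()):
    """Recursively classify every entry in a DTCG tree.

    A dict with a "$value" key is a token; any other dict is a group.
    Keys starting with "$" are metadata for the parser, not tree entries.
    """
    results = {}
    if "$value" in node:
        results[".".join(path)] = node["$value"]
        return results
    for key, child in node.items():
        if key.startswith("$"):  # $type, $description, ...: metadata, skip
            continue
        results.update(walk(child, path + (key,)))
    return results

tree = {"color": {"primary": {"$value": "#2563EB", "$type": "color"}}}
# walk(tree) → {"color.primary": "#2563EB"}
```

The `$value` check is the entire classification rule, which is exactly why the reserved-prefix convention makes the format so easy to parse.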
Here are the keys you'll see most:
| Key | Purpose |
|---|---|
| `$value` | The actual value (a hex string, a number, an alias, a composite object). |
| `$type` | The token type. Tells tools how to render, transform, and validate. |
| `$description` | A human-readable note. Often the most useful field for AI tools. |
| `$extensions` | Tool-specific metadata, namespaced (e.g. `com.tokens-studio.modifiers`). |
That's most of what you need to read a token file fluently.
## Groups and naming
A DTCG file is a tree. Anything that isn't a token is a group.
```json
{
  "color": {
    "primary": {
      "$value": "#2563EB",
      "$type": "color"
    },
    "neutral": {
      "100": { "$value": "#F1F5F9", "$type": "color" },
      "900": { "$value": "#0F172A", "$type": "color" }
    }
  }
}
```

`color` is a group. `color.primary` is a token. `color.neutral` is a group containing two tokens. The dot-path you'd use to reference a token (`color.neutral.900`) is just the chain of object keys.
Two practical rules for naming:
- Don't put `$` at the start of a name. It's reserved.
- Use kebab-case or camelCase consistently. The spec is case-sensitive and does not normalise names.
You can also lift a $type to the group level so child tokens inherit it:
```json
{
  "color": {
    "$type": "color",
    "primary": { "$value": "#2563EB" },
    "neutral-900": { "$value": "#0F172A" }
  }
}
```

Less repetition, same meaning.
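A parser resolves that inheritance by carrying the nearest ancestor's `$type` down the tree. A minimal sketch (hypothetical helper, not any tool's API):

```python
def collect(node, path=(), inherited_type=None):
    """Flatten a DTCG tree, resolving $type inherited from ancestor groups."""
    # A $type on this node overrides whatever the ancestors declared.
    group_type = node.get("$type", inherited_type)
    if "$value" in node:
        return {".".join(path): (node["$value"], group_type)}
    tokens = {}
    for key, child in node.items():
        if key.startswith("$"):  # reserved metadata, not a child entry
            continue
        tokens.update(collect(child, path + (key,), group_type))
    return tokens

tree = {
    "color": {
        "$type": "color",
        "primary": {"$value": "#2563EB"},
        "neutral-900": {"$value": "#0F172A"},
    }
}
# collect(tree) → {"color.primary": ("#2563EB", "color"),
#                  "color.neutral-900": ("#0F172A", "color")}
```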
## Token types you'll actually use
The spec defines a small set of primitive types and a slightly larger set of composite types.
Primitives:
- `color` (hex, rgb, hsl; alpha allowed)
- `dimension` (CSS lengths: `16px`, `1rem`, `0.5em`)
- `fontFamily` (a string, or an array of fallbacks)
- `fontWeight` (a number, or a name like `regular`, `bold`)
- `duration` (`200ms`, `0.3s`)
- `cubicBezier` (`[0.4, 0, 0.2, 1]`)
- `number` (a raw numeric value, useful for line-height ratios)
Composites (objects whose $value is itself a structured value):
- `typography`: bundles font family, size, weight, line height, letter spacing
- `shadow`: x, y, blur, spread, color (and arrays of these for layered shadows)
- `gradient`: stops with color and position
- `transition`: duration, delay, timing function
- `border`: color, style, width
- `strokeStyle`: dashed, dotted, etc.
A typography token, for example, looks like this:
```json
{
  "typography": {
    "heading-xl": {
      "$type": "typography",
      "$value": {
        "fontFamily": "{font.family.serif}",
        "fontSize": "48px",
        "fontWeight": 300,
        "lineHeight": "1.1",
        "letterSpacing": "-0.02em"
      }
    }
  }
}
```

That single token captures everything a heading needs. Your CSS pipeline can flatten it into custom properties, your Figma plugin can apply it as a text style, your AI tool can read it as one coherent unit.
## Aliases: the part that earns the format its keep
The single biggest reason DTCG matters is aliases. A token can reference another token by curly-brace path:
```json
{
  "color": {
    "primary-600": { "$value": "#2563EB", "$type": "color" },
    "accent": { "$value": "{color.primary-600}", "$type": "color" }
  }
}
```

`color.accent` is now an alias for `color.primary-600`. Change the underlying hex once, and every alias that points at it updates.
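Resolution is just "follow the curly-brace path until you hit a literal." A minimal resolver sketch (the function name and cycle guard are my own, not from the spec):

```python
import re

def resolve(tokens, ref, _seen=frozenset()):
    """Follow {dot.path} aliases until a literal value is reached."""
    node = tokens
    for part in ref.split("."):
        node = node[part]
    value = node["$value"]
    # An alias is a string wrapped entirely in curly braces.
    m = re.fullmatch(r"\{(.+)\}", value) if isinstance(value, str) else None
    if m:
        target = m.group(1)
        if target in _seen:  # guard against a -> b -> a loops
            raise ValueError(f"circular alias: {target}")
        return resolve(tokens, target, _seen | {target})
    return value

tokens = {
    "color": {
        "primary-600": {"$value": "#2563EB", "$type": "color"},
        "accent": {"$value": "{color.primary-600}", "$type": "color"},
    }
}
# resolve(tokens, "color.accent") → "#2563EB"
```

Chained aliases (semantic pointing at semantic pointing at a primitive) resolve the same way, which is what makes the tiered setup below cheap to maintain.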
This is how you build a real design system instead of a long list of hex codes. Your raw palette holds the literal values. A second tier of semantic tokens (background, foreground, accent, border, surface) points at the raw layer. Themes, dark mode, brand variants: all of them are just different aliases pointing at the same primitives.
The convention most teams settle on:
- Tier 1 → primitives (`color.blue.500`, `space.4`, `font.size.16`)
- Tier 2 → semantic (`color.accent`, `color.background`, `space.gutter`)
- Tier 3 → component (`button.primary.background`, `card.padding`)

Tier 1 changes rarely. Tier 2 carries the brand. Tier 3 is where components live. AI tools, surprisingly, get a lot of mileage out of tier 2: when Claude Design sees a token called `background-muted`, it knows what to reach for in a way that `neutral-50` never quite communicates.
## A real, complete example
Here's a small but complete tokens.json showing primitives, semantics, a typography composite, and aliases working together.
```json
{
  "$schema": "https://schemas.tokens.dtcg.org/v1/tokens.schema.json",
  "color": {
    "$type": "color",
    "blue-600": { "$value": "#2563EB" },
    "blue-700": { "$value": "#1D4ED8" },
    "neutral-50": { "$value": "#F8FAFC" },
    "neutral-900": { "$value": "#0F172A" },
    "background": { "$value": "{color.neutral-50}" },
    "foreground": { "$value": "{color.neutral-900}" },
    "accent": { "$value": "{color.blue-600}" },
    "accent-hover": { "$value": "{color.blue-700}" }
  },
  "font": {
    "family": {
      "$type": "fontFamily",
      "serif": { "$value": ["Source Serif 4", "Georgia", "serif"] },
      "sans": { "$value": ["Inter", "system-ui", "sans-serif"] }
    }
  },
  "typography": {
    "$type": "typography",
    "heading-xl": {
      "$value": {
        "fontFamily": "{font.family.serif}",
        "fontSize": "48px",
        "fontWeight": 300,
        "lineHeight": "1.1"
      }
    },
    "body": {
      "$value": {
        "fontFamily": "{font.family.sans}",
        "fontSize": "16px",
        "fontWeight": 400,
        "lineHeight": "1.6"
      }
    }
  },
  "radius": {
    "$type": "dimension",
    "sm": { "$value": "8px" },
    "md": { "$value": "16px" },
    "full": { "$value": "9999px" }
  }
}
```

That's a working brand foundation in under fifty lines. Add a couple of shadow tokens, a duration scale for motion, and you have something most tools will accept directly.
## How tokens get into code
Authoring a tokens.json is half the job. The other half is turning it into the formats your stack actually consumes: CSS custom properties, a Tailwind config, a JS object, an iOS plist, an Android resource file.
The dominant tool here is Style Dictionary, Amazon's open-source token transformer. You feed it a DTCG file, configure platforms, and it writes everything else. A typical setup outputs:
- `tokens.css` (custom properties under `:root`)
- `tokens.ts` (a typed JS object for use in code)
- `tailwind.config.js` (extending the Tailwind theme)
Other production options are Cobalt (a faster, opinionated alternative built on the DTCG spec) and Terrazzo (a newer compiler with a cleaner plugin API). All of them ingest DTCG. The choice mostly comes down to ergonomics.
The point is: you write the brand once in DTCG, and your build step fans it out to every platform. When the brand changes, you change one file.
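As a rough sketch of what the CSS leg of that fan-out does conceptually (a toy flattener, not Style Dictionary's actual API; real transformers also resolve aliases and rename per platform):

```python
def to_css(node, path=()):
    """Flatten a DTCG tree into CSS custom property declarations.

    color.accent becomes --color-accent; group keys become path segments.
    """
    lines = []
    if "$value" in node:
        lines.append(f"  --{'-'.join(path)}: {node['$value']};")
        return lines
    for key, child in node.items():
        if key.startswith("$"):  # skip $type, $schema, and other metadata
            continue
        lines.extend(to_css(child, path + (key,)))
    return lines

tokens = {"radius": {"$type": "dimension",
                     "sm": {"$value": "8px"},
                     "md": {"$value": "16px"}}}
css = ":root {\n" + "\n".join(to_css(tokens)) + "\n}"
# produces:
# :root {
#   --radius-sm: 8px;
#   --radius-md: 16px;
# }
```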
## The composite type pitfall
The most common mistake I see in token files: people pack every typography decision into a flat list of primitives instead of using the typography composite type.
Bad:
```json
{
  "heading-xl-font": { "$value": "Source Serif 4", "$type": "fontFamily" },
  "heading-xl-size": { "$value": "48px", "$type": "dimension" },
  "heading-xl-weight": { "$value": 300, "$type": "fontWeight" },
  "heading-xl-line-height": { "$value": 1.1, "$type": "number" }
}
```

Good (the version above):
```json
{
  "heading-xl": {
    "$type": "typography",
    "$value": {
      "fontFamily": "{font.family.serif}",
      "fontSize": "48px",
      "fontWeight": 300,
      "lineHeight": "1.1"
    }
  }
}
```

Why does it matter? Because a tool reading the bad version has no idea those four tokens belong together. Figma's text styles, Style Dictionary's typography transforms, and AI tools all want the bundled version. The composite type encodes the relationship the spec was designed to capture.
The same logic applies to shadow, border, gradient, transition. If the values belong together as a single design decision, use the composite type.
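To see why the bundled version is easier on tools, here's a minimal sketch of turning one typography composite into CSS declarations in a single pass. The property-name mapping table is my own assumption for illustration, not part of the spec:

```python
# Map DTCG typography sub-properties to CSS declaration names (assumed mapping).
CSS_NAMES = {
    "fontFamily": "font-family",
    "fontSize": "font-size",
    "fontWeight": "font-weight",
    "lineHeight": "line-height",
    "letterSpacing": "letter-spacing",
}

def typography_to_css(token):
    """Turn one typography composite into a list of CSS declarations."""
    assert token.get("$type") == "typography"
    # One token, one loop: the relationship between the sub-values is
    # already encoded, so no name-matching heuristics are needed.
    return [f"{CSS_NAMES[k]}: {v};" for k, v in token["$value"].items()]

heading_xl = {
    "$type": "typography",
    "$value": {"fontFamily": "Georgia, serif",
               "fontSize": "48px",
               "fontWeight": 300},
}
# typography_to_css(heading_xl)
# → ["font-family: Georgia, serif;", "font-size: 48px;", "font-weight: 300;"]
```

With the flat "bad" version, the same output would require guessing from `heading-xl-*` name prefixes, which is exactly the tool-specific magic the format exists to remove.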
## Where the spec is still moving
Honest section. The DTCG spec is a draft, not a finished standard. There are a handful of things that are still settling:
- Modes (light/dark, brand variants). The spec has a draft proposal, but tools have shipped their own conventions in the meantime. Tokens Studio uses sets. Style Dictionary uses themes. The draft `$mode` proposal will probably win, but right now you'll see all three.
- Math expressions. You cannot natively write `space.4 * 2` in a token. Some tools allow it via extensions. The spec is debating whether to bless it.
- Animation tokens beyond duration and easing. Springs, keyframes, and complex motion don't have first-class types yet.
- Token aliasing across files. Cross-file references work in most tools, but the spec is still nailing down the exact semantics.
None of these block you from shipping a real token system. They do mean you should keep your primary token file simple and lean on tool extensions only when you have to.
## Why this matters for AI tools
Pulling the thread back to the Taste Profile angle: AI design tools are converging on DTCG specifically because it's the only format with momentum. Claude Design's Skills can ingest a tokens.css generated from a DTCG file. Figma Make and v0 can read tokens. Cursor and Claude Code happily consume any JSON your prompt points at, but if you want a portable brand that survives a tool migration, DTCG is the format that earns the bet.
A Taste Profile ships with a DTCG-conformant tokens.json, a generated tokens.css, a Tailwind config, and the DESIGN.md that wraps the prose around them. The token file is interchangeable: drop it into Style Dictionary, into Cobalt, into Tokens Studio, into a Figma sync, and the same values come out the other side.
That portability is the entire reason to write tokens in DTCG instead of inventing your own shape. A token file is a long-lived artifact. You will move tools. You will swap pipelines. The format you author in is the one decision that locks in for years.
## Where to go next
If you want to write a token file today:
- Open the DTCG draft spec and skim the section on token types. It's surprisingly readable.
- Look at a couple of real public examples: GitHub Primer, Salesforce Lightning, Adobe Spectrum.
- Pick a transformer (Style Dictionary or Cobalt) and run it locally with a tiny tokens file.
Then read the rest of the system around it. Tokens are the values. The reasoning lives in DESIGN.md, and the install path into AI tools lives in your Skill. Tokens without narrative produce a palette. Tokens with narrative produce a brand.
Either way, write them once, in DTCG, and let the tools come to you.
