Websites declare capabilities. Agents that can follow them, do. The rest reveal themselves.
This is what AI agents deal with. Every website redesign. Every A/B test. Every dynamic load. The agent parses HTML, guesses at selectors, and retries when it fails.
This is how browser copilots, shopping agents, and every other automation tool work right now. It doesn't scale. It can't.
IntentQL is a simple proposal: websites publish an /agent.json file declaring what they can do. Agents read the contract and call stable endpoints. The UI becomes irrelevant to automation.
But here's what we learned from testing: contracts don't enforce compliance, they filter for it. Agents that read the contract respected every constraint. Agents that couldn't fetch it failed gracefully or revealed themselves as incapable.
That's the real value. Not universal compliance. Visible capability.
A machine-readable file declaring what the site can do. Intents, endpoints, parameters, constraints.
GET /agent.json
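A minimal sketch of what such a contract could contain. The field names here (`intents`, `parameters`, `constraints`) and the parameter-type notation are assumptions for illustration, not a published schema:

```python
import json

# A hypothetical /agent.json contract. The shape is illustrative;
# IntentQL does not mandate these exact field names.
agent_contract = {
    "version": "0.1",
    "intents": {
        "search_products": {
            "endpoint": "/api/products",
            "method": "GET",
            # "number?" marks an optional parameter in this sketch.
            "parameters": {"q": "string", "max_price": "number?"},
            "constraints": {"rate_limit_per_minute": 30},
        },
        "check_stock": {
            "endpoint": "/api/products/{id}/stock",
            "method": "GET",
            "parameters": {"id": "string"},
        },
    },
}

# The server would serve this as the body of GET /agent.json.
print(json.dumps(agent_contract, indent=2))
```

Everything an agent needs is in the declaration: which intents exist, where to call them, what they accept, and what limits apply.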
Normal REST endpoints that do the work. Search products, check stock, get details. No DOM required.
GET /api/products
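On the agent side, resolving an intent is plain string work against the contract, no DOM in sight. A sketch, assuming the hypothetical `intents`/`endpoint`/`method` contract shape above:

```python
from urllib.parse import urlencode

def build_request(contract: dict, intent: str, args: dict) -> tuple[str, str]:
    """Resolve a declared intent into a concrete HTTP request.

    Returns (method, url). The contract shape (an `intents` map with
    `endpoint` and `method` fields) is an assumption for illustration.
    """
    spec = contract["intents"][intent]
    url = spec["endpoint"]
    query = {}
    for key, value in args.items():
        placeholder = "{" + key + "}"
        if placeholder in url:
            # Fill declared path parameters like {id}.
            url = url.replace(placeholder, str(value))
        else:
            # Everything else becomes the query string.
            query[key] = value
    if query:
        url += "?" + urlencode(query)
    return spec.get("method", "GET"), url

contract = {
    "intents": {
        "search_products": {"endpoint": "/api/products", "method": "GET"},
    }
}
print(build_request(contract, "search_products", {"q": "mug"}))
# ('GET', '/api/products?q=mug')
```

If the contract changes, the agent re-reads it. If the UI changes, the agent never notices.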
The website looks however you want. Redesign freely. Agents don't care — they're not looking at it.
/* whatever */
Clear declarations instead of guessing. Agents that read contracts succeed. Agents that don't, fail visibly. Know which is which.
See which agents can follow instructions — and which can't. Audit your agent-readiness. Redesign your UI without breaking capable integrations.
Contracts, not heuristics. Audit trails showing what was declared and what was attempted. The kind of evidence procurement and legal require.
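An audit entry only needs two things side by side: what the contract declared and what the agent attempted. A sketch with illustrative field names, assuming the hypothetical `parameters` declaration used above:

```python
import json
from datetime import datetime, timezone

def audit_record(intent: str, declared: dict, attempted: dict) -> str:
    """Emit one audit-trail entry comparing declaration to attempt.

    `declared` is the intent's entry from the contract; `attempted`
    is the arguments the agent actually sent. Field names are
    illustrative, not a spec.
    """
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "intent": intent,
        "declared": declared,
        "attempted": attempted,
        # Flag any argument the contract never declared.
        "within_contract": all(
            key in declared.get("parameters", {}) for key in attempted
        ),
    }
    return json.dumps(entry)
```

One line per call, and the question "did the agent stay inside the contract?" has a yes/no answer with evidence attached.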
The only question is whether it's open or proprietary.