I'm not a developer — I came up with this idea and built an interactive
prototype with Claude's help. Posting here because this needs real engineers
to take it somewhere.
The idea: a declare-ai.json file any creator or platform can publish alongside
content — declaring what percentage was written, coded, illustrated, or
researched by AI vs humans, which tools were used, and who the human
contributors are. An embeddable widget displays it as a collapsible pie chart.
A browser extension auto-detects it on any page. A community forum handles
disputes.
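For concreteness, here's a sketch of what such a file could contain, plus a minimal validity check a widget or extension might run. Every field name here is my guess for illustration, not the project's actual schema (that lives in the demo briefing):

```python
# Hypothetical declare-ai.json contents, expressed as a Python dict,
# with an illustrative sanity check. All field names are assumptions.
import json

declaration = {
    "version": "0.1",
    "contributions": {
        "writing":      {"ai": 60, "human": 40},
        "code":         {"ai": 90, "human": 10},
        "illustration": {"ai": 0,  "human": 100},
    },
    "tools": ["Claude"],
    "humans": [{"name": "Jane Doe", "role": "editor"}],
}

def is_valid(doc: dict) -> bool:
    """Check that each category's ai + human shares sum to 100."""
    return all(
        c.get("ai", 0) + c.get("human", 0) == 100
        for c in doc.get("contributions", {}).values()
    )

print(is_valid(declaration))        # True
print(json.dumps(declaration))      # the serialized form a page would publish
```

A browser extension could then fetch `<origin>/declare-ai.json` the same way crawlers fetch `robots.txt` and render whatever validates.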
Think Creative Commons, but for AI provenance.
The demo is a full interactive developer briefing — architecture, JSON schema,
forum mockup with a dispute thread example, tech stack, and phased roadmap.
The widget on the page declares itself.
MIT licensed. Looking for developers who want to own this with me.
Genuinely open to all feedback — including "this already exists and here's
why it won't work."
Interesting concept. The biggest challenge I see isn't the JSON schema or the widget (those are straightforward); it's incentive alignment.
Why would creators or platforms voluntarily disclose AI usage if it could reduce perceived authenticity or trust?
For Creative Commons, the incentive was legal clarity and distribution benefits. For something like this to become a real standard, there probably needs to be either:
- Regulatory pressure
- Platform-level enforcement
- Search ranking or discoverability benefits
Otherwise, it risks becoming an “honesty-only” protocol that mainly compliant actors adopt, while bad actors ignore it.
Have you thought about what mechanism could drive adoption beyond goodwill?