AI Visibility Journal & 0-4 Scoring System
If you can’t measure it, you can’t improve it. Traditional SEO gave us dashboards full of rankings, traffic, and conversions. AI search gives us silence. Your content either appears in an answer or it doesn’t, and most of the time you have no idea which happened. The AI Visibility Journal changes that. It’s a simple tracking matrix that makes your retrieval performance visible, comparable, and actionable over time.
Building Your AI Visibility Journal
Each row represents a single test. The columns capture enough context that another teammate could reproduce your work and reach the same conclusion.
At minimum, include the following columns; a sketch of one possible row schema follows the list:
- Prompt (verbatim): The exact wording you used.
- Intent tag: A short label explaining why the prompt matters (e.g., “core purchase,” “integration setup,” “competitive compare”).
- Date and time: Note the timezone. If you average multiple runs, store each run individually, or store the averaged score along with the run count.
- Platform and mode: E.g., “Perplexity Pro,” “ChatGPT (GPT-4),” “Gemini.” Note the model version if the platform surfaces it.
- Answer context notes: Brief observations such as “long-form synthesis,” “list-style answer,” “follow-up questions asked.”
- Citations observed: List any sources the system shows.
- Competitor presence: Names and whether they were mentioned or linked.
- Your visibility score (0-4): The standardized measure below.
- Change log reference: A pointer to any site, competitor, or platform changes that may explain movement.
- Archive link: Where you stored the answer text or screenshot for reference.
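For teams that prefer a scriptable log over a spreadsheet, here is a minimal sketch of one possible row structure in Python. The field names are illustrative assumptions, not a required standard; keep whatever maps cleanly to your own columns.

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class JournalRow:
    """One visibility test. Field names are illustrative, not a standard."""
    prompt: str                       # verbatim prompt text
    intent_tag: str                   # e.g. "core purchase", "integration setup"
    tested_at: datetime               # store timezone-aware timestamps
    platform: str                     # e.g. "Perplexity Pro", "Gemini"
    answer_notes: str = ""            # "list-style answer", "follow-ups asked", ...
    citations: list[str] = field(default_factory=list)    # sources the system showed
    competitors: list[str] = field(default_factory=list)  # names mentioned or linked
    score: int = 0                    # the 0-4 visibility score defined below
    change_log_ref: str = ""          # pointer to site/competitor/platform changes
    archive_link: str = ""            # where the answer text or screenshot lives
```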
The 0-4 Scoring System
AI systems can place your content in one of four visibility states: Cited, Mentioned, Paraphrased, or Omitted. This scoring system converts those states into a five-point numeric scale (Cited splits into link-only and link-plus-mention) that’s easy to track across teams and over time; a scoring sketch follows the list.
- 0 points: No sign of your brand or content. Not mentioned, not linked, not paraphrased.
- 1 point: Paraphrased retrieval only. Your content is clearly used, but there’s no brand mention or link.
- 2 points: Mentioned by name in the answer, but not linked.
- 3 points: Linked directly, but not mentioned in the text of the answer.
- 4 points: Both mentioned by name and linked in the same answer.
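The mapping from observed states to points is mechanical, which makes it worth automating so every teammate scores identically. A minimal sketch, assuming you record three booleans per answer (the function name is illustrative):

```python
def visibility_score(mentioned: bool, linked: bool, paraphrased: bool) -> int:
    """Map observed visibility states onto the 0-4 scale."""
    if mentioned and linked:
        return 4   # named and linked in the same answer
    if linked:
        return 3   # linked directly, but not named in the answer text
    if mentioned:
        return 2   # named, but not linked
    if paraphrased:
        return 1   # content clearly used, with no name or link
    return 0       # no sign of your brand or content
```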
Scoring Guidelines
When you suspect paraphrase, look for unique phrases, data points, or examples that you can reasonably trace to your own material: product-specific terminology, proprietary frameworks, distinctive phrasing. If you aren’t confident, leave it at 0 and add a comment rather than forcing a 1. False positives will pollute your trend line.
When an answer mentions you and links to a third party (e.g., a news article covering your work), score the mention as a 2 and note the external link in “Citations observed.”
If the answer links to you indirectly via a third-party aggregator or cached copy, treat it as a 3 but add a note that the link is indirect. That nuance will inform your optimization plan.
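One way to keep paraphrase judgments consistent is to check the archived answer against a short list of signature phrases you maintain per page. A rough heuristic sketch; the phrases and the two-hit rule are assumptions meant to support a human judgment, not replace it:

```python
def paraphrase_signals(answer_text: str, signature_phrases: list[str]) -> list[str]:
    """Return which of your distinctive phrases appear in the answer text."""
    text = answer_text.lower()
    return [p for p in signature_phrases if p.lower() in text]

# The phrases below are stand-ins for your own proprietary terminology.
answer = "The vendor recommends a chunk-first audit before restructuring pages."
hits = paraphrase_signals(answer, ["chunk-first audit", "retrieval trust ladder"])
# Two or more independent hits might justify a 1; if unsure, keep 0 and comment.
```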
Reading the Patterns
As you collect rows, two layers of insight emerge.
Horizontally: You’ll see platform variance. Perhaps you average 3-4 on Perplexity for high-intent prompts but sit at 1-2 on Gemini. That suggests your structure and trust signals are aligned with one retrieval layer but not the other.
Vertically: You’ll see topic variance within a single platform. Strong scores on “how-to” prompts but weak scores on “competitive comparisons” point to content and evidence gaps in evaluative contexts.
Both views are valuable. Together they reveal where to invest your next unit of effort.
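If the journal lives in a CSV with the columns above, both views fall out of a single pivot. A sketch using pandas; the file and column names assume the row schema sketched earlier:

```python
import pandas as pd

journal = pd.read_csv("visibility_journal.csv", parse_dates=["tested_at"])

# Rows = intent tag, columns = platform, values = mean 0-4 score.
# Read across a row for platform variance; read down a column for topic variance.
pivot = journal.pivot_table(index="intent_tag", columns="platform",
                            values="score", aggfunc="mean")
print(pivot.round(2))
```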
Making It Executive-Ready
Group prompts into business-relevant clusters: product lines, customer journeys, regions. Average scores within each cluster. A rising average in a high-value cluster is a simple way to demonstrate progress without dragging leadership through every row. A flat or falling average flags a risk that’s easy to understand: “We’re losing visibility in the evaluation stage for our flagship product.”
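A sketch of that rollup, assuming each row can be mapped to a business cluster and carries a test date (the mapping below is illustrative):

```python
import pandas as pd

journal = pd.read_csv("visibility_journal.csv", parse_dates=["tested_at"])

# Illustrative mapping from intent tags to business clusters; use your own.
cluster_map = {
    "core purchase": "Flagship product",
    "competitive compare": "Flagship product",
    "integration setup": "Developer journey",
}
journal["cluster"] = journal["intent_tag"].map(cluster_map)

# Mean score per cluster per month: one trend line per cluster for leadership.
trend = (journal
         .groupby(["cluster", journal["tested_at"].dt.to_period("M")])["score"]
         .mean()
         .unstack("cluster"))
print(trend.round(2))
```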
Using the Journal as a Management Tool
If a prompt hovers at 1 for two consecutive cycles, that’s a signal to strengthen attribution cues: add clearer citations, tighten chunk boundaries, or improve schema so the system is more confident naming and linking you.
If a prompt turns into a 2 (mention without link), make your canonical source and its linkable evidence unmistakable. Think explicit author and organization identity, robust reference sections, and machine-friendly anchor cues.
Moving from 3 (link without mention) to 4 (link and mention) often requires brand clarity in headings and opening sentences. The synthesis layer tends to lift what it can easily quote by name.
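These cadence rules are easy to encode. A sketch that flags prompts stuck at a given score for consecutive cycles, assuming one journal row per prompt per cycle:

```python
import pandas as pd

def stuck_prompts(journal: pd.DataFrame, at_score: int = 1, cycles: int = 2) -> list[str]:
    """Return prompts whose most recent `cycles` scores all equal `at_score`."""
    flagged = []
    for prompt, rows in journal.sort_values("tested_at").groupby("prompt"):
        recent = rows["score"].tail(cycles)
        if len(recent) == cycles and (recent == at_score).all():
            flagged.append(prompt)
    return flagged

# Example: prompts stuck at 1 for two cycles become the attribution-cue backlog.
```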
When you use the journal consistently, the 0-4 score stops being an abstract metric and becomes a steering wheel. It tells you where you stand, how fast you’re moving, and which adjustments will produce the next visible win.
This framework is explored in depth in The Machine Layer: How to Stay Visible and Trusted in the Age of AI Search, available on Amazon.
