The full math behind every number we report — NVS, Citation Coverage, Sentiment Index, Authority, Quality Gate, Grade Scales. No black boxes.
VECTORY ENGINE v35 · UPDATED 2026‑03‑30

A composite 0–100 measure of how often a brand is cited, how favorably it is framed, and how strongly it is authority‑backed across the AI engines we track.
| Component | Weight | What it measures | Data source |
|---|---|---|---|
| Citation Coverage | | Fraction of target queries where the brand is mentioned by AI engines | SONAR Observations |
| Sentiment Index | | Tone of mentions (1.0 = positive, 0.5 = neutral, 0.0 = negative) | LLM Classification |
| Authority Score | | Share of authoritative sources (Forbes, .gov, .edu, etc.) among citing sources | Source Influence Graph |
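
As a worked example, here is a minimal sketch of how the three components could roll up into the composite. The 50/30/20 weights are illustrative placeholders, not the engine's published values; all inputs are assumed normalized to 0.0–1.0.

```python
# Minimal NVS roll-up. The 50/30/20 weights are assumed for this sketch,
# not the engine's published values; all inputs are normalized to 0.0-1.0.
W_COVERAGE, W_SENTIMENT, W_AUTHORITY = 0.5, 0.3, 0.2  # assumed weights

def nvs(citation_coverage: float, sentiment_index: float, authority_score: float) -> float:
    composite = (
        W_COVERAGE * citation_coverage
        + W_SENTIMENT * sentiment_index
        + W_AUTHORITY * authority_score
    )
    return round(100 * composite, 1)  # scale to the 0-100 reporting range
```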
A 0–100 composite that decides whether a generated page is auto‑deployable, needs a small edit, or must be regenerated. Combines positive signals with deterministic penalties.
| Guard | Weight | What it evaluates | Deductions |
|---|---|---|---|
| Factual Accuracy | | Do claims match site ground truth? No hallucinations? | −5…15 per hallucination; −3…8 per unsupported claim |
| Brand Voice | | Does content match brand tone? No filler language? | −5…10 per tone mismatch; −2…5 per filler phrase |
| Citation Readiness | | RAG retrievability — would AI engines cite this content? | −3…8 per vague/generic section |
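
A minimal sketch of how the guard deductions could accumulate, assuming each page starts at 100 and every detected issue subtracts a value inside the ranges above. The severity interpolation is an assumption for illustration, not published behavior.

```python
# Sketch of the deduction model: each page starts at 100 and every detected
# issue subtracts a value inside the ranges in the table above. The severity
# interpolation (0.0 = mildest, 1.0 = worst case) is an assumption.
ISSUE_DEDUCTIONS = {
    "hallucination": (5, 15),
    "unsupported_claim": (3, 8),
    "tone_mismatch": (5, 10),
    "filler_phrase": (2, 5),
    "vague_section": (3, 8),
}

def gate_base_score(issues: list[tuple[str, float]]) -> float:
    """issues: (issue_type, severity in 0.0-1.0) pairs found by the LLM guards."""
    score = 100.0
    for issue_type, severity in issues:
        lo, hi = ISSUE_DEDUCTIONS[issue_type]
        score -= lo + severity * (hi - lo)
    return max(score, 0.0)
```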
Applied after the LLM evaluation. Calibrated against analysis of leaked system prompts from GPT‑5.2, Gemini 3, Perplexity Comet, and Claude Opus 4.
| Penalty | Range | Triggers | Engine impact |
|---|---|---|---|
| Anti‑Pattern | 0.00 – 0.15 | Hedge phrases (“It’s important to note”), buzzwords (“leading provider”, “world‑class”), sycophancy patterns | All engines lower citation probability for these patterns |
| Freshness | 0.00 – 0.10 | Stale dates (year < 2024) in content body | GPT‑5.2: instability probability > 10% triggers web‑search re‑verification, bypassing the content |
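
One plausible reading of the penalty pass, sketched below: each detector returns a fraction inside the table's range, and the total is subtracted from the normalized gate score. The per‑hit increments, phrase lists, and the subtraction rule are all assumptions for illustration.

```python
import re

# Illustrative phrase lists; the real detectors are calibrated against
# engine system prompts and are not reproduced here.
BUZZWORDS = ("leading provider", "world-class")
HEDGES = ("it's important to note",)

def deterministic_penalties(text: str) -> float:
    lowered = text.lower()
    penalty = 0.0
    # Anti-pattern: assumed 0.03 per hit, capped at the table's 0.15 maximum.
    hits = sum(lowered.count(phrase) for phrase in BUZZWORDS + HEDGES)
    penalty += min(0.03 * hits, 0.15)
    # Freshness: assumed 0.02 per stale year (< 2024), capped at 0.10.
    stale = [y for y in re.findall(r"\b(?:19|20)\d{2}\b", text) if int(y) < 2024]
    penalty += min(0.02 * len(stale), 0.10)
    return penalty

def final_gate_score(llm_score: float, text: str) -> float:
    # Assumption: penalties subtract from the normalized 0.0-1.0 score.
    normalized = llm_score / 100.0
    return round(100 * max(normalized - deterministic_penalties(text), 0.0), 1)
```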
How often the brand domain shows up in AI engine responses, measured against every other domain that gets mentioned.
Counts how often the brand domain appears in AI engine responses across all SONAR observations. Competitors are all other domains mentioned, sorted by mention count (top 10 reported).
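
In code form, the metric reduces to a ratio over SONAR observations plus a ranked competitor tally. The `domains` field name is an assumption about the observation schema.

```python
from collections import Counter

def citation_coverage(observations: list[dict], brand_domain: str):
    """observations: SONAR rows; the 'domains' key (all domains an engine
    mentioned in one response) is an assumed schema, not documented."""
    if not observations:
        return 0.0, []
    mentioned = sum(1 for obs in observations if brand_domain in obs["domains"])
    coverage = mentioned / len(observations)
    # Competitors: every other mentioned domain, ranked by mention count.
    competitors = Counter(
        d for obs in observations for d in obs["domains"] if d != brand_domain
    )
    return coverage, competitors.most_common(10)  # top 10 reported
```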
Weighted influence of the sources that cite you — not just a count, but how trustworthy they are inside each engine’s ranking model.
The authoritative‑domain list (Forbes, .gov, .edu, and similar) is extended with niche‑specific domains injected from SONAR enrichment per vertical.
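
A sketch of the weighting idea, with illustrative tier weights; the real values come from the Source Influence Graph and the per‑vertical enrichment, not from this table.

```python
# Illustrative tier weights; the engine's real values live in the
# Source Influence Graph and the per-vertical SONAR enrichment.
AUTHORITY_WEIGHTS = {".gov": 1.0, ".edu": 1.0, "forbes.com": 0.9}
NICHE_WEIGHTS: dict[str, float] = {}  # injected per vertical

def authority_score(citing_domains: list[str]) -> float:
    """Weighted share of authoritative domains among all citing sources."""
    if not citing_domains:
        return 0.0
    weights = {**AUTHORITY_WEIGHTS, **NICHE_WEIGHTS}

    def weight(domain: str) -> float:
        for suffix, w in weights.items():
            if domain.endswith(suffix):
                return w
        return 0.0  # non-authoritative sources contribute nothing

    return sum(weight(d) for d in citing_domains) / len(citing_domains)
```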
A 0.0 – 1.0 score for how AI engines emotionally frame the brand. Built from per‑mention classification across every SONAR observation.
| Index range | Verdict |
|---|---|
| ≥ 0.8 | Very positive — brand well‑received by AI models |
| ≥ 0.6 | Positive — mostly favorable mentions |
| ≥ 0.4 | Neutral — mixed or factual mentions |
| ≥ 0.2 | Negative — unfavorable mentions detected |
| < 0.2 | Very negative — brand has reputation issues in AI output |
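
The roll‑up itself is simple once each mention carries one label; the simple‑mean aggregation and the neutral default for zero mentions below are assumptions for the sketch.

```python
# Per-mention classification rolled up into the 0.0-1.0 index. The label
# scores match the NVS table above; the simple-mean aggregation is assumed.
LABEL_SCORES = {"positive": 1.0, "neutral": 0.5, "negative": 0.0}

def sentiment_index(mention_labels: list[str]) -> float:
    if not mention_labels:
        return 0.5  # assumed default: no mentions reads as neutral
    return sum(LABEL_SCORES[label] for label in mention_labels) / len(mention_labels)

def sentiment_verdict(index: float) -> str:
    # Thresholds reproduce the verdict table above.
    for threshold, verdict in [(0.8, "Very positive"), (0.6, "Positive"),
                               (0.4, "Neutral"), (0.2, "Negative")]:
        if index >= threshold:
            return verdict
    return "Very negative"
```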
An iterative patcher that re‑tests every generated page against ~10 target queries, patches weak spots, and rolls back if a patch makes things worse.
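
The control loop, sketched with the scorer and patcher injected as callables. Both are hypothetical names: the real re‑test is the Quality Gate run against the ~10 target queries described above.

```python
from typing import Callable

def iterative_patch(
    page: str,
    target_queries: list[str],
    score: Callable[[str, list[str]], float],  # e.g. a Quality Gate re-test
    patch: Callable[[str, list[str]], str],    # rewrites the weakest section
    max_rounds: int = 5,                       # assumed round limit
) -> str:
    best = score(page, target_queries)
    for _ in range(max_rounds):
        candidate = patch(page, target_queries)
        candidate_score = score(candidate, target_queries)
        if candidate_score > best:
            page, best = candidate, candidate_score  # keep the improvement
        else:
            break  # rollback: discard the patch that made things worse
    return page
```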
Final output grades — the same scales used inside the dashboard, audit reports, and the Quality Gate itself. The first table grades NVS visibility; the second maps Quality Gate scores to deploy actions.
| Grade | Score | Verdict |
|---|---|---|
| A | ≥ 80 | Excellent — dominates AI visibility |
| B | ≥ 60 | Good — strong with room to grow |
| C | ≥ 40 | Average — visible but not dominant |
| D | ≥ 20 | Weak — significant gaps |
| F | < 20 | Critical — barely visible |
| Grade | Score | Action |
|---|---|---|
| A | ≥ 90 | Auto‑deploy ready |
| B | ≥ 75 | Deploy with minor review |
| C | ≥ 60 | Needs editing before deploy |
| D | ≥ 45 | Major rewrite required |
| F | < 45 | Regenerate from scratch |
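
Both scales reduce to the same threshold scan, which also makes the difference visible: the Quality Gate demands more for the same letter. A sketch:

```python
# Threshold tables transcribed from the two grade scales above. Scanning
# from the top and returning the first threshold the score clears
# reproduces the tables exactly; anything below the last threshold is F.
NVS_SCALE = [(80, "A"), (60, "B"), (40, "C"), (20, "D")]
GATE_SCALE = [(90, "A"), (75, "B"), (60, "C"), (45, "D")]

def grade(score: float, scale: list[tuple[int, str]]) -> str:
    for threshold, letter in scale:
        if score >= threshold:
            return letter
    return "F"

assert grade(82, NVS_SCALE) == "A"
assert grade(82, GATE_SCALE) == "B"  # same score, stricter deploy scale
```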
Get a no‑BS NVS audit, a SONAR run across 6 LLMs, and the prioritized fix list — all built on the math above.