
Scoring & Methodology

The full math behind every number we report — NVS, Citation Coverage, Sentiment Index, Authority, Quality Gate, Grade Scales. No black boxes.

VECTORY ENGINE v35 · UPDATED 2026‑03‑30
📑 Contents
01 NVS — Neuro Visibility Score
02 Content Quality Score
03 Deterministic Penalties
04 Share of Voice
05 Citation Authority
06 Sentiment Index
07 Quality Gate (Self‑Test Loop)
08 Grade Scales

🎯 NVS — Neuro Visibility Score

A composite 0–100 measure of how well a brand is cited, sentimentally treated, and authority‑backed across the AI engines we track.

Core Formula (0–100)

NVS = 0.50 × CitationCoverage × 100
    + 0.30 × SentimentIndex  × 100
    + 0.20 × AuthorityScore  × 100
All inputs are normalized to 0.0 – 1.0 before weighting. Final score is clamped to [0, 100] and rounded to 1 decimal.
| Component | Weight | What it measures | Data source |
|---|---|---|---|
| Citation Coverage | 50% | Fraction of target queries where the brand is mentioned by AI engines | SONAR Observations |
| Sentiment Index | 30% | Tone of mentions (1.0 = positive, 0.5 = neutral, 0.0 = negative) | LLM Classification |
| Authority Score | 20% | Share of authoritative sources (Forbes, .gov, .edu, etc.) among citing sources | Source Influence Graph |
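
For concreteness, here is the weighting as a minimal Python sketch. The function and example inputs are illustrative only, not a published VECTORY API.

def _clamp(x, lo=0.0, hi=1.0):
    return min(max(x, lo), hi)

def nvs(citation_coverage, sentiment_index, authority_score):
    """Neuro Visibility Score: weighted blend of three normalized inputs."""
    score = (0.50 * _clamp(citation_coverage) * 100
             + 0.30 * _clamp(sentiment_index) * 100
             + 0.20 * _clamp(authority_score) * 100)
    return round(_clamp(score, 0.0, 100.0), 1)   # clamp to [0, 100], 1 decimal

# Hypothetical brand: cited on 40% of queries, mildly positive tone,
# 25% of citing influence from authority domains
print(nvs(0.40, 0.65, 0.25))  # 44.5 -> Grade C on the NVS scale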

Content Quality Score

A 0–100 composite that decides whether a generated page is auto‑deployable, needs a small edit, or must be regenerated. Combines positive signals with deterministic penalties.

Composite Formula (0–100)

Composite = 0.35 × Factual_Accuracy
         + 0.25 × Brand_Voice
         + 0.40 × Citation_Readiness
         − Anti_Pattern_Penalty × 100
         − Freshness_Penalty    × 100
Sub‑scores are on a 0–100 scale; both penalties are 0.0–1.0 fractions, converted to point deductions by the ×100 factor.
| Guard | Weight | What it evaluates | Deductions |
|---|---|---|---|
| Factual Accuracy | 35% | Do claims match site ground truth? No hallucinations? | −5…15 per hallucination; −3…8 per unsupported claim |
| Brand Voice | 25% | Does content match brand tone? No filler language? | −5…10 per tone mismatch; −2…5 per filler phrase |
| Citation Readiness | 40% | RAG retrievability — would AI engines cite this content? | −3…8 per vague/generic section |
LLM Rounding Bias Fix: If all three sub‑scores land on multiples of 5 (e.g. 60 / 75 / 70), a deterministic ±1…3 jitter is applied using issue count and verdict length as the seed. This kills the unrealistic “round number” bias common with LLM evaluators.
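
Sketched in Python, composite plus jitter might look like the following. The exact seeding recipe (hashing issue count against verdict length) and the final clamp are our assumptions; only the seed inputs are documented above.

import hashlib

def composite(factual, voice, citation, anti_pattern, freshness,
              issue_count=0, verdict=""):
    score = 0.35 * factual + 0.25 * voice + 0.40 * citation
    # Rounding-bias fix: all three sub-scores on multiples of 5 -> apply a
    # deterministic +/-1..3 jitter, seeded from issue count + verdict length.
    # The hash-based recipe and sign rule below are illustrative assumptions.
    if all(s % 5 == 0 for s in (factual, voice, citation)):
        seed = int(hashlib.sha256(f"{issue_count}:{len(verdict)}".encode()).hexdigest(), 16)
        score += (seed % 3 + 1) * (1 if seed % 2 == 0 else -1)
    score -= anti_pattern * 100   # deterministic penalty, up to -15 points
    score -= freshness * 100      # deterministic penalty, up to -10 points
    return round(max(min(score, 100.0), 0.0), 1)  # clamp assumed, mirrors NVS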

⚠️ Deterministic Penalties

Applied after the LLM evaluation. Calibrated against analysis of leaked system prompts from GPT‑5.2, Gemini 3, Perplexity Comet, and Claude Opus 4.

| Penalty | Range | Triggers | Engine impact |
|---|---|---|---|
| Anti‑Pattern | 0.00 – 0.15 | Hedge phrases (“It’s important to note”), buzzwords (“leading provider”, “world‑class”), sycophancy patterns | All engines lower citation probability for these patterns |
| Freshness | 0.00 – 0.10 | Stale dates (year < 2024) in content body | GPT‑5.2: instability > 10% triggers web‑search re‑verification, bypassing the content |
// Example: content with 3 buzzwords + 2 stale dates
anti_pattern = 0.05 // 5% penalty
freshness    = 0.10 // 10% penalty
combined     = 0.15 // −15 points from composite

// Composite 72 → 72 − 15 = 57 (Grade: D)

⚔️ Share of Voice (SoV)

How often the brand domain shows up in AI engine responses, measured against every other domain that gets mentioned.

SoV% = brand_mentions / total_mentions × 100

Counts how often the brand domain appears in AI engine responses across all SONAR observations. Competitors are all other domains mentioned, sorted by mention count (top 10 reported).

⚡ SoV = 0% is common for new or unknown brands. It means no AI engine currently mentions the brand domain in response to target queries — the exact gap VECTORY exists to close.
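
A minimal sketch of the counting logic, assuming each SONAR observation exposes a list of mentioned domains (the mentioned_domains field name is ours, not a documented schema):

from collections import Counter

def share_of_voice(observations, brand_domain):
    # Count every domain mention across all SONAR observations.
    counts = Counter(d for obs in observations
                     for d in obs["mentioned_domains"])  # field name assumed
    total = sum(counts.values())
    sov = 100.0 * counts[brand_domain] / total if total else 0.0  # 0% if unseen
    # Competitors: every other mentioned domain, top 10 by mention count.
    competitors = [d for d, _ in counts.most_common() if d != brand_domain][:10]
    return sov, competitors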

🏛️ Citation Authority Score

Weighted influence of the sources that cite you — not just a count, but how trustworthy they are inside each engine’s ranking model.

authority_score = Σ authority_source_influence / Σ total_influence

// Weighted by influence, not raw count
// authority = source ∈ base authority domains OR niche‑specific domains

Base authority domains (universal)

bloomberg.com · forbes.com · reuters.com · wikipedia.org · hbr.org · github.com · .gov · .edu · trustpilot.com · g2.com

+ niche‑specific domains injected from SONAR enrichment per vertical
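
In code, the influence weighting might look like this; the (domain, influence) input shape is an assumption, standing in for whatever the Source Influence Graph actually emits:

BASE_AUTHORITY = {"bloomberg.com", "forbes.com", "reuters.com", "wikipedia.org",
                  "hbr.org", "github.com", "trustpilot.com", "g2.com"}

def authority_score(sources, niche_domains=frozenset()):
    """sources: list of (domain, influence) pairs. Weighted by influence, not raw count."""
    def is_authority(domain):
        return (domain in BASE_AUTHORITY or domain in niche_domains
                or domain.endswith((".gov", ".edu")))
    total = sum(influence for _, influence in sources)
    auth = sum(influence for domain, influence in sources if is_authority(domain))
    return auth / total if total else 0.0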

💬 Sentiment Index

A 0.0 – 1.0 score for how AI engines emotionally frame the brand. Built from per‑mention classification across every SONAR observation.

sentiment_index = (positive × 1.0 + neutral × 0.5) / total
| Index range | Verdict |
|---|---|
| ≥ 0.8 | Very positive — brand well‑received by AI models |
| ≥ 0.6 | Positive — mostly favorable mentions |
| ≥ 0.4 | Neutral — mixed or factual mentions |
| ≥ 0.2 | Negative — unfavorable mentions detected |
| < 0.2 | Very negative — brand has reputation issues in AI output |
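
The per-mention math is simple enough to show directly. Returning 0.0 for zero mentions is our assumption; the spec leaves that case undefined.

WEIGHTS = {"positive": 1.0, "neutral": 0.5, "negative": 0.0}

def sentiment_index(labels):
    """labels: per-mention classifications from the LLM, e.g. ["positive", "neutral"]."""
    if not labels:
        return 0.0   # assumption: no mentions -> bottom band
    return sum(WEIGHTS[l] for l in labels) / len(labels)

print(sentiment_index(["positive", "positive", "neutral", "negative"]))  # 0.625 -> Positive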

🔬 Quality Gate · Self‑Test Loop

An iterative patcher that re‑tests every generated page against ~10 target queries, patches weak spots, and rolls back if a patch makes things worse.

Iterative Content Patching

Self‑Test → Find Weak Queries → LLM Patch → Re‑Test → Compare & Track Best
threshold = 7 // out of ~10 target queries
max_iterations = 5

LOOP:
  if citeable_count < threshold:
    → generate patch sections for weak queries
    → merge + re‑test
    → track best_content (rollback if degradation)
    → check stop conditions
RETURN best_content

Stop conditions

The loop exits when citeable_count reaches the threshold (7 of ~10 target queries) or when max_iterations (5) patch passes have run.

Rollback strategy: The gate always tracks the best‑known content state. If a patch iteration causes degradation, it rolls back to the best state instead of keeping worsened content. Final output is always the best score achieved.
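
In runnable form the loop reduces to roughly the following; self_test, find_weak, llm_patch, and merge are placeholders for pipeline stages that are not published.

def quality_gate(content, self_test, find_weak, llm_patch, merge,
                 threshold=7, max_iterations=5):
    """Self-test loop with best-state tracking and rollback on degradation."""
    best_content, best_score = content, self_test(content)  # citeable count of ~10 queries
    for _ in range(max_iterations):
        if best_score >= threshold:       # stop: enough queries already citeable
            break
        candidate = merge(best_content, llm_patch(find_weak(best_content)))
        score = self_test(candidate)      # re-test the patched page
        if score > best_score:            # keep improvements only...
            best_content, best_score = candidate, score
        # ...otherwise fall through: implicit rollback to the best-known state
    return best_content                   # always the best score achieved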

🏆 Grade Scales

Final output grades — the same scale used inside the dashboard, audit reports, and the Quality Gate itself.

NVS Score Grades

| Grade | Score | Verdict |
|---|---|---|
| A | ≥ 80 | Excellent — dominates AI visibility |
| B | ≥ 60 | Good — strong with room to grow |
| C | ≥ 40 | Average — visible but not dominant |
| D | ≥ 20 | Weak — significant gaps |
| F | < 20 | Critical — barely visible |

Content Quality Grades

| Grade | Score | Action |
|---|---|---|
| A | ≥ 90 | Auto‑deploy ready |
| B | ≥ 75 | Deploy with minor review |
| C | ≥ 60 | Needs editing before deploy |
| D | ≥ 45 | Major rewrite required |
| F | < 45 | Regenerate from scratch |
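
Both scales are descending-threshold lookups. A small sketch, with the scale data copied from the tables above:

NVS_SCALE     = [("A", 80), ("B", 60), ("C", 40), ("D", 20)]
QUALITY_SCALE = [("A", 90), ("B", 75), ("C", 60), ("D", 45)]

def grade(score, scale):
    """Return the first grade whose floor the score clears; below all floors -> F."""
    for letter, floor in scale:
        if score >= floor:
            return letter
    return "F"

print(grade(57, QUALITY_SCALE))  # "D" -- matches the penalty walkthrough above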

Want this run on your brand?

Get a no‑BS NVS audit, a SONAR run across 6 LLMs, and the prioritized fix list — all built on the math above.

See the pipeline