Audit & Scoring Methodology
This page defines what we evaluate during a GEO / AIO audit, how each category is scored, and how the results map to an implementation plan.
Scoring Categories
Each category is evaluated independently and contributes to the overall readiness and visibility score.
Bot Accessibility
- robots directives (robots.txt, meta robots, X-Robots-Tag) and indexability
- HTTP status consistency (200/301/404/5xx)
- render behavior (server-side content availability)
- crawl traps, blocked assets, inconsistent caching
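The robots-directive portion of this check can be sketched with Python's standard-library robots.txt parser. The rules and paths below are hypothetical placeholders, not a recommended configuration; the point is simply that per-agent groups (e.g. a `GPTBot` group) override the `*` group, which is a common source of accidental blocking.

```python
from urllib import robotparser

# Hypothetical robots.txt content for illustration only.
ROBOTS_TXT = """\
User-agent: *
Disallow: /admin/

User-agent: GPTBot
Disallow: /private/
"""

def crawlable(agent: str, path: str) -> bool:
    """Return True if `agent` may fetch `path` under ROBOTS_TXT."""
    rp = robotparser.RobotFileParser()
    rp.parse(ROBOTS_TXT.splitlines())
    return rp.can_fetch(agent, path)

# Note: because GPTBot has its own group, the `*` rules do not
# apply to it -- GPTBot may fetch /admin/ but not /private/.
```

A real audit would fetch the live robots.txt and run every key URL against each AI crawler's user agent.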
Structured Data & Metadata
- schema coverage (Organization/Service/Product/FAQ/Article as applicable)
- entity completeness and accuracy
- metadata clarity (titles, descriptions, canonical tags)
- duplicate/conflicting markup removal
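For illustration, a minimal Organization JSON-LD block of the kind this coverage check looks for; every value here is a placeholder:

```json
{
  "@context": "https://schema.org",
  "@type": "Organization",
  "name": "Example Co",
  "url": "https://www.example.com",
  "telephone": "+1-555-010-0100",
  "sameAs": ["https://www.linkedin.com/company/example-co"]
}
```

Entity completeness means fields like `name`, `url`, and `sameAs` are present, accurate, and consistent with the visible page content, not merely syntactically valid.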
Content Discoverability
- internal linking patterns and crawl depth
- information architecture and topic clustering
- answer-first formatting and chunking
- content gaps for high-intent questions
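Crawl depth, as used above, can be measured as the minimum number of clicks from the homepage. A minimal sketch, assuming a hypothetical internal-link graph rather than a live crawl:

```python
from collections import deque

# Hypothetical internal-link graph: page -> pages it links to.
LINKS = {
    "/": ["/services", "/about"],
    "/services": ["/services/audit"],
    "/about": [],
    "/services/audit": ["/contact"],
    "/contact": [],
}

def crawl_depths(start: str = "/") -> dict:
    """BFS from the start page; depth = minimum clicks to reach a page."""
    depths = {start: 0}
    queue = deque([start])
    while queue:
        page = queue.popleft()
        for target in LINKS.get(page, []):
            if target not in depths:
                depths[target] = depths[page] + 1
                queue.append(target)
    return depths
```

Pages sitting at high depth (or unreachable from the homepage entirely) are exactly the discoverability gaps this category flags.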
Consistency & Trust Signals
- business identity consistency (name, location, phone, domain)
- authorship and editorial signals
- citations/references where appropriate
- cross-source corroboration and disambiguation
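Identity consistency checks reduce to normalizing each field and comparing it across sources. A sketch under assumed data, with hypothetical listings and deliberately simple normalizers:

```python
import re

# Hypothetical listings gathered from different sources; the
# "maps" phone number is intentionally inconsistent.
LISTINGS = [
    {"source": "website",   "name": "Acme Plumbing",    "phone": "(555) 010-0100"},
    {"source": "directory", "name": "Acme Plumbing",    "phone": "555-010-0100"},
    {"source": "maps",      "name": "ACME Plumbing LLC", "phone": "5550100199"},
]

def norm_phone(p: str) -> str:
    return re.sub(r"\D", "", p)  # digits only

def norm_name(n: str) -> str:
    return re.sub(r"\b(llc|inc)\b", "", n.lower()).strip()  # drop legal suffixes

def inconsistencies(listings) -> list:
    """Return the fields whose normalized values differ across sources."""
    issues = []
    for field, norm in (("name", norm_name), ("phone", norm_phone)):
        if len({norm(item[field]) for item in listings}) > 1:
            issues.append(field)
    return issues
```

Here the names agree after normalization, but the phone numbers do not, so the audit would flag the phone field for correction at the source.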
Technical Reliability
- performance and uptime indicators
- canonicalization & duplication controls
- cache correctness (bot vs human parity)
- security headers and safe rendering behavior
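Bot-vs-human cache parity can be tested by fingerprinting the visible text of two snapshots of the same URL (one fetched with a bot user agent, one with a browser user agent) so that benign differences such as whitespace do not raise false alarms. The snapshots below are hypothetical:

```python
import hashlib
import re

def content_fingerprint(html: str) -> str:
    """Hash the visible text only, ignoring scripts, tags, and whitespace."""
    text = re.sub(r"<script.*?</script>", "", html, flags=re.S | re.I)
    text = re.sub(r"<[^>]+>", " ", text)
    text = re.sub(r"\s+", " ", text).strip().lower()
    return hashlib.sha256(text.encode()).hexdigest()

# Hypothetical snapshots of one URL: bot UA vs. browser UA.
bot_html = "<html><body><h1>Pricing</h1><p>Plans start at $49.</p></body></html>"
user_html = "<html><body>\n<h1>Pricing</h1>\n<p>Plans start at $49.</p></body></html>"

parity_ok = content_fingerprint(bot_html) == content_fingerprint(user_html)
```

Mismatched fingerprints on important pages indicate a caching layer or rendering path serving bots different content, which is a critical blocker (see below).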
Critical Blockers
These issues can severely reduce AI visibility until resolved:
- Key pages blocked from crawling
- Widespread 4xx/5xx errors on important content
- Inconsistent canonicals producing duplicates
- Broken or misleading structured data
- Bots served different content than human users due to caching rules
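"Widespread" 4xx/5xx errors can be quantified as a simple error rate over crawled URLs. A minimal sketch over hypothetical crawl results:

```python
# Hypothetical crawl results: URL -> final HTTP status after redirects.
CRAWL = {
    "/": 200,
    "/services": 200,
    "/old-page": 404,
    "/blog/post-1": 500,
    "/blog/post-2": 500,
}

def error_rate(results: dict) -> float:
    """Fraction of crawled URLs returning a 4xx or 5xx status."""
    errors = sum(1 for status in results.values() if status >= 400)
    return errors / len(results)
```

Any threshold for "widespread" is a judgment call per site; the useful output of this step is the list of failing URLs, weighted by how important those pages are.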
Validation & Before/After Testing
- Recrawl verification (status codes, indexability, crawl depth)
- Structured data testing and schema validation
- Content clarity checks (definition blocks, entity attributes)
- Spot checks for external consistency signals (as applicable)
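The structured-data recheck in the steps above can be sketched as extracting JSON-LD blocks from a page and verifying required fields. The regex-based extraction and the `required` field list are simplifying assumptions for illustration; a production validator would use a full HTML parser and per-type schema rules:

```python
import json
import re

def extract_jsonld(html: str) -> list:
    """Pull and parse all JSON-LD script blocks from an HTML page."""
    blocks = re.findall(
        r'<script[^>]*type="application/ld\+json"[^>]*>(.*?)</script>',
        html, flags=re.S | re.I)
    return [json.loads(b) for b in blocks]

def missing_fields(doc: dict, required=("@type", "name", "url")) -> list:
    """Return the required keys absent from a parsed JSON-LD document."""
    return [key for key in required if key not in doc]

# Hypothetical page fragment with an incomplete Organization block.
HTML = '''<script type="application/ld+json">
{"@context": "https://schema.org", "@type": "Organization", "name": "Acme"}
</script>'''
```

Running both before and after the fix makes the improvement measurable: the "before" snapshot here would report `url` as missing, and the "after" snapshot should report nothing.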
