SNTNL // Sentinel

Type: Alignment Engine  |  Version: v2.1 (Beta)  |  Stack: Python / OpenAI API

Python 3.10 | Puppeteer (Headless) | OpenAI API (GPT-4o) | Pinecone (Vector DB)

1. The Engineering Architecture

SNTNL is not a standard crawler. It simulates an Inference Engine to judge content exactly as an LLM would ingest it. The pipeline consists of three stages:

  1. Extraction Layer: A headless browser strips the DOM of CSS/JS noise, extracting only the semantic payload (Text + Schema).
  2. Vector Analysis: The content is embedded and compared against the target Query Intent Vector to measure "Cognitive Alignment."
  3. Adjudication Layer: An LLM agent acts as a judge, applying the SheepRank Rubric to score the entity's legitimacy.
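The Vector Analysis stage can be sketched as a cosine-similarity comparison between the page's content embedding and the Query Intent Vector. This is a minimal illustration, not the production pipeline: the function names are placeholders, and in practice the vectors would come from an embeddings endpoint rather than being passed in directly.

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def cognitive_alignment(content_vec: list[float], intent_vec: list[float]) -> float:
    """Map raw similarity from [-1, 1] onto a [0, 1] alignment score."""
    return (cosine_similarity(content_vec, intent_vec) + 1) / 2
```

Identical vectors score 1.0; orthogonal (unrelated) vectors score 0.5 after rescaling, which is what makes the metric usable as a bounded alignment signal.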

2. The Output (Sample Log)

SNTNL generates a structured JSON audit. Below is a sample output from a recent scan, demonstrating the rigorous criteria applied.

```
root@sntnl:~# ./audit --url=target.com --keyword="AI SEO"
// Initiating SheepRank Protocol v1...
// Extraction complete. Tokens: 1,420. Schema found: Person, Article.
```

```json
{
  "target_entity": "BlackSheep SEO",
  "sheeprank_score": 87,
  "breakdown": {
    "entity_coverage": {
      "score": 0.9,
      "status": "PASS",
      "note": "Entity explicitly defined in JSON-LD via DefinedTerm."
    },
    "cognitive_alignment": {
      "score": 0.75,
      "status": "WARN",
      "note": "Answer block found but exceeds 200 tokens. Risk of truncation."
    }
  },
  "recommendation": "Refactor H2 intro to < 60 words for RAG optimization."
}
```
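Because the audit is structured JSON, downstream tooling can act on it directly. A minimal sketch of flagging underperforming checks (the field names match the sample above; the 0.8 threshold is an illustrative assumption, not part of the rubric):

```python
import json

def failing_checks(audit_json: str, min_score: float = 0.8) -> list[str]:
    """Return the breakdown checks scoring below min_score (assumed threshold)."""
    audit = json.loads(audit_json)
    return [
        name
        for name, check in audit["breakdown"].items()
        if check["score"] < min_score
    ]

sample = json.dumps({
    "target_entity": "BlackSheep SEO",
    "sheeprank_score": 87,
    "breakdown": {
        "entity_coverage": {"score": 0.9, "status": "PASS"},
        "cognitive_alignment": {"score": 0.75, "status": "WARN"},
    },
})
print(failing_checks(sample))  # ['cognitive_alignment']
```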

3. The Scoring Matrix

Unlike subjective audits, SNTNL produces a deterministic score from three binary checks:
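Combining binary checks into a single deterministic score might look like the following sketch. The check names and weights here are illustrative assumptions, not the actual SheepRank rubric:

```python
def sheeprank_score(checks: dict[str, bool], weights: dict[str, int]) -> int:
    """Sum the weights of passing checks; same inputs always yield the same score."""
    return sum(weights[name] for name, passed in checks.items() if passed)

# Hypothetical checks and weights for illustration only.
weights = {"entity_coverage": 40, "cognitive_alignment": 30, "schema_integrity": 30}
checks = {"entity_coverage": True, "cognitive_alignment": True, "schema_integrity": False}
print(sheeprank_score(checks, weights))  # 70
```

Because each check is binary and the weights are fixed, re-running the audit on unchanged content always reproduces the same score, which is what distinguishes this from a subjective review.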

Access: The SNTNL API is currently restricted to internal use for the Dojo Project and advisory partners.