Multi-API AI Content Checker

A unified AI detection dashboard that runs your text through multiple commercial detectors simultaneously: ZeroGPT, Sapling AI, Winston AI, and a custom ML model. It aggregates scores and stores scan history for consistent, multi-perspective analysis.

4 Detection APIs
Unified Scoring
Full Scan History

What was broken.

No single AI detector is reliable enough to stake academic integrity decisions on. ZeroGPT might flag something that Sapling misses. Winston might agree with ZeroGPT but for different reasons. Faculty who want to be thorough end up copying text into 3-4 different websites, comparing results manually, and trying to make sense of contradictory scores. That's assuming they even have subscriptions to multiple services.

The process is slow, inconsistent, and leaves no audit trail. When an integrity case goes to committee, faculty need to show their work: which detectors they used, what scores they got, and whether the results were consistent. Without a unified system, that documentation is a mess of screenshots and browser tabs.

Single-Detector Unreliability

Every AI detector has blind spots. Relying on just one produces false positives and false negatives.

Manual Multi-Tool Workflow

Checking text across multiple detectors means copy-pasting into 3-4 websites and manually comparing contradictory results.

No Audit Trail

When cases go to integrity committees, faculty have no organized record of which tools they used and what they found.

Inconsistent Normalization

Each detector reports scores differently (0-1, 0-100, inverted scales). Comparing raw outputs is confusing and error-prone.

How we solved it.

01

Multi-API Aggregation

Built a Node.js backend that calls multiple detection APIs simultaneously (ZeroGPT, Sapling AI, Winston AI) and normalizes all results to a consistent 0-100% format. Each detector service returns a standardized {detector, status, aiScore, humanScore, raw} object.

Sapling recalculates overall scores from sentence-level analysis to prevent misleading results. Winston normalizes between 0-1 and 0-100 ranges automatically.
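The per-service normalization described above can be sketched roughly as follows. This is an illustrative sketch, not the project's actual code: the function names are invented here, and the response field names (`fakePercentage`, `score`) are assumptions based on the scale descriptions in this write-up.

```javascript
// Normalize each detector's raw response to the shared
// { detector, status, aiScore, humanScore, raw } shape (0-100 scales).

function normalizeZeroGpt(raw) {
  // ZeroGPT-style: a 0-100 "fakePercentage" where higher means more AI-like.
  const aiScore = Math.min(100, Math.max(0, raw.fakePercentage));
  return { detector: 'zerogpt', status: 'ok', aiScore, humanScore: 100 - aiScore, raw };
}

function normalizeSapling(raw) {
  // Sapling-style: a 0-1 probability that the text is AI-generated.
  const aiScore = Math.round(raw.score * 100);
  return { detector: 'sapling', status: 'ok', aiScore, humanScore: 100 - aiScore, raw };
}

function normalizeWinston(raw) {
  // Winston-style: a *human* score that may arrive on a 0-1 or 0-100 scale,
  // so scale small values up before inverting to get the AI score.
  const human = raw.score <= 1 ? raw.score * 100 : raw.score;
  const humanScore = Math.round(Math.min(100, Math.max(0, human)));
  return { detector: 'winston', status: 'ok', aiScore: 100 - humanScore, humanScore, raw };
}
```

Keeping the inversion and rescaling in one small function per service makes each detector's quirks auditable in isolation.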

02

Custom ML Detector (DeTeCtive)

Built a separate Python Flask service with pre-computed embeddings for a custom machine learning detector that runs alongside the commercial APIs, providing an additional independent signal.

03

Unified Dashboard

React frontend displays all detector results side-by-side with individual scores, an aggregated consensus view, and visual indicators for agreement/disagreement between detectors.
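The consensus view and the agreement indicator can be computed from the normalized results in a few lines. A minimal sketch, assuming the 0-100 `aiScore` shape above; the 20-point agreement band is an invented illustrative threshold, not the project's actual value.

```javascript
// Aggregate normalized detector results into a consensus score plus an
// agreement flag the dashboard can render as a visual indicator.
function aggregate(results) {
  const ok = results.filter(r => r.status === 'ok');
  if (ok.length === 0) return { consensus: null, agreement: 'none' };

  const scores = ok.map(r => r.aiScore);
  const consensus = Math.round(scores.reduce((a, b) => a + b, 0) / scores.length);

  // Detectors "agree" when their scores fall within a narrow band.
  const spread = Math.max(...scores) - Math.min(...scores);
  const agreement = spread <= 20 ? 'agree' : 'disagree';

  return { consensus, agreement, spread };
}
```

Surfacing the spread alongside the mean matters: a 50% consensus from scores of 45 and 55 means something very different from one built out of 10 and 90.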

04

Persistent Scan History

Every scan is saved to SQLite (or MySQL) with full results from all detectors, creating an audit trail for academic integrity proceedings.

Technologies Used

React 18 Vite Tailwind CSS Node.js Express SQLite Python Flask ZeroGPT API Sapling AI API Winston AI API

Facing a similar challenge?

Let's talk about how a unified AI detection workflow could work for your institution.

Start a Conversation

What it actually does.

Simultaneous Multi-API Scanning

Submit text once, get results from all enabled detectors in parallel. 30-second timeout per API, 60 seconds for the ML model.
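The fan-out-with-timeouts pattern described here can be sketched as below. This is a minimal illustration, not the project's actual implementation; the detector interface (`name`, `timeoutMs`, `check`) is invented for the example.

```javascript
// Race a detector call against a timeout so one slow API can't stall the scan.
function withTimeout(promise, ms, detector) {
  const timeout = new Promise((_, reject) =>
    setTimeout(() => reject(new Error(`${detector} timed out after ${ms}ms`)), ms)
  );
  return Promise.race([promise, timeout]);
}

// Run all enabled detectors in parallel; Promise.allSettled means one
// failure or timeout is reported as an error result instead of aborting
// the whole scan.
async function scanAll(text, detectors) {
  const settled = await Promise.allSettled(
    detectors.map(d => withTimeout(d.check(text), d.timeoutMs, d.name))
  );
  return settled.map((s, i) =>
    s.status === 'fulfilled'
      ? s.value
      : { detector: detectors[i].name, status: 'error', error: s.reason.message }
  );
}
```

With this shape, the dashboard always receives one entry per detector, so a dead API shows up as an explicit error card rather than a silently missing score.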

Normalized Scoring

Every detector's output is converted to a consistent 0-100% AI/Human score, regardless of how the original API reports results.

Custom ML Detector

A Python-based detector using pre-computed embeddings that provides an independent, non-commercial detection signal alongside the API results.

Scan History & Audit Trail

Every scan is persistently stored with timestamps, input text, and per-detector results. Searchable history for integrity case documentation.

Detector Health Monitoring

Health check endpoint monitors API availability. Disabled detectors (Copyleaks, GPTZero) can be enabled when credentials are available.

Sentence-Level Analysis

Sapling integration provides per-sentence AI probability scores, showing which specific sentences are most likely AI-generated.
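Surfacing the most suspicious sentences from a per-sentence breakdown is a small ranking step. A sketch under assumed names: the `{ sentence, score }` pair shape and the 0.7 threshold are illustrative, not Sapling's actual response format or the project's tuning.

```javascript
// Return the n highest-probability sentences above a cutoff, so faculty
// see the specific passages worth discussing with a student.
function topSuspicious(sentenceScores, n = 3, threshold = 0.7) {
  return sentenceScores
    .filter(s => s.score >= threshold)        // ignore low-probability sentences
    .sort((a, b) => b.score - a.score)        // most suspicious first
    .slice(0, n)
    .map(s => s.sentence);
}
```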

See it in action.

The numbers speak.

4
Detection APIs
ZeroGPT, Sapling, Winston, DeTeCtive
30s
API Timeout
Fast parallel execution
50K
Character Limit
Up to 50,000 characters per scan
100%
Score Normalization
Consistent scale across all detectors
Instead of maintaining subscriptions to four different detection tools and manually comparing results, everything is in one place. The scan history has saved us multiple times when integrity cases went to the appeals committee.
Academic Integrity Officer, Online University

What we learned.

01

Score normalization is harder than it sounds

ZeroGPT returns a “fakePercentage,” Sapling returns a 0-1 probability, Winston returns a “human score” where higher means more human. Getting all of these onto a consistent scale where results are actually comparable required careful per-service normalization logic.

02

Sentence-level scores are more useful than document-level

A document might be 60% AI overall, but that doesn't help faculty. Sapling's per-sentence breakdown shows exactly which sentences are suspicious; that's actionable information for a conversation with a student.

03

The audit trail is the product

The detection scores themselves are useful, but the real value is the persistent, searchable history. Academic integrity cases can take weeks to resolve and go through multiple committees. Having a timestamped record of exactly what was scanned and what each detector found is what faculty actually need.

Want a unified AI detection workflow?

Tell us about your academic integrity process and let's explore what a multi-detector aggregation system could look like for your institution.

No pitch. No pressure. Just a conversation about what might work.