A full-stack error detection and analysis tool that scans documents and code for issues, categorizes them by severity, and provides AI-powered fix suggestions. Built with a React frontend featuring 3D visualizations and a PHP backend for processing.
Document and code quality assurance at the institution was almost entirely manual. Reviewers would open each file (Word documents, PDFs, raw text) and scan line by line for formatting errors, broken references, inconsistent styling, and structural problems. With hundreds of templates and documents in active use, the process consumed enormous amounts of time and still missed issues regularly.
The bigger problem was propagation. When an error existed in a template, every document built from that template inherited the same issue. A misformatted heading, a broken link, or an incorrect style rule would replicate across dozens of courses before anyone caught it. By the time someone noticed, the damage was already widespread and the cleanup was painful.
There was no systematic way to catch these issues before they spread. Quality assurance was reactive: the team only found problems after they caused visible harm. The institution needed a tool that could scan documents at scale, categorize findings by severity, and suggest fixes at the source, before one template's mistake became dozens of courses' problem.
Reviewing documents line by line was time-consuming and inconsistent, with different reviewers catching different issues on different days.
Errors in templates silently replicated across every course built from them, turning one mistake into dozens before anyone noticed.
Without automated scanning, there was no systematic way to catch formatting inconsistencies, broken references, or structural problems.
Problems were only discovered after they caused visible issues. QA was a firefighting exercise, not a prevention strategy.
Built a processing pipeline that accepts Word documents (via Mammoth), PDFs (via PDF.js), and raw text. It extracts structured content and metadata from each format for consistent analysis regardless of the input type.
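The dispatch layer described above can be sketched roughly as follows. This is a minimal illustration, not the actual pipeline: `ParsedDocument`, `detectFormat`, and `parseRaw` are hypothetical names, and the parser stub only normalizes raw text where the real system would call into Mammoth or PDF.js.

```typescript
// Hypothetical normalized shape every parser emits, so downstream
// analysis is format-agnostic.
interface ParsedDocument {
  format: "docx" | "pdf" | "text";
  paragraphs: string[];                  // normalized text content
  metadata: Record<string, string>;
}

// Route a file to the right parser based on its extension.
function detectFormat(filename: string): ParsedDocument["format"] {
  const ext = filename.toLowerCase().split(".").pop() ?? "";
  if (ext === "docx" || ext === "doc") return "docx";
  if (ext === "pdf") return "pdf";
  return "text";                         // fall back to raw-text handling
}

// Stand-in parser: the real pipeline would call Mammoth for .docx and
// PDF.js for .pdf; here we just split raw text into paragraphs.
function parseRaw(format: ParsedDocument["format"], content: string): ParsedDocument {
  return {
    format,
    paragraphs: content.split(/\n{2,}/).map(p => p.trim()).filter(Boolean),
    metadata: { length: String(content.length) },
  };
}
```

The key design point is the shared `ParsedDocument` shape: once every format funnels into it, the analysis engine never needs to know where the content came from.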
Developed a rule-based and AI-assisted engine that scans parsed content for issues and categorizes each finding by type (formatting, structural, reference, style) and severity (critical, warning, info) to prioritize what matters most.
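The rule-based half of that engine might look something like this sketch. The specific rules and names here are invented for illustration; the point is the shape: each rule carries a type and severity, and findings sort most-severe-first so triage is built in.

```typescript
type Severity = "critical" | "warning" | "info";
type IssueType = "formatting" | "structural" | "reference" | "style";

interface Finding { line: number; type: IssueType; severity: Severity; message: string; }

interface Rule {
  type: IssueType;
  severity: Severity;
  test: (line: string) => boolean;
  message: string;
}

// Illustrative rules only; the real engine would carry many more.
const rules: Rule[] = [
  { type: "reference",  severity: "critical", test: l => /\[broken link\]/i.test(l), message: "Broken link placeholder" },
  { type: "formatting", severity: "warning",  test: l => /\t/.test(l),               message: "Tab character in body text" },
  { type: "style",      severity: "info",     test: l => l.length > 120,             message: "Line exceeds 120 characters" },
];

const rank: Record<Severity, number> = { critical: 0, warning: 1, info: 2 };

function scan(lines: string[]): Finding[] {
  const findings: Finding[] = [];
  lines.forEach((line, i) => {
    for (const r of rules) {
      if (r.test(line)) {
        findings.push({ line: i + 1, type: r.type, severity: r.severity, message: r.message });
      }
    }
  });
  // Most severe first, so reviewers see critical issues before noise.
  return findings.sort((a, b) => rank[a.severity] - rank[b.severity]);
}
```

Keeping severity as data on each rule (rather than logic scattered through the scanner) is what makes the later severity filtering cheap.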
Integrated AI suggestion logic that doesn't just flag problems but proposes concrete fixes. Each error comes with a specific recommendation, so reviewers spend less time working out solutions on their own.
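The flag-to-fix pairing can be illustrated with a small sketch. In the real system the suggestion text comes from an AI model; here a lookup table stands in for that call, and all names (`suggestFix`, the error codes) are hypothetical.

```typescript
interface Suggestion { errorCode: string; fix: string; }

// Stand-in for the AI suggestion call: a static template lookup.
const fixTemplates: Record<string, string> = {
  "broken-link": "Replace the placeholder with the current course URL.",
  "tab-in-body": "Convert tabs to the template's standard indentation.",
};

function suggestFix(errorCode: string): Suggestion {
  return {
    errorCode,
    // Always return something actionable, even when no template matches.
    fix: fixTemplates[errorCode] ?? "No automated suggestion; flag for manual review.",
  };
}
```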
Built a React frontend with Three.js-powered 3D error visualizations that map detected issues spatially, plus Framer Motion animations throughout the interface to make complex data easier to work with.
Let's talk about how automated error detection could improve quality assurance at your institution.
Upload Word documents, PDFs, or paste raw text. Mammoth and PDF.js handle format-specific parsing so every document type gets the same thorough analysis.
Every detected error is classified by type and severity: critical, warning, or informational. That way the team focuses on high-impact issues first and triages efficiently.
Each detected issue comes with a concrete fix recommendation generated by AI. Not just a flag, but a solution the reviewer can apply right away.
Three.js renders an interactive 3D map of detected errors. Reviewers get a spatial overview of where problems cluster and an at-a-glance read on overall document health.
Upload and scan multiple documents at once. The platform queues files, processes them in parallel, and delivers consolidated results. Especially useful for auditing entire template libraries.
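The queue-and-parallelize behavior described here can be sketched as a concurrency-limited batch runner. The limit of 3 and the `processBatch` name are assumptions for illustration; the actual platform's queueing details aren't shown here.

```typescript
// Run `worker` over all items with at most `limit` in flight at once.
async function processBatch<T, R>(
  items: T[],
  worker: (item: T) => Promise<R>,
  limit = 3,
): Promise<R[]> {
  const results: R[] = new Array(items.length);
  let next = 0;                          // shared cursor into `items`
  async function run(): Promise<void> {
    while (next < items.length) {
      const i = next++;                  // claim the next unprocessed item
      results[i] = await worker(items[i]);
    }
  }
  // Start up to `limit` workers that pull from the shared cursor,
  // then wait for all of them to drain the queue.
  await Promise.all(Array.from({ length: Math.min(limit, items.length) }, run));
  return results;                        // results stay in input order
}
```

Because results are written back by index, the consolidated report keeps the original upload order even though files finish at different times.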
Generate detailed error reports that can be exported and shared. Each report includes error locations, severity levels, descriptions, and suggested fixes for full documentation.
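A report exporter along these lines might flatten findings into CSV rows. The exact schema is an assumption; the field names simply mirror what the report includes (location, severity, description, suggested fix).

```typescript
interface ReportRow {
  location: string;
  severity: string;
  description: string;
  suggestedFix: string;
}

// Serialize findings to CSV, quoting fields so commas and embedded
// quotes in descriptions don't break the columns.
function toCsv(rows: ReportRow[]): string {
  const esc = (s: string) => `"${s.replace(/"/g, '""')}"`;
  const header = "location,severity,description,suggested_fix";
  const body = rows.map(r =>
    [r.location, r.severity, r.description, r.suggestedFix].map(esc).join(","),
  );
  return [header, ...body].join("\n");
}
```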
The main analysis view showing detected errors categorized by severity, with inline fix suggestions and a summary overview of document health.
An interactive Three.js-powered 3D map that spatially renders detected issues. Reviewers can explore error clusters and severity distributions visually.
Multi-document upload with queued processing, consolidated results, and exportable reports for sharing with stakeholders.
Error detection went from a manual, hit-or-miss process to something systematic. Template quality improved noticeably, and we stopped finding the same issues repeated in dozens of courses because problems were caught at the source.
Listing every error equally is overwhelming and counterproductive. When the team could filter by severity (critical issues first, warnings second, informational last) they actually fixed the important stuff instead of drowning in noise. The categorization engine was the single most useful feature.
A flat list of errors hides structural patterns. When errors are mapped spatially in 3D, clusters emerge. You can see that all the formatting issues are in section headers, or that broken references pile up in the appendix. That spatial awareness changes how people approach fixes.
The leap from "here's what's wrong" to "here's how to fix it" made a huge difference in adoption. Once the platform started showing fix suggestions alongside each error, resolution times dropped because reviewers didn't have to figure out the right fix themselves.
Tell us about your document review process. We'd like to explore how automated error detection and AI-powered fix suggestions could save your team real time.
No pitch. No pressure. Just a conversation about what might work.