AI Course Content Analyzer

An AI-powered analysis engine that unpacks LMS course export files, extracts discussions, assignments, and learning objectives, then uses Google Gemini to evaluate content quality, engagement potential, and difficulty alignment. The result: opaque course packages become clear quality reports.

AI-Powered Analysis
0+ Export Formats
0% Objective Scoring

What was broken.

Course export files (.imscc packages, ZIP archives, Common Cartridge bundles) are the lingua franca of LMS content migration. But they're also black boxes. Nobody was opening them up to look at what was actually inside, let alone evaluating whether the content was any good.

The QA team at the institution received dozens of these packages every semester from faculty who were moving courses between platforms, archiving old sections, or sharing content across departments. Each package could contain hundreds of files: discussion prompts, assignment rubrics, quizzes, learning objectives, media assets. Reviewing even one package thoroughly took hours. Reviewing all of them was simply not happening.

The result was predictable. Misaligned learning objectives slipped through. Discussion prompts that hadn't been updated in years kept getting recycled. Assignments with vague rubrics went unquestioned because nobody had the bandwidth to question them. Quality review was manual, subjective, and never done the same way twice.

Course Packages Were Black Boxes

Export files sat in folders unopened. Nobody had the tools or time to unpack and analyze what was actually inside them.

No Systematic Quality Evaluation

There was no standard way to measure content quality. Reviews were ad hoc, inconsistent, and dependent on whoever happened to look.

Misaligned Objectives Went Unnoticed

Learning objectives rarely matched the assessments they were supposed to support. Nobody caught it because nobody was checking.

Manual Review Was Subjective

Quality depended on who reviewed it and when. One reviewer's "good enough" was another's "needs major revision."

How we solved it.

01

Export Format Parsing & Extraction

Built a universal parser that handles .imscc (Common Cartridge), raw ZIP exports, and other LMS-specific archive formats. The system unpacks each file, walks the manifest, and extracts every discussion prompt, assignment, quiz, rubric, and learning objective into a structured, queryable format.

Design choice: I stored extracted content in SQLite per-package for fast local analysis, with MySQL for cross-package aggregation and historical tracking.
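The manifest walk at the heart of step 01 can be sketched as follows. The production parser is PHP, but the idea translates; `extract_resources` and the flat resource dicts are illustrative names only, and real-world packages vary far more in namespaces and nesting than this minimal version handles.

```python
import io
import zipfile
import xml.etree.ElementTree as ET

def extract_resources(package_bytes):
    """Unpack a Common Cartridge package and list its resources.

    Walks imsmanifest.xml and returns one dict per <resource>
    element (identifier, type, entry-point file). Sketch only:
    real packages differ in namespaces and nesting depth.
    """
    resources = []
    with zipfile.ZipFile(io.BytesIO(package_bytes)) as zf:
        manifest = ET.fromstring(zf.read("imsmanifest.xml"))
        # Namespaces differ across LMS exports, so match local tag names.
        for node in manifest.iter():
            if node.tag.split("}")[-1] == "resource":
                resources.append({
                    "identifier": node.get("identifier"),
                    "type": node.get("type"),
                    "href": node.get("href"),
                })
    return resources
```

Each extracted resource row is what gets persisted to the per-package SQLite store for downstream analysis.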
02

Content Classification & Mapping

After extraction, every piece of content is classified by type and mapped to its associated learning objectives. The system identifies which assessments connect to which objectives, flags orphaned content that isn't tied to any goal, and detects duplicated or near-duplicate material across sections.
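The orphan and near-duplicate checks can be sketched like this, assuming extracted content is already in flat dicts. The field names (`objective_id`) and the similarity threshold are illustrative assumptions; the production system is PHP.

```python
from difflib import SequenceMatcher

def find_orphans(assessments, objectives):
    """Return assessments whose objective_id matches no known objective."""
    known = {obj["id"] for obj in objectives}
    return [a for a in assessments if a.get("objective_id") not in known]

def near_duplicates(prompts, threshold=0.9):
    """Pair up discussion prompts whose text similarity meets threshold."""
    pairs = []
    for i in range(len(prompts)):
        for j in range(i + 1, len(prompts)):
            ratio = SequenceMatcher(None, prompts[i], prompts[j]).ratio()
            if ratio >= threshold:
                pairs.append((i, j, round(ratio, 2)))
    return pairs
```

Simple string similarity is enough to catch copy-pasted prompts across sections; anything subtler is left to the AI layer.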

03

Google Gemini AI Analysis

Each extracted element is sent through Google Gemini with carefully engineered prompts that evaluate content quality, engagement potential, cognitive difficulty level (mapped to Bloom's Taxonomy), and alignment between stated objectives and actual assessments. The AI produces structured scores and written rationale for every evaluation.
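A sketch of what the prompt/response contract might look like. The rubric fields, JSON schema, and validation rules here are assumptions for illustration, not the actual engineered prompts, and the model call itself is omitted.

```python
import json

RUBRIC_PROMPT = """Evaluate the following course element.
Return JSON with keys: quality (1-5), engagement (1-5),
bloom_level (one of: remember, understand, apply, analyze,
evaluate, create), rationale (one sentence).

Element type: {kind}
Stated objective: {objective}
Content:
{text}
"""

BLOOM_LEVELS = {"remember", "understand", "apply",
                "analyze", "evaluate", "create"}

def build_prompt(kind, objective, text):
    """Fill the rubric template for one extracted element."""
    return RUBRIC_PROMPT.format(kind=kind, objective=objective, text=text)

def parse_evaluation(raw):
    """Validate the model's JSON reply before storing it."""
    data = json.loads(raw)
    assert 1 <= data["quality"] <= 5
    assert 1 <= data["engagement"] <= 5
    assert data["bloom_level"] in BLOOM_LEVELS
    return data
```

Validating the reply before it reaches the database keeps one malformed model response from poisoning a batch run.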

04

Report Generation & Recommendations

All analysis results compile into exportable quality reports, whether for a single course, a department, or the whole catalog. Each report surfaces specific issues, ranks them by impact, and gives concrete recommendations the QA team and faculty can act on right away.

Reports include visual breakdowns of objective alignment, difficulty distribution, and engagement scoring so patterns are obvious at a glance.
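The impact ranking that orders issues in a report can be as simple as weighting each finding type and summing. The issue names and weights below are hypothetical, sketched in Python rather than the project's PHP.

```python
from collections import Counter

def rank_issues(findings):
    """Rank issue types by total impact (occurrences x severity weight)."""
    weights = {"misaligned_objective": 3,
               "vague_rubric": 2,
               "low_engagement": 1}
    totals = Counter()
    for f in findings:
        totals[f["issue"]] += weights.get(f["issue"], 1)
    return totals.most_common()
```

The top of the ranked list becomes the "fix this first" section of the report.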

Technologies Used

PHP MySQL SQLite Google Gemini AI Common Cartridge (.imscc) LMS Export Parsing ZIP Processing Content Analysis

Sitting on course packages you've never analyzed?

I can show you how automated content analysis catches quality issues that manual reviews miss.

Start a Conversation

What it actually does.

Multi-Format Import

Handles .imscc Common Cartridge packages, raw ZIP exports, and other LMS-specific archive formats. Upload a package and the system figures out the rest.

Automatic Content Extraction

Pulls out every discussion prompt, assignment brief, project description, quiz, rubric, and learning objective, all structured and ready for analysis.

AI Objective Analysis

Google Gemini evaluates whether learning objectives are measurable, appropriately scoped, and actually aligned with the assessments in the course.

Engagement Scoring

Rates discussion prompts and assignments on engagement potential. Flat, low-effort prompts that won't generate real student discussion get flagged.

Difficulty Alignment

Maps content to Bloom's Taxonomy levels and flags mismatches, like a course claiming higher-order thinking but only assessing recall and comprehension.

Batch Processing & Export

Analyze dozens of course packages in a single run. Results compile into exportable reports that the QA team and faculty can review, share, and act on.
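The difficulty-alignment check described above can be approximated with a verb-to-level lookup. In the real system Gemini does this mapping; the abridged verb lists here are a keyword-based stand-in, so treat this purely as a sketch of the flagging logic.

```python
import re

BLOOM_VERBS = {
    "remember":   {"define", "list", "recall", "identify"},
    "understand": {"describe", "explain", "summarize"},
    "apply":      {"use", "solve", "demonstrate", "implement"},
    "analyze":    {"compare", "analyze", "differentiate"},
    "evaluate":   {"judge", "critique", "justify", "assess"},
    "create":     {"design", "construct", "develop", "compose"},
}
ORDER = ["remember", "understand", "apply", "analyze", "evaluate", "create"]

def bloom_level(text):
    """Return the highest Bloom level whose signal verb appears in text."""
    words = set(re.findall(r"[a-z]+", text.lower()))
    best = None
    for level in ORDER:
        if BLOOM_VERBS[level] & words:
            best = level
    return best

def misaligned(objective, assessment):
    """Flag when the assessment tests a lower level than the objective claims."""
    o, a = bloom_level(objective), bloom_level(assessment)
    if o is None or a is None:
        return False
    return ORDER.index(a) < ORDER.index(o)
```

This is exactly the mismatch pattern described above: an objective that claims "analyze" paired with an assessment that only asks students to list.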

See it in action.

The numbers speak.

0%
Faster Reviews
From hours per package to minutes
0+
Formats Supported
.imscc, .zip, and LMS-specific exports
0%
Objective Coverage
Every objective checked for alignment
0+
Courses Analyzed
Batch processed in a single run

What I learned.

01

The hardest part wasn't the AI; it was parsing

Common Cartridge sounds standard, but every LMS implements it differently. Some nest content three levels deep, others flatten everything. Some embed objectives in metadata, others inline them in HTML. Getting the parser to handle every real-world variation was far more work than building the AI analysis layer. The lesson: never underestimate the gap between a spec and how people actually use it.

02

Scores without explanations get ignored

Early versions gave numeric scores for engagement and alignment. Faculty glanced at them and moved on. When we added Gemini-generated rationale ("This discussion prompt asks for a personal opinion but doesn't require evidence or peer response, limiting critical engagement"), the feedback became useful. Numbers tell you something is off. Explanations tell you what to fix.

03

Batch processing changed the conversation entirely

When the QA team could analyze an entire department's course packages in one run, the conversation shifted from "is this one course okay?" to "what patterns do we see across the program?" That aggregate view revealed systemic issues individual reviews would never surface, like an entire department recycling the same low-engagement discussion prompts.

Want this for your institution?

Tell us about your content quality challenges and we'll sketch out what an analysis solution could look like for your setup.

No pitch. No pressure. Just a conversation about what might work.