An AI-powered analysis engine that unpacks LMS course export files, extracts discussions, assignments, and learning objectives, then uses Google Gemini to evaluate content quality, engagement potential, and difficulty alignment. The result: opaque course packages become clear quality reports.
Course export files (.imscc packages, ZIP archives, Common Cartridge bundles) are the lingua franca of LMS content migration. But they're also black boxes. Nobody was opening them up to look at what was actually inside, let alone evaluating whether the content was any good.
The QA team at the institution received dozens of these packages every semester from faculty who were moving courses between platforms, archiving old sections, or sharing content across departments. Each package could contain hundreds of files: discussion prompts, assignment rubrics, quizzes, learning objectives, media assets. Reviewing even one package thoroughly took hours. Reviewing all of them was simply not happening.
The result was predictable. Misaligned learning objectives slipped through. Discussion prompts that hadn't been updated in years kept getting recycled. Assignments with vague rubrics went unquestioned because nobody had the bandwidth to question them. Quality review was manual, subjective, and never done the same way twice.
Export files sat in folders unopened. Nobody had the tools or time to unpack and analyze what was actually inside them.
There was no standard way to measure content quality. Reviews were ad hoc, inconsistent, and dependent on whoever happened to look.
Learning objectives rarely matched the assessments they were supposed to support. Nobody caught it because nobody was checking.
Quality depended on who reviewed it and when. One reviewer's "good enough" was another's "needs major revision."
Built a universal parser that handles .imscc (Common Cartridge), raw ZIP exports, and other LMS-specific archive formats. The system unpacks each package, walks the manifest, and extracts every discussion prompt, assignment, quiz, rubric, and learning objective into a structured, queryable format.
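In rough terms, the extraction step looks something like the sketch below: treat the package as a ZIP, parse the imsmanifest.xml at its root, and classify each declared resource by its type string. The type-prefix mapping and function names here are illustrative assumptions, not the production parser, which handles far more LMS-specific variation.

```python
# Minimal sketch: unpack a Common Cartridge-style ZIP and walk its manifest.
# Resource type strings vary by LMS, so RESOURCE_TYPES is illustrative only.
import zipfile
import xml.etree.ElementTree as ET
from pathlib import Path

RESOURCE_TYPES = {
    "imsdt": "discussion",      # discussion topics
    "imsqti": "quiz",           # QTI assessments
    "assignment": "assignment",
    "webcontent": "page",
}

def classify(resource_type: str) -> str:
    for prefix, label in RESOURCE_TYPES.items():
        if prefix in resource_type:
            return label
    return "other"

def extract_package(path: str, out_dir: str = "unpacked") -> list[dict]:
    """Unpack an .imscc/.zip export, parse imsmanifest.xml, list its resources."""
    with zipfile.ZipFile(path) as zf:
        zf.extractall(out_dir)
    root = ET.parse(Path(out_dir) / "imsmanifest.xml").getroot()
    items = []
    # Match on the local tag name so namespace differences between LMSes don't matter.
    for el in root.iter():
        if el.tag.endswith("}resource") or el.tag == "resource":
            rtype = el.get("type", "")
            items.append({
                "identifier": el.get("identifier"),
                "type": classify(rtype),
                "raw_type": rtype,
                "href": el.get("href"),
            })
    return items

if __name__ == "__main__":
    for item in extract_package("course_export.imscc"):
        print(item["type"], item["href"])
```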
After extraction, every piece of content is classified by type and mapped to its associated learning objectives. The system identifies which assessments connect to which objectives, flags orphaned content that isn't tied to any goal, and detects duplicated or near-duplicate material across sections.
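Two of those checks are simple enough to sketch directly. Assuming each extracted item carries an id, its text, and a list of linked objective IDs (hypothetical field names), orphan detection is a filter and near-duplicate detection can start with a plain text-similarity ratio:

```python
# Minimal sketch of orphan and near-duplicate checks over extracted items.
# Assumes each item is a dict with "id", "text", and optional "objective_ids".
from difflib import SequenceMatcher
from itertools import combinations

def orphaned_items(items: list[dict]) -> list[dict]:
    """Content that is not tied to any learning objective."""
    return [it for it in items if not it.get("objective_ids")]

def near_duplicates(items: list[dict], threshold: float = 0.9) -> list[tuple]:
    """Pairs of items whose text is nearly identical (pairwise ratio check, O(n^2))."""
    pairs = []
    for a, b in combinations(items, 2):
        ratio = SequenceMatcher(None, a["text"], b["text"]).ratio()
        if ratio >= threshold:
            pairs.append((a["id"], b["id"], round(ratio, 2)))
    return pairs
```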
Each extracted element is sent through Google Gemini with carefully engineered prompts that evaluate content quality, engagement potential, cognitive difficulty level (mapped to Bloom's Taxonomy), and alignment between stated objectives and actual assessments. The AI produces structured scores and written rationale for every evaluation.
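A single evaluation call, reduced to its essentials, might look like the sketch below using the google-generativeai SDK. The model name, rubric prompt, and score schema are placeholders; the production prompts are considerably more detailed.

```python
# Minimal sketch of one content evaluation via Gemini.
# Assumptions: google-generativeai SDK, placeholder model name and rubric prompt.
import json
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")            # assumed: key comes from env/config
model = genai.GenerativeModel("gemini-1.5-flash")  # assumed model choice

PROMPT = """You are reviewing LMS course content.
Evaluate the following {kind} and return JSON with these keys:
quality (1-5), engagement (1-5), bloom_level (remember, understand, apply,
analyze, evaluate, or create), objective_alignment (1-5), rationale (string).

Stated objectives:
{objectives}

Content:
{content}
"""

def evaluate(kind: str, content: str, objectives: list[str]) -> dict:
    prompt = PROMPT.format(kind=kind, content=content, objectives="\n".join(objectives))
    response = model.generate_content(
        prompt,
        generation_config={"response_mime_type": "application/json"},
    )
    return json.loads(response.text)
```

Requesting JSON output keeps the scores and rationale machine-readable, so every evaluation can flow straight into the reporting step.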
All analysis results compile into exportable quality reports, whether for a single course, a department, or the whole catalog. Each report surfaces specific issues, ranks them by impact, and gives concrete recommendations the QA team and faculty can act on right away.
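As a rough sketch of that roll-up, assuming each evaluation dict carries the scores and rationale from the previous step plus course and item identifiers (illustrative field names), report generation can be as simple as filtering, ranking, and exporting:

```python
# Minimal sketch: filter low-scoring items, rank worst-first, export to CSV.
# Field names mirror the evaluation sketch above and are illustrative.
import csv

def write_report(evaluations: list[dict], path: str = "quality_report.csv") -> int:
    findings = [e for e in evaluations
                if e["quality"] <= 2 or e["engagement"] <= 2 or e["objective_alignment"] <= 2]
    # Lowest combined score first, so the worst issues lead the report.
    findings.sort(key=lambda e: e["quality"] + e["engagement"] + e["objective_alignment"])
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=[
            "course", "item_id", "quality", "engagement",
            "objective_alignment", "bloom_level", "rationale"])
        writer.writeheader()
        for e in findings:
            writer.writerow({k: e.get(k) for k in writer.fieldnames})
    return len(findings)
```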
I can show you how automated content analysis catches quality issues that manual reviews miss.
Handles .imscc Common Cartridge packages, raw ZIP exports, and other LMS-specific archive formats. Upload a package and the system figures out the rest.
Pulls out every discussion prompt, assignment brief, project description, quiz, rubric, and learning objective, all structured and ready for analysis.
Google Gemini evaluates whether learning objectives are measurable, appropriately scoped, and actually aligned with the assessments in the course.
Rates discussion prompts and assignments on engagement potential. Flat, low-effort prompts that won't generate real student discussion get flagged.
Maps content to Bloom's Taxonomy levels and flags mismatches, like a course claiming higher-order thinking but only assessing recall and comprehension.
Analyzes dozens of course packages in a single run. Results compile into exportable reports that the QA team and faculty can review, share, and act on.
Drag and drop .imscc or .zip files and watch the system unpack, classify, and catalog every piece of content. The extraction progress shows each element as it's identified: discussions, assignments, objectives, quizzes, all organized in real time.
Each content element receives scores for quality, engagement, and difficulty alignment. The AI provides written rationale alongside every score, so reviewers understand exactly why something was flagged and what to do about it.
Reports break down findings by category: objective alignment gaps, low-engagement prompts, difficulty mismatches. Visual charts and prioritized recommendations make it clear where to focus.
Common Cartridge sounds standard, but every LMS implements it differently. Some nest content three levels deep, others flatten everything. Some embed objectives in metadata, others inline them in HTML. Getting the parser to handle every real-world variation was far more work than building the AI analysis layer. The lesson: never underestimate the gap between a spec and how people actually use it.
Early versions gave numeric scores for engagement and alignment. Faculty glanced at them and moved on. When we added Gemini-generated rationale ("This discussion prompt asks for a personal opinion but doesn't require evidence or peer response, limiting critical engagement"), the feedback became useful. Numbers tell you something is off. Explanations tell you what to fix.
When the QA team could analyze an entire department's course packages in one run, the conversation shifted from "is this one course okay?" to "what patterns do we see across the program?" That aggregate view revealed systemic issues, like an entire department recycling the same low-engagement discussion prompts, that individual reviews would never surface.
Tell us about your content quality challenges and we'll sketch out what an analysis solution could look like for your setup.
No pitch. No pressure. Just a conversation about what might work.