The product
Five workflows. One engine.
Literaria is not a course library or a remediation tool. It is a content-transformation engine. Source goes in; compliant, bilingual, structured learning material comes out — with an audit trail attached. Every workflow is orchestrated by Perplexity Computer.
Workflow 1: Accessible module generation
What it does
Ingests source documents — PDFs, regulations, curricula, manuals — and outputs structured learning modules with WCAG 2.1 AA conformant HTML5, semantic structure, ARIA labeling, machine-readable assessments, and embedded learning objectives.
- Claude Opus extracts semantic structure from the source
- GPT-5 generates accessible HTML5 with proper landmarks
- Gemini produces vision-based alt-text for figures
- axe-core + Pa11y validate every output before it lands
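The validation step above can be sketched as a gate that blocks output until every checker returns zero violations. This is a minimal illustration with two hypothetical regex-based checks standing in for the real axe-core and Pa11y runs, which are far more thorough:

```python
import re

def check_img_alt(html: str) -> list[str]:
    # One of many WCAG 1.1.1 (Non-text Content) checks:
    # every <img> element needs an alt attribute
    return [f"missing alt: {tag}"
            for tag in re.findall(r"<img\b[^>]*>", html)
            if "alt=" not in tag]

def check_lang_attr(html: str) -> list[str]:
    # WCAG 3.1.1 (Language of Page): the <html> element must declare a lang
    if re.search(r"<html\b[^>]*\blang=", html):
        return []
    return ["missing lang attribute on <html>"]

def accessibility_gate(html: str) -> list[str]:
    # A module ships only when every validator reports zero violations
    violations: list[str] = []
    for check in (check_img_alt, check_lang_attr):
        violations.extend(check(html))
    return violations
```

An empty return value means the output passes the gate; any non-empty list sends the module back for regeneration.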
Workflow 2: Bilingual localization
What it does
Computer researches the institution's locale via web search, then produces both versions in parallel. Puerto Rico classes get UPR examples and AODA / ADA Title II legal mapping. U.S. mainland gets HSI-relevant context. Latin America gets country-specific legal frames.
- Live web search for locale-specific examples
- Parallel EN + ES generation, not sequential translation
- Regional terminology and legal framework mapping
- Quality gates: linguistic review + cultural fit checks
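Parallel generation rather than sequential translation can be sketched with two concurrent model calls from the same source. The `generate_module` function here is a stand-in for a real model API call, and the locale-context parameters are illustrative assumptions:

```python
import asyncio

async def generate_module(locale: str, source: str, context: str) -> str:
    # Stand-in for a model call; the real system first researches the
    # institution's locale, then prompts with region-specific examples
    # and the applicable legal framing
    await asyncio.sleep(0)
    return f"[{locale} | {context}] {source}"

async def generate_bilingual(source: str, context_en: str, context_es: str) -> dict[str, str]:
    # Both language versions are produced concurrently from the same
    # source, not generated in English and then translated
    en, es = await asyncio.gather(
        generate_module("en", source, context_en),
        generate_module("es", source, context_es),
    )
    return {"en": en, "es": es}
```

Because each version is conditioned on its own locale context, the Spanish output can carry UPR examples while the English output carries HSI-relevant framing, instead of both inheriting one version's assumptions.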
Workflow 3: Live scheduling and delivery
What it does
A live agent inside Computer drives the class schedule. Session reminders ship in the student's language. When a class falls behind, Literaria reshuffles the schedule and notifies the instructor.
- Per-class schedule generation matched to the academic calendar
- Bilingual session reminders via email or LMS
- Adaptive replanning when students miss sessions
- Idempotent delivery — no duplicate fires possible
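Idempotent delivery is typically built on a deterministic key per reminder, so retries and replays deduplicate instead of double-sending. A minimal sketch, with `ReminderSender` and its `transport` callback as illustrative assumptions:

```python
import hashlib

def reminder_key(student_id: str, session_id: str, kind: str) -> str:
    # Deterministic: the same reminder always hashes to the same key
    raw = f"{student_id}:{session_id}:{kind}"
    return hashlib.sha256(raw.encode()).hexdigest()

class ReminderSender:
    """Deduplicates on the key, so a retry can never fire twice."""

    def __init__(self, transport):
        self.transport = transport     # e.g. an email or LMS client
        self._sent: set[str] = set()   # durable storage in a real system

    def send(self, student_id: str, session_id: str, kind: str, message: str) -> bool:
        key = reminder_key(student_id, session_id, kind)
        if key in self._sent:
            return False               # already delivered; drop silently
        self.transport(message)
        self._sent.add(key)
        return True
```

In production the sent-key set would live in durable storage so deduplication survives restarts; the in-memory set here only illustrates the contract.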
Workflow 4: Audit and compliance reporting
What it does
Parallel subagents research live WCAG case law via academic search, then produce bilingual PDF/UA-aligned reports validated against veraPDF. Every module ships with conformance evidence the institution can hand to OCR.
- WCAG conformance evidence per criterion
- Class-level completion analytics
- Learner accommodation records
- veraPDF-validated PDF/UA-aligned output
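Per-criterion conformance evidence can be represented as a machine-readable record per module. This is a sketch of one plausible shape, not Literaria's actual schema; the field names are assumptions:

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class CriterionEvidence:
    criterion: str   # e.g. "1.1.1 Non-text Content"
    level: str       # "A" or "AA"
    status: str      # "pass" | "fail" | "not-applicable"
    tool: str        # validator that produced the result

def conformance_report(module_id: str, evidence: list[CriterionEvidence]) -> str:
    # Machine-readable evidence an institution can attach to an OCR response
    return json.dumps({
        "module": module_id,
        "target": "WCAG 2.1 AA",
        "criteria": [asdict(e) for e in evidence],
    }, indent=2)
```

A record like this gives an auditor one line per success criterion, with the tool that verified it, instead of a prose claim of conformance.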
Workflow 5: Regulatory monitoring
What it does
Every day, a Computer subagent re-reads W3C, WAI, ADA.gov, and the U.S. Access Board. When it detects a change, it cross-references which deployed modules cite the affected WCAG criteria — and flags the instructor with a remediation diff before the regulator does.
- Daily fetch with a polite User-Agent and whitespace-normalized SHA-256 deltas
- Severity logic: silent → info → warning → critical
- Critical = changed criterion maps to a live module
- Per-event remediation diff suggested by Computer
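The delta and severity mechanics above can be sketched in a few lines: hash a whitespace-normalized copy of each page so cosmetic edits stay silent, then escalate only when a changed criterion intersects a deployed module. The function names and the `normative` flag are illustrative assumptions:

```python
import hashlib

def normalized_digest(page_text: str) -> str:
    # Collapse whitespace before hashing so reformatting and cosmetic
    # edits do not register as content deltas
    return hashlib.sha256(" ".join(page_text.split()).encode()).hexdigest()

def classify(old_digest: str, new_digest: str,
             changed_criteria: set[str], live_criteria: set[str],
             normative: bool) -> str:
    # Escalation ladder: silent -> info -> warning -> critical
    if old_digest == new_digest:
        return "silent"            # no delta at all
    if changed_criteria & live_criteria:
        return "critical"          # a changed criterion maps to a live module
    if normative:
        return "warning"           # normative text changed, no module affected
    return "info"                  # editorial change only
```

Only the critical branch triggers an instructor-facing remediation diff; everything below it stays in the audit log.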
Why Computer
The five workflows are not chatbot calls.
They are orchestration tasks — research, generation, validation, scheduling, and publishing — running across multiple specialized models. A version of Literaria built on a single LLM API would not work. The regulatory-accuracy requirements demand verification loops and structured-output validation that benefit from Computer's orchestration model.
In the demo, Computer is visibly doing work: reading source documents, validating accessibility output against WCAG criteria, generating audit reports, scheduling delivery, and re-reading the regulatory frontier. Computer is not a chat interface for Literaria — it is the production infrastructure.