Documentation Summary
What We've Built
Clear AI v2 includes 19 production-ready modules organized into 5 categories, plus a complete 5-agent system and a GraphQL API, all with comprehensive test coverage (960+ tests in total).
Quick Navigation
🎯 For Building Conversational AI
- Response System - Structured responses
- Intent Classification - Understand user intent
- Confidence Scoring - Express uncertainty
- Progress Tracking - Multi-step tasks
- Conversation Utilities - Entity extraction
🧠 For Managing Context & Memory
- Context Management - Handle long conversations
- Memory Systems - Neo4j + Pinecone
- Embeddings - Ollama & OpenAI
🔄 For Complex Workflows
- Workflow Graphs - LangGraph-style state machines
- Checkpointing - Save/resume workflows
🏗️ For Production Infrastructure
- Token Management - Counting & budgets
- LLM Providers - OpenAI, Groq, Ollama
- Configuration - Environment management
- Observability - Langfuse tracing
🔧 Foundation & Tools
- Types - TypeScript interfaces
- Validation - Zod schemas
- Utilities - Common helpers
- Tools - MCP tools
- API - REST API
📖 Practical Guides
- Environment Setup - Service installation
- Testing Guide - Running and writing tests
- Configuration Guide - All config options
- Development Guide - Contributing to project
Module Status
| Category | Modules | Unit Tests | Integration Tests | Status |
|---|---|---|---|---|
| Conversational | 5 | 92 | 20+ | ✅ Complete |
| Context & Memory | 3 | 112 | 15+ | ✅ Complete |
| Workflows | 2 | 35 | 10+ | ✅ Complete |
| Infrastructure | 4 | 34+ | 25+ | ✅ Complete |
| Foundation | 5 | 451+ | 30+ | ✅ Complete |
| Shared Library | 19 | 724 | 100+ | ✅ Complete |
| Agent System | 5 | 78 | 45+ | ✅ Complete |
| GraphQL API | 1 | - | 62 | ✅ Complete |
| Grand Total | 25 | 802 | 160+ | ✅ 960+ Tests |
Key Features
✅ Conversational: Ask questions, show progress, express uncertainty
✅ Context-Aware: Smart compression, long conversation support
✅ Memory: Episodic (Neo4j) + Semantic (Pinecone)
✅ Workflows: State graphs with conditional logic
✅ Multi-LLM: OpenAI, Groq, Ollama with fallback
✅ Cost Control: Token budgets and estimation
✅ Observable: Langfuse integration
✅ Type-Safe: Strict TypeScript throughout
✅ Well-Tested: 802 unit + 160+ integration tests
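As a rough illustration of the Multi-LLM fallback feature above, provider fallback can be sketched like this. The `Provider` type and `completeWithFallback` helper are illustrative assumptions, not the LLM Providers module's actual API:

```typescript
// Hypothetical sketch of multi-provider fallback; names are illustrative,
// not the library's real API.
type Provider = {
  name: string;
  complete: (prompt: string) => Promise<string>;
};

async function completeWithFallback(
  providers: Provider[],
  prompt: string
): Promise<string> {
  let lastError: unknown;
  for (const p of providers) {
    try {
      return await p.complete(prompt); // first provider that succeeds wins
    } catch (err) {
      lastError = err; // remember the failure and try the next provider
    }
  }
  throw new Error(`All providers failed: ${String(lastError)}`);
}
```

In this sketch, you would order the providers by preference (e.g. OpenAI, then Groq, then Ollama) so cheaper or local providers act as backups.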
Getting Started
- Installation & Setup
- Core Concepts (non-technical)
- Architecture (technical overview)
- Pick a module and start building!
Common Use Cases
Build a Customer Support Agent
Use: Response System, Intent Classification, Context Management
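A minimal sketch of how these three concerns might compose. Everything here (`classifyIntent`, `respond`, `StructuredResponse`, the keyword matching) is a hypothetical stand-in; the real modules would classify intent with an LLM rather than regexes:

```typescript
// Illustrative composition only — the actual module APIs may differ.
type Intent = "refund" | "shipping" | "unknown";

// Stand-in for the Intent Classification module: keyword matching here,
// where the real module would call an LLM.
function classifyIntent(message: string): Intent {
  if (/refund|money back/i.test(message)) return "refund";
  if (/shipping|delivery/i.test(message)) return "shipping";
  return "unknown";
}

// Stand-in for the Response System: a structured response with confidence.
interface StructuredResponse {
  intent: Intent;
  text: string;
  confidence: number;
}

function respond(message: string): StructuredResponse {
  const intent = classifyIntent(message);
  const text =
    intent === "unknown"
      ? "Could you tell me a bit more about your issue?" // clarifying question
      : `Let me help you with your ${intent} request.`;
  return { intent, text, confidence: intent === "unknown" ? 0.3 : 0.9 };
}
```

The low confidence on the `unknown` path is what lets the agent ask a clarifying question instead of guessing.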
Create a Data Analysis Agent
Use: Progress Tracking, Confidence Scoring, Workflows
Implement Multi-Step Workflows
Use: Workflow Graphs, Checkpointing, Progress Tracking
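The idea behind the workflow-graph modules can be sketched as a LangGraph-style state machine. The node names and `runGraph` helper below are assumptions for illustration, not the Workflow Graphs module's API:

```typescript
// Hypothetical state-graph sketch: each node transforms the state and
// names the next step, and the runner follows edges until a terminal state.
type State = { step: string; data: Record<string, unknown> };
type Node = (s: State) => State;

const nodes: Record<string, Node> = {
  plan:    (s) => ({ ...s, step: "execute", data: { ...s.data, plan: "3 steps" } }),
  execute: (s) => ({ ...s, step: "done",    data: { ...s.data, result: 42 } }),
};

function runGraph(start: State): State {
  let s = start;
  while (s.step !== "done") s = nodes[s.step](s); // follow edges until terminal
  return s;
}
```

Checkpointing would amount to persisting the `State` object between steps, so a crashed or paused workflow can resume from the last saved node.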
Control AI Costs
Use: Token Management, Context Compression
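Token budgeting can be sketched roughly as below. The ~4-characters-per-token heuristic and the `TokenBudget` class are assumptions for illustration, not the Token Management module itself:

```typescript
// Crude heuristic: roughly 4 characters per token for English text.
// A real implementation would use a proper tokenizer.
function estimateTokens(text: string): number {
  return Math.ceil(text.length / 4);
}

// Hypothetical budget guard: refuses text that would exceed the limit,
// signalling the caller to compress context instead.
class TokenBudget {
  private used = 0;
  constructor(private limit: number) {}

  tryConsume(text: string): boolean {
    const cost = estimateTokens(text);
    if (this.used + cost > this.limit) return false; // over budget
    this.used += cost;
    return true;
  }
}
```

When `tryConsume` returns `false`, that is the natural point to invoke context compression before retrying.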
Debug Production Issues
Use: Observability (Langfuse), Structured Logging
What's NOT Included
This is the shared library foundation, together with the five built-in agents (Orchestrator, Planner, Executor, Analyzer, Summarizer) that sit on top of it. Any further domain-specific agents you need are yours to build on these modules.
The documentation focuses on what exists, not future features.
Need Help?
- Environment Setup: Setup Guide
- Running Tests: Testing Guide
- Configuration: Config Guide
- Development: Dev Guide
Ready to build? Start with Getting Started or jump to a specific module!