🧠 Server Package - Intelligent AI Server
The Clear AI server is a comprehensive Express.js API that provides the backend services for the entire system. Built with TypeScript and featuring advanced memory integration, it handles intelligent agent processing, memory management, relationship analysis, and tool execution.
🧠 Intelligence Overview
The server package (`@clear-ai/server`) is the intelligent backbone of the Clear AI system. It provides:
Advanced Agent Intelligence (8.5/10)
- Perfect Intent Classification: 100% accuracy across all query types
- Hybrid Intelligence: Combines memory, reasoning, and tool execution
- Context-aware Processing: Maintains conversation context across sessions
- Relationship Understanding: Recognizes complex API data hierarchies
Memory System Integration (9/10)
- Episodic Memory: Stores and retrieves conversation history
- Semantic Memory: Conceptual knowledge and relationships
- Cross-session Persistence: Maintains user preferences across sessions
- Intelligent Retrieval: Context-aware memory search and relevance scoring
API Relationship Intelligence (9/10)
- Hierarchical Understanding: Users → Posts → Comments relationships
- Pattern Recognition: Identifies data flow and structure patterns
- Semantic Grouping: Categorizes API resources by function
- Multi-step Reasoning: Complex data traversal and analysis
Traditional Features
- RESTful APIs: Comprehensive API endpoints for all functionality
- Tool Execution: Dynamic tool registration and execution
- Workflow Orchestration: LangGraph-powered AI workflows
- LLM Integration: Multiple LLM provider support
- User Management: User creation and management
- API Documentation: Swagger/OpenAPI documentation
Technology Stack
Core Technologies
- Node.js: JavaScript runtime
- Express.js: Web framework
- TypeScript: Type-safe server code
- Swagger/OpenAPI: API documentation
Additional Libraries
- LangChain: LLM integration framework
- LangGraph: Workflow orchestration
- Axios: HTTP client for external APIs
- CORS & Helmet: Security middleware
- Morgan: Request logging
- Langfuse: LLM observability
Project Structure
packages/server/
├── src/
│   ├── config/                      # Configuration files
│   │   └── swagger.ts               # Swagger configuration
│   ├── controllers/                 # Route controllers
│   │   ├── healthController.ts
│   │   ├── toolExecutionController.ts
│   │   ├── langGraphController.ts
│   │   ├── mcpController.ts
│   │   └── userController.ts
│   ├── middleware/                  # Express middleware
│   │   ├── errorHandler.ts
│   │   └── validation.ts
│   ├── routes/                      # API routes
│   │   ├── healthRoutes.ts
│   │   ├── toolExecutionRoutes.ts
│   │   ├── langGraphRoutes.ts
│   │   ├── mcpRoutes.ts
│   │   └── userRoutes.ts
│   ├── services/                    # Business logic
│   │   ├── toolExecutionService.ts
│   │   └── userService.ts
│   ├── types/                       # TypeScript types
│   │   └── ...
│   └── index.ts                     # Application entry point
├── dist/                            # Built files
├── logs/                            # Application logs
├── package.json
├── tsconfig.json
└── env.example
🧠 Intelligence API Endpoints
Agent Query Processing
POST /api/agent/execute
Execute an intelligent query with memory context.
Request:
{
  "query": "What do you remember about our previous discussion on machine learning?",
  "options": {
    "userId": "user-123",
    "sessionId": "session-456",
    "includeMemoryContext": true,
    "includeReasoning": true,
    "model": "gpt-4",
    "temperature": 0.7
  }
}
Response:
{
  "success": true,
  "data": {
    "response": "Based on our previous discussion about machine learning...",
    "intent": { "type": "hybrid", "confidence": 0.9 },
    "memoryContext": { "episodic": [...], "semantic": [...] },
    "reasoning": "Intent: hybrid | Reasoning: Combines memory recall with knowledge explanation",
    "metadata": { "executionTime": 2150, "memoryRetrieved": 12, "toolsExecuted": 0 }
  },
  "message": "Query executed successfully"
}
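For reference, calling this endpoint from TypeScript might look like the sketch below. The `BASE_URL` constant and `executeAgentQuery` helper are illustrative assumptions, not exports of the package; only the request and response shapes come from the examples above (global `fetch` assumes Node 18+):
// Minimal client sketch for POST /api/agent/execute (Node 18+ global fetch).
const BASE_URL = 'http://localhost:3001';

async function executeAgentQuery(query: string, userId: string, sessionId: string) {
  const res = await fetch(`${BASE_URL}/api/agent/execute`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      query,
      options: { userId, sessionId, includeMemoryContext: true, includeReasoning: true },
    }),
  });
  if (!res.ok) throw new Error(`Agent request failed: ${res.status}`);
  const body = await res.json();
  return body.data; // { response, intent, memoryContext, reasoning, metadata }
}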
Memory Management
POST /api/memory/episodic
Store episodic memory with context.
Request:
{
  "userId": "user-123",
  "sessionId": "session-456",
  "content": "User discussed machine learning algorithms",
  "context": {
    "topic": "AI/ML",
    "importance": 0.9,
    "conversation_turn": 5
  },
  "metadata": {
    "source": "conversation",
    "tags": ["machine-learning", "algorithms"]
  }
}
POST /api/memory/semantic
Store semantic knowledge.
Request:
{
  "userId": "user-123",
  "concept": "Machine Learning",
  "description": "A subset of AI that enables computers to learn from data",
  "metadata": {
    "category": "AI",
    "confidence": 0.95
  },
  "relationships": {
    "similar": ["Deep Learning", "Neural Networks"],
    "parent": "Artificial Intelligence"
  }
}
POST /api/memory/episodic/search
Search memories intelligently.
Request:
{
  "userId": "user-123",
  "query": "machine learning discussion",
  "limit": 10
}
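Together, the store and search endpoints support a simple round trip. A sketch under the same assumptions as the agent example above (illustrative helper, payloads taken from the request examples):
// Sketch: store an episodic memory, then search it back by relevance.
async function rememberAndRecall() {
  const base = 'http://localhost:3001';
  const headers = { 'Content-Type': 'application/json' };

  // 1. Store the memory.
  await fetch(`${base}/api/memory/episodic`, {
    method: 'POST',
    headers,
    body: JSON.stringify({
      userId: 'user-123',
      sessionId: 'session-456',
      content: 'User discussed machine learning algorithms',
      context: { topic: 'AI/ML', importance: 0.9 },
    }),
  });

  // 2. Search it back.
  const res = await fetch(`${base}/api/memory/episodic/search`, {
    method: 'POST',
    headers,
    body: JSON.stringify({ userId: 'user-123', query: 'machine learning discussion', limit: 10 }),
  });
  console.log((await res.json()).data); // matching episodic memories
}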
Relationship Analysis
POST /api/agent/analyze-relationships
Analyze API data relationships.
Request:
{
  "data": {
    "users": [{"id": 1, "name": "Alice"}],
    "posts": [{"id": 1, "userId": 1, "title": "My Post"}],
    "comments": [{"id": 1, "postId": 1, "userId": 2, "text": "Great!"}]
  },
  "options": {
    "userId": "user-123",
    "includePatterns": true,
    "includeSemanticGrouping": true
  }
}
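A call sketch for this endpoint, reusing the sample data from the request above. The response shape is not documented here, so the sketch only logs the standard { success, data, message } envelope:
// Sketch: ask the agent to analyze relationships across related collections.
async function analyzeRelationships() {
  const data = {
    users: [{ id: 1, name: 'Alice' }],
    posts: [{ id: 1, userId: 1, title: 'My Post' }],
    comments: [{ id: 1, postId: 1, userId: 2, text: 'Great!' }],
  };
  const res = await fetch('http://localhost:3001/api/agent/analyze-relationships', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      data,
      options: { userId: 'user-123', includePatterns: true, includeSemanticGrouping: true },
    }),
  });
  console.log((await res.json()).data); // detected hierarchy, patterns, semantic groups
}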
Traditional API Endpoints
Health Endpoints
GET /api/health
Basic health check endpoint.
Response:
{
  "success": true,
  "data": {
    "status": "healthy",
    "timestamp": "2023-01-01T00:00:00.000Z",
    "uptime": 123.45
  },
  "message": "Server is running"
}
GET /api/health/detailed
Detailed health information including memory usage and system info.
Response:
{
  "success": true,
  "data": {
    "status": "healthy",
    "timestamp": "2023-01-01T00:00:00.000Z",
    "uptime": 123.45,
    "memory": {
      "rss": 123456789,
      "heapTotal": 123456789,
      "heapUsed": 123456789
    },
    "version": "v18.0.0",
    "environment": "development"
  },
  "message": "Detailed health check completed"
}
Tool Execution Endpoints
GET /api/tools
Get all registered tools.
Response:
{
  "success": true,
  "data": [
    {
      "name": "calculator",
      "description": "Perform basic arithmetic operations",
      "parameters": {
        "operation": "string",
        "a": "number",
        "b": "number"
      }
    }
  ],
  "message": "Tools retrieved successfully"
}
POST /api/tools/execute
Execute a single tool.
Request:
{
  "toolName": "calculator",
  "args": {
    "operation": "add",
    "a": 5,
    "b": 3
  }
}
Response:
{
  "success": true,
  "data": {
    "result": 8,
    "executionTime": 45,
    "toolName": "calculator"
  },
  "message": "Tool executed successfully"
}
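As a reference, a small typed wrapper for this endpoint might look like the following sketch. The ToolResult interface mirrors the response above; the base URL and helper name are assumptions:
// Sketch of a typed client for POST /api/tools/execute.
interface ToolResult {
  result: unknown;
  executionTime: number;
  toolName: string;
}

async function executeTool(toolName: string, args: Record<string, unknown>): Promise<ToolResult> {
  const res = await fetch('http://localhost:3001/api/tools/execute', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ toolName, args }),
  });
  const body = await res.json();
  if (!body.success) throw new Error(body.message);
  return body.data as ToolResult;
}

// Usage: const sum = await executeTool('calculator', { operation: 'add', a: 5, b: 3 });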
POST /api/tools/execute-multiple
Execute multiple tools in parallel.
Request:
{
  "tools": [
    {
      "toolName": "calculator",
      "args": { "operation": "add", "a": 5, "b": 3 }
    },
    {
      "toolName": "calculator",
      "args": { "operation": "multiply", "a": 4, "b": 2 }
    }
  ]
}
POST /api/tools/execute-sequential
Execute multiple tools sequentially.
POST /api/tools/execute-with-llm
Execute a tool, using an LLM to extract its parameters from a natural-language query.
Request:
{
  "query": "Add 5 and 3",
  "toolName": "calculator",
  "options": {
    "model": "gpt-3.5-turbo",
    "temperature": 0.1
  }
}
POST /api/tools/mcp
MCP-style execution based on a natural-language query.
Request:
{
  "query": "Make an API call to https://jsonplaceholder.typicode.com/users/1"
}
LangGraph Workflow Endpoints
POST /api/langgraph/execute
Execute a workflow using LangGraph.
Request:
{
  "query": "Make an API call to https://jsonplaceholder.typicode.com/users/1",
  "options": {
    "model": "ollama",
    "temperature": 0.1
  }
}
Response:
{
  "success": true,
  "data": {
    "success": true,
    "finalResult": { "id": 1, "name": "Leanne Graham" },
    "allResults": {
      "api_call": { "status": 200, "data": { "id": 1, "name": "Leanne Graham" } }
    },
    "executionOrder": ["api_call"],
    "errors": [],
    "executionTime": 1234,
    "traceId": "trace-123",
    "messages": []
  },
  "message": "Workflow executed successfully"
}
GET /api/langgraph/stats
Get workflow statistics and available tools.
Response:
{
  "success": true,
  "data": {
    "totalExecutions": 150,
    "successfulExecutions": 142,
    "failedExecutions": 8,
    "averageExecutionTime": 1234,
    "availableTools": 5,
    "supportedModels": ["openai", "ollama", "mistral", "groq"]
  },
  "message": "Statistics retrieved successfully"
}
MCP Endpoints
POST /api/mcp/execute
Execute an MCP tool.
Request:
{
  "toolName": "api_call",
  "arguments": {
    "url": "https://api.example.com/data",
    "method": "GET"
  }
}
GET /api/mcp/tools
Get available MCP tools.
Response:
{
  "success": true,
  "data": [
    {
      "name": "api_call",
      "description": "Make HTTP API calls to external services"
    },
    {
      "name": "json_reader",
      "description": "Parse and read JSON data with optional path extraction"
    }
  ],
  "message": "Tools retrieved successfully"
}
User Management Endpoints
GET /api/users
Get all users.
POST /api/users
Create a new user.
Request:
{
  "name": "John Doe",
  "email": "john@example.com",
  "role": "user"
}
GET /api/users/:id
Get a user by ID.
PUT /api/users/:id
Update a user.
DELETE /api/users/:id
Delete a user.
Services
Tool Execution Service
The `ToolExecutionService` handles all tool-related operations:
export class ToolExecutionService {
  // Register a new tool
  async registerTool(tool: Tool): Promise<void>

  // Execute a single tool
  async executeTool(toolName: string, args: any): Promise<ToolExecutionResponse>

  // Execute multiple tools in parallel
  async executeToolsParallel(requests: ToolExecutionRequest[]): Promise<ToolExecutionResponse[]>

  // Execute multiple tools sequentially
  async executeToolsSequential(requests: ToolExecutionRequest[]): Promise<ToolExecutionResponse[]>

  // Execute tool with LLM parameter extraction
  async executeToolWithLLM(query: string, toolName: string, options?: LLMOptions): Promise<ToolExecutionResponse>

  // Get all registered tools
  async getTools(): Promise<Tool[]>

  // Get tool by name
  async getTool(toolName: string): Promise<Tool | null>

  // Clear all tools
  async clearTools(): Promise<void>
}
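A quick usage sketch, assuming a no-argument constructor and that Tool carries an execute callback; both are assumptions about the real types rather than documented API:
// Illustrative use of ToolExecutionService; constructor and Tool shape are assumed.
const tools = new ToolExecutionService();

await tools.registerTool({
  name: 'calculator',
  description: 'Perform basic arithmetic operations',
  parameters: { operation: 'string', a: 'number', b: 'number' },
  // Whether Tool includes an execute callback is an assumption about the real type.
  execute: async ({ operation, a, b }: { operation: string; a: number; b: number }) =>
    operation === 'add' ? a + b : a * b,
} as Tool);

const response = await tools.executeTool('calculator', { operation: 'add', a: 5, b: 3 });
console.log(response); // e.g. { result: 8, executionTime: 45, toolName: 'calculator' }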
User Service
The `UserService` handles user management:
export class UserService {
  // Create a new user
  async createUser(userData: CreateUserRequest): Promise<User>

  // Get all users
  async getUsers(): Promise<User[]>

  // Get user by ID
  async getUserById(id: string): Promise<User | null>

  // Update user
  async updateUser(id: string, userData: UpdateUserRequest): Promise<User>

  // Delete user
  async deleteUser(id: string): Promise<boolean>
}
Middleware
Error Handler
Global error handling middleware:
// The error may carry an optional HTTP status code (see the HttpError sketch below).
export const errorHandler = (
  error: Error & { statusCode?: number },
  req: Request,
  res: Response,
  next: NextFunction
) => {
  console.error('Error:', error);
  const statusCode = error.statusCode || 500;
  const message = error.message || 'Internal Server Error';

  res.status(statusCode).json({
    success: false,
    message,
    ...(process.env.NODE_ENV === 'development' && { stack: error.stack })
  });
};
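For that statusCode to be set, controllers need to throw errors that carry one. A minimal sketch of such a subclass (HttpError is an illustrative name, not an export of the package):
// Hypothetical error subclass carrying the HTTP status code errorHandler reads.
export class HttpError extends Error {
  constructor(public statusCode: number, message: string) {
    super(message);
    this.name = 'HttpError';
  }
}

// Example: a controller can surface a 404 as structured JSON.
// throw new HttpError(404, 'User not found');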
Validation Middleware
Request validation using Zod schemas:
import { ZodError, ZodSchema } from 'zod';

export const validateRequest = (schema: ZodSchema) => {
  return (req: Request, res: Response, next: NextFunction) => {
    try {
      schema.parse(req.body);
      next();
    } catch (error) {
      if (error instanceof ZodError) {
        res.status(400).json({
          success: false,
          message: 'Validation error',
          errors: error.errors
        });
      } else {
        next(error); // unexpected failures go to the global error handler
      }
    }
  };
};
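Wired into a route, the middleware might be used like the sketch below. The router layout and the inline schema are illustrative (the full user schema appears under Input Validation further down):
// Sketch: attach validateRequest to a route (route layout is an assumption).
import { Router } from 'express';
import { z } from 'zod';

const CreateUserSchema = z.object({
  name: z.string().min(1).max(100),
  email: z.string().email(),
});

const router = Router();
router.post('/users', validateRequest(CreateUserSchema), (req, res) => {
  // req.body is guaranteed to match CreateUserSchema here.
  res.status(201).json({ success: true, data: req.body, message: 'User created' });
});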
Configuration
Environment Variables
# Server Configuration
NODE_ENV=development
PORT=3001
API_URL=http://localhost:3001
# LLM Provider API Keys
OPENAI_API_KEY=your_openai_key_here
MISTRAL_API_KEY=your_mistral_key_here
GROQ_API_KEY=your_groq_key_here
# Langfuse Configuration
LANGFUSE_PUBLIC_KEY=your_langfuse_public_key
LANGFUSE_SECRET_KEY=your_langfuse_secret_key
LANGFUSE_HOST=https://cloud.langfuse.com
# Database Configuration (future)
DATABASE_URL=postgresql://user:password@localhost:5432/clearai
Swagger Configuration
API documentation is automatically generated using Swagger:
// swagger.ts
const swaggerOptions = {
  definition: {
    openapi: '3.0.0',
    info: {
      title: 'Clear AI API',
      version: '1.0.0',
      description: 'API for Clear AI tool execution and workflow orchestration',
    },
    servers: [
      {
        url: 'http://localhost:3001',
        description: 'Development server',
      },
    ],
  },
  apis: ['./src/routes/*.ts'],
};
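Options in this shape are typically passed to swagger-jsdoc and served with swagger-ui-express. A sketch of that wiring (the package choice and the /api-docs path are assumptions about this project):
// Sketch: generate the OpenAPI spec and serve the interactive docs UI.
import express from 'express';
import swaggerJsdoc from 'swagger-jsdoc';
import swaggerUi from 'swagger-ui-express';

const app = express();
const spec = swaggerJsdoc(swaggerOptions); // swaggerOptions from swagger.ts above
app.use('/api-docs', swaggerUi.serve, swaggerUi.setup(spec));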
Development
Getting Started
# Navigate to server package
cd packages/server
# Install dependencies
npm install
# Start development server
npm run dev
Available Scripts
# Development
npm run dev # Start with nodemon
npm run dev:tsx # Start with tsx watch
npm run build # Build TypeScript
npm start # Start production server
# Code Quality
npm run lint # Run ESLint
npm run type-check # Run TypeScript checks
# Utilities
npm run clean # Clean build artifacts
Building
# Build for production
npm run build
# The built files will be in dist/
# - index.js
# - controllers/
# - routes/
# - services/
# - middleware/
Security
Security Headers
The server uses Helmet.js for security headers:
app.use(helmet({
  contentSecurityPolicy: {
    directives: {
      defaultSrc: ["'self'"],
      styleSrc: ["'self'", "'unsafe-inline'"],
      scriptSrc: ["'self'"],
      imgSrc: ["'self'", "data:", "https:"],
    },
  },
}));
CORS Configuration
Cross-origin resource sharing is configured for the client:
app.use(cors({
  origin: process.env.CLIENT_URL || 'http://localhost:3000',
  credentials: true,
}));
Input Validation
All inputs are validated using Zod schemas:
const CreateUserSchema = z.object({
  name: z.string().min(1).max(100),
  email: z.string().email(),
  role: z.enum(['user', 'admin']).default('user'),
});
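A useful property of this approach is that request types can be derived from the same schemas, so validation and typings cannot drift. A sketch (that CreateUserRequest is defined this way in the package is an assumption):
import { z } from 'zod';

const CreateUserSchema = z.object({
  name: z.string().min(1).max(100),
  email: z.string().email(),
  role: z.enum(['user', 'admin']).default('user'),
});

// Derive the request type from the schema instead of declaring it twice.
type CreateUserRequest = z.infer<typeof CreateUserSchema>;
// => { name: string; email: string; role: 'user' | 'admin' }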
Logging
Request Logging
Morgan middleware logs all HTTP requests:
app.use(morgan('combined', {
  stream: {
    write: (message) => logger.info(message.trim())
  }
}));
Application Logging
Structured logging with different levels:
import { logger } from '@clear-ai/shared';
// Log levels
logger.info('Server started on port 3001');
logger.warn('Deprecated API endpoint used');
logger.error('Database connection failed');
logger.debug('Tool execution details');
Performance
Optimization Strategies
- Connection Pooling: Reuse database connections rather than opening one per request
- Caching: Response caching for frequently accessed data
- Compression: Gzip compression for responses (see the sketch after this list)
- Rate Limiting: API rate limiting (planned)
- Monitoring: Performance monitoring and alerting
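Of these, compression is a one-line middleware. A minimal sketch (the compression package is an assumption; it is not listed in the stack above):
// Sketch: gzip response bodies with the `compression` middleware.
import express from 'express';
import compression from 'compression';

const app = express();
app.use(compression()); // compresses responses above the default size threshold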
Memory Management
// Monitor memory usage
setInterval(() => {
  const memUsage = process.memoryUsage();
  logger.info('Memory usage:', {
    rss: Math.round(memUsage.rss / 1024 / 1024) + ' MB',
    heapTotal: Math.round(memUsage.heapTotal / 1024 / 1024) + ' MB',
    heapUsed: Math.round(memUsage.heapUsed / 1024 / 1024) + ' MB',
  });
}, 30000);
Testing
Unit Testing
# Run unit tests
npm test
# Run tests with coverage
npm run test:coverage
# Run tests in watch mode
npm run test:watch
API Testing
Test API endpoints using the built-in Swagger UI or tools like Postman:
# Test health endpoint
curl http://localhost:3001/api/health
# Test tool execution
curl -X POST http://localhost:3001/api/tools/execute \
-H "Content-Type: application/json" \
-d '{"toolName": "calculator", "args": {"operation": "add", "a": 5, "b": 3}}'
Deployment
Production Build
# Build for production
npm run build
# Start production server
npm start
Docker Deployment
# Dockerfile
FROM node:18-alpine
WORKDIR /app
# Copy package files
COPY package*.json ./
RUN npm ci --only=production
# Copy built application
COPY dist/ ./dist/
# Expose port
EXPOSE 3001
# Start application
CMD ["node", "dist/index.js"]
Environment Configuration
# Production environment
NODE_ENV=production
PORT=3001
API_URL=https://api.clear-ai.com
CLIENT_URL=https://clear-ai.com
Monitoring
Health Checks
The server provides comprehensive health check endpoints (an illustrative handler sketch follows the list):
- Basic Health: /api/health
- Detailed Health: /api/health/detailed
- Readiness: /api/health/ready
- Liveness: /api/health/live
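The actual controller code may differ; purely as an illustration, the two probes could be as simple as:
// Illustrative readiness/liveness handlers (not the package's actual code).
import { Router, Request, Response } from 'express';

const health = Router();

// Liveness: the process is up and the event loop responds.
health.get('/api/health/live', (_req: Request, res: Response) => {
  res.json({ success: true, data: { status: 'alive' }, message: 'OK' });
});

// Readiness: external dependencies (LLM providers, future database) are reachable.
health.get('/api/health/ready', async (_req: Request, res: Response) => {
  const ready = true; // replace with real dependency checks
  res.status(ready ? 200 : 503).json({ success: ready, message: ready ? 'Ready' : 'Not ready' });
});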
Metrics
Key metrics to monitor:
- Response Time: Average API response time
- Error Rate: Percentage of failed requests
- Memory Usage: Server memory consumption
- CPU Usage: Server CPU utilization
- Active Connections: Number of active connections
Troubleshooting
Common Issues
Port Already in Use
# Kill process on port 3001
lsof -ti:3001 | xargs kill -9
TypeScript Errors
# Check types
npm run type-check
Build Failures
# Clean and rebuild
rm -rf dist node_modules
npm install
npm run build
Memory Issues
# Start with the inspector enabled and profile memory in Chrome DevTools
node --inspect dist/index.js
Next Steps
Now that you understand the server package:
- Explore APIs: Check out the API reference
- Learn Tool Execution: Understand tool execution patterns
- Workflow Orchestration: Dive into LangGraph workflows
- Build Something: Follow the tutorials