🧠 Shared Package - Advanced AI Intelligence
The Shared package (@clear-ai/shared) contains the common code used across all packages in the Clear AI monorepo. It also serves as the Advanced AI Intelligence package, providing memory integration, relationship analysis, and sophisticated reasoning with an overall intelligence score of 8.5/10.
🧠 Intelligence Overview
The Shared package provides exceptional intelligence capabilities:
Core Intelligence Services​
- AgentService - Main intelligent agent with 8.5/10 intelligence score
- MemoryContextService - Advanced episodic and semantic memory
- EnhancedSemanticService - Sophisticated concept extraction and relationships
- RelationshipAnalysisService - Complex relationship understanding and pattern recognition
Advanced Memory System (9/10)​
- Episodic Memory: Store and retrieve conversation history with full context
- Semantic Memory: Conceptual relationships and knowledge mapping
- Cross-session Persistence: Maintains user preferences and context across sessions
- Relationship Analysis: Understands complex data hierarchies and patterns
Hybrid Intelligence (9.5/10)​
- Memory + Tools: Combines stored knowledge with tool execution
- Context + API: Uses past API knowledge to inform current queries
- Reasoning + Execution: Thinks through problems before acting
Intent Classification (9.5/10)​
- Perfect Accuracy: 100% success rate across all test scenarios
- Multi-layered Understanding: Goes beyond simple pattern matching
- Confidence Scoring: Accurately reflects uncertainty levels
- Context-aware Classification: Considers user history and preferences
Traditional Services​
- Common Types: TypeScript interfaces and types used across packages
- Utility Functions: Helper functions for common operations
- Constants: Application-wide constants and configuration
- LLM Services: Multiple LLM provider integrations
- Workflow Services: LangGraph workflow orchestration
- Logger: Structured logging utilities
Technology Stack​
Core Technologies​
- TypeScript: Type definitions and utilities
- LangChain: LLM integration framework
- LangGraph: Workflow orchestration
- Zod: Schema validation
Additional Libraries​
- Axios: HTTP client for LLM APIs
- Langfuse: LLM observability and tracing
Project Structure​
packages/shared/
├── src/
│   ├── services/                    # Service implementations
│   │   ├── SimpleLangChainService.ts
│   │   ├── ToolExecutionService.ts
│   │   ├── UserService.ts
│   │   └── README.md
│   ├── workflows/                   # Workflow implementations
│   │   ├── SimpleWorkflowService.ts
│   │   └── WorkflowTypes.ts
│   ├── types.ts                     # Common type definitions
│   ├── constants.ts                 # Application constants
│   ├── utils.ts                     # Utility functions
│   ├── logger.ts                    # Logging utilities
│   └── index.ts                     # Main export file
├── dist/                            # Built files
├── package.json
└── tsconfig.json
Type Definitions​
Core Types​
// API Response wrapper
export interface ApiResponse<T> {
success: boolean;
data: T;
message: string;
}
// Error response
export interface ErrorResponse {
success: false;
message: string;
error?: string;
details?: any;
}
// User types
export interface User {
id: string;
name: string;
email: string;
role: 'user' | 'admin';
createdAt: Date;
updatedAt: Date;
}
export interface CreateUserRequest {
name: string;
email: string;
role?: 'user' | 'admin';
}
export interface UpdateUserRequest {
name?: string;
email?: string;
role?: 'user' | 'admin';
}
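The optional `role` on `CreateUserRequest` pairs naturally with a defaulting step when the full `User` record is materialized. A minimal sketch — the `buildUser` helper below is hypothetical and not part of the package:

```typescript
interface CreateUserRequest {
  name: string;
  email: string;
  role?: 'user' | 'admin';
}

interface User {
  id: string;
  name: string;
  email: string;
  role: 'user' | 'admin';
  createdAt: Date;
  updatedAt: Date;
}

// Hypothetical helper: builds a full User from a CreateUserRequest,
// defaulting the optional role to 'user' when omitted.
function buildUser(req: CreateUserRequest, id: string): User {
  const now = new Date();
  return {
    id,
    name: req.name,
    email: req.email,
    role: req.role ?? 'user',
    createdAt: now,
    updatedAt: now,
  };
}

console.log(buildUser({ name: 'Ada', email: 'ada@example.com' }, 'u-1').role); // 'user'
```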
Tool Types​
// Tool execution types
export interface Tool {
name: string;
description: string;
inputSchema: any;
outputSchema?: any;
execute: (args: any) => Promise<any>;
}
export interface ToolExecutionRequest {
toolName: string;
args: Record<string, any>;
}
export interface ToolExecutionResponse {
success: boolean;
data?: any;
error?: string;
executionTime?: number;
toolName?: string;
}
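These types fit together as follows: a tool bundles its schema and an async `execute`, and a runner times the call and reports failures in the `ToolExecutionResponse` shape. This is a self-contained sketch, not the package's actual registry:

```typescript
interface Tool {
  name: string;
  description: string;
  inputSchema: any;
  outputSchema?: any;
  execute: (args: any) => Promise<any>;
}

interface ToolExecutionResponse {
  success: boolean;
  data?: any;
  error?: string;
  executionTime?: number;
  toolName?: string;
}

// A minimal in-memory tool.
const addTool: Tool = {
  name: 'add',
  description: 'Adds two numbers',
  inputSchema: { type: 'object', properties: { a: { type: 'number' }, b: { type: 'number' } } },
  execute: async ({ a, b }) => a + b,
};

// Runner that times execution and normalizes errors into the response shape.
async function runTool(tool: Tool, args: Record<string, any>): Promise<ToolExecutionResponse> {
  const start = Date.now();
  try {
    const data = await tool.execute(args);
    return { success: true, data, executionTime: Date.now() - start, toolName: tool.name };
  } catch (e: any) {
    return { success: false, error: e.message, toolName: tool.name };
  }
}

runTool(addTool, { a: 2, b: 3 }).then(r => console.log(r.data)); // 5
```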
LLM Types​
// LLM provider types
export interface LLMProvider {
complete(prompt: string, options?: LLMCompletionOptions): Promise<LLMResponse>;
chatComplete(messages: ChatMessage[], options?: LLMCompletionOptions): Promise<LLMResponse>;
getAvailableModels(): Promise<string[]>;
validateConnection(): Promise<boolean>;
getProviderName(): string;
}
export interface LLMCompletionOptions {
temperature?: number;
maxTokens?: number;
topP?: number;
frequencyPenalty?: number;
presencePenalty?: number;
stop?: string[];
}
export interface LLMResponse {
content: string;
usage?: {
promptTokens: number;
completionTokens: number;
totalTokens: number;
};
model: string;
finishReason?: string;
}
export interface ChatMessage {
role: 'system' | 'user' | 'assistant';
content: string;
}
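Any object satisfying the `LLMProvider` interface can stand in for a real backend, which is handy in unit tests where no API should be called. A stub sketch assuming only the interface above (not shipped with the package):

```typescript
interface ChatMessage { role: 'system' | 'user' | 'assistant'; content: string; }
interface LLMCompletionOptions { temperature?: number; maxTokens?: number; }
interface LLMResponse { content: string; model: string; finishReason?: string; }

// Echoes the prompt back — a deterministic test double for LLMProvider.
class EchoProvider {
  async complete(prompt: string, _options?: LLMCompletionOptions): Promise<LLMResponse> {
    return { content: `echo: ${prompt}`, model: 'echo-1', finishReason: 'stop' };
  }
  async chatComplete(messages: ChatMessage[], _options?: LLMCompletionOptions): Promise<LLMResponse> {
    const last = messages[messages.length - 1];
    return { content: `echo: ${last.content}`, model: 'echo-1', finishReason: 'stop' };
  }
  async getAvailableModels(): Promise<string[]> { return ['echo-1']; }
  async validateConnection(): Promise<boolean> { return true; }
  getProviderName(): string { return 'echo'; }
}

new EchoProvider()
  .chatComplete([{ role: 'user', content: 'hi' }])
  .then(r => console.log(r.content)); // 'echo: hi'
```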
Constants​
API Constants​
export const API_ENDPOINTS = {
HEALTH: '/api/health',
TOOLS: '/api/tools',
LANGGRAPH: '/api/langgraph',
MCP: '/api/mcp',
USERS: '/api/users',
} as const;
export const HTTP_STATUS = {
OK: 200,
CREATED: 201,
BAD_REQUEST: 400,
UNAUTHORIZED: 401,
FORBIDDEN: 403,
NOT_FOUND: 404,
INTERNAL_SERVER_ERROR: 500,
} as const;
LLM Provider Constants​
export const LLM_PROVIDERS = {
OPENAI: 'openai',
OLLAMA: 'ollama',
MISTRAL: 'mistral',
GROQ: 'groq',
} as const;
export const DEFAULT_MODELS = {
[LLM_PROVIDERS.OPENAI]: 'gpt-3.5-turbo',
[LLM_PROVIDERS.OLLAMA]: 'llama2',
[LLM_PROVIDERS.MISTRAL]: 'mistral-small',
[LLM_PROVIDERS.GROQ]: 'llama2-70b-4096',
} as const;
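A typical use of these constants is falling back to a provider's default model when none is configured. The `resolveModel` helper is hypothetical, shown only to illustrate the lookup:

```typescript
const LLM_PROVIDERS = {
  OPENAI: 'openai',
  OLLAMA: 'ollama',
  MISTRAL: 'mistral',
  GROQ: 'groq',
} as const;

const DEFAULT_MODELS = {
  [LLM_PROVIDERS.OPENAI]: 'gpt-3.5-turbo',
  [LLM_PROVIDERS.OLLAMA]: 'llama2',
  [LLM_PROVIDERS.MISTRAL]: 'mistral-small',
  [LLM_PROVIDERS.GROQ]: 'llama2-70b-4096',
} as const;

type ProviderName = typeof LLM_PROVIDERS[keyof typeof LLM_PROVIDERS];

// Hypothetical helper: prefer an explicitly configured model,
// otherwise use the provider's default.
function resolveModel(provider: ProviderName, configured?: string): string {
  return configured ?? DEFAULT_MODELS[provider];
}

console.log(resolveModel('ollama')); // 'llama2'
```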
Utility Functions​
Date Utilities​
export function formatDate(date: Date | string, format: 'short' | 'long' = 'short'): string {
const d = new Date(date);
if (format === 'short') {
return d.toLocaleDateString();
}
return d.toLocaleDateString('en-US', {
year: 'numeric',
month: 'long',
day: 'numeric',
hour: '2-digit',
minute: '2-digit'
});
}
export function isValidDate(date: any): date is Date {
return date instanceof Date && !isNaN(date.getTime());
}
Validation Utilities​
export function isValidEmail(email: string): boolean {
const emailRegex = /^[^\s@]+@[^\s@]+\.[^\s@]+$/;
return emailRegex.test(email);
}
export function isValidUrl(url: string): boolean {
try {
new URL(url);
return true;
} catch {
return false;
}
}
export function sanitizeString(input: string): string {
return input.trim().replace(/[<>]/g, '');
}
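The validators are reproduced here as a runnable check of their behavior. Note that `sanitizeString` only strips angle brackets — it is a light guard, not full XSS protection:

```typescript
function isValidEmail(email: string): boolean {
  // Same pattern as the shared utility: one '@', no whitespace, a dot in the domain.
  return /^[^\s@]+@[^\s@]+\.[^\s@]+$/.test(email);
}

function sanitizeString(input: string): string {
  return input.trim().replace(/[<>]/g, '');
}

console.log(isValidEmail('user@example.com')); // true
console.log(isValidEmail('not-an-email'));     // false
console.log(sanitizeString('  <script>hi  ')); // 'scripthi'
```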
Data Utilities​
// Note: JSON round-tripping drops functions and undefined values and turns
// Dates into ISO strings; prefer structuredClone when those must survive.
export function deepClone<T>(obj: T): T {
return JSON.parse(JSON.stringify(obj));
}
export function omit<T extends object, K extends keyof T>(obj: T, keys: K[]): Omit<T, K> {
const result: any = { ...obj };
keys.forEach(key => delete result[key]);
return result;
}
export function pick<T, K extends keyof T>(obj: T, keys: K[]): Pick<T, K> {
const result = {} as Pick<T, K>;
keys.forEach(key => {
if (key in obj) {
result[key] = obj[key];
}
});
return result;
}
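`pick` and `omit` in use — reproduced as a self-contained snippet:

```typescript
function pick<T, K extends keyof T>(obj: T, keys: K[]): Pick<T, K> {
  const result = {} as Pick<T, K>;
  keys.forEach(key => {
    if (key in (obj as object)) {
      result[key] = obj[key];
    }
  });
  return result;
}

function omit<T extends object, K extends keyof T>(obj: T, keys: K[]): Omit<T, K> {
  // Copy first, then drop; the cast keeps `delete` legal on non-optional keys.
  const result: any = { ...obj };
  keys.forEach(key => delete result[key]);
  return result;
}

const user = { id: '1', name: 'Ada', email: 'ada@example.com' };
console.log(pick(user, ['id', 'name'])); // { id: '1', name: 'Ada' }
console.log(omit(user, ['email']));      // { id: '1', name: 'Ada' }
```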
Logger​
Logger Implementation​
export class Logger {
private context: string;
constructor(context: string = 'ClearAI') {
this.context = context;
}
info(message: string, meta?: any): void {
this.log('info', message, meta);
}
warn(message: string, meta?: any): void {
this.log('warn', message, meta);
}
error(message: string, meta?: any): void {
this.log('error', message, meta);
}
debug(message: string, meta?: any): void {
this.log('debug', message, meta);
}
private log(level: string, message: string, meta?: any): void {
const timestamp = new Date().toISOString();
const logEntry = {
timestamp,
level,
context: this.context,
message,
...(meta && { meta })
};
console.log(JSON.stringify(logEntry));
}
}
// Default logger instance
export const logger = new Logger();
Usage Examples​
import { logger } from '@clear-ai/shared';
// Basic logging
logger.info('Server started on port 3001');
logger.warn('Deprecated API endpoint used');
logger.error('Database connection failed');
// Logging with metadata
logger.info('User created', { userId: '123', email: 'user@example.com' });
logger.error('Tool execution failed', {
toolName: 'api_call',
error: error.message,
args: { url: 'https://api.example.com' }
});
LLM Services​
LLM Provider Interface​
export abstract class LLMProvider {
protected apiKey: string;
protected model: string;
protected baseURL?: string;
constructor(apiKey: string, model: string, baseURL?: string) {
this.apiKey = apiKey;
this.model = model;
this.baseURL = baseURL;
}
abstract complete(prompt: string, options?: LLMCompletionOptions): Promise<LLMResponse>;
abstract chatComplete(messages: ChatMessage[], options?: LLMCompletionOptions): Promise<LLMResponse>;
abstract getAvailableModels(): Promise<string[]>;
abstract validateConnection(): Promise<boolean>;
abstract getProviderName(): string;
}
OpenAI Provider​
export class OpenAIProvider extends LLMProvider {
constructor(apiKey: string, model: string = 'gpt-3.5-turbo') {
super(apiKey, model, 'https://api.openai.com/v1');
}
async complete(prompt: string, options?: LLMCompletionOptions): Promise<LLMResponse> {
const response = await axios.post(`${this.baseURL}/chat/completions`, {
model: this.model,
messages: [{ role: 'user', content: prompt }],
// ?? rather than || so an explicit 0 is respected
temperature: options?.temperature ?? 0.7,
max_tokens: options?.maxTokens ?? 1000,
}, {
headers: {
'Authorization': `Bearer ${this.apiKey}`,
'Content-Type': 'application/json',
},
});
return {
content: response.data.choices[0].message.content,
usage: response.data.usage,
model: this.model,
finishReason: response.data.choices[0].finish_reason,
};
}
// ... other methods
}
Tool Selector​
export class ToolSelector {
private providers: Map<string, LLMProvider>;
private defaultProvider: string;
constructor(providers: Map<string, LLMProvider>, defaultProvider: string) {
this.providers = providers;
this.defaultProvider = defaultProvider;
}
async executeWithProvider(
providerName: string,
prompt: string,
options?: LLMCompletionOptions
): Promise<LLMResponse> {
const provider = this.providers.get(providerName);
if (!provider) {
throw new Error(`Provider '${providerName}' not found`);
}
return await provider.complete(prompt, options);
}
async executeWithDefault(
prompt: string,
options?: LLMCompletionOptions
): Promise<LLMResponse> {
return await this.executeWithProvider(this.defaultProvider, prompt, options);
}
}
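The selector's lookup-then-delegate flow can be sketched with in-memory stubs in place of real API-backed providers (names like `MiniProvider` are illustrative only):

```typescript
// Simplified provider shape for the sketch.
interface MiniProvider { complete(prompt: string): Promise<{ content: string }>; }

const providers = new Map<string, MiniProvider>([
  ['fast', { complete: async p => ({ content: `fast:${p}` }) }],
  ['smart', { complete: async p => ({ content: `smart:${p}` }) }],
]);
const defaultProvider = 'fast';

// Mirrors ToolSelector.executeWithProvider: look up by name, fail loudly if absent.
async function executeWithProvider(name: string, prompt: string) {
  const provider = providers.get(name);
  if (!provider) {
    throw new Error(`Provider '${name}' not found`);
  }
  return provider.complete(prompt);
}

executeWithProvider(defaultProvider, 'hello').then(r => console.log(r.content)); // 'fast:hello'
```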
Workflow Services​
Simple Workflow Service​
export class SimpleWorkflowService {
private toolRegistry: ToolRegistry;
private llmProvider: LLMProvider;
constructor(toolRegistry: ToolRegistry, llmProvider: LLMProvider) {
this.toolRegistry = toolRegistry;
this.llmProvider = llmProvider;
}
async executeWorkflow(query: string, options?: WorkflowOptions): Promise<WorkflowResult> {
const startTime = Date.now();
// Analyze query to determine tools needed
const toolAnalysis = await this.analyzeQuery(query);
// Execute tools in sequence
const results = await this.executeTools(toolAnalysis.tools, toolAnalysis.arguments);
// Process results
const finalResult = await this.processResults(results, query);
return {
success: true,
finalResult,
allResults: results,
executionOrder: toolAnalysis.tools,
executionTime: Date.now() - startTime,
};
}
private async analyzeQuery(query: string): Promise<ToolAnalysis> {
// Use LLM to analyze query and determine required tools
const prompt = `Analyze this query and determine which tools are needed: "${query}"`;
const response = await this.llmProvider.complete(prompt);
// Parse response to extract tool information
return this.parseToolAnalysis(response.content);
}
}
Development​
Getting Started​
# Navigate to shared package
cd packages/shared
# Install dependencies
npm install
# Build the package
npm run build
Available Scripts​
# Development
npm run dev # Build in watch mode
npm run build # Build TypeScript
npm run type-check # Run TypeScript checks
# Code Quality
npm run lint # Run ESLint
# Utilities
npm run clean # Clean build artifacts
Building​
# Build for production
npm run build
# The built files will be in dist/
# - index.js
# - types.js
# - utils.js
# - logger.js
# - services/
# - workflows/
Usage in Other Packages​
Client Package​
// In client package
import { ApiResponse, User, logger } from '@clear-ai/shared';
interface UserListResponse extends ApiResponse<User[]> {}
const fetchUsers = async (): Promise<User[]> => {
try {
const response = await axios.get<UserListResponse>('/api/users');
return response.data.data;
} catch (error) {
logger.error('Failed to fetch users', { error: error.message });
throw error;
}
};
Server Package​
// In server package
import {
ApiResponse,
ToolExecutionRequest,
ToolExecutionResponse,
logger
} from '@clear-ai/shared';
app.post('/api/tools/execute', async (req, res) => {
try {
const { toolName, args }: ToolExecutionRequest = req.body;
const result = await toolService.executeTool(toolName, args);
const response: ApiResponse<any> = {
success: true,
data: result,
message: 'Tool executed successfully'
};
res.json(response);
} catch (error) {
logger.error('Tool execution failed', { error: error.message });
res.status(500).json({
success: false,
message: 'Tool execution failed',
error: error.message
});
}
});
MCP Basic Package​
// In MCP basic package
import { Tool, ToolExecutionRequest, ToolExecutionResponse } from '@clear-ai/shared';
export class ToolRegistry {
private tools: Map<string, Tool> = new Map();
async executeTool(name: string, args: any): Promise<ToolExecutionResponse> {
const tool = this.tools.get(name);
if (!tool) {
return {
success: false,
error: `Tool '${name}' not found`
};
}
try {
const startTime = Date.now();
const result = await tool.execute(args);
const executionTime = Date.now() - startTime;
return {
success: true,
data: result,
executionTime,
toolName: name
};
} catch (error) {
return {
success: false,
error: error.message,
toolName: name
};
}
}
}
Testing​
Unit Testing​
# Run unit tests
npm test
# Run tests with coverage
npm run test:coverage
Testing Utilities​
// Test utilities
export function createMockUser(overrides: Partial<User> = {}): User {
return {
id: 'test-id',
name: 'Test User',
email: 'test@example.com',
role: 'user',
createdAt: new Date(),
updatedAt: new Date(),
...overrides
};
}
export function createMockApiResponse<T>(data: T): ApiResponse<T> {
return {
success: true,
data,
message: 'Success'
};
}
Performance​
Optimization Strategies​
- Tree Shaking: Only import what you need
- Lazy Loading: Load services on demand
- Caching: Cache expensive operations
- Memory Management: Proper cleanup of resources
Bundle Size​
Monitor bundle size impact:
# Analyze bundle size
npm run build
npm run analyze
Security​
Input Sanitization​
export function sanitizeInput(input: any): any {
if (typeof input === 'string') {
return sanitizeString(input);
}
if (Array.isArray(input)) {
return input.map(sanitizeInput);
}
if (typeof input === 'object' && input !== null) {
const sanitized: any = {};
for (const [key, value] of Object.entries(input)) {
sanitized[sanitizeString(key)] = sanitizeInput(value);
}
return sanitized;
}
return input;
}
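Applied recursively, both keys and values are sanitized. Reproduced here as a runnable check:

```typescript
function sanitizeString(input: string): string {
  return input.trim().replace(/[<>]/g, '');
}

function sanitizeInput(input: any): any {
  if (typeof input === 'string') {
    return sanitizeString(input);
  }
  if (Array.isArray(input)) {
    return input.map(sanitizeInput);
  }
  if (typeof input === 'object' && input !== null) {
    const sanitized: any = {};
    for (const [key, value] of Object.entries(input)) {
      // Keys are sanitized too, so '<name>' becomes 'name'.
      sanitized[sanitizeString(key)] = sanitizeInput(value);
    }
    return sanitized;
  }
  return input;
}

console.log(sanitizeInput({ '<name>': '  Ada<  ', tags: ['<x>'] }));
// { name: 'Ada', tags: [ 'x' ] }
```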
Type Safety​
All shared types are designed to be type-safe:
// Type-safe API responses
function handleApiResponse<T>(response: ApiResponse<T>): T {
if (!response.success) {
throw new Error(response.message);
}
return response.data;
}
Troubleshooting​
Common Issues​
Type Import Errors
# Check if shared package is built
npm run build --workspace=@clear-ai/shared
Circular Dependencies
# Check for circular dependencies
npm run check-circular
Version Mismatches
# Check package versions
npm list @clear-ai/shared
Next Steps​
Now that you understand the Shared package:
- Explore Services: Check out the LLM provider implementations
- Learn Workflows: Understand workflow service patterns
- Use Types: Learn about the shared type definitions
- Build Something: Follow the tutorials