Sypha CLI Overview
Sypha CLI is currently in active development. Features and APIs may change between versions.
What is Sypha CLI?
Sypha CLI is an enterprise-grade AI-powered command line interface that brings the full power of 40+ LLM providers directly to your terminal. Unlike traditional CLIs, Sypha provides intelligent session management, git-aware operations, and real-time enterprise analytics.
Built for developers and teams who demand flexibility, Sypha CLI enables you to:
- Switch between AI providers seamlessly - Use Anthropic Claude, OpenAI GPT, Google Gemini, Mistral, DeepSeek, and 35+ more without losing conversation context
- Never lose your work - Auto-save every 5 seconds with full AI context preservation
- Integrate deeply with Git - Automated commits with Sypha branding, checkpointing, visual diffs, and intelligent branch management
- Analyze visual content - Reference images naturally with @syntax for vision-capable models
- Track enterprise usage - Comprehensive telemetry for tokens, costs, and team activity
- Customize workflows - Built-in modes (code, plan, debug, architect) plus organization-specific custom modes
Supported Providers
Sypha CLI supports 40+ AI providers across multiple categories:
Major Cloud Providers
- Anthropic - Claude Sonnet, Opus, Haiku models
- OpenAI - GPT-4, GPT-4 Turbo, GPT-3.5 models
- Google - Gemini Pro, Gemini Ultra models
- AWS Bedrock - Access to multiple models through AWS
- Azure OpenAI - Enterprise OpenAI deployments
- Vertex AI - Google Cloud AI models
Specialized Providers
- Mistral AI - Mistral Large, Medium, Small models
- DeepSeek - DeepSeek Coder and Chat models
- Groq - Ultra-fast inference for Llama and Mixtral
- X.AI - Grok models
- Qwen Code - Specialized coding models
- Cerebras - High-performance inference
- MiniMax - Advanced Chinese language models
- Together AI - Open source model hosting
- Fireworks AI - Fast model inference
- Perplexity - Search-augmented generation
Local & Self-Hosted
- Ollama - Run models locally on your machine
- LM Studio - Local model management
- LiteLLM - Unified interface for local models
Enterprise Routers
- Sypha - Intelligent routing and load balancing
- OpenRouter - Multi-provider routing
- Vercel AI Gateway - Enterprise AI infrastructure
- Portkey - LLM observability and routing
- Helicone - LLM monitoring and analytics
Use Cases by Category
Software Development
- Write and refactor code across multiple languages
- Generate unit tests and integration tests
- Debug complex issues with AI assistance
- Review code and suggest improvements
- Document codebases automatically
- Migrate legacy code to modern frameworks
Architecture & Planning
- Design system architectures
- Plan feature implementations
- Create technical specifications
- Evaluate technology choices
- Perform code audits
- Generate architectural diagrams (via code)
DevOps & Infrastructure
- Write infrastructure as code (Terraform, CloudFormation)
- Generate CI/CD pipeline configurations
- Debug deployment issues
- Create monitoring and alerting setups
- Optimize Docker and Kubernetes configurations
- Automate deployment scripts
Data & Analytics
- Write SQL queries and database migrations
- Generate data transformation scripts
- Create analytics dashboards (code generation)
- Build ETL pipelines
- Analyze datasets and generate reports
- Optimize database performance
Security & Compliance
- Review code for security vulnerabilities
- Generate security documentation
- Create compliance reports
- Audit API endpoints
- Implement authentication and authorization
- Scan dependencies for issues
Documentation & Communication
- Generate API documentation
- Write technical blog posts
- Create user guides and tutorials
- Translate documentation
- Generate release notes
- Write commit messages
Key Features
Intelligent Session Management
Never lose your work with auto-save every 5 seconds. Resume conversations across restarts with full AI context restoration. Search through your session history and restore previous conversations instantly.
Available Commands:
- /session list - View all saved sessions
- /session resume - Resume your last session
- /session search <query> - Find sessions by content
- /session save - Manually save the current session
- /session select <id> - Restore a specific session
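To illustrate what auto-save with context restoration involves, here is a minimal sketch of a session store that flushes the conversation to disk whenever the save interval elapses. This is illustrative only, not Sypha's internal implementation; the class and file layout are hypothetical.

```python
import json
import time
from pathlib import Path

SAVE_INTERVAL = 5.0  # seconds, matching the documented auto-save cadence


class Session:
    """Toy session store: keeps messages in memory and flushes them to JSON."""

    def __init__(self, path: Path):
        self.path = path
        self.messages = []
        self._last_save = 0.0

    def add_message(self, role: str, content: str) -> None:
        self.messages.append({"role": role, "content": content})
        self.maybe_save()

    def maybe_save(self) -> None:
        # Only touch the disk if the save interval has elapsed.
        now = time.monotonic()
        if now - self._last_save >= SAVE_INTERVAL:
            self.save()
            self._last_save = now

    def save(self) -> None:
        self.path.write_text(json.dumps({"messages": self.messages}))

    @classmethod
    def resume(cls, path: Path) -> "Session":
        # Restoring a session is just re-reading the serialized context.
        session = cls(path)
        if path.exists():
            session.messages = json.loads(path.read_text())["messages"]
        return session
```

Because the full message history is what the model receives as context, persisting it is enough to resume a conversation across restarts.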
Multi-Provider Support
Switch between 40+ AI providers without losing conversation context. Compare responses from different models, optimize costs, and use the best model for each task.
Quick Switching:
/provider anthropic
/model claude-sonnet-4-5
# Switch to OpenAI
/provider openai
/model gpt-4
Deep Git Integration
Git-aware checkpointing, automated commits with Sypha branding, visual diffs, and intelligent branch management keep your code organized and traceable.
Git Commands:
- /add <files> - Stage files for commit
- /commit "message" - Create a commit with Sypha branding
- /status - View git status
- /diff [--staged] - View changes with visual diffs
- /push - Push to the remote repository
- /undo - Undo the last change
Vision Model Support
Reference images naturally with @syntax. Analyze screenshots, diagrams, UI mockups, and more with vision-capable models like Claude Sonnet 4.5 and GPT-4 Vision.
Supported Formats: PNG, JPG, GIF, WebP, SVG, BMP, TIFF, AVIF
Example Usage:
@screenshot.png analyze this UI and suggest improvements
@"path with spaces/diagram.png" explain this architecture
Enterprise Analytics
Track tokens, costs, performance, and activity across your entire organization with comprehensive telemetry. Gain visibility into AI usage patterns and optimize spending.
Tracked Metrics:
- Token usage (input/output by provider and model)
- Cost tracking per request and session
- Response times and performance metrics
- Command usage patterns
- Git activity and commit analytics
- Error rates and debugging insights
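Per-request cost tracking is typically derived from token counts and a per-model rate table. A simplified calculation is sketched below; the prices shown are hypothetical placeholders, not Sypha's actual rate table, and real provider rates vary.

```python
# Hypothetical per-million-token prices in USD (illustrative only).
PRICES = {
    "claude-sonnet-4-5": {"input": 3.00, "output": 15.00},
    "gpt-4": {"input": 30.00, "output": 60.00},
}


def request_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Compute the USD cost of one request from its token counts."""
    rates = PRICES[model]
    cost = (input_tokens * rates["input"] + output_tokens * rates["output"]) / 1_000_000
    return round(cost, 6)
```

Summing these per-request figures by provider, model, and session is what yields the cost dashboards described above.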
Custom Workflow Modes
Built-in modes (code, plan, debug, architect) plus organization-specific custom modes for team workflows. Each mode tailors the AI's behavior and system prompts.
Available Modes:
- code - General development (default)
- plan - Architectural planning and design
- debug - Troubleshooting and bug fixing
- architect - System design and technical decisions
- custom - Organization-specific modes (configured per team)
Switch Modes:
/mode plan
# or use keyboard shortcut
Shift+Tab
Auto-Edit Confirmation Layer
Intelligent approval system for file operations. Review changes before they're applied, or enable auto-edit mode for rapid development.
Approval Options:
- Y or 1 - Allow once
- A or 2 - Allow for this session (enables Auto-Edit mode)
- N or 3 - Reject change
- Esc - Cancel operation
Toggle Auto-Edit: Press Ctrl+Y to enable/disable auto-approval for all file operations.
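Conceptually, an approval layer like this maps each keystroke to a decision, and auto-edit mode short-circuits the prompt entirely. The sketch below is illustrative only (the enum and function names are hypothetical, not Sypha's internals); the key bindings match the options listed above.

```python
from enum import Enum


class Decision(Enum):
    ALLOW_ONCE = "allow_once"
    ALLOW_SESSION = "allow_session"  # enables auto-edit mode
    REJECT = "reject"
    CANCEL = "cancel"


# Key bindings matching the approval options listed above.
KEYMAP = {
    "y": Decision.ALLOW_ONCE, "1": Decision.ALLOW_ONCE,
    "a": Decision.ALLOW_SESSION, "2": Decision.ALLOW_SESSION,
    "n": Decision.REJECT, "3": Decision.REJECT,
    "\x1b": Decision.CANCEL,  # Esc
}


def decide(key: str, auto_edit: bool) -> Decision:
    """Resolve a keypress; in auto-edit mode every change is pre-approved."""
    if auto_edit:
        return Decision.ALLOW_ONCE
    # Unrecognized keys default to the safe choice: reject the change.
    return KEYMAP.get(key.lower(), Decision.REJECT)
```

Defaulting unknown input to reject is the conservative choice for a tool that modifies files on disk.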
Architecture Overview
Sypha CLI is built with:
- TypeScript for type-safe, maintainable code
- React Ink for rich terminal UI components
- Anthropic SDK and LiteLLM for multi-provider support
- PostgreSQL/Supabase for enterprise analytics storage
- gRPC for high-performance telemetry streaming
- Zustand for state management
- XTerm.js for advanced terminal rendering
Enterprise Features
Team Management
- Organizations with role-based access control
- Team member activity tracking
- Shared session access and collaboration
- Centralized billing and usage monitoring
Security & Compliance
- Data encryption at rest and in transit
- Audit logs for all operations
- GDPR-compliant data handling
- Data retention policies
- SOC 2 Type II compliance (in progress)
Advanced Analytics Dashboard
- Real-time usage monitoring
- Cost optimization insights
- Performance benchmarking
- Custom report generation
- Export to CSV, JSON, or PDF
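As an illustration of what a CSV usage export might contain, the sketch below serializes usage records with a flat schema. The field names are hypothetical; the actual export schema may differ.

```python
import csv
import io

# Hypothetical export schema (illustrative only).
FIELDS = ["provider", "model", "input_tokens", "output_tokens", "cost_usd"]


def export_usage_csv(rows: list[dict]) -> str:
    """Serialize usage records to CSV text."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=FIELDS)
    writer.writeheader()
    writer.writerows(rows)
    return buf.getvalue()
```

A flat row-per-request layout like this is easy to load into spreadsheets or BI tools for further analysis.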
Ready to get started? Head to Installation to set up Sypha CLI in minutes.