Model Selection Guide
Last updated: August 20, 2025.
The AI landscape evolves rapidly with frequent model releases, so this guide highlights what's currently performing best with Sypha. We update it regularly to reflect these changes.
Just getting started with model selection? Begin with Module 2 of Sypha's Learning Path for an in-depth walkthrough of model selection and configuration.
What is an AI Model?
An AI model serves as Sypha's "brain" - the core intelligence that processes your requests. Whether you're asking Sypha to generate code, resolve bugs, or restructure your codebase, the model interprets your instructions and produces the corresponding output.
Key points:
- Models are AI systems trained on vast datasets enabling them to comprehend both natural language and programming code
- Each model brings unique capabilities - some specialize in sophisticated reasoning, while others optimize for rapid responses or economical usage
- You have control over which model Sypha employs - similar to selecting different specialists for varying tasks
- API providers serve as model hosts - organizations such as Anthropic, OpenAI, and OpenRouter make these models accessible
Why it matters: Your model selection fundamentally shapes Sypha's performance, output quality, processing speed, and operational costs. Premium models may excel at intricate refactoring operations but come with higher expenses, whereas economical models deliver excellent results for standard tasks at significantly lower costs.
How to Select a Model in Sypha
Follow these five steps to configure Sypha with your chosen AI model:
Step 1: Open Sypha Settings
Your first task is to access Sypha's configuration interface.
Two methods to access settings:
- Quick access: Click the gear icon (⚙️) located in Sypha's chat interface top-right corner
- Command palette: Use Cmd/Ctrl + Shift + P → enter "Sypha: Open Settings"

The configuration panel opens, with "API Provider" at the top of the setup options.
The settings panel remembers your previous configuration, so this setup is typically a one-time process.
Step 2: Select an API Provider
Pick your desired AI provider using the dropdown menu.

Popular providers at a glance:
| Provider | Best For | Notes |
|---|---|---|
| Sypha | Easiest setup | No API keys needed, access to multiple models including stealth models |
| OpenRouter | Value seekers | Multiple models, competitive pricing |
| Anthropic | Reliability | Claude models, most dependable tool usage |
| OpenAI | Latest tech | GPT models |
| Google Gemini | Large context | Google's AI models |
| AWS Bedrock | Enterprise | Advanced features |
| Ollama | Privacy | Run models locally |
Explore the full provider list for additional choices including Cerebras, Vertex AI, Azure, and others.
Recommended for beginners: Begin with Sypha as your provider - it eliminates API key management, provides immediate access to multiple models, and offers occasional complimentary inference via partner providers.
Step 3: Add Your API Key (or Sign In)
Your next action varies based on your selected provider.
If you selected Sypha as your provider:
- No API key required! Just authenticate using your Sypha account
- Press the Sign In button once it appears
- You'll navigate to app.sypha.bot for authentication
- Once authenticated, switch back to your IDE
If you selected any other provider:
You must obtain an API key from your selected provider:
1. Navigate to your provider's website to obtain an API key:
   - Anthropic: console.anthropic.com
   - OpenRouter: openrouter.ai/keys
   - OpenAI: platform.openai.com/api-keys
   - Google: aistudio.google.com/apikey
   - Others: See Provider Setup Guide
2. Create a new API key through the provider's platform
3. Copy the API key to your clipboard
4. Insert your key into the "API Key" field within Sypha settings
5. Automatic saving - your key is securely stored in your editor's encrypted secrets storage

Payment required for most providers: The majority of providers require payment details before key generation. Charges are usage-based only (generally $0.01-$0.10 per coding task).
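To see how usage-based billing adds up per task, here is a minimal sketch of the arithmetic. The token counts and per-token prices below are illustrative assumptions for a mid-tier model, not actual provider rates:

```python
# Rough per-task cost estimate for usage-based pricing.
# All numbers below are illustrative assumptions, not real provider rates.
input_tokens = 8_000        # prompt + code context sent to the model
output_tokens = 1_500       # generated code and explanation
price_in_per_mtok = 3.00    # $ per million input tokens (assumed)
price_out_per_mtok = 15.00  # $ per million output tokens (assumed)

cost = (input_tokens / 1_000_000) * price_in_per_mtok \
     + (output_tokens / 1_000_000) * price_out_per_mtok
print(f"Estimated cost: ${cost:.4f}")  # Estimated cost: $0.0465
```

With these assumed numbers a single coding task lands well inside the $0.01-$0.10 range quoted above; longer conversations or larger codebases push the input-token side up fastest.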
Step 4: Choose Your Model
After adding your API key (or completing sign-in), the "Model" dropdown menu activates.

Quick model selection guide:
| Your Priority | Choose This Model | Why |
|---|---|---|
| Maximum reliability | Claude Sonnet 4.5 | Most reliable tool usage, excellent at complex tasks |
| Best value | DeepSeek V3 or Qwen3 Coder | Great performance at budget prices |
| Fastest speed | Qwen3 Coder on Cerebras | Lightning-fast responses |
| Run locally | Any Ollama model | Complete privacy, no internet needed |
| Latest features | GPT-5 | OpenAI's newest capabilities |
Uncertain about your choice? Begin with Claude Sonnet 4.5 for dependable performance or DeepSeek V3 for cost efficiency.
Models can be changed anytime without disrupting your ongoing conversation. Experiment with various models to identify what performs best for your particular use cases.
Consult the model comparison tables below for comprehensive specifications and pricing details.
Step 5: Start Using Sypha
Congratulations! Your setup is complete. Here's how to begin coding with Sypha:
1. Enter your request into the Sypha chat interface
   - Example: "Create a React component for a login form"
   - Example: "Debug this TypeScript error"
   - Example: "Refactor this function to be more efficient"
2. Hit Enter or select the send icon to submit your request
Choosing the Right Model
Finding the optimal model requires weighing multiple considerations. Apply this framework to determine your best fit:
Pro tip: Set up different models for Plan Mode and Act Mode to leverage each model's particular strengths. For instance, use a budget-friendly model for planning conversations and reserve a premium model for actual implementation.
Key Selection Factors
| Factor | What to Consider | Recommendation |
|---|---|---|
| Task Complexity | Simple fixes vs complex refactoring | Budget models for routine tasks; Premium models for complex work |
| Budget | Monthly spending capacity | $10-$30: Budget, $30-$100: Mid-tier, $100+: Premium |
| Context Window | Project size and file count | Small: 32K-128K, Medium: 128K-200K, Large: 400K+ |
| Speed | Response time requirements | Interactive: Fast models, Background: Reasoning models OK |
| Tool Reliability | Complex operations | Claude excels at tool usage; Test others with your workflow |
| Provider | Access and pricing needs | OpenRouter: Many options, Direct: Faster/reliable, Local: Privacy |
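When sizing a context window against your project, a common rough heuristic is about 4 characters per token for English text and code. A minimal sketch of that estimate - the 4-chars-per-token ratio and the project sizes are assumptions, not exact tokenizer counts:

```python
# Rough token estimate from character counts, using the common
# ~4 characters per token heuristic (an approximation, not a tokenizer).
def estimate_tokens(num_chars: int) -> int:
    return num_chars // 4

# Hypothetical project: 40 files averaging 6,000 characters each.
project_chars = 40 * 6_000
tokens = estimate_tokens(project_chars)
print(tokens)  # 60000 -> fits comfortably in a 128K-token context window
```

If the estimate approaches your model's context limit, favor a larger-context model or narrow the files you include in a task.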
Model Comparison Resources
For in-depth model comparisons, pricing information, and performance data, refer to:
- Model Comparison & Pricing - Comprehensive pricing tables and performance benchmarks
- Context Window Guide - Understanding and optimizing context usage
Open Source vs Closed Source
Open Source Advantages
- Multiple providers vie to host them
- Lower costs resulting from competitive hosting
- Provider flexibility - switch providers if one experiences downtime
- Accelerated innovation cycles
Open Source Models Available
- Qwen3 Coder (Apache 2.0)
- Z AI GLM 4.5 (MIT)
- Kimi K2 (Open source)
- DeepSeek series (Various licenses)
Quick Decision Matrix
| If you want... | Use this |
|---|---|
| Something that just works | Claude Sonnet 4.5 |
| To save money | DeepSeek V3 or Qwen3 variants |
| Huge context windows | Gemini 2.5 Pro or Claude Sonnet 4.5 |
| Open source | Qwen3 Coder, Z AI GLM 4.5, or Kimi K2 |
| Latest tech | GPT-5 |
| Speed | Qwen3 Coder on Cerebras (fastest available) |
What Others Are Using
Review OpenRouter's Sypha usage stats to observe actual usage patterns from the community.