
Model Selection Guide

Last updated: August 20, 2025.

The AI landscape evolves rapidly with frequent model releases, so this guide highlights what's currently performing best with Sypha. We maintain regular updates to reflect these ongoing changes.

Just getting started with model selection? Begin with Module 2 of Sypha's Learning Path for an in-depth walkthrough of model selection and configuration.

What is an AI Model?

An AI model serves as Sypha's "brain" - the core intelligence that processes your requests. Whether you're asking Sypha to generate code, resolve bugs, or restructure your codebase, the model interprets your instructions and produces the corresponding output.

Key points:

  • Models are AI systems trained on vast datasets enabling them to comprehend both natural language and programming code
  • Each model brings unique capabilities - some specialize in sophisticated reasoning, while others optimize for rapid responses or economical usage
  • You have control over which model Sypha employs - similar to selecting different specialists for varying tasks
  • API providers serve as model hosts - organizations such as Anthropic, OpenAI, and OpenRouter make these models accessible

Why it matters: Your model selection fundamentally shapes Sypha's performance, output quality, processing speed, and operational costs. Premium models may excel at intricate refactoring operations but come with higher expenses, whereas economical models deliver excellent results for standard tasks at significantly lower costs.

How to Select a Model in Sypha

Complete these 5 straightforward steps to configure Sypha with your chosen AI model:

Step 1: Open Sypha Settings

Your first task is to access Sypha's configuration interface.

Two methods to access settings:

  • Quick access: Click the gear icon (⚙️) located in Sypha's chat interface top-right corner
  • Command palette: Use Cmd/Ctrl + Shift + P → enter "Sypha: Open Settings"

[Screenshot: Sypha settings panel]

The configuration panel opens, presenting setup options with "API Provider" at the top.

The settings panel remembers your previous configuration, so this setup is typically a one-time process.

Step 2: Select an API Provider

Pick your desired AI provider using the dropdown menu.

[Screenshot: Sypha settings panel]

Popular providers at a glance:

| Provider | Best For | Notes |
| --- | --- | --- |
| Sypha | Easiest setup | No API keys needed, access to multiple models including stealth models |
| OpenRouter | Value seekers | Multiple models, competitive pricing |
| Anthropic | Reliability | Claude models, most dependable tool usage |
| OpenAI | Latest tech | GPT models |
| Google Gemini | Large context | Google's AI models |
| AWS Bedrock | Enterprise | Advanced features |
| Ollama | Privacy | Run models locally |

Explore the full provider list for additional choices including Cerebras, Vertex AI, Azure, and others.

Recommended for beginners: Begin with Sypha as your provider - eliminates API key management, provides immediate access to multiple models, and offers occasional complimentary inferencing via partner providers.

Step 3: Add Your API Key (or Sign In)

Your next action varies based on your selected provider.

If you selected Sypha as your provider:

  • No API key required! Just authenticate using your Sypha account
  • Press the Sign In button once it appears
  • You'll navigate to app.sypha.bot for authentication
  • Once authenticated, switch back to your IDE

If you selected any other provider:

You must obtain an API key from your selected provider:

  1. Navigate to your provider's website to obtain an API key

  2. Create a new API key through the provider's platform

  3. Copy the API key into your clipboard

  4. Insert your key into the "API Key" field within Sypha settings

  5. Automatic saving - Your key is securely stored in your editor's encrypted secrets storage

[Screenshot: Sypha API selection]

Payment required for most providers: The majority of providers require payment details before key generation. Charges are usage-based only (generally $0.01-$0.10 per coding task).
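
If you want a rough sense of what usage-based pricing adds up to, the sketch below multiplies the per-task range quoted above by an assumed workload. The task counts are illustrative assumptions, not measurements.

```python
# Back-of-the-envelope monthly cost estimate for usage-based billing.
# The $0.01-$0.10 per-task range comes from the note above; the workload
# numbers are illustrative assumptions - adjust them to match your usage.
COST_PER_TASK_LOW = 0.01     # USD, simple tasks
COST_PER_TASK_HIGH = 0.10    # USD, heavier tasks

TASKS_PER_DAY = 20           # assumed
WORKING_DAYS_PER_MONTH = 22  # assumed

low = TASKS_PER_DAY * WORKING_DAYS_PER_MONTH * COST_PER_TASK_LOW
high = TASKS_PER_DAY * WORKING_DAYS_PER_MONTH * COST_PER_TASK_HIGH
print(f"Estimated monthly spend: ${low:.2f} - ${high:.2f}")  # $4.40 - $44.00
```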

Step 4: Choose Your Model

After adding your API key (or completing sign-in), the "Model" dropdown menu activates.

[Screenshot: Sypha model selection]

Quick model selection guide:

| Your Priority | Choose This Model | Why |
| --- | --- | --- |
| Maximum reliability | Claude Sonnet 4.5 | Most reliable tool usage, excellent at complex tasks |
| Best value | DeepSeek V3 or Qwen3 Coder | Great performance at budget prices |
| Fastest speed | Qwen3 Coder on Cerebras | Lightning-fast responses |
| Run locally | Any Ollama model | Complete privacy, no internet needed |
| Latest features | GPT-5 | OpenAI's newest capabilities |

Uncertain about your choice? Begin with Claude Sonnet 4.5 for dependable performance or DeepSeek V3 for cost efficiency.

You can change models at any time without disrupting your ongoing conversation. Experiment with different models to identify what performs best for your particular use cases.
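
If the "run locally" option appeals to you, it can help to confirm that your local Ollama server is reachable and see which models you have already pulled before selecting one in Sypha. A minimal sketch, assuming Ollama's default local API on port 11434 and the `requests` package:

```python
# Check that a local Ollama server is reachable and list installed models.
# Assumes Ollama's default API address (http://localhost:11434); adjust if
# you run it on a different host or port.
import requests

OLLAMA_URL = "http://localhost:11434"

resp = requests.get(f"{OLLAMA_URL}/api/tags", timeout=5)
resp.raise_for_status()

models = [m["name"] for m in resp.json().get("models", [])]
if models:
    print("Installed local models:", ", ".join(models))
else:
    print("No local models found - pull one with the Ollama CLI first.")
```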

Consult the model comparison tables below for comprehensive specifications and pricing details.

Step 5: Start Using Sypha

Congratulations! Your setup is complete. Here's how to begin coding with Sypha:

  1. Enter your request into the Sypha chat interface

    • Example: "Create a React component for a login form"
    • Example: "Debug this TypeScript error"
    • Example: "Refactor this function to be more efficient"
  2. Hit Enter or select the send icon to submit your request

Choosing the Right Model

Finding the optimal model requires weighing multiple considerations. Apply this framework to determine your best fit:

Pro tip: Configure different models for Plan Mode and Act Mode to leverage each model's particular strengths. For instance, use a budget-friendly model for planning conversations and reserve a premium model for actual implementation.
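
Conceptually this is just a mapping from mode to model; the actual mapping is configured in Sypha's settings. The sketch below only illustrates the idea, and the model names are placeholders rather than exact provider identifiers.

```python
# Illustrative only: the plan-vs-act split described above is configured in
# Sypha's settings UI. The model names here are placeholders.
MODE_MODELS = {
    "plan": "budget-model",   # cheaper model for planning conversations
    "act": "premium-model",   # stronger model for implementation
}

def model_for(mode: str) -> str:
    """Return the model assigned to a given mode."""
    return MODE_MODELS[mode]

print(model_for("plan"))  # -> budget-model
```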

Key Selection Factors

| Factor | What to Consider | Recommendation |
| --- | --- | --- |
| Task Complexity | Simple fixes vs. complex refactoring | Budget models for routine tasks; premium models for complex work |
| Budget | Monthly spending capacity | $10-$30: Budget; $30-$100: Mid-tier; $100+: Premium |
| Context Window | Project size and file count | Small: 32K-128K, Medium: 128K-200K, Large: 400K+ |
| Speed | Response time requirements | Interactive: fast models; Background: reasoning models OK |
| Tool Reliability | Complex operations | Claude excels at tool usage; test others with your workflow |
| Provider | Access and pricing needs | OpenRouter: many options; Direct: faster/more reliable; Local: privacy |
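
For the Context Window factor, a quick way to gauge project size is to estimate token counts from your source files. A minimal sketch, assuming the common rule of thumb of roughly four characters per token (actual counts vary by model and tokenizer):

```python
# Rough token estimate for a project, using the ~4 characters per token
# heuristic. Real tokenizers differ by model, so treat this as a ballpark.
from pathlib import Path

CHARS_PER_TOKEN = 4
EXTENSIONS = {".py", ".ts", ".js", ".md"}  # adjust to your project's languages

total_chars = sum(
    len(path.read_text(errors="ignore"))
    for path in Path(".").rglob("*")
    if path.is_file() and path.suffix in EXTENSIONS
)

print(f"~{total_chars // CHARS_PER_TOKEN:,} tokens")
# Compare against the tiers above: 32K-128K (small), 128K-200K (medium), 400K+.
```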

Model Comparison Resources

For in-depth model comparisons, pricing information, and performance data, refer to:

Open Source vs Closed Source

Open Source Advantages

  • Multiple providers vie to host them
  • Lower costs resulting from competitive hosting
  • Provider flexibility - switch providers if one experiences downtime
  • Accelerated innovation cycles

Open Source Models Available

  • Qwen3 Coder (Apache 2.0)
  • Z AI GLM 4.5 (MIT)
  • Kimi K2 (Open source)
  • DeepSeek series (Various licenses)

Quick Decision Matrix

| If you want... | Use this |
| --- | --- |
| Something that just works | Claude Sonnet 4.5 |
| To save money | DeepSeek V3 or Qwen3 variants |
| Huge context windows | Gemini 2.5 Pro or Claude Sonnet 4.5 |
| Open source | Qwen3 Coder, Z AI GLM 4.5, or Kimi K2 |
| Latest tech | GPT-5 |
| Speed | Qwen3 Coder on Cerebras (fastest available) |

What Others Are Using

Review OpenRouter's Sypha usage stats to observe actual usage patterns from the community.
