
Ollama

A walkthrough for setting up Ollama to run AI models locally with Sypha.

Requirements

  • A Windows, macOS, or Linux system
  • VS Code with Sypha extension installed

Configuration Process

1. Install Ollama

  • Go to ollama.com
  • Download and install the version for your operating system
[Screenshot: Ollama download page]

2. Choose and Download a Model

  • Explore available models at ollama.com/search

  • Choose a model and copy its command:

    ollama run [model-name]
[Screenshot: Selecting a model in Ollama]
  • Open your terminal and run the command:

    • For instance:

      ollama run llama2
[Screenshot: Running Ollama in the terminal]

Your model is now ready to use with Sypha.
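
To confirm the download completed, you can list the models Ollama has stored locally:

    ollama list

The model you just pulled should appear in the output along with its tag and size.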

3. Set Up Sypha

[Screenshot: Completed Ollama setup]

Launch VS Code and configure Sypha:

  1. Click the Sypha settings icon
  2. Choose "Ollama" as your API provider
  3. Base URL: http://localhost:11434/ (the default setting; it usually doesn't need to be changed. You can verify it with the check after this list.)
  4. Pick your model from the dropdown menu
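
Before picking a model, you can confirm the base URL is reachable by querying Ollama's local API from a terminal (assuming the default port):

    curl http://localhost:11434/api/tags

A JSON response listing your downloaded models means Ollama is serving correctly; a connection error means it isn't running or is bound to a different address.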

Recommended Models

For the best results with Sypha, we recommend Qwen3 Coder 30B. It delivers strong coding performance and reliable tool use in local development workflows.

To download it:

    ollama run qwen3-coder:30b
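
If you'd rather download the model without opening an interactive chat session, ollama pull fetches it the same way:

    ollama pull qwen3-coder:30b

Ollama then loads the model into memory the first time Sypha sends it a request.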

Other capable alternatives include:

  • mistral-small - A good balance of capability and speed
  • devstral-small - Tuned for coding tasks

Key Points to Remember

  • Start Ollama before connecting it to Sypha
  • Keep Ollama running in the background while you work (see below)
  • A model's first download can take several minutes
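
On macOS and Windows, the Ollama desktop app normally keeps the server running in the background for you. On Linux, or whenever the app isn't running, you can start the server manually:

    ollama serve

A quick way to confirm the server is up:

    curl http://localhost:11434/api/version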

Activating Compact Prompts

To get the best performance from local models, enable compact prompts in Sypha's settings. This feature reduces prompt size by 90% while preserving essential functionality.

Go to Sypha Settings → Features → Use Compact Prompt and enable it.

Resolving Common Issues

If Sypha can't connect to Ollama, work through these checks (terminal commands for each are shown after the list):

  1. Confirm Ollama is running
  2. Verify the base URL is correct
  3. Make sure the model has been downloaded
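
Assuming the default base URL, each check can be run from a terminal:

    # 1. Is the Ollama server responding?
    curl http://localhost:11434/api/version

    # 2. Is the base URL serving the API? This lists downloaded models.
    curl http://localhost:11434/api/tags

    # 3. Has the model been downloaded?
    ollama list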

Need additional information? Consult the Ollama Docs.
