
OpenRouter

Learn how to use OpenRouter with Sypha to access a wide variety of language models through a single API.

OpenRouter is an AI platform that provides access to a wide range of language models from many providers, all through a single, unified API. This can simplify setup and make it easy to experiment with different models.

Website: https://openrouter.ai/

Obtaining an API Key

  1. Sign Up/Sign In: Go to the OpenRouter website and sign in with your Google or GitHub account.
  2. Get an API Key: Go to the keys page. You should see an API key listed; if not, create a new one.
  3. Copy the Key: Copy the API key.

Available Models

OpenRouter supports a large and growing list of models. Sypha automatically fetches the list of available models. See the OpenRouter Models page for the complete, up-to-date list.
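Sypha fetches this list for you, but you can also inspect it yourself. Below is a minimal sketch against OpenRouter's public `/api/v1/models` endpoint; it assumes the documented JSON response shape, a `data` array of model objects each carrying an `id`:

```python
import json
import urllib.request

# Public model-list endpoint (no API key required).
MODELS_URL = "https://openrouter.ai/api/v1/models"

def extract_model_ids(payload):
    """Pull the model IDs out of a models-list response body."""
    return [model["id"] for model in payload.get("data", [])]

def fetch_model_ids(url=MODELS_URL):
    """Fetch the current OpenRouter model list."""
    with urllib.request.urlopen(url) as resp:
        return extract_model_ids(json.load(resp))

if __name__ == "__main__":
    for model_id in sorted(fetch_model_ids()):
        print(model_id)
```

This is the same listing Sypha's model dropdown is populated from.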

Sypha Configuration

  1. Open Sypha Settings: Click the settings icon (⚙️) in the Sypha panel.
  2. Select Provider: Choose "OpenRouter" from the "API Provider" dropdown.
  3. Enter API Key: Paste your OpenRouter API key into the "OpenRouter API Key" field.
  4. Select Model: Choose your desired model from the "Model" dropdown.
  5. (Optional) Custom Base URL: If you need to use a custom base URL for the OpenRouter API, check "Use custom base URL" and enter it. Most users can leave this blank.
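The settings above map onto a plain HTTP request. The sketch below shows how a chat completion call is assembled against OpenRouter's OpenAI-compatible `/chat/completions` endpoint; the `base_url` parameter plays the role of the optional custom base URL, and the model name in any real call would be one of the IDs from the Models page:

```python
import json
import urllib.request

# Default OpenRouter base URL; override this to mirror Sypha's
# "Use custom base URL" setting.
DEFAULT_BASE_URL = "https://openrouter.ai/api/v1"

def build_chat_request(model, messages, api_key, base_url=DEFAULT_BASE_URL):
    """Build an OpenAI-style chat completion request for OpenRouter."""
    body = json.dumps({"model": model, "messages": messages}).encode("utf-8")
    return urllib.request.Request(
        url=f"{base_url}/chat/completions",
        data=body,
        headers={
            "Authorization": f"Bearer {api_key}",  # the key from the keys page
            "Content-Type": "application/json",
        },
        method="POST",
    )

# Sending the request (requires a valid key):
#   with urllib.request.urlopen(build_chat_request(...)) as resp:
#       print(json.load(resp))
```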

Available Transforms

OpenRouter provides an optional "middle-out" message transform to help with prompts that exceed a model's maximum context size. You can enable it by checking the "Compress prompts and message chains to the context size" option.
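On the wire, this option corresponds to OpenRouter's `transforms` request field. A minimal sketch of the request body with the transform enabled (the model name here is a placeholder):

```python
def chat_body(model, messages, compress=False):
    """Build a chat request body, optionally enabling OpenRouter's
    middle-out transform to fit long prompts into the context window."""
    body = {"model": model, "messages": messages}
    if compress:
        # OpenRouter's documented transform for oversized prompts.
        body["transforms"] = ["middle-out"]
    return body
```

When the option is unchecked, Sypha simply omits the field and the prompt is sent as-is.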

Important Considerations

  • Model Choice: OpenRouter offers a wide variety of models. Experiment to find the best one for your needs.
  • Pricing: OpenRouter charges based on the underlying model's pricing. See the OpenRouter Models page for details.
  • Prompt Caching:
    • OpenRouter passes caching requests through to underlying models that support it. Check the OpenRouter Models page to see which models offer caching.
    • For most models, caching should work automatically when the model itself supports it (similar to how Requesty works).
    • Exception for Gemini Models via OpenRouter: Because Google's caching mechanism can occasionally respond slowly when accessed through OpenRouter, Gemini models require a manual activation step.
    • When using a Gemini model via OpenRouter, manually check the "Enable Prompt Caching" option in the provider settings to activate caching for that model. This checkbox is a temporary workaround. For non-Gemini models on OpenRouter, caching does not require this checkbox.
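When caching does need to be requested explicitly, OpenRouter accepts Anthropic-style `cache_control` breakpoints inside multipart message content. Sypha's checkbox takes care of this for you, but as a sketch, a cacheable system message (assuming OpenRouter's documented multipart message format) looks like:

```python
def cached_system_message(prompt_text):
    """System message with a cache_control breakpoint, which OpenRouter
    forwards to underlying models that support prompt caching."""
    return {
        "role": "system",
        "content": [
            {
                "type": "text",
                "text": prompt_text,
                # Mark this prefix as cacheable across requests.
                "cache_control": {"type": "ephemeral"},
            }
        ],
    }
```

Placing the breakpoint on a large, stable prefix (such as a long system prompt) is what makes subsequent requests cheaper on cache-capable models.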
