OpenAI Compatible
Learn how to configure Sypha with various AI model providers that offer OpenAI-compatible APIs.
Sypha works with a wide range of AI model providers whose APIs follow the OpenAI API standard. This lets you use models from providers other than OpenAI while keeping the familiar API interface. These providers include:
- Local models running through tools like Ollama and LM Studio (covered in their own sections).
- Cloud providers such as Perplexity, Together AI, Anyscale, and many others.
- Any other provider that exposes an OpenAI-compatible API endpoint.
This page focuses on configuring providers other than the official OpenAI API (which has its own dedicated configuration page).
General Configuration
To use an OpenAI-compatible provider with Sypha, you need to configure three primary settings:
- Base URL: The API endpoint specific to your provider. It will not be https://api.openai.com/v1 (that URL is for the official OpenAI API).
- API Key: The secret key you obtain from your chosen provider.
- Model ID: The name or identifier of the specific model you want to use.
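These three settings map directly onto a standard chat-completions request. The sketch below (plain Python, no SDK) builds such a request without sending it; the base URL, API key, and model ID are placeholder values you would replace with your provider's details:

```python
import json
import urllib.request

def build_chat_request(base_url: str, api_key: str, model_id: str, prompt: str):
    """Build (but do not send) a chat-completions request for an
    OpenAI-compatible endpoint. All three settings are placeholders."""
    url = base_url.rstrip("/") + "/chat/completions"
    body = json.dumps({
        "model": model_id,
        "messages": [{"role": "user", "content": prompt}],
    }).encode("utf-8")
    return urllib.request.Request(
        url,
        data=body,
        headers={
            "Authorization": f"Bearer {api_key}",  # the provider's API key
            "Content-Type": "application/json",
        },
        method="POST",
    )

# Placeholder values -- substitute your provider's details.
req = build_chat_request(
    "https://api.example-provider.com/v1",  # Base URL (not api.openai.com)
    "sk-your-key",                          # API Key
    "example-model-id",                     # Model ID
    "Hello!",
)
print(req.full_url)
```

Note how the Base URL is only a prefix: the client appends the standard path (here /chat/completions), which is why an incorrect Base URL is the most common misconfiguration.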
You'll find these settings in the Sypha settings panel (click the ⚙️ icon):
- API Provider: Select "OpenAI Compatible".
- Base URL: Enter the base URL supplied by your chosen provider. This step is essential.
- API Key: Enter your API key from the provider.
- Model: Select or enter the model ID.
- Model Configuration: This section lets you customize advanced parameters for the model, such as:
- Max Output Tokens
- Context Window size
- Image Support capabilities
- Computer Use (e.g., for models with tool/function calling)
- Input Price (per token/million tokens)
- Output Price (per token/million tokens)
Supported Models (for OpenAI Native Endpoint)
While the "OpenAI Compatible" provider type permits connecting to various endpoints, if you are connecting directly to the official OpenAI API (or an endpoint that replicates it precisely), Sypha identifies the following model IDs according to the openAiNativeModels definition in its source code:
- o3-mini
- o3-mini-high
- o3-mini-low
- o1
- o1-preview
- o1-mini
- gpt-4o
- gpt-4o-mini
Note: If you are using a different OpenAI-compatible provider (such as Together AI, Anyscale, etc.), the available model IDs will differ. Always consult your provider's documentation for supported model names and any provider-specific configuration details.
v0 (Vercel SDK) in Sypha
For developers using v0, the Vercel AI SDK documentation offers useful examples for integrating various models, many of which are OpenAI-compatible. It can be a helpful resource for understanding how to structure calls and handle configuration when using Sypha with services deployed on or integrated with Vercel. v0 can be used in Sypha with the OpenAI Compatible provider.
Quickstart
- With the OpenAI Compatible provider selected, set the Base URL to https://api.v0.dev/v1.
- Enter your v0 API Key.
- Set the Model ID to v0-1.0-md.
- Click Verify to validate the connection.
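The quickstart values above can also be exercised outside Sypha. Below is a hedged sketch that builds (but does not send) a request against v0's endpoint, assuming it follows the standard chat-completions path; the API key is a placeholder:

```python
import json
import urllib.request

V0_BASE_URL = "https://api.v0.dev/v1"  # Base URL from the quickstart
V0_MODEL_ID = "v0-1.0-md"              # Model ID from the quickstart

req = urllib.request.Request(
    V0_BASE_URL + "/chat/completions",  # assumed standard completions path
    data=json.dumps({
        "model": V0_MODEL_ID,
        "messages": [{"role": "user", "content": "Create a landing page"}],
    }).encode("utf-8"),
    headers={
        "Authorization": "Bearer YOUR_V0_API_KEY",  # placeholder key
        "Content-Type": "application/json",
    },
    method="POST",
)
# urllib.request.urlopen(req) would send it; omitted here so the
# sketch runs without a live key.
print(req.full_url)
```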
Troubleshooting
- "Invalid API Key": Verify that you've input the API key accurately and that it corresponds to the correct provider.
- "Model Not Found": Confirm you're employing a valid model ID for your chosen provider and that it's accessible at the designated Base URL.
- Connection Errors: Confirm the Base URL is accurate, that your provider's API is reachable from your machine, and that there are no firewall or network complications.
- Unexpected Results: If you're receiving unexpected outputs, attempt a different model or verify all configuration parameters.
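When diagnosing these errors, probing the provider's /models listing (a standard endpoint in OpenAI-compatible APIs) can help separate a bad API key from a bad Base URL. A minimal sketch, with placeholder values and all failure modes caught:

```python
import json
import urllib.error
import urllib.request

def check_endpoint(base_url: str, api_key: str) -> str:
    """Probe an OpenAI-compatible endpoint's /models listing and
    classify the outcome. Returns a description instead of raising."""
    req = urllib.request.Request(
        base_url.rstrip("/") + "/models",
        headers={"Authorization": f"Bearer {api_key}"},
    )
    try:
        with urllib.request.urlopen(req, timeout=10) as resp:
            models = json.load(resp).get("data", [])
            return f"ok: {len(models)} models available"
    except urllib.error.HTTPError as e:
        # 401/403 -> the key is the problem, not the URL
        return "invalid API key" if e.code in (401, 403) else f"HTTP {e.code}"
    except urllib.error.URLError as e:
        # wrong Base URL, DNS failure, firewall, provider down
        return f"connection error: {e.reason}"

# Placeholder values -- substitute your provider's details:
# check_endpoint("https://api.example-provider.com/v1", "sk-your-key")
```

An "invalid API key" result with a working connection points at the key; a "connection error" points at the Base URL or the network in between.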
By using an OpenAI-compatible provider, you can combine the flexibility of Sypha with a much wider range of AI models. Always refer to your provider's documentation for the most accurate and up-to-date information.