
Requesty

Learn how to use Requesty with Sypha to access and optimize over 150 large language models.

Sypha supports model access through the Requesty AI platform. Requesty provides a streamlined, optimized API for interacting with 150+ large language models (LLMs).

Website: https://www.requesty.ai/

Obtaining an API Key

  1. Sign Up/Sign In: Go to the Requesty website and create an account or sign in.
  2. Get an API Key: Generate an API key from the API Management section of your Requesty dashboard.
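Rather than pasting the key into scripts, it is common to keep it in an environment variable. The sketch below assumes a variable named `REQUESTY_API_KEY`; that name is illustrative, not something Sypha or Requesty mandates.

```python
import os

def load_requesty_key(env_var: str = "REQUESTY_API_KEY") -> str:
    """Return the Requesty API key from the environment, failing loudly if missing.

    The variable name is an illustrative convention, not required by Sypha.
    """
    key = os.environ.get(env_var, "")
    if not key:
        raise RuntimeError(f"Set {env_var} before configuring Sypha")
    return key
```

Failing early with a clear message is preferable to letting an empty key surface later as an opaque authentication error.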

Available Models

Requesty provides access to a wide range of models. Sypha automatically fetches the latest list of available models. You can view the full list on the Model List page.
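If you want to inspect the model list programmatically, Requesty exposes an OpenAI-compatible API, so a models response uses the familiar `{"data": [{"id": ...}]}` shape. The helper below only parses such a response body; the sample model ids are illustrative, not a real listing.

```python
import json

def extract_model_ids(models_response: str) -> list[str]:
    """Parse an OpenAI-style /models response body into a list of model ids.

    Assumes the {"data": [{"id": ...}, ...]} shape used by
    OpenAI-compatible APIs; the ids below are made-up examples.
    """
    payload = json.loads(models_response)
    return [m["id"] for m in payload.get("data", [])]

sample = '{"data": [{"id": "openai/gpt-4o"}, {"id": "anthropic/claude-3-5-sonnet"}]}'
print(extract_model_ids(sample))
```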

Sypha Configuration

  1. Open Sypha Settings: Click the settings icon (⚙️) in the Sypha panel.
  2. Select Provider: Choose "Requesty" from the "API Provider" dropdown.
  3. Enter API Key: Paste your Requesty API key into the "Requesty API Key" field.
  4. Select Model: Choose your desired model from the "Model" dropdown.
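Conceptually, the four steps above amount to filling in three settings. The field names in this sketch are illustrative; Sypha's actual settings schema may use different keys.

```python
# A sketch of the provider settings the steps above produce.
# Field names and the model id are illustrative examples only.
requesty_settings = {
    "apiProvider": "requesty",                     # step 2: provider choice
    "requestyApiKey": "<your-requesty-api-key>",   # step 3: key from the dashboard
    "model": "anthropic/claude-3-5-sonnet",        # step 4: any model Requesty lists
}
```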

Important Considerations

  • Optimizations: Requesty offers a range of in-flight cost optimizations to lower your costs.
  • Unified and simplified billing: Unrestricted access to all providers and models, automatic balance top-ups, and more through a single API key.
  • Cost tracking: Track cost per model, coding language, changed file, and more via the Cost dashboard or the Requesty VS Code extension.
  • Stats and logs: View your coding stats dashboard or browse your LLM interaction logs.
  • Fallback policies: Keep your LLM working with fallback policies when providers are down.
  • Prompt Caching: Some providers support prompt caching. Search models with caching.
