
LiteLLM & Sypha (using Codestral)

Learn how to set up and run LiteLLM with the Codestral model and connect it to Sypha. This guide covers Docker setup, LiteLLM configuration, and Sypha integration.

Using LiteLLM with Sypha

This tutorial walks you through running a local LiteLLM proxy for the Codestral model and connecting it to Sypha.

Prerequisites

  • Docker CLI or Docker Desktop installed, to run the LiteLLM image on your local machine
  • A Codestral API key for this configuration example (note that Codestral API keys differ from standard Mistral API keys)

Setup

  1. Create a .env file and populate the required fields

    # Tip: Use the following command to generate a random alphanumeric key:
    # openssl rand -base64 32 | tr -dc 'A-Za-z0-9' | head -c 32
    LITELLM_MASTER_KEY=YOUR_LITELLM_MASTER_KEY
    CODESTRAL_API_KEY=YOUR_CODESTRAL_API_KEY

    Note: Even though the proxy is only reachable from localhost, setting LITELLM_MASTER_KEY to a strong, random value is recommended practice

  2. Configuration

    You'll need to create a config.yaml file containing your LiteLLM setup. For this example, we'll configure a single model, 'codestral-latest', and expose it under the alias 'codestral'

    model_list:
        - model_name: codestral
          litellm_params:
              model: codestral/codestral-latest
              api_key: os.environ/CODESTRAL_API_KEY
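
The proxy is not limited to one model: model_list accepts additional entries, each exposing its own alias. The fragment below is a sketch only; the mistral-large alias and the MISTRAL_API_KEY variable are illustrative and not part of this tutorial.

```yaml
model_list:
    - model_name: codestral
      litellm_params:
          model: codestral/codestral-latest
          api_key: os.environ/CODESTRAL_API_KEY
    # Hypothetical second entry; would require a standard
    # Mistral API key added to your .env file
    - model_name: mistral-large
      litellm_params:
          model: mistral/mistral-large-latest
          api_key: os.environ/MISTRAL_API_KEY
```

Each model_name becomes a Model ID that clients such as Sypha can select.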

Running the Demo

  1. Launch the LiteLLM docker container

    docker run \
        --env-file .env \
        -v $(pwd)/config.yaml:/app/config.yaml \
        -p 127.0.0.1:4000:4000 \
        ghcr.io/berriai/litellm:main-latest \
        --config /app/config.yaml --detailed_debug
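
Before wiring up Sypha, you can check that the proxy is answering. LiteLLM exposes an OpenAI-compatible API, so listing the available models is a quick smoke test. This sketch assumes the container from the previous step is running and that LITELLM_MASTER_KEY is exported in your shell:

```shell
# Ask the proxy for its model list; the response should mention "codestral".
# Falls back to a hint if the container is not reachable.
response=$(curl -s --max-time 5 http://127.0.0.1:4000/v1/models \
    -H "Authorization: Bearer $LITELLM_MASTER_KEY" \
    || echo "proxy not reachable -- is the LiteLLM container running?")
echo "$response"
```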
  2. Configure Sypha

    Once the LiteLLM server is up and running, configure it in Sypha:

    • Base URL must be http://127.0.0.1:4000/v1 (the docker run command above publishes the port on 127.0.0.1)
    • API Key must match the value you defined in .env for LITELLM_MASTER_KEY
    • Model ID is codestral or the custom name you assigned in config.yaml
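
These are the same values any OpenAI-compatible client would use, so a quick way to confirm them outside Sypha is a chat completion request from the command line. As above, this assumes the proxy container is running and LITELLM_MASTER_KEY is exported:

```shell
# Send a one-shot chat completion through the proxy, addressing the
# model by the alias defined in config.yaml ("codestral").
reply=$(curl -s --max-time 30 http://127.0.0.1:4000/v1/chat/completions \
    -H "Authorization: Bearer $LITELLM_MASTER_KEY" \
    -H "Content-Type: application/json" \
    -d '{
          "model": "codestral",
          "messages": [{"role": "user", "content": "Say hello in one word."}]
        }' \
    || echo "request failed -- check the container logs")
echo "$reply"
```

If this returns a completion, Sypha will work with the same Base URL, API Key, and Model ID.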
