# LiteLLM & Sypha (using Codestral)

Learn how to set up and run LiteLLM with Sypha using the Codestral model. This guide covers Docker setup, configuration, and integration with Sypha.
## Using LiteLLM with Sypha

This tutorial walks you through setting up a LiteLLM demonstration using the Codestral model integrated with Sypha.
## Prerequisites

- Docker CLI or Docker Desktop installed, to run the LiteLLM image on your local machine
- A Codestral API key for this configuration example (note that Codestral keys differ from standard Mistral API keys)
## Setup

1. Create a `.env` file and populate the required fields:

   ```bash
   # Tip: Use the following command to generate a random alphanumeric key:
   # openssl rand -base64 32 | tr -dc 'A-Za-z0-9' | head -c 32
   LITELLM_MASTER_KEY=YOUR_LITELLM_MASTER_KEY
   CODESTRAL_API_KEY=YOUR_CODESTRAL_API_KEY
   ```

   Note: Even though the proxy is only exposed on localhost, setting `LITELLM_MASTER_KEY` to a secure value is recommended practice.
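The `.env` format is plain `KEY=VALUE` lines, with `#` comments ignored. If you want to sanity-check the file before starting the container, a parser is a few lines of standard-library Python (the `parse_env` helper below is a hypothetical sketch, not part of LiteLLM):

```python
# Hypothetical helper: parse KEY=VALUE lines from .env-style text,
# skipping blank lines and comments.
def parse_env(text: str) -> dict[str, str]:
    env = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, _, value = line.partition("=")
        env[key.strip()] = value.strip()
    return env

sample = """\
# generated with: openssl rand -base64 32 | tr -dc 'A-Za-z0-9' | head -c 32
LITELLM_MASTER_KEY=sk-local-example
CODESTRAL_API_KEY=cs-example
"""
env = parse_env(sample)
missing = {"LITELLM_MASTER_KEY", "CODESTRAL_API_KEY"} - env.keys()
print(missing)  # set() — both required keys are present
```

Running this against your real `.env` (read with `Path(".env").read_text()`) confirms both keys are set before Docker ever sees the file.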
2. Configuration

   Create a `config.yaml` file containing your LiteLLM setup. For this example, we'll configure a single model, `codestral-latest`, and assign it the alias `codestral`:

   ```yaml
   model_list:
     - model_name: codestral
       litellm_params:
         model: codestral/codestral-latest
         api_key: os.environ/CODESTRAL_API_KEY
   ```
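YAML is indentation-sensitive, and a mis-indented `litellm_params` block is an easy mistake to make. A quick way to verify the structure parses as intended is to load it in Python (this assumes the third-party PyYAML package, `pip install pyyaml`):

```python
# Sanity-check the config structure before mounting it into the container.
import yaml  # third-party: PyYAML

config_text = """
model_list:
  - model_name: codestral
    litellm_params:
      model: codestral/codestral-latest
      api_key: os.environ/CODESTRAL_API_KEY
"""

config = yaml.safe_load(config_text)
names = [entry["model_name"] for entry in config["model_list"]]
print(names)  # ['codestral']
```

If the indentation were wrong, `litellm_params` would end up outside its model entry and the lookup would fail, so this catches the error before the container does.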
## Running the Demo

1. Launch the LiteLLM Docker container:

   ```bash
   docker run \
     --env-file .env \
     -v $(pwd)/config.yaml:/app/config.yaml \
     -p 127.0.0.1:4000:4000 \
     ghcr.io/berriai/litellm:main-latest \
     --config /app/config.yaml --detailed_debug
   ```
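Once the container is up, the proxy serves an OpenAI-compatible API on the published port. The sketch below builds (but does not send) a chat-completion request using only the standard library; the endpoint path and payload shape follow the OpenAI chat-completions convention, and the key is the same placeholder used in `.env`:

```python
# Build a request against the local LiteLLM proxy without any SDK.
import json
import urllib.request

BASE_URL = "http://localhost:4000/v1"   # port is published on 127.0.0.1 only
MASTER_KEY = "YOUR_LITELLM_MASTER_KEY"  # placeholder; use your real .env value

def build_chat_request(model: str, prompt: str) -> urllib.request.Request:
    payload = {"model": model, "messages": [{"role": "user", "content": prompt}]}
    return urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {MASTER_KEY}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_chat_request("codestral", "Write a hello-world in Python.")
print(req.full_url)  # http://localhost:4000/v1/chat/completions
# With the container running: urllib.request.urlopen(req)
```

Note that the model name sent over the wire is the alias (`codestral`) from `config.yaml`, not the upstream `codestral/codestral-latest` identifier; the proxy performs that mapping.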
2. Configure Sypha

   After the LiteLLM server has started and is operational, configure it within Sypha:

   - Base URL: `http://localhost:4000/v1` (the container above publishes the port on `127.0.0.1` only)
   - API Key: the value you defined for `LITELLM_MASTER_KEY` in `.env`
   - Model ID: `codestral`, or the custom name you assigned in `config.yaml`