
SAP AI Core

Learn how to configure and use LLM models from Generative AI Hub in SAP AI Core with Sypha.

SAP AI Core, together with the Generative AI Hub, lets you integrate LLMs and AI into new business processes in a cost-efficient way.

Website: SAP Help Portal

SAP AI Core and the Generative AI Hub are services provided through SAP BTP. To follow these steps, you need an active SAP BTP contract and an existing subaccount with an SAP AI Core instance on the extended service plan. For details about SAP AI Core service plans and their features, see the Service Plans documentation.

Getting a Service Binding

  1. Access: Open your subaccount in the SAP BTP Cockpit.
  2. Create a Service Binding: Navigate to "Instances and Subscriptions", choose your SAP AI Core service instance, and select Service Bindings > Create.
  3. Copy the Service Binding: Copy the service binding values; you will need them when configuring Sypha.
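The configuration steps later on this page refer to specific fields of the binding JSON. A minimal sketch of its shape, with every value a placeholder rather than a real credential:

```python
# Hypothetical shape of an SAP AI Core service binding.
# All values below are placeholders, not real credentials.
service_binding = {
    "clientid": "sb-example-client-id",       # -> "AI Core Client Id"
    "clientsecret": "example-secret",         # -> "AI Core Client Secret"
    "url": "https://example.authentication.eu10.hana.ondemand.com",  # -> "AI Core Auth URL"
    "serviceurls": {
        "AI_API_URL": "https://api.ai.example.hana.ondemand.com",    # -> "AI Core Base URL"
    },
}
```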

Supported Models

SAP AI Core supports a large and growing list of models. See the Generative AI Hub Supported Models page for the full, up-to-date list.

Configuration in Sypha

  1. Open Sypha Settings: Select the settings icon (⚙️) within the Sypha panel.
  2. Select Provider: Pick "SAP AI Core" from the "API Provider" dropdown menu.
  3. Enter Client Id: Paste the .clientid field from the service binding into the "AI Core Client Id" field.
  4. Enter Client Secret: Paste the .clientsecret field from the service binding into the "AI Core Client Secret" field.
  5. Enter Base URL: Paste the .serviceurls.AI_API_URL field from the service binding into the "AI Core Base URL" field.
  6. Enter Auth URL: Paste the .url field from the service binding into the "AI Core Auth URL" field.
  7. Enter Resource Group: Enter the resource group that contains your model deployments. See Create a Deployment for a Generative AI Model.
  8. Configure Orchestration Mode: If you have the extended service plan, the "Orchestration Mode" checkbox appears automatically.
  9. Select Model: Pick your preferred model from the "Model" dropdown menu.
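Behind these settings, clients for SAP AI Core typically exchange the client id and secret for a bearer token at the auth URL using the OAuth 2.0 client-credentials grant. A minimal sketch of how such a token request could be assembled (the /oauth/token path and parameters follow the standard XSUAA pattern; the function name and URLs are illustrative):

```python
import base64

def build_token_request(auth_url: str, client_id: str, client_secret: str):
    """Build the parts of an OAuth 2.0 client-credentials token request.

    Returns (url, headers, body), ready to send with any HTTP client.
    """
    credentials = f"{client_id}:{client_secret}".encode()
    headers = {
        "Authorization": "Basic " + base64.b64encode(credentials).decode(),
        "Content-Type": "application/x-www-form-urlencoded",
    }
    body = "grant_type=client_credentials"
    return auth_url.rstrip("/") + "/oauth/token", headers, body

# Placeholder values, standing in for the service binding fields.
url, headers, body = build_token_request(
    "https://example.authentication.eu10.hana.ondemand.com",
    "example-client-id",
    "example-client-secret",
)
```

The bearer token from the response is then sent in the Authorization header of subsequent AI API calls.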

Orchestration Mode vs Native API

Orchestration Mode:

  • Streamlined usage: Provides access to all available models through the Harmonized API, without requiring individual deployments

Native API Mode:

  • Manual deployments: Requires you to deploy and manage models yourself in your SAP AI Core service instance
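In native mode, each request targets a specific deployment in your resource group. A sketch of how the inference URL and resource-group header could be assembled (the /v2/inference path segment follows the AI Core API convention; the deployment id and base URL here are placeholders):

```python
def build_inference_request(base_url: str, deployment_id: str, resource_group: str):
    """Assemble the URL and headers for a call to a deployed model (sketch)."""
    url = (
        f"{base_url.rstrip('/')}/v2/inference/deployments/"
        f"{deployment_id}/chat/completions"
    )
    headers = {"AI-Resource-Group": resource_group}
    return url, headers

# Placeholder base URL, deployment id, and resource group.
url, headers = build_inference_request(
    "https://api.ai.example.hana.ondemand.com",
    "d123456789",
    "default",
)
```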

Tips and Notes

  • Service Plan Requirement: You need the SAP AI Core extended service plan to use LLMs with Sypha. Other service plans do not include access to the Generative AI Hub.

  • Orchestration Mode (Recommended): Keep Orchestration Mode enabled for the simplest setup. It gives automatic access to all available models without manual deployments.

  • Native API Mode: Disable Orchestration Mode only if you have specific requirements that call for direct AI Core API access, or need features that orchestration mode does not support.

  • When using Native API Mode:

    • Model Selection: The model dropdown presents models in two distinct lists:
      • Deployed Models: These models are already deployed in your configured resource group and are ready to use immediately.
      • Not Deployed Models: These models have no active deployment in your configured resource group. You cannot use them until you create deployments for them in SAP AI Core.
    • Creating Deployments: To use a model that hasn't been deployed yet, create a deployment in your SAP AI Core service instance. See Create a Deployment for a Generative AI Model for guidance.

Configuring Reasoning Effort for OpenAI Models

When using OpenAI reasoning models (such as o1, o3, o3-mini, o4-mini) through SAP AI Core, you can control the reasoning effort to balance response quality and cost:

  1. Open Sypha Settings: Select the settings icon (⚙️) within the Sypha panel.
  2. Navigate to Features: Proceed to the "Features" section within the settings.
  3. Find OpenAI Reasoning Effort: Find the "OpenAI Reasoning Effort" setting.
  4. Choose Effort Level: Pick between:
    • Low: Faster responses with lower token usage, suitable for simpler tasks
    • Medium: Balanced performance and token usage for typical tasks
    • High: More thorough analysis with higher token usage, best for complex reasoning tasks

This setting applies only when using OpenAI reasoning models (o1, o3, o3-mini, o4-mini, gpt-5, etc.) deployed through SAP AI Core. Other models ignore it.
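Conceptually, the setting maps to the reasoning_effort parameter of the chat request body, as defined by the OpenAI API. A sketch of how a client could apply it (the helper name and model id are illustrative):

```python
def build_chat_payload(model: str, prompt: str, reasoning_effort: str = "medium"):
    """Sketch of a chat payload with the reasoning-effort setting applied."""
    assert reasoning_effort in ("low", "medium", "high")
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        # Non-reasoning models simply ignore this parameter.
        "reasoning_effort": reasoning_effort,
    }

payload = build_chat_payload("o3-mini", "Summarize this ticket.", reasoning_effort="high")
```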
