Provider config
VS Code Language Model API
Learn how to use Sypha with the experimental VS Code Language Model API, enabling access to models from GitHub Copilot and other compatible extensions.
Sypha provides experimental support for the VS Code Language Model API. This API lets extensions expose language models directly within the VS Code environment, which means you may be able to use models from:
- GitHub Copilot: If you have an active Copilot subscription and the extension installed.
- Other VS Code Extensions: Any extension implementing the Language Model API.
Important Note: This integration is experimental and may not work as expected. It depends on other extensions correctly implementing the VS Code Language Model API.
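To make the dependency concrete, here is a sketch of how a consumer such as Sypha talks to the Language Model API. This code runs only inside a VS Code extension host (VS Code 1.90+), and only returns models when a provider extension such as Copilot is installed and signed in:

```typescript
import * as vscode from 'vscode';

async function askModel(prompt: string): Promise<string> {
  // Ask VS Code for chat models contributed by installed extensions.
  const [model] = await vscode.lm.selectChatModels({ vendor: 'copilot' });
  if (!model) {
    throw new Error('No language models available - is a provider extension installed?');
  }

  const messages = [vscode.LanguageModelChatMessage.User(prompt)];
  const response = await model.sendRequest(
    messages,
    {},
    new vscode.CancellationTokenSource().token
  );

  // Responses arrive as a stream of text fragments.
  let result = '';
  for await (const fragment of response.text) {
    result += fragment;
  }
  return result;
}
```

If `selectChatModels` returns an empty array, no provider extension has contributed a model, which is why the troubleshooting steps below focus on the provider extension rather than on Sypha itself.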
Prerequisites
- VS Code: The Language Model API is available through VS Code (it is not presently supported by Cursor).
- A Language Model Provider Extension: An extension providing a language model is necessary. Examples include:
- GitHub Copilot: With a Copilot subscription, the GitHub Copilot and GitHub Copilot Chat extensions can function as model providers.
- Alternative Extensions: Browse the VS Code Marketplace for extensions mentioning "Language Model API" or "lm". Additional experimental options may exist.
Configuration Steps
- Ensure your Copilot account is active and the extensions are installed: If you are signed in to either the Copilot or Copilot Chat extension, Sypha should be able to obtain access.
- Access Sypha Settings: Select the gear icon (⚙️) positioned in the Sypha panel.
- Choose Provider: Pick "VS Code LM API" from the "API Provider" dropdown menu.
- Select Model: Once the Copilot extension(s) are installed and you are signed in to your Copilot account, the "Language Model" dropdown will populate with available models after a brief delay. Models are named in the pattern vendor/family. For example, with Copilot active you may see options such as:
  - copilot - gpt-3.5-turbo
  - copilot - gpt-4o-mini
  - copilot - gpt-4
  - copilot - gpt-4-turbo
  - copilot - gpt-4o
  - copilot - claude-3.5-sonnet (NOTE: this model does not function)
  - copilot - gemini-2.0-flash
  - copilot - gpt-4.1
For best results with the VS Code LM API provider, we recommend using the OpenAI models (GPT 3, 4, 4.1, 4o, etc.).
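The vendor/family naming pattern can be illustrated with a small helper (hypothetical; Sypha's actual label formatting may differ) that turns a `vendor/family` model id into the dropdown labels shown above:

```typescript
// Hypothetical helper: split a "vendor/family" model id into its parts,
// mirroring how entries such as "copilot - gpt-4o" appear in the dropdown.
function formatModelLabel(modelId: string): string {
  const slash = modelId.indexOf('/');
  if (slash === -1) {
    return modelId; // no vendor prefix; show the id as-is
  }
  const vendor = modelId.slice(0, slash);
  const family = modelId.slice(slash + 1);
  return `${vendor} - ${family}`;
}

console.log(formatModelLabel('copilot/gpt-4o')); // "copilot - gpt-4o"
```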
Current Limitations
- Experimental API Status: The VS Code Language Model API remains under active development. Expect potential changes and instability.
- Dependency on Extensions: This feature is completely dependent on other extensions providing models. Sypha cannot directly manage the list of available models.
- Restricted Functionality: The VS Code Language Model API may not include all features accessible through other API providers (e.g., image input capabilities, streaming responses, detailed usage metrics).
- No Direct Cost Management: Users are bound by the pricing structures and terms of service of the extension supplying the model. Sypha cannot directly track or control associated costs.
- GitHub Copilot Rate Throttling: When utilizing the VS Code LM API with GitHub Copilot, be aware that GitHub may impose rate limits on Copilot usage. These restrictions are controlled by GitHub, not Sypha.
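When a request is rejected due to rate limiting, waiting before retrying usually helps. A minimal capped exponential-backoff calculation (illustrative only; the retry loop itself is up to the calling code, and GitHub's actual limits are not published):

```typescript
// Illustrative exponential backoff with a cap: returns the delay in
// milliseconds to wait before retry `attempt` (0-based).
function backoffDelayMs(attempt: number, baseMs = 1000, capMs = 30000): number {
  return Math.min(baseMs * 2 ** attempt, capMs);
}

// e.g. attempts 0..4 -> 1000, 2000, 4000, 8000, 16000 ms
```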
Troubleshooting Tips
- Models Not Appearing:
- Verify you are using VS Code (the Language Model API is not supported in Cursor).
- Confirm that a language model provider extension (e.g., GitHub Copilot, GitHub Copilot Chat) is installed and activated.
- If using Copilot, ensure you have previously sent a Copilot Chat message using the desired model.
- Unexpected Behavior: If you run into unexpected behavior, the issue most likely originates in the underlying Language Model API or the provider extension. Consider filing an issue with the provider extension's developers.