xAI (Grok)
Learn how to configure and use xAI's Grok models with Sypha, including API key setup, supported models, and reasoning capabilities.
xAI develops Grok, a language model known for its strong conversational capabilities and large context windows. The Grok family of models is engineered to deliver useful, accurate, and contextually appropriate responses across various use cases.
Website: https://x.ai/
Obtaining an API Key
- Account Creation/Login: Access the xAI Console. Register for a new account or authenticate with existing credentials.
- Access API Keys Section: Locate the API keys area within your dashboard.
- Generate a New Key: Initiate new API key creation. Assign a meaningful name to your key (for example, "Sypha").
- Secure Your Key: Critical: Copy the API key immediately; this is your only opportunity to view it. Store it in a secure location, for example an environment variable, as sketched below.
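One common way to keep the key out of source code is to load it from an environment variable at runtime. The sketch below assumes a variable named XAI_API_KEY; that name is only a convention for this page, not something Sypha or xAI requires.

```python
import os

# Load the xAI key from the environment rather than hardcoding it.
# XAI_API_KEY is an assumed variable name; adjust it to match your own setup.
api_key = os.environ.get("XAI_API_KEY")
if not api_key:
    raise RuntimeError("XAI_API_KEY is not set; export it before running this script.")
```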
Available Models
Sypha provides support for these xAI Grok models:
Grok-3 Models
- grok-3-beta (Default) - xAI's Grok-3 beta model with 131K context window
- grok-3-fast-beta - xAI's Grok-3 fast beta model with 131K context window
- grok-3-mini-beta - xAI's Grok-3 mini beta model with 131K context window
- grok-3-mini-fast-beta - xAI's Grok-3 mini fast beta model with 131K context window
Grok-2 Models
- grok-2-latest - xAI's Grok-2 model - latest version with 131K context window
- grok-2 - xAI's Grok-2 model with 131K context window
- grok-2-1212 - xAI's Grok-2 model (version 1212) with 131K context window
Grok Vision Models
- grok-2-vision-latest - xAI's Grok-2 Vision model - latest version with image support and 32K context window
- grok-2-vision - xAI's Grok-2 Vision model with image support and 32K context window
- grok-2-vision-1212 - xAI's Grok-2 Vision model (version 1212) with image support and 32K context window
- grok-vision-beta - xAI's Grok Vision Beta model with image support and 8K context window
Legacy Models
- grok-beta - xAI's Grok Beta model (legacy) with 131K context window
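As a quick sanity check outside of Sypha, any of the model IDs above can be exercised directly against xAI's OpenAI-compatible HTTP API. The sketch below uses the openai Python package with the base URL https://api.x.ai/v1 and the default grok-3-beta model; treat it as a minimal illustration and confirm the endpoint and model names against xAI's current documentation. XAI_API_KEY is the assumed environment variable from the key setup section.

```python
import os
from openai import OpenAI  # pip install openai

# xAI exposes an OpenAI-compatible endpoint, so the standard client can be pointed
# at it by overriding the base URL.
client = OpenAI(
    api_key=os.environ["XAI_API_KEY"],
    base_url="https://api.x.ai/v1",
)

response = client.chat.completions.create(
    model="grok-3-beta",  # the default model in Sypha; any ID from the list above works
    messages=[{"role": "user", "content": "In one sentence, what is a context window?"}],
)
print(response.choices[0].message.content)
```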
Sypha Configuration Steps
- Access Settings Panel: Select the settings icon (⚙️) within the Sypha interface.
- Choose Provider: Pick "xAI" from the available "API Provider" options.
- Insert API Key: Place your xAI API key into the designated "xAI API Key" input field.
- Pick Model: Select your preferred Grok model from the "Model" selection menu.
Advanced Reasoning Features
The Grok 3 Mini variants include dedicated reasoning functionality, enabling them to process information before generating responses—especially valuable for intricate problem-solving scenarios.
Models with Reasoning Support
Reasoning functionality is exclusively available in:
- grok-3-mini-beta
- grok-3-mini-fast-beta
Note that grok-3-beta and grok-3-fast-beta do not include reasoning support.
Adjusting Reasoning Intensity
For models that support reasoning, you can regulate the depth of analysis using the reasoning_effort parameter:
- low: Reduced processing time, consuming fewer tokens for faster results
- high: Extended analysis period, utilizing additional tokens for intricate challenges
Select low for straightforward questions requiring rapid completion, and high for challenging problems where processing time is less critical.
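At the raw API level, reasoning intensity is a per-request setting. The sketch below passes reasoning_effort with a grok-3-mini-beta request over the same OpenAI-compatible endpoint; it is a minimal illustration, and older versions of the openai package may need the parameter supplied through extra_body instead.

```python
import os
from openai import OpenAI

client = OpenAI(api_key=os.environ["XAI_API_KEY"], base_url="https://api.x.ai/v1")

# reasoning_effort is only honored by the reasoning-capable mini models.
# "low" favors latency and token cost; "high" spends more tokens thinking first.
response = client.chat.completions.create(
    model="grok-3-mini-beta",
    reasoning_effort="high",
    messages=[{"role": "user", "content": "Is 3599 a prime number? Explain briefly."}],
)
print(response.choices[0].message.content)
```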
Core Capabilities
- Methodical Problem Analysis: The model systematically evaluates problems before formulating responses
- Mathematical & Analytical Proficiency: Demonstrates excellence in computational tasks and logical reasoning
- Thought Process Visibility: Access to the model's reasoning workflow via the reasoning_content field in response completion objects
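Continuing the request sketch above, the intermediate reasoning can be read back alongside the final answer. The access below assumes the provider returns a reasoning_content field on the message object, which is why it is read defensively.

```python
# response is the ChatCompletion returned by the grok-3-mini-beta request above.
message = response.choices[0].message

# reasoning_content is not part of the standard OpenAI schema, so read it defensively.
print(getattr(message, "reasoning_content", None))  # the model's intermediate reasoning
print(message.content)                              # the final answer
```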
Important Considerations
- Context Capacity: Most Grok models provide large context windows (up to 131K tokens), enabling you to supply extensive code samples and contextual information in your requests.
- Image Processing: Opt for vision-compatible models (grok-2-vision-latest, grok-2-vision, etc.) when working with visual content or image analysis tasks; see the sketch after this list.
- Cost Structure: Model pricing varies, with input rates spanning $0.3 to $5.0 per million tokens and output rates ranging from $0.5 to $25.0 per million tokens. Consult xAI documentation for current pricing details.
- Speed vs. Capability Balance: "Fast" model variants generally deliver speedier responses but might incur elevated costs, whereas "mini" variants provide cost efficiency with potentially limited functionality.
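For the image-processing case, vision-capable models accept OpenAI-style image content parts. The sketch below sends a local file as a base64 data URL to grok-2-vision-latest; the file name screenshot.png is only a placeholder, and the message format should be verified against xAI's current vision documentation.

```python
import base64
import os
from openai import OpenAI

client = OpenAI(api_key=os.environ["XAI_API_KEY"], base_url="https://api.x.ai/v1")

# Encode a local image (placeholder path) as a data URL for the request.
with open("screenshot.png", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode("utf-8")

response = client.chat.completions.create(
    model="grok-2-vision-latest",  # any vision-capable model from the list above
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "Describe what this screenshot shows."},
            {"type": "image_url",
             "image_url": {"url": f"data:image/png;base64,{image_b64}"}},
        ],
    }],
)
print(response.choices[0].message.content)
```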