xAI Grok 3 Mini Fast
The xai.grok-3-mini-fast model is a lightweight model that thinks before responding. Fast, smart, and great for logic-based tasks that don't require deep domain knowledge. The raw thinking traces are accessible.
The xai.grok-3-mini and xai.grok-3-mini-fast models both use the same underlying model and deliver identical response quality. The difference lies in how they're served: the xai.grok-3-mini-fast model is served on faster infrastructure, offering response times that are significantly faster than the standard xai.grok-3-mini model. The increased speed comes at a higher cost per output token.
Because both names point to the same underlying model, select xai.grok-3-mini-fast for latency-sensitive applications and xai.grok-3-mini for reduced cost.
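For example, a client can keep the model name configurable and switch based on the workload, since only latency and output-token cost differ (the flag below is illustrative, not part of the service):

```python
# Pick the model ID by workload: same underlying model, different serving speed and price.
LATENCY_SENSITIVE = True  # illustrative flag for your own configuration
model_id = "xai.grok-3-mini-fast" if LATENCY_SENSITIVE else "xai.grok-3-mini"
```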
Available in These Regions
- US East (Ashburn) (on-demand only)
- US Midwest (Chicago) (on-demand only)
- US West (Phoenix) (on-demand only)
External Calls
The xAI Grok models are hosted in an OCI data center, in a tenancy provisioned for xAI. Accessed through the OCI Generative AI service, these models are managed by xAI.
Access this Model
Key Features
- Model name in OCI Generative AI: xai.grok-3-mini-fast
- Available On-Demand: Access this model on-demand, through the Console playground or the API. (A basic API call is sketched after this list.)
- Text-Mode Only: Input text and get a text output. (No image support.)
- Fast: Great for logic-based tasks that don't require deep domain knowledge.
- Context Length: 131,072 tokens (maximum prompt + response length is 131,072 tokens for each run). In the playground, the response length is capped at 16,000 tokens for each run.
- Function Calling: Yes, through the API.
- Structured Outputs: Yes.
- Has Reasoning: Yes. See the reasoning_effort parameter in the Model Parameters section.
- Knowledge Cutoff: November 2024
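The following sketch shows one way to make an on-demand API call with the OCI Python SDK. The model name xai.grok-3-mini-fast and the Chicago region come from this page; the compartment OCID and the prompt are placeholders, and the class names and the service endpoint for your region should be checked against your installed oci package version.

```python
# Minimal on-demand chat call sketch (OCI Python SDK).
# Assumptions: default ~/.oci/config profile, placeholder compartment OCID,
# class names per the oci.generative_ai_inference package.
import oci
from oci.generative_ai_inference import GenerativeAiInferenceClient
from oci.generative_ai_inference.models import (
    ChatDetails, GenericChatRequest, Message, OnDemandServingMode, TextContent,
)

config = oci.config.from_file()
client = GenerativeAiInferenceClient(
    config,
    # US Midwest (Chicago) is one of the on-demand regions listed above.
    service_endpoint="https://inference.generativeai.us-chicago-1.oci.oraclecloud.com",
)

message = Message(role="USER", content=[TextContent(text="Is 2,147,483,647 prime?")])

response = client.chat(
    ChatDetails(
        compartment_id="ocid1.compartment.oc1..<placeholder>",  # your compartment OCID
        serving_mode=OnDemandServingMode(model_id="xai.grok-3-mini-fast"),
        chat_request=GenericChatRequest(messages=[message], max_tokens=600),
    )
)
print(response.data.chat_response.choices[0].message.content[0].text)
```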
Limits
- Tokens per minute (TPM): Inference calls to this model are capped at 100,000 tokens per minute (TPM) per customer or tenancy.
  To see the current limit for a tenancy, in the Console, navigate to Governance and Administration. Under Tenancy Management, select Limits, quotas, and usage. Under Service, select Generative AI and review the service limits. To request a service limit increase, select Request a service limit increase. For the TPM limit increase, use the following limit name: grok-3-mini-chat-tokens-per-minute-count.
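For reference, the same limit can also be read programmatically instead of through the Console. The following sketch uses the OCI Python SDK Limits client with the limit name from above; the service name generative-ai is an assumption, so verify it against your tenancy's limit definitions if no values come back.

```python
# Sketch: read the current TPM limit for this model with the OCI Python SDK.
# Assumption: the Generative AI service is registered under the service name
# "generative-ai" in the Limits service; use LimitsClient.list_services()
# to confirm the name if this filter returns nothing.
import oci

config = oci.config.from_file()
limits_client = oci.limits.LimitsClient(config)
tenancy_id = config["tenancy"]

values = limits_client.list_limit_values(
    compartment_id=tenancy_id,
    service_name="generative-ai",
    name="grok-3-mini-chat-tokens-per-minute-count",  # limit name from this page
).data

for v in values:
    print(v.name, v.scope_type, v.value)
```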
On-Demand Mode
The Grok models are available only in the on-demand mode.
Model Name | OCI Model Name | Pricing Page Product Name |
---|---|---|
xAI Grok 3 Mini Fast | xai.grok-3-mini-fast | xAI – Grok 3 Mini Fast |
- You pay as you go for each inference call when you use the models in the playground or when you call the models through the API.
- Low barrier to start using Generative AI.
- Great for experimentation, proof of concept, and model evaluation.
- Available for the pretrained models in regions not listed as (dedicated AI cluster only).
Release Date
Model | Beta Release Date | General Availability Release Date | On-Demand Retirement Date | Dedicated Mode Retirement Date |
---|---|---|---|---|
xai.grok-3-mini-fast | 2025-05-22 | 2025-06-24 | Tentative | This model isn't available for the dedicated mode. |
Model Parameters
To change the model responses, you can change the values of the following parameters in the playground or the API.
- Maximum output tokens: The maximum number of tokens that you want the model to generate for each response. Estimate four characters per token. Because you're prompting a chat model, the response depends on the prompt, and each response doesn't necessarily use up the maximum allocated tokens. The maximum prompt + output length is 131,072 tokens for each run. In the playground, the maximum output length is capped at 16,000 tokens for each run.
- Temperature: The level of randomness used to generate the output text. Min: 0, Max: 2.
  Tip: Start with the temperature set to 0 or less than 1, and increase it as you regenerate the prompts for more creative output. High temperatures can introduce hallucinations and factually incorrect information.
- Top p: A sampling method that controls the cumulative probability of the top tokens to consider for the next token. Min: 0, Max: 1. Assign p a decimal number between 0 and 1 for the probability. For example, enter 0.75 for the top 75 percent to be considered. Set p to 1 to consider all tokens.
- Reasoning Effort: The reasoning_effort parameter, available through the API and not the Console, controls how much time the model spends thinking before responding. You must set it to one of these values:
  - low: Minimal thinking time, using fewer tokens for quick responses.
  - high: Maximum thinking time, leveraging more tokens for complex problems.
  Choosing the correct level depends on your task: use low for simple queries that complete quickly, and high for harder problems where response latency is less important. Learn about this parameter in the xAI guides. A request sketch follows below.
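To make these parameters concrete, the following sketch builds a request that sets them through the API, reusing the classes from the earlier example. The max_tokens, temperature, and top_p fields follow GenericChatRequest in the OCI Python SDK; exposing reasoning_effort under exactly that field name is an assumption taken from this page's parameter list, so confirm it in your SDK version and the xAI guides before relying on it.

```python
# Sketch: tuning the model parameters described above on a single request.
# Reuses GenericChatRequest from the earlier example; reasoning_effort is an
# assumed field name taken from this page's parameter list, so confirm it
# exists in your installed oci SDK version before relying on it.
from oci.generative_ai_inference.models import GenericChatRequest, Message, TextContent

message = Message(
    role="USER",
    content=[TextContent(text="Outline a 3-step proof that sqrt(2) is irrational.")],
)

chat_request = GenericChatRequest(
    messages=[message],
    max_tokens=4000,          # response cap; prompt + response must stay within 131,072 tokens
    temperature=0,            # start low; raise for more creative output
    top_p=0.75,               # sample from the top 75 percent of probability mass
    reasoning_effort="high",  # "low" for quick queries, "high" for harder problems (assumed field)
)
```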