Meta Llama 3.2 90B Vision
The meta.llama-3.2-90b-vision-instruct model offers text and image understanding features and is available for on-demand inferencing and dedicated hosting.
Available in These Regions
- Brazil East (Sao Paulo)
- UK South (London)
- Japan Central (Osaka)
- Saudi Arabia Central (Riyadh) (dedicated AI cluster only)
- US Midwest (Chicago)
Key Features
- Multimodal support: Input text and images and get a text output.
- Model Size: The model has 90 billion parameters.
- Context Length: 128,000 tokens (Maximum prompt + response length: 128,000 tokens for each run)
- Multilingual Support: English, French, German, Hindi, Italian, Portuguese, Spanish, and Thai
About the New Vision Feature Through Multimodal Support
Submit an image, ask questions about the image, and get text outputs such as:
- Advanced image captions.
- A detailed description of an image.
- Answers to questions about an image.
- Information about charts and graphs in an image.
More Details
- Includes the text-based capabilities of the previous Llama 3.1 70B model.
- In the playground, to add the next image and text, you must clear the chat, which loses the context of the previous conversation.
- For on-demand inferencing, the response length is capped at 4,000 tokens for each run.
- For the dedicated mode, the response length isn't capped and the context length is 128,000 tokens.
- English is the only supported language for the image plus text option.
- Multilingual support is available for the text-only option.
- In the Console, input a .png or .jpg image of 5 MB or less.
- For the API, input a base64 encoded image in each run. A 512 x 512 image is converted to about 1,610 tokens. (See the sketch after this list.)
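For the API path, preparing the image is a small preprocessing step. The following is a minimal sketch, not an official sample: it assumes a local file named chart.jpg, uses only the Python standard library, and leaves open how the encoded image is attached to the request, because that depends on the request format you use.

```python
import base64
from pathlib import Path

# Minimal sketch: prepare an image for an API run.
# Assumption: a local file named "chart.jpg" exists.
image_path = Path("chart.jpg")
image_bytes = image_path.read_bytes()

# The Console accepts .png or .jpg files of 5 MB or less; staying at or under
# the same size is a reasonable sanity check for API use as well.
print(f"Image size: {len(image_bytes) / (1024 * 1024):.2f} MB")

# The API expects a base64 encoded image in each run.
encoded = base64.b64encode(image_bytes).decode("utf-8")

# Some request formats take the raw base64 string, others a data URL.
data_url = f"data:image/jpeg;base64,{encoded}"

# Token budgeting: a 512 x 512 image is converted to about 1,610 tokens.
```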
On-Demand Mode
This model is available on-demand in regions not listed as (dedicated AI cluster only). See the following table for this model's on-demand product name on the pricing page.
Model Name | OCI Model Name | Pricing Page Product Name |
---|---|---|
Meta Llama 3.2 90B Vision | meta.llama-3.2-90b-vision-instruct | Large Meta |
- You pay as you go for each inference call when you use the models in the playground or when you call the models through the API (see the sketch after this list).
- Low barrier to start using Generative AI.
- Great for experimentation, proof of concept, and model evaluation.
- Available for the pretrained models in regions not listed as (dedicated AI cluster only).
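As an illustration of the API path, here is a minimal on-demand chat sketch using the OCI Python SDK's generative_ai_inference client. Treat it as a sketch under assumptions rather than a definitive sample: the regional endpoint and compartment OCID are placeholders you must replace, error handling is omitted, and class names can differ between SDK versions.

```python
import oci

# Minimal on-demand sketch. Placeholders: the regional endpoint and compartment OCID.
config = oci.config.from_file()  # assumes a configured ~/.oci/config profile
client = oci.generative_ai_inference.GenerativeAiInferenceClient(
    config,
    service_endpoint="https://inference.generativeai.us-chicago-1.oci.oraclecloud.com",
)

chat_details = oci.generative_ai_inference.models.ChatDetails(
    compartment_id="ocid1.compartment.oc1..exampleuniqueID",  # placeholder
    serving_mode=oci.generative_ai_inference.models.OnDemandServingMode(
        model_id="meta.llama-3.2-90b-vision-instruct"
    ),
    chat_request=oci.generative_ai_inference.models.GenericChatRequest(
        api_format=oci.generative_ai_inference.models.BaseChatRequest.API_FORMAT_GENERIC,
        messages=[
            oci.generative_ai_inference.models.Message(
                role="USER",
                content=[
                    oci.generative_ai_inference.models.TextContent(
                        text="Write one sentence describing what this model can do."
                    )
                ],
            )
        ],
        max_tokens=600,  # on-demand responses are capped at 4,000 tokens for each run
    ),
)

response = client.chat(chat_details)
print(response.data)  # the chat response, including the generated text
```

Each call like this counts as a separate on-demand inference call for billing purposes.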
Dynamic Throttling Limit Adjustment for On-Demand Mode
OCI Generative AI dynamically adjusts the request throttling limit for each active tenancy based on model demand and system capacity to optimize resource allocation and ensure fair access.
This adjustment depends on the following factors:
- The current maximum throughput supported by the target model.
- Any unused system capacity at the time of adjustment.
- Each tenancy’s historical throughput usage and any specified override limits set for that tenancy.
Note: Because of dynamic throttling, rate limits are undocumented and can change to meet system-wide demand.
Because of the dynamic throttling limit adjustment, we recommend implementing a back-off strategy, which involves delaying requests after a rejection. Without one, repeated rapid requests can lead to further rejections over time, increased latency, and potential temporary blocking of the client by the Generative AI service. A back-off strategy, such as exponential back-off, distributes requests more evenly, reduces load, and improves retry success, following industry best practices and enhancing the overall stability and performance of your integration with the service.
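For example, a back-off wrapper can be as small as the following sketch. It is generic Python with illustrative names (call_with_backoff, send_request); in real code you would catch the SDK's specific throttling error (such as an HTTP 429) rather than every exception.

```python
import random
import time

def call_with_backoff(send_request, max_attempts=6, base_delay=1.0, max_delay=30.0):
    """Retry a request with exponential back-off and jitter after rejections."""
    for attempt in range(max_attempts):
        try:
            return send_request()  # any callable that performs one Generative AI call
        except Exception:  # in real code, catch only throttling errors (e.g. HTTP 429)
            if attempt == max_attempts - 1:
                raise  # give up after the last attempt
            # Exponential back-off: 1s, 2s, 4s, ... capped at max_delay, plus jitter
            delay = min(base_delay * (2 ** attempt), max_delay)
            time.sleep(delay + random.uniform(0, delay / 2))

# Example usage: call_with_backoff(lambda: client.chat(chat_details))
```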
Dedicated AI Cluster for the Model
In the preceding region list, regions that aren't marked with (dedicated AI cluster only) have both on-demand and dedicated AI cluster options. For the on-demand option, you don't need clusters and you can reach the model in the Console playground or through the API. Learn about the dedicated mode.
To reach a model through a dedicated AI cluster in any listed region, you must create an endpoint for that model on a dedicated AI cluster. For the cluster unit size that matches this model, see the following table.
Base Model | Fine-Tuning Cluster | Hosting Cluster | Pricing Page Information | Request Cluster Limit Increase |
---|---|---|---|---|
Meta Llama 3.2 90B Vision | Not available for fine-tuning | Unit size: Large Generic V2 | | Limit name: dedicated-unit-llama2-70-count |
If you don't have enough cluster limits in your tenancy for hosting the Meta Llama 3.2 90B Vision model on a dedicated AI cluster, request the limit dedicated-unit-llama2-70-count to increase by 2.
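To check the limit before filing a request, one option is the Limits API. The following is a hedged sketch using the OCI Python SDK's oci.limits client; it assumes that the Generative AI limits are published under the service name generative-ai and that you query them at the tenancy level.

```python
import oci

# Sketch: check how many dedicated-unit-llama2-70-count units are used and available.
# Assumption: the Generative AI limits are listed under the "generative-ai" service name.
config = oci.config.from_file()
limits_client = oci.limits.LimitsClient(config)

availability = limits_client.get_resource_availability(
    service_name="generative-ai",
    limit_name="dedicated-unit-llama2-70-count",
    compartment_id=config["tenancy"],  # service limits are set at the tenancy level
).data

print(f"Used: {availability.used}, Available: {availability.available}")
```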
Endpoint Rules for Clusters
- A dedicated AI cluster can hold up to 50 endpoints.
- Use these endpoints to create aliases that all point either to the same base model or to the same version of a custom model, but not both types.
- Several endpoints for the same model make it easy to assign them to different users or purposes.
Hosting Cluster Unit Size | Endpoint Rules |
---|---|
Large Generic V2 | The endpoint rules in the preceding list apply. |
- To increase the call volume supported by a hosting cluster, increase its instance count by editing the dedicated AI cluster. See Updating a Dedicated AI Cluster.
- For more than 50 endpoints per cluster, request an increase for the limit endpoint-per-dedicated-unit-count. See Requesting a Service Limit Increase and Service Limits for Generative AI.
Cluster Performance Benchmarks
Review the Meta Llama 3.2 90B Vision cluster performance benchmarks for different use cases.
Release and Retirement Dates
Model | Release Date | On-Demand Retirement Date | Dedicated Mode Retirement Date |
---|---|---|---|
meta.llama-3.2-90b-vision-instruct | 2024-11-14 | At least one month after the release of the 1st replacement model. | At least 6 months after the release of the 1st replacement model. |
Model Parameters
To change the model responses, you can change the values of the following parameters in the playground or the API. An example request that sets these parameters through the API follows the list.
- Maximum output tokens: The maximum number of tokens that you want the model to generate for each response. Estimate four characters per token. Because you're prompting a chat model, the response depends on the prompt and each response doesn't necessarily use up the maximum allocated tokens.
- Temperature: The level of randomness used to generate the output text. Tip: Start with the temperature set to 0 or less than one, and increase the temperature as you regenerate the prompts for a more creative output. High temperatures can introduce hallucinations and factually incorrect information.
- Top p: A sampling method that controls the cumulative probability of the top tokens to consider for the next token. Assign p a decimal number between 0 and 1 for the probability. For example, enter 0.75 for the top 75 percent to be considered. Set p to 1 to consider all tokens.
- Top k: A sampling method in which the model chooses the next token randomly from the top k most likely tokens. A high value for k generates more random output, which makes the output text sound more natural. The default value for k is 0 for Cohere Command models and -1 for Meta Llama models, which means that the model should consider all tokens and not use this method.
- Frequency penalty: A penalty that's assigned to a token when that token appears frequently. High penalties encourage fewer repeated tokens and produce a more random output. For the Meta Llama family models, this penalty can be positive or negative. Positive numbers encourage the model to use new tokens and negative numbers encourage the model to repeat the tokens. Set to 0 to disable.
- Presence penalty: A penalty that's assigned to each token when it appears in the output to encourage generating outputs with tokens that haven't been used.
- Seed: A parameter that makes a best effort to sample tokens deterministically. When this parameter is assigned a value, the large language model aims to return the same result for repeated requests when you assign the same seed and parameters for the requests. Allowed values are integers, and assigning a large or a small seed value doesn't affect the result. Assigning a number for the seed parameter is similar to tagging the request with a number. The large language model aims to generate the same set of tokens for the same integer in consecutive requests. This feature is especially useful for debugging and testing. The seed parameter has no maximum value for the API, and in the Console, its maximum value is 9999. Leaving the seed value blank in the Console, or null in the API, disables this feature. Warning: The seed parameter might not produce the same result in the long run, because model updates in the OCI Generative AI service might invalidate the seed.
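The following sketch shows how these parameters map to fields on a chat request in the OCI Python SDK and extends the on-demand example earlier on this page. The field names (for example, seed) reflect the current SDK and could differ between versions, and the values are illustrative, not recommendations.

```python
import oci

models = oci.generative_ai_inference.models

# Illustrative values only; plug this request into the ChatDetails from the
# on-demand sketch earlier on this page.
chat_request = models.GenericChatRequest(
    api_format=models.BaseChatRequest.API_FORMAT_GENERIC,
    messages=[
        models.Message(
            role="USER",
            content=[models.TextContent(text="Summarize the key features of this model.")],
        )
    ],
    max_tokens=600,         # Maximum output tokens
    temperature=0.7,        # Level of randomness
    top_p=0.75,             # Consider the top 75 percent of cumulative probability
    top_k=-1,               # -1 tells Meta Llama models to consider all tokens
    frequency_penalty=0.0,  # 0 disables the penalty on frequently repeated tokens
    presence_penalty=0.0,   # 0 disables the penalty on already-used tokens
    seed=42,                # Best-effort deterministic sampling (assumed field name)
)
```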