Meta Llama 3 (70B)

The meta.llama-3-70b-instruct model is retired.

Important

The meta.llama-3-70b-instruct model is now retired. See Retiring the Models for suggested replacement models.

About Retired Models

Retirement for On-Demand Mode

When a model is retired in the on-demand mode, it's no longer available for use in the Generative AI service playground or through the Generative AI Inference API.

Retirement for Dedicated Mode

When a model is retired in the dedicated mode, you can no longer create a dedicated AI cluster for that model, but an active dedicated AI cluster that's already running the retired model continues to run. A custom model that's based on a retired model also remains available on active dedicated AI clusters, and you can continue to create new dedicated AI clusters for a custom model that was created on a retired model. However, Oracle offers limited support for these scenarios, and Oracle engineering might ask you to upgrade to a supported model to resolve issues related to your model.

To request that a model stay available past its retirement date in the dedicated mode, create a support ticket.

Available in These Regions

If you're running this model on a dedicated AI cluster, the cluster is in one of these regions:

  • Brazil East (Sao Paulo)
  • Germany Central (Frankfurt)
  • UK South (London)
  • US Midwest (Chicago)

Key Features

  • Model Size: 70 billion parameters
  • Context Length: 8,000 tokens (Maximum prompt + response length: 8,000 tokens for each run.)
  • Knowledge: Has broad general knowledge and handles tasks ranging from generating ideas and refining text analysis to drafting written content, such as emails, blog posts, and descriptions.

On-Demand Mode

The meta.llama-3-70b-instruct model is retired and therefore isn't available in the on-demand mode.

Dedicated AI Cluster for the Model

To reach a model through a dedicated AI cluster in any listed region, you must create an endpoint for that model on a dedicated AI cluster. If you created a dedicated AI cluster for this model, here is the information about the cluster:

Base Model
  • Model Name: Meta Llama 3
  • OCI Model Name: meta.llama-3-70b-instruct (retired)
Fine-Tuning Cluster
  • Unit Size: Large Generic
  • Required Units: 2
Hosting Cluster
  • Unit Size: Large Generic
  • Required Units: 1
Pricing Page Information
  • Pricing Page Product Name: Large Meta - Dedicated
  • For Hosting, Multiply the Unit Price: x2
  • For Fine-Tuning, Multiply the Unit Price: x4
Request Cluster Limit Increase
  • Limit Name: dedicated-unit-llama2-70-count
  • For Hosting, Request Limit Increase by: 2
  • For Fine-Tuning, Request Limit Increase by: 4
Note

  • Hosting the Meta Llama 3 model on a dedicated AI cluster uses 2 unit counts of the service limit, dedicated-unit-llama2-70-count.
  • Fine-tuning the Meta Llama 3 model on a dedicated AI cluster uses 4 unit counts of the service limit, dedicated-unit-llama2-70-count.
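
For example, hosting this model on one dedicated AI cluster while fine-tuning it on another requires a dedicated-unit-llama2-70-count limit of at least 2 + 4 = 6.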

Endpoint Rules for Clusters

  • A dedicated AI cluster can hold up to 50 endpoints.
  • The endpoints on a cluster act as aliases that must all point either to the same base model or to the same version of a custom model, not a mix of both types.
  • Creating several endpoints for the same model makes it easy to assign them to different users or purposes.
Hosting Cluster Unit Size: Large Generic
Endpoint Rules:
  • Base model: To run the meta.llama-3-70b-instruct model on several endpoints, create as many endpoints as you need on a Large Generic cluster.
  • Custom model: The same applies to a custom model that's built on top of meta.llama-3-70b-instruct: create the required number of endpoints on a Large Generic cluster.
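
For reference, here's a minimal sketch of calling such an endpoint with the OCI Python SDK's Generative AI inference client. The endpoint and compartment OCIDs, the region in the service endpoint URL, and the parameter values are placeholders, and the class and field names follow the SDK's generic chat API; verify them against the current SDK documentation before use.

import oci

# Read credentials from the default OCI config file (~/.oci/config).
config = oci.config.from_file()

# The inference service endpoint for the region that hosts your cluster (assumed).
client = oci.generative_ai_inference.GenerativeAiInferenceClient(
    config,
    service_endpoint="https://inference.generativeai.us-chicago-1.oci.oraclecloud.com",
)

# Meta Llama models use the generic chat request format.
chat_request = oci.generative_ai_inference.models.GenericChatRequest(
    messages=[
        oci.generative_ai_inference.models.UserMessage(
            content=[
                oci.generative_ai_inference.models.TextContent(
                    text="Draft a short product description for a hiking backpack."
                )
            ]
        )
    ],
    max_tokens=600,       # maximum output tokens
    temperature=0.2,      # low randomness
    top_p=0.75,
    seed=42,              # best-effort deterministic sampling
)

chat_details = oci.generative_ai_inference.models.ChatDetails(
    compartment_id="ocid1.compartment.oc1..example",          # placeholder
    serving_mode=oci.generative_ai_inference.models.DedicatedServingMode(
        endpoint_id="ocid1.generativeaiendpoint.oc1..example"  # placeholder
    ),
    chat_request=chat_request,
)

response = client.chat(chat_details)
print(response.data.chat_response.choices[0].message.content[0].text)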

Cluster Performance Benchmarks

Review the Meta Llama 3 (70B) cluster performance benchmarks for different use cases.

Release and Retirement Dates

Model: meta.llama-3-70b-instruct
  • Release Date: 2024-06-04
  • On-Demand Retirement Date: 2024-11-12
  • Dedicated Mode Retirement Date: 2025-08-07
Important

For a list of all model timelines and retirement details, see Retiring the Models.

Model Parameters

To change the model responses, you can change the values of the following parameters in the playground or the API.

Maximum output tokens

The maximum number of tokens that you want the model to generate for each response. Estimate four characters per token. Because you're prompting a chat model, the response depends on the prompt and each response doesn't necessarily use up the maximum allocated tokens.
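
For example, at roughly four characters per token, a limit of 600 output tokens corresponds to about 2,400 characters of generated text.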

Temperature

The level of randomness used to generate the output text.

Tip

Start with the temperature set to 0 or a value below 1, and increase it as you regenerate the prompts for more creative output. High temperatures can introduce hallucinations and factually incorrect information.
Top p

A sampling method that controls the cumulative probability of the top tokens to consider for the next token. Assign p a decimal number between 0 and 1 for the probability. For example, enter 0.75 for the top 75 percent to be considered. Set p to 1 to consider all tokens.

Top k

A sampling method in which the model chooses the next token randomly from the top k most likely tokens. A high value for k generates more random output, which makes the output text sound more natural. The default value for k is 0 for Cohere Command models and -1 for Meta Llama models, which means that the model should consider all tokens and not use this method.
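
The following Python sketch is illustrative only, not the service's implementation: on a toy token distribution, it shows how temperature rescales the probabilities and how top k and top p then restrict which tokens can be sampled.

import math
import random

# Toy logits for five candidate next tokens.
logits = {"the": 2.0, "a": 1.5, "cat": 0.5, "dog": 0.3, "xylophone": -1.0}

def sample_next_token(logits, temperature=0.7, top_k=-1, top_p=1.0):
    # Temperature rescales the logits: lower values sharpen the distribution,
    # higher values flatten it and make the output more random.
    scaled = {t: l / max(temperature, 1e-6) for t, l in logits.items()}
    total = sum(math.exp(v) for v in scaled.values())
    probs = {t: math.exp(v) / total for t, v in scaled.items()}

    # Top k: keep only the k most likely tokens (-1 means keep all).
    ranked = sorted(probs.items(), key=lambda kv: kv[1], reverse=True)
    if top_k > 0:
        ranked = ranked[:top_k]

    # Top p: keep the smallest set of top tokens whose cumulative
    # probability reaches p (1.0 means keep all).
    kept, cumulative = [], 0.0
    for token, p in ranked:
        kept.append((token, p))
        cumulative += p
        if cumulative >= top_p:
            break

    # Renormalize and sample from the remaining tokens.
    norm = sum(p for _, p in kept)
    return random.choices([t for t, _ in kept],
                          weights=[p / norm for _, p in kept])[0]

print(sample_next_token(logits, temperature=0.2, top_k=3, top_p=0.75))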

Frequency penalty

A penalty that's assigned to a token when that token appears frequently. High penalties encourage fewer repeated tokens and produce a more random output.

For the Meta Llama family models, this penalty can be positive or negative. Positive numbers encourage the model to use new tokens and negative numbers encourage the model to repeat the tokens. Set to 0 to disable.

Presence penalty

A penalty that's assigned to each token when it appears in the output to encourage generating outputs with tokens that haven't been used.
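
As a rough illustration of how such penalties are commonly applied (an assumption; the service's exact formula isn't documented here), a token's logit can be reduced based on how often the token has already appeared in the output:

def apply_penalties(logit, count, frequency_penalty=0.0, presence_penalty=0.0):
    """Lower a token's logit based on its prior appearances in the output.

    count: how many times the token occurs in the output so far.
    frequency_penalty scales with the count; presence_penalty applies once
    as soon as the token has appeared at all. Positive values discourage
    repetition; negative values (allowed for Meta Llama models) encourage it.
    """
    return logit - frequency_penalty * count - presence_penalty * (1 if count > 0 else 0)

# A token that already appeared 3 times with frequency_penalty=0.5 and
# presence_penalty=0.2 loses 1.7 from its logit, making it less likely to recur.
print(apply_penalties(2.0, count=3, frequency_penalty=0.5, presence_penalty=0.2))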

Seed

A parameter that makes a best effort to sample tokens deterministically. When this parameter is assigned a value, the large language model aims to return the same result for repeated requests when you assign the same seed and parameters for the requests.

Allowed values are integers and assigning a large or a small seed value doesn't affect the result. Assigning a number for the seed parameter is similar to tagging the request with a number. The large language model aims to generate the same set of tokens for the same integer in consecutive requests. This feature is especially useful for debugging and testing. The seed parameter has no maximum value for the API, and in the Console, its maximum value is 9999. Leaving the seed value blank in the Console, or null in the API disables this feature.
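
For example, sending the same prompt twice with the seed set to 42 and all other parameters unchanged is intended to return the same response both times; changing the seed or leaving it blank removes that best-effort repeatability.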

Warning

The seed parameter might not produce the same result in the long run, because model updates in the OCI Generative AI service might invalidate the seed.