Retiring the Models
OCI Generative AI retires its large language models (LLMs) based on each model's type and serving mode. The LLMs serve user requests in either an on-demand mode or a dedicated mode. Review the following sections to learn about deprecation and retirement timelines and to decide which serving mode works best for you.
- Retirement for On-Demand Mode: When a model is retired in the on-demand mode, it's no longer available for use in the Generative AI service playground or through the Generative AI inference API (see the sketch after this list).
- Retirement for Dedicated Mode: When a model is retired in the dedicated mode, you can no longer create a dedicated AI cluster for that model, but an active dedicated AI cluster that's already running it continues to run. A custom model that was created on a retired base model also remains available on active dedicated AI clusters, and you can continue to create new dedicated AI clusters that host such a custom model. However, Oracle offers limited support for these scenarios, and Oracle engineering might ask you to upgrade to a supported model to resolve issues related to your model. To request that a model remain available beyond its retirement date in the dedicated mode, create a support ticket.
- Deprecation: When a model is deprecated, it remains available in the Generative AI service, but only for a defined period before it's retired. This period is longer in the dedicated mode.
All models that were supported for the text generation and summarization APIs (including the playground) are now retired.
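If your application calls a model through the inference API, it's worth handling the failure path for a model that has since been retired. The following is a minimal sketch, assuming the OCI Python SDK (`oci`) and placeholder values for the region endpoint, compartment OCID, and model name; it isn't official sample code, and the exact error returned for a retired model isn't specified here.

```python
# Hedged sketch (not an official sample): a Chat API call with the OCI Python
# SDK, falling back gracefully when the requested on-demand model can't be
# served (for example, because it has been retired). The endpoint region,
# compartment OCID, and model name are placeholders.
import oci

config = oci.config.from_file()  # reads ~/.oci/config by default

client = oci.generative_ai_inference.GenerativeAiInferenceClient(
    config,
    service_endpoint="https://inference.generativeai.us-chicago-1.oci.oraclecloud.com",
)

chat_details = oci.generative_ai_inference.models.ChatDetails(
    compartment_id="ocid1.compartment.oc1..example",  # placeholder OCID
    serving_mode=oci.generative_ai_inference.models.OnDemandServingMode(
        model_id="cohere.command-r-08-2024"  # placeholder on-demand model name
    ),
    chat_request=oci.generative_ai_inference.models.CohereChatRequest(
        message="Summarize the model retirement policy in one sentence.",
        max_tokens=200,
    ),
)

try:
    response = client.chat(chat_details)
    print(response.data)
except oci.exceptions.ServiceError as err:
    # A request against a retired or otherwise unavailable on-demand model
    # surfaces here as a service error; retry with a suggested replacement.
    print(f"Chat request failed ({err.status}): {err.message}")
```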
- On-Demand Mode
  - You pay as you go for each inference call when you chat in the playground or when you call the Chat API; the sketch after this list shows how a request selects each serving mode.
  - Available only for the Pretrained Foundational Models offered in Generative AI.
- Dedicated Mode
  - You get a dedicated set of GPUs for your dedicated AI clusters.
  - You can create custom models on the dedicated AI clusters by fine-tuning a subset of the Pretrained Foundational Models in Generative AI.
  - You can host replicas of the foundational and fine-tuned models on your dedicated AI clusters.
  - You commit in advance to a certain number of hours of dedicated AI cluster use. For prices, see the pricing page.
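To make the distinction concrete, the following hedged sketch shows how a Chat API request selects a serving mode with the OCI Python SDK: on-demand requests name a pretrained model directly, while dedicated requests reference the endpoint of a dedicated AI cluster. The OCIDs, the model name, and the `chat_details_for` helper are illustrative assumptions, not part of the service documentation.

```python
# Hedged sketch: the same ChatDetails payload can target either serving mode;
# only the serving_mode object changes. All OCIDs and the model name are
# placeholders, and the helper below is illustrative, not part of the SDK.
from oci.generative_ai_inference.models import (
    ChatDetails,
    DedicatedServingMode,
    OnDemandServingMode,
)

COMPARTMENT_ID = "ocid1.compartment.oc1..example"  # placeholder OCID


def chat_details_for(serving_mode, chat_request):
    """Build ChatDetails once so the serving mode can be swapped freely."""
    return ChatDetails(
        compartment_id=COMPARTMENT_ID,
        serving_mode=serving_mode,
        chat_request=chat_request,
    )


# On-demand mode: name a pretrained foundational model directly; billing is
# per inference call.
on_demand_mode = OnDemandServingMode(model_id="meta.llama-3.3-70b-instruct")

# Dedicated mode: point at the endpoint of your own dedicated AI cluster,
# which can host a pretrained or fine-tuned (custom) model.
dedicated_mode = DedicatedServingMode(
    endpoint_id="ocid1.generativeaiendpoint.oc1..example"
)
```

In this sketch, moving from a retiring on-demand model to a replacement model or to a dedicated endpoint changes only the serving mode object, not the rest of the request.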
The following table shows the retirement dates for models supported for the on-demand serving mode.
| Model | Release Date | Retirement Date | Suggested Replacement Options |
|---|---|---|---|
| meta.llama-3.3-70b-instruct | 2025-02-07 | At least one month after the release of the 1st replacement model. | Tentative |
| cohere.command-r-08-2024 | 2024-11-14 | At least one month after the release of the 1st replacement model. | Tentative |
| cohere.command-r-plus-08-2024 | 2024-11-14 | At least one month after the release of the 1st replacement model. | Tentative |
| meta.llama-3.2-90b-vision-instruct | 2024-11-14 | At least one month after the release of the 1st replacement model. | Tentative |
| meta.llama-3.1-405b-instruct | 2024-09-19 | At least one month after the release of the 1st replacement model. | Tentative |
| meta.llama-3.1-70b-instruct | 2024-09-19 | 2025-03-28 | |
| cohere.command-r-plus | 2024-06-18 | 2025-01-16 | cohere.command-r-plus-08-2024 |
| cohere.command-r-16k | 2024-06-04 | 2025-01-16 | cohere.command-r-08-2024 |
| cohere.embed-english-v3.0 | 2024-02-07 | At least 6 months after the release of the 1st replacement model. | Tentative |
| cohere.embed-multilingual-v3.0 | 2024-02-07 | At least 6 months after the release of the 1st replacement model. | Tentative |
| cohere.embed-english-light-v3.0 | 2024-02-07 | At least 6 months after the release of the 1st replacement model. | Tentative |
| cohere.embed-multilingual-light-v3.0 | 2024-02-07 | At least 6 months after the release of the 1st replacement model. | Tentative |
| meta.llama-3-70b-instruct | 2024-06-04 | 2024-11-12 | |
| cohere.command | 2024-02-07 | 2024-10-02 | |
| cohere.command-light | 2024-02-07 | 2024-10-02 | |
| meta.llama-2-70b-chat | 2024-01-22 | 2024-10-02 | |
These deprecation and retirement dates might change in the future.
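Because the dates above can shift, it can help to check programmatically which base models are currently listed for your tenancy and what their lifecycle state is. Below is a minimal sketch using the Generative AI management API in the OCI Python SDK; the compartment OCID is a placeholder, and the `time_deprecated` attribute is treated as optional because its availability per model is an assumption.

```python
# Hedged sketch (not an official sample): list the base models visible to a
# compartment with the Generative AI management API and print each model's
# lifecycle state. The compartment OCID is a placeholder, and time_deprecated
# is read defensively because its presence per model is an assumption.
import oci

config = oci.config.from_file()
client = oci.generative_ai.GenerativeAiClient(config)

models = client.list_models(compartment_id="ocid1.compartment.oc1..example")

for model in models.data.items:
    deprecated = getattr(model, "time_deprecated", None)
    note = f" (deprecated {deprecated})" if deprecated else ""
    print(f"{model.display_name}: {model.lifecycle_state}{note}")
```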
If you need a model in the dedicated serving mode to remain available beyond its retirement date, create a support ticket.
The following table shows the retirement dates for models supported for the dedicated serving mode.
| Model | Release Date | Retirement Date | Suggested Replacement Options |
|---|---|---|---|
| meta.llama-3.3-70b-instruct | 2025-02-07 | At least 6 months after the release of the 1st replacement model. | Tentative |
| cohere.command-r-08-2024 | 2024-11-14 | At least 6 months after the release of the 1st replacement model. | Tentative |
| cohere.command-r-plus-08-2024 | 2024-11-14 | At least 6 months after the release of the 1st replacement model. | Tentative |
| meta.llama-3.2-11b-vision-instruct | 2024-11-14 | At least 6 months after the release of the 1st replacement model. | Tentative |
| meta.llama-3.2-90b-vision-instruct | 2024-11-14 | At least 6 months after the release of the 1st replacement model. | Tentative |
| meta.llama-3.1-405b-instruct | 2024-09-19 | At least 6 months after the release of the 1st replacement model. | Tentative |
| meta.llama-3.1-70b-instruct | 2024-09-19 | No sooner than 2025-08-07 | |
| cohere.command-r-plus | 2024-06-18 | 2025-05-14 | cohere.command-r-plus-08-2024 |
| cohere.command-r-16k | 2024-06-04 | 2025-05-14 | cohere.command-r-08-2024 |
| cohere.embed-english-v3.0 | 2024-02-07 | At least 6 months after the release of the 1st replacement model. | Tentative |
| cohere.embed-multilingual-v3.0 | 2024-02-07 | At least 6 months after the release of the 1st replacement model. | Tentative |
| cohere.embed-english-light-v3.0 | 2024-02-07 | At least 6 months after the release of the 1st replacement model. | Tentative |
| cohere.embed-multilingual-light-v3.0 | 2024-02-07 | At least 6 months after the release of the 1st replacement model. | Tentative |
| meta.llama-3-70b-instruct | 2024-06-04 | No sooner than 2025-03-19 | |
| cohere.command | 2024-02-07 | No sooner than 2025-01-18 | |
| cohere.command-light | 2024-02-07 | No sooner than 2025-01-04 | |
| meta.llama-2-70b-chat | 2024-01-22 | 2025-03-07 | |
These deprecation and retirement dates might change in the future.
The Generative AI service strives to quickly mitigate any security issues and fix bugs that affect the supported pretrained foundational models. Check the release notes to learn whether you need to migrate to a different version.