Fine-Tune Meta's Llama 3.3 Instruct Model in OCI Generative AI

OCI Generative AI now supports fine-tuning Meta's pretrained 70-billion-parameter Llama 3.3 Instruct model. This text-only model outperforms both Llama 3.1 70B and Llama 3.2 90B on text tasks.

You can fine-tune the Llama 3.3 Instruct model on your own dataset using the Low-Rank Adaptation (LoRA) method. For more information, see Fine-Tuning the Base Models.
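Before starting a fine-tuning job, you prepare a training dataset. OCI Generative AI fine-tuning typically accepts JSONL input, where each line is a JSON object containing a prompt/completion pair. The sketch below is illustrative only: the helper function names are hypothetical and are not part of any OCI SDK, and you should confirm the exact dataset schema against the service documentation.

```python
import json

# Hypothetical helper (not an OCI SDK API): write fine-tuning examples
# as JSONL, one {"prompt": ..., "completion": ...} object per line,
# the general shape expected for LoRA fine-tuning datasets.
def write_training_jsonl(examples, path):
    with open(path, "w", encoding="utf-8") as f:
        for prompt, completion in examples:
            f.write(json.dumps({"prompt": prompt, "completion": completion}) + "\n")

# Hypothetical helper: sanity-check the file before uploading it.
# Returns the number of valid records; raises on a malformed line.
def validate_training_jsonl(path):
    count = 0
    with open(path, encoding="utf-8") as f:
        for lineno, line in enumerate(f, start=1):
            record = json.loads(line)
            if not {"prompt", "completion"} <= record.keys():
                raise ValueError(f"line {lineno}: missing prompt/completion")
            count += 1
    return count

examples = [
    ("Summarize: OCI now supports Llama 3.3 fine-tuning.",
     "OCI Generative AI adds LoRA fine-tuning for the Llama 3.3 70B model."),
]
write_training_jsonl(examples, "train.jsonl")
print(validate_training_jsonl("train.jsonl"))  # prints 1
```

Validating locally catches malformed lines early, before the dataset is uploaded to Object Storage and referenced by the fine-tuning job.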

For information about the service, see the Generative AI documentation.
