Data Handling in Generative AI

Learn how OCI Generative AI handles user data.

Does OCI Generative AI retain customer-provided training data used to fine-tune a custom model?

No. Customers store and manage their training data in their own tenancy, commonly in an OCI Object Storage bucket. The OCI Generative AI fine-tuning job uses this data to train a custom model for the customer, and doesn't retain the data beyond the duration of the training job. The training data is used solely to build that customer's custom model, which is itself a resource managed by the customer. The training data is not used to improve OCI Generative AI for general use cases.

Does OCI Generative AI retain customer-provided prompts and inputs used for inferencing on the large language models (LLMs)?

No, OCI Generative AI doesn't retain customer inputs. A user's input on an inference call is sent to the LLM, and the response that the LLM generates is returned to the user. Neither the input nor the output is stored inside OCI Generative AI.

Does OCI Generative AI share prompts and responses, fine-tuning training data, or fine-tuned custom models with third-party model providers such as Cohere or Meta?

No.
Is the training data encrypted?

The data is encrypted both at rest and in transit. The training data is deleted from the fine-tuning cluster as soon as the training job is completed.

Training data for fine-tuning a model is encrypted at rest with Oracle-managed AES-256 encryption by default, and can optionally be double encrypted with customer-managed keys through the OCI Vault service. Customers can delete the data at any time.
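To illustrate the layered model described above, here is a minimal sketch of double encryption using the Python `cryptography` library. This is a conceptual example only, not Oracle's implementation; the key names `oracle_key` and `customer_key` are hypothetical stand-ins for the Oracle-managed key and an OCI Vault customer-managed key.

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def encrypt(key: bytes, plaintext: bytes) -> bytes:
    """AES-256-GCM encrypt; prepend the random 12-byte nonce to the ciphertext."""
    nonce = os.urandom(12)
    return nonce + AESGCM(key).encrypt(nonce, plaintext, None)

def decrypt(key: bytes, blob: bytes) -> bytes:
    """Split off the 12-byte nonce and decrypt the remainder."""
    return AESGCM(key).decrypt(blob[:12], blob[12:], None)

# Hypothetical keys: one service-managed, one customer-managed.
oracle_key = AESGCM.generate_key(bit_length=256)
customer_key = AESGCM.generate_key(bit_length=256)

data = b"fine-tuning training example"
# First layer: service-managed AES-256 encryption.
once = encrypt(oracle_key, data)
# Optional second layer: customer-managed key.
twice = encrypt(customer_key, once)

# Decryption unwraps the layers in reverse order.
assert decrypt(oracle_key, decrypt(customer_key, twice)) == data
```

Because the layers are independent, revoking or deleting the customer-managed key alone is enough to make the doubly encrypted data unrecoverable.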

Oracle encrypts all data in motion with TLS 1.2.
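A client connecting to the service can enforce the same transport floor on its own side. The sketch below uses Python's standard-library `ssl` module to build a context that refuses any protocol older than TLS 1.2:

```python
import ssl

# Build a client-side TLS context and set TLS 1.2 as the minimum
# accepted protocol version, matching the in-transit encryption
# floor described above.
context = ssl.create_default_context()
context.minimum_version = ssl.TLSVersion.TLSv1_2

print(context.minimum_version.name)  # TLSv1_2
```

Passing this context to an HTTPS client ensures the handshake fails rather than silently downgrading to an older protocol.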
