Viewing a Model's Details
The model information details tab shows the model details, such as the model artifact, model directory, Git details, training script, description, created by, created on, OCID, and so on.
The Git, model directory, and model training script metadata give contextual information about the source code used to train the model stored in the catalog. Diligent tracking of provenance metadata improves model reproducibility and auditability. All of these fields are optional.
From a Model Information view, you can:
- View the description, creator, creation date and time, and the model artifact file name and size.
  For an OCID, use Show to see the full name of the user that created the notebook session, and Copy to copy the name to the clipboard for use elsewhere.
- Select a resource such as Model Taxonomy or Associated Model Deployments.
  The default view is the model's provenance.
- Use the oci data-science model get command and required parameters to view a model's details:

  oci data-science model get --model-id <model-id> ... [OPTIONS]

  For a complete list of flags and variable options for CLI commands, see the CLI Command Reference.
Use the GetModel operation to view model details.
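The GetModel operation returns the model details as JSON. The following is a minimal sketch of pulling out the fields the details view surfaces, using only the standard library; the response body and its exact field names here are illustrative examples, not captured from a live call, so check them against the API reference.

```python
import json

# Illustrative GetModel-style response; a real response contains more fields,
# and the exact names should be confirmed in the API reference.
response_body = """
{
  "id": "ocid1.datasciencemodel.oc1..exampleuniqueID",
  "displayName": "my-model",
  "createdBy": "ocid1.user.oc1..exampleuniqueID",
  "timeCreated": "2023-01-15T10:00:00.000Z",
  "lifecycleState": "ACTIVE"
}
"""

model = json.loads(response_body)

# Collect the detail fields shown in the model information view.
details = {key: model[key] for key in ("displayName", "createdBy", "timeCreated")}
print(details["displayName"])  # prints my-model
```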
Viewing Model Provenance
When model provenance has been defined, you can view how the model was trained.
The Training resource tab shows the notebook session or job run that trained the model. You can select either one to manage it.
The Model training source code tab shows the Git details, model script, and the training script.
You can view any tags that are defined by clicking the Tags tab.
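Conceptually, the provenance shown here is one record per model built from the Git details and training source fields. The following is a minimal sketch of such a record; the field names are modeled on what the console displays and are illustrative, not the service's actual schema.

```python
from dataclasses import dataclass, asdict

@dataclass
class ModelProvenance:
    # Illustrative field names modeled on the console's Git details;
    # provenance metadata is optional, hence the empty defaults.
    repository_url: str = ""
    git_branch: str = ""
    git_commit: str = ""
    script_dir: str = ""
    training_script: str = ""
    training_id: str = ""  # OCID of the notebook session or job run

provenance = ModelProvenance(
    repository_url="https://example.com/repo.git",
    git_branch="main",
    git_commit="abc1234",
    training_script="train.py",
)

# Fields left empty were simply not recorded when the model was saved.
recorded = {k: v for k, v in asdict(provenance).items() if v}
```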
Viewing Model Taxonomy
When model taxonomy has been defined, you can view the taxonomy description and any defined custom attributes. You can show all the hyperparameters and copy them for use elsewhere.
You can view any tags that are defined by clicking the Tags tab.
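Copying the hyperparameters for use elsewhere amounts to serializing them; a minimal sketch, where the hyperparameter names and values are assumed examples rather than anything the taxonomy mandates:

```python
import json

# Illustrative hyperparameters as they might appear under model taxonomy;
# the names and values are assumptions for this sketch.
hyperparameters = {
    "learning_rate": 0.01,
    "max_depth": 6,
    "n_estimators": 100,
}

# "Copy them to use elsewhere": round-trip through JSON so the values can be
# pasted into another training run's configuration.
copied = json.dumps(hyperparameters, indent=2, sort_keys=True)
restored = json.loads(copied)
```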
Viewing Associated Model Deployments
If there are associated model deployments, then they're listed so that you can select a deployment to manage it.
You can view any tags that are defined by clicking the Tags tab.
Viewing Model Introspection
When model introspection has been defined, you can view the results of the tests that ran on the client side before the model was saved to the model catalog by clicking score.py or runtime.yaml. The status of each use case test is shown as success, failed, or not tested. We recommend that all tests are successful before you save the model.
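Deciding whether a model is ready to save from its introspection results reduces to a scan over per-test statuses. The following is a minimal sketch using the three statuses shown in the console; the result structure itself is illustrative, not the service's actual output format.

```python
# Each introspection test ends in one of three statuses, as shown in the console.
results = [
    {"test": "score_py_exists", "status": "success"},
    {"test": "runtime_yaml_valid", "status": "success"},
    {"test": "input_schema_defined", "status": "not tested"},
]

def ready_to_save(results):
    """Recommend saving only when every introspection test succeeded."""
    return all(r["status"] == "success" for r in results)

# One test was skipped, so the recommendation is to fix that before saving.
print(ready_to_save(results))  # prints False
```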
Click the Custom Model Attributes tab to see the label, value, category, and description of any custom attributes that were created for the model.
You can view any tags that are defined by clicking the Tags tab.
Viewing Model Schemas
When input and output model schemas have been defined, the contents of the uploaded files are displayed in separate fields. You can review and copy the contents.
You can view any tags that are defined by clicking the Tags tab.
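An input schema like the one displayed can be used to sanity-check a scoring payload before sending it to the model. The following is a minimal sketch with an assumed two-feature schema; real schema files carry more detail (feature ordering, domains, and so on) than this simplified name-to-type mapping.

```python
# Illustrative input schema in the spirit of what the model catalog displays;
# the feature names and types are assumptions for this sketch.
input_schema = {
    "sepal_length": "float",
    "species": "string",
}

def matches_schema(record, schema):
    """Check that a payload has exactly the schema's features with the right types."""
    type_map = {"float": float, "string": str}
    if set(record) != set(schema):
        return False
    return all(isinstance(record[name], type_map[t]) for name, t in schema.items())

payload = {"sepal_length": 5.1, "species": "setosa"}
print(matches_schema(payload, input_schema))  # prints True
```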