Editing a Model
You can edit (update) some Data Science model options.
If you added metadata to a model, you can edit the provenance and taxonomy. You can't edit the input and output schemas.
You can edit the model name and description; all other options can't be changed. You can change a model by loading it back into a notebook session, making changes, and then saving it as a new model, as shown in the following sketch.
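A minimal sketch of that workflow, assuming the oracle-ads library in a notebook session; the model OCID, artifact paths, and display name are placeholders, and GenericModel is only one way to load and re-save a model:

import ads
from ads.model.generic_model import GenericModel

ads.set_auth("resource_principal")  # notebook sessions can authenticate with resource principals

# Load an existing model catalog entry back into the notebook session.
model = GenericModel.from_model_catalog(
    model_id="ocid1.datasciencemodel.oc1..<unique_id>",  # placeholder OCID
    model_file_name="model.pkl",                         # placeholder artifact file name
    artifact_dir="./edited_model_artifact",
)

# ...retrain the estimator or edit score.py, runtime.yaml, and metadata here...

# Saving creates a new model catalog entry; the original model is unchanged.
new_model_id = model.save(display_name="my-model-v2")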
- Use the Console to sign in to a tenancy with the necessary policies.
- Open the navigation menu and click Analytics & AI. Under Machine Learning, click Data Science.
- Select the compartment that contains the project with the model. All projects in the compartment are listed.
- Click the name of the project. The project details page opens and lists the notebook sessions.
- Under Resources, click Models. A tabular list of models in the compartment is displayed.
- Click the name of the model. The model details page opens.
- Click Edit.
- (Optional) Change the name, description, or version label.
- (Optional) In the Model provenance box, click Select.
- Select Notebook session or Job run, depending on where the model was trained.
- Find the notebook session or job run that the model was trained with by using one of the following options:
- Choose a project:
- Select the name of the project to use in the selected compartment. The selected compartment applies to both the project and the notebook session or job run, so both must be in the same compartment. If they aren't, use the OCID search instead. You can change the compartment for both the project and the notebook session or job run.
- Select the notebook session or job run that the model was trained with.
- OCID search:
- If the notebook session or job run is in a different compartment than the project, enter the OCID of the notebook session or job run that the model was trained in.
- Select the notebook session or job run that the model was trained with.
- (Optional) Click Show advanced options to identify Git and model training information. Enter or select any of the following values:
- Git repository URL: The URL of the remote Git repository.
- Git commit: The commit ID of the Git repository.
- Git branch: The name of the branch.
- Local model directory: The directory path where the model artifact was temporarily stored, for example a path in a notebook session or a directory on a local computer.
- Model training script: The name of the Python script or notebook that the model was trained with.
Tip
You can also populate model provenance metadata when you save a model to the model catalog using the OCI SDKs or the CLI (see the sketch after this procedure).
- Click Select.
- (Optional) In the Model taxonomy box, click Select to specify what the model does, the machine learning framework, and hyperparameters, or to create custom metadata that documents the model.
Important
The maximum allowed size for all the model metadata is 32000 bytes. The size is a combination of the preset model taxonomy and the custom attributes.
- In the Model taxonomy section, enter or select the following preset labels:
- Use case: The type of machine learning use case.
- Model framework: The Python library used to train the model.
- Model framework version: The version of the machine learning framework. This is a free text value, for example, 2.3.
- Model algorithm or model estimator object: The algorithm or model instance class used. This is a free text value, for example, sklearn.ensemble.RandomForestRegressor.
- Model hyperparameters: The hyperparameters of the model in JSON format.
- Artifact test results: The JSON output of the introspection test results run on the client side. These tests are included in the model artifact boilerplate code, and you can optionally run them before saving the model to the model catalog.
To create custom label and value attribute pairs, enter the following:
- Label: The key label of the custom metadata.
- Value: The value attached to the key.
- Category: (Optional) The category of the metadata, chosen from options that include performance, training profile, training and validation datasets, training environment, and other. You can use the category to group and filter custom metadata in the Console, which is useful when you have a large amount of custom metadata to track.
- Description: (Optional) A description of the custom metadata.
- Click Select.
- (Optional) Click Show Advanced Options to change tags.
- (Optional) Enter the tag namespace (for a defined tag), key, and value to assign tags to the resource. To add more than one tag, click Add tag. Tagging describes the various tags that you can use to organize and find resources, including cost-tracking tags.
- Click Save changes.
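As noted in the Tip, you can set model provenance metadata programmatically instead of in the Console. The following is a minimal sketch, assuming the OCI Python SDK (oci) with a configured ~/.oci/config profile; the OCIDs, repository URL, and Git values are placeholders:

import oci

client = oci.data_science.DataScienceClient(oci.config.from_file())

# The provenance fields mirror the advanced options in the Console.
provenance = oci.data_science.models.CreateModelProvenanceDetails(
    repository_url="https://example.com/team/repo.git",  # placeholder Git repository URL
    git_branch="main",
    git_commit="abc1234",
    script_dir="/home/datascience/project",               # local model directory
    training_script="train_model.py",                      # model training script
    training_id="ocid1.datasciencenotebooksession.oc1..<unique_id>",  # notebook session or job run OCID
)

client.create_model_provenance(
    model_id="ocid1.datasciencemodel.oc1..<unique_id>",
    create_model_provenance_details=provenance,
)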
Use the oci data-science model update command and required parameters to edit (update) a model:
oci data-science model update --model-id <model-id> ... [OPTIONS]
For a complete list of flags and variable options for CLI commands, see the CLI Command Reference.
Use the UpdateModel operation to edit (update) a model.
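For example, a minimal sketch of calling UpdateModel through the OCI Python SDK; the model OCID, names, tags, and metadata values are placeholders, and the metadata category and defined metadata key shown are assumptions to verify against the API reference:

import oci
from oci.data_science.models import Metadata, UpdateModelDetails

client = oci.data_science.DataScienceClient(oci.config.from_file())

details = UpdateModelDetails(
    display_name="my-model-v2",
    description="Retrained with the latest data refresh",
    freeform_tags={"team": "data-science"},
    # Custom metadata; all model metadata combined must stay under 32000 bytes.
    custom_metadata_list=[
        Metadata(
            key="accuracy",
            value="0.91",
            category="Performance",  # assumed category label
            description="Validation accuracy of the retrained model",
        )
    ],
    # Preset model taxonomy entries use fixed keys; "Hyperparameters" is an
    # assumed key name with a JSON string value.
    defined_metadata_list=[
        Metadata(key="Hyperparameters", value='{"n_estimators": 100}')
    ],
)

response = client.update_model(
    model_id="ocid1.datasciencemodel.oc1..<unique_id>",
    update_model_details=details,
)
print(response.data.display_name)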