Creating and Saving a Model with the Console
Create a model in the Console and save it directly to the model catalog.
To document a model, you must prepare the metadata before you create and save it.
This task involves creating a model, adding metadata, defining the training environment, specifying predictions schemas, and saving the model to the model catalog.
If you're saving a model trained elsewhere or want to use the Console, use these steps to save a model:
- Use the Console to sign in to a tenancy with the necessary policies.
- Open the navigation menu and click Analytics & AI. Under Machine Learning, click Data Science.
- Select the compartment that contains the project that you want to save the model in.
All projects in the compartment are listed.
- Click the name of the project.
The project details page opens and lists the notebook sessions.
- Under Resources, click Models.
A tabular list of models in the compartment is displayed.
- Create a model artifact zip archive on your local machine containing the score.py and runtime.yaml files (and any other files needed to run your model). Click Download sample artifact zip to get sample files that you can change to create your model artifact.
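The archive layout matters: the model catalog expects the artifact files at the root of the zip, not inside a wrapping folder. A minimal sketch of building the archive with Python's standard zipfile module (the placeholder file contents are assumptions; in practice, start from the downloaded sample artifact):

```python
import zipfile
from pathlib import Path

# Placeholder artifact files; in practice, edit the score.py and
# runtime.yaml from the downloaded sample artifact for your model.
artifact_dir = Path("model_artifact")
artifact_dir.mkdir(exist_ok=True)
(artifact_dir / "score.py").write_text("# model loading and predict logic go here\n")
(artifact_dir / "runtime.yaml").write_text("# inference runtime configuration\n")

# Write each file at the root of the archive (arcname=f.name),
# with no wrapping top-level folder.
zip_path = Path("model_artifact.zip")
with zipfile.ZipFile(zip_path, "w", zipfile.ZIP_DEFLATED) as zf:
    for f in sorted(artifact_dir.iterdir()):
        zf.write(f, arcname=f.name)

print(sorted(zipfile.ZipFile(zip_path).namelist()))
```

Using `arcname=f.name` strips the `model_artifact/` directory prefix, so unzipping the archive yields the files directly.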
- Click Create model.
- Select the compartment to contain the model.
- Enter a unique name (limit of 255 characters). If you don't provide a name, a name is automatically generated.
- (Optional) Enter a description (limit of 400 characters) for the model.
- In the Upload model artifact box, click Select to upload the model artifact archive (a zip file).
- Drag the zip file into the Upload an artifact file box, and then click Upload.
- (Optional) In the Model version set box, click Select, and then configure with an existing version set or create a new set.
- In the Model provenance box, click Select.
- Select Notebook session or Job run depending on where the model was trained.
Find the notebook session or job run that the model was trained with by using one of the following options:
- Choose a project:
Select the name of the project to use in the selected compartment.
The selected compartment applies to both the project and the notebook session or job run, so both must be in the same compartment. If they aren't, use the OCID search instead. You can change the compartment for both the project and the notebook session or job run.
Select the notebook session or job run that the model was trained with.
- OCID search:
If the notebook session or job run is in a different compartment than the project, enter the OCID of the notebook session or job run that the model was trained in.
Select the notebook session or job run that the model was trained with.
Click Show advanced options to identify Git and model training information.
Enter or select any of the following values:
- Git repository URL
The URL of the remote Git repository.
- Git commit
The commit ID of the Git repository.
- Git branch
The name of the branch.
- Local model directory
The directory path where the model artifact was temporarily stored. For example, this could be a path in a notebook session or a directory on a local computer.
- Model training script
The name of the Python script or notebook that the model was trained with.
You can also populate model provenance metadata when you save a model to the model catalog using the OCI SDKs or the CLI.
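The advanced-option fields above correspond to the provenance metadata you can also set programmatically. A minimal sketch of the same fields as a plain payload (the key names are illustrative labels mirroring the Console fields, not a confirmed OCI SDK or CLI schema; all values are placeholders):

```python
import json

# Provenance fields mirroring the Console's advanced options. Key names
# and values are placeholders, not a confirmed SDK/CLI parameter shape.
provenance = {
    "repository_url": "https://example.com/team/model-repo.git",  # Git repository URL
    "git_commit": "0123abc",                   # Git commit ID (placeholder)
    "git_branch": "main",                      # Git branch name
    "script_dir": "/home/datascience/model",   # local model directory (placeholder path)
    "training_script": "train.py",             # model training script
}
print(json.dumps(provenance, indent=2))
```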
- Click Select.
- In the Model taxonomy box, click Select to specify what the model does, the machine learning framework, and hyperparameters, or to create custom metadata to document the model.
The maximum allowed size for all the model metadata is 32000 bytes. The size is a combination of the preset model taxonomy and the custom attributes.
In the Model taxonomy section, enter or select the following preset labels:
- Use case
The type of machine learning use case.
- Model framework
The Python library you used to train the model.
- Model framework version
The version of the machine learning framework. This is a free text value. For example, the value could be 2.3.
- Model algorithm or model estimator object
The algorithm used or the model instance class. This is a free text value. For example, the value could be sklearn.ensemble.RandomForestRegressor.
- Model hyperparameters
The hyperparameters of the model in JSON format.
- Artifact test results
The JSON output of the introspection test results run on the client side. These tests are included in the model artifact boilerplate code. Optionally, you can run them before saving the model to the model catalog.
In the Custom attributes section, create custom label and value attribute pairs:
- Label
The key label of your custom metadata.
- Value
The value attached to the key.
- Category
(Optional) The category of the metadata, from choices including training and validation datasets. You can use the category to group and filter custom metadata displayed in the Console. This is useful when you have a large number of custom metadata entries to track.
- Description
(Optional) Enter a unique description of the custom metadata.
- Click Select.
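Because the 32000-byte cap covers the preset taxonomy and the custom attributes together, it can help to check the combined size before entering the values. A standard-library sketch (the key names and values below are illustrative examples, not an authoritative metadata schema):

```python
import json

# Illustrative preset taxonomy values (examples from this doc where
# available); the key names are not an authoritative list.
preset_taxonomy = {
    "use_case": "regression",
    "framework": "scikit-learn",
    "framework_version": "2.3",
    "algorithm": "sklearn.ensemble.RandomForestRegressor",
    "hyperparameters": json.dumps({"n_estimators": 100, "max_depth": 8}),
}

# One custom label/value pair with the optional category and description.
custom_metadata = [
    {
        "key": "training-dataset",
        "value": "training_data_v2.csv",  # placeholder dataset name
        "category": "training and validation datasets",
        "description": "Dataset the model was trained on",
    }
]

# The preset taxonomy and custom attributes share a 32000-byte limit.
total_bytes = len(json.dumps(preset_taxonomy).encode("utf-8")) + len(
    json.dumps(custom_metadata).encode("utf-8")
)
print(f"combined metadata size: {total_bytes} bytes (limit 32000)")
```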
- In the Document model input and output data schema box, click Select to document the model predictions. You define the features that the model requires to make a successful prediction, and the input and output schemas that describe the data the model accepts and the predictions that it returns (as defined in the score.py file).
The maximum allowed file size for the combined input and output schemas is 32000 bytes.
- Drag your input schema JSON file into the Upload an input schema box.
- Drag your output schema JSON file into the Upload an output schema box.
- Click Select.
You can only document the input and output data schemas when you create the model. You can't edit the schemas after the model is created.
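Each schema is a JSON file describing the model's features. A sketch of writing two such files and checking them against the combined 32000-byte limit (the field names inside each schema entry are assumptions modeled on typical feature descriptions, not the catalog's authoritative format):

```python
import json
from pathlib import Path

# Illustrative input schema describing one feature; the exact field names
# expected by the model catalog are an assumption here.
input_schema = {
    "schema": [
        {
            "name": "sepal_length",
            "dtype": "float64",
            "feature_type": "Continuous",
            "required": True,
            "description": "Sepal length in cm",
        }
    ]
}

# Illustrative output schema describing the prediction returned by score.py.
output_schema = {
    "schema": [
        {
            "name": "species",
            "dtype": "object",
            "feature_type": "Category",
            "required": True,
            "description": "Predicted species label",
        }
    ]
}

Path("input_schema.json").write_text(json.dumps(input_schema, indent=2))
Path("output_schema.json").write_text(json.dumps(output_schema, indent=2))

# The combined size of both schema files must stay under 32000 bytes.
combined = (
    Path("input_schema.json").stat().st_size
    + Path("output_schema.json").stat().st_size
)
print(f"combined schema size: {combined} bytes (limit 32000)")
```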
- (Optional) Click Show Advanced Options to add tags.
- (Optional) Enter the tag namespace (for a defined tag), key, and value to assign tags to the resource.
Models stored in the model catalog can also be deployed using model deployment.