Creating and Saving a Model with the Console

Create a model in the Console and save it directly to the model catalog.

To document a model, you must prepare the metadata before you create and save it.

This task involves creating a model, adding metadata, defining the training environment, specifying prediction schemas, and saving the model to the model catalog.

Important

  • We recommend that you create and save models to the model catalog programmatically instead, either by using ADS or the OCI Python SDK.

  • You can use ADS to create large models. Large models have artifacts between 2 and 6 GB.
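As a hedged sketch of the programmatic path, the following assembles the create-model fields as a plain dictionary and enforces the name and description length limits described below. The OCIDs are placeholders, and the OCI Python SDK calls are shown only as comments because they require real credentials.

```python
# Sketch: the fields the model catalog expects when creating a model.
# The OCIDs below are placeholders, not real resources.

def build_create_model_payload(compartment_id, project_id, display_name,
                               description=None):
    """Assemble create-model fields, enforcing the Console's length limits."""
    if len(display_name) > 255:
        raise ValueError("display name is limited to 255 characters")
    if description is not None and len(description) > 400:
        raise ValueError("description is limited to 400 characters")
    payload = {
        "compartment_id": compartment_id,
        "project_id": project_id,
        "display_name": display_name,
    }
    if description is not None:
        payload["description"] = description
    return payload

payload = build_create_model_payload(
    compartment_id="ocid1.compartment.oc1..exampleuniqueID",
    project_id="ocid1.datascienceproject.oc1..exampleuniqueID",
    display_name="my-model",
    description="Model trained in a notebook session",
)

# With the OCI Python SDK (not run here; requires credentials), the same
# fields feed oci.data_science.models.CreateModelDetails, and the artifact
# zip is then uploaded with DataScienceClient.create_model_artifact.
```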

If you're saving a model trained elsewhere or want to use the Console, use these steps to save a model:

  1. Use the Console to sign in to a tenancy with the necessary policies.
  2. Open the navigation menu and click Analytics & AI. Under Machine Learning, click Data Science.
  3. Select the compartment that contains the project that you want to save the model in.

    All projects in the compartment are listed.

  4. Click the name of the project.

    The project details page opens and lists the notebook sessions.

  5. Under Resources, click Models.

    A tabular list of models in the compartment is displayed.

  6. Create a model artifact zip archive on your local machine containing the score.py and runtime.yaml files (and any other files needed to run your model). Click Download sample artifact zip to get sample files that you can change to create your model artifact.
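The artifact archive in this step can be built with a short script. This is a minimal sketch: the score.py and runtime.yaml contents below are placeholders, not a working model, and only illustrate that both files must sit at the root of the zip.

```python
# Build a model artifact zip containing score.py and runtime.yaml.
# The file contents below are minimal placeholders, not a working model.
import os
import tempfile
import zipfile

def make_artifact_zip(artifact_dir, zip_path):
    """Zip every file under artifact_dir, stored relative to the archive root."""
    with zipfile.ZipFile(zip_path, "w", zipfile.ZIP_DEFLATED) as zf:
        for root, _dirs, files in os.walk(artifact_dir):
            for name in files:
                full = os.path.join(root, name)
                # score.py and runtime.yaml must sit at the top of the archive.
                zf.write(full, os.path.relpath(full, artifact_dir))
    return zip_path

# Example usage with placeholder files:
artifact_dir = tempfile.mkdtemp()
with open(os.path.join(artifact_dir, "score.py"), "w") as f:
    f.write("def predict(data, model=None):\n    return data\n")
with open(os.path.join(artifact_dir, "runtime.yaml"), "w") as f:
    f.write("MODEL_ARTIFACT_VERSION: '3.0'\n")

# Write the zip outside artifact_dir so it is not swept into itself.
zip_path = make_artifact_zip(
    artifact_dir, os.path.join(tempfile.mkdtemp(), "model_artifact.zip"))
```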
  7. Click Create model.
  8. Select the compartment to contain the model.
  9. (Optional) Enter a unique name (limit of 255 characters). If you don't provide a name, a name is automatically generated.

    For example, model20200108222435.

  10. (Optional) Enter a description (limit of 400 characters) for the model.
  11. In the Upload model artifact box, click Select to upload the model artifact archive (a zip file).
    1. Drag the zip file into the Upload an artifact file box, and then click Upload.
  12. (Optional) In the Model version set box, click Select, and then configure with an existing version set or create a new set.
  13. (Optional) In the Model provenance box, click Select.
    1. Select Notebook session or Job run, depending on where the model was trained.
    2. Find the notebook session or job run that the model was trained with by using one of the following options:
      Choose a project:

      Select the name of the project to use in the selected compartment.

      The selected compartment applies to both the project and the notebook session or job run, so both must be in the same compartment. If they aren't in the same compartment, use the OCID search instead. You can change the compartment for both the project and the notebook session or job run.

      OCID search:

      If the notebook session or job run is in a different compartment than the project, then enter the notebook session or job run OCID that you trained the model in.

    3. Select the notebook session or job run that the model was trained with.
    4. (Optional) Click Show advanced options to identify Git and model training information.

      Enter or select any of the following values:

      Git repository URL

      The URL of the remote Git repository.

      Git commit

      The commit ID of the Git repository.

      Git branch

      The name of the branch.

      Local model directory

      The directory path where the model artifact was temporarily stored, for example, a path in a notebook session or a directory on a local computer.

      Model training script

      The name of the Python script or notebook that the model was trained with.

      Tip

      You can also populate model provenance metadata when you save a model to the model catalog using the OCI SDKs or the CLI.

    5. Click Select.
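The provenance fields in this step can also be supplied programmatically, as the Tip above notes. The sketch below collects them as a plain dictionary; the field names mirror the Console form, the values are placeholders, and the SDK call is shown only as a comment.

```python
# Sketch: model provenance fields matching the Console form.
# All values below are placeholder examples.

def build_provenance(repository_url=None, git_branch=None, git_commit=None,
                     script_dir=None, training_script=None, training_id=None):
    """Collect provenance metadata, dropping fields that were not supplied."""
    fields = {
        "repository_url": repository_url,
        "git_branch": git_branch,
        "git_commit": git_commit,
        "script_dir": script_dir,            # local model directory
        "training_script": training_script,
        "training_id": training_id,          # notebook session or job run OCID
    }
    return {k: v for k, v in fields.items() if v is not None}

provenance = build_provenance(
    repository_url="https://example.com/repo.git",
    git_branch="main",
    git_commit="abc1234",
    training_script="train.py",
)

# With the OCI Python SDK (not run here), these fields map onto
# oci.data_science.models.CreateModelProvenanceDetails, passed to
# DataScienceClient.create_model_provenance for the saved model.
```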
  14. (Optional) In the Model taxonomy box, click Select to specify what the model does, machine learning framework, hyperparameters, or to create custom metadata to document the model.
    Important

    The maximum allowed size for all the model metadata is 32000 bytes. The size is a combination of the preset model taxonomy and the custom attributes.

    1. In the Model taxonomy section, enter or select the following preset values:

      Model taxonomy
      Use case

      The type of machine learning use case.

      Model framework

      The Python library you used to train the model.

      Model framework version

      The version of the machine learning framework. This is a free text value. For example, the value could be 2.3.

      Model algorithm or model estimator object

      The algorithm used or model instance class. This is a free text value. For example, sklearn.ensemble.RandomForestRegressor could be the value.

      Model hyperparameters

      The hyperparameters of the model in JSON format.

      Artifact test results

      The JSON output of the introspection test results run on the client side. These tests are included in the model artifact boilerplate code. You can optionally run them before saving the model to the model catalog.

      Create custom label and value attribute pairs
      Label

      The key label of your custom metadata.

      Value

      The value attached to the key.

      Category

      (Optional) The category of the metadata. Choices include:

      • performance

      • training profile

      • training and validation datasets

      • training environment

      • other

      You can use the category to group and filter the custom metadata displayed in the Console. This is useful when you track a large number of custom metadata entries.

      Description

      (Optional) Enter a unique description of the custom metadata.

    2. Click Select.
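Because the preset taxonomy and custom attributes share the 32000-byte limit, it can help to check the combined size before saving. This is a sketch under assumptions: the metadata values are placeholders, and measuring the JSON-serialized size is an approximation of how the limit is applied.

```python
# Check that model metadata stays under the 32000-byte limit before saving.
# The values below are placeholders; the serialized-size measurement is an
# approximation of how the combined limit is applied.
import json

METADATA_BYTE_LIMIT = 32000

defined_metadata = [
    {"key": "UseCaseType", "value": "regression"},
    {"key": "Framework", "value": "scikit-learn"},
    {"key": "FrameworkVersion", "value": "1.4"},
    {"key": "Algorithm", "value": "sklearn.ensemble.RandomForestRegressor"},
    {"key": "Hyperparameters", "value": json.dumps({"n_estimators": 200})},
]

custom_metadata = [
    {"key": "owner", "value": "data-science-team", "category": "other",
     "description": "Team responsible for the model"},
]

def metadata_size_ok(defined, custom, limit=METADATA_BYTE_LIMIT):
    """Measure the serialized size of all metadata against the byte limit."""
    combined = json.dumps(defined + custom)
    return len(combined.encode("utf-8")) <= limit

ok = metadata_size_ok(defined_metadata, custom_metadata)
```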
  15. (Optional) In the Document model input and output data schema box, click Select to document the model's predictions. Define an input schema that describes the features the model requires to make a successful prediction, and an output schema that describes the predictions returned by the model (defined in the score.py file with the predict() function).
    Important

    The maximum allowed file size for the combined input and output schemas is 32000 bytes.

    1. Drag your input schema JSON file into the Upload an input schema box.
    2. Drag your output schema JSON file into the Upload an output schema box.
    3. Click Select.
    Important

    You can only document the input and output data schemas when you create the model. You can't edit the schemas after the model is created.
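A minimal sketch of the two schema files and the combined-size check follows. The field layout (name, dtype, required, description) and the feature names are illustrative assumptions, not a mandated schema format.

```python
# Sketch: minimal input/output schema documents and the combined-size check.
# The field layout and feature names below are illustrative assumptions.
import json

input_schema = {
    "schema": [
        {"name": "sepal_length", "dtype": "float64", "required": True,
         "description": "Sepal length in cm"},
        {"name": "sepal_width", "dtype": "float64", "required": True,
         "description": "Sepal width in cm"},
    ]
}

output_schema = {
    "schema": [
        {"name": "species", "dtype": "object",
         "description": "Predicted class label"},
    ]
}

# The combined size of both schema files must stay under 32000 bytes.
combined_bytes = (len(json.dumps(input_schema).encode("utf-8"))
                  + len(json.dumps(output_schema).encode("utf-8")))
```

Each dictionary would be saved as its own JSON file and dragged into the corresponding upload box in this step.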

  16. (Optional) Click Show Advanced Options to add tags.
  17. (Optional) Enter the tag namespace (for a defined tag), key, and value to assign tags to the resource.

    To add more than one tag, click Add tag.

    Tagging describes the various tags that you can use to organize and find resources, including cost-tracking tags.

  18. Click Create.
    Note

    Models stored in the model catalog can also be deployed using model deployment.