Model Catalog

Learn how to work with the Data Science model catalog.

The model catalog is a centralized and managed repository of model artifacts. Models stored in the model catalog can be shared across members of a team, and they can be loaded back into a notebook session. Models in the model catalog can also be deployed as HTTP endpoints using model deployments.
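
For example, here's a hedged sketch of loading a saved model back into a notebook session with ADS (oracle-ads). The model OCID and artifact directory are placeholders, and the exact arguments can vary by ADS version.

```python
# Hypothetical sketch: load a model from the model catalog into a notebook session.
import ads
from ads.model.generic_model import GenericModel

ads.set_auth("resource_principal")  # use the notebook session's resource principal

model = GenericModel.from_model_catalog(
    model_id="ocid1.datasciencemodel.oc1..<unique_id>",  # placeholder OCID
    artifact_dir="./downloaded_artifact",
)

# Run score.py locally against a sample payload to check that the artifact works.
prediction = model.verify({"input": [[5.1, 3.5, 1.4, 0.2]]})
```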

A model entry in the model catalog has two components:

  • A model artifact is a zip archive that includes the saved model object, a Python script with instructions for using the model for inference (score.py), and a file documenting the runtime environment of the model (runtime.yaml). You can obtain artifact, score.py, and runtime.yaml examples from GitHub; a minimal score.py sketch follows this list.

  • Metadata about the provenance of the model, including Git-related information and the script or notebook used to push the model to the catalog. You can document the resource that the model was trained in (a notebook session or a job run) and the Git reference to the training source code. This metadata is automatically extracted from your notebook session environment when you save your model artifact with ADS.
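
For illustration, here's a minimal score.py sketch. The model file name (model.joblib), the joblib serialization, and the payload format are assumptions; the artifact templates on GitHub contain framework-specific versions.

```python
# score.py -- minimal sketch of the inference script included in a model artifact.
import os
import joblib

MODEL_FILE_NAME = "model.joblib"  # placeholder for your serialized model object

def load_model():
    """Deserialize and return the model object stored alongside this script."""
    model_dir = os.path.dirname(os.path.realpath(__file__))
    return joblib.load(os.path.join(model_dir, MODEL_FILE_NAME))

def predict(data, model=load_model()):
    """Return predictions for a JSON-serializable payload."""
    # 'data' is assumed to be a dict such as {"input": [[...feature values...]]};
    # the exact payload format is up to you and should match your input schema.
    prediction = model.predict(data["input"]).tolist()
    return {"prediction": prediction}
```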

Model artifacts stored in the model catalog are immutable by design. Any change you want to apply to a model requires creating a new model. Immutability prevents unwanted changes and ensures that any model in production can be traced back to the exact artifact behind the model's predictions.

Important

Artifacts have a maximum size limit of 100 MB when saved from the Console. This size limit doesn't apply to ADS, the OCI SDKs, or the CLI. Large-model artifacts are supported up to 400 GB.

Documenting Models

You can use these options to document how you trained the model, the use case, and the necessary prediction features.

Note

ADS automatically populates the provenance and taxonomy on your behalf when you save a model with ADS.

Provenance

Model provenance is documentation that helps you improve model reproducibility and auditability. You can document the resource that the model was trained in (a notebook session or a job run) and the Git reference to the training source code. These parameters are automatically extracted when you save a model with the ADS SDK.

When you're working inside a Git repository, ADS can obtain Git information and populate the model provenance metadata fields automatically for you.
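
As an illustration, here's a hedged sketch of saving a trained model with ADS from a notebook session. The estimator, artifact directory, and conda environment slug are assumptions; when you run this inside a Git repository, ADS fills in the provenance metadata for you.

```python
# Hypothetical sketch: save a trained estimator to the model catalog with ADS.
import ads
from ads.model.generic_model import GenericModel

ads.set_auth("resource_principal")

# my_trained_estimator is a placeholder for your trained model object.
model = GenericModel(estimator=my_trained_estimator, artifact_dir="./model_artifact")
model.prepare(
    inference_conda_env="generalml_p38_cpu_v1",  # assumed conda environment slug
    force_overwrite=True,
)

# save() creates the model catalog entry; the Git commit, branch, repository URL,
# and the training resource OCID are extracted automatically as provenance metadata.
model.save(display_name="my-model")
```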

Taxonomy

Taxonomy lets you describe the model you're saving to the model catalog. You can use preset fields to document the:

  • Machine learning use case

  • Machine learning model framework

  • Version

  • Estimator object

  • Hyperparameters

  • Artifact test results

Or, you can create custom metadata.
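
As an example, here's a hedged sketch of documenting taxonomy with the OCI Python SDK. The OCIDs and metadata values are placeholders, and the model artifact itself is uploaded in a separate call.

```python
# Hypothetical sketch: document model taxonomy with the OCI Python SDK.
import oci

config = oci.config.from_file()
data_science = oci.data_science.DataScienceClient(config)

create_model_details = oci.data_science.models.CreateModelDetails(
    compartment_id="ocid1.compartment.oc1..<unique_id>",       # placeholder OCID
    project_id="ocid1.datascienceproject.oc1..<unique_id>",    # placeholder OCID
    display_name="my-model",
    # Preset (defined) metadata fields documenting the taxonomy.
    defined_metadata_list=[
        oci.data_science.models.Metadata(key="UseCaseType", value="binary_classification"),
        oci.data_science.models.Metadata(key="Framework", value="scikit-learn"),
        oci.data_science.models.Metadata(key="FrameworkVersion", value="1.3.0"),
        oci.data_science.models.Metadata(key="Hyperparameters", value='{"max_depth": 5}'),
    ],
    # Free-form custom metadata.
    custom_metadata_list=[
        oci.data_science.models.Metadata(key="training_dataset", value="sales_2023.csv"),
    ],
)
model = data_science.create_model(create_model_details).data
```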

Model Introspection Tests

Introspection, in the context of machine learning models, is a series of tests and checks run on a model artifact to verify all aspects of the model's operational health. These tests target the score.py and runtime.yaml files with the goal of capturing common errors and issues in the model artifact. Introspection test results are part of the predefined model metadata. If you save your model using the Console, you can store the test results in JSON format in the Artifact Test Results field when you select Document model taxonomy. If you save the model using the OCI Python SDK, use the ArtifactTestResults metadata key.

As part of our model artifact template, we included a Python script that contains a series of introspection test definitions. These tests are optional and you can run them before saving the model to the model catalog. You can then save the test results as part of the model metadata to display in the OCI Console.
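
For example, here's a hedged sketch of attaching introspection results with the OCI Python SDK. The results file name is a placeholder for the output of the introspection script shipped with the artifact template.

```python
# Hypothetical sketch: store introspection test results under the
# ArtifactTestResults metadata key with the OCI Python SDK.
import json
import oci

# Placeholder file holding the JSON output of the introspection script.
with open("introspection_results.json") as f:
    test_results = json.load(f)

artifact_test_results = oci.data_science.models.Metadata(
    key="ArtifactTestResults",
    value=json.dumps(test_results),
)
# Include this Metadata entry in defined_metadata_list when calling create_model,
# as in the taxonomy example above.
```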

Our Data Science blog contains more information about using model introspection.

Model Input and Output Schemas

The schema definition is a description of the features that are necessary to make a successful model prediction. The schema definition is a contract that defines the input payload that clients of the model must provide. In this release of the model catalog, the input and output schema definitions are used for documentation purposes only. Schemas are in JSON format.

You might want to define both schemas. At a minimum, an input schema is needed for any model predictions.

The output schema might not always be necessary. For example, when the model returns a simple floating-point value, there's little value in defining a schema for such a simple output. You could convey that information in the model's description instead.
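
Continuing the earlier ADS sketch, here's one hedged way to derive both schemas from sample data when preparing the artifact. X_train and y_train are placeholder training data, and the parameter names are assumptions based on ADS's schema inference.

```python
# Hypothetical sketch: let ADS infer the input and output schemas from sample data.
model.prepare(
    inference_conda_env="generalml_p38_cpu_v1",  # assumed conda environment slug
    X_sample=X_train[:5],   # a few rows of training features
    y_sample=y_train[:5],   # the matching target values
    force_overwrite=True,
)

print(model.schema_input)   # JSON description of the required input features
print(model.schema_output)  # JSON description of the prediction output
```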