Scenario: Creating Dimensions for a Monitoring Target
Learn how to create dimensions for a custom metric generated by a connector. Send log data from the Logging service to metrics (Monitoring service) using Connector Hub.
This scenario involves creating a connector to generate a custom metric with dimensions referencing log data. Use this connector to move log data from Logging to Monitoring. After the data is moved, you can filter the new custom metrics using the dimensions created by the connector.
Required IAM Policy
If you're a member of the Administrators group, you already have the required access to complete this scenario. Otherwise, you need access to Monitoring.
When needed, the workflow for creating the connector includes a default policy that provides permission to write to the target service. If you're new to policies, see Getting Started with Policies and Common Policies.
Goal
This topic describes the goal of this scenario.
The goal of this scenario is to filter update events for Object Storage buckets. For example, find updates that changed buckets to public access. Finding public buckets can help prevent leakage of secrets. In addition to public access type, this scenario sets up filters for bucket name, compartment name, availability domain, versioning status, and a static value.
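Once the connector is in place, you can filter the custom metric by any of these dimensions, either in the Console (Metrics Explorer) or through the Monitoring API. The following Python sketch uses the OCI SDK to query the metric namespace and metric name defined later in this scenario, filtered by the static mytags dimension; the compartment OCID is a placeholder, and the query is an illustration rather than a required step.

# Sketch: query the custom metric, filtered by the static dimension created
# by the connector. Replace the compartment OCID placeholder with your own.
import datetime
import oci

config = oci.config.from_file()                      # default SDK/CLI config
monitoring = oci.monitoring.MonitoringClient(config)

details = oci.monitoring.models.SummarizeMetricsDataDetails(
    namespace="bucket_events",
    query='update[1m]{mytags = "buckets-from-connector"}.count()',
    start_time=datetime.datetime.now(datetime.timezone.utc) - datetime.timedelta(hours=1),
    end_time=datetime.datetime.now(datetime.timezone.utc),
)

response = monitoring.summarize_metrics_data(
    compartment_id="ocid1.compartment.oc1..exampleuniqueid",   # placeholder
    summarize_metrics_data_details=details,
)

# Each result carries the dimensions (bucketName, mytags, and so on) that you
# can use to narrow down which buckets were updated.
for item in response.data:
    print(item.dimensions)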
Setting Up This Scenario
This topic describes the tasks involved in setting up this scenario.
This scenario creates a metric from a log using the Connector Hub service. Setting up this scenario involves the following tasks:
Create a connector to move logs from Logging to a custom metric with dimensions in Monitoring.
While this scenario uses the _Audit log group and the bucket update event, you can use the same approach with any log available in your tenancy.
Metric namespace: bucket_events
Metric name: update
Static value buckets-from-connector (dimension name: mytags)
Extracted values using paths, as described in the steps that follow (for a scripted sketch of this configuration, see the example after the following note)
Note
Each new dimension value creates a new metric stream. To avoid generating too many unique metric streams, which could potentially result in throttling, we recommend excluding GUIDs or UUIDs (such as compartment OCIDs) from the dimensions.
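The same source and target settings can also be scripted. The following Python sketch uses the OCI SDK's Connector Hub (sch) models to create a connector with the metric namespace, metric name, static dimension, and one JMESPath dimension from this scenario. The OCIDs and display name are placeholders, and the model names (CreateServiceConnectorDetails, LoggingSourceDetails, MonitoringTargetDetails, DimensionDetails, StaticDimensionValue, JmesPathDimensionValue) are assumptions to confirm against your SDK version.

# Sketch: create a Logging -> Monitoring connector with dimensions.
# OCIDs are placeholders; confirm the sch model names against your SDK version.
import oci

config = oci.config.from_file()
sch_client = oci.sch.ServiceConnectorClient(config)

details = oci.sch.models.CreateServiceConnectorDetails(
    display_name="bucket-update-metrics",                              # name of your choice
    compartment_id="ocid1.compartment.oc1..exampleuniqueid",           # placeholder
    source=oci.sch.models.LoggingSourceDetails(
        log_sources=[
            oci.sch.models.LogSource(
                compartment_id="ocid1.compartment.oc1..exampleuniqueid",  # placeholder
                log_group_id="_Audit",   # audit logs, as used in this scenario
            )
        ],
    ),
    # A filter task limiting the source to bucket update events is omitted here.
    target=oci.sch.models.MonitoringTargetDetails(
        compartment_id="ocid1.compartment.oc1..exampleuniqueid",       # placeholder
        metric_namespace="bucket_events",
        metric="update",
        dimensions=[
            oci.sch.models.DimensionDetails(
                name="mytags",
                dimension_value=oci.sch.models.StaticDimensionValue(
                    value="buckets-from-connector"
                ),
            ),
            oci.sch.models.DimensionDetails(
                name="bucketName",
                dimension_value=oci.sch.models.JmesPathDimensionValue(
                    path="logContent.data.additionalDetails.bucketName"
                ),
            ),
        ],
    ),
)

response = sch_client.create_service_connector(details)
# Track progress through the returned work request ID (if present) or watch
# the connector's state in the Console.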
Next, create custom dimensions to tag the log data with the static value "buckets-from-connector" and to extract bucket name, compartment name, compartment OCID, availability domain, public access type, and versioning status.
Select Add dimensions.
The Add dimensions panel appears.
Extract the bucket name from the log data (dimension name bucketName):
Under Select path, browse the available log data for the bucketName path.
The six latest rows of log data are retrieved from the log specified under Configure source.
Example fragment of log data, showing the bucketName path:
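Because the retrieved rows depend on activity in your tenancy, the following abbreviated fragment is only illustrative: field values are placeholders, and fields not relevant to bucketName are omitted.

{
  "datetime": 1700000000000,
  "logContent": {
    "type": "com.oraclecloud.objectstorage.updatebucket",
    "data": {
      "eventName": "UpdateBucket",
      "additionalDetails": {
        "bucketName": "example-bucket"
      }
    }
  }
}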
If no log data is available, then you can manually enter a path value with a custom dimension name under Edit path. The path must start with logContent, using either dot (.) or index ([]) notation. Dot and index are the only supported JMESPath selectors. For example:
logContent.data (dot notation)
logContent.data[0].content (index notation)
Example path for the bucket update event, using dot notation: logContent.data.additionalDetails.bucketName
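To check a path before saving the dimension, you can evaluate the same expression against a saved log record with the jmespath Python library, which supports the dot and index selectors shown above. This is only a local validation aid; the record below reuses the abbreviated fragment from the earlier example.

# Sketch: validate dimension paths locally using the jmespath library.
import jmespath

record = {
    "logContent": {
        "data": {
            "additionalDetails": {"bucketName": "example-bucket"}
        }
    }
}

# Dot notation, matching the path used for the bucketName dimension.
print(jmespath.search("logContent.data.additionalDetails.bucketName", record))
# -> example-bucket

# Index notation applies when a field holds a list.
batch = {"logContent": {"data": [{"content": "first-entry"}]}}
print(jmespath.search("logContent.data[0].content", batch))
# -> first-entry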
Image: an example of a selected path (bucketName) and an unselected path (eTag).
Under Edit path, the following fields are automatically populated from your selected path. You can optionally edit the default Dimension name.

Dimension name | Value
bucketName | logContent.data.additionalDetails.bucketName
Repeat the extraction for each additional value that you want to use as a dimension (filter): Under Select path, select the check box for the path corresponding to the Dimension name in the following table. The Value is automatically populated from your selected path. You can optionally edit the default Dimension name.
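If you script the connector instead, each additional dimension follows the same pattern as bucketName, with its own JMESPath expression. The paths in the sketch below are assumptions based on typical Audit event fields; confirm the exact paths by browsing your own log data under Select path before using them.

# Sketch: additional JMESPath dimensions, extending the earlier connector
# example. The paths are assumptions; verify them against your log data.
import oci

extra_dimensions = [
    oci.sch.models.DimensionDetails(
        name="compartmentName",
        dimension_value=oci.sch.models.JmesPathDimensionValue(
            path="logContent.data.compartmentName"   # assumed path
        ),
    ),
    oci.sch.models.DimensionDetails(
        name="publicAccessType",
        dimension_value=oci.sch.models.JmesPathDimensionValue(
            path="logContent.data.additionalDetails.publicAccessType"   # assumed path
        ),
    ),
]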