Overview of Connector Hub

Use the Connector Hub service to transfer data between services in Oracle Cloud Infrastructure.

Connector Hub is a cloud message bus platform that offers a single pane of glass for describing, executing, and monitoring interactions when moving data between Oracle Cloud Infrastructure services. Connector Hub was formerly known as Service Connector Hub.

Tip

Watch a video introduction to the service.
Data movement between services in Oracle Cloud Infrastructure.

Supported Targets for Each Source

Following are the supported targets for each source.

The Logging task is supported by the Logging source only. The Functions task is supported except where asterisked (*).

Source      Functions  Logging Analytics  Monitoring  Notifications  Object Storage  Streaming
Logging     ✔          ✔                  ✔           ✔              ✔               ✔
Monitoring  ✔          -                  -           -              ✔               ✔
Queue       ✔          -                  -           ✔*             ✔               ✔
Streaming   ✔          ✔                  -           ✔*             ✔               ✔

*The Functions task isn't supported for this source-target combination.

How Connector Hub Works

Connector Hub orchestrates data movement between services in Oracle Cloud Infrastructure.

Data is moved using connectors. A connector specifies the source service that contains the data to be moved, optional tasks, and the target service for delivery of data when tasks are complete. An optional task might be a function task to process data from the source or a log filter task to filter log data from the source.
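The source-task-target relationship can be sketched as a simple pipeline. The following is a minimal Python model for illustration only; the class and function names here are hypothetical and are not the Connector Hub API:

```python
from dataclasses import dataclass, field
from typing import Callable, Iterable

@dataclass
class Connector:
    """Conceptual model only: a source, optional tasks, and a target."""
    read_source: Callable[[], Iterable[dict]]      # e.g. read log entries
    tasks: list = field(default_factory=list)      # optional filter/function tasks
    write_target: Callable[[list], None] = print   # e.g. publish to a topic

    def run_once(self) -> list:
        batch = list(self.read_source())
        for task in self.tasks:                    # tasks run in order, if configured
            # A task may transform an entry or drop it by returning None.
            batch = [out for entry in batch if (out := task(entry)) is not None]
        self.write_target(batch)
        return batch

# Usage sketch: a log-filter-like task that keeps only ERROR entries.
delivered = []
conn = Connector(
    read_source=lambda: [{"level": "ERROR", "msg": "x"}, {"level": "INFO", "msg": "y"}],
    tasks=[lambda e: e if e["level"] == "ERROR" else None],
    write_target=delivered.extend,
)
conn.run_once()
# delivered now holds only the ERROR entry
```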

Speed of Data Movement

The connector reads data as soon as it is available. Aggregation or buffering might delay delivery to the target service. For example, metric data points need to be aggregated first.

While connectors run continuously, data is moved sequentially by individual move operations. The amount of data and the speed of each move operation depend on the connector configuration (source, task, and target) and the relevant service limits. Relevant service limits are determined by the services selected for the source, task, and target, in addition to limits for Connector Hub.

Example 1: Logging source, Notifications target (no task)

Example 1: Logging source, Notifications target (no task).

Callouts for Example 1
Number Description
1 Connector Hub reads log data from Logging.
2 Connector Hub writes the log data to the Notifications target service.
3 Notifications sends messages to all subscriptions in the configured topic.

Each move operation moves data from the log sources to the topic, within service limits, at a speed affected by the types of subscriptions in the selected topic. A single move operation in this scenario takes up to a few minutes.

Example 2: Streaming source, Functions task, Object Storage target

Example 2: Streaming source, Functions task, Object Storage target.
Callouts for Example 2
Number Description
1 Connector Hub reads stream data from Streaming.
2 Connector Hub triggers the Functions task for custom processing of stream data.
3 The task returns processed data to Connector Hub.
4 Connector Hub writes the stream data to the Object Storage target service.
5 Object Storage writes the stream data to a bucket.

Each move operation moves data from the selected stream to the function task and then to the bucket, within service limits and according to the size of each batch. (A batch is a list of entries received from the source or task service.) Batch size is configured in the task and target settings for this scenario. After Connector Hub receives stream data from the Streaming service, a single move operation moves a batch of that data to the function task according to the task's batch size configuration, and then moves the processed batch to the bucket according to the target's batch size configuration. The time required to receive, process, and move a stream in this scenario is up to 17 minutes, depending on the task and target batch size configurations.
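The batch flow in Example 2 can be sketched as two chunking steps: records are grouped by the task's batch size for processing, and the processed output is grouped by the target's batch size for delivery. This is an illustrative sketch with made-up sizes, not service defaults:

```python
def chunks(items, size):
    """Split a list into consecutive batches of at most `size` entries."""
    return [items[i:i + size] for i in range(0, len(items), size)]

def move_operation(records, process, deliver, task_batch_size, target_batch_size):
    """Sketch of one move operation: batch for the task, then batch for the target."""
    processed = []
    for batch in chunks(records, task_batch_size):     # one function invocation per batch
        processed.extend(process(batch))
    for batch in chunks(processed, target_batch_size): # one target write per batch
        deliver(batch)

# Usage sketch: 10 stream records, task batches of 4, target batches of 3.
delivered = []
move_operation(
    records=list(range(10)),
    process=lambda batch: [r * 2 for r in batch],      # stand-in for the function task
    deliver=delivered.append,
    task_batch_size=4,
    target_batch_size=3,
)
# delivered == [[0, 2, 4], [6, 8, 10], [12, 14, 16], [18]]
```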

Delivery Details

Note

For information about function processing of messages from queues, see Error Handling When Processing Queues.

Connector Hub follows "at least once" delivery. That is, when moving data to targets, connectors deliver each batch of data at least once.*

If a move operation fails, then the connector retries that operation. The connector doesn't move subsequent batches of data until the retried operation succeeds.

If the move operation continues to fail beyond the source's retention period, then that batch of data isn't delivered.

*The maximum message size for the Notifications target is 128 KB. Any message that exceeds the maximum size is dropped.
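The delivery rules above (retry each batch in order, give up once the data ages past the source's retention period, and drop oversized Notifications messages) can be sketched as follows. All names here are hypothetical and the logic is a simplification, not the service implementation:

```python
import time

NOTIFICATIONS_MAX_BYTES = 128 * 1024   # Notifications messages over 128 KB are dropped

def fits_notifications(message: bytes) -> bool:
    return len(message) <= NOTIFICATIONS_MAX_BYTES

def deliver_batches(batches, send, retention_seconds, now=time.time):
    """At-least-once sketch: each (created_at, batch) pair is retried in order;
    a batch that still fails when its data ages past the source's retention
    period is skipped, and later batches wait until the retry resolves."""
    for created_at, batch in batches:
        while True:
            if now() - created_at > retention_seconds:
                break                      # aged out of the source; not delivered
            try:
                send(batch)                # the same batch may be sent more than once
                break
            except Exception:
                continue                   # retry before moving to the next batch

# Usage sketch: one transient failure, then success; the second batch is too old.
attempts = {"failed": False}
sent = []
def flaky_send(batch):
    if not attempts["failed"]:
        attempts["failed"] = True
        raise RuntimeError("transient target error")
    sent.append(batch)

deliver_batches([(990.0, ["a"]), (0.0, ["expired"])], flaky_send,
                retention_seconds=60, now=lambda: 1000.0)
# sent == [["a"]]; the expired batch is dropped, matching the rule above
```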

For the Connector Hub source retention period, see the related documentation: Retention Period: Logging Source, Retention Period: Monitoring Source, Retention Period: Queue Source, or Retention Period: Streaming Source.

For certain failure conditions, a connector that continuously fails is automatically deactivated by the service team at Oracle Cloud Infrastructure. Such a long-term continuous failure can indicate invalid configuration of the connector's source or target.

Long Failing Connectors are Deactivated

Warning announcements, followed by automatic deactivation, occur for connectors that continuously fail:

  • After four consecutive days of continuous failure, Connector Hub sends a warning announcement indicating the possibility of a future deactivation and providing troubleshooting information.

  • After seven consecutive days of continuous failure, Connector Hub automatically deactivates the connector and sends an announcement indicating the deactivation.

You can troubleshoot a deactivated connector, update it to a valid configuration, and then reactivate it. Confirm that the newly reactivated connector is moving data as expected by checking the target service. To get details on the data flow from a connector's source to its target, enable logs for the connector.

Connector Hub Concepts

The following concepts are essential to working with Connector Hub.

connector

The definition of the data to be moved. A connector specifies a source service, target service, and optional tasks.

source

The service that contains the data to be moved according to specified tasks—for example, Logging.

target

The service that receives data from the source, according to specified tasks. A given target service processes, stores, or delivers received data—the Functions service processes the received data; the Logging Analytics, Monitoring, Object Storage, and Streaming services store the data; and the Notifications service delivers the data.

task

Optional filtering to apply to the data before moving it from the source service to the target service.

trigger

The condition that must be met for a connector to run. Currently, the trigger is continuous; that is, connectors run continuously.

Flow of Data

When a connector runs, it receives data from the source service, completes optional tasks on the data (such as filtering), and then moves the data to the target service.

Following are the supported targets and optional tasks for each available source, along with a description of the targets.

For examples, see Connector Hub Scenarios.

Logging Source

Select a Logging source to transfer log data from the Logging service.

For examples of connectors using Logging sources, see Scenario: Creating Dimensions for a Monitoring Target and other scenarios at Connector Hub Scenarios.

All targets are supported by a connector that's defined with a Logging source and optional task (Functions or Logging).

This image shows the targets supported by a connector that's defined with a Logging source and optional task.

Callouts for Logging source
Number Description
1 Connector Hub reads log data from Logging.
2 Optional: If configured, Connector Hub triggers one of the following tasks:
  • Functions task for custom processing of log data.
  • Log Filter task (Logging service) for filtering log data.
3 The task returns processed data to Connector Hub.
4 Connector Hub writes the log data to a target service.

Example of a connector that uses Logging as source, with Functions as task: Scenario: Sending Log Data to an Autonomous Database.

The retention period for the Logging source in Connector Hub is 24 hours. For more information about delivery, see Delivery Details.

If the first run of a new connector is successful, then it moves log data from the connector's creation time. If the first run fails (such as with missing policies), then after resolution the connector moves log data from the connector creation time or 24 hours before the current time, whichever is later.

Each later run moves the next log data. If a later run fails and resolution occurs within the 24-hour retention period, then the connector moves the next log data. If a later run fails and resolution occurs outside the 24-hour retention period, then the connector moves the latest log data, and any data generated between the failed run and that latest log data isn't delivered.
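The resume rule above (move log data from the connector's creation time or 24 hours before the current time, whichever is later) reduces to a single max() over timestamps. This is an illustrative sketch, not service code:

```python
from datetime import datetime, timedelta

LOGGING_RETENTION = timedelta(hours=24)  # Logging source retention in Connector Hub

def logging_start_position(created_at: datetime, resolved_at: datetime) -> datetime:
    """After a failed first run is resolved, log movement starts at the later of
    the connector's creation time and 24 hours before the current time."""
    return max(created_at, resolved_at - LOGGING_RETENTION)

# A connector created at 08:00 whose failures are fixed at 10:00 the next day
# starts from 10:00 minus 24 hours, because creation time is outside retention.
created = datetime(2024, 1, 1, 8, 0)
resolved = datetime(2024, 1, 2, 10, 0)
assert logging_start_position(created, resolved) == datetime(2024, 1, 1, 10, 0)
```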

Monitoring Source

Select a Monitoring source to transfer metric data points from the Monitoring service.

For an example of a connector using a Monitoring source, see Scenario: Sending Metrics to Object Storage.

The following targets are supported by a connector that's defined with a Monitoring source and (optional) Functions task: Functions, Object Storage, and Streaming.

This image shows the targets supported by a connector that's defined with a Monitoring source and optional task.

Callouts for Monitoring source
Number Description
1 Connector Hub reads metric data from Monitoring.
2 Optional: If configured, Connector Hub triggers the following task:
  • Functions task for custom processing of metric data.
3 The task returns processed data to Connector Hub.
4 Connector Hub writes the metric data to a target service.

The retention period for the Monitoring source in Connector Hub is 24 hours. For more information about delivery, see Delivery Details.

Queue Source

Select the Queue source to transfer messages from the Queue service.

The following targets are supported by a connector that's defined with a Queue source and (optional) Functions task:

  • Functions

  • Notifications*

    The Notifications target is supported except when using the Functions task.

  • Object Storage

  • Streaming

This image shows the targets supported by a connector that's defined with a Queue source and optional task.

Callouts for Queue source
Number Description
1 Connector Hub reads messages from Queue.
2 Optional: If configured, Connector Hub triggers the following task:
  • Functions task for custom processing of messages.
3 The task returns processed data to Connector Hub.
4 Connector Hub writes the messages to a target service, then automatically deletes the transferred messages from the queue.

For a scenario involving a target function, see Scenario: Sending Queue Messages to a Function.

The retention period for the Queue source in Connector Hub depends on the queue configuration. See Creating a Queue. For more information about delivery, see Delivery Details.

Streaming Source

Select the Streaming source to transfer stream data from the Streaming service.

The following targets are supported by a connector that's defined with a Streaming source and (optional) Functions task:

  • Functions

  • Logging Analytics

  • Notifications*

    The Notifications target (asterisked in the illustration) is supported except when using the Functions task.

  • Object Storage

  • Streaming
This image shows the targets supported by a connector that's defined with a Streaming source and optional task.
Callouts for Streaming source
Number Description
1 Connector Hub reads stream data from Streaming.
2 Optional: If configured, Connector Hub triggers the following task:
  • Functions task for custom processing of stream data.
3 The task returns processed data to Connector Hub.
4 Connector Hub writes the stream data to a target service.

The retention period for the Streaming source in Connector Hub is customer-defined. See Limits on Streaming Resources. For more information about delivery, see Delivery Details.

Together with the retention period, the Streaming source's read position determines where in the stream to start moving data.

  • Latest read position: Starts reading messages published after creating the connector.
    • If the first run of a new connector with this configuration is successful, then it moves data from the connector's creation time. If the first run fails (such as with missing policies), then after resolution the connector either moves data from the connector's creation time or, if the creation time is outside the retention period, the oldest available data in the stream. For example, consider a connector created at 10 a.m. for a stream with a two-hour retention period. If failed runs are resolved at 11 a.m., then the connector moves data from 10 a.m. If failed runs are resolved at 1 p.m., then the connector moves the oldest available data in the stream.
    • Later runs move data from the next position in the stream. If a later run fails, then after resolution the connector moves data from the next position in the stream or the oldest available data in the stream, depending on the stream's retention period.
  • Trim Horizon read position: Starts reading from the oldest available message in the stream.
    • If the first run of a new connector with this configuration is successful, then it moves data from the oldest available data in the stream. If the first run fails (such as with missing policies), then after resolution the connector moves the oldest available data in the stream, regardless of the stream's retention period.
    • Later runs move data from the next position in the stream. If a later run fails, then after resolution the connector moves data from the next position in the stream or the oldest available data in the stream, depending on the stream's retention period.
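The two read positions can be sketched as choosing a starting point against the stream's retention window. This is an illustrative simplification (it assumes the oldest available data sits exactly one retention period in the past); the function is hypothetical, not the Streaming API:

```python
from datetime import datetime, timedelta

def stream_start_time(read_position: str, created_at: datetime,
                      resolved_at: datetime, retention: timedelta) -> datetime:
    """Sketch of where a recovered first run starts reading.

    'LATEST': from the connector's creation time, unless that time has aged out
    of the retention window, in which case from the oldest available data.
    'TRIM_HORIZON': always from the oldest available data."""
    oldest_available = resolved_at - retention
    if read_position == "TRIM_HORIZON":
        return oldest_available
    return max(created_at, oldest_available)   # LATEST

# The example from the text: connector created at 10 a.m., two-hour retention.
created = datetime(2024, 1, 1, 10, 0)
two_hours = timedelta(hours=2)
# Resolved at 11 a.m.: creation time is still within retention; start at 10 a.m.
assert stream_start_time("LATEST", created, datetime(2024, 1, 1, 11, 0), two_hours) == created
# Resolved at 1 p.m.: creation time aged out; start at the oldest data (11 a.m.).
assert stream_start_time("LATEST", created, datetime(2024, 1, 1, 13, 0), two_hours) == datetime(2024, 1, 1, 11, 0)
```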

Targets

Learn when to use each available target.

  • Functions: Send data to a function.
  • Logging Analytics: Send data to a log group.
  • Monitoring: Send metric data points to the Monitoring service.
  • Notifications: Send data to a topic.
  • Object Storage: Send data to a bucket.
  • Streaming: Send data to a stream.

Availability

The Connector Hub service is available in all Oracle Cloud Infrastructure commercial regions. See About Regions and Availability Domains for the list of available regions, along with associated locations, region identifiers, region keys, and availability domains.

Resource Identifiers

Most types of Oracle Cloud Infrastructure resources have a unique, Oracle-assigned identifier called an Oracle Cloud ID (OCID). For information about the OCID format and other ways to identify your resources, see Resource Identifiers.
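As a quick sanity check, the general OCID shape (`ocid1.<resource type>.<realm>.[region][.future use].<unique id>`) can be matched with a loose pattern. This regex is an illustrative approximation only; see Resource Identifiers for the authoritative format:

```python
import re

# Loose sketch of the documented OCID syntax; not an authoritative validator.
OCID_PATTERN = re.compile(
    r"^ocid1\.[a-z0-9]+\.[a-z0-9]+\.[a-z0-9-]*(\.[a-z0-9-]*)?\.[a-zA-Z0-9]+$"
)

def looks_like_ocid(value: str) -> bool:
    return bool(OCID_PATTERN.match(value))

# Region-scoped and regionless (empty region part) examples; IDs are made up.
assert looks_like_ocid("ocid1.serviceconnector.oc1.phx.exampleuniqueid")
assert looks_like_ocid("ocid1.compartment.oc1..aaaaexample")
assert not looks_like_ocid("not-an-ocid")
```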

Ways to Access Connector Hub

You can access Oracle Cloud Infrastructure (OCI) by using the Console (a browser-based interface), REST API, or OCI CLI. Instructions for using the Console, API, and CLI are included in topics throughout this documentation. For a list of available SDKs, see Software Development Kits and Command Line Interface.

Console: To access Connector Hub using the Console, you must use a supported browser. To go to the Console sign-in page, open the navigation menu at the top of this page and click Infrastructure Console. You are prompted to enter your cloud tenant, your user name, and your password. After signing in, open the navigation menu and click Analytics & AI. Under Messaging, click Connector Hub.

You can also access Connector Hub from the following services in the Console:

  • Logging: Open the navigation menu and click Observability & Management. Under Logging, click Logs.
  • Streaming: Open the navigation menu and click Analytics & AI. Under Messaging, click Streaming.

API: To access Connector Hub through API, use Connector Hub API.

CLI: See Command Line Reference for Connector Hub.

Authentication and Authorization

Each service in Oracle Cloud Infrastructure integrates with IAM for authentication and authorization, for all interfaces (the Console, SDK or CLI, and REST API).

An administrator in your organization needs to set up groups, compartments, and policies that control which users can access which services and resources, and the type of access. For example, the policies control who can create new users, create and manage the cloud network, launch instances, create buckets, download objects, and so on. For more information, see Getting Started with Policies. For specific details about writing policies for each of the different services, see Policy Reference.

If you're a regular user (not an administrator) who needs to use the Oracle Cloud Infrastructure resources that your company owns, contact your administrator to set up a user ID for you. The administrator can confirm which compartment or compartments you should be using.

For troubleshooting information, see Troubleshooting Connectors.

Access to Connector Hub

Administrators: For common policies providing access to Connector Hub, see IAM Policies.

Access to Source, Task, and Target Services

Note

Ensure that any policy you create complies with your company guidelines. Automatically created policies remain when connectors are deleted. As a best practice, delete associated policies when deleting the connector.

To move data, your connector must have authorization to access the specified resources in the source, task, and target services. Some resources are accessible without policies.

Default policies providing the required authorization are offered when you use the Console to define a connector. These policies are limited to the context of the connector. You can either accept the default policies or ensure that you have the proper authorizations in custom policies for user and service access.

Default Policies

This section details the default policies offered when you create or update a connector in the Console.

Note

To accept default policies for an existing connector, simply edit the connector. The default policies are offered whenever you create or edit a connector. The only exception is when the exact policy already exists in IAM, in which case the default policy is not offered.
Functions (Task or Target)

Applies when the connector specifies a function task or selects Functions as its target service.

Where this policy is created: The compartment where the function resides. The function is selected for the task or target when you create or update the connector.

Allow any-user to use fn-function in compartment id <target_function_compartment_ocid> where all {request.principal.type='serviceconnector', request.principal.compartment.id='<serviceconnector_compartment_ocid>'} Allow any-user to use fn-invocation in compartment id <target_function_compartment_ocid> where all {request.principal.type='serviceconnector', request.principal.compartment.id='<serviceconnector_compartment_ocid>'}

Following is the policy with line breaks added for clarity.

Allow any-user to use fn-function in compartment id <target_function_compartment_ocid>
    where all {
        request.principal.type='serviceconnector',     
        request.principal.compartment.id='<serviceconnector_compartment_ocid>'
    }
Allow any-user to use fn-invocation in compartment id <target_function_compartment_ocid>
    where all {
        request.principal.type='serviceconnector',     
        request.principal.compartment.id='<serviceconnector_compartment_ocid>'
    }
Logging (Source or Task)

No default policies are offered. To create or edit a connector that specifies logs for the source or task, you must have read access to the specified logs. For more information, see Required Permissions for Working with Logs and Log Groups.

Logging Analytics (Target)

Applies when the connector specifies Logging Analytics as its target service.

Where this policy is created: The compartment where the log group resides. The log group is selected or entered for the target when you create or update the connector.

Allow any-user to use loganalytics-log-group in compartment id <target_log_group_compartment_OCID> where all {request.principal.type='serviceconnector', target.loganalytics-log-group.id=<log_group_OCID>, request.principal.compartment.id=<serviceconnector_compartment_OCID>}

Following is the policy with line breaks added for clarity.

Allow any-user to use loganalytics-log-group in compartment id <target_log_group_compartment_OCID> 
    where all {
        request.principal.type='serviceconnector', 
        target.loganalytics-log-group.id=<log_group_OCID>, 
        request.principal.compartment.id=<serviceconnector_compartment_OCID>
    }
Monitoring (Source)

Applies when the connector specifies Monitoring as its source service.

Where this policy is created: The compartment where the metric namespace resides. The metric namespace is selected or entered for the source when you create or update the connector.

Allow any-user to read metrics in tenancy where all {request.principal.type = 'serviceconnector', request.principal.compartment.id = '<compartment_OCID>', target.compartment.id in ('<compartment1_OCID>', '<compartment2_OCID>', '<compartment3_OCID>')}

Following is the policy with line breaks added for clarity.

Allow any-user to read metrics in tenancy 
    where all 
        {
            request.principal.type = 'serviceconnector', 
            request.principal.compartment.id = '<compartment_OCID>', 
            target.compartment.id in ('<compartment1_OCID>', '<compartment2_OCID>', '<compartment3_OCID>')
        }
Monitoring (Target)

Applies when the connector specifies Monitoring as its target service.

Where this policy is created: The compartment where the metric namespace resides. The metric namespace is selected or entered for the target when you create or update the connector.

Allow any-user to use metrics in compartment id <target_metric_compartment_OCID> where all {request.principal.type='serviceconnector', target.metrics.namespace='<metric_namespace>', request.principal.compartment.id='<serviceconnector_compartment_OCID>'}

Following is the policy with line breaks added for clarity.

Allow any-user to use metrics in compartment id <target_metric_compartment_OCID>
    where all 
        {
            request.principal.type='serviceconnector', 
            target.metrics.namespace='<metric_namespace>', 
            request.principal.compartment.id='<serviceconnector_compartment_OCID>'
        }
Notifications (Target)

Applies when the connector specifies Notifications as its target service.

Where this policy is created: The compartment where the topic resides. The topic is selected for the target when you create or update the connector.

Allow any-user to use ons-topics in compartment id <target_topic_compartment_OCID> where all {request.principal.type= 'serviceconnector', request.principal.compartment.id='<serviceconnector_compartment_OCID>'}

Following is the policy with line breaks added for clarity.

Allow any-user to use ons-topics in compartment id <target_topic_compartment_OCID>
    where all {
        request.principal.type= 'serviceconnector',
        request.principal.compartment.id='<serviceconnector_compartment_OCID>'
    }
Object Storage (Target)

Applies when the connector specifies Object Storage as its target service.

Where this policy is created: The compartment where the bucket resides. The bucket is selected for the target when you create or update the connector.

Allow any-user to manage objects in compartment id <target_bucket_compartment_OCID> where all {request.principal.type='serviceconnector', target.bucket.name='<bucket_name>', request.principal.compartment.id='<serviceconnector_compartment_OCID>'}

Following is the policy with line breaks added for clarity.

Allow any-user to manage objects in compartment id <target_bucket_compartment_OCID> 
    where all {
        request.principal.type='serviceconnector',
        target.bucket.name='<bucket_name>',          
        request.principal.compartment.id='<serviceconnector_compartment_OCID>'
    }
Queue (Source)

Applies when the connector specifies Queue as its source service.

Where this policy is created: The compartment where the queue resides. The queue is selected for the source when you create or edit a connector.

Allow any-user to { QUEUE_READ , QUEUE_CONSUME } in compartment id <queue_compartment_OCID> where all {request.principal.type='serviceconnector', target.queue.id='<queue_OCID>', request.principal.compartment.id='<serviceconnector_compartment_OCID>'}

Following is the policy with line breaks added for clarity.

Allow any-user to { QUEUE_READ , QUEUE_CONSUME } in compartment id <queue_compartment_OCID>
    where all {
        request.principal.type='serviceconnector',
        target.queue.id='<queue_OCID>',
        request.principal.compartment.id='<serviceconnector_compartment_OCID>'
    }
Streaming (Source)

Applies when the connector specifies Streaming as its source service.

Where this policy is created: The compartment where the stream resides. The stream is selected for the source when you create or update the connector.

Allow any-user to {STREAM_READ, STREAM_CONSUME} in compartment id <source_stream_compartment_OCID> where all {request.principal.type='serviceconnector', target.stream.id='<stream_OCID>', request.principal.compartment.id='<serviceconnector_compartment_OCID>'}

Following is the policy with line breaks added for clarity.

Allow any-user to {STREAM_READ, STREAM_CONSUME} in compartment id <source_stream_compartment_OCID>
    where all {
        request.principal.type='serviceconnector',
        target.stream.id='<stream_OCID>',
        request.principal.compartment.id='<serviceconnector_compartment_OCID>'
    }
Streaming (Target)

Applies when the connector specifies Streaming as its target service.

Where this policy is created: The compartment where the stream resides. The stream is selected for the target when you create or update the connector.

Allow any-user to use stream-push in compartment id <target_stream_compartment_OCID> where all {request.principal.type='serviceconnector', target.stream.id='<stream_OCID>', request.principal.compartment.id='<serviceconnector_compartment_OCID>'}

Following is the policy with line breaks added for clarity.

Allow any-user to use stream-push in compartment id <target_stream_compartment_OCID>
    where all {
        request.principal.type='serviceconnector',
        target.stream.id='<stream_OCID>',
        request.principal.compartment.id='<serviceconnector_compartment_OCID>'
    }

When reviewing group-based policies for required authorization to access a resource (service) in a connector, reference the default policy offered for that service in that context (see previous section) or see the policy details for the service at Policy Reference.

For troubleshooting information, see Troubleshooting Connectors.

Custom Policies

Write policies using dynamic groups for access to connectors and related resources.

Note

Ensure that any policy you create complies with your company guidelines. When you write custom policies, use the default policies as the basis.

As an alternative to accepting the default policies (which include the all-users subject), you can create custom policies with narrower access by using dynamic groups.

Dynamic Group

Create a dynamic group for the custom policies.

  1. Create a dynamic group.

    For instructions, see Managing Dynamic Groups.

  2. For this new dynamic group, define the following matching rule.

    All {resource.type = 'serviceconnector', resource.compartment.id = '<serviceconnector_compartment_OCID>'}

    Use this dynamic group for custom policies.
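The matching rule and the custom policies that follow are plain-text statements, so they can be rendered consistently from your compartment OCID with a small helper. This helper is hypothetical (not an OCI tool), and the OCID below is a placeholder:

```python
# Hypothetical helper that fills in the dynamic-group matching rule shown above.
def matching_rule(serviceconnector_compartment_ocid: str) -> str:
    return (
        "All {resource.type = 'serviceconnector', "
        f"resource.compartment.id = '{serviceconnector_compartment_ocid}'}}"
    )

# Placeholder OCID for illustration only.
rule = matching_rule("ocid1.compartment.oc1..exampleocid")
# rule == "All {resource.type = 'serviceconnector', resource.compartment.id = 'ocid1.compartment.oc1..exampleocid'}"
```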

Functions (Task or Target)

Write custom policies for a dynamic group to access a function (Functions service) that is a task or target of a connector.

These policies are for the previously created dynamic group.

Policy 1:

Allow dynamic-group <dynamic-group-name> to use fn-function in compartment id <function_compartment_ocid>

Policy 2:

Allow dynamic-group <dynamic-group-name> to use fn-invocation in compartment id <function_compartment_ocid>
Logging Analytics (Target)

Write a custom policy for a dynamic group to access a log group (Logging Analytics service) that is a target of a connector.

This policy is for the previously created dynamic group.

Allow dynamic-group <dynamic-group-name> to use loganalytics-log-group in compartment id <log_group_compartment_ocid> where target.loganalytics-log-group.id='<log_group_ocid>'

Following is the policy with line breaks added for clarity.

Allow dynamic-group <dynamic-group-name> to use loganalytics-log-group in compartment id <log_group_compartment_ocid> 
    where target.loganalytics-log-group.id='<log_group_ocid>'
Monitoring (Source)

Write a custom policy for a dynamic group to access a metric (Monitoring service) that is a source of a connector.

This policy is for the previously created dynamic group.

Allow dynamic-group <dynamic-group-name> to read metrics in compartment id <metric_compartment_ocid> where target.compartment.id in ('<compartment1_OCID>', '<compartment2_OCID>', '<compartment3_OCID>')

Following is the policy with line breaks added for clarity.

Allow dynamic-group <dynamic-group-name> to read metrics in compartment id <metric_compartment_ocid> 
    where target.compartment.id in ('<compartment1_OCID>', '<compartment2_OCID>', '<compartment3_OCID>')
Monitoring (Target)

Write a custom policy for a dynamic group to access a metric (Monitoring service) that is a target of a connector.

This policy is for the previously created dynamic group.

Allow dynamic-group <dynamic-group-name> to use metrics in compartment id <metric_compartment_ocid> where target.metrics.namespace='<metric_namespace>'

Following is the policy with line breaks added for clarity.

Allow dynamic-group <dynamic-group-name> to use metrics in compartment id <metric_compartment_ocid> 
    where target.metrics.namespace='<metric_namespace>'
Notifications (Target)

Write a custom policy for a dynamic group to access a topic (Notifications) that is a target of a connector.

This policy is for the previously created dynamic group.

Allow dynamic-group <dynamic-group-name> to use ons-topics in compartment id <topic_compartment_ocid>
Object Storage (Target)

Write a custom policy for a dynamic group to access a bucket (Object Storage service) that is a target of a connector.

This policy is for the previously created dynamic group.

Allow dynamic-group <dynamic-group-name> to manage objects in compartment id <bucket_compartment_ocid> where target.bucket.name='<bucket_name>'

Following is the policy with line breaks added for clarity.

Allow dynamic-group <dynamic-group-name> to manage objects in compartment id <bucket_compartment_ocid> 
    where target.bucket.name='<bucket_name>'
Queue (Source)

Write custom policies for a dynamic group to access a queue (Queue service) that is a source of a connector.

These policies are for the previously created dynamic group.

Allow dynamic-group <dynamic-group-name> to { QUEUE_READ , QUEUE_CONSUME } in compartment id <queue_compartment_ocid> where target.queue.id='<queue_ocid>'

Following is the policy with line breaks added for clarity.

Allow dynamic-group <dynamic-group-name> to { QUEUE_READ , QUEUE_CONSUME } in compartment id <queue_compartment_ocid>
    where target.queue.id='<queue_ocid>'
Streaming (Source)

Write custom policies for a dynamic group to access a stream (Streaming service) that is a source of a connector.

These policies are for the previously created dynamic group.

Allow dynamic-group <dynamic-group-name> to {STREAM_READ, STREAM_CONSUME} in compartment id <stream_compartment_ocid> where target.stream.id='<stream_ocid>'

Following is the policy with line breaks added for clarity.

Allow dynamic-group <dynamic-group-name> to {STREAM_READ, STREAM_CONSUME} in compartment id <stream_compartment_ocid>  
    where target.stream.id='<stream_ocid>'
Streaming (Target)

Write custom policies for a dynamic group to access a stream (Streaming service) that is a target of a connector.

These policies are for the previously created dynamic group.

Allow dynamic-group <dynamic-group-name> to use stream-push in compartment id <stream_compartment_ocid> where target.stream.id='<stream_ocid>'

Following is the policy with line breaks added for clarity.

Allow dynamic-group <dynamic-group-name> to use stream-push in compartment id <stream_compartment_ocid> 
    where target.stream.id='<stream_ocid>'

Deactivated Connectors

For certain failure conditions, a connector that continuously fails is automatically deactivated by the service team at Oracle Cloud Infrastructure. Such a long-term continuous failure can indicate invalid configuration of the connector's source or target. For more information, see Deactivation for Unknown Reasons.