oracle.oci.oci_data_flow_run – Manage a Run resource in Oracle Cloud Infrastructure¶
Note
This plugin is part of the oracle.oci collection (version 5.3.0). You might already have this collection installed if you are using the ansible package. It is not included in ansible-core.
To check whether it is installed, run: ansible-galaxy collection list.
To install it, use: ansible-galaxy collection install oracle.oci.
To use it in a playbook, specify: oracle.oci.oci_data_flow_run.
New in version 2.9.0 of oracle.oci
Synopsis¶
This module allows the user to create and update a Run resource in Oracle Cloud Infrastructure.
For state=present, creates a run for an application.
This resource has the following action operations in the oracle.oci.oci_data_flow_run_actions module: change_compartment, cancel.
Requirements¶
The below requirements are needed on the host that executes this module.
python >= 3.6
Python SDK for Oracle Cloud Infrastructure https://oracle-cloud-infrastructure-python-sdk.readthedocs.io
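For example, the SDK can be installed ahead of the Data Flow tasks with the pip module (a minimal sketch; it assumes pip is available on the host and that installing the oci package from PyPI is acceptable in your environment):
- name: Ensure the OCI Python SDK is installed
  ansible.builtin.pip:
    name: oci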
Parameters¶
api_user (string)
The OCID of the user on whose behalf OCI APIs are invoked. If not set, then the value of the OCI_USER_ID environment variable, if any, is used. This option is required if the user is not specified through a configuration file (see config_file_location). To get the user's OCID, refer to https://docs.us-phoenix-1.oraclecloud.com/Content/API/Concepts/apisigningkey.htm.

api_user_fingerprint (string)
Fingerprint for the key pair being used. If not set, then the value of the OCI_USER_FINGERPRINT environment variable, if any, is used. This option is required if the key fingerprint is not specified through a configuration file (see config_file_location). To get the key pair's fingerprint value, refer to https://docs.us-phoenix-1.oraclecloud.com/Content/API/Concepts/apisigningkey.htm.

api_user_key_file (string)
Full path and filename of the private key (in PEM format). If not set, then the value of the OCI_USER_KEY_FILE variable, if any, is used. This option is required if the private key is not specified through a configuration file (see config_file_location). If the key is encrypted with a passphrase, the api_user_key_pass_phrase option must also be provided.

api_user_key_pass_phrase (string)
Passphrase used by the key referenced in api_user_key_file, if it is encrypted. If not set, then the value of the OCI_USER_KEY_PASS_PHRASE variable, if any, is used. This option is required if the key passphrase is not specified through a configuration file (see config_file_location).

application_id (string)
The OCID of the associated application. If this value is set, no value for the execute parameter is required. If this value is not set, a value for the execute parameter is required, and a new application is created and associated with the new run.
application_log_config (dictionary)
Suboptions:

    log_group_id (string, required)
    The log group ID where the log objects for Data Flow Runs will be stored.

    log_id (string, required)
    The log ID of the log object to which the application logs of the Data Flow Run will be shipped.
archive_uri (string)
A comma separated list of one or more archive files as Oracle Cloud Infrastructure URIs. For example, ``oci://path/to/a.zip,oci://path/to/b.zip``. An Oracle Cloud Infrastructure URI of an archive.zip file containing custom dependencies that may be used to support the execution of a Python, Java, or Scala application. See https://docs.cloud.oracle.com/iaas/Content/API/SDKDocs/hdfsconnector.htm#uriformat.

arguments (list / elements=string)
The arguments passed to the running application as command line arguments. An argument is either plain text or a placeholder. Placeholders are replaced using values from the parameters map. Each placeholder specified must be represented in the parameters map, or the request (POST or PUT) will fail with an HTTP 400 status code. Placeholders are specified as `${name}`, where `name` is the name of the parameter. Example: `[ "--input", "${input_file}", "--name", "John Doe" ]`. If "input_file" has a value of "mydata.xml", then the value above will be translated to `--input mydata.xml --name "John Doe"`.
auth_purpose (string)
The auth purpose which can be used in conjunction with 'auth_type=instance_principal'. The default auth_purpose for instance_principal is None.

auth_type (string)
The type of authentication to use for making API requests. By default, auth_type="api_key" based authentication is performed and the API key (see api_user_key_file) in your config file will be used. If this auth_type module option is not specified, the value of the OCI_ANSIBLE_AUTH_TYPE environment variable, if any, is used. Use auth_type="instance_principal" to use instance principal based authentication when running ansible playbooks within an OCI compute instance.
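For example, when the playbook runs on an OCI compute instance that has the required dynamic group and policies in place, instance principal authentication can be selected directly on the task (a minimal sketch; the OCIDs and display name are illustrative placeholders):
- name: Create a run using instance principal authentication
  oci_data_flow_run:
    auth_type: instance_principal
    compartment_id: "ocid1.compartment.oc1..xxxxxxEXAMPLExxxxxx"
    application_id: "ocid1.application.oc1..xxxxxxEXAMPLExxxxxx"
    display_name: nightly_batch_run    # illustrative name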
cert_bundle (string)
The full path to a CA certificate bundle to be used for SSL verification. This will override the default CA certificate bundle. If not set, then the value of the OCI_ANSIBLE_CERT_BUNDLE variable, if any, is used.

compartment_id (string)
The OCID of a compartment.
Required for create using state=present.
Required for update when the environment variable OCI_USE_NAME_AS_IDENTIFIER is set.

config_file_location (string)
Path to the configuration file. If not set, then the value of the OCI_CONFIG_FILE environment variable, if any, is used. Otherwise, defaults to ~/.oci/config.

config_profile_name (string)
The profile to load from the config file referenced by config_file_location. If not set, then the value of the OCI_CONFIG_PROFILE environment variable, if any, is used. Otherwise, defaults to the "DEFAULT" profile in config_file_location.
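For example, a task can point the module at a specific SDK config file, profile, and region instead of relying on the defaults (a minimal sketch; the file path and profile name are illustrative):
- name: Create a run using a non-default SDK config file and profile
  oci_data_flow_run:
    config_file_location: /home/opc/.oci/config   # illustrative path
    config_profile_name: DATAFLOW                 # illustrative profile name
    region: us-phoenix-1
    compartment_id: "ocid1.compartment.oc1..xxxxxxEXAMPLExxxxxx"
    application_id: "ocid1.application.oc1..xxxxxxEXAMPLExxxxxx"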
configuration (dictionary)
The Spark configuration passed to the running process. See https://spark.apache.org/docs/latest/configuration.html#available-properties. Example: { "spark.app.name" : "My App Name", "spark.shuffle.io.maxRetries" : "4" } Note: Not all Spark properties are permitted to be set. Attempting to set a property that is not allowed to be overwritten will cause a 400 status to be returned.
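For example, the Spark properties from the description above can be supplied as a dictionary (a minimal sketch; whether a given property may be overridden is decided by the service):
- name: Create a run with explicit Spark properties
  oci_data_flow_run:
    compartment_id: "ocid1.compartment.oc1..xxxxxxEXAMPLExxxxxx"
    application_id: "ocid1.application.oc1..xxxxxxEXAMPLExxxxxx"
    configuration:
      spark.app.name: "My App Name"
      spark.shuffle.io.maxRetries: "4"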
defined_tags (dictionary)
Defined tags for this resource. Each key is predefined and scoped to a namespace. For more information, see Resource Tags. Example: `{"Operations": {"CostCenter": "42"}}`
This parameter is updatable.

display_name (string)
A user-friendly name that does not have to be unique. Avoid entering confidential information. If this value is not specified, it will be derived from the associated application's displayName or set by the API using the fileUri's application file name.
Required for create and update when the environment variable OCI_USE_NAME_AS_IDENTIFIER is set.
Aliases: name

driver_shape (string)
The VM shape for the driver. Sets the driver cores and memory.

driver_shape_config (dictionary)
Suboptions:

    memory_in_gbs (float)
    The amount of memory used for the driver or executors.

    ocpus (float)
    The total number of OCPUs used for the driver or executors. See the OCI documentation for details.
execute (string)
The input used for the spark-submit command. For more details see https://spark.apache.org/docs/latest/submitting-applications.html#launching-applications-with-spark-submit. Supported options include ``--class``, ``--file``, ``--jars``, ``--conf``, ``--py-files``, and the main application file with arguments. Example: ``--jars oci://path/to/a.jar,oci://path/to/b.jar --files oci://path/to/a.json,oci://path/to/b.csv --py-files oci://path/to/a.py,oci://path/to/b.py --conf spark.sql.crossJoin.enabled=true --class org.apache.spark.examples.SparkPi oci://path/to/main.jar 10``. Note: if execute is specified together with applicationId, className, configuration, fileUri, language, arguments, or parameters during application create/update, or run create/submit, the Data Flow service will use information derived from the execute input only.
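For example, a run can be submitted from a spark-submit style command line instead of an existing application_id (a minimal sketch reusing the example string above; the display name is illustrative):
- name: Submit a run from a spark-submit style command line
  oci_data_flow_run:
    compartment_id: "ocid1.compartment.oc1..xxxxxxEXAMPLExxxxxx"
    display_name: sparkpi_submit    # illustrative name
    execute: >-
      --class org.apache.spark.examples.SparkPi
      oci://path/to/main.jar 10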
executor_shape (string)
The VM shape for the executors. Sets the executor cores and memory.

executor_shape_config (dictionary)
Suboptions:

    memory_in_gbs (float)
    The amount of memory used for the driver or executors.

    ocpus (float)
    The total number of OCPUs used for the driver or executors. See the OCI documentation for details.
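For example, when flexible shapes are used, the per-VM OCPU and memory values are supplied through the shape config suboptions (a minimal sketch; the shape name and sizing values are illustrative and must be valid for your tenancy):
- name: Create a run with explicit driver and executor sizing
  oci_data_flow_run:
    compartment_id: "ocid1.compartment.oc1..xxxxxxEXAMPLExxxxxx"
    application_id: "ocid1.application.oc1..xxxxxxEXAMPLExxxxxx"
    driver_shape: VM.Standard.E4.Flex      # illustrative shape name
    driver_shape_config:
      ocpus: 2
      memory_in_gbs: 32
    executor_shape: VM.Standard.E4.Flex    # illustrative shape name
    executor_shape_config:
      ocpus: 4
      memory_in_gbs: 64
    num_executors: 2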
force_create (boolean)
Whether to attempt non-idempotent creation of a resource. By default, create is an idempotent operation and does not create the resource if it already exists. Setting this option to true forcefully creates a copy of the resource, even if it already exists. This option is mutually exclusive with key_by.

freeform_tags (dictionary)
Free-form tags for this resource. Each tag is a simple key-value pair with no predefined name, type, or namespace. For more information, see Resource Tags. Example: `{"Department": "Finance"}`
This parameter is updatable.

idle_timeout_in_minutes (integer)
The timeout value in minutes used to manage Runs. A Run is stopped after it has been inactive for this period of time. Note: this parameter is currently only applicable to Runs of type `SESSION`. The default value is 2880 minutes (2 days).
This parameter is updatable.

key_by (list / elements=string)
The list of attributes of this resource which should be used to uniquely identify an instance of the resource. By default, all the attributes of a resource are used to uniquely identify a resource.

logs_bucket_uri (string)
An Oracle Cloud Infrastructure URI of the bucket where the Spark job logs are to be uploaded. See https://docs.cloud.oracle.com/iaas/Content/API/SDKDocs/hdfsconnector.htm#uriformat.

max_duration_in_minutes (integer)
The maximum duration in minutes for which an Application should run. A Data Flow Run is terminated once it reaches this duration, measured from the time it transitions to the `IN_PROGRESS` state.
This parameter is updatable.

metastore_id (string)
The OCID of the OCI Hive Metastore.

num_executors (integer)
The number of executor VMs requested.
parameters (list / elements=dictionary)
An array of name/value pairs used to fill placeholders found in properties like `Application.arguments`. The name must be a string of one or more word characters (a-z, A-Z, 0-9, _). The value can be a string of 0 or more characters of any kind. Example: [ { name: "iterations", value: "10"}, { name: "input_file", value: "mydata.xml" }, { name: "variable_x", value: "${x}"} ]
Suboptions:

    name (string, required)
    The name of the parameter. It must be a string of one or more word characters (a-z, A-Z, 0-9, _). Examples: "iterations", "input_file"

    value (string, required)
    The value of the parameter. It must be a string of 0 or more characters of any kind. Examples: "" (empty string), "10", "mydata.xml", "${x}"
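For example, the placeholder substitution described above can be expressed as follows (a minimal sketch reusing the values from the descriptions above):
- name: Create a run whose arguments use placeholders filled from parameters
  oci_data_flow_run:
    compartment_id: "ocid1.compartment.oc1..xxxxxxEXAMPLExxxxxx"
    application_id: "ocid1.application.oc1..xxxxxxEXAMPLExxxxxx"
    arguments: [ "--input", "${input_file}", "--name", "John Doe" ]
    parameters:
      - name: input_file
        value: mydata.xml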
pool_id (string)
The OCID of a pool. A unique ID to identify a Data Flow pool resource.

realm_specific_endpoint_template_enabled (boolean)
Enable/disable the realm specific endpoint template for the service client. By default, the realm specific endpoint template is disabled. If not set, then the value of the OCI_REALM_SPECIFIC_SERVICE_ENDPOINT_TEMPLATE_ENABLED variable, if any, is used.

region (string)
The Oracle Cloud Infrastructure region to use for all OCI API requests. If not set, then the value of the OCI_REGION variable, if any, is used. This option is required if the region is not specified through a configuration file (see config_file_location). Please refer to https://docs.us-phoenix-1.oraclecloud.com/Content/General/Concepts/regions.htm for more information on OCI regions.

run_id (string)
The unique ID for the run.
Required for update using state=present when the environment variable OCI_USE_NAME_AS_IDENTIFIER is not set.
Aliases: id

spark_version (string)
The Spark version utilized to run the application. This value may be set only if applicationId is not, since otherwise the Spark version will be taken from the associated application.

state (string)
The state of the Run.
Use state=present to create or update a Run.

tenancy (string)
OCID of your tenancy. If not set, then the value of the OCI_TENANCY variable, if any, is used. This option is required if the tenancy OCID is not specified through a configuration file (see config_file_location). To get the tenancy OCID, refer to https://docs.us-phoenix-1.oraclecloud.com/Content/API/Concepts/apisigningkey.htm.

type (string)
The Spark application processing type.

wait (boolean)
Whether to wait for the create or delete operation to complete.
wait_timeout (integer)
Time, in seconds, to wait when wait=yes. Defaults to 1200 for most services, but some services might have a longer wait timeout.
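For example, to block until the create operation finishes, allowing up to one hour (a minimal sketch; the timeout value is illustrative):
- name: Create a run and wait up to one hour for the operation to complete
  oci_data_flow_run:
    compartment_id: "ocid1.compartment.oc1..xxxxxxEXAMPLExxxxxx"
    application_id: "ocid1.application.oc1..xxxxxxEXAMPLExxxxxx"
    wait: true
    wait_timeout: 3600    # illustrative value, in seconds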
warehouse_bucket_uri (string)
An Oracle Cloud Infrastructure URI of the bucket to be used as default warehouse directory for BATCH SQL runs. See https://docs.cloud.oracle.com/iaas/Content/API/SDKDocs/hdfsconnector.htm#uriformat.
Notes¶
Note
For OCI python sdk configuration, please refer to https://oracle-cloud-infrastructure-python-sdk.readthedocs.io/en/latest/configuration.html
Examples¶
- name: Create run
  oci_data_flow_run:
    # required
    compartment_id: "ocid1.compartment.oc1..xxxxxxEXAMPLExxxxxx"

    # optional
    application_log_config:
      # required
      log_group_id: "ocid1.loggroup.oc1..xxxxxxEXAMPLExxxxxx"
      log_id: "ocid1.log.oc1..xxxxxxEXAMPLExxxxxx"
    application_id: "ocid1.application.oc1..xxxxxxEXAMPLExxxxxx"
    archive_uri: archive_uri_example
    arguments: [ "arguments_example" ]
    configuration: null
    display_name: display_name_example
    driver_shape: driver_shape_example
    driver_shape_config:
      # optional
      ocpus: 3.4
      memory_in_gbs: 3.4
    execute: execute_example
    executor_shape: executor_shape_example
    executor_shape_config:
      # optional
      ocpus: 3.4
      memory_in_gbs: 3.4
    logs_bucket_uri: logs_bucket_uri_example
    metastore_id: "ocid1.metastore.oc1..xxxxxxEXAMPLExxxxxx"
    num_executors: 56
    parameters:
    - # required
      name: name_example
      value: value_example
    pool_id: "ocid1.pool.oc1..xxxxxxEXAMPLExxxxxx"
    spark_version: spark_version_example
    type: BATCH
    warehouse_bucket_uri: warehouse_bucket_uri_example
    defined_tags: {'Operations': {'CostCenter': 'US'}}
    freeform_tags: {'Department': 'Finance'}
    max_duration_in_minutes: 56
    idle_timeout_in_minutes: 56

- name: Update run
  oci_data_flow_run:
    # required
    run_id: "ocid1.run.oc1..xxxxxxEXAMPLExxxxxx"

    # optional
    defined_tags: {'Operations': {'CostCenter': 'US'}}
    freeform_tags: {'Department': 'Finance'}
    max_duration_in_minutes: 56
    idle_timeout_in_minutes: 56

- name: Update run using name (when environment variable OCI_USE_NAME_AS_IDENTIFIER is set)
  oci_data_flow_run:
    # required
    compartment_id: "ocid1.compartment.oc1..xxxxxxEXAMPLExxxxxx"
    display_name: display_name_example

    # optional
    defined_tags: {'Operations': {'CostCenter': 'US'}}
    freeform_tags: {'Department': 'Finance'}
    max_duration_in_minutes: 56
    idle_timeout_in_minutes: 56
Return Values¶
Common return values are documented here, the following are the fields unique to this module:
run (complex, returned on success)
Details of the Run resource acted upon by the current operation. The fields described below are returned nested inside run.
Sample:
{'application_id': 'ocid1.application.oc1..xxxxxxEXAMPLExxxxxx', 'application_log_config': {'log_group_id': 'ocid1.loggroup.oc1..xxxxxxEXAMPLExxxxxx', 'log_id': 'ocid1.log.oc1..xxxxxxEXAMPLExxxxxx'}, 'archive_uri': 'archive_uri_example', 'arguments': [], 'class_name': 'class_name_example', 'compartment_id': 'ocid1.compartment.oc1..xxxxxxEXAMPLExxxxxx', 'configuration': {}, 'data_read_in_bytes': 56, 'data_written_in_bytes': 56, 'defined_tags': {'Operations': {'CostCenter': 'US'}}, 'display_name': 'display_name_example', 'driver_shape': 'driver_shape_example', 'driver_shape_config': {'memory_in_gbs': 10, 'ocpus': 10}, 'execute': 'execute_example', 'executor_shape': 'executor_shape_example', 'executor_shape_config': {'memory_in_gbs': 10, 'ocpus': 10}, 'file_uri': 'file_uri_example', 'freeform_tags': {'Department': 'Finance'}, 'id': 'ocid1.resource.oc1..xxxxxxEXAMPLExxxxxx', 'idle_timeout_in_minutes': 56, 'language': 'SCALA', 'lifecycle_details': 'lifecycle_details_example', 'lifecycle_state': 'ACCEPTED', 'logs_bucket_uri': 'logs_bucket_uri_example', 'max_duration_in_minutes': 56, 'metastore_id': 'ocid1.metastore.oc1..xxxxxxEXAMPLExxxxxx', 'num_executors': 56, 'opc_request_id': 'ocid1.opcrequest.oc1..xxxxxxEXAMPLExxxxxx', 'owner_principal_id': 'ocid1.ownerprincipal.oc1..xxxxxxEXAMPLExxxxxx', 'owner_user_name': 'owner_user_name_example', 'parameters': [{'name': 'name_example', 'value': 'value_example'}], 'pool_id': 'ocid1.pool.oc1..xxxxxxEXAMPLExxxxxx', 'private_endpoint_dns_zones': [], 'private_endpoint_id': 'ocid1.privateendpoint.oc1..xxxxxxEXAMPLExxxxxx', 'private_endpoint_max_host_count': 56, 'private_endpoint_nsg_ids': [], 'private_endpoint_subnet_id': 'ocid1.privateendpointsubnet.oc1..xxxxxxEXAMPLExxxxxx', 'run_duration_in_milliseconds': 56, 'spark_version': 'spark_version_example', 'time_created': '2013-10-20T19:20:30+01:00', 'time_updated': '2013-10-20T19:20:30+01:00', 'total_o_cpu': 56, 'type': 'BATCH', 'warehouse_bucket_uri': 'warehouse_bucket_uri_example'}
application_id (string, returned on success)
The application ID.
Sample: ocid1.application.oc1..xxxxxxEXAMPLExxxxxx

application_log_config (complex, returned on success)
Contains:

    log_group_id (string, returned on success)
    The log group ID where the log objects for Data Flow Runs will be stored.
    Sample: ocid1.loggroup.oc1..xxxxxxEXAMPLExxxxxx

    log_id (string, returned on success)
    The log ID of the log object to which the application logs of the Data Flow Run will be shipped.
    Sample: ocid1.log.oc1..xxxxxxEXAMPLExxxxxx
archive_uri (string, returned on success)
A comma separated list of one or more archive files as Oracle Cloud Infrastructure URIs. For example, ``oci://path/to/a.zip,oci://path/to/b.zip``. An Oracle Cloud Infrastructure URI of an archive.zip file containing custom dependencies that may be used to support the execution of a Python, Java, or Scala application. See https://docs.cloud.oracle.com/iaas/Content/API/SDKDocs/hdfsconnector.htm#uriformat.
Sample: archive_uri_example

arguments (list / elements=string, returned on success)
The arguments passed to the running application as command line arguments. An argument is either plain text or a placeholder. Placeholders are replaced using values from the parameters map. Each placeholder specified must be represented in the parameters map, or the request (POST or PUT) will fail with an HTTP 400 status code. Placeholders are specified as `${name}`, where `name` is the name of the parameter. Example: `[ "--input", "${input_file}", "--name", "John Doe" ]`. If "input_file" has a value of "mydata.xml", then the value above will be translated to `--input mydata.xml --name "John Doe"`.
class_name (string, returned on success)
The class for the application.
Sample: class_name_example

compartment_id (string, returned on success)
The OCID of a compartment.
Sample: ocid1.compartment.oc1..xxxxxxEXAMPLExxxxxx

configuration (dictionary, returned on success)
The Spark configuration passed to the running process. See https://spark.apache.org/docs/latest/configuration.html#available-properties. Example: { "spark.app.name" : "My App Name", "spark.shuffle.io.maxRetries" : "4" } Note: Not all Spark properties are permitted to be set. Attempting to set a property that is not allowed to be overwritten will cause a 400 status to be returned.

data_read_in_bytes (integer, returned on success)
The data read by the run in bytes.
Sample: 56
data_written_in_bytes (integer, returned on success)
The data written by the run in bytes.
Sample: 56

defined_tags (dictionary, returned on success)
Defined tags for this resource. Each key is predefined and scoped to a namespace. For more information, see Resource Tags. Example: `{"Operations": {"CostCenter": "42"}}`
Sample: {'Operations': {'CostCenter': 'US'}}

display_name (string, returned on success)
A user-friendly name. This name is not necessarily unique.
Sample: display_name_example

driver_shape (string, returned on success)
The VM shape for the driver. Sets the driver cores and memory.
Sample: driver_shape_example

driver_shape_config (complex, returned on success)
Contains:

    memory_in_gbs (float, returned on success)
    The amount of memory used for the driver or executors.
    Sample: 10

    ocpus (float, returned on success)
    The total number of OCPUs used for the driver or executors. See the OCI documentation for details.
    Sample: 10
execute (string, returned on success)
The input used for the spark-submit command. For more details see https://spark.apache.org/docs/latest/submitting-applications.html#launching-applications-with-spark-submit. Supported options include ``--class``, ``--file``, ``--jars``, ``--conf``, ``--py-files``, and the main application file with arguments. Example: ``--jars oci://path/to/a.jar,oci://path/to/b.jar --files oci://path/to/a.json,oci://path/to/b.csv --py-files oci://path/to/a.py,oci://path/to/b.py --conf spark.sql.crossJoin.enabled=true --class org.apache.spark.examples.SparkPi oci://path/to/main.jar 10``. Note: if execute is specified together with applicationId, className, configuration, fileUri, language, arguments, or parameters during application create/update, or run create/submit, the Data Flow service will use information derived from the execute input only.
Sample: execute_example
executor_shape (string, returned on success)
The VM shape for the executors. Sets the executor cores and memory.
Sample: executor_shape_example

executor_shape_config (complex, returned on success)
Contains:

    memory_in_gbs (float, returned on success)
    The amount of memory used for the driver or executors.
    Sample: 10

    ocpus (float, returned on success)
    The total number of OCPUs used for the driver or executors. See the OCI documentation for details.
    Sample: 10
file_uri (string, returned on success)
An Oracle Cloud Infrastructure URI of the file containing the application to execute. See https://docs.cloud.oracle.com/iaas/Content/API/SDKDocs/hdfsconnector.htm#uriformat.
Sample: file_uri_example

freeform_tags (dictionary, returned on success)
Free-form tags for this resource. Each tag is a simple key-value pair with no predefined name, type, or namespace. For more information, see Resource Tags. Example: `{"Department": "Finance"}`
Sample: {'Department': 'Finance'}

id (string, returned on success)
The ID of a run.
Sample: ocid1.resource.oc1..xxxxxxEXAMPLExxxxxx

idle_timeout_in_minutes (integer, returned on success)
The timeout value in minutes used to manage Runs. A Run is stopped after it has been inactive for this period of time. Note: this parameter is currently only applicable to Runs of type `SESSION`. The default value is 2880 minutes (2 days).
Sample: 56
language (string, returned on success)
The Spark language.
Sample: SCALA

lifecycle_details (string, returned on success)
The detailed messages about the lifecycle state.
Sample: lifecycle_details_example

lifecycle_state (string, returned on success)
The current state of this run.
Sample: ACCEPTED

logs_bucket_uri (string, returned on success)
An Oracle Cloud Infrastructure URI of the bucket where the Spark job logs are to be uploaded. See https://docs.cloud.oracle.com/iaas/Content/API/SDKDocs/hdfsconnector.htm#uriformat.
Sample: logs_bucket_uri_example
max_duration_in_minutes (integer, returned on success)
The maximum duration in minutes for which an Application should run. A Data Flow Run is terminated once it reaches this duration, measured from the time it transitions to the `IN_PROGRESS` state.
Sample: 56

metastore_id (string, returned on success)
The OCID of the OCI Hive Metastore.
Sample: ocid1.metastore.oc1..xxxxxxEXAMPLExxxxxx

num_executors (integer, returned on success)
The number of executor VMs requested.
Sample: 56

opc_request_id (string, returned on success)
Unique Oracle assigned identifier for the request. If you need to contact Oracle about a particular request, please provide the request ID.
Sample: ocid1.opcrequest.oc1..xxxxxxEXAMPLExxxxxx
owner_principal_id (string, returned on success)
The OCID of the user who created the resource.
Sample: ocid1.ownerprincipal.oc1..xxxxxxEXAMPLExxxxxx

owner_user_name (string, returned on success)
The username of the user who created the resource. If the username of the owner does not exist, `null` will be returned and the caller should refer to the ownerPrincipalId value instead.
Sample: owner_user_name_example

parameters (complex, returned on success)
An array of name/value pairs used to fill placeholders found in properties like `Application.arguments`. The name must be a string of one or more word characters (a-z, A-Z, 0-9, _). The value can be a string of 0 or more characters of any kind. Example: [ { name: "iterations", value: "10"}, { name: "input_file", value: "mydata.xml" }, { name: "variable_x", value: "${x}"} ]
Contains:

    name (string, returned on success)
    The name of the parameter. It must be a string of one or more word characters (a-z, A-Z, 0-9, _). Examples: "iterations", "input_file"
    Sample: name_example

    value (string, returned on success)
    The value of the parameter. It must be a string of 0 or more characters of any kind. Examples: "" (empty string), "10", "mydata.xml", "${x}"
    Sample: value_example
pool_id (string, returned on success)
The OCID of a pool. A unique ID to identify a Data Flow pool resource.
Sample: ocid1.pool.oc1..xxxxxxEXAMPLExxxxxx

private_endpoint_dns_zones (list / elements=string, returned on success)
An array of DNS zone names. Example: `[ "app.examplecorp.com", "app.examplecorp2.com" ]`

private_endpoint_id (string, returned on success)
The OCID of a private endpoint.
Sample: ocid1.privateendpoint.oc1..xxxxxxEXAMPLExxxxxx

private_endpoint_max_host_count (integer, returned on success)
The maximum number of hosts to be accessed through the private endpoint. This value is used to calculate the relevant CIDR block and should be a multiple of 256. If the value is not a multiple of 256, it is rounded up to the next multiple of 256. For example, 300 is rounded up to 512.
Sample: 56
private_endpoint_nsg_ids (list / elements=string, returned on success)
An array of network security group OCIDs.

private_endpoint_subnet_id (string, returned on success)
The OCID of a subnet.
Sample: ocid1.privateendpointsubnet.oc1..xxxxxxEXAMPLExxxxxx

run_duration_in_milliseconds (integer, returned on success)
The duration of the run in milliseconds.
Sample: 56

spark_version (string, returned on success)
The Spark version utilized to run the application.
Sample: spark_version_example
time_created (string, returned on success)
The date and time the resource was created, expressed in RFC 3339 timestamp format. Example: `2018-04-03T21:10:29.600Z`
Sample: 2013-10-20T19:20:30+01:00

time_updated (string, returned on success)
The date and time the resource was updated, expressed in RFC 3339 timestamp format. Example: `2018-04-03T21:10:29.600Z`
Sample: 2013-10-20T19:20:30+01:00

total_o_cpu (integer, returned on success)
The total number of OCPUs requested by the run.
Sample: 56

type (string, returned on success)
The Spark application processing type.
Sample: BATCH

warehouse_bucket_uri (string, returned on success)
An Oracle Cloud Infrastructure URI of the bucket to be used as default warehouse directory for BATCH SQL runs. See https://docs.cloud.oracle.com/iaas/Content/API/SDKDocs/hdfsconnector.htm#uriformat.
Sample: warehouse_bucket_uri_example
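The returned run fact can be registered and consumed in later tasks. A minimal sketch, assuming the create task succeeds and returns the fields documented above:
- name: Create a run and capture the returned resource
  oci_data_flow_run:
    compartment_id: "ocid1.compartment.oc1..xxxxxxEXAMPLExxxxxx"
    application_id: "ocid1.application.oc1..xxxxxxEXAMPLExxxxxx"
  register: result

- name: Show the run OCID and lifecycle state
  ansible.builtin.debug:
    msg: "Run {{ result.run.id }} is {{ result.run.lifecycle_state }}"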
Authors¶
Oracle (@oracle)