Creating a Spark-Submit Data Flow Application
Create a Spark-Submit Application in Data Flow.
Upload your Spark-submit files to Oracle Cloud Infrastructure Object Storage. See Set Up Object Store for details.
- Open the navigation menu, and click Analytics and AI. Under Data Lake, click Data Flow.
- In the left-side menu, click Applications.
- Under List scope, select the compartment that you want to create the application in.
- On the Applications page, click Create application.
- In the Create application panel, enter a name for the application and an optional description that can help you search for it.
- Under Resource configuration, provide the following values. To help calculate the number of resources that you need, see Sizing the Data Flow Application.
- Select the Spark version.
- (Optional) Select a pool.
- For Driver shape, select the type of cluster node to use to host the Spark driver.
- (Optional) If you selected a flexible shape for the driver, customize the number of OCPUs and the amount of memory.
- For Executor shape, select the type of cluster node to use to host each Spark executor.
- (Optional) If you selected a flexible shape for the executor, customize the number of OCPUs and the amount of memory.
- (Optional) To enable use of Spark dynamic allocation (autoscaling), select Enable autoscaling.
- Enter the number of executors that you need. If you enabled autoscaling, enter a minimum and maximum number of executors.
- Under Application configuration, provide the following values.
- (Optional) If the application is for Spark streaming, select Spark Streaming.
- Select Use Spark-Submit Options. The supported spark-submit options are:
--py-files
--files
--jars
--class
--conf
An arbitrary Spark configuration property in key=value format. If a value contains spaces, wrap it in quotes, "key=value". Pass many configurations as separate arguments, for example, --conf <key1>=<value1> --conf <key2>=<value2>.
application-jar
The path to a bundled JAR including your application and all its dependencies.
application-arguments
The arguments passed to the main method of your main class.
- In the Spark-Submit options text box, enter the options in the following format:
--py-files oci://<bucket_name>@<objectstore_namespace>/<file_name>.py oci://<bucket_name>@<objectstore_namespace>/<dependencies_file_name.zip> --files oci://<bucket_name>@<objectstore_namespace>/<file_name>.json --jars oci://<bucket_name>@<objectstore_namespace>/<file_name>.jar --conf spark.sql.crossJoin.enabled=true oci://<bucket_name>@<objectstore_namespace>/<file_name>.py oci://<argument2_path_to_input> oci://<argument3_path_to_output>
For example, to use Spark Oracle Datasource, use the following option:
--conf spark.oracle.datasource.enabled=true
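For instance, assuming a hypothetical bucket named my-bucket in the Object Storage namespace mytenancy, a complete set of options for a Python application might look like this (all file names are placeholders):
--py-files oci://my-bucket@mytenancy/dependencies.zip --files oci://my-bucket@mytenancy/config.json --conf spark.sql.crossJoin.enabled=true oci://my-bucket@mytenancy/etl_job.py oci://my-bucket@mytenancy/input/ oci://my-bucket@mytenancy/output/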
Important
Data Flow doesn't support URIs beginning with local:// or hdfs://. The URI must start with oci://, so all the files (including main-application) must be in Oracle Cloud Infrastructure Object Storage, and you must use the fully qualified domain name (FQDN) for each file.
- (Optional) If you have an archive.zip file, upload archive.zip to Oracle Cloud Infrastructure Object Storage and populate Archive URI with the path to it. There are two ways to do this:
- Select the file from the Object Storage file name list. Click Change compartment if the bucket is in a different compartment.
- Click Enter the file path manually and
enter the file name and the path to it using this format:
oci://<bucket_name>@<namespace_name>/<file_name>
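For example, a hypothetical archive.zip stored in a bucket named my-bucket under the namespace mytenancy would be entered as:
oci://my-bucket@mytenancy/archive.zip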
- Under Application log location, specify where you want to ingest Oracle Cloud Infrastructure Logging, in one of the following ways:
- Select the dataflow-logs bucket from the Object Storage file name list. Click Change compartment if the bucket is in a different compartment.
- Click Enter the bucket path manually and enter the bucket path using this format:
oci://dataflow-logs@<namespace_name>
- (Optional) Select the Metastore from the list. If the metastore is in a different compartment, click Change compartment, select the compartment, and then select the Metastore from the list. The Default managed table location is automatically populated based on your metastore.
- (Optional) To add tags to the application, select a tag namespace (for defined tags), then specify a tag key and value. Add more tags as needed. For more information about tagging, see Overview of Tagging.
- (Optional) Click Show advanced options, and provide the following values.
- (Optional) Select Use resource principal auth to enable faster starting or if you expect the Run to last more than 24 hours. You must have Resource Principal Policies set up.
- Check Enable Delta Lake to use Delta Lake.
- Select the Delta Lake version. The value you choose is reflected in the Spark configuration properties Key/Value pair.
- Select the logs group.
- (Optional) Click Enable Spark Oracle data source to use Spark Oracle Datasource.
- (Optional) In the Logs section, select the logs groups and the application logs for Oracle Cloud Infrastructure Logging. If the logs groups are in a different compartment, click Change compartment.
- Add Spark Configuration Properties. Enter a Key and Value pair.
- Click + Another property to add another configuration property.
- Repeat steps b and c until you've added all the configuration properties.
- Override the default value for the warehouse bucket by populating
Warehouse Bucket URI in the format:
oci://<warehouse-name>@<tenancy>
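For example, a hypothetical warehouse bucket named my-warehouse in the tenancy namespace mytenancy would be entered as:
oci://my-warehouse@mytenancy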
- For Choose network access, select one of the following options:
- If you're Attaching a Private Endpoint to Data Flow, click the Secure Access to Private Subnet radio button. Select the private endpoint from the resulting list.
Note
You can't use an IP address to connect to the private endpoint; you must use the FQDN.
- If you're not using a private endpoint, click the Internet Access (No Subnet) radio button.
- (Optional) To enable data lineage collection:
- Click Enable data lineage collection.
- Click Enter data catalog info manually or select a Data Catalog instance from a configurable compartment in the current tenancy.
- (Optional) If you clicked Enter data catalog info manually in the previous step, enter the values for Data catalog tenancy OCID, Data catalog compartment OCID, and Data Catalog instance OCID.
- For Max run duration in minutes, enter a value between 60 (1 hour) and 10080 (7 days). If you don't enter a value, the submitted run continues until it succeeds, fails, is canceled, or reaches its default maximum duration (24 hours).
- Click Create to create the Application, or click Save as stack to create it later.
To change the values for Name and File URL in the future, see Editing an Application.
Use the create command and required parameters to create an application:
oci data-flow application create [OPTIONS]
For a complete list of flags and variable options for CLI commands, see the CLI Command Reference.
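For example, a minimal command for a spark-submit style application might look like the following. The compartment OCID, shapes, bucket, and file names are hypothetical placeholders; verify each parameter against the CLI Command Reference:
oci data-flow application create \
--compartment-id ocid1.compartment.oc1..<unique_id> \
--display-name "spark-submit-example" \
--spark-version 3.2.1 \
--language PYTHON \
--driver-shape VM.Standard2.1 \
--executor-shape VM.Standard2.1 \
--num-executors 2 \
--execute "--conf spark.oracle.datasource.enabled=true oci://my-bucket@mytenancy/etl_job.py"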
Run the CreateApplication operation to create an application.