Connect to Object Store

Prerequisites

  1. Resource principal must be enabled for the cluster.
  2. There must be a policy defined with the correct privileges in your tenancy. Refer to Policy Examples.
  3. For cross-tenancy access (the tenancy where the Big Data Service cluster is created communicating with another tenancy), the correct policies must be defined in both tenancies to allow the cluster to operate.
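As a sketch of what the policy in step 2 might look like, the following uses standard OCI IAM policy syntax; the dynamic-group name bds_cluster_dg and the compartment name are placeholders for illustration, not values defined in this document:

```
ALLOW dynamic-group bds_cluster_dg to read buckets in compartment <compartment_name>
ALLOW dynamic-group bds_cluster_dg to manage objects in compartment <compartment_name>
```

The dynamic group is assumed to match the cluster's resource principal; consult Policy Examples for the statements appropriate to your tenancy.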

Connecting to Object Store

Big Data Service provides a custom authenticator to connect to object store using Resource Principal Authentication.

Note

On Big Data Service 3.0.28 or earlier clusters in the me-riyadh-1 region, run the following export command before connecting to Object Storage through HDFS.


# export OCI_REGION_METADATA="{ \"realmKey\" : \"oc1\", \"realmDomainComponent\" : \"oraclecloud.com\", \"regionKey\" : \"RUH\", \"regionIdentifier\" : \"me-riyadh-1\" }"

# hadoop fs -ls oci://<bucket>/<nameservice>/

To use the resource principal session token (RPST), complete the following:

  1. Access Apache Ambari.
  2. From the side toolbar, under Services, click HDFS.
  3. Click the Configs tab.
  4. Find fs.oci.client.custom.authenticator, and then update its value to com.oracle.oci.bds.commons.auth.BDSResourcePrincipalAuthenticator.
  5. Find fs.oci.client.regionCodeOrId, and then update its value to the region where the object store connection is needed. For example: us-region-1.
  6. Click Save, and then restart all required services.
    After the restart completes, all object store connections initiated through HDFS commands, applications (Spark, Hive, Flink, Oozie, and so on), and any program that uses the default /etc/hadoop/conf/core-site.xml file consume the resource principal session token.
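The two Ambari changes in steps 4 and 5 correspond to entries in /etc/hadoop/conf/core-site.xml. A sketch of the resulting properties, using the example region value from step 5:

```
<property>
  <name>fs.oci.client.custom.authenticator</name>
  <value>com.oracle.oci.bds.commons.auth.BDSResourcePrincipalAuthenticator</value>
</property>
<property>
  <name>fs.oci.client.regionCodeOrId</name>
  <value>us-region-1</value>
</property>
```

Applications that load a different Hadoop configuration file must carry these properties themselves to pick up resource principal authentication.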