Import Data Using Oracle Data Pump on Autonomous Database
Oracle Data
Pump offers very fast bulk data and metadata movement between Oracle databases and Autonomous Databases.
Data Pump Import lets you import data from Data Pump files residing on Oracle Cloud Infrastructure Object
Storage,
Microsoft Azure, AWS S3, and Oracle Cloud Infrastructure Object
Storage Classic. You can save your data to your Cloud
Object Store and use Oracle Data Pump to load data to Autonomous Database.
When a load or import operation results in the following timezone related
error, you need to get your timezone file upgraded to the latest version available for
your database:
ORA-39405: Oracle Data Pump does not support importing from a source database with TSTZ version n+1
into a target database with TSTZ version n.
Import Data Using Oracle Data Pump Version 18.3 or Later
Oracle recommends using the latest Oracle Data Pump version for importing data from Data Pump files into your Autonomous Database, as it contains enhancements and fixes for a better experience.
Access Log Files for Data Pump Import
The log files for Data Pump Import operations are stored in the directory you specify with the impdp directory parameter.
Export Your Existing Oracle Database to Import into Autonomous Database 🔗
Use Oracle
Data Pump Export to export your existing Oracle Database to migrate to Autonomous Database using Oracle Data Pump
Import.
Oracle recommends using Oracle Data Pump schema mode to migrate your database
to Autonomous Database. You can list the schemas
you want to export by using the schemas parameter.
For a faster migration, export your schemas into multiple Data Pump files and
use parallelism. You can specify the dump file name format you want to use with the
dumpfile parameter. Set the parallel parameter to at least the number of
CPUs you have in your database.
Oracle recommends using the following Data Pump parameters for faster and easier
migration to Autonomous Database:
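As a sketch that follows the schema, dump file, and parallel guidance above, an export invocation might look like the following; the connect string, schema name, directory, parallel degree, and file name pattern are placeholders to adapt to your environment:

expdp sh/sh_password@orcl \
      schemas=SH \
      dumpfile=export%L.dmp \
      directory=data_pump_dir \
      parallel=16 \
      logfile=export.log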
If during the export with expdp you use the
encryption_pwd_prompt=yes parameter, then also use
encryption_pwd_prompt=yes with your import and enter the same
password at the impdp prompt to decrypt the dump files (remember the
password you supply during export). The maximum length of the encryption password is 128
bytes.
The
dumpfile parameter supports the
%L and %l wildcards in
addition to the legacy %U and %u
wildcards. For example, dumpfile=export%L.dmp. Use
the %L or %l wildcard for exports
from Oracle Database Release 12.2 and higher. This wildcard expands
the dumpfile file name into a 3-digit to 10-digit, variable-width
incrementing integer, starting at 100 and ending at
2147483646.
Use the legacy
%U or %u wildcard for
exports from Oracle Database prior to Release 12.2. If you use this
option and more than 99 dump files are needed, you must specify
multiple dumpfile names, each with the %U or
%u parameter.
You can use other Data Pump Export parameters, like compression, depending on your
requirements. For more information on Oracle Data Pump Export see Oracle Database
Utilities.
Import Data Using Oracle Data Pump Version 18.3 or Later 🔗
Oracle recommends using the
latest Oracle Data Pump version for importing data from Data Pump files into your Autonomous Database, as it contains enhancements and
fixes for a better experience.
Download the latest version of Oracle Instant Client, which includes Oracle Data Pump, for your platform from Oracle Instant Client Downloads. See the installation instructions on the platform install download page for the installation steps required after you download Oracle Instant Client.
In Oracle Data Pump version 18.3 and later, the credential argument authenticates Data Pump to the Cloud Object Storage service you are using for your source files. The dumpfile argument is a comma delimited list of URLs for your Data Pump files.
In Oracle Data Pump, if your source files reside on Oracle Cloud
Infrastructure Object Storage you can use Oracle Cloud
Infrastructure native URIs or Swift URIs. See DBMS_CLOUD URI Formats for details on these file URI formats.
Importing with Oracle Data Pump and Setting credential Parameter
Store your Cloud Object Storage credential using DBMS_CLOUD.CREATE_CREDENTIAL.
For example, to create Oracle Cloud
Infrastructure Auth Token credentials:
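A minimal sketch, where the credential name, Oracle Cloud Infrastructure user name, and Auth Token value are placeholders:

BEGIN
  DBMS_CLOUD.CREATE_CREDENTIAL(
    credential_name => 'DEF_CRED_NAME',
    username        => 'adb_user@example.com',  -- your Oracle Cloud Infrastructure user name
    password        => 'auth_token_value'       -- the Auth Token, not your Console password
  );
END;
/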
For more information on Oracle Cloud
Infrastructure Signing Key based credentials see CREATE_CREDENTIAL Procedure.
Supported credential types:
Data Pump Import supports Oracle Cloud
Infrastructure Auth Token based credentials and Oracle Cloud
Infrastructure Signing Key based credentials.
For more information on the credential types for Oracle Cloud
Infrastructure Cloud Object Storage, see CREATE_CREDENTIAL Procedure.
Data Pump supports using an Oracle Cloud
Infrastructure Object Storage pre-authenticated URL for the dumpfile
parameter. When you use a pre-authenticated URL you must still provide the
credential parameter, but impdp ignores its value; you can supply
a NULL value for the credential in
the next step. See Using
Pre-Authenticated Requests for more
information.
Run Data Pump Import with the dumpfile parameter set to the list of file URLs on your Cloud Object Storage and the credential parameter set to the name of the credential you created in the previous step. For example:
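A sketch of such an invocation; the connect string, region, namespace, bucket, and parallel degree are placeholders, and the transform and exclude settings shown are common choices for Autonomous Database imports rather than requirements:

impdp admin/password@adb_high \
      credential=def_cred_name \
      dumpfile=https://objectstorage.us-ashburn-1.oraclecloud.com/n/namespace-string/b/bucketname/o/export%L.dmp \
      directory=data_pump_dir \
      parallel=16 \
      transform=segment_attributes:n \
      exclude=cluster,indextype,db_link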
If during the export with expdp you used the
encryption_pwd_prompt=yes parameter then use
encryption_pwd_prompt=yes and input the same password at the
impdp prompt that you specified during the export.
The
dumpfile parameter supports the
%L and %l wildcards in
addition to the legacy %U and %u
wildcards. For example, dumpfile=export%L.dmp. Use
the %L or %l wildcard for exports
from Oracle Database Release 12.2 and higher. This wildcard expands
the dumpfile file name into a 3-digit to 10-digit, variable-width
incrementing integer, starting at 100 and ending at
2147483646.
Use the legacy
%U or %u wildcard for
exports from Oracle Database prior to Release 12.2. If you use this
option and more than 99 dump files are needed, you must specify
multiple dumpfile names, each with the %U or
%u parameter.
If your bucket and data do not reside in a commercial (OC1) realm, where OCI
Dedicated Endpoints are not supported, use the following URI format for
dumpfile:
The credential parameter cannot be an Azure service
principal, Amazon Resource Name (ARN), or a Google service account. See Configure Policies and Roles to Access Resources for more information on resource principal based
authentication.
When you use a pre-authenticated URL for dumpfile,
you can use a NULL value for the credential.
For the best import
performance use the HIGH database service for your import connection
and set the parallel parameter
to one quarter the number of ECPUs (0.25 x ECPU
count). If you are using the OCPU compute model, set
the parallel parameter to the number of OCPUs (1 x OCPU
count).
When you perform Oracle Data Pump export to Object Storage with a Swift
URI you must use Swift credentials to import with Oracle Data Pump import. See Oracle Cloud Infrastructure Object Storage Swift URI Format for more information on Swift URIs.
When you perform Oracle Data Pump export to Object Storage with a native URI, you can
import using either Swift credentials or Signing Key based Credentials. See Oracle Cloud Infrastructure Object Storage Native URI Format for more information on Native URIs.
Import Data Using Oracle Data
Pump with OCI Resource Principal Credential 🔗
Oracle Data Pump supports
importing data pump files into your Autonomous Database using an Oracle Cloud
Infrastructure resource principal as a credential object.
If you use Oracle Data Pump expdp to export directly to Object
Store, then you must use the same credential that was used for the export when you import with
impdp. In this case, Oracle Data Pump import does not support Oracle Cloud
Infrastructure resource principal credentials. Other upload methods do support importing with
impdp and resource principal credentials. For example, if you upload
Oracle Data Pump files to Object Store using DBMS_CLOUD.PUT_OBJECT, you can import the files with Oracle Data
Pump impdp using resource principal credentials. Likewise, when you use the
Oracle Cloud
Infrastructure Console to upload Data Pump files to Object Store, you can use resource principal
credentials to import into an Autonomous Database
instance with Oracle Data Pump impdp.
In Oracle Data Pump, if your source files reside on Oracle Cloud
Infrastructure Object Storage you can use Oracle Cloud
Infrastructure native URIs or Swift URIs. See DBMS_CLOUD URI Formats for details on these file URI formats.
Configure the dynamic groups and policies and enable Oracle Cloud
Infrastructure resource principal to access the Object Store location where the data pump files you
want to import reside.
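The dynamic group and policy configuration is done in the Oracle Cloud Infrastructure Console; on the database side, a sketch of enabling the resource principal as the ADMIN user (assuming the DBMS_CLOUD_ADMIN package available on Autonomous Database):

EXEC DBMS_CLOUD_ADMIN.ENABLE_RESOURCE_PRINCIPAL();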
Run Data Pump Import with the dumpfile parameter set to the list of file URLs on
your Cloud Object Storage and the credential parameter set to
OCI$RESOURCE_PRINCIPAL.
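For example, a sketch where the connect string, region, namespace, bucket, and parallel degree are placeholders (on Linux shells, quote or escape the $ so the shell does not expand it):

impdp admin/password@adb_high \
      credential=OCI\$RESOURCE_PRINCIPAL \
      dumpfile=https://objectstorage.us-ashburn-1.oraclecloud.com/n/namespace-string/b/bucketname/o/export%L.dmp \
      directory=data_pump_dir \
      parallel=16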
If during the export with expdp you used the
encryption_pwd_prompt=yes parameter then use
encryption_pwd_prompt=yes and input the same password at the
impdp prompt that you specified during the export.
The
dumpfile parameter supports the
%L and %l wildcards in
addition to the legacy %U and %u
wildcards. For example, dumpfile=export%L.dmp. Use
the %L or %l wildcard for exports
from Oracle Database Release 12.2 and higher. This wildcard expands
the dumpfile file name into a 3-digit to 10-digit, variable-width
incrementing integer, starting at 100 and ending at
2147483646.
Use the legacy
%U or %u wildcard for
exports from Oracle Database prior to Release 12.2. If you use this
option and more than 99 dump files are needed, you must specify
multiple dumpfile names, each with the %U or
%u parameter.
If your bucket and data do not reside in a commercial (OC1) realm, where OCI
Dedicated Endpoints are not supported, use the following URI format for
dumpfile:
For the best import
performance use the HIGH database service for your import connection
and set the parallel parameter
to one quarter the number of ECPUs (0.25 x ECPU
count). If you are using the OCPU compute model, set
the parallel parameter to the number of OCPUs (1 x OCPU
count).
Import Data Using Oracle Data Pump (Versions 12.2.0.1 and Earlier) 🔗
You can import data from
Data Pump files into your Autonomous Database
using Data Pump client versions 12.2.0.1 and earlier by setting the
default_credential parameter.
Data Pump Import versions 12.2.0.1 and earlier do not have the credential parameter. If you are using an older version of Data Pump Import you need to define a default credential property for Autonomous Database and use the default_credential keyword in the dumpfile parameter.
In Oracle Data Pump, if your source files reside on Oracle Cloud
Infrastructure Object Storage you can use the Oracle Cloud
Infrastructure native URIs, or Swift URIs. See DBMS_CLOUD URI Formats for details on these file URI formats.
Importing with Older Oracle Data Pump Versions and Setting default_credential
Store your Cloud Object Storage credential using DBMS_CLOUD.CREATE_CREDENTIAL.
For example, to create Oracle Cloud
Infrastructure Auth Token credentials:
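For example, a minimal sketch with placeholder values for the credential name, user name, and Auth Token:

BEGIN
  DBMS_CLOUD.CREATE_CREDENTIAL(
    credential_name => 'DEF_CRED_NAME',
    username        => 'adb_user@example.com',
    password        => 'auth_token_value');
END;
/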
For more information on Oracle Cloud
Infrastructure Signing Key based credentials see CREATE_CREDENTIAL Procedure.
Supported credential types:
Data Pump import supports Oracle Cloud
Infrastructure Auth Token based credentials and Oracle Cloud
Infrastructure Signing Key based credentials.
For more information on the credential types for Oracle Cloud
Infrastructure Cloud Object Storage, see CREATE_CREDENTIAL Procedure.
Data Pump supports using an
Oracle Cloud
Infrastructure Object Storage pre-authenticated URL for the dumpfile. When you
use a pre-authenticated URL you must still set the
DEFAULT_CREDENTIAL, but impdp ignores its value;
you can use a NULL value for the
DEFAULT_CREDENTIAL you set in the next
step. See Using
Pre-Authenticated Requests for more
information.
Data Pump supports using a resource principal credential with
impdp.
Set the credential as the default credential for your Autonomous Database, as the ADMIN user. For example:
ALTER DATABASE PROPERTY SET DEFAULT_CREDENTIAL = 'ADMIN.DEF_CRED_NAME'
The DEFAULT_CREDENTIAL can be an OCI Resource Principal. For
example:
ALTER DATABASE PROPERTY SET DEFAULT_CREDENTIAL = 'OCI$RESOURCE_PRINCIPAL'
The DEFAULT_CREDENTIAL value cannot be an Azure service principal,
Amazon Resource Name (ARN), or a Google service account.
The DEFAULT_CREDENTIAL value can be set to NULL if
you are using a pre-authenticated URL.
Run Data Pump Import with the dumpfile parameter set to the list of file URLs on your Cloud Object Storage, and set the default_credential keyword. For example:
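A sketch, assuming the default credential set in the previous step; the connect string, region, namespace, and bucket are placeholders, and the default_credential keyword prefixes the dump file URL:

impdp admin/password@adb_high \
      dumpfile=default_credential:https://objectstorage.us-ashburn-1.oraclecloud.com/n/namespace-string/b/bucketname/o/export%u.dmp \
      directory=data_pump_dir \
      parallel=16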
If during the export with expdp you used the
encryption_pwd_prompt=yes parameter then use
encryption_pwd_prompt=yes and input the same password at the
impdp prompt that you specified during the export.
The
dumpfile parameter supports the
%L and %l wildcards in
addition to the legacy %U and %u
wildcards. For example, dumpfile=export%L.dmp. Use
the %L or %l wildcard for exports
from Oracle Database Release 12.2 and higher. This wildcard expands
the dumpfile file name into a 3-digit to 10-digit, variable-width
incrementing integer, starting at 100 and ending at
2147483646.
Use the legacy
%U or %u wildcard for
exports from Oracle Database prior to Release 12.2. If you use this
option and more than 99 dump files are needed, you must specify
multiple dumpfile names, each with the %U or
%u parameter.
If your bucket and data do not reside in a commercial (OC1) realm, where OCI
Dedicated Endpoints are not supported, use the following URI format for
dumpfile:
For the best import
performance use the HIGH database service for your import connection
and set the parallel parameter
to one quarter the number of ECPUs (0.25 x ECPU
count). If you are using the OCPU compute model, set
the parallel parameter to the number of OCPUs (1 x OCPU
count).
When you perform Oracle Data Pump export to Object Storage with a Swift
URI you must use Swift credentials to import with Oracle Data Pump import. See Oracle Cloud Infrastructure Object Storage Swift URI Format for more information on Swift URIs.
When you perform Oracle Data Pump export to Object Storage with a native
URI, you can import using either Swift credentials or Signing Key based Credentials. See
Oracle Cloud Infrastructure Object Storage Native URI Format for more information on Native URIs.
Access Log Files for Data Pump Import 🔗
The log files for Data Pump Import operations are stored in the directory you specify with the impdp directory parameter.
To access the log file you need to move the log file to your Cloud Object
Storage using the procedure DBMS_CLOUD.PUT_OBJECT. For example, the following PL/SQL block
moves the file import.log to your Cloud Object Storage:
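In this sketch the credential name is a placeholder, the region, namespace, and bucket must be replaced with your own values, and the log file is assumed to be in the DATA_PUMP_DIR directory:

BEGIN
  DBMS_CLOUD.PUT_OBJECT(
    credential_name => 'DEF_CRED_NAME',
    object_uri      => 'https://objectstorage.us-ashburn-1.oraclecloud.com/n/namespace-string/b/bucketname/o/import.log',
    directory_name  => 'DATA_PUMP_DIR',
    file_name       => 'import.log');
END;
/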
In this example, namespace-string is the Oracle
Cloud Infrastructure object storage namespace and
bucketname is the bucket
name. See Understanding
Object Storage Namespaces for more
information.
The following notes apply to using Oracle Data Pump Import on Autonomous Database.
By default the Oracle Data Pump Import utility imports data with the same
compression type that is specified for the tables on the source database (the database you
exported your data from). If you want to let Autonomous Database determine the compression type, specify the following
parameter when you import your data:
TRANSFORM=TABLE_COMPRESSION_CLAUSE:NONE
The TRANSFORM parameter with this option specifies that
Oracle Data Pump Import should ignore the compression type of your source tables. With
this option, Oracle Data Pump imports the tables into Autonomous Database using the default
compression type, which depends on the Autonomous Database workload type:
Data Warehouse: The default table compression
is Hybrid Columnar Compression.
Oracle recommends using this default if your application primarily
uses bulk load operations on your tables, as the loads will compress the data.
Query performance on these tables also benefits from compression because queries need
to do less I/O.
If you have staging tables replicated from other systems using Oracle
GoldenGate or other replication tools, or your application primarily uses
row-by-row DML operations on tables, Oracle recommends keeping the tables
uncompressed or using Advanced Row Compression.
Transaction Processing: The default table
compression is no compression.
JSON Database: The default table compression
is no compression.
APEX: The default table compression is no
compression.
See TRANSFORM for more information on
the Oracle Data Pump Import TRANSFORM parameter.
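As an illustration, this TRANSFORM setting can be combined with the credential-based import shown earlier; the connect string, credential name, and dump file URL are placeholders:

impdp admin/password@adb_high \
      credential=def_cred_name \
      dumpfile=https://objectstorage.us-ashburn-1.oraclecloud.com/n/namespace-string/b/bucketname/o/export%L.dmp \
      transform=table_compression_clause:none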