Set Up Source and Target Configurations for Online Migrations for Your GoldenGate Instance 🔗
To use your own GoldenGate instance, Oracle Cloud Infrastructure Database Migration service requires a few additional prerequisite tasks: create GoldenGate users on the source database and, if needed, create or unlock the GoldenGate user on the target database.
Note
You can use a single user for the database connection if that user has the required privileges. For the source database, the user for the CDB and PDBs must hold all of the privileges needed for both GoldenGate and Data Pump.
Create GoldenGate Users on the Source Database
On the source database, you must create a GoldenGate administration user, for example ggadmin.
If the source database is multitenant, create the user in the PDB, and
also create a different user in the CDB root, for example
c##ggadmin.
To create ggadmin, connect to the PDB and run the following commands:
CREATE TABLESPACE gg_admin DATAFILE '+DATA/ggadmin_data.dbf' SIZE 100m AUTOEXTEND ON NEXT 100m;
CREATE USER ggadmin IDENTIFIED BY ggadmin_pwd DEFAULT TABLESPACE gg_admin TEMPORARY TABLESPACE TEMP QUOTA UNLIMITED ON gg_admin;
GRANT CONNECT TO ggadmin;
GRANT RESOURCE TO ggadmin;
GRANT CREATE SESSION TO ggadmin;
GRANT SELECT_CATALOG_ROLE TO ggadmin;
GRANT DV_GOLDENGATE_ADMIN TO ggadmin;
GRANT DV_GOLDENGATE_REDO_ACCESS TO ggadmin;
GRANT ALTER SYSTEM TO ggadmin;
GRANT ALTER USER TO ggadmin;
GRANT DATAPUMP_EXP_FULL_DATABASE TO ggadmin;
GRANT DATAPUMP_IMP_FULL_DATABASE TO ggadmin;
GRANT SELECT ANY DICTIONARY TO ggadmin;
GRANT SELECT ANY TRANSACTION TO ggadmin;
GRANT INSERT ANY TABLE TO ggadmin;
GRANT UPDATE ANY TABLE TO ggadmin;
GRANT DELETE ANY TABLE TO ggadmin;
GRANT LOCK ANY TABLE TO ggadmin;
GRANT CREATE ANY TABLE TO ggadmin;
GRANT CREATE ANY INDEX TO ggadmin;
GRANT CREATE ANY CLUSTER TO ggadmin;
GRANT CREATE ANY INDEXTYPE TO ggadmin;
GRANT CREATE ANY OPERATOR TO ggadmin;
GRANT CREATE ANY PROCEDURE TO ggadmin;
GRANT CREATE ANY SEQUENCE TO ggadmin;
GRANT CREATE ANY TRIGGER TO ggadmin;
GRANT CREATE ANY TYPE TO ggadmin;
GRANT CREATE ANY VIEW TO ggadmin;
GRANT ALTER ANY TABLE TO ggadmin;
GRANT ALTER ANY INDEX TO ggadmin;
GRANT ALTER ANY CLUSTER TO ggadmin;
GRANT ALTER ANY INDEXTYPE TO ggadmin;
GRANT ALTER ANY OPERATOR TO ggadmin;
GRANT ALTER ANY PROCEDURE TO ggadmin;
GRANT ALTER ANY SEQUENCE TO ggadmin;
GRANT ALTER ANY TRIGGER TO ggadmin;
GRANT ALTER ANY TYPE TO ggadmin;
GRANT CREATE DATABASE LINK TO ggadmin;
GRANT EXECUTE ON dbms_lock TO ggadmin;
EXEC DBMS_GOLDENGATE_AUTH.GRANT_ADMIN_PRIVILEGE('ggadmin');
To create c##ggadmin, connect to the CDB and run the following commands:
CREATE USER c##ggadmin IDENTIFIED BY cggadmin_pwd CONTAINER=ALL DEFAULT TABLESPACE USERS TEMPORARY TABLESPACE TEMP QUOTA UNLIMITED ON USERS;
GRANT CONNECT TO c##ggadmin CONTAINER=ALL;
GRANT RESOURCE TO c##ggadmin CONTAINER=ALL;
GRANT CREATE TABLE TO c##ggadmin CONTAINER=ALL;
GRANT CREATE VIEW TO c##ggadmin CONTAINER=ALL;
GRANT CREATE SESSION TO c##ggadmin CONTAINER=ALL;
GRANT SELECT_CATALOG_ROLE TO c##ggadmin CONTAINER=ALL;
GRANT DV_GOLDENGATE_ADMIN TO c##ggadmin CONTAINER=ALL;
GRANT DV_GOLDENGATE_REDO_ACCESS TO c##ggadmin CONTAINER=ALL;
GRANT ALTER SYSTEM TO c##ggadmin CONTAINER=ALL;
GRANT ALTER USER TO c##ggadmin CONTAINER=ALL;
GRANT SELECT ANY DICTIONARY TO c##ggadmin CONTAINER=ALL;
GRANT SELECT ANY TRANSACTION TO c##ggadmin CONTAINER=ALL;
GRANT EXECUTE ON dbms_lock TO c##ggadmin CONTAINER=ALL;
EXEC DBMS_GOLDENGATE_AUTH.GRANT_ADMIN_PRIVILEGE('c##ggadmin',CONTAINER=>'ALL');
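After running the grants, you can spot-check that a user received the expected system privileges. The following query is a generic sketch using standard dictionary views, not a step Database Migration requires; run it as a privileged user in the container where you created the user:
SELECT grantee, privilege
  FROM dba_sys_privs
 WHERE grantee IN ('GGADMIN', 'C##GGADMIN')
 ORDER BY grantee, privilege;
-- Roles such as CONNECT, RESOURCE, and SELECT_CATALOG_ROLE are listed in DBA_ROLE_PRIVS instead.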
Create or Unlock the GoldenGate User on the Target Database
On co-managed targets:
If the target is not Oracle Autonomous Database, create a ggadmin user in the target PDB. This user is similar to the ggadmin user you created on the source database, but will require more privileges. See Establishing Oracle GoldenGate Credentials for information about privileges required for a "Replicat all modes" user.
On Autonomous targets:
Autonomous Database has a pre-created ggadmin user that you must unlock. Run the unlock commands against the GoldenGate target database instance.
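For example, a minimal unlock statement, where the password is a placeholder you must replace with your own value:
ALTER USER ggadmin IDENTIFIED BY "<your_ggadmin_password>" ACCOUNT UNLOCK;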
Oracle Cloud Infrastructure Database Migration service supports only scenarios where the target database and Oracle GoldenGate both run on private IP addresses.
Database Migration Data Pump Defaults 🔗
Oracle Cloud Infrastructure Database Migration automatically sets optimal defaults for Data Pump parameters to achieve better performance and ensure data security. Some Data Pump errors are also ignored by default.
The following table lists the Data Pump parameters set by Database Migration and the values they are set to. If a Database Migration setting is available to override the default, it is listed in the Optional DMS Setting to Override column.
Table 10-1 Database Migration Data Pump Parameter Defaults
Parameter
Default
Optional DMS Setting to Override
COMPRESSION
COMPRESSION_ALGORITHM is set to BASIC (for Oracle Database 11.2) and MEDIUM (for Oracle Database 12.1 and later)
COMPRESSION is set to ALL
N/A
ENCRYPTION
ENCRYPTION is set to ALL
ENCRYPTION_ALGORITHM is set to AES128
ENCRYPTION_MODE is set to PASSWORD
N/A
FILESIZE
FILESIZE is set to 5G
N/A
FLASHBACK_SCN
For offline migrations, sets FLASHBACK_TIME to the current system time.
For online migrations, uses neither FLASHBACK_SCN nor
FLASHBACK_TIME
N/A
REUSE_DUMPFILES
Always set to YES
N/A
TRANSFORM
Always sets OMIT_ENCRYPTION_CLAUSE:Y for Oracle Database
19c and later target releases
Always sets LOB_STORAGE:SECUREFILE
For an Autonomous Database target, the following transforms are set by default:
SEGMENT_ATTRIBUTES:N
DWCS_CVT_IOTS:Y
CONSTRAINT_USE_DEFAULT_INDEX:Y
Allows additional TRANSFORM to be specified
METRICS
Always set to YES
N/A
LOGTIME
Always set to ALL
N/A
TRACE
Always set to 1FF0b00
N/A
LOGFILE
Always set to the Data Pump job name and created under the specified export or import directory object.
For example, if a Data Pump job is ZDM_2_DP_EXPORT_8417 and the directory object used is DATA_PUMP_DIR, then the operation log is created as ZDM_2_DP_EXPORT_8417.log under DATA_PUMP_DIR (see the directory lookup example after this table).
N/A
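As noted in the LOGFILE row above, the log is written to the export or import directory object. If you want to locate that log on disk, a simple dictionary lookup works; this is a generic example, not a step Database Migration requires, and DATA_PUMP_DIR is just the sample name from the table:
SELECT directory_name, directory_path
  FROM dba_directories
 WHERE directory_name = 'DATA_PUMP_DIR';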
Oracle Data Pump errors that are ignored by default are as follows:
ORA-31684: XXXX already exists
ORA-39111: Dependent object type XXXX skipped, base object type
ORA-39082: Object type ALTER_PROCEDURE: XXXX created with compilation
warnings
This is not configurable.
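Errors such as ORA-39082 can leave objects invalid after import; the Post Target Datapump phase described later fixes such objects automatically. If you want to check or recompile them yourself, a standard Oracle approach (not specific to Database Migration; replace the schema name with your own) is:
-- Objects left invalid after import (for example, from ORA-39082 warnings)
SELECT owner, object_name, object_type
  FROM dba_objects
 WHERE status = 'INVALID';

-- Recompile a schema's invalid objects using the standard UTL_RECOMP package
EXEC UTL_RECOMP.RECOMP_SERIAL('YOUR_SCHEMA');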
Oracle migration phases 🔗
A migration job in Oracle Cloud Infrastructure Database Migration runs in operational phases as a workflow.
The phases in Database Migration are shown in the console with user-friendly names (DMS Phase), and in the REST API with codes prefixed with "ODMS_", as shown in the table below.
Note that Database Migration harnesses the Zero Downtime Migration tool to run the migration job workflow, so in the logs the migration phase names have a "ZDM_" prefix. Also note that one Database Migration phase corresponds to one or more Zero Downtime Migration phases, which gives you a more granular view of the workflow.
Table 10-2 Database Migration Process Phase Descriptions
DMS Phase Name in Console (API Code)
Description
ZDM Phase Name
Description
Validate (VALIDATE)
Performs validation of the source and target database,
the GoldenGate Hub, and Data Pump configuration.
Validate Source (VALIDATE_SRC)
Validates the source database access credentials, database parameter settings, ggadmin user privileges, and GoldenGate capture support for objects in the source database.
Validate Target (VALIDATE_TGT)
Verifies that the target database exists and that the database type
is Autonomous Database, and validates access credentials, security,
and connectivity.
Validates the export directory object (if applicable), and checks for
sufficient space and permission for specified user in the source
database to export dumps. Checks if the specified Oracle Cloud
Object Store buckets, data bucket, and wallet bucket are accessible
from the source. Also validates the proxy configuration if
applicable.
Verifies that the Data Pump import directory object exists.
If a pre-existing DBLINK was specified, checks if it exists and is
valid, and ensures that the ADB requirements for the DBLINK and
wallet files are met.
Prepare (PREPARE_REPLICATION)
Prepares for and starts the GoldenGate Extract process,
and enables supplemental logging
Prepare GoldenGate Hub (PREPARE_GG_HUB)
Registers database connection details and credentials with GoldenGate
Microservices.
ADD_HEARTBEAT_SRC
Creates GoldenGate heartbeat table in the source database. If the
table already exists, sets update frequency to 60 seconds.
ADD_SCHEMA_TRANDATA_SRC
Prepares the source database schemas for instantiation by enabling
schema level supplemental logging.
Starts the GoldenGate Extract process at the source database
Prepare (PREPARE_INITIAL_LOAD)
Creates any necessary directory objects for
Data Pump, and creates a DBLINK, if applicable.
Prepare Source Datapump (PREPARE_DATAPUMP_SRC)
Creates a new directory object for Data Pump, if required. Creates an OCI auth token to access the OCI OSS bucket, if required.
Prepare Target Datapump (PREPARE_DATAPUMP_TGT)
Creates a new directory object for Data Pump, if required. Stores OCI
Auth token in the database for secure OSS access.
If migrating via DBLINK, and a DBLINK must be created, creates the
necessary database credentials to access the source and create a new
DBLINK.
Ensures Autonomous Database security requirements are met using
DBLINK over SSL.
Datapump Source Export (INITIAL_LOAD_EXPORT)
Starts and monitors the Data Pump Export on the source database.
Datapump Source Export (DATAPUMP_EXPORT_SRC)
Starts and monitors the Data Pump Export on the source database.
Upload Source Dump Files (DATA_UPLOAD)
Uploads Data Pump dump files from the source to OCI OSS.
Upload Source Dump Files (UPLOAD_DUMPS_SRC)
Uploads Data Pump dump files from the source to OCI OSS.
Datapump Target Import (INITIAL_LOAD_IMPORT)
Starts import of Data Pump Dumps to the target database, either from
the OCI OSS bucket or via DBLINK, and monitors the Data Pump import
progress.
Datapump Target Import (DATAPUMP_IMPORT_TGT)
Starts import of Data Pump Dumps to the target database, either from
the OCI OSS bucket or via DBLINK, and monitors the Data Pump import
progress.
Post Datapump (POST_INITIAL_LOAD)
Removes directory objects, access credentials, and DBLINK that were
created for Data Pump by Database Migration.
Post Source Datapump (POST_DATAPUMP_SRC)
Removes any Data Pump directory object created by Database
Migration.
Post Target Datapump (POST_DATAPUMP_TGT)
Fixes any invalid objects in the target database. Removes the
database access and OCI OSS access credentials that were created for
the migration. Removes any DBLINK created by Database Migration.
Optionally, removes source database dumps stored in OCI OSS
bucket.
Prepare GoldenGate (PREPARE_REPLICATION_TARGET)
Prepares for GoldenGate replication.
ADD_HEARTBEAT_TGT
Creates the GoldenGate heartbeat table in the target database. If the
table already exists, sets update frequency to 60 seconds.
ADD_CHECKPOINT_TGT
Creates GoldenGate checkpoint table in the target database to track
Replicat progress.
CREATE_GG_REPLICAT_TGT
Creates GoldenGate Replicat process for the target database.
START_GG_REPLICAT_TGT
Starts GoldenGate Replicat process for the target database.
Monitor GoldenGate Lag (MONITOR_REPLICATION_LAG)
Polls the GoldenGate checkpoint and heartbeat data to measure end-to-end apply lag until the lag decreases below the desired threshold.
Monitor GoldenGate Lag (MONITOR_GG_LAG)
Polls the GoldenGate checkpoint and heartbeat data to measure end-to-end apply lag until the lag decreases below the desired threshold (see the example heartbeat query after this table).
Switchover App (SWITCHOVER)
When the user is ready to perform the switchover and resumes the migration job, which is in a waiting state after the Monitor Replication Lag phase, the service stops GoldenGate Extract, waits for GoldenGate Replicat to finish applying changes, and then stops GoldenGate Replicat.
Switchover App (SWITCHOVER_APP)
When the user is ready to perform the switchover and resumes the migration job, which is in a waiting state after the Monitor Replication Lag phase, the service stops GoldenGate Extract, waits for GoldenGate Replicat to finish applying changes, and then stops GoldenGate Replicat.
Cleanup (CLEANUP)
Performs cleanup operations such as deleting GoldenGate Extract and GoldenGate Replicat processes and connection details on source and target database respectively, removing Autonomous Database access to wallet, and so on.
Post Switchover (POST_SWITCHOVER_TGT)
Performs post-switchover actions for the target database.
RM_GG_EXTRACT_SRC
Deletes GoldenGate Extract process on source database
RM_GG_REPLICAT_TGT
Deletes GoldenGate Replicat process on target database
DELETE_SCHEMA_TRANDATA_SRC
Disables schema level supplemental logging on source database
RM_HEARTBEAT_SRC
Drops the GoldenGate heartbeat table in source database, if the table
was created by Database Migration. Otherwise, resets update
frequency to original setting.
RM_CHECKPOINT_TGT
Drops the GoldenGate checkpoint table in the target database.
RM_HEARTBEAT_TGT
Drops the GoldenGate heartbeat table in the target database, if the
table was created by Database Migration. Otherwise, resets the
update frequency to the original value.
Clean GoldenGate Hub (CLEAN_GG_HUB)
Deletes the database connection details and credentials saved with
GoldenGate Microservices
Post Actions (POST_ACTIONS)
Removes Autonomous Database access wallet from Database
Migration.
CLEANUP_SRC
Deletes the Cloud Pre-Migration Advisor Tool related binaries on source database servers.
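The Monitor GoldenGate Lag phases poll the heartbeat data created by the ADD_HEARTBEAT_SRC and ADD_HEARTBEAT_TGT phases. If you want to inspect that data yourself, the following is a rough sketch, assuming the heartbeat objects live in the ggadmin schema on the target (GoldenGate's ADD HEARTBEATTABLE normally generates the GG_HEARTBEAT table and a GG_LAG view there):
-- Lag view generated by ADD HEARTBEATTABLE; the schema name is an assumption.
SELECT * FROM ggadmin.gg_lag;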
MySQL migration phases 🔗
A migration job in Oracle Cloud Infrastructure Database Migration runs in operational phases as a workflow.
The phases in Database Migration service are shown in the console with user-friendly names (DMS Phase), and in the REST API with the codes prefixed with "ODMS_", as shown in the table below.
Table 10-3 MySQL Validation/Evaluation Job Phases (offline and online)
Phase Name in Console
Phase Enum Value (API Code)
Description
Validate source
ODMS_VALIDATE_SRC
Validates the source database connectivity.
Validate target
ODMS_VALIDATE_TGT
Validates target database connectivity
Validate source metadata
ODMS_FETCH_METADATA_SRC
Validates source database basic configuration
Validate target metadata
ODMS_FETCH_METADATA_TGT
Validates target database basic configuration
Run premigration advisor
ODMS_VALIDATE_PREMIGRATION_ADVISOR
Runs MySQL Evaluator (Export/Import and GoldenGate checks)
Table 10-4 MySQL Migration Job Phases (offline)
Phase Name in Console
Phase Enum Value (API Code)
Description
Validate
ODMS_VALIDATE
Validates the source and target database connectivity.
Export initial load and upload data
ODMS_INITIAL_LOAD_EXPORT_DATA_UPLOAD
Performs the MySQL source dump to object storage.
Import initial load
ODMS_INITIAL_LOAD_IMPORT
Performs the MySQL target import from object storage.
Validate target configuration
ODMS_FETCH_METADATA_TGT
Validates target database basic configuration
Cleanup
ODMS_CLEANUP
Cleans up temporary artifacts and object storage.
Table 10-5 MySQL Migration Job Phases (online GGS)
Phase Name in Console
Phase Enum Value (API Code)
Description
Initialize replication infrastructure
ODMS_INITIALIZE_REPLICATION_INFRASTRUCTURE
Sets up the GGS deployment, connections, and networking.
Validate
ODMS_VALIDATE
Validates source and target database connectivity.
Export initial load and upload data
ODMS_INITIAL_LOAD_EXPORT_DATA_UPLOAD
Performs the MySQL source dump to object storage.
Import initial load
ODMS_INITIAL_LOAD_IMPORT
Performs the MySQL target import from object storage.
Prepare
ODMS_PREPARE
Sets up the GoldenGate Replicat to apply changes to the target.
Monitor replication lag
ODMS_MONITOR_REPLICATION_LAG
Monitors the GoldenGate replication lag. The lag must be equal to or lower than the value defined when creating the migration before the job moves to the next phase.
Switchover
ODMS_SWITCHOVER
Stops replication based on the switchover logic. Extract and Replicat are monitored to ensure that there are no pending transactions; after confirmation, both are stopped.
Cleanup
ODMS_CLEANUP
Cleans up temporary artifacts and object storage.
Database Migration Events for OCI Monitoring 🔗
Oracle Cloud Infrastructure Database Migration service emits events in Oracle Cloud Infrastructure (OCI), which are structured messages that indicate state changes in resources.
You can define rules in the OCI Event Service to get notified of events happening in an OCI native service and use the Notification Service (ONS) to send notifications or trigger events.
Table 10-6 Database Migration Service Event Types
Resource
Attributes (Common for Object)
Event name
Notes
Migration
compartmentId
compartmentName
resourceId
resourceName
availabilityDomain
freeformTags
definedTags
Oracle-Tags
CreatedBy
CreatedOn
eventID
Migration Clone Begin
When a migration clone request is received.
Migration Clone End
When a migration clone request is completed.
Migration Create Begin
When a migration create request is received.
Migration Create End
When a migration create request is completed.
Migration Delete Begin
When a migration delete request is received.
Migration Delete End
When a migration delete request is completed.
Migration Evaluate Begin
When a migration evaluate request is received.
Migration Evaluate End
When a migration evaluate request is completed.
Migration Start Begin
When a migration start request is received.
Migration Start End
When a migration start request is completed.
Migration State Change
When a migration state change request is received.
Migration Update Begin
When a migration update request is received.
Migration Update End
When a migration update request is completed.
Job
compartmentId
compartmentName
resourceName
resourceId
availabilityDomain
eventID
Job Abort Begin
When a job abort request is received.
Job Abort End
When a job is aborted.
Job Delete Begin
When a job delete request is received.
Job Delete End
When a job is deleted.
Job Resume Begin
When a job resume request is received.
Job Resume End
When a job resume is completed.
Job State Changed
When a job state is changed.
Job State Failed
When a job fails.
Job State Successful
When a job state is successful.
Job State Waiting
When a job is in a waiting state.
Job Update Begin
When a job update request is received.
Job Update End
When a job update request is completed.
Connection
compartmentId
compartmentName
resourceId
resourceName
availabilityDomain
freeformTags
definedTags
Oracle-Tags
CreatedBy
CreatedOn
eventID
Connection Create Begin
When a connection create request is sent.
Connection Create End
When a connection create request is completed.
Connection Delete Begin
When a connection delete request is sent.
Connection Delete End
When a connection delete request is complete.
Connection Update Begin
When a connection update request is sent.
Connection Update End
When a connection update request is completed.
Phase
compartmentName
compartmentId
migrationName
migrationId
jobName
jobId
phaseName
phaseStatus
Phase Begin
Emitted when the phase starts.
Phase End
Emitted when the phase ends.
Table 10-7 Example Use Cases
Use case
Event
Attribute filters
Action
To start a process when Migration XYZ is started
Migration - Start
migrationName=XYZ
Function or Streaming
To start a function for a given job after the Data Pump file is uploaded and before import starts (You must also configure a pause after phase ZDM_UPLOAD_DATAPUMP_DUMP_FILES, and the function must call the API to resume the job)
Job - State Change
jobId=job_OCID
jobLifecycleState=PAUSED
phaseName=ZDM_UPLOAD_DATAPUMP_DUMP_FILES
Function
To send a notification whenever a Migration in my compartment is failing
Network Port Requirements 🔗
The ports required for communication when using Oracle Cloud Infrastructure Database Migration are described in the following table.
Table 10-8 Database Migration Communication
Ports
Initiator
Target
Protocol
Port
Purpose
Source database servers
Oracle Cloud Object Store Service
SSL
443
This port allows Data Pump dumps to be uploaded to Oracle Cloud Storage
Database Migration
Oracle Autonomous Database Serverless target
TCP
1522
Allow Oracle client connections to the database over Oracle's SQL*Net protocol
Database Migration
Oracle Autonomous Database on Dedicated Exadata Infrastructure target
TCP
2484
Allow Oracle client connections to the database over Oracle's SQL*Net protocol
Database Migration agent service host
Source and target database servers
TCP
22
SSH Authentication-based operations to run Database Migration operational
phases
Source and target database servers should accept incoming connections
from the Database Migration agent service host
Not applicable to Autonomous Database targets
Note
Required only for SSH connection.
Target database servers
Source database servers
TCP
1521, or the applicable database SCAN listener port
Should allow Oracle client connections to the database over Oracle's SQL*Net protocol
Allows redo log shipping if the source database needs to remain in sync with the new primary in Oracle Cloud after switchover.
Note
If you are using a non-default port
number (that is, something other than port 1521) for the local listener address, then the
non-default port should allow connections.
Configuring Network Security Rules
If you have Oracle Database or Oracle GoldenGate compute instances in private
subnets, ensure their subnet security rules or network security groups allow traffic
required for Database Migration jobs.
Database Migration allows you to specify a subnet to create a Private Endpoint
for Database Migration Connections (Connections). Refer to steps 9 and 10 in Managing Connections. For Autonomous Database Connections, the Console pre-populates the
subnet field using the Autonomous Database (ADB) subnet; however, you can use the dropdown
list to select a different subnet. The corresponding Database Migration API is CreateConnection.
The following EGRESS security rules must be configured for your
subnet specified for privateEndpointDetails when creating Database Migration
connections:
Rule Type
Stateful
Direction
Source Port Range
Protocol
Destination
Destination Port Range
SecurityList
No
Egress
All
TCP
CIDR (Classless Inter-Domain Routing) of subnet hosting Co-managed database or Oracle Autonomous Database Serverless
1521-1523
SecurityList
No
Egress
All
TCP
CIDR of subnet hosting Oracle Autonomous Database on Dedicated Exadata Infrastructure
2484
SecurityList
No
Egress
All
TCP
CIDR of subnet hosting Co-managed database
Note
Required only for SSH connection.
22
SecurityList
No
Egress
All
TCP
CIDR of subnet hosting Oracle GoldenGate compute
instance
443
The following INGRESS security rules must be configured for the
subnets hosting your databases or Oracle GoldenGate compute instances:
Subnet Hosting Co-managed System
Rule Type
Stateful
Direction
Source Port Range
Protocol
Destination
Destination Port Range
SecurityList
No
Ingress
CIDR of subnet specified for PrivateEndpoint for Database
Migration Connection (Connection)
All
TCP
1521-1523
SecurityList
No
Ingress
CIDR of subnet specified for PrivateEndpoint for Database Migration Connection (Connection)
Note
Required only for SSH connection.
All
TCP
22
Subnet Hosting ADB-S
Rule Type
Stateful
Direction
Source Port Range
Protocol
Destination
Destination Port Range
SecurityList
No
Ingress
CIDR of subnet specified for PrivateEndpoint for Database
Migration Connection (Connection)
All
TCP
1521-1523
Subnet Hosting ADB-D
Rule Type
Stateful
Direction
Source Port Range
Protocol
Destination
Destination Port Range
SecurityList
No
Ingress
CIDR of subnet specified for PrivateEndpoint for Database
Migration Connection (Connection)
All
TCP
2484
Subnet Hosting Oracle GoldenGate Compute Instance
Rule Type
Stateful
Direction
Source Port Range
Protocol
Destination
Destination Port Range
SecurityList
No
Ingress
CIDR of subnet specified for PrivateEndpoint for Database
Migration Connection (Connection) for target database
All
TCP
443
Additionally, if you have configured Network Security Groups (NSGs) for
ADB-S or Oracle GoldenGate compute instances, then the following INGRESS
rules must be set for the Network Security Groups:
NSG Associated With ADB-S
Rule Type
Stateful
Direction
Source Port Range
Protocol
Destination
Destination Port Range
NSG rule
No
Ingress
CIDR of subnet specified for PrivateEndpoint for Database
Migration Connection (Connection)
All
TCP
1521-1523
NSG Associated With Oracle GoldenGate Compute Instance
Rule Type
Stateful
Direction
Source Port Range
Protocol
Destination
Destination Port Range
NSG rule
No
Ingress
CIDR of subnet specified for PrivateEndpoint for Database
Migration Connection (Connection) for target database
All
TCP
443
Network security groups (NSGs) are associated with individual virtual network interface cards (VNICs), ADBs, compute instances, and so on. You can configure INGRESS and EGRESS NSG rules.
Security lists apply to the entire subnet.
You can use both security lists and NSGs. In this case, a union of security
list rules and NSG rules is applied.
CreateConnection supports the following use cases for databases with private IP
addresses:
Database is in subnetA and customer specifies subnetB to create a
PrivateEndpoint:
SubnetA must allow INGRESS from subnetB for relevant ports
SubnetB must allow EGRESS to subnetA for relevant ports
Database is in subnetA and customer selects subnetA to create a
PrivateEndpoint
SubnetA’s INGRESS rules must not prohibit subnetA as source for
relevant ports
SubnetA’s EGRESS rules must not prohibit subnetA as destination for
relevant ports
Migrating Databases from Amazon Web Services RDS to Supported Oracle Targets 🔗
You can migrate an Oracle Database from Amazon Web Services (AWS) RDS to the supported Oracle targets using the Oracle Cloud Infrastructure Database Migration offline and online migration methods.
Configuring Secure Connections
Ensure that the Amazon RDS instance's subnet security policy allows connections from Database Migration to the DB instance on the specified secure port. See the AWS documentation for details:
Allowing Database Migration to connect to Amazon RDS Oracle DB instance using
SSL/TLS
Enable Secure Socket Layer (SSL) or Transport Layer Security (TLS) in the Amazon RDS Oracle instance to secure the connection from Database Migration to the instance. See Encrypting client connections with SSL for details.
In OCI Database Migration, create the Connection resource for the
Amazon RDS Oracle source database, using the following guidelines.
In the Database Connections wizard, Database details step, select Manually configure database, choose Amazon RDS in the Database type list, and enter the full connect string with host, port, and service name in the following format (see the hypothetical example after these steps):
host:port/db-service-name
In the Connection details step, enter the
database administrator credentials for the Amazon RDS Oracle source
database. The user must have full Data Pump Export privileges.
If you intend to use a database link to transfer the data, also set
the TLS parameters.
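As a purely hypothetical illustration of the connect string format described above (the host name, port, and service name below are placeholder values, not a real endpoint):
myrdsinstance.abc123example.us-east-1.rds.amazonaws.com:1521/ORCL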
Configuring a Migration Resource with an Amazon RDS Source
To transfer the data from AWS, you have the following options:
Amazon Simple Storage Service (Amazon S3) Bucket
Database link
When you create the Migration resource, in the Migration
options step configure one of the initial load settings as
follows.
Datapump via database link: Enable this
option to use a direct SQL*Net connection between the source and target
databases. Note that using Data Pump with a database link to Autonomous
Database targets requires that the source database be set up with SSL
encryption.
To use a database link to migrate Amazon RDS Oracle Database schema to Oracle
Autonomous Database (ADB), you must have direct network connectivity between
the Amazon RDS Oracle instance and the ADB target.
Datapump via object storage: This option
lets you select the Amazon S3 bucket option to let
Data Pump temporarily store the exported database in an Amazon S3 bucket.
Enter the details for the Amazon S3 bucket. This option is only
shown if the source Connection is of type Amazon RDS.
The bucket Name must be between 3 and 63
characters, and can consist only of lower case letters, numbers, dots (.),
and hyphens (-). It must begin and end with a letter or number.
The Region must be the same region as the RDS Oracle database, for example, us-east-1.
Note that you must also configure the OCI Object Storage bucket so that
Database Migration can store Cloud Pre-migration Advisor Tool reports,
Database Migration logs, and Data Pump logs there.
Known issues 🔗
Refer to the Known Issues for OCI Database Migration.