Before you can begin the migration of your data with Oracle Cloud Infrastructure Database Migration Service, you must configure your source and target databases as described here.
Preparing Oracle Databases for Migration 🔗
Prepare your databases using either of the following methods:
Prepare your database by running scripts generated by the database preparation utility (Recommended option).
Manually configure your Oracle databases by following the documentation and running the SQL commands.
Preparing your databases using the database preparation utility: the utility generates the required configuration scripts for you; run the generated scripts against your source and target databases.
Configuring your databases manually: if you don't use the database preparation utility, manually configure your source database by running the SQL commands described in the following sections.
To configure a single-tenant (non-CDB) database as a source for migration, run the following SQL commands:
-- Archive Log Mode
SHUTDOWN IMMEDIATE;
STARTUP MOUNT;
ALTER DATABASE ARCHIVELOG;
ALTER DATABASE OPEN;
-- Global Names
ALTER SYSTEM SET GLOBAL_NAMES=FALSE;
-- Stream Pool Size
ALTER SYSTEM SET STREAMS_POOL_SIZE=256M;
-- Force Logging
ALTER DATABASE FORCE LOGGING;
-- Supplemental Logging
ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;
-- User system for Datapump
ALTER USER SYSTEM IDENTIFIED BY system_pwd ACCOUNT UNLOCK;
To configure a multi-tenant (CDB) as a source for migration, run the following SQL commands:
-- Connect to CDB and run:
-- Archive Log Mode
SHUTDOWN IMMEDIATE;
STARTUP MOUNT;
ALTER DATABASE ARCHIVELOG;
ALTER DATABASE OPEN;
-- Global Names
ALTER SYSTEM SET GLOBAL_NAMES=FALSE;
-- Stream Pool Size
ALTER SYSTEM SET STREAMS_POOL_SIZE=256M SCOPE=BOTH;
-- Force Logging
ALTER DATABASE FORCE LOGGING;
-- Supplemental Logging
ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;
-- User system for Datapump
ALTER USER SYSTEM IDENTIFIED BY system_pwd ACCOUNT UNLOCK;
-- Replace PDB with the name of your pluggable database
ALTER SESSION SET CONTAINER=PDB;
ALTER USER SYSTEM ACCOUNT UNLOCK;
To configure Amazon RDS (non-CDB) as a source for migration, run the following SQL commands:
-- Remember to set the following parameters through the Parameter groups functionality:
-- STREAMS_POOL_SIZE=2147483648
-- GLOBAL_NAMES=FALSE
-- To see how Parameter groups work, refer to https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/parameter-groups-overview.html
-- Archive Log Mode
EXEC RDSADMIN.RDSADMIN_UTIL.SET_CONFIGURATION('ARCHIVELOG RETENTION HOURS',72);
-- Force Logging
EXEC RDSADMIN.RDSADMIN_UTIL.FORCE_LOGGING(P_ENABLE => TRUE);
-- Supplemental Logging
EXEC RDSADMIN.RDSADMIN_UTIL.ALTER_SUPPLEMENTAL_LOGGING('ADD');
Additional Configurations for Preparing the Source Database for Online Migration 🔗
Before you can migrate data with Oracle Cloud Infrastructure Database Migration, perform additional configurations for your source database for online migration as described here.
To configure a single-tenant (non-CDB) database as a source for online migration, run the following SQL commands:
-- Archive Log Mode
SHUTDOWN IMMEDIATE;
STARTUP MOUNT;
ALTER DATABASE ARCHIVELOG;
ALTER DATABASE OPEN;
-- Global Names
ALTER SYSTEM SET GLOBAL_NAMES=FALSE;
-- Stream Pool Size
ALTER SYSTEM SET STREAMS_POOL_SIZE=2G;
-- Force Logging
ALTER DATABASE FORCE LOGGING;
-- Enable GoldenGate
ALTER SYSTEM SET ENABLE_GOLDENGATE_REPLICATION=TRUE;
-- Supplemental Logging
ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;
-- User system for Datapump
ALTER USER SYSTEM IDENTIFIED BY system_pwd ACCOUNT UNLOCK;
-- Create GoldenGate nonCDB user
CREATE TABLESPACE GG_ADMIN DATAFILE '+DATA/ggadmin_data.dbf' SIZE 100m AUTOEXTEND ON NEXT 100m;
CREATE USER GGADMIN IDENTIFIED BY ggadmin_pwd DEFAULT TABLESPACE GG_ADMIN TEMPORARY TABLESPACE TEMP QUOTA 100M ON GG_ADMIN;
GRANT CONNECT TO GGADMIN;
GRANT RESOURCE TO GGADMIN;
GRANT CREATE SESSION TO GGADMIN;
GRANT SELECT_CATALOG_ROLE TO GGADMIN;
GRANT DV_GOLDENGATE_ADMIN TO GGADMIN;
GRANT DV_GOLDENGATE_REDO_ACCESS TO GGADMIN;
GRANT ALTER SYSTEM TO GGADMIN;
GRANT ALTER USER TO GGADMIN;
GRANT DATAPUMP_EXP_FULL_DATABASE TO GGADMIN;
GRANT DATAPUMP_IMP_FULL_DATABASE TO GGADMIN;
GRANT SELECT ANY DICTIONARY TO GGADMIN;
GRANT SELECT ANY TRANSACTION TO GGADMIN;
GRANT INSERT ANY TABLE TO GGADMIN;
GRANT UPDATE ANY TABLE TO GGADMIN;
GRANT DELETE ANY TABLE TO GGADMIN;
GRANT LOCK ANY TABLE TO GGADMIN;
GRANT CREATE ANY TABLE TO GGADMIN;
GRANT CREATE ANY INDEX TO GGADMIN;
GRANT CREATE ANY CLUSTER TO GGADMIN;
GRANT CREATE ANY INDEXTYPE TO GGADMIN;
GRANT CREATE ANY OPERATOR TO GGADMIN;
GRANT CREATE ANY PROCEDURE TO GGADMIN;
GRANT CREATE ANY SEQUENCE TO GGADMIN;
GRANT CREATE ANY TRIGGER TO GGADMIN;
GRANT CREATE ANY TYPE TO GGADMIN;
GRANT CREATE ANY VIEW TO GGADMIN;
GRANT ALTER ANY TABLE TO GGADMIN;
GRANT ALTER ANY INDEX TO GGADMIN;
GRANT ALTER ANY CLUSTER TO GGADMIN;
GRANT ALTER ANY INDEXTYPE TO GGADMIN;
GRANT ALTER ANY OPERATOR TO GGADMIN;
GRANT ALTER ANY PROCEDURE TO GGADMIN;
GRANT ALTER ANY SEQUENCE TO GGADMIN;
GRANT ALTER ANY TRIGGER TO GGADMIN;
GRANT ALTER ANY TYPE TO GGADMIN;
GRANT CREATE DATABASE LINK TO GGADMIN;
GRANT EXECUTE ON dbms_lock TO GGADMIN;
EXEC DBMS_GOLDENGATE_AUTH.GRANT_ADMIN_PRIVILEGE('GGADMIN');
To configure a multi-tenant (CDB) as a source for online migration, run the following SQL commands:
-- Connect to CDB and run:
-- Archive Log Mode
SHUTDOWN IMMEDIATE;
STARTUP MOUNT;
ALTER DATABASE ARCHIVELOG;
ALTER DATABASE OPEN;
-- Global Names
ALTER SYSTEM SET GLOBAL_NAMES=FALSE;
-- Stream Pool Size
ALTER SYSTEM SET STREAMS_POOL_SIZE=2G SCOPE=BOTH;
-- Force Logging
ALTER DATABASE FORCE LOGGING;
-- Enable GoldenGate
ALTER SYSTEM SET ENABLE_GOLDENGATE_REPLICATION=TRUE SCOPE=BOTH;
-- Supplemental Logging
ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;
-- User system for Datapump
ALTER USER SYSTEM IDENTIFIED BY system_pwd ACCOUNT UNLOCK CONTAINER=ALL;
-- Create GoldenGate users
-- CDB user
ALTER SESSION SET CONTAINER = CDB$ROOT;
CREATE USER C##GGADMIN IDENTIFIED BY cggadmin_pwd CONTAINER=ALL DEFAULT TABLESPACE USERS TEMPORARY TABLESPACE TEMP QUOTA UNLIMITED ON USERS;
GRANT CONNECT TO C##GGADMIN CONTAINER=ALL;
GRANT RESOURCE TO C##GGADMIN CONTAINER=ALL;
GRANT CREATE TABLE TO C##GGADMIN CONTAINER=ALL;
GRANT CREATE VIEW TO C##GGADMIN CONTAINER=ALL;
GRANT CREATE SESSION TO C##GGADMIN CONTAINER=ALL;
GRANT SELECT_CATALOG_ROLE TO C##GGADMIN CONTAINER=ALL;
GRANT DV_GOLDENGATE_ADMIN TO C##GGADMIN CONTAINER=ALL;
GRANT DV_GOLDENGATE_REDO_ACCESS TO C##GGADMIN CONTAINER=ALL;
GRANT ALTER SYSTEM TO C##GGADMIN CONTAINER=ALL;
GRANT ALTER USER TO C##GGADMIN CONTAINER=ALL;
GRANT SELECT ANY DICTIONARY TO C##GGADMIN CONTAINER=ALL;
GRANT SELECT ANY TRANSACTION TO C##GGADMIN CONTAINER=ALL;
GRANT EXECUTE ON dbms_lock TO C##GGADMIN CONTAINER=ALL;
EXEC DBMS_GOLDENGATE_AUTH.GRANT_ADMIN_PRIVILEGE('C##GGADMIN',CONTAINER=>'ALL');
-- PDB User
ALTER SESSION SET CONTAINER = v_pdb_name;
CREATE TABLESPACE GG_ADMIN DATAFILE '+DATA/ggadmin_data.dbf' SIZE 100m AUTOEXTEND ON NEXT 100m;
CREATE USER GGADMIN IDENTIFIED BY ggadmin_pwd CONTAINER=CURRENT DEFAULT TABLESPACE GG_ADMIN TEMPORARY TABLESPACE TEMP QUOTA UNLIMITED ON GG_ADMIN;
GRANT CONNECT TO GGADMIN CONTAINER=CURRENT;
GRANT RESOURCE TO GGADMIN CONTAINER=CURRENT;
GRANT CREATE SESSION TO GGADMIN CONTAINER=CURRENT;
GRANT SELECT_CATALOG_ROLE TO GGADMIN CONTAINER=CURRENT;
GRANT DV_GOLDENGATE_ADMIN TO GGADMIN CONTAINER=CURRENT;
GRANT DV_GOLDENGATE_REDO_ACCESS TO GGADMIN CONTAINER=CURRENT;
GRANT ALTER SYSTEM TO GGADMIN CONTAINER=CURRENT;
GRANT ALTER USER TO GGADMIN CONTAINER=CURRENT;
GRANT DATAPUMP_EXP_FULL_DATABASE TO GGADMIN CONTAINER=CURRENT;
GRANT DATAPUMP_IMP_FULL_DATABASE TO GGADMIN CONTAINER=CURRENT;
GRANT SELECT ANY DICTIONARY TO GGADMIN CONTAINER=CURRENT;
GRANT SELECT ANY TRANSACTION TO GGADMIN CONTAINER=CURRENT;
GRANT INSERT ANY TABLE TO GGADMIN CONTAINER=CURRENT;
GRANT UPDATE ANY TABLE TO GGADMIN CONTAINER=CURRENT;
GRANT DELETE ANY TABLE TO GGADMIN CONTAINER=CURRENT;
GRANT LOCK ANY TABLE TO GGADMIN CONTAINER=CURRENT;
GRANT SELECT ON V_$SESSION TO GGADMIN CONTAINER=CURRENT;
GRANT SELECT ON V_$TRANSACTION TO GGADMIN CONTAINER=CURRENT;
GRANT SELECT ON V_$DATABASE TO GGADMIN CONTAINER=CURRENT;
GRANT CREATE ANY TABLE TO GGADMIN CONTAINER=CURRENT;
GRANT CREATE ANY INDEX TO GGADMIN CONTAINER=CURRENT;
GRANT CREATE ANY CLUSTER TO GGADMIN CONTAINER=CURRENT;
GRANT CREATE ANY INDEXTYPE TO GGADMIN CONTAINER=CURRENT;
GRANT CREATE ANY OPERATOR TO GGADMIN CONTAINER=CURRENT;
GRANT CREATE ANY PROCEDURE TO GGADMIN CONTAINER=CURRENT;
GRANT CREATE ANY SEQUENCE TO GGADMIN CONTAINER=CURRENT;
GRANT CREATE ANY TRIGGER TO GGADMIN CONTAINER=CURRENT;
GRANT CREATE ANY TYPE TO GGADMIN CONTAINER=CURRENT;
GRANT CREATE ANY VIEW TO GGADMIN CONTAINER=CURRENT;
GRANT ALTER ANY TABLE TO GGADMIN CONTAINER=CURRENT;
GRANT ALTER ANY INDEX TO GGADMIN CONTAINER=CURRENT;
GRANT ALTER ANY CLUSTER TO GGADMIN CONTAINER=CURRENT;
GRANT ALTER ANY INDEXTYPE TO GGADMIN CONTAINER=CURRENT;
GRANT ALTER ANY OPERATOR TO GGADMIN CONTAINER=CURRENT;
GRANT ALTER ANY PROCEDURE TO GGADMIN CONTAINER=CURRENT;
GRANT ALTER ANY SEQUENCE TO GGADMIN CONTAINER=CURRENT;
GRANT ALTER ANY TRIGGER TO GGADMIN CONTAINER=CURRENT;
GRANT ALTER ANY TYPE TO GGADMIN CONTAINER=CURRENT;
GRANT CREATE DATABASE LINK TO GGADMIN CONTAINER=CURRENT;
GRANT EXECUTE ON dbms_lock TO GGADMIN CONTAINER=CURRENT;
EXEC DBMS_GOLDENGATE_AUTH.GRANT_ADMIN_PRIVILEGE('GGADMIN',CONTAINER=>'CURRENT');
-- Privileges to be granted to the Initial Load user.
ALTER SESSION SET CURRENT_SCHEMA = GGADMIN;
GRANT CREATE ANY TABLE TO <INITIAL_LOAD_USERNAME> WITH ADMIN OPTION;
GRANT ALTER ANY TABLE TO <INITIAL_LOAD_USERNAME> WITH ADMIN OPTION;
GRANT DROP ANY TABLE TO <INITIAL_LOAD_USERNAME> WITH ADMIN OPTION;
GRANT CREATE ANY TRIGGER TO <INITIAL_LOAD_USERNAME> WITH ADMIN OPTION;
GRANT DROP ANY TRIGGER TO <INITIAL_LOAD_USERNAME> WITH ADMIN OPTION;
GRANT ALTER ANY TRIGGER TO <INITIAL_LOAD_USERNAME> WITH ADMIN OPTION;
GRANT DELETE ANY TABLE TO <INITIAL_LOAD_USERNAME> WITH ADMIN OPTION;
GRANT INSERT ANY TABLE TO <INITIAL_LOAD_USERNAME> WITH ADMIN OPTION;
GRANT UPDATE ANY TABLE TO <INITIAL_LOAD_USERNAME> WITH ADMIN OPTION;
-- Create EVENT_TABLE for GGADMIN user
-- Table EVENT_TABLE is required to handle the switchover during the online migration.
CREATE TABLE GGADMIN.EVENT_TABLE (
event_pk NUMBER,
event_desc VARCHAR2(200),
src_event_date DATE,
trg_received_date DATE,
src_commit_scn NUMBER,
src_commit_timestamp DATE,
session_name VARCHAR2(200),
session_serial# NUMBER,
session_process VARCHAR2(50),
session_program VARCHAR2(90),
current_scn NUMBER,
number_of_open_txn NUMBER,
oldest_open_txn_scn NUMBER
);
CREATE OR REPLACE TRIGGER GGADMIN.TRG_INSERT_EVENT_TABLE
BEFORE INSERT ON GGADMIN.EVENT_TABLE
FOR EACH ROW
DECLARE
v_code NUMBER;
v_errm VARCHAR2(64);
BEGIN
:NEW.src_event_date := SYSDATE;
:NEW.session_name := USER;
SELECT sys_context('USERENV','SID') INTO :NEW.session_serial# FROM dual;
SELECT min(start_scnb) INTO :NEW.oldest_open_txn_scn FROM v$transaction;
SELECT count(*) INTO :NEW.number_of_open_txn FROM v$transaction;
SELECT current_scn INTO :NEW.current_scn FROM v$database;
SELECT process INTO :NEW.session_process FROM v$session a
WHERE a.SID = sys_context('USERENV','SID');
SELECT program INTO :NEW.session_program FROM v$session a
WHERE a.SID = sys_context('USERENV','SID');
EXCEPTION
WHEN OTHERS THEN
v_code := SQLCODE;
v_errm := SUBSTR(SQLERRM, 1, 64);
DBMS_OUTPUT.PUT_LINE('Error on GGADMIN.TRG_INSERT_EVENT_TABLE');
DBMS_OUTPUT.PUT_LINE(v_code || ': ' || v_errm);
RAISE;
END;
/
To configure Amazon RDS (non-CDB) as a source for online migration, run the following SQL commands:
-- Remember to set the following parameters through the Parameter groups functionality:
-- STREAMS_POOL_SIZE=2147483648
-- ENABLE_GOLDENGATE_REPLICATION=TRUE
-- GLOBAL_NAMES=FALSE
-- To see how Parameter groups work, refer to https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/parameter-groups-overview.html
-- Archive Log Mode
EXEC RDSADMIN.RDSADMIN_UTIL.SET_CONFIGURATION('ARCHIVELOG RETENTION HOURS',72);
-- Force Logging
EXEC RDSADMIN.RDSADMIN_UTIL.FORCE_LOGGING(P_ENABLE => TRUE);
-- Supplemental Logging
EXEC RDSADMIN.RDSADMIN_UTIL.ALTER_SUPPLEMENTAL_LOGGING('ADD');
-- Create GoldenGate user
CREATE USER GGADMIN IDENTIFIED BY ggadmin_pwd DEFAULT TABLESPACE USERS TEMPORARY TABLESPACE TEMP QUOTA 100M ON USERS;
GRANT UNLIMITED TABLESPACE TO GGADMIN;
GRANT CONNECT, RESOURCE TO GGADMIN;
GRANT SELECT ANY DICTIONARY TO GGADMIN;
GRANT CREATE VIEW TO GGADMIN;
GRANT EXECUTE ON DBMS_LOCK TO GGADMIN;
GRANT SELECT ON SYS.CCOL$ TO GGADMIN;
GRANT SELECT ON SYS.CDEF$ TO GGADMIN;
GRANT SELECT ON SYS.COL$ TO GGADMIN;
GRANT SELECT ON SYS.CON$ TO GGADMIN;
GRANT SELECT ON SYS.DEFERRED_STG$ TO GGADMIN;
GRANT SELECT ON SYS.ICOL$ TO GGADMIN;
GRANT SELECT ON SYS.IND$ TO GGADMIN;
GRANT SELECT ON SYS.LOB$ TO GGADMIN;
GRANT SELECT ON SYS.LOBFRAG$ TO GGADMIN;
GRANT SELECT ON SYS.OBJ$ TO GGADMIN;
GRANT SELECT ON SYS.SEG$ TO GGADMIN;
GRANT SELECT ON SYS.TAB$ TO GGADMIN;
GRANT SELECT ON SYS.TABCOMPART$ TO GGADMIN;
GRANT SELECT ON SYS.TABPART$ TO GGADMIN;
GRANT SELECT ON SYS.TABSUBPART$ TO GGADMIN;
EXEC RDSADMIN.RDSADMIN_DBMS_GOLDENGATE_AUTH.GRANT_ADMIN_PRIVILEGE (GRANTEE=>'GGADMIN',PRIVILEGE_TYPE=>'CAPTURE',GRANT_SELECT_PRIVILEGES=>TRUE,DO_GRANTS=>TRUE);
Use Case for Preparing the Source Database for Migration 🔗
The following is a sample use case for preparing your source database for migration. Configuring a PDB as a source is similar to configuring a classic (non-CDB) database as a source, but there are additional requirements for using the CDB root as ggaliassrc.
The steps differ slightly if you're using a PDB as your source database, so make sure you follow the multitenant-specific recommendations if your database is in a multitenant environment.
Configure the streams pool with the initialization parameter STREAMS_POOL_SIZE.
For offline logical migrations, set STREAMS_POOL_SIZE to a minimum of 256 MB-350 MB for optimal Data Pump performance, so that an initial pool is allocated; otherwise you might see a significant delay during startup.
For online logical migrations, set STREAMS_POOL_SIZE to at least 2GB.
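The two recommendations above can be checked and applied as follows. This is a sketch: SCOPE=BOTH assumes the instance is running on an spfile, and the 2G value shown is the online-migration minimum.

```sql
-- Inspect the current streams pool allocation.
SHOW PARAMETER STREAMS_POOL_SIZE;
-- Offline migration: at least 256M. Online migration: at least 2G.
ALTER SYSTEM SET STREAMS_POOL_SIZE=2G SCOPE=BOTH;
```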
Check the GLOBAL_NAMES parameter. If it's set to true, change it to false.
sqlplus > show parameter global
NAME                                 TYPE    VALUE
------------------------------------ ------- ------------------------------
global_names                         boolean TRUE
sqlplus > alter system set global_names=false;
Enable ARCHIVELOG if it is not already enabled.
Check whether archivelog is enabled:
sqlplus > archive log list
Sample output returned:
Database log mode              Archive log Mode
Automatic archival             Enabled
Archive destination            USE_DB_RECOVERY_FILE_DEST
Oldest online log sequence     33
Next log sequence to archive   35
Current log sequence           35
Enable archivelog mode:
sqlplus > shutdown immediate
sqlplus > startup mount
sqlplus > alter database archivelog;
sqlplus > alter database open;
Disable archivelog mode (for cleanup after the migration):
sqlplus > shutdown immediate
sqlplus > startup mount
sqlplus > alter database noarchivelog;
sqlplus > alter database open;
Enable logging:
Check if logging is enabled:
sqlplus > SELECT supplemental_log_data_min, force_logging FROM v$database;
Enable logging:
sqlplus > ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;
sqlplus > ALTER DATABASE FORCE LOGGING;
Disable logging (for cleanup after the migration):
sqlplus > ALTER DATABASE DROP SUPPLEMENTAL LOG DATA;
sqlplus > ALTER DATABASE NO FORCE LOGGING;
Create a database administrator user that has full Oracle Data Pump privileges to perform the initial load. A user with the DATAPUMP_EXP_FULL_DATABASE role is required for the export operation at the source database. You select this user as the database administrator when you create database connections to the source database.
See Oracle Data Pump in the Oracle Database Utilities guide for more information.
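For example, a minimal administrator user for the export could be created as follows. MIGADMIN and its password are hypothetical placeholders, not names required by the service; you can also use an existing user that already holds the role.

```sql
-- Hypothetical Data Pump administrator user for the source database.
CREATE USER MIGADMIN IDENTIFIED BY migadmin_pwd;
GRANT CREATE SESSION TO MIGADMIN;
-- Role required for the export operation at the source.
GRANT DATAPUMP_EXP_FULL_DATABASE TO MIGADMIN;
```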
In the PDB being exported, if any dependencies were created on local objects in a C## user's schema, those objects fail to import into the target Autonomous Database. Exclude the problematic schema from the migration job.
If you are using Object Storage as a data transfer medium, ensure that an export Directory Object exists and is usable by Data Pump to store generated dump files.
The directory object is a file path on the source database server file system. The name needs to comply with Oracle Database directory object rules. See CREATE DIRECTORY in Oracle Database SQL Language Reference for details.
The export directory object must be owned by the same OS user who owns the database Oracle home.
This step is not required if you are using a database link transfer medium.
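A directory object for the generated dump files might be created as follows. The directory name and OS path are illustrative assumptions; the path must exist on the source database server and be writable by the OS user that owns the Oracle home.

```sql
-- Hypothetical export directory; adjust the name and path to your environment.
CREATE OR REPLACE DIRECTORY DMS_EXPORT_DIR AS '/u01/app/oracle/dms_export';
-- Allow the Data Pump administrator user to read and write dump files there.
GRANT READ, WRITE ON DIRECTORY DMS_EXPORT_DIR TO SYSTEM;
```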
If you plan to transfer data using a database link, then you must set up SSL encryption on the source database. Using Data Pump with a database link to an Autonomous Database target requires that the source database have SSL encryption enabled. Creating a database link from an Autonomous Database Shared Infrastructure target to a source database with a private IP requires assistance from Oracle Support.
If applicable, ensure that the following patches are applied to the source database:
Database PSU 11.2.0.4.210720, which includes a fix for Oracle GoldenGate performance bug 28849751 (IE PERFORMANCE DEGRADES WHEN NETWORK LATENCY BETWEEN EXTRACT AND CAPTURE IS MORE THAN 8MS).
OGG RDBMS patch 32248879 (MERGE REQUEST ON TOP OF DATABASE PSU 11.2.0.4.201020 FOR BUGS 32048478 20448066), which contains a mandatory fix for Oracle GoldenGate Microservices bug 20448066 (DBMS_XSTREAM_GG APIS SHOULD BE ALLOWED FOR SCA PROCESSES).
Preparing the Target Database for Migration 🔗
Before you can migrate data with Oracle Cloud Infrastructure Database Migration, manually configure your target database as described here.
To configure an Autonomous database as a target for migration, run the following SQL commands:
-- Global Names
ALTER SYSTEM SET GLOBAL_NAMES=FALSE;
To configure a non-Autonomous, single-tenant (non-CDB) as a target for migration, run the following SQL commands:
-- Global Names
ALTER SYSTEM SET GLOBAL_NAMES=FALSE;
-- User system for Datapump
ALTER USER SYSTEM IDENTIFIED BY system_pwd ACCOUNT UNLOCK;
To configure a non-Autonomous, multi-tenant (CDB) as a target for migration, run the following SQL commands:
-- Connect to CDB and run:
-- Global Names
ALTER SYSTEM SET GLOBAL_NAMES=FALSE;
-- User system for Datapump
ALTER USER SYSTEM IDENTIFIED BY system_pwd ACCOUNT UNLOCK CONTAINER=ALL;
Additional Configurations for Preparing the Target Database for Online Migration 🔗
Before you can migrate data with Oracle Cloud Infrastructure Database Migration, perform additional configurations for your target database for online migration as described here.
You must unlock the GGADMIN user from the Oracle Cloud Infrastructure Console. When you unlock the user, provide a password and its corresponding confirmation.
Alternatively, to configure an Autonomous database as a target for online migration, run the following SQL commands:
-- Global Names
ALTER SYSTEM SET GLOBAL_NAMES=FALSE;
-- Create the GoldenGate user if it doesn't exist
CREATE TABLESPACE GG_ADMIN DATAFILE '+DATA/ggadmin_data.dbf' SIZE 100m AUTOEXTEND ON NEXT 100m;
CREATE USER GGADMIN IDENTIFIED BY ggadmin_pwd CONTAINER=CURRENT DEFAULT TABLESPACE GG_ADMIN TEMPORARY TABLESPACE TEMP QUOTA UNLIMITED ON GG_ADMIN;
-- Or, if the user already exists, unlock it
ALTER USER GGADMIN IDENTIFIED BY ggadmin_pwd ACCOUNT UNLOCK;
To configure a non-Autonomous, single-tenant (non-CDB) as a target for online migration, run the following SQL commands:
-- Global Names
ALTER SYSTEM SET GLOBAL_NAMES=FALSE;
-- User system for Datapump
ALTER USER SYSTEM IDENTIFIED BY system_pwd ACCOUNT UNLOCK;
-- Create GoldenGate nonCDB user
CREATE TABLESPACE GG_ADMIN DATAFILE '+DATA/ggadmin_data.dbf' SIZE 100m AUTOEXTEND ON NEXT 100m;
CREATE USER GGADMIN IDENTIFIED BY ggadmin_pwd DEFAULT TABLESPACE GG_ADMIN TEMPORARY TABLESPACE TEMP QUOTA 100M ON GG_ADMIN;
GRANT CONNECT TO GGADMIN;
GRANT RESOURCE TO GGADMIN;
GRANT CREATE SESSION TO GGADMIN;
GRANT SELECT_CATALOG_ROLE TO GGADMIN;
GRANT DV_GOLDENGATE_ADMIN TO GGADMIN;
GRANT DV_GOLDENGATE_REDO_ACCESS TO GGADMIN;
GRANT ALTER SYSTEM TO GGADMIN;
GRANT ALTER USER TO GGADMIN;
GRANT DATAPUMP_EXP_FULL_DATABASE TO GGADMIN;
GRANT DATAPUMP_IMP_FULL_DATABASE TO GGADMIN;
GRANT SELECT ANY DICTIONARY TO GGADMIN;
GRANT SELECT ANY TRANSACTION TO GGADMIN;
GRANT INSERT ANY TABLE TO GGADMIN;
GRANT UPDATE ANY TABLE TO GGADMIN;
GRANT DELETE ANY TABLE TO GGADMIN;
GRANT LOCK ANY TABLE TO GGADMIN;
GRANT CREATE ANY TABLE TO GGADMIN;
GRANT CREATE ANY INDEX TO GGADMIN;
GRANT CREATE ANY CLUSTER TO GGADMIN;
GRANT CREATE ANY INDEXTYPE TO GGADMIN;
GRANT CREATE ANY OPERATOR TO GGADMIN;
GRANT CREATE ANY PROCEDURE TO GGADMIN;
GRANT CREATE ANY SEQUENCE TO GGADMIN;
GRANT CREATE ANY TRIGGER TO GGADMIN;
GRANT CREATE ANY TYPE TO GGADMIN;
GRANT CREATE ANY VIEW TO GGADMIN;
GRANT ALTER ANY TABLE TO GGADMIN;
GRANT ALTER ANY INDEX TO GGADMIN;
GRANT ALTER ANY CLUSTER TO GGADMIN;
GRANT ALTER ANY INDEXTYPE TO GGADMIN;
GRANT ALTER ANY OPERATOR TO GGADMIN;
GRANT ALTER ANY PROCEDURE TO GGADMIN;
GRANT ALTER ANY SEQUENCE TO GGADMIN;
GRANT ALTER ANY TRIGGER TO GGADMIN;
GRANT ALTER ANY TYPE TO GGADMIN;
GRANT CREATE DATABASE LINK TO GGADMIN;
GRANT EXECUTE ON dbms_lock TO GGADMIN;
EXEC DBMS_GOLDENGATE_AUTH.GRANT_ADMIN_PRIVILEGE('GGADMIN');
To configure a non-Autonomous, multi-tenant (CDB) as a target for online migration, run the following SQL commands:
-- Connect to CDB and run:
-- Global Names
ALTER SYSTEM SET GLOBAL_NAMES=FALSE;
-- User system for Datapump
ALTER USER SYSTEM IDENTIFIED BY system_pwd ACCOUNT UNLOCK CONTAINER=ALL;
-- Create GoldenGate PDB User
ALTER SESSION SET CONTAINER = v_pdb_name;
CREATE TABLESPACE GG_ADMIN DATAFILE '+DATA/ggadmin_data.dbf' SIZE 100m AUTOEXTEND ON NEXT 100m;
CREATE USER GGADMIN IDENTIFIED BY ggadmin_pwd CONTAINER=CURRENT DEFAULT TABLESPACE GG_ADMIN TEMPORARY TABLESPACE TEMP QUOTA UNLIMITED ON GG_ADMIN;
GRANT CONNECT TO GGADMIN CONTAINER=CURRENT;
GRANT RESOURCE TO GGADMIN CONTAINER=CURRENT;
GRANT CREATE SESSION TO GGADMIN CONTAINER=CURRENT;
GRANT SELECT_CATALOG_ROLE TO GGADMIN CONTAINER=CURRENT;
GRANT DV_GOLDENGATE_ADMIN TO GGADMIN CONTAINER=CURRENT;
GRANT DV_GOLDENGATE_REDO_ACCESS TO GGADMIN CONTAINER=CURRENT;
GRANT ALTER SYSTEM TO GGADMIN CONTAINER=CURRENT;
GRANT ALTER USER TO GGADMIN CONTAINER=CURRENT;
GRANT DATAPUMP_EXP_FULL_DATABASE TO GGADMIN CONTAINER=CURRENT;
GRANT DATAPUMP_IMP_FULL_DATABASE TO GGADMIN CONTAINER=CURRENT;
GRANT SELECT ANY DICTIONARY TO GGADMIN CONTAINER=CURRENT;
GRANT SELECT ANY TRANSACTION TO GGADMIN CONTAINER=CURRENT;
GRANT INSERT ANY TABLE TO GGADMIN CONTAINER=CURRENT;
GRANT UPDATE ANY TABLE TO GGADMIN CONTAINER=CURRENT;
GRANT DELETE ANY TABLE TO GGADMIN CONTAINER=CURRENT;
GRANT LOCK ANY TABLE TO GGADMIN CONTAINER=CURRENT;
GRANT CREATE ANY TABLE TO GGADMIN CONTAINER=CURRENT;
GRANT CREATE ANY INDEX TO GGADMIN CONTAINER=CURRENT;
GRANT CREATE ANY CLUSTER TO GGADMIN CONTAINER=CURRENT;
GRANT CREATE ANY INDEXTYPE TO GGADMIN CONTAINER=CURRENT;
GRANT CREATE ANY OPERATOR TO GGADMIN CONTAINER=CURRENT;
GRANT CREATE ANY PROCEDURE TO GGADMIN CONTAINER=CURRENT;
GRANT CREATE ANY SEQUENCE TO GGADMIN CONTAINER=CURRENT;
GRANT CREATE ANY TRIGGER TO GGADMIN CONTAINER=CURRENT;
GRANT CREATE ANY TYPE TO GGADMIN CONTAINER=CURRENT;
GRANT CREATE ANY VIEW TO GGADMIN CONTAINER=CURRENT;
GRANT ALTER ANY TABLE TO GGADMIN CONTAINER=CURRENT;
GRANT ALTER ANY INDEX TO GGADMIN CONTAINER=CURRENT;
GRANT ALTER ANY CLUSTER TO GGADMIN CONTAINER=CURRENT;
GRANT ALTER ANY INDEXTYPE TO GGADMIN CONTAINER=CURRENT;
GRANT ALTER ANY OPERATOR TO GGADMIN CONTAINER=CURRENT;
GRANT ALTER ANY PROCEDURE TO GGADMIN CONTAINER=CURRENT;
GRANT ALTER ANY SEQUENCE TO GGADMIN CONTAINER=CURRENT;
GRANT ALTER ANY TRIGGER TO GGADMIN CONTAINER=CURRENT;
GRANT ALTER ANY TYPE TO GGADMIN CONTAINER=CURRENT;
GRANT CREATE DATABASE LINK TO GGADMIN CONTAINER=CURRENT;
GRANT EXECUTE ON dbms_lock TO GGADMIN CONTAINER=CURRENT;
EXEC DBMS_GOLDENGATE_AUTH.GRANT_ADMIN_PRIVILEGE('GGADMIN',CONTAINER=>'CURRENT');
Use Case for Preparing the Target Database for Migration 🔗
The following is a sample use case for preparing a target database for migration.
Create an Autonomous Database. If the target Autonomous Database already exists, you can skip this step.
Check the GLOBAL_NAMES parameter. If it's set to true, change it to false.
sqlplus > show parameter global
NAME                                 TYPE    VALUE
------------------------------------ ------- ------------------------------
global_names                         boolean TRUE
sqlplus > alter system set global_names=false;
Create a database administrator user that has full Oracle Data Pump privileges to perform the initial load. A user with the DATAPUMP_IMP_FULL_DATABASE role is required for the import operation at the target database. You select this user as the database administrator when you create database connections to the target database.
See Oracle Data Pump in the Oracle Database Utilities guide for more information.
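As a sketch, the target-side counterpart of the source administrator user could be created as follows; MIGADMIN and its password are hypothetical placeholders.

```sql
-- Hypothetical Data Pump administrator user for the target database.
CREATE USER MIGADMIN IDENTIFIED BY migadmin_pwd;
GRANT CREATE SESSION TO MIGADMIN;
-- Role required for the import operation at the target.
GRANT DATAPUMP_IMP_FULL_DATABASE TO MIGADMIN;
```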
Preparing MySQL Databases for Migration 🔗
If you want to perform online MySQL migrations, prepare your source and target databases for replication as follows:
Source database requirements:
Depending on your source server configuration, the following modes are supported:
Single server: To enable binary logging, see The Binary Log.
The preceding steps apply only when you perform an online migration.
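The single-server binary-logging prerequisite can be sketched as a my.cnf fragment. The values shown are illustrative assumptions; see The Binary Log in the MySQL Reference Manual for the authoritative options.

```
[mysqld]
# Enable binary logging; the log base name is an arbitrary example.
log_bin=mysql-bin
# Each server in a replication topology needs a unique server ID.
server_id=1
# Row-based logging is the format commonly required by replication tooling.
binlog_format=ROW
```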
Target database requirements:
Note
For online migrations, if you define a different replication user, ensure that it has the privileges defined here. If you do not define a separate user, ensure that the main connection user has these privileges.
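As a sketch of a dedicated replication user, assuming the standard MySQL replication privileges apply: the user name, host, and password below are placeholders, and the exact privilege list required by Database Migration may differ from this minimal set.

```sql
-- Hypothetical replication user; adjust host, name, and password.
CREATE USER 'dms_rep'@'%' IDENTIFIED BY 'rep_pwd';
-- Core privileges a replication client typically needs.
GRANT REPLICATION SLAVE, REPLICATION CLIENT ON *.* TO 'dms_rep'@'%';
```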