Launch the PostgreSQL GoldenGate deployment console:
From the Deployments page, select the PostgreSQL deployment to view its
details.
On the PostgreSQL deployment details page, click Launch
console.
On the deployment console sign in page, enter the GoldenGate admin
credentials provided in Task 1, step 1.
Note
Sign in is required if IAM wasn't selected as the credential store
when creating a deployment.
In GoldenGate 23ai, click DB Connections in the left navigation, then
the source PostgreSQL database, and then Trandata.
On the Trandata page, next to TRANDATA Information, click Add
Trandata (plus icon).
In the Trandata panel, for Schema Name, enter
src_ociggll.*, and then click
Submit.
Note
Use the search field to
search for src_ociggll and verify the tables were
added.
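On PostgreSQL, enabling TRANDATA sets each table's replica identity so the change stream carries the row data GoldenGate needs. If you want to double-check from a SQL client rather than the console, a catalog query along these lines (assuming the src_ociggll schema used in this tutorial) lists each table's replica identity:

```sql
-- 'f' = FULL, 'd' = default (primary key), 'i' = index, 'n' = nothing
SELECT n.nspname      AS schema_name,
       c.relname      AS table_name,
       c.relreplident AS replica_identity
FROM pg_class c
JOIN pg_namespace n ON n.oid = c.relnamespace
WHERE n.nspname = 'src_ociggll'
  AND c.relkind = 'r';
```

This is a sketch for verification only; the console's search field described above remains the supported way to confirm.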
Task 3: Create the Extracts 🔗
Add the Change Data Capture Extract:
In the left navigation, click Extracts.
On the Extracts page, click Add Extract (plus icon), and then complete the fields as follows:
On the Extract Information page:
For Extract type, select Change Data
Capture Extract.
For Process Name,
enter a name for the Extract, such as ECDC.
Click Next.
On the Extract Options page:
For Source credentials, select Oracle GoldenGate from the Domain dropdown, and then select the source PostgreSQL database from the Alias dropdown.
For Extract Trail Name, enter a two-character
trail name, such as C1.
Click Register, and then click
Next.
On the Extract Parameters page,
add:
TABLE src_ociggll.*;
Click Create and Run.
Add the Initial Load Extract:
On the Extracts page, click Add Extract, and then complete the Add Extract form as follows:
On the Extract
Information page:
For Extract type, select Initial Load Extract.
For Process Name,
enter a name, such as EIL.
Click Next.
On the Extract Options
page:
For Source credentials, select
Oracle GoldenGate from the
Domain dropdown.
Select the PostgreSQL database from the Alias dropdown.
For Extract Trail Name, enter a
two-character trail name, such as
I1.
Click Next.
On the Extract Parameters page, replace TABLE
*.* with the
following:
TABLE src_ociggll.*;
Click Create and Run.
You return to the Extracts
page, where you can observe the Extract
starting.
Task 4: Create the Distribution Path for Initial Load Extract 🔗
To create a Distribution Path for Initial Load Extract:
In the Oracle Cloud console, on the Deployments page, select the target
Big Data deployment.
On the deployment details page, click Launch Console. Sign in with the admin user details created in Task 1, step 2.
If using the IAM credential store, proceed to the Create a Distribution Path step. If using the GoldenGate credential store, create a user that the source GoldenGate deployment uses to connect to the target GoldenGate deployment.
In the navigation menu, click User Administration.
Click Add New User (plus icon), complete the fields as
follows, and then click Submit:
For Username, enter
ggsnet.
For Role, select
Operator.
Enter the password twice for verification.
In the source PostgreSQL deployment console, create a Path Connection
for the user created in the previous step.
In the navigation menu, click Path Connections.
Click Add Path Connection (plus icon), complete the fields
as follows, and then click Submit:
For Credential Alias, enter dpuser.
For User ID, enter ggsnet.
For Password, enter the same password used in the previous step.
Create a Distribution Path.
In the service menu bar, click Distribution Service,
and then click Add Distribution Path (plus icon).
Complete the Add Path form as follows:
On the Path Information page:
For Path Name, enter a name for this
path.
Click Next.
On the Source Options page:
For Source Extract, leave
blank.
For Trail Name, enter the Initial Load Extract trail name
(I1).
Click Next.
On the Target Options page:
For Target, select
wss.
For Target Host, enter the target
deployment URL, without the https:// or any trailing slashes.
For Port Number, enter
443.
For Trail Name, enter
I1.
For Target Authentication Method, select
UserID Alias.
For Domain, enter the domain name created
in the previous step.
For Alias, enter the alias created in
the previous step (dpuser).
Click Next.
Click Create and Run.
You return to the Distribution Service page
where you can review the path created.
In the target Big Data deployment console, review the Receiver Path created as a result of the Distribution path:
Click Receiver Service.
Review the Receiver Path details.
Task 5: Add the Replicat for Initial Load 🔗
In the target Big Data deployment console, add the Initial Load Replicat.
In the navigation menu, click Replicats, and then click Add
Replicat (plus icon).
On the Replicats page, complete the Add Replicat form as follows:
On the Replication Information page:
For Replicat
type, select Coordinated Replicat.
For Process Name, enter a name, such
as RIL.
Click Next.
On the Replicat Options page:
For Replicat Trail Name, enter the name of the Trail from Task 3 (I1).
For Target, select Snowflake.
For Target Credentials, select the
Domain and Alias for the Snowflake connection.
For Available aliases, select an
alias from the dropdown, such as Snowflake.
Select Streaming.
Click Next.
On the Parameter File page, add the mapping between the source and target schemas.
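The mapping text itself doesn't appear above. Judging from the verification queries later in this task, which read from the SRCMIRROR_OCIGGLL schema, it likely takes the following form (a sketch, not the tutorial's exact parameter):

```
MAP src_ociggll.*, TARGET srcmirror_ociggll.*;
```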
On the Properties page, review the properties, and
add jvm.bootoptions=
-Djdk.lang.processReaperUseDefaultStackSize=true.
Click Create and Run.
You return to the Replicats page, where
you can review the Replicat details.
To verify the Initial Load, connect to the Snowflake database and run the following queries:
select * from SRCMIRROR_OCIGGLL.SRC_CITY;
select * from SRCMIRROR_OCIGGLL.SRC_CUSTOMER;
The output should return the data that was loaded into the
target database tables as a result of the Initial
Load.
Task 6: Create the Distribution Path for Change Data Capture 🔗
To create a Distribution Path for Change Data Capture:
In the source PostgreSQL deployment console, click Distribution
Service.
Click Add Distribution Path.
Complete the Add Path form as follows:
On the Path Information page:
For Path Name, enter a name.
Click Next.
On the Source Options page:
For Source Extract, select the Change Data Capture Extract (ECDC).
For Trail Name, select the Change Data Capture Extract trail name (C1).
Click Next.
On the Target Options page:
For Target, select
wss.
For Target Host, enter the target deployment console URL (you can find this on the deployment details page), without the https:// prefix or any trailing slashes.
For Port Number, enter
443.
For Trail Name, enter
C1.
For Target Authentication Method, select
UserID Alias.
For Domain, enter the domain name.
For Alias, enter the alias.
Click Create Path and Run.
In the target Big Data deployment console, click Receiver
Service, and then review the Receiver
path created.
Task 7: Add a Replicat
for Change Data Capture 🔗
Perform updates to the source PostgreSQL database to verify replication
to Snowflake.
Add the Replicat.
In the target Big Data deployment console, click Administration Service, and then in the navigation
menu, click Replicats.
On the Replicats page, click Add Replicat (plus icon), and then complete
the Add Replicat form as follows:
On the Replicat Information page:
For Replicat
type, select Classic or Coordinated.
For Process Name,
enter a name, such as RCDC.
Click Next.
On the Replicat Options
page:
For Replicat
Trail Name, enter the name of the Trail from
Task 3 (C1).
For Target, select Snowflake.
For Target
Credentials, select the Domain and Alias
for the Snowflake
connection.
Select Streaming.
On the Parameter File page, add the mapping between the source and target schemas, and then click Next.
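As in Task 5, the mapping text doesn't appear above; assuming the same source and target schemas, it likely looks like this sketch:

```
MAP src_ociggll.*, TARGET srcmirror_ociggll.*;
```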
You return to the Replicats page, where
you can review the Replicat details.
Verify Change Data Capture:
Run the following script to perform inserts into the PostgreSQL database:
INSERT INTO src_ociggll.src_city (CITY_ID, CITY, REGION_ID, POPULATION) VALUES (1000, 'Houston', 20, 743113);
INSERT INTO src_ociggll.src_city (CITY_ID, CITY, REGION_ID, POPULATION) VALUES (1001, 'Dallas', 20, 822416);
INSERT INTO src_ociggll.src_city (CITY_ID, CITY, REGION_ID, POPULATION) VALUES (1002, 'San Francisco', 21, 157574);
INSERT INTO src_ociggll.src_city (CITY_ID, CITY, REGION_ID, POPULATION) VALUES (1003, 'Los Angeles', 21, 743878);
INSERT INTO src_ociggll.src_city (CITY_ID, CITY, REGION_ID, POPULATION) VALUES (1004, 'San Diego', 21, 840689);
INSERT INTO src_ociggll.src_city (CITY_ID, CITY, REGION_ID, POPULATION) VALUES (1005, 'Chicago', 23, 616472);
INSERT INTO src_ociggll.src_city (CITY_ID, CITY, REGION_ID, POPULATION) VALUES (1006, 'Memphis', 23, 580075);
INSERT INTO src_ociggll.src_city (CITY_ID, CITY, REGION_ID, POPULATION) VALUES (1007, 'New York City', 22, 124434);
INSERT INTO src_ociggll.src_city (CITY_ID, CITY, REGION_ID, POPULATION) VALUES (1008, 'Boston', 22, 275581);
INSERT INTO src_ociggll.src_city (CITY_ID, CITY, REGION_ID, POPULATION) VALUES (1009, 'Washington D.C.', 22, 688002);
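Before checking GoldenGate statistics, you can confirm the script's rows reached the source table with a count over the CITY_ID values it inserts (1000 through 1009):

```sql
-- Expect 10 if every insert from the script above succeeded
SELECT COUNT(*) AS new_rows
FROM src_ociggll.src_city
WHERE city_id BETWEEN 1000 AND 1009;
```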
In the source PostgreSQL deployment console, select the ECDC Extract, and then click Statistics. Verify that src_ociggll.src_city shows 10 inserts.
Note
If the Extract captured no inserts, then restart the ECDC Extract.
In the target Big Data deployment console, select the RCDC Replicat, and then review its Details and Statistics to verify the number of inserts.
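To close the loop, you can query the same rows on the Snowflake side, using the target schema from the earlier verification step:

```sql
-- Expect the 10 cities inserted on the PostgreSQL source
SELECT CITY_ID, CITY, REGION_ID, POPULATION
FROM SRCMIRROR_OCIGGLL.SRC_CITY
WHERE CITY_ID BETWEEN 1000 AND 1009
ORDER BY CITY_ID;
```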