Ensure that
you remove the SOURCEDB parameter in front
of USERIDALIAS before you move
on.
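For example, the generated parameter file's connection line might look similar to the following (the alias and domain names here are illustrative placeholders; yours will match what you created earlier):
SOURCEDB USERIDALIAS PostgresSource DOMAIN OracleGoldenGate
After removing SOURCEDB, the line reads:
USERIDALIAS PostgresSource DOMAIN OracleGoldenGate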
Click Create and Run.
You return to the Extracts
page, where you can observe the Extract
starting.
Task 4: Create the Distribution Path for Initial Load Extract
To create a Distribution Path for Initial Load Extract:
In the Oracle Cloud console, on the Deployments page, select the target
Big Data deployment.
On the deployment details page, click Launch Console. Log in with the
admin user details created in Task 1, step 2.
If using the IAM credential store, proceed to the Create a Distribution
Path step. If using the GoldenGate credential store, create a user that the
source GoldenGate deployment uses to connect to the target GoldenGate deployment.
In the navigation menu, click User Administration.
Click Add New User (plus icon), complete the fields as
follows, and then click Submit:
For Username, enter
ggsnet.
For Role, select
Operator.
Enter the password twice for verification.
In the source PostgreSQL deployment console, create a Path Connection
for the user created in the previous step.
In the navigation menu, click Path Connections.
Click Add Path Connection (plus icon), complete the fields
as follows, and then click Submit:
For Credential Alias, enter dpuser.
For User ID, enter ggsnet.
For Password, enter the same password used in the previous
step.
Create a Distribution Path.
In the service menu bar, click Distribution Service,
and then click Add Distribution Path (plus icon).
Complete the Add Path form as follows:
On the Path Information page:
For Path Name, enter a name for this
path.
Click Next.
On the Source Options page:
For Source Extract, leave
blank.
For Trail Name, enter the Initial Load Extract trail name
(I1).
Click Next.
On the Target Options page:
For Target, select
wss.
For Target Host, enter the target
deployment URL, without the https:// or any trailing slashes.
For Port Number, enter
443.
For Trail Name, enter
I1.
For Target Authentication Method, select
UserID Alias.
For Domain, enter the domain name created
in the previous step.
For Alias, enter the alias created in
the previous step (dpuser).
Click Next.
Click Create and Run.
You return to the Distribution Service page
where you can review the path created.
In the target Big Data deployment console, review the Receiver Path created as a result of the Distribution path:
Click Receiver Service.
Review the Receiver Path details.
Task 5: Add the Replicat for Initial Load
In the target Big Data deployment console, add the Initial Load Replicat.
In the navigation menu, click Replicats, and then click Add
Replicat (plus icon).
Complete the Add Replicat fields as follows:
On the Replicat Information page:
For Replicat
type, select Classic or
Coordinated.
For Process Name, enter a name, such
as RIL.
Click Next.
On the Replicat Options page:
For Replicat Trail Name, enter the
name of the Trail from Task 2
(I1).
For Target Credentials, select the
Domain and Alias for the Snowflake connection.
For Available aliases, select an
alias from the dropdown, such as Snowflake.
(Optional) Enable external storage
to select an available staging location from the
dropdown.
Click Next.
On the Parameter File page, add the following mapping (the schema
names follow this quickstart's examples; adjust them if yours differ):
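MAP SRC_OCIGGLL.*, TARGET SRCMIRROR_OCIGGLL.*;
This maps every table in the source src_ociggll schema to the table of the same name in the target SRCMIRROR_OCIGGLL schema.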
On the Properties page, review the properties, and then click
Create and Run.
You return to the Replicats page, where
you can review the Replicat details.
To verify the Initial Load, connect to the Snowflake database and run the following queries:
select * from SRCMIRROR_OCIGGLL.SRC_CITY;
select * from SRCMIRROR_OCIGGLL.SRC_CUSTOMER;
The output should return the data that was loaded into the
target database tables as a result of the Initial
Load.
Task 6: Create the Distribution Path for Change Data Capture
To create a Distribution Path for Change Data Capture:
In the source PostgreSQL deployment console, click Distribution
Service.
Click Add Distribution Path.
Complete the Add Path form as follows:
On the Path Information page:
For Path Name, enter a name.
Click Next.
On the Source Options page:
For Source Extract, select the change data capture Extract (ECDC).
For Trail Name, select the change data capture Extract trail file (C1).
Click Next.
On the Target Options page:
For Target, select
wss.
For Target Host, enter the target deployment
console URL (you can find this on the deployment details page), without the
https:// or any trailing slashes.
For Port Number, enter
443.
For Trail Name, enter
C1.
For Target Authentication Method, select
UserID Alias.
For Domain, enter the domain name.
For Alias, enter the alias.
Click Create Path and Run.
In the target Big Data deployment console, click Receiver
Service, and then review the Receiver
path created.
Task 7: Add a Replicat for Change Data Capture
To add a Replicat for Change Data Capture and verify replication to Snowflake:
Add the Replicat.
In the target Big Data deployment console, click Administration Service, and then in the navigation
menu, click Replicats.
On the Replicats page, click Add Replicat (plus icon), and then complete
the Add Replicat form as follows:
On the Replicat Information page:
For Replicat
type, select Classic or Coordinated.
For Process Name,
enter a name, such as RCDC.
Click Next.
On the Replicat Options
page:
For Replicat
Trail Name, enter the name of the Trail from
Task 3 (C1).
For Target
Credentials, select the Domain and Alias
for the Snowflake
connection.
(Optional) Enable external storage
to select an available staging location from the
dropdown.
On the Parameter File page, add the following mapping, and then
click Next (as in Task 5, the schema names follow this quickstart's
examples):
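MAP SRC_OCIGGLL.*, TARGET SRCMIRROR_OCIGGLL.*;
This is the same mapping used for the Initial Load Replicat in Task 5.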
On the Properties page, review the properties, and then click
Create and Run.
You return to the Replicats page, where
you can review the Replicat details.
Verify Change Data Capture:
Perform updates to the source PostgreSQL database to verify replication
to Snowflake. Run the following script to
perform inserts into the PostgreSQL database:
Insert into src_ociggll.src_city (CITY_ID,CITY,REGION_ID,POPULATION) values (1000,'Houston',20,743113);
Insert into src_ociggll.src_city (CITY_ID,CITY,REGION_ID,POPULATION) values (1001,'Dallas',20,822416);
Insert into src_ociggll.src_city (CITY_ID,CITY,REGION_ID,POPULATION) values (1002,'San Francisco',21,157574);
Insert into src_ociggll.src_city (CITY_ID,CITY,REGION_ID,POPULATION) values (1003,'Los Angeles',21,743878);
Insert into src_ociggll.src_city (CITY_ID,CITY,REGION_ID,POPULATION) values (1004,'San Diego',21,840689);
Insert into src_ociggll.src_city (CITY_ID,CITY,REGION_ID,POPULATION) values (1005,'Chicago',23,616472);
Insert into src_ociggll.src_city (CITY_ID,CITY,REGION_ID,POPULATION) values (1006,'Memphis',23,580075);
Insert into src_ociggll.src_city (CITY_ID,CITY,REGION_ID,POPULATION) values (1007,'New York City',22,124434);
Insert into src_ociggll.src_city (CITY_ID,CITY,REGION_ID,POPULATION) values (1008,'Boston',22,275581);
Insert into src_ociggll.src_city (CITY_ID,CITY,REGION_ID,POPULATION) values (1009,'Washington D.C.',22,688002);
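If your SQL client does not autocommit, commit the transaction so that the Extract can capture the changes:
commit;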
In the source PostgreSQL deployment console, select the
ECDC Extract, and then click Statistics. Verify that
src_ociggll.src_city has 10 inserts.
Note
If the Extract captured no inserts, then restart
the ECDC Extract.
In the target Big Data deployment console, select the
RCDC, review its Details and
Statistics to verify the number of Inserts.
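To confirm the changes end to end, you can also query the target table in Snowflake for the newly inserted rows (the CITY_ID values come from the insert script above):
select * from SRCMIRROR_OCIGGLL.SRC_CITY where CITY_ID between 1000 and 1009;
The query should return the ten cities inserted on the source.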