Locate and uncomment listen_addresses = 'localhost', and change localhost to an asterisk (*):
listen_addresses = '*'
Set the following parameters as follows:
wal_level = logical
max_replication_slots = 1
max_wal_senders = 1
track_commit_timestamp = on
Note
Configure /var/lib/pgsql/data/pg_hba.conf to ensure that client authentication is set to allow connections from an Oracle GoldenGate host. For example, add the following:
#Allow connections from remote hosts
host all all 0.0.0.0/0 md5
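Note that listen_addresses, wal_level, max_replication_slots, max_wal_senders, and track_commit_timestamp all require a database restart to take effect. After restarting PostgreSQL, you can confirm the settings with a query such as the following minimal sketch, run in psql:
-- Confirm that the replication-related settings took effect after the restart
SELECT name, setting
FROM pg_settings
WHERE name IN ('listen_addresses', 'wal_level', 'max_replication_slots',
    'max_wal_senders', 'track_commit_timestamp');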
On the Extracts page, click Add Extract (plus icon), and then complete the fields as follows:
On the Extract Information page:
For Extract type, select Integrated Extract.
For Process Name, enter a name for the Extract, such as ECDC.
Click Next.
On the Extract Options page:
For Source credentials, select Oracle GoldenGate from the Domain dropdown.
Select the source PostgreSQL database from the Alias dropdown.
For Extract Trail Name, enter a two-character trail name, such as C1.
Click Next.
On the Extract Parameters page, replace MAP *.*, TARGET *.*; with the following (a complete parameter file sketch appears at the end of this task):
TABLE SRC_OCIGGLL.*;
Click Create and Run.
You're returned to the Extracts page, where you can observe the Extract starting.
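For reference, the resulting Extract parameter file typically resembles the following sketch. The credential alias placeholder and the OracleGoldenGate domain are assumptions based on this tutorial; confirm them against the file the console actually generates on the Parameters page.
EXTRACT ECDC
USERIDALIAS <source-database-alias> DOMAIN OracleGoldenGate
EXTTRAIL C1
TABLE SRC_OCIGGLL.*;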
Task 4: Create the Distribution Path for Change Data Capture
To create a Distribution Path for Change Data Capture, complete the following:
In the Oracle Cloud console, on the Deployments page, select the target Big Data deployment.
On the deployment details page, click Launch Console. Log in with the admin user details created in Task 1, Step 2.
If using the IAM credential store, proceed to the Create a Distribution Path step. If using the GoldenGate credential store, create a user that the source GoldenGate deployment uses to connect to the target GoldenGate deployment.
In the navigation menu, click User Administration.
Click Add New User (plus icon), complete the fields as follows, and then click Submit:
For Username, enter ggsnet.
For Role, select Operator.
Enter the password twice for verification.
In the source PostgreSQL deployment console, create a Path connection for the user created in the previous step.
In the navigation menu, click Path Connections.
Click Add Path Connection (plus icon), complete the fields as follows, and then click Submit:
For Credential Alias, enter dpuser.
For User ID, enter ggsnet.
For Password, enter the same password used in the previous step.
Click Add Distribution Path.
Complete the Add Path form as follows:
On the Path Information page:
For Path Name, enter a name.
Click Next.
On the Source Options page:
For Source Extract, select the Integrated Extract (ECDC).
For Trail Name, select the Integrated Extract trail file (C1).
Click Next.
On the Target Options page:
For Target, select wss.
For Target Host, enter the target deployment console URL, without the https:// prefix or any trailing slashes (you can find this on the deployment details page; see the example at the end of this task).
For Port Number, enter 443.
For Trail Name, enter C1.
For Target Authentication Method, select UserID Alias.
For Domain, enter the domain name.
For Alias, enter the alias.
Click Create Path and Run.
In the target Big Data deployment console, click Receiver Service, and then review the Receiver Path created.
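As a hypothetical illustration of the Target Host value (the hostname below is invented; yours comes from the deployment details page):
Console URL: https://target-bigdata.deployment.goldengate.us-ashburn-1.oci.oraclecloud.com/
Target Host: target-bigdata.deployment.goldengate.us-ashburn-1.oci.oraclecloud.com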
Task 5: Add a Replicat
In the target Big Data deployment console navigation menu, click Replicats. On the Replicats page, click Add Replicat (plus icon), and then complete the Add Replicat form as follows:
On the Replicat Information page:
For Replicat type, select Parallel or Coordinated Replicat.
For Process Name, enter a name, such as RCDC.
Click Next.
On the Replicat Options page:
For Replicat Trail Name, enter the name of the Trail from Task 3 (C1).
For Target Credentials, select the Domain and Alias for the Google BigQuery connection.
For Available staging locations, select Google Cloud Storage from the dropdown.
For via staging alias, select the Google Cloud Storage connection from the dropdown.
On the Parameter File page, add the following mapping, and then click Next:
MAP *.*, TARGET *.*;
On the Properties File page, configure the required properties as needed. Look for the ones marked as #TODO, and then click Next.
Some properties to consider modifying include:
gg.eventhandler.gcs.bucketMappingTemplate: provide the name of the bucket to use as staging storage (see the sketch at the end of this task).
Click Create and Run.
You're returned to the Replicats page, where you can review the Replicat details.
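For illustration, the staging-related section of the properties file might look like the following sketch; the bucket name is a hypothetical placeholder, and the exact set of #TODO properties in your generated file may differ.
# Hypothetical staging bucket for the GCS event handler (replace with your own bucket name)
gg.eventhandler.gcs.bucketMappingTemplate=my-staging-bucket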
Task 6: Verify Change Data Capture
Perform updates to the source PostgreSQL database to verify replication to Google BigQuery.
Run the following script to perform inserts into the PostgreSQL database:
INSERT INTO src_ociggll.src_city (CITY_ID, CITY, REGION_ID, POPULATION) VALUES (1000, 'Houston', 20, 743113);
INSERT INTO src_ociggll.src_city (CITY_ID, CITY, REGION_ID, POPULATION) VALUES (1001, 'Dallas', 20, 822416);
INSERT INTO src_ociggll.src_city (CITY_ID, CITY, REGION_ID, POPULATION) VALUES (1002, 'San Francisco', 21, 157574);
INSERT INTO src_ociggll.src_city (CITY_ID, CITY, REGION_ID, POPULATION) VALUES (1003, 'Los Angeles', 21, 743878);
INSERT INTO src_ociggll.src_city (CITY_ID, CITY, REGION_ID, POPULATION) VALUES (1004, 'San Diego', 21, 840689);
INSERT INTO src_ociggll.src_city (CITY_ID, CITY, REGION_ID, POPULATION) VALUES (1005, 'Chicago', 23, 616472);
INSERT INTO src_ociggll.src_city (CITY_ID, CITY, REGION_ID, POPULATION) VALUES (1006, 'Memphis', 23, 580075);
INSERT INTO src_ociggll.src_city (CITY_ID, CITY, REGION_ID, POPULATION) VALUES (1007, 'New York City', 22, 124434);
INSERT INTO src_ociggll.src_city (CITY_ID, CITY, REGION_ID, POPULATION) VALUES (1008, 'Boston', 22, 275581);
INSERT INTO src_ociggll.src_city (CITY_ID, CITY, REGION_ID, POPULATION) VALUES (1009, 'Washington D.C.', 22, 688002);
In the source PostgreSQL deployment console, select the Extract name (ECDC), and then click Statistics. Verify that src_ociggll.src_city has 10 inserts.
Note
If the Extract captured no inserts, then restart the ECDC Extract.
In the target Big Data deployment console, select the Replicat name, view its Details, and check Statistics to verify the number of inserts.
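As a quick cross-check on the BigQuery side, you can count the replicated rows with a query like the following; the src_ociggll dataset name is an assumption, so substitute the dataset your BigQuery connection targets.
-- Count the replicated rows in BigQuery (dataset name is an assumption)
SELECT COUNT(*) AS row_count
FROM src_ociggll.src_city
WHERE city_id BETWEEN 1000 AND 1009;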