Manage VM Clusters
Learn how to manage your VM clusters on Exadata Database Service on Cloud@Customer.
- About Managing VM Clusters on Exadata Database Service on Cloud@Customer
  The VM cluster provides a link between your Exadata Database Service on Cloud@Customer infrastructure and the Oracle Databases you deploy.
- Overview of VM Cluster Node Subsetting
  VM Cluster Node Subsetting enables you to allocate a subset of database servers to new and existing VM clusters, giving you maximum flexibility in the allocation of compute (CPU, memory, local storage) resources.
- Introduction to Scale Up or Scale Down Operations
  With the Multiple VMs per Exadata system (MultiVM) feature, you can scale up or scale down your VM cluster resources.
- Using the Console to Manage VM Clusters on Exadata Cloud@Customer
  Learn how to use the console to create, edit, and manage your VM clusters on Oracle Exadata Cloud@Customer.
- Using the API to Manage Exadata Cloud@Customer VM Clusters
  Review the list of API calls to manage your Exadata Database Service on Cloud@Customer VM cluster networks and VM clusters.
Parent topic: How-to Guides
About Managing VM Clusters on Exadata Database Service on Cloud@Customer
The VM cluster provides a link between your Exadata Database Service on Cloud@Customer infrastructure and Oracle Databases you deploy.
The VM cluster contains an installation of Oracle Clusterware, which supports databases in the cluster. In the VM cluster definition, you also specify the number of enabled CPU cores, which determines the amount of CPU resources that are available to your databases.
Before you can create any databases on your Exadata Cloud@Customer infrastructure, you must create a VM cluster network, and you must associate it with a VM cluster.
Avoid entering confidential information when assigning descriptions, tags, or friendly names to your cloud resources through the Oracle Cloud Infrastructure Console, API, or CLI.
Parent topic: Manage VM Clusters
Overview of VM Cluster Node Subsetting
VM Cluster Node Subsetting enables you to allocate a subset of database servers to new and existing VM clusters to enable maximum flexibility in the allocation of compute (CPU, memory, local storage) resources.
- Create a smaller VM cluster to host databases that have low resource and scalability requirements or to host a smaller number of databases that require isolation from the rest of the workload.
- Expand or shrink an existing VM cluster by adding or removing nodes to ensure optimal utilization of available resources.
- VM Cluster Node Subsetting capability is available for new and existing VM clusters in Gen2 Exadata Cloud@Customer service.
- All VMs across a VM cluster will have the same resource allocation per VM irrespective of whether the VM was created during cluster provisioning or added later by extending an existing VM cluster.
- Every VM cluster must have a minimum of 2 VMs, even with the node subsetting capability. Clusters with a single VM are not currently supported.
- Each VM cluster network is pre-provisioned with IP addresses for every DB Server in the infrastructure. One cluster network can only be used by a single VM cluster and is validated to ensure the IP addresses do not overlap with other cluster networks. Adding or removing VMs to the cluster does not impact the pre-provisioned IP addresses assigned to each DB server in the associated cluster network.
- You can host a maximum of 8 VMs on X8M and above generation of DB Servers. X7 and X8 generations can only support a maximum of 6 and 5 VMs per DB Server, respectively.
- Exadata Infrastructures with X8M and above generation of DB Servers can support a maximum of 16 VM clusters across all DB Servers. X7 and X8 generation Exadata Infrastructure DB Servers can only support a maximum of 12 and 10 VM clusters, respectively. The maximum number of clusters across the infrastructure depends on the resources available per DB server and is subject to the per-DB-Server maximum VM limit.
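The generation limits above can be sketched as a small check. This is an illustrative helper only; the function name and table structure are assumptions, not part of the cloud tooling, though the limit values come from this section.

```python
# Hypothetical helper illustrating the documented per-DB-server VM limits
# and per-infrastructure VM cluster limits; not part of the cloud tooling.
VM_LIMITS = {
    # generation: (max VMs per DB server, max VM clusters per infrastructure)
    "X7":  (6, 12),
    "X8":  (5, 10),
    "X8M": (8, 16),  # X8M and later generations
}

def can_add_vm(generation: str, vms_on_server: int, clusters_on_infra: int) -> bool:
    """Return True if one more VM (in a new cluster) still fits within the limits."""
    max_vms, max_clusters = VM_LIMITS[generation]
    return vms_on_server < max_vms and clusters_on_infra < max_clusters
```

Remember that the real ceiling may be lower: it also depends on the compute resources still available on each DB server.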
Introduction to Scale Up or Scale Down Operations
With the Multiple VMs per Exadata system (MultiVM) feature release, you can scale up or scale down your VM cluster resources.
- Scaling Up or Scaling Down the VM Cluster Resources
  You can scale up or scale down the memory, local disk size (/u02), ASM storage, and CPUs.
- Calculating the Minimum Required Memory
- Calculating the ASM Storage
- Estimating How Much Local Storage You Can Provision to Your VMs
- Scaling Local Storage Down
Parent topic: Manage VM Clusters
Scaling Up or Scaling Down the VM Cluster Resources
You can scale up or scale down the memory, local disk size (/u02), ASM storage, and CPUs.
Oracle doesn't stop billing when a VM or VM Cluster is stopped. To stop billing for a VM Cluster, lower the OCPU count to zero.
Scaling these resources up or down requires thorough auditing of existing usage and capacity management by the customer DB administrator. Review the existing usage to avoid failures during or after a scale down operation. When scaling up, consider how much of these resources will be left for the next VM cluster you are planning to create. Exadata Cloud@Customer cloud tooling calculates the current usage of memory, local disk, and ASM storage in the VM cluster, adds headroom to arrive at a minimum value below which you cannot scale down, and requires that you specify a value at or above this minimum.
- When creating or scaling a VM Cluster, setting the number of OCPUs to zero shuts down the VM Cluster and eliminates any billing for it, but the hypervisor still reserves the minimum 2 OCPUs for each VM. These reserved OCPUs cannot be allocated to any other VMs, even though the VM to which they are allocated is shut down. The Control Plane does not account for reserved OCPUs when showing the maximum available OCPU, so account for these reserved OCPUs when performing any subsequent scaling operations to ensure they can acquire enough OCPUs to complete successfully.
- For memory and /u02 scale up or scale down operations, if the difference between the current value and the new value is less than 2%, then no change is made to that VM. This is because a memory change involves rebooting the VM, and a /u02 change involves bringing down the Oracle Grid Infrastructure stack and unmounting /u02. Production customers will not resize for such a small increase or decrease, so such requests are a no-op.
- You can scale the VM Cluster resources even if any of the DB servers in the VM Cluster are down:
  - If a DB server is down when scaling is performed, the VMs on that server are not automatically scaled to the new OCPU value when the DB server and its VMs come back online. It is your responsibility to ensure that all the VMs in the cluster have the same OCPU values.
  - Even if the DB server is down, billing does not stop for the VM Cluster that has VMs on that DB server.
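The reserved-OCPU accounting described above can be sketched as follows. This is a minimal illustration under our own assumptions: the function name is hypothetical, and it models only the case where the Console's "maximum available OCPU" figure omits the 2-OCPU reservations held by stopped (zero-OCPU) VMs.

```python
# Sketch of the reserved-OCPU accounting: the Console's available-OCPU figure
# does not subtract the 2 OCPUs the hypervisor reserves per VM, so subtract
# the reservations held by stopped VMs yourself before a scale-up.
RESERVED_OCPU_PER_VM = 2  # minimum hypervisor reservation per VM, even when stopped

def effective_available_ocpus(console_available: int, stopped_vms: int) -> int:
    """OCPUs a scale-up can actually acquire, after hypervisor reservations."""
    return console_available - stopped_vms * RESERVED_OCPU_PER_VM
```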
Parent topic: Introduction to Scale Up or Scale Down Operations
Calculating the Minimum Required Memory
Cloud tooling provides dbaasapi to identify the minimum required memory. As the root user, run dbaasapi and pass a JSON file with sample content as follows. The only parameter that you need to update in input.json is new_mem_size, which is the new memory size to which you want the VM cluster to be resized.
cat input.json
{
  "object": "db",
  "action": "get",
  "operation": "precheck_memory_resize",
  "params": {
    "dbname": "grid",
    "new_mem_size": "30 gb",
    "infofile": "/tmp/result.json"
  },
  "outputfile": "/tmp/info.out",
  "FLAGS": ""
}
dbaasapi -i input.json
cat /tmp/result.json
{
  "is_new_mem_sz_allowed": 0,
  "min_req_mem": 167
}
The result indicates that 30 GB is not sufficient: the minimum required memory is 167 GB, which is the lowest value you can reshape down to. To be safe, choose a value greater than 167 GB, as there could be fluctuations of that order between this calculation and the next reshape attempt.
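Assuming the precheck wrote /tmp/result.json as shown above, a small script can pick a resize target with headroom over the reported minimum. The helper name and the 16 GB margin are illustrative assumptions, not cloud tooling.

```python
import json

# Minimal sketch: read the infofile written by the dbaasapi memory precheck
# and return a resize target with a safety margin above the reported minimum.
def safe_memory_floor(result_path: str, margin_gb: int = 16) -> int:
    """Lowest memory (GB) to request, leaving headroom over min_req_mem."""
    with open(result_path) as f:
        result = json.load(f)
    return result["min_req_mem"] + margin_gb
```

For the sample output above, this suggests requesting no less than 167 + 16 = 183 GB.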
Parent topic: Introduction to Scale Up or Scale Down Operations
Calculating the ASM Storage
Use the following formula to calculate the minimum required ASM storage:
- For each disk group (for example, DATA, RECO), note the total size and free size by running the asmcmd lsdg command on any Guest VM of the VM cluster.
- Calculate the used size as (Total size - Free size) / 3 for each disk group. The division by 3 is used because the disk groups are triple mirrored.
- The DATA:RECO ratio is:
  - 80:20 if the Local Backups option was NOT selected in the user interface.
  - 40:60 if the Local Backups option was selected in the user interface.
- Ensure that the new total size as given in the user interface passes the following conditions:
  - Used size for DATA * 1.15 <= (New Total size * DATA %)
  - Used size for RECO * 1.15 <= (New Total size * RECO %)
Example 5-3 Calculating the ASM Storage
- Run the asmcmd lsdg command in the Guest VM:
  - Without SPARSE:
    /u01/app/19.0.0.0/grid/bin/asmcmd lsdg
    State    Type  Rebal  Sector  Logical_Sector  Block  AU       Total_MB  Free_MB   Req_mir_free_MB  Usable_file_MB  Offline_disks  Voting_files  Name
    MOUNTED  HIGH  N      512     512             4096   4194304  12591936  10426224  1399104          3009040         0              Y             DATAC5/
    MOUNTED  HIGH  N      512     512             4096   4194304  3135456   3036336   348384           895984          0              N             RECOC5/
  - With SPARSE:
    /u01/app/19.0.0.0/grid/bin/asmcmd lsdg
    State    Type  Rebal  Sector  Logical_Sector  Block  AU       Total_MB  Free_MB   Req_mir_free_MB  Usable_file_MB  Offline_disks  Voting_files  Name
    MOUNTED  HIGH  N      512     512             4096   4194304  12591936  10426224  1399104          3009040         0              Y             DATAC5/
    MOUNTED  HIGH  N      512     512             4096   4194304  3135456   3036336   348384           895984          0              N             RECOC5/
    MOUNTED  HIGH  N      512     512             4096   4194304  31354560  31354500  3483840          8959840         0              N             SPRC5/
  Note: The listed values of all attributes for the SPARSE disk group (SPRC5) present the virtual size. In Exadata DB Systems and Exadata Cloud@Customer, we use a ratio of 1:10 for physicalSize:virtualSize. Hence, for all purposes of our calculation, we must use 1/10th of the values displayed above for those attributes in the case of SPARSE.
- Used size for a disk group = (Total_MB - Free_MB) / 3
  - Without SPARSE:
    Used size for DATAC5 = (12591936 - 10426224) / 3 = 704.98 GB
    Used size for RECOC5 = (3135456 - 3036336) / 3 = 32.26 GB
  - With SPARSE:
    Used size for DATAC5 = (12591936 - 10426224) / 3 ~= 704.98 GB
    Used size for RECOC5 = (3135456 - 3036336) / 3 ~= 32.26 GB
    Used size for SPRC5 = (1/10 * (31354560 - 31354500)) / 3 ~= 0 GB
- Storage distribution among disk groups:
  - Without SPARSE: DATA:RECO ratio is 80:20 in this example.
  - With SPARSE: DATA:RECO:SPARSE ratio is 60:20:20 in this example.
- The new requested size should pass the following conditions:
  - Without SPARSE (for example, 5 TB in the user interface):
    5 TB = 5120 GB; 5120 * 0.8 = 4096 GB; 5120 * 0.2 = 1024 GB
    For DATA: (704.98 * 1.15) <= 4096 GB
    For RECO: (32.26 * 1.15) <= 1024 GB
  - With SPARSE (for example, 8 TB in the user interface):
    8 TB = 8192 GB; 8192 * 0.6 = 4915 GB; 8192 * 0.2 = 1638 GB; 8192 * 0.2 = 1638 GB
    For DATA: (704.98 * 1.15) <= 4915 GB
    For RECO: (32.26 * 1.15) <= 1638 GB
    For SPR: (0 * 1.15) <= 1638 GB
This resize will pass the precheck. If the new size does not meet the above conditions, the precheck will fail.
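The precheck conditions above can be sketched as a short calculation. The helper names are illustrative; the inputs are the Total_MB and Free_MB columns from asmcmd lsdg, and the ratios are the split for your configuration (80:20 without local backups, 40:60 with them, or 60:20:20 with SPARSE).

```python
# Sketch of the ASM storage scale-down precheck described above.
def used_gb(total_mb: int, free_mb: int, sparse: bool = False) -> float:
    """Used size per disk group: (total - free) / 3 for triple mirroring.
    SPARSE disk group values are virtual, so take 1/10th of them first."""
    factor = 0.1 if sparse else 1.0
    return factor * (total_mb - free_mb) / 3 / 1024  # MB -> GB

def precheck_passes(new_total_gb: float, used: dict, ratios: dict) -> bool:
    """`used` and `ratios` are keyed by disk group role,
    e.g. {'DATA': 704.98, 'RECO': 32.26} and {'DATA': 0.8, 'RECO': 0.2}."""
    return all(used[dg] * 1.15 <= new_total_gb * ratios[dg] for dg in used)
```

For the example above, `precheck_passes(5120, {'DATA': 704.98, 'RECO': 32.26}, {'DATA': 0.8, 'RECO': 0.2})` reproduces the 5 TB check.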
Parent topic: Introduction to Scale Up or Scale Down Operations
Estimating How Much Local Storage You Can Provision to Your VMs
X8-2 and X7-2 Systems
You specify how much space is provisioned from local storage to each VM. This space is mounted at /u02 and is used primarily for Oracle Database homes. The amount of local storage available varies with the number of virtual machines running on each physical node, as each VM requires a fixed amount of storage for the root file systems, GI homes, and diagnostic log space. Refer to the tables below to see the maximum amount of space available to provision to local storage (/u02) across all VMs.
- Total space available for VM images (X7 All Systems): 1237 GB
- Total space available for VM images (X8 All Systems): 1037 GB
- Fixed storage per VM: 137 GB
Table 5-5 Space allocated to VMs
#VMs | Fixed Storage All VMs (GB) | X8-2 Space for All /u02 (GB) | X7-2 Space for All /u02 (GB) |
---|---|---|---|
1 | 137 | 900 | 1100 |
2 | 274 | 763 | 963 |
3 | 411 | 626 | 826 |
4 | 548 | 489 | 689 |
5 | 685 | 352 | 552 |
6 | 822 | N/A | 415 |
For an X8-2, to get the maximum space available for the nth VM, take the number in the table above and subtract anything previously allocated for /u02 to the other VMs. So if you allocated 60 GB to VM1, 70 GB to VM2, 80 GB to VM3, and 60 GB to VM4 (total 270 GB) in an X8-2, the maximum available for VM5 would be 352 - 270 = 82 GB.
In ExaC@C Gen 2, a minimum of 60 GB per /u02 is required, so with that minimum size there is a maximum of 5 VMs in X8-2 and 6 VMs in X7-2.
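The X7-2/X8-2 arithmetic above can be sketched as follows. The function name is illustrative; the totals and the fixed 137 GB per-VM overhead come from the figures in this section.

```python
# Sketch of the X7-2/X8-2 local storage arithmetic for /u02 allocation.
TOTAL_VM_IMAGE_GB = {"X7-2": 1237, "X8-2": 1037}
FIXED_PER_VM_GB = 137  # root file systems, GI homes, diagnostic log space

def max_u02_for_next_vm(system: str, num_vms: int, allocated_u02_gb: int) -> int:
    """Max /u02 (GB) available for VM number `num_vms`, given what the
    other VMs have already been allocated."""
    pool = TOTAL_VM_IMAGE_GB[system] - FIXED_PER_VM_GB * num_vms
    return pool - allocated_u02_gb
```

This reproduces the worked example: `max_u02_for_next_vm("X8-2", 5, 270)` gives 82 GB for VM5.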
X8M-2 Systems
The maximum number of VMs for an X8M-2 will be 8, regardless of whether there is local disk space or other resources available.
For an X8M-2 system, the fixed consumption per VM is 160 GB.
Total space available to all VMs on an ExaC@C X8M database node is 2500 GB. Although there is 2500 GB per database node, with a single VM you can allocate a maximum of 900 GB of local storage. Similarly, for the second VM, there is 1800 GB of local storage available, given the maximum limit of 900 GB per VM. With the third VM, the amount of space available is 2500 - (160 GB * 3) = 2020 GB, and so on for 4 or more VMs.
- Total space available for VM images (X8M Base System): 1237 GB
- Total space available for VM images (X8M Qtr/Half/Full Racks): 2500 GB
- Fixed storage per VM: 160 GB
Table 5-6 Space allocated to VMs
#VMs | Fixed Storage All VMs (GB) | X8M-2 Base System Space for All /u02 (GB) | X8M-2 Quarter/Half/Full Rack Space for All /u02 (GB) |
---|---|---|---|
1 | 160 | 900 | 900* |
2 | 320 | 740 | 1800* |
3 | 480 | 580 | 2020 |
4 | 640 | 420 | 1860 |
5 | 800 | N/A | 1700 |
6 | 960 | N/A | 1540 |
7 | 1120 | N/A | 1380 |
8 | 1280 | N/A | 1220 |
*Space is limited by 900 GB max per VM
For an X8M-2, to get the maximum space available for the nth VM, take the number in the table above and subtract anything previously allocated for /u02 to the other VMs. So, for a quarter or larger rack, if you allocated 60 GB to VM1, 70 GB to VM2, 80 GB to VM3, and 60 GB to VM4 (total 270 GB) in an X8M-2, the maximum available for VM5 would be 1700 - 270 = 1430 GB. However, the per-VM maximum is 900 GB, so that takes precedence and limits VM5 to 900 GB.
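The same arithmetic for X8M-2 quarter/half/full racks, including the 900 GB per-VM cap called out above, can be sketched as follows; the function name is illustrative and the constants come from this section.

```python
# Sketch of the X8M-2 quarter/half/full rack /u02 arithmetic, with the
# additional 900 GB per-VM cap.
X8M_TOTAL_GB = 2500
X8M_FIXED_PER_VM_GB = 160
PER_VM_U02_CAP_GB = 900

def x8m_max_u02_for_next_vm(num_vms: int, allocated_u02_gb: int) -> int:
    """Max /u02 (GB) for VM number `num_vms`, capped at 900 GB per VM."""
    pool = X8M_TOTAL_GB - X8M_FIXED_PER_VM_GB * num_vms
    return min(pool - allocated_u02_gb, PER_VM_U02_CAP_GB)
```

This reproduces the worked example: `x8m_max_u02_for_next_vm(5, 270)` returns 900, the capped value for VM5.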
X9M-2 Systems
- Total Available for VM Images (Base System): 1077 GB
- Total Available for VM Images (Qtr/Half/Full Racks): 2243 GB
- Fixed overhead per VM: 184 GB
Table 5-7 Space allocated to VMs
#VMs | Fixed Storage All VMs (GB) | X9M-2 Base System Space All /u02 (GB) | X9M-2 Qtr/Half/Full Racks All /u02 (GB) |
---|---|---|---|
1 | 184 | 892 | 900* |
2 | 368 | 708 | 1800* |
3 | 552 | 524 | 1691 |
4 | 736 | 340 | 1507 |
5 | 920 | N/A | 1323 |
6 | 1104 | N/A | 1139 |
7 | 1288 | N/A | 955 |
8 | 1472 | N/A | 771 |
*Space is limited by 900 GB max per VM
Parent topic: Introduction to Scale Up or Scale Down Operations
Scaling Local Storage Down
Scale Down Local Space Operation Guidelines
The scale down operation requires you to enter the local space value that you want each node to scale down to.
- Resource Limit Based On Recommended Minimums
  The scale down operation must meet the 60 GB recommended minimum size requirement for local storage.
- Resource Limit Based On Current Utilization
  The scale down operation must leave a 15% buffer on top of the highest local space utilization across all nodes in the cluster.
The lowest local space per node allowed is the higher of the above two limits.
Run the df -kh command on each node to find the node with the highest local storage utilization. You can also use a utility like cssh to issue the same command from all hosts in a cluster by typing it just once.
The lowest value of local storage each node can be scaled down to is 1.15 * (the highest value of local space used among all nodes).
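The two limits above combine into a simple floor calculation. This is an illustrative sketch; the function name is an assumption, and the per-node usage figures would come from df -kh on each node.

```python
# Sketch of the local storage scale-down floor: the higher of the 60 GB
# recommended minimum and 1.15x the highest per-node /u02 usage.
RECOMMENDED_MIN_GB = 60

def local_storage_floor(per_node_used_gb: list) -> float:
    """Lowest /u02 size (GB) each node can be scaled down to."""
    return max(RECOMMENDED_MIN_GB, 1.15 * max(per_node_used_gb))
```

For example, with per-node usage of 40, 55, and 80 GB, the floor is 1.15 * 80 = 92 GB; with usage well under the minimum, the 60 GB floor applies instead.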
Parent topic: Introduction to Scale Up or Scale Down Operations
Using the Console to Manage VM Clusters on Exadata Cloud@Customer
Learn how to use the console to create, edit, and manage your VM Clusters on Oracle Exadata Cloud@Customer.
- Using the Console to Create a VM Cluster
  To create your VM cluster, be prepared to provide values for the fields required for configuring the infrastructure.
- Using the Console to Enable or Disable Diagnostics Notification
  You can enable or disable diagnostics collection for your Guest VMs after provisioning the VM cluster.
- Using the Console to Add VMs to a Provisioned Cluster
  To add virtual machines to a provisioned cluster, use this procedure.
- Using the Console to View a List of DB Servers on an Exadata Infrastructure
  To view a list of database server hosts on an Oracle Exadata Cloud@Customer system, use this procedure.
- Using the Console to Remove a VM from a VM Cluster
  To remove a virtual machine from a provisioned cluster, use this procedure.
- Using the Console to Update the License Type on a VM Cluster
  To modify licensing, be prepared to provide values for the fields required for modifying the licensing information.
- Using the Console to Add SSH Keys After Creating a VM Cluster
- Using the Console to Scale the Resources on a VM Cluster
  Starting in Exadata Database Service on Cloud@Customer Gen2, you can scale multiple resources up or down at the same time, or one at a time.
- Using the Console to Stop, Start, or Reboot a VM Cluster Virtual Machine
  Use the console to stop, start, or reboot a virtual machine.
- Using the Console to Check the Status of a VM Cluster Virtual Machine
  Review the health status of a VM cluster virtual machine.
- Using the Console to Move a VM Cluster to Another Compartment
  To change the compartment that contains your VM cluster on Exadata Database Service on Cloud@Customer, use this procedure.
- Using the Console to Terminate a VM Cluster
  Before you can terminate a VM cluster, you must first terminate the databases that it contains.
Parent topic: Manage VM Clusters
Using the Console to Create a VM Cluster
To create your VM cluster, be prepared to provide values for the fields required for configuring the infrastructure.
To create a VM cluster, ensure that you have:
- An active Exadata infrastructure available to host the VM cluster.
- A validated VM cluster network is available for the VM cluster to use.
Related Topics
- Oracle Exadata Database Service on Cloud@Customer Service Description
- Using the Console to Scale the Resources on a VM Cluster
- Introduction to Scale Up or Scale Down Operations
- Estimating How Much Local Storage You Can Provision to Your VMs
- Resource Tags
- Oracle PaaS/IaaS Cloud Service Description documents
- Oracle Platform as a Service and Infrastructure as a Service – Public Cloud Service Descriptions – Metered & Non-Metered
- Getting Started with Events
- Overview of Database Service Events
Using the Console to Enable or Disable Diagnostics Notification
You can enable or disable diagnostics collection for your Guest VMs after provisioning the VM cluster.
Enabling diagnostics collection at the VM cluster level applies the configuration to all resources, such as the DB home, database, and so on, under the VM cluster.
- Open the navigation menu. Under Oracle Database, click Exadata Database Service on Cloud@Customer.
- Choose the Region that contains your Exadata infrastructure.
- Click VM Clusters.
- Click the name of the VM cluster for which you want to enable or disable diagnostic data collection.
- On the VM Cluster Details page, under General Information, enable or disable Diagnostic Notification.
Related Topics
Using the Console to Add VMs to a Provisioned Cluster
To add virtual machines to a provisioned cluster, use this procedure.
- The same Guest OS Image version running on the existing provisioned VMs in the cluster is used to provision new VMs added to extend the VM cluster. However, any customizations made to the Guest OS Image on the existing VMs must be manually applied to the newly added VM.
- For VM clusters running a Guest OS Image version older than a year, you must update the Guest OS Image version before adding a VM to extend the cluster.
- Adding a VM to a cluster will not automatically extend any database which is part of a Data Guard configuration (either primary or standby) to the newly provisioned VM.
- For databases not part of a Data Guard configuration, only databases that are running on all VMs in the existing cluster will be added to the newly provisioned VM. Any database running on a subset of VMs will not extend automatically to run on the newly added VM.
Related Topics
Using the Console to View a List of DB Servers on an Exadata Infrastructure
To view a list of database server hosts on an Oracle Exadata Cloud@Customer system, use this procedure.
Using the Console to Remove a VM from a VM Cluster
To remove a virtual machine from a provisioned cluster, use this procedure.
Terminating a VM from a cluster requires the removal of any database which is part of a Data Guard configuration (either primary or standby) from the VM to proceed with the terminate flow. For more information on manual steps, see My Oracle Support note 2811352.1.
Related Topics
Using the Console to Update the License Type on a VM Cluster
To modify licensing, be prepared to provide values for the fields required for modifying the licensing information.
Using the Console to Scale the Resources on a VM Cluster
Starting in Exadata Database Service on Cloud@Customer Gen2, you can scale up or down multiple resources at the same time. You can also scale up or down resources one at a time.
- Use Case 1: If you have allocated all of the resources to one VM cluster, and if you want to create multiple VM clusters, then there wouldn't be any resources available to allocate to the new clusters. Therefore, scale down the resources as needed to then create additional VM clusters.
- Use Case 2: If you want to allocate different resources based on the workload, then scale down or scale up accordingly. For example, you may want to run nightly batch jobs for reporting/ETL and scale down the VM once the job is over.
You can scale the following resources:
- OCPU
- Memory
- Local storage
- Exadata storage
Each scale down operation can take approximately 15 minutes to complete. If you run multiple scale down operations, they are performed in series. For example, if you scale down memory and local storage from the Console, the scaling happens one after the other. Scaling down local storage and memory takes longer than scaling down OCPU and Exadata storage.
Using the Console to Stop, Start, or Reboot a VM Cluster Virtual Machine
Use the console to stop, start, or reboot a virtual machine.
Using the Console to Check the Status of a VM Cluster Virtual Machine
Review the health status of a VM cluster virtual machine.
Using the Console to Move a VM Cluster to Another Compartment
To change the compartment that contains your VM cluster on Exadata Database Service on Cloud@Customer, use this procedure.
When you move a VM cluster, the compartment change is also applied to the virtual machines and databases that are associated with the VM cluster. However, the compartment change does not affect any other associated resources, such as the Exadata infrastructure, which remains in its current compartment.
Using the API to Manage Exadata Cloud@Customer VM Clusters
Review the list of API calls to manage your Exadata Database Service on Cloud@Customer VM cluster networks and VM clusters.
For information about using the API and signing requests, see "REST APIs" and "Security Credentials". For information about SDKs, see "Software Development Kits and Command Line Interface".
Use these API operations to manage Exadata Database Service on Cloud@Customer VM cluster networks and VM clusters:
- GenerateRecommendedVmClusterNetwork
- CreateVmClusterNetwork
- DeleteVmClusterNetwork
- GetVmClusterNetwork
- ListVmClusterNetworks
- UpdateVmClusterNetwork
- ValidateVmClusterNetwork
- CreateVmCluster
- DeleteVmCluster
- GetVmCluster
- ListVmClusters
- UpdateVmCluster
For the complete list of APIs, see "Database Service API".
Related Topics
- REST APIs
- Security Credentials
- Software Development Kits and Command Line Interface
- GenerateRecommendedVmClusterNetwork
- CreateVmClusterNetwork
- DeleteVmClusterNetwork
- GetVmClusterNetwork
- ListVmClusterNetworks
- UpdateVmClusterNetwork
- ValidateVmClusterNetwork
- CreateVmCluster
- DeleteVmCluster
- GetVmCluster
- ListVmClusters
- UpdateVmCluster
- Database Service API
Parent topic: Manage VM Clusters