Billing Autonomous Database on Dedicated Exadata Infrastructure
Oracle Autonomous Database on Dedicated Exadata Infrastructure uses specific algorithms to allocate and bill for usage of the compute used by Autonomous Databases. Understanding these algorithms can help you determine how best to create and configure your Autonomous Databases to meet performance goals in the most cost-effective fashion.
- CPU Billing Details
- Elastic Pool Billing
An elastic pool allows you to consolidate your Autonomous Database instances in terms of their compute resource billing.
- Autonomous Data Guard Standby Database Billing in Elastic Pools
The elastic pool leader or members can enable either local or cross-region Autonomous Data Guard. This topic describes how a standby database is billed in an elastic pool with Autonomous Data Guard configuration.
CPU Billing Details
Oracle Autonomous Database on Dedicated Exadata Infrastructure computes CPU billing as follows:
- CPU usage for each Autonomous Database is measured each second in units of whole ECPU or OCPU.
- A stopped Autonomous Database uses zero ECPU or OCPU. When an Autonomous Database is stopped, you are not billed.
- A running Autonomous Database uses its allocated number of ECPUs or OCPUs plus any additional ECPUs or OCPUs due to auto-scaling. When an Autonomous Database is running, you are billed for the number of CPUs currently allocated to the database, whether specified at initial creation or later by a manual scaling operation. Additionally, if auto-scaling is enabled for the database, you are billed for each second for any additional CPUs the database is using as the result of being automatically scaled up.
Note
Creating Autonomous VM Cluster (AVMC) and Autonomous Container Database (ACD) resources does not initiate billing. So, even though you assign a total CPU count to an AVMC and each ACD consumes 8 ECPUs or 2 OCPUs per node when created, these CPUs are not billed. Only when an Autonomous Database is provisioned in an AVMC and an underlying ACD, and that database is actively running, are its CPUs billed. As a result, you can create ACDs within AVMCs to organize and group your databases according to your lines of business, functional areas, or some other scheme without incurring costs.
- When you create an Autonomous Database, by default Oracle reserves additional CPUs to ensure that the database can run with at least 50% capacity even in the case of a node failure. You can change the percentage of CPUs reserved across nodes to 0% or 25% while provisioning an ACD. See Node failover reservation in Create an Autonomous Container Database for instructions. These additional CPUs are not included in the billing.
- The per-second measurements are averaged across each hour interval for each Autonomous Database.
- The per-hour averages for the Autonomous Databases are added together to determine the CPU usage per hour across the entire Autonomous VM Cluster resource (a sketch of this roll-up appears after this list).
- See Compute Management in Autonomous Database to learn how CPUs move between the total, available, and reclaimable CPU categories with usage and how they are billed.
- See CPU Allocation When Auto-Scaling to understand how the CPUs allocated with auto-scaling impact billing, with specific examples.
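As a rough illustration of the roll-up described above, the following Python sketch averages each database's per-second measurements over an hour and then sums the per-database averages across the cluster. The function name and sample data are illustrative only; this is not an Oracle API.

```python
# A rough sketch of the hourly CPU-usage roll-up, assuming one hour of
# per-second ECPU measurements per database. Illustrative only.

def hourly_cpu_usage(per_second_samples_by_db):
    """Average each database's per-second measurements across the hour,
    then add the per-database averages together to get the CPU usage per
    hour across the entire Autonomous VM Cluster."""
    total = 0.0
    for samplesples in []:
        pass
    for samples in per_second_samples_by_db.values():
        total += sum(samples) / len(samples)
    return total

# Example: a database allocated 4 ECPUs that auto-scales to 8 ECPUs for the
# second half of the hour, plus a stopped database (zero ECPUs every second).
samples_running = [4] * 1800 + [8] * 1800
samples_stopped = [0] * 3600
print(hourly_cpu_usage({"db1": samples_running, "db2": samples_stopped}))  # 6.0
```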
Elastic Pool Billing
An elastic pool allows you to consolidate your Autonomous Database instances in terms of their compute resource billing.
You can think of an elastic pool as a mobile phone service "family plan" for your Autonomous Database instances. Instead of paying individually for each database, the databases are grouped into a pool in which one instance, the leader, is charged for the compute usage of the entire pool. See Consolidate Autonomous Database Instances Using Elastic Pools for complete details about elastic resource pools.
Elastic resource pool usage:
- Is billed to the pool leader, and billing is based on the elastic resource pool size and the actual hourly ECPU usage of the pool leader and the members.
- Can exceed the pool size (pool capacity can be up to four times the pool size).
- Covers compute resources only (ECPU usage), with all compute usage charged to the Autonomous Database instance that is the pool leader.
Using an elastic pool, you can provision up to four times the number of ECPUs over your selected pool size, and you can provision database instances in the elastic pool with as little as 1 ECPU per database instance. Outside of an elastic pool, the minimum number of ECPUs per database instance is 2. For example, with a pool size of 128, you can provision 512 Autonomous Database instances (when each instance has 1 ECPU). In this example, you are billed for the pool size of 128 ECPUs while you have access to 512 Autonomous Database instances. In contrast, when you individually provision 512 Autonomous Database instances without using an elastic pool, you must allocate a minimum of 2 ECPUs for each Autonomous Database instance, so you would pay for 1024 ECPUs. Using an elastic pool therefore provides up to 87% compute cost savings.
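For reference, here is the arithmetic from this example written out as a small Python snippet. The variable names are illustrative; the numbers come directly from the example above.

```python
# The cost-savings arithmetic from the example above (illustrative only).
pool_size = 128                        # ECPUs billed to the pool leader per hour
instances = 512                        # databases, 1 ECPU each inside the pool
min_ecpus_outside_pool = 2             # minimum per instance outside a pool

billed_with_pool = pool_size                              # 128 ECPUs
billed_without_pool = instances * min_ecpus_outside_pool  # 1024 ECPUs
savings = 1 - billed_with_pool / billed_without_pool
print(billed_with_pool, billed_without_pool, f"{savings:.1%}")  # 128 1024 87.5%
```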
After creating an elastic pool, the total ECPU usage for a given hour is charged to the Autonomous Database instance that is the pool leader. Except for the pool leader, individual Autonomous Database instances that are pool members are not charged for ECPU usage while they are members of an elastic pool.
Elastic pool billing is as follows (the billing tiers are summarized in a sketch after this list):
- If the aggregated peak ECPU utilization of the pool is equal to or below the pool size for a given hour, you are charged for the pool size number of ECPUs (one times the pool size). This represents up to 87% compute cost savings over the case in which these databases are billed separately without using elastic pools.
- After an elastic pool is created, ECPU billing continues at a minimum of one times the pool size, even when the pool member databases and the pool leader are stopped.
- If the aggregated peak ECPU utilization of the pool leader and the members exceeds the pool size at any point in time in a given billing hour:
- Greater than one times and up to two times the pool size: If the aggregated peak ECPU utilization of the pool exceeds the pool size but is less than or equal to two times the pool size for a given billing hour, hourly billing is two times the pool size number of ECPUs. This represents up to 75% compute cost savings over the case in which these databases are billed separately without using elastic pools.
- Greater than two times and up to four times the pool size: If the aggregated peak ECPU utilization of the pool exceeds two times the pool size, up to and including the pool capacity of four times the pool size, for a given billing hour, hourly billing is four times the pool size number of ECPUs. This represents up to 50% compute cost savings over the case in which these databases are billed separately without using elastic pools.
- For example, consider an elastic pool with a pool size of 128 ECPUs and a pool capacity of 512 ECPUs:
- Case 1: The aggregated peak ECPU utilization of the pool leader and the members is 40 ECPUs between 2:00 pm and 2:30 pm and 128 ECPUs between 2:30 pm and 3:00 pm.
- The elastic pool is billed for 128 ECPUs, one times the pool size, for this billing hour (2–3 pm). This case applies when the peak aggregated ECPU usage of the elastic pool for the billing hour is less than or equal to 128 ECPUs.
- Case 2: The aggregated peak ECPU utilization of the pool leader and the members is 40 ECPUs between 2:00 pm and 2:30 pm and 250 ECPUs between 2:30 pm and 3:00 pm.
- The elastic pool is billed for 256 ECPUs, two times the pool size, for this billing hour (2–3 pm). This case applies when the peak aggregated ECPU usage of the elastic pool for the billing hour is greater than 128 ECPUs and less than or equal to 256 ECPUs.
- Case 3: The aggregated peak ECPU utilization of the pool leader and the members is 80 ECPUs between 2:00 pm and 2:30 pm, and 509 ECPUs between 2:30 pm and 3:00 pm.
- The elastic pool is billed for 512 ECPUs, four times the pool size, for this billing hour (2–3 pm). This case applies when the peak aggregated ECPU usage of the elastic pool for the billing hour is greater than 256 ECPUs and less than or equal to 512 ECPUs.
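The billing tiers above can be summarized as a step function of the aggregated peak ECPU utilization for the hour. The following Python sketch is illustrative only (the function name is hypothetical, not an Oracle API); the calls reproduce the three cases above.

```python
# A summary of the elastic pool billing tiers as a step function of the
# aggregated peak ECPU utilization for the billing hour. Illustrative only.

def elastic_pool_hourly_charge(pool_size, aggregated_peak_ecpus):
    """Return the number of ECPUs billed to the pool leader for one hour."""
    if aggregated_peak_ecpus <= pool_size:
        return pool_size          # one times the pool size
    if aggregated_peak_ecpus <= 2 * pool_size:
        return 2 * pool_size      # two times the pool size
    return 4 * pool_size          # four times the pool size (the pool capacity)

# The three cases above, with a pool size of 128 ECPUs:
print(elastic_pool_hourly_charge(128, 128))  # Case 1: 128 ECPUs
print(elastic_pool_hourly_charge(128, 250))  # Case 2: 256 ECPUs
print(elastic_pool_hourly_charge(128, 509))  # Case 3: 512 ECPUs
```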
For more details, see How to Achieve up to 87% Compute Cost Savings with Elastic Resource Pools on an Autonomous Database.
Elastic Pool Billing when a Pool is Created or Terminated
When an elastic pool is created or terminated, the leader is billed for the elastic pool for the entire hour. In addition, individual instances that are added to or removed from the pool are billed for any compute usage that occurs while the instance is not in the elastic pool (in this case, the billing applies to the individual Autonomous Database instance). Both situations are worked through in the examples below and in the sketch that follows them.
- Pool Creation Example: Assume an Autonomous Database instance with 4 ECPUs is not part of any elastic pool. At 2:15 pm, if you create an elastic pool with this instance with a pool size of 128 ECPUs, the instance becomes a pool leader. Assuming the Autonomous Database idles between 2–3 pm, and there are no other Autonomous Database instances in the pool, billing for the hour between 2–3 pm is as follows:
- The bill for the period 2–3 pm is: (4 * 0.25) + 128 = 129 ECPUs.
- Where (4 * 0.25) is the compute billing for the fifteen minutes before the elastic pool was created (from 2:00 to 2:15 pm, the instance is billed against the VM cluster), and 128 ECPUs is the billing for the elastic pool for the hour in which it is created.
- Pool Termination Example: Assume an Autonomous Database instance with 4 ECPUs is the leader of an elastic pool, and the pool size is 128 ECPUs. At 4:30 pm, if you terminate the elastic pool, the database becomes a standalone Autonomous Database instance, not part of any elastic pool. Assuming the Autonomous Database idles between 4–5 pm, and there are no other Autonomous Database instances in the pool, billing for the hour between 4–5 pm is as follows:
- The bill for 4–5 pm is: (4 * 0.5) + 128 = 130 ECPUs.
- Where (4 * 0.5) is the compute billing for the thirty minutes after the elastic pool was terminated, and 128 ECPUs is the billing for the elastic pool for the hour in which it was terminated.
- Once the Autonomous Database instance leaves the pool, it becomes part of the VM cluster again and is billed against the VM cluster.
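The following Python sketch reproduces the two examples above, assuming the changeover hour is billed as the instance's allocated ECPUs prorated over the fraction of the hour spent outside the pool, plus the full pool size for that hour. The function name is illustrative, not an Oracle API.

```python
# The pool creation and termination examples above (illustrative only).

def changeover_hour_bill(instance_ecpus, hours_outside_pool, pool_size):
    """ECPUs billed for an hour in which a pool is created or terminated:
    the instance's allocation prorated over the time spent outside the pool,
    plus the full pool size for that hour."""
    return instance_ecpus * hours_outside_pool + pool_size

# Pool created at 2:15 pm: 15 minutes (0.25 h) outside the pool before creation.
print(changeover_hour_bill(4, 0.25, 128))  # 129.0 ECPUs billed for 2-3 pm

# Pool terminated at 4:30 pm: 30 minutes (0.5 h) outside the pool afterwards.
print(changeover_hour_bill(4, 0.5, 128))   # 130.0 ECPUs billed for 4-5 pm
```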
Elastic Pool Billing when a Pool Member or Leader Leaves the Pool
- If a pool member with 2 ECPUs or more leaves the pool, the individual instance's ECPU allocation remains, and the instance is billed for that number of ECPUs.
- If a pool member with 1 ECPU leaves the pool, its ECPU allocation is automatically set to 2 ECPUs, and the instance is billed for 2 ECPUs going forward unless it is scaled up (see the sketch below).
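A minimal sketch of this rule, assuming the only adjustment on leaving an elastic pool is enforcing the 2-ECPU minimum that applies outside pools (the function name is hypothetical):

```python
# Allocation after an instance leaves an elastic pool (illustrative only).

def ecpus_after_leaving_pool(allocated_ecpus):
    # The instance keeps its allocation, but never below the 2-ECPU minimum.
    return max(2, allocated_ecpus)

print(ecpus_after_leaving_pool(1))  # 2: a 1-ECPU member is bumped to 2 ECPUs
print(ecpus_after_leaving_pool(4))  # 4: allocations of 2 or more are unchanged
```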
Autonomous Data Guard Standby Database Billing in Elastic Pools
The elastic pool leader or members can enable either local or cross-region Autonomous Data Guard. This topic describes how a standby database is billed in an elastic pool with Autonomous Data Guard configuration.
When you add a local or cross-region standby, a total of two times (2x) the primary's ECPU allocation is counted towards the pool capacity (1x for the primary and 1x for the standby). The total peak ECPU usage is calculated by multiplying the primary's peak usage by 2.
For example, if you create an elastic pool with a pool size of 128 ECPUs, with a pool capacity of 512 ECPUs, adding the following Autonomous Database instance uses the elastic pool capacity:
- 1 instance with 256 ECPUs with Autonomous Data Guard enabled, for a total of 512 ECPUs allocated from the pool.
When the primary instance runs at 100% CPU utilization, it uses 256 ECPUs; however, the overall peak ECPU utilization is reported as 512 ECPUs because of the 2x multiplication factor for the standby database. Billing is therefore based on four times the pool size (512 ECPUs).
Similarly, if you create an elastic pool with a pool size of 128 ECPUs, with a pool capacity of 512 ECPUs, adding the following Autonomous Database instances uses the elastic pool capacity as follows:
- 128 instances with 2 ECPUs each, with Autonomous Data Guard enabled, for a total of 512 ECPUs allocated from the pool.
When all of these databases are running at 100% ECPU utilization, the primaries' peak usage is 256 ECPUs (128 instances * 2 ECPUs per instance). However, the overall peak ECPU utilization of the pool is reported as 512 ECPUs because of the 2x factor for the standby databases. Billing in this case is based on four times the pool size, or 512 ECPUs.
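The following Python sketch applies the 2x standby factor to both examples above. The function name and data layout are illustrative, not an Oracle API; each entry represents one primary instance in the pool.

```python
# Reported peak pool utilization with the Autonomous Data Guard 2x factor
# (illustrative only). Each entry is (allocated ECPUs, standby enabled).

def reported_peak_ecpus(primaries):
    """Peak utilization reported for the pool when every primary runs at 100%,
    doubling the contribution of any primary that has a standby database."""
    return sum(ecpus * (2 if has_adg else 1) for ecpus, has_adg in primaries)

pool_size = 128  # pool capacity is 4 x 128 = 512 ECPUs

# First example: one 256-ECPU primary with Autonomous Data Guard enabled.
print(reported_peak_ecpus([(256, True)]))      # 512 -> billed at 4x = 512 ECPUs

# Second example: 128 primaries with 2 ECPUs each, all with Autonomous Data Guard.
print(reported_peak_ecpus([(2, True)] * 128))  # 512 -> billed at 4x = 512 ECPUs
```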