About Elastic Pools

Elastic pools help you improve operating efficiency and reduce costs by bringing all of your databases to the cloud. They also let you consolidate resources and simplify administration and operations by using Autonomous Database.

Elastic pools are only available for Autonomous Database instances that use the ECPU compute model.

When you need a large number of databases that can scale up and down elastically without downtime, you can benefit by creating and using elastic pools. Elastic pools have the following advantages:

  • Enable operating with a fixed budget for a group of databases, while delivering performance elasticity for each individual database.

  • Allow for easy migration from on-premises Oracle environments that include oversubscription, providing a cost-effective way to move to Autonomous Database.

  • Support SaaS vendors with a large number of individual customer databases.

  • Provide resources for a microservices architecture, where the ability to supply a large number of databases is required.

  • The pool members in an elastic pool are not billed individually; the pool leader is billed based on the pool shape. You can allocate additional ECPUs to individual pool members without worrying about the cost associated with each member's ECPU usage. Autonomous Database I/O capacity and memory allocation are directly correlated with the ECPU count, so selecting a greater number of ECPUs for an instance lets it run with greater I/O capacity and more memory, where the cost is based on the pool shape rather than on an individual instance's ECPU count.

When you create an elastic pool you select a pool size from a predefined set of pool sizes. Pool size determines how much you pay for compute as well as how many ECPUs you can provision in a given pool.

The following terms are used when you work with elastic pools:

  • Pool Leader: The Autonomous Database instance that creates an elastic pool.

  • Pool Member: An Autonomous Database instance that is added to an elastic pool.

  • Pool Size: A value that you set when you create an elastic pool. The pool size must be one of the available elastic pool shapes.

  • Pool Shape: One of the valid pool sizes that you select when you create an elastic pool. The pool shape must be one of: 128, 256, 512, 1024, 2048, or 4096 ECPUs.

    Note

    By default, each instance in an elastic pool is automatically assigned a maintenance window. By selecting a pool shape of 1024 ECPUs or greater, you have the option of assigning a custom 2-hour maintenance window during which the leader and all elastic pool members are patched together. To select a custom maintenance window for your elastic pool, file a Service Request with Oracle Cloud Support.
  • Pool Capacity: The pool capacity is the maximum number of ECPUs that an elastic pool can use, and is four times (x4) the pool size.
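The relationship between pool shape and pool capacity can be sketched in a few lines of Python (an illustrative calculation only; `pool_capacity` is a hypothetical helper, not part of any Oracle API):

```python
# Valid elastic pool shapes, in ECPUs (the values listed above).
POOL_SHAPES = (128, 256, 512, 1024, 2048, 4096)

def pool_capacity(pool_size: int) -> int:
    """Pool capacity is four times (4x) the pool size."""
    if pool_size not in POOL_SHAPES:
        raise ValueError(f"{pool_size} is not a valid pool shape")
    return 4 * pool_size

print(pool_capacity(128))  # → 512
```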

Requirements to Create an Elastic Pool

The following are the requirements for an Autonomous Database instance to create an elastic pool and become a pool leader:

  • The instance must use the ECPU compute model.

  • The instance must be an Autonomous Database instance with the Transaction Processing workload type. This requirement applies only to the pool leader; an elastic pool can hold a mix of databases with Transaction Processing, Data Warehouse, JSON Database, and APEX workloads.

  • Auto scaling must be disabled.

  • The instance must not be a member of an existing elastic pool.

  • The maximum allowed individual ECPU count for an Autonomous Database instance that creates an elastic pool is 4 times the pool size specified when you create the pool.

  • The instance that creates an elastic pool is subject to tenancy limits. To create an elastic pool you must have a sufficient number of ECPUs available, below the tenancy limit, to accommodate the size of the elastic pool.
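As a rough sketch, the leader requirements above could be expressed as a single check (illustrative only; `can_create_pool` and its flags are hypothetical names, and the real validation is performed by the Autonomous Database service):

```python
def can_create_pool(uses_ecpu_model: bool, workload_type: str,
                    auto_scaling_enabled: bool, in_existing_pool: bool,
                    instance_ecpus: int, pool_size: int) -> bool:
    """Mirror the leader requirements: ECPU compute model, Transaction
    Processing workload, auto scaling disabled, not already in a pool,
    and an ECPU count of at most 4x the pool size (tenancy limits are
    not modeled here)."""
    return (uses_ecpu_model
            and workload_type == "Transaction Processing"
            and not auto_scaling_enabled
            and not in_existing_pool
            and instance_ecpus <= 4 * pool_size)

print(can_create_pool(True, "Transaction Processing", False, False, 512, 128))  # → True
```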

Requirements to Join an Elastic Pool

The following are the requirements for an Autonomous Database instance to join an elastic pool:

  • The instance must use the ECPU compute model.

  • An elastic pool can contain a mix of Autonomous Database instances with Transaction Processing, Data Warehouse, JSON Database, and APEX workload types.

  • Auto scaling must be disabled.

  • The instance must not be a member of an elastic pool.

  • The maximum allowed individual ECPU count for an Autonomous Database instance is the available pool capacity. When an instance has an ECPU count greater than the available pool capacity, it is not allowed to join that elastic pool.
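The join requirements can be sketched the same way (again with hypothetical helper names; note that the workload check is broader than for the leader, and the instance's ECPU count must fit within the available pool capacity):

```python
SUPPORTED_WORKLOADS = {"Transaction Processing", "Data Warehouse",
                       "JSON Database", "APEX"}

def can_join_pool(uses_ecpu_model: bool, workload_type: str,
                  auto_scaling_enabled: bool, in_existing_pool: bool,
                  instance_ecpus: int, available_capacity: int) -> bool:
    """An instance may join when it uses the ECPU model, has a supported
    workload type, has auto scaling disabled, is not already in a pool,
    and its ECPU count does not exceed the available pool capacity."""
    return (uses_ecpu_model
            and workload_type in SUPPORTED_WORKLOADS
            and not auto_scaling_enabled
            and not in_existing_pool
            and instance_ecpus <= available_capacity)

print(can_join_pool(True, "APEX", False, False, 4, 512))    # → True
print(can_join_pool(True, "APEX", False, False, 600, 512))  # → False
```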

Pool Leader and Member Instance ECPU Allocation

When an Autonomous Database instance is part of an elastic pool, the minimum allowed ECPU allocation for an individual instance is 1 ECPU, and individual instance ECPU allocations can be changed in increments of 1 ECPU.

Pool Capacity for an Elastic Pool

An elastic pool has a pool capacity of 4 times the pool size. For example, a pool with pool size of 128 ECPUs can hold up to 512 ECPUs for its leader and the members.

Note

In these examples Autonomous Data Guard is not enabled. See About Elastic Pools with Autonomous Data Guard Enabled for information on using elastic pools with Autonomous Data Guard.

The following are examples of Autonomous Database instances that could be in an elastic pool with a pool size of 128 and a pool capacity of 512:

  • Each of the following is valid for pool members in an elastic pool with a pool size of 128:
    • 1 instance with 512 ECPUs, for a total of 512 ECPUs

    • 128 instances with 4 ECPUs, for a total of 512 ECPUs

    • 256 instances with 2 ECPUs, for a total of 512 ECPUs

    • 50 instances with 10 ECPUs and 3 instances with 4 ECPUs, for a total of 512 ECPUs

  • Similarly, each of the following is valid for pool members in an elastic pool with a pool size of 128:
    • 1 instance with 128 ECPUs, 2 instances with 64 ECPUs, 32 instances with 4 ECPUs, and 64 instances with 2 ECPUs, for a total of 512 ECPUs

    • 256 instances with 1 ECPU and 64 instances with 2 ECPUs, for a total of 384 ECPUs, which is less than the pool capacity of 512 ECPUs

    • 100 instances with 4 ECPUs and 50 instances with 2 ECPUs, for a total of 500 ECPUs, which is less than the pool capacity of 512 ECPUs

These are only examples; you can add pool members to a pool to match the number of instances and the number of ECPUs per instance to your needs, based on the pool size you select.
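All of the examples above reduce to one arithmetic check: the combined ECPU allocation of the leader and members must not exceed 4x the pool size. A minimal sketch (hypothetical helper, not an Oracle API):

```python
def fits_pool(instance_ecpus: list[int], pool_size: int) -> bool:
    """Return True when the combined ECPU allocation of the leader and
    members fits within the pool capacity (4x the pool size)."""
    return sum(instance_ecpus) <= 4 * pool_size

# Examples from above, for a pool size of 128 (capacity 512):
print(fits_pool([512], 128))                  # 1 x 512 = 512 → True
print(fits_pool([10] * 50 + [4] * 3, 128))    # 50x10 + 3x4 = 512 → True
print(fits_pool([4] * 100 + [2] * 50, 128))   # 500 → True
print(fits_pool([4] * 129, 128))              # 516 → False
```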

Topics

About Elastic Pools with Autonomous Data Guard Enabled

The elastic pool leader or members can enable either local or cross-region Autonomous Data Guard, or both local and cross-region Autonomous Data Guard.

Local Autonomous Data Guard Standby Database Billing

When you add a local standby, a total of two times (2x) the primary's ECPU allocation is counted towards the pool capacity (1x for the primary and 1x for the standby). That is, a local standby doubles the primary's ECPU count against the pool.

For example, if you create an elastic pool with a pool size of 128 ECPUs, with a pool capacity of 512 ECPUs, adding the following Autonomous Database instance uses the elastic pool capacity:

  • 1 instance with 256 ECPUs with local Autonomous Data Guard enabled, for a total of 512 ECPUs allocation from the pool.

    When using this instance, its own ECPU utilization is 256 ECPUs; however, the overall peak ECPU utilization is reported as 512 because of the 2x multiplication factor for the local standby database, and billing is based on 4x the pool size (512 ECPUs).

Similarly, if you create an elastic pool with a pool size of 128 ECPUs, with a pool capacity of 512 ECPUs, adding the following Autonomous Database instances uses the elastic pool capacity as follows:
  • 128 instances with 2 ECPUs each, with local Autonomous Data Guard enabled, for a total of 512 ECPUs allocation from the pool.

    When all of these databases are running at peak (100% ECPU utilization), the actual peak is 256 ECPUs (128 instances × 2 ECPUs each). However, the overall peak ECPU utilization of the pool is reported as 512 because of the 2x factor for the standby databases. Billing in this case is based on 4x the pool size, or 512 ECPUs.
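The local standby accounting in these two examples can be reproduced with a small calculation (illustrative only; `counted_ecpus` is a hypothetical helper):

```python
def counted_ecpus(primary_ecpus: int, local_adg: bool) -> int:
    """ECPUs counted toward pool capacity: a local Autonomous Data Guard
    standby doubles the primary's allocation (1x primary + 1x standby)."""
    return primary_ecpus * (2 if local_adg else 1)

# 1 instance with 256 ECPUs and a local standby:
print(counted_ecpus(256, local_adg=True))               # → 512
# 128 instances with 2 ECPUs each, all with local standbys:
print(sum(counted_ecpus(2, True) for _ in range(128)))  # → 512
```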

Cross-Region Autonomous Data Guard Standby Database Billing

Enabling cross-region Autonomous Data Guard for a leader or a member has no effect on the elastic pool capacity. A cross-region Autonomous Data Guard peer database has its own OCID and is billed independently from the elastic pool.

Note the following:

  • Cross-region Autonomous Data Guard peer ECPUs do not use pool capacity and billing for Autonomous Data Guard cross-region peer databases happens on the peer instance.

  • When the leader of an elastic pool enables cross-region Autonomous Data Guard, the cross-region peer database ECPU allocation does not count towards the elastic pool capacity. Billing for cross-region Autonomous Data Guard is on the cross-region instance, which is not part of the elastic pool (elastic pools do not operate across regions).

  • When a member of an elastic pool enables cross-region Autonomous Data Guard, the cross-region peer ECPU allocation does not count towards the pool capacity. Billing for cross-region Autonomous Data Guard is on the cross-region instance, which is not part of the elastic pool (elastic pools do not operate across regions).

For example, if you create an elastic pool with a pool size of 128 ECPUs (with a pool capacity of 512 ECPUs), adding the following Autonomous Database instances of different sizes uses the entire elastic pool capacity:

  • A pool that contains the following instances:
    • 1 instance with 128 ECPUs with cross-region Autonomous Data Guard enabled (using a total of 128 ECPUs from the pool).

    • 64 instances with 2 ECPUs each with both local and cross-region Autonomous Data Guard enabled (using a total of 256 ECPUs from the pool).

    • 128 instances with 1 ECPU, each with cross-region Autonomous Data Guard enabled (using 128 ECPUs from the pool).
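The mixed example above can be checked the same way, extending the sketch so that a cross-region peer contributes nothing to pool capacity (hypothetical helper; billing for the peer happens on its own instance):

```python
def pool_usage(primary_ecpus: int, local_adg: bool = False,
               cross_region_adg: bool = False) -> int:
    """ECPUs drawn from pool capacity. A local standby doubles the
    count; a cross-region peer is billed separately and adds nothing."""
    del cross_region_adg  # no effect on pool capacity
    return primary_ecpus * (2 if local_adg else 1)

members = ([(128, False, True)]         # 1 x 128 ECPUs, cross-region ADG
           + [(2, True, True)] * 64     # 64 x 2 ECPUs, local + cross-region
           + [(1, False, True)] * 128)  # 128 x 1 ECPU, cross-region ADG
print(sum(pool_usage(e, l, x) for e, l, x in members))  # → 512
```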

About Elastic Pool Leader and Member Operations

The Autonomous Database instance that creates an elastic pool is the pool leader. Autonomous Database instances that are added to an existing pool are pool members. Depending on your role, either leader or member, you can perform operations on an elastic pool.

Pool Leader Operations

The following operations are valid only for the pool leader:

  • Create an elastic pool: The Autonomous Database instance that creates an elastic pool is the pool leader. See Create an Elastic Pool for more information.

  • Remove an elastic pool member: An elastic pool leader can remove a member from the elastic pool. See As Pool Leader Remove Members from an Elastic Pool for more information.

  • Terminate an elastic pool: When an elastic pool has no pool members, the pool leader can terminate the elastic pool. See Terminate an Elastic Pool for more information.

  • Modify elastic pool size: An elastic pool leader can modify the pool size. See Change the Elastic Pool Shape for more information.

  • List pool members: A pool leader can list pool members. See List Elastic Pool Members for more information.

Pool Member Operations

The following operations are valid for a pool member or for the pool leader:

  • Add instance to elastic pool: An Autonomous Database instance can be added as a pool member as long as the instance has one of the supported workload types (Transaction Processing, Data Warehouse, JSON Database, or APEX), uses the ECPU compute model, and is not a member of a different pool. See Join an Existing Elastic Pool for more information.

  • Remove an elastic pool member: An elastic pool member can remove itself from the elastic pool. See Remove Pool Members from an Elastic Pool for more information.