Managing Patches for ODH Clusters

With Big Data Service, you can patch ODH clusters with new components that might include security fixes and other minor improvements.

By default, ODH patch installation involves downtime that impacts running applications. Big Data Service 3.0.28 and later includes the Host Order Upgrade cluster patch strategy to support minimal downtime.

Host Order Upgrade

In this approach, Big Data Service patches and restarts services simultaneously on batches (groups) of hosts. The patch orchestrator provides options to select the batch size for the upgrade. Host Order Upgrade is faster than other upgrade strategies and is suitable for environments that can't tolerate downtime.

Supported ODH Patch Installation Strategies
  • Installation with downtime:

    The downtime-based patch strategy is the same as Express Upgrade, and it's suitable for clusters that can afford downtime. This is the default ODH patch installation strategy.

  • AD/FD Installation:

    The Availability Domain (AD) based patch strategy is a variation of Host Order Upgrade, where host group sets are created based on AD, and all the hosts in a set are upgraded sequentially before moving to the next set. You can provide a sleep duration between ADs.

    Note

    Order of preference: the AD containing the maximum number of node types, for example, master, utility, edge, and worker nodes.
  • Batch manner:

    The batching-based patch strategy is a variation of Host Order Upgrade where host group sets are created based on the batch size you provide, except for the First Batch, which Big Data Service selects. The First Batch is a special batch irrespective of the batch size you provide.

    • Big Data Service picks the First Batch with all available node types on a cluster across all ADs/FDs to ensure the patching succeeds on all types of nodes.
    • The batch size you provide must be less than or equal to the number of nodes in an AD.
    • You can provide an inter-batch patching pause duration.
    Note

    • If you don't pass a batch size, the default batch size is the smallest number of nodes in any AD.
    • For multi-AD regions, except for the First Batch, all batches are prepared within each AD. The sequence of batches starts from the AD where the First Batch is created.
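
If you automate patching, you can pass the chosen strategy with the install request. The following is a minimal sketch using the OCI Python SDK; the patching-config model and field names (BatchingBasedPatchingConfigs, batch_size, wait_time_between_batch_in_seconds) are assumptions that mirror the strategies described above, so verify them against the oci.bds SDK reference for your SDK version.

    # Minimal sketch: install an ODH patch with the batching-based strategy
    # using the OCI Python SDK. The patching-config model and field names are
    # assumptions mirroring the strategies above; verify them against the
    # oci.bds SDK reference for your SDK version.
    import oci

    config = oci.config.from_file()                  # reads ~/.oci/config
    client = oci.bds.BdsClient(config)
    cluster_ocid = "ocid1.bdsinstance.oc1..example"  # placeholder OCID

    details = oci.bds.models.InstallPatchDetails(
        version="ODH-2.0-105",                       # hypothetical patch version
        cluster_admin_password="<base64-encoded-admin-password>",
        patching_config=oci.bds.models.BatchingBasedPatchingConfigs(
            batch_size=5,                            # hosts patched per batch
            wait_time_between_batch_in_seconds=600,  # inter-batch pause
        ),
    )

    response = client.install_patch(cluster_ocid, details)
    # Patching runs asynchronously as a work request.
    print(response.headers.get("opc-work-request-id"))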

You can list patches, install patches, and view patch history for a cluster on the Cluster details page.
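
The same actions are available through the API for automation. Here is a minimal sketch using the OCI Python SDK, assuming the list_patches and list_patch_histories operations and standard summary field names; the cluster OCID is a placeholder.

    # Minimal sketch: list available patches and past patch installations for
    # a cluster, mirroring the actions on the Cluster details page. The
    # summary field names are assumptions to verify against the SDK reference.
    import oci

    client = oci.bds.BdsClient(oci.config.from_file())
    cluster_ocid = "ocid1.bdsinstance.oc1..example"  # placeholder OCID

    for patch in client.list_patches(cluster_ocid).data:
        print(patch.version, patch.time_released)

    for entry in client.list_patch_histories(cluster_ocid).data:
        print(entry.version, entry.lifecycle_state, entry.time_updated)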

Planning ODH Patch Installation

Installation with Downtime

Plan required downtime for Big Data Service ODH cluster patch installation. The following information explains how to quantify the downtime and how to tell when it's complete. For information on patch installation stages and the required downtime, see Monitoring ODH Patching Workflow Steps.

Gauging Impact

For an HA, secure ODH cluster with 7 to 25 nodes, downtime is expected to start 40 to 50 minutes after the patch starts.

AD/FD Installation or Batch Manner

Note

Each patch supports backward and forward compatibility. For example, if one AD is patched with a newer version of components, the components running in the other ADs remain compatible with the newer version.

Gauging Impact

Prerequisites to patch ODH with minimal downtime:

  • Big Data Service 3.0.28 or later.
  • The cluster must be HA-enabled to avoid impact on applications.
  • Enable HA for components such as Hive, Oozie, Ranger, and Schema Registry to avoid downtime. ODH patching doesn't explicitly check for HA for any of the components; downtime occurs when HA isn't configured for supported components. You can check the version and HA prerequisites with the sketch after this list.
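
The first two prerequisites can be verified programmatically. Here is a minimal sketch with the OCI Python SDK, assuming the cluster resource exposes the is_high_availability and cluster_details.bds_version fields; verify the field names for your SDK version.

    # Minimal sketch: check the minimal-downtime prerequisites before
    # patching. Assumes the BdsInstance resource exposes is_high_availability
    # and cluster_details.bds_version; verify for your SDK version.
    import oci

    client = oci.bds.BdsClient(oci.config.from_file())
    cluster = client.get_bds_instance("ocid1.bdsinstance.oc1..example").data

    print("HA enabled:", cluster.is_high_availability)          # must be True
    print("BDS version:", cluster.cluster_details.bds_version)  # needs 3.0.28+
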
Limitations
  • Trino and Hue don't support HA, and an upgrade can incur downtime for these services.
  • When an ODH patch is in progress, no LCM operations are allowed.
  • When an ODH patch has failed, no LCM operations are allowed except re-triggering an ODH patch or deleting a cluster.

Expected Behavior of Components

The following describes the expected behavior of each component during ODH patching.

Hadoop

On an HA cluster, there's no downtime during patching of the HDFS component.

This assumes that HDFS components are backward and forward compatible.

Note: The wait time is configured based on how long the DataNode takes to sync data and on the data ingestion rate.

Yarn

As part of the AD/FD Installation or Batch manner patch process, the following happens:

  • The NodeManager is decommissioned before the upgrade starts on a host.
  • When the NodeManager is decommissioned on a host, containers running on that node are killed. Tasks running on a killed container are spawned on another available NodeManager. No new containers are launched on the decommissioned node.
  • After the upgrade is complete on a host, the NodeManager is recommissioned. You can watch this cycle with the sketch after the upgrade steps.

Note: All components that use Yarn as the resource scheduler, for example, Flink, Spark, Hive, and MapReduce, are expected to show similar behavior.

For upgrade, the following steps are followed:

  1. Stop Component
  2. Start Component
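
To watch the decommission and recommission cycle, you can poll the YARN ResourceManager REST API (ws/v1/cluster/nodes). A minimal sketch; the ResourceManager address is a placeholder for your cluster.

    # Minimal sketch: poll the YARN ResourceManager REST API to watch
    # NodeManager states (for example, RUNNING, DECOMMISSIONING,
    # DECOMMISSIONED) while a batch is patched.
    import requests

    RM = "http://<resourcemanager-host>:8088"  # placeholder address

    nodes = requests.get(f"{RM}/ws/v1/cluster/nodes").json()
    for node in nodes["nodes"]["node"]:
        print(node["nodeHostName"], node["state"])
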
Spark

For upgrade, the following steps are followed:

  1. Stop Component
  2. Start Component
Kafka

Prerequisites:

  • You must have multiple Kafka brokers on the cluster.
  • Multiple replicas of a topic can't be placed in the same AD.
  • After a broker is upgraded, the replica on that broker can lag. However, it catches up after the broker is back up; the sketch after these steps shows one way to verify this.

For upgrade, the following steps are followed:

  1. Stop Component
  2. Start Component

Note: The wait time between batches or ADs is decided based on the data ingestion and processing rate.
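
One way to confirm that replicas have caught up after a broker is patched is to look for under-replicated partitions, where the in-sync replica set (ISR) is smaller than the replica set. A minimal sketch using the confluent-kafka package (an assumption; any Kafka metadata tool works); the bootstrap address is a placeholder.

    # Minimal sketch: report under-replicated partitions (ISR smaller than
    # the replica set) so you can confirm replicas caught up after a broker
    # was patched.
    from confluent_kafka.admin import AdminClient

    admin = AdminClient({"bootstrap.servers": "<broker-host>:9092"})
    metadata = admin.list_topics(timeout=10)

    for topic in metadata.topics.values():
        for pid, partition in topic.partitions.items():
            if len(partition.isrs) < len(partition.replicas):
                print(f"{topic.topic}[{pid}] under-replicated: "
                      f"ISR={partition.isrs} replicas={partition.replicas}")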

HBase

Prerequisite:

For bulk loads to an HBase table, you must set yarn.timeline-service.enabled = false to avoid job failures.

For upgrade, the following steps are followed:

  1. Stop Component
  2. Start Component
Hive

No expected downtime if Hive HA is enabled.

For upgrade, the following steps are followed:

  1. Stop Component
  2. Start Component
JupyterHub

For upgrade, the following steps are followed:

  1. Stop Component
  2. Start Component
Oozie

No expected downtime if Oozie HA is enabled.

For upgrade, the following steps are followed:

  1. Stop Component
  2. Start Component
Ranger

No expected downtime if Ranger HA is enabled.

For upgrade, the following steps are followed:

  1. Stop Component
  2. Start Component
Trino

No expected downtime.

For upgrade, the following steps are followed:

  1. Stop Component
  2. Start Component
Hue

No expected downtime.

For upgrade, the following steps are followed:

  1. Stop Component
  2. Start Component
Flume

Data processed by the Flume agent is stopped. There's no data loss as the checkpoint is maintained based on the source of data. If a checkpoint is present, after the agent starts, it resumes work from saved checkpoint.

For upgrade, the following steps are followed:

  1. Stop Component
  2. Start Component
Schema Registry

No expected downtime if Schema Registry HA is enabled.

For upgrade, the following steps are followed:

  1. Stop Component
  2. Start Component

Monitoring ODH Patching Workflow Steps

After you start the installation of the ODH patch, you can view work requests in the Console. The work request log of the ODH patch contains messages describing the last completed stage. The workflow proceeds through the series of steps listed in the following table.
Note

When you see PREPARE_UPGRADE, the patch installation and downtime are about to begin.
Step   Time Line   Patch Stage               Installation with Downtime       AD/FD Installation or Batch Manner
1      T0          DOWNLOAD                  No downtime required             No downtime required
2      T1          PROCESS_PATCH_METADATA    No downtime required             No downtime required
3      T2          PATCH_AMBARI_SERVER_JAR   No downtime (Ambari restart)     No downtime (Ambari restart)
4      T2          REGISTER_PATCH            No downtime required             No downtime required
5      T2          CREATE_PATCH_REPO         No downtime required             No downtime required
6      T2          APPLY_CUSTOM_PATCH        No downtime (Ambari restart)     No downtime (Ambari restart)
7      T2          INSTALL_PATCH             No downtime required             No downtime required
8      T3          PREPARE_UPGRADE           Downtime required                See Planning ODH Patch Installation
9      T3          APPLY_UPGRADE             Downtime required                See Planning ODH Patch Installation
10     T4          Patching complete         No downtime required             No downtime required
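
You can watch for the PREPARE_UPGRADE stage programmatically by tailing the work request log. A minimal sketch with the OCI Python SDK, assuming the BDS work request operations (get_work_request, list_work_request_logs); the work request OCID is the one returned when the patch was started.

    # Minimal sketch: poll the patch work request log and flag the
    # PREPARE_UPGRADE stage, the point at which downtime (if any) is about
    # to begin.
    import time
    import oci

    client = oci.bds.BdsClient(oci.config.from_file())
    work_request_id = "ocid1.bdsworkrequest.oc1..example"  # placeholder OCID

    seen = set()
    while True:
        for log in client.list_work_request_logs(work_request_id).data:
            if log.message not in seen:
                seen.add(log.message)
                print(log.timestamp, log.message)
                if "PREPARE_UPGRADE" in log.message:
                    print(">>> patch application is about to begin")
        status = client.get_work_request(work_request_id).data.status
        if status in ("SUCCEEDED", "FAILED", "CANCELED"):
            break
        time.sleep(60)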

High-level patch time line

  • T0: Click Apply Patch in OCI Console.
  • T1: The cluster health is checked for patch readiness, and the ODH patch bundle is downloaded to the cluster nodes (no downtime; stages 1 and 2 in the previous table).
  • T2: While the ODH patch is being prepared on the cluster, the Ambari Server can be restarted. If you're signed in to Ambari, you must sign out and sign back in (no downtime for your Hadoop jobs; stages 3 to 7 in the previous table).

If patched with the Installation with Downtime strategy:

  • T3: Downtime starts: all ODH/Hadoop services are stopped, the ODH patch is applied to all the nodes in the cluster, and all Hadoop services are started (stages 8 and 9 in the previous table).
  • T4: Patch application is completed, and downtime ends (stage 10 in the previous table).

If patched with the AD/FD Installation or Batch manner strategy:

  • T3: Occurs in two phases:
    • Phase 1: Patch an initial collection of nodes in the cluster (First Batch).

      An initial collection of the following node types is picked and patched one node at a time:

      • 1 utility node in AD-X
      • 1 master node in AD-Y
      • 1 storage worker node from AD-Z
      • 1 compute-only worker node from any AD
      • 1 edge node from any AD
      • If a node of a specific type is unavailable in an AD, a node of the same type is picked from any other AD at random

      Patching in this collection must succeed for further progress to occur. Otherwise, patching fails and the cluster is rolled back to its initial state.

    • Phase 2: Patch the remaining nodes in batches.

      Irrespective of the size of the batch, patching progresses through all nodes in AD-X, followed by AD-Y, and then AD-Z, as illustrated in the sketch after this list. For example:

      • For a cluster of 100 nodes in a multi-AD region
      • Distributed as 33 nodes in AD-X, 33 nodes in AD-Y, and 34 nodes in AD-Z
      • With batch size of 20
      • Patching progresses in the following order:
        • 20 nodes in AD-X, 13 nodes in AD-X
        • 20 nodes in AD-Y, 13 nodes in AD-Y
        • 20 nodes in AD-Z, 14 nodes in AD-Z
Choosing a larger batch size or patching one AD at a time is preferred, because this minimizes the total end-to-end patch duration.
  • T4: Patch application is completed (stage 10 in the previous table).
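
The Phase 2 ordering can be made concrete with a short sketch that reproduces the 100-node example above. This is illustrative only; the actual orchestration, including the First Batch, is handled by Big Data Service.

    # Illustrative sketch of the Phase 2 batch ordering described above:
    # all nodes in AD-X are patched in batches, then AD-Y, then AD-Z.
    def batch_order(nodes_per_ad, batch_size):
        for ad, count in nodes_per_ad.items():
            remaining = count
            while remaining > 0:
                step = min(batch_size, remaining)
                yield ad, step
                remaining -= step

    # The documented example: 100 nodes split 33/33/34, batch size 20.
    for ad, size in batch_order({"AD-X": 33, "AD-Y": 33, "AD-Z": 34}, 20):
        print(f"patch {size} nodes in {ad}")
    # -> 20 and 13 in AD-X; 20 and 13 in AD-Y; 20 and 14 in AD-Z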

Time between patch stages

For an HA, secure ODH cluster with 7 to 25 nodes:

Patch Strategy               Time Between T0 and T3   Time Between T3 and T4
Installation with Downtime   ~30 to 40 minutes        ~40 to 50 minutes of downtime
AD/FD Installation           ~30 to 40 minutes        ~70 to 250 minutes of minimal impact
Batch Manner                 ~30 to 40 minutes        ~100 to 250 minutes of minimal impact (with batch size 1)

Rollback Scenarios

For the AD/FD Installation or Batch manner strategies, if the upgrade of the initial collection of nodes (First Batch) fails, the ODH patch is rolled back. However, if the First Batch has passed and a later batch fails, the rollback isn't called, and the ODH patch moves to a failed state. The cluster state remains active, and only re-triggering the ODH patch or deleting the cluster is allowed. No other LCM operations are allowed.