Service Mesh uses the Envoy proxy server with all mesh resources.
Service Mesh and the Envoy Proxy
Envoy is an open source proxy server designed for use with cloud native applications. Written in C++, it is a high-performance proxy built for large microservice and service mesh architectures.
A pod disruption budget (PDB) specifies the number of replicas that an application can tolerate having, relative to how many replicas the application intends to have. For example, a deployment that has a setting of .spec.replicas: 5 should have 5 pods at any given time. If the PDB sets minAvailable to 3, then the Eviction API allows voluntary disruption of at most 2 pods at a time.
The "intended" number of pods is computed from the .spec.replicas
setting of the workload resource that is managing those pods. The control plane
discovers the owning workload resource by examining the
.metadata.ownerReferences of the pod.
Service Mesh honors the PDB settings when meshifying
or injecting proxies.
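For reference, a minimal PDB matching the example above (five replicas, at least three available) might look like the following sketch. The budget name, namespace, and app label are placeholders, not values required by Service Mesh.
# Example only: keep at least 3 of the 5 replicas available during voluntary
# disruptions such as proxy upgrades. The name, namespace, and label are placeholders.
kubectl apply -n <your-namespace> -f - <<EOF
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: my-app-pdb
spec:
  minAvailable: 3
  selector:
    matchLabels:
      app: my-app
EOF
With this budget in place, the Eviction API refuses any voluntary eviction that would leave fewer than 3 of the pods available.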
Kubernetes recommends a rolling update deployment strategy to minimize application downtime. To understand why, review how Kubernetes handles the two deployment strategy types.
Recreate: All existing pods are terminated before new ones are created.
RollingUpdate: Old pods are gradually replaced with new ones, while traffic continues to be served without downtime.
Note
RollingUpdate is the default and the preferred option.
Kubernetes provides the following parameters that you can use when the strategy type is RollingUpdate.
.spec.strategy.rollingUpdate.maxUnavailable: Maximum number of pods that can be unavailable during the update process. This value can be an absolute number or a percentage of the replicas count. The absolute number is calculated from the percentage by rounding down. The default is 25%.
.spec.strategy.rollingUpdate.maxSurge: Maximum number of
pods that can be created over the desired number of pods. This value can be an
absolute number or a percentage of the replicas count. The absolute number is
calculated from the percentage by rounding up. The default is 25%.
Note
maxSurge and maxUnavailable can't both be 0 at the same time.
Also, configure a readinessProbe in the service container so that Kubernetes can determine whether to send traffic to the pods.
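As a sketch of how these settings fit together, the following deployment uses the RollingUpdate strategy with explicit maxUnavailable and maxSurge values and a readiness probe. The deployment name, labels, image, port, and probe path are placeholder values, not settings mandated by Service Mesh.
# Example only: a deployment using RollingUpdate with explicit surge and
# unavailability limits plus a readiness probe. All names and the image are placeholders.
kubectl apply -n <your-namespace> -f - <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 5
  selector:
    matchLabels:
      app: my-app
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
      maxSurge: 1
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: <your-image>
        ports:
        - containerPort: 8080
        readinessProbe:
          httpGet:
            path: /health
            port: 8080
          initialDelaySeconds: 5
          periodSeconds: 10
EOF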
Note
The rolling update strategy isn't considered in the pod eviction process; only the PDB limits how many pods can be evicted at a time.
Auto Proxy Upgrades 🔗
Proxies injected into application pods are updated automatically by default when a new proxy version is released.
During the upgrade process, the system evicts the pods and re-creates them with the new version of the proxy sidecar. This pod eviction process respects the PDBs defined for the pod or deployment.
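To see how much disruption a budget currently allows before an upgrade runs, you can inspect the PDB status with a standard kubectl command; the namespace is a placeholder.
# Show the PDBs in the namespace, including how many pods can currently be
# disrupted (ALLOWED DISRUPTIONS)
kubectl get poddisruptionbudgets -n <your-namespace>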
Disable Automatic Proxy Upgrades 🔗
For some applications, you might not want your proxies updated automatically. You can disable automatic proxy updates by adding the AUTO_UPDATE_PROXY_VERSION property and setting it to false in the ConfigMap named oci-service-operator-servicemesh-config in the <oci-service-mesh-operator> namespace.
Alternatively, use the following script to generate and apply this configuration.
# Get the current config map
kubectl get configmap oci-service-operator-servicemesh-config -n ${OPERATOR_NAMESPACE} -o yaml > oci-service-operator-configmap.yaml
# get current sidecar_image
SIDECAR_IMAGE=$(cat oci-service-operator-configmap.yaml | grep "SIDECAR_IMAGE:" | sed "s/.*SIDECAR_IMAGE: //")
# update the sidecar image and auto update property
echo -e "data:\n AUTO_UPDATE_PROXY_VERSION: \"false\"\n SIDECAR_IMAGE: \""${SIDECAR_IMAGE}"\"" >> oci-service-operator-configmap.yaml
# re-upload the file
kubectl replace -f oci-service-operator-configmap.yaml
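To confirm that the change took effect, you can read the property back from the ConfigMap with a standard kubectl query.
# Verify that automatic proxy updates are now disabled (expected output: false)
kubectl get configmap oci-service-operator-servicemesh-config -n ${OPERATOR_NAMESPACE} -o jsonpath='{.data.AUTO_UPDATE_PROXY_VERSION}'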
Manual Proxy Updates 🔗
If you have disabled automatic upgrades and want to manually update your proxies, perform the following steps.
Note
Use the OutdatedProxyConnections metric to determine whether you are running an older version of the proxies. The metric emits a count of proxies running older versions.
Set the AUTO_UPDATE_PROXY_VERSION property to true. To generate and apply this configuration, use the following script.
# Get the current config map
kubectl get configmap oci-service-operator-servicemesh-config -n ${OPERATOR_NAMESPACE} -o yaml > oci-service-operator-configmap.yaml
# get current sidecar_image
SIDECAR_IMAGE=$(cat oci-service-operator-configmap.yaml | grep "SIDECAR_IMAGE:" | sed "s/.*SIDECAR_IMAGE: //")
# update the sidecar image and auto update property
echo -e "data:\n AUTO_UPDATE_PROXY_VERSION: \"true\"\n SIDECAR_IMAGE: \""${SIDECAR_IMAGE}"\"" >> oci-service-operator-configmap.yaml
# re-upload the file
kubectl replace -f oci-service-operator-configmap.yaml
Observe the pods being evicted and re-created with the new version of the proxy sidecar. This pod eviction process respects the PDBs defined for the pod or deployment. After all the pods are re-created with the latest version, the OutdatedProxyConnections metric goes to zero.
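One way to follow the rollout is to watch the pods in the application namespace with a standard kubectl command; the namespace is a placeholder.
# Watch pods being evicted and re-created with the new proxy sidecar
kubectl get pods -n <your-namespace> -w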
When the update is complete, set the AUTO_UPDATE_PROXY_VERSION property back to false to turn off automatic updates again.
Important
Service Mesh supports a proxy version for 90 days from the day that you disable automatic proxy updates. As a best practice, set alarms on the OutdatedProxyConnections metric and monitor it accordingly.
Caution
The Service Mesh Operator tries to evict pods during the proxy upgrade process. As a best practice, set up PDBs on your application pods to avoid downtime.
Proxy Version Monitoring 🔗
Service Mesh emits two metrics to Oracle Cloud Infrastructure Monitoring, ActiveProxyConnections and OutdatedProxyConnections, under the oci_servicemesh namespace. You can monitor the OutdatedProxyConnections metric to view how many proxies (pods) are running outdated proxy software.
Proxy Logging 🔗
Proxy logs are output to stdout by default, which makes them accessible through the standard Kubernetes log reading mechanisms. To ensure that any startup or proxy configuration issues can be diagnosed, the system enables proxy logs at the error level by default. If you want to alter the proxy logging, the system requires a manual configuration change. This configuration makes the logs available on the container, where you can easily access them. Valid log levels are debug, info, warn, error, and off.
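Because the logs go to stdout, you can read them with kubectl logs. The pod name, namespace, and proxy container name below are placeholders; the sidecar container name depends on how the proxy was injected in your environment.
# Read the proxy sidecar logs from a meshified pod
kubectl logs <pod-name> -n <your-namespace> -c <proxy-container-name>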
Change the servicemesh.oci.oracle.com/proxy-log-level annotation to
set the logging level for the proxies. For example, use the following command to
change the proxy log level to info in pods.
# Patch the deployment with the desired log level annotations
kubectl patch deployment <deployment-name> -n <your-namespace> -p '{"spec":{"template":{"metadata":{"annotations":{"servicemesh.oci.oracle.com/proxy-log-level":"info"}}}}}'
Use the following commands to change the proxy log level to info in
ingress gateway deployment pods.
# Annotate the ingressgatewaydeployment resource with the desired log level
kubectl annotate ingressgatewaydeployments <ingress-gateway-deployment-resource-name> -n <your-namespace> servicemesh.oci.oracle.com/proxy-log-level='info'
# Patch the corresponding deployment created for the ingressgatewaydeployment resource with the desired log level annotations
kubectl patch deployment <deployment-name> -n <your-namespace> -p '{"spec":{"template":{"metadata":{"annotations":{"servicemesh.oci.oracle.com/proxy-log-level":"info"}}}}}'
To set other proxy log levels, replace info with another valid log level such as debug, warn, error, or off.