Karmada: Multi-Cloud, Multi-Cluster Kubernetes Orchestration


Here is a brief overview of what a multi-cluster setup is, why we need one, and how Karmada lets us run containerized applications across multiple Kubernetes clusters and clouds.

What is multi-cluster Kubernetes?

Multi-cluster is a strategy for deploying an application on or across multiple Kubernetes clusters. This helps us to improve the availability, isolation, and scalability of applications. Multi-cluster can also be important to ensure compliance with different and conflicting regulations, as individual clusters can be adapted to comply with geographic regulations. The speed and safety of software delivery can also be increased, with individual development teams deploying applications to isolated clusters and selectively exposing which services are available for testing and release.

Multi-cluster application architecture

Multi-cluster applications can be architected in two fundamental ways:


Replicated: in this model, each cluster runs a full copy of the application. This simple but powerful approach enables an application to scale globally, as the application can be replicated into multiple regions or clouds and user traffic routed to the closest or most appropriate cluster. Coupled with a health-aware global load balancer, this architecture also enables failover.


Split-by-service: in this model, the application is divided into multiple components or services that are distributed across multiple clusters. This approach provides stronger isolation between parts of the application at the expense of greater complexity.

Benefits of multi-cluster Kubernetes

  • Increased scalability & availability
  • Application isolation
  • Security and Compliance

So far we have covered what multi-cluster Kubernetes is, why we need it, and how it helps us deploy applications with scalability, availability, isolation, security, and compliance. Now we will look at how Karmada orchestrates multi-cluster Kubernetes across clouds.

Introduction to Karmada

Karmada (Kubernetes Armada) is a Kubernetes management system that enables us to run our cloud-native applications across multiple Kubernetes clusters and clouds, with no changes to our applications. By speaking Kubernetes-native APIs and providing advanced scheduling capabilities, Karmada enables truly open, multi-cloud Kubernetes.

Karmada aims to provide turnkey automation for multi-cluster application management in multi-cloud and hybrid cloud scenarios, with key features such as centralized multi-cloud management, high availability, failure recovery, and traffic scheduling.

Architecture of Karmada

The architecture of Karmada is similar to that of a single Kubernetes cluster in many ways. Both of them have a control plane, an API server, a scheduler, and a group of controllers.

The Karmada Control Plane consists of the following components:

  • Karmada API Server provides the Kubernetes-native APIs plus the policy APIs extended by Karmada.
  • Karmada Scheduler takes fault domains, cluster resources, Kubernetes versions, and the add-ons enabled in each cluster into account to implement multi-dimensional, multi-weight, multi-cluster scheduling policies.
  • Karmada Controller Manager runs various controllers, which watch Karmada objects and then talk to the member clusters’ API servers to create regular Kubernetes resources.
  • etcd stores the Karmada API objects; the API Server is the REST endpoint all other components talk to; and the Karmada Controller Manager performs operations based on the API objects you create through the API Server.

Karmada concepts

Here we will discuss the key concepts in Karmada.

Resource Template: Karmada uses Kubernetes-native API definitions as federated resource templates, making it easy to integrate with existing tools that the Kubernetes ecosystem has already adopted.

Propagation Policy: Karmada offers a standalone Propagation (placement) Policy API to define multi-cluster scheduling and spreading requirements. It supports a 1:n mapping of policy to workloads, so users don’t need to spell out scheduling constraints every time a federated application is created.
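
As a minimal sketch (the Deployment name and the cluster names member1/member2 are illustrative), a plain Deployment serves as the resource template, and a PropagationPolicy selects the clusters it is propagated to:

```yaml
# Resource template: an ordinary Kubernetes Deployment.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
---
# Placement: propagate the Deployment to two member clusters.
apiVersion: policy.karmada.io/v1alpha1
kind: PropagationPolicy
metadata:
  name: nginx-propagation
spec:
  resourceSelectors:
  - apiVersion: apps/v1
    kind: Deployment
    name: nginx
  placement:
    clusterAffinity:
      clusterNames:
      - member1
      - member2
```

One policy can select many workloads via resourceSelectors, which is what the 1:n policy-to-workload mapping refers to.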

Override Policy: Karmada provides a standalone Override Policy API to automate cluster-specific configuration, for example overriding the image prefix based on the member cluster’s region.
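
For instance, a hypothetical OverridePolicy could replace the image registry for workloads sent to a particular cluster (the cluster name, Deployment name, and registry host below are assumptions, not values from the text):

```yaml
apiVersion: policy.karmada.io/v1alpha1
kind: OverridePolicy
metadata:
  name: nginx-override
spec:
  resourceSelectors:
  - apiVersion: apps/v1
    kind: Deployment
    name: nginx
  overrideRules:
  - targetCluster:
      clusterNames:
      - member1
    overriders:
      imageOverrider:
      # Swap only the registry portion of the image reference
      # for replicas landing in member1.
      - component: Registry
        operator: replace
        value: registry.eu-west-1.example.com
```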

Key features of Karmada

Cross-cloud multi-cluster multi-mode management

  1. Safe isolation: a namespace prefixed with karmada-es-* is created for each member cluster.
  2. Karmada supports multiple connection modes (Push and Pull) to target clusters. In Push mode, Karmada connects directly to the target cluster’s kube-apiserver; in Pull mode, an agent component runs inside the target cluster and Karmada delegates tasks to it.
  3. Multi-cloud support (for any cluster compliant with Kubernetes specifications).
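
The sync mode is recorded on the Cluster object that represents a member cluster on the Karmada control plane; a sketch (the cluster name and endpoint are placeholders, and in practice these objects are normally created by karmadactl rather than by hand):

```yaml
apiVersion: cluster.karmada.io/v1alpha1
kind: Cluster
metadata:
  name: member1
spec:
  # Push: Karmada reaches this kube-apiserver directly.
  # Pull: a karmada-agent inside the member cluster pulls work instead.
  syncMode: Push
  apiEndpoint: https://172.18.0.2:6443  # placeholder endpoint
```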

Multi-policy multi-cluster scheduling

  1. Karmada can distribute workloads across clusters under different scheduling strategies such as ClusterAffinity, Tolerations, SpreadConstraint, and ReplicaScheduling.
  2. Karmada supports a different configuration of an application per cluster by leveraging Override Policies.
  3. Karmada has a re-scheduling feature that triggers workload rescheduling based on instance state changes in member clusters.
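
To sketch how these strategies combine (all field values below are illustrative), a PropagationPolicy placement can constrain spreading and divide replicas across clusters by static weight:

```yaml
apiVersion: policy.karmada.io/v1alpha1
kind: PropagationPolicy
metadata:
  name: nginx-propagation
spec:
  resourceSelectors:
  - apiVersion: apps/v1
    kind: Deployment
    name: nginx
  placement:
    # Spread across exactly two clusters.
    spreadConstraints:
    - spreadByField: cluster
      maxGroups: 2
      minGroups: 2
    # Split the total replicas 2:1 between the selected clusters.
    replicaScheduling:
      replicaSchedulingType: Divided
      replicaDivisionPreference: Weighted
      weightPreference:
        staticWeightList:
        - targetCluster:
            clusterNames:
            - member1
          weight: 2
        - targetCluster:
            clusterNames:
            - member2
          weight: 1
```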

Much like Kubernetes scheduling, Karmada supports different scheduling policies. The overall scheduling process is shown in the figure below:

Cross-cluster failover of applications

  • Cluster failover: Karmada lets users set distribution policies and automatically migrates replicas off a faulty cluster, in a centralized or decentralized manner, after a cluster failure.
  • Cluster taint: when a user sets a taint on a cluster and the resource distribution policy cannot tolerate that taint, Karmada also automatically triggers migration of the cluster’s replicas.
  • Uninterrupted service: during replica migration, Karmada can ensure that the number of service replicas does not drop to zero, so the service is not interrupted.

Karmada supports cluster failover; the failure of one cluster causes a failover of its replicas, as in the following example:

The user has joined three clusters to Karmada: member1, member2, and member3. A Deployment named Foo, with 6 replicas, is deployed on the Karmada control plane and distributed to clusters member1 and member2 by a PropagationPolicy.

When cluster member1 fails, the pod instances on it are evicted and migrated to cluster member2 or to the new cluster member3. This migration behaviour can be controlled through the ReplicaSchedulingStrategy of the PropagationPolicy/ClusterPropagationPolicy.
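
A sketch of such a policy for the Foo example (cluster names are from the scenario above; the dynamic-weight choice is one possible configuration, not the only one), dividing replicas across the candidate clusters so evicted replicas can be rescheduled elsewhere:

```yaml
apiVersion: policy.karmada.io/v1alpha1
kind: PropagationPolicy
metadata:
  name: foo-propagation
spec:
  resourceSelectors:
  - apiVersion: apps/v1
    kind: Deployment
    name: foo
  placement:
    clusterAffinity:
      clusterNames:
      - member1
      - member2
      - member3
    # Divide replicas by each cluster's available capacity, so a
    # failed cluster's share is redistributed to the healthy ones.
    replicaScheduling:
      replicaSchedulingType: Divided
      replicaDivisionPreference: Weighted
      weightPreference:
        dynamicWeight: AvailableReplicas
```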

Cross-cluster service governance

  • Multi-cluster service discovery: with ServiceExport and ServiceImport, services can be discovered across clusters.
  • Multi-cluster network support: use Submariner or other related open-source projects to connect the container networks between clusters.

Users can enable cross-cluster service governance with Karmada as follows:

Application foo and its Service svc-foo are deployed to the member1 cluster along with a ServiceExport resource. In the member2 cluster, a ServiceImport resource is created, and Karmada imports the service into member2 as derived-svc-foo. Any application in the member2 cluster can then call this service endpoint internally to access the service svc-foo of member1.
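
The two resources follow the Kubernetes Multi-Cluster Services API; a minimal sketch (the namespace and port are assumptions, and in Karmada's workflow both objects are typically created on the control plane and delivered to the right member clusters by PropagationPolicies):

```yaml
# Exported from member1: marks svc-foo as visible to other clusters.
apiVersion: multicluster.x-k8s.io/v1alpha1
kind: ServiceExport
metadata:
  name: svc-foo
  namespace: default
---
# Imported into member2: Karmada materializes it as derived-svc-foo.
apiVersion: multicluster.x-k8s.io/v1alpha1
kind: ServiceImport
metadata:
  name: svc-foo
  namespace: default
spec:
  type: ClusterSetIP
  ports:
  - port: 80
    protocol: TCP
```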

Source: Expedia Group
